Multi-media reading

August 12th, 2016

I haven't put any posts up recently; after my wife died unexpectedly in late December, I began "Lynn Stories," a series of vignettes about her life before and with me (48 and 29 years respectively) and some Lynn-connected events that have occurred since. For example, I've given a matching grant to two of the local non-profit organizations she strongly supported, both with donations and with her time as a volunteer, pro bono therapist; they will share The Lynnette C. Jung Memorial Mental Health Fund. I'm 60,000 words into "Lynn Stories" as of today, with many more vignettes to add, but it's therapy for me and I have no intention of publishing it.

But I've just recently started what I call multi-media reading, a concept many others may have come up with, but one I'd heard nothing about.

I almost always read non-fiction, often several books at the same time, and yesterday, sitting on a recumbent bike at the health club I go to six days a week, I started something that was new to me at least.

I will leave soon on a trip to five National Parks with my sixteen-year-old grandson (he can now do part of the driving), so I am reading a book called "Lassoing the Sun: A Year in America's National Parks." We only have a week, so we plan to visit Great Sand Dunes NP, Mesa Verde, Canyonlands, Arches and the Black Canyon of the Gunnison before heading back to DIA.

As I'm reading, using the Kindle app on my iPad Air, I'm toggling in and out of the book, looking up names, photos, Wikipedia articles and lots of images.

This obviously slows down my usual reading pace, but it expands my knowledge and my pleasure in the book immensely.

Try the concept if you're not already using it.

Will I live a much longer life than I expected?

May 20th, 2016

In April I became three-quarters of a century old. I am basically healthy and I'm in the gym six days a week, although I've got five orthopedic/metabolic issues even after having right total knee surgery fifteen years ago and three low back operations. The "Curse of the Springbergs" back continues to plague me; a nerve in the left side of my neck is pinched from time to time; my right shoulder is intermittently painful; I have a trigger finger on my dominant hand; and I've had, in the last four months, three attacks of gout or perhaps pseudogout, where the crystals deposited aren't uric acid, but a calcium compound. My recent blood test for my uric acid level was right in the middle of the accepted normal range; that doesn't mean the attacks weren't gout, but it deterred my podiatrist from giving me a prescription for allopurinol, which could potentially prevent further attacks.

On the other hand, my brain still works pretty well and I'm below my college wrestling weight, so I would potentially be interested in living a long life and treating those nagging, but admittedly minor ailments.

Most members of my family have lived to considerable ages, with only one self-inflicted exception (not suicide, just terrible eating habits resulting in gaining fifty pounds plus really poor adherence to medications prescribed). My mother died at ninety, my father was nearly ninety-five, his sister was ninety and three aunts on my mother's side lived to ninety.

So I was intrigued by an article that appeared in The New York Times recently titled "Chasing Immortality: Dogs Test Drug Aimed at Humans' Biggest Killer: Age." (1)

We typically think of people in developed countries dying, as my wife did in late December at age seventy-five, of a stroke, or as my brother (vide supra) did at fifty-seven, of a heart attack. Cancer, diabetes and Alzheimer's (and other dementias) are also common causes of death in our population, but the article pointed out that "treatment breakthroughs" for these maladies, although crucial for those affected, would likely increase our overall population's life expectancy by just a few years.

I looked at the reference cited in the article. It's a 1990 article in the journal Science and only the abstract is easily available, but its bottom line was that even if we found cures for those "major degenerative diseases," the result would only be to raise life expectancy at birth to 85 years, not the Biblical 120, much less even longer life spans.

Now there's a canine trial of rapamycin, a drug first isolated in 1972 from a bacterium found on Easter Island and named after the native designation for that island, Rapa Nui. It was originally developed to fight fungal infections, but was later used in preventing organ transplant rejection. It has been shown to increase lifespan in mice. Whether rapamycin slows down aging, however, remains unclear. The life-extending effect seems to be related to rapamycin's suppression of tumors, which represent the main cause of death in those animals. (2)

The early results in canines are promising, but the research reported is what's termed by the NIH as a Phase One study, a relatively brief test of a new drug in a small group of subjects to evaluate its safety, determine a safe dosage range, and identify side effects. Bela, the eight-year-old dog featured in the article, may have been given rapamycin or may actually be receiving a placebo.

After Phase One we would still have considerable time before humans got the drug in a clinical trial, much less before it was available to your physician or mine for more general use. The dog studies would likely go through Phase Two and Phase Three; then there would be considerable discussion before a human Phase One test was started. And there are other phases to complete before approval of rapamycin for the purpose of life extension in our species. I copied and pasted definitions of those phases for you and added a few comments.

Phase II studies test the efficacy of a drug or device (Does it work for the particular purpose intended, in this case extending our life span?). This second phase of testing can last from several months to two years, and involves up to several hundred patients. Most Phase II studies are randomized trials where one group of patients receives the experimental drug, while a second "control" group receives a standard treatment or placebo. Often these studies are "blinded," which means that neither the patients nor the researchers know who has received the experimental drug. (This is crucial since the hope is to eliminate the "placebo effect": I got the new medicine; I'm feeling better, so it must be from that drug. That reasoning is the "post hoc, ergo propter hoc" fallacy: it happened after X, so it must have been caused by X. An extreme and silly example would be the sun came up and shortly thereafter I had a car accident, therefore sunrise causes car crashes.) A Phase II study allows investigators to provide the pharmaceutical company and the FDA with comparative information about the relative safety and effectiveness of the new drug. About one-third of experimental drugs successfully complete both Phase I and Phase II studies.

Phase III studies involve randomized and blind testing in several hundred to several thousand patients. This large-scale testing, which can last several years, provides the pharmaceutical company and the FDA with a more thorough understanding of the effectiveness of the drug or device, the benefits and the range of possible adverse reactions. 70% to 90% of drugs that enter Phase III studies successfully complete this phase of testing. Once Phase III is complete, a pharmaceutical company can request FDA approval for marketing the drug. (This is the one we should wait for when we read early reports of a new medication that may help a medical issue we are afflicted with).

Fortunately, there is another phase, one that has a number of purposes that drug companies are interested in, but one that can protect our larger population, especially when the new use of an old drug leads to a much larger group of us being exposed to the drug.

Phase IV studies, often called Post Marketing Surveillance Trials, are conducted after a drug or device has been approved for consumer sale. Pharmaceutical companies have several objectives at this stage: (1) to compare a drug with other drugs already in the market; (2) to monitor a drug's long-term effectiveness and impact on a patient's quality of life; and (3) to determine the cost-effectiveness of a drug therapy relative to other traditional and new therapies. Phase IV studies can result in a drug or device being taken off the market, or in restrictions on its use, depending on the findings in the study. (I underlined the last sentence, as this is a further protection for those of us in the general population, assuming we weren't lucky enough to be in a study group. I have a friend who had cancer, with a recurrence after surgery, but got into a Phase Three study and is now cancer-free.)

I found a list of 35 such medications, including Quaalude and Vioxx, that have been withdrawn from the market in the last forty or so years. Quaalude is now a Schedule One drug, taking its place alongside heroin. Vioxx was implicated in over 27,000 deaths. (3)

It's a juggling act, to use a phrase that may seem inappropriate. What if rapamycin extended the life span of 98% of those who got it long-term, but severely damaged or even killed the other 2%? Would you take the drug under those circumstances? Would you want it banned from being prescribed?

What if, in extending human life span, it also allowed time for researchers to come up with treatments or even cures for that list of the major killing diseases?

Mull over those tough questions...we've got a few years before they need to be answered.

Links:
(1) http://www.nytimes.com/2016/05/17/us/aging-research-disease-dogs.html?_r=0
(2) https://www.sciencedaily.com/releases/2013/07/130725141715.htm
(3) http://prescriptiondrugs.procon.org/view.resource.php?resourceID=005528

Gout: an unwelcome and surprising visitor

April 12th, 2016

A few months ago I rather suddenly developed an acute painful swelling at the base of my left big toe. "I think I'm having an attack of gout, but why me?" I wondered.

Of course this had to happen on a weekend, when the Family Practice Clinic I'd usually go to was closed, as was my Podiatrist's office.

I wondered if I could have septic arthritis, an infected joint, but had no obvious reason for this much more frightening diagnosis. I remembered there was also an entity called pseudogout, where the crystals were calcium pyrophosphate, not uric acid.

So I went to the Urgent Care Center our local hospital established a few blocks away from my house. A Family Practice physician examined me and said, "I think you have gout, but you'll need to go to the hospital's Emergency Department (ED) so they can get some fluid from your joint and decide if it's really gout. We don't do that test here." She was also concerned about the rather slim possibility of septic arthritis.

At the ED I was triaged as someone who could wait a while and eventually seen by a Physician's Assistant (PA) who said, "I think you have gout!"

Now I trained at Duke, the home of the original PA program, and one of the second generation of PAs taught me how to do dialysis in the ICU for patients with acute kidney failure, so I have no problem at all being seen by a PA.

"What do we do to make sure it's gout and not septic arthritis or pseudogout?" I asked.

"I really don't think you have an infected joint. We'd have to "tap your toe" (i.e., aspirate some fluid from the joint), to make sure it's gout, but that's a tricky procedure, not one that I would do. You should see a podiatrist. In the meantime, take big dose Ibuprofen."

I had a bottle of that at home and knew I could take 800 milligrams three times a day as long as I ate a sandwich or a meal first. I wasn't going to bother the on-call Podiatrist that weekend unless the pills didn't help.

When I did see my regular Podiatrist two days later, the clinical signs of acute inflammation that I had learned in Latin and English during my first year of medical school, rubor (redness), calor (increased heat), tumor (swelling) and dolor (pain), had all diminished markedly.

"I don't stick joints unless I have to," said my highly experienced Podiatrist. "The PA you saw at the hospital was right; it's not something we do routinely. You're better and this was a classical gout attack, so keep taking the NSAID for a few more days."

When I was in medical school, half a century ago, I thought of gout as a kind of acute arthritis that affected corpulent, older men who ate too much red meat and drank port wine. They deposited uric acid crystals in joints and sometimes developed a chronic form of the disease called tophaceous gout, where nodular masses of the crystals (tophi) are deposited in different soft tissue areas of the body. Tophi are most commonly found as hard nodules around the fingers, at the tips of the elbows, and around the big toe, but they can appear anywhere in the body, even in the ears, the vocal cords, or against the spinal cord! As a Nephrologist, I was also aware that some people develop uric acid kidney stones.

Here's a link to an article with photos of acute gout in a toe and Henry VIII who suffered from the disease. http://www.dailymail.co.uk/health/article-2210797/Disease-kings-rise-people-gout-increase-obesity.html

I didn't fit the image I had of someone who'd have a gouty attack. I wasn't overweight, didn't drink much (I usually have a glass of wine or whiskey three times a week), didn't have a family history of gout and hadn't over-indulged in the high-purine foods that can increase uric acid levels.

Purines are natural substances found in all of the body's cells, and in virtually all foods. A relatively small number of foods, however, contain concentrated amounts of purines. For the most part, these high-purine foods are also high-protein foods, and they include organ meats like kidney, fish like mackerel, herring, sardines and mussels, and also yeast.

When cells die and get recycled, the purines in their genetic material also get broken down. Uric acid is the chemical formed when purines have been broken down completely. Low-purine diets are often used to help treat severe gout in which excessive uric acid is deposited in the tissues of the body. Purines from meat and fish clearly increase our risk of gout, while purines from vegetables fail to change our risk. Dairy foods (which can contain purines) actually appear to lower our risk of gout.

I had been taking a baby aspirin a day for reasons that were "iffy," and some recent research had implicated even low-dose aspirin as a possible risk factor for gout. Here's a link to that summary online http://www.ncbi.nlm.nih.gov/pubmed/23345599. So I stopped taking aspirin.

I had no personal history of heart disease, although my brother, who had many risk factors, died at 57 of a heart attack and my mother had one at age 74. Aspirin for secondary prevention of cardiovascular events seems to make sense to me (always discuss this with your own physician before you start taking long-term aspirin or any other drug). But primary prevention, that is taking aspirin if you haven't had a heart attack or angina, is a different matter, one that is being hashed over in the medical literature.

I found an article on pseudogout on the WebMD website (http://www.webmd.com/osteoarthritis/arthritis-pseudogout) and wondered if that's what I had had an episode of. Time would tell.

Then recently I flew to the DC area to visit family and friends, but mostly to hear and see Jordi, my sixteen-year-old grandson, who had the male lead role in "High School Musical" at HB Woodlawn High.

I normally drink three very large glasses of water a day and my family had purchased the limes I squeeze into the water (It tastes better, so I drink more.) But on the flight home I didn't drink much at all and I was aware of pain in my right big toe as I came off the plane.

I woke up at 4:15 a.m. the next morning with fairly severe pain in that toe, took 800 milligrams of Ibuprofen after eating a thick slice of bread abundantly smeared with cream cheese and jam, and went back to sleep.

I was fortunate; my Podiatrist had an appointment cancellation the next morning and he said, "Clinically this is gout. I'm going to give you a 'script for another NSAID that I find works better for acute gouty arthritis. And drink lots of fluids."

I went to the pharmacy and got a bottle of indomethacin, an NSAID I haven't used for many years. I had asked him if he wanted me to get a uric acid level blood test and he said, "You can have gout without having an elevated uric acid, but if you do have one, we can treat it. I think the time to get blood tests is after the acute attack resolves."

I was aware that another old drug used in gout, Allopurinol, was still around, but had to look up its mechanism of action. What I found was that Allopurinol reduces the production of uric acid in the body, so if I did have an elevated blood level of uric acid, the drug could potentially prevent me from having gouty attacks or forming uric acid kidney stones.

I hadn't been aware that an elevated uric acid level might go down during an acute gouty attack; I visualized uric acid crystals migrating to my big toe, but was unsure if that's what the Podiatrist meant.

I'd be very happy if the next episode waits a long time to happen, but I'm not betting on that being the case.

Hospice: costs and your own choices

February 26th, 2016

The Wall Street Journal had a February 19th front-page article on how much Hospice is costing Medicare, with the emphasis on those patients who go into home-based Hospice care and stay there for prolonged periods. While many of that group have Alzheimer's or other dementing diseases, others have chronic obstructive pulmonary disease (COPD, e.g., emphysema), heart failure or cancer. The average time spent on Hospice in 2013 was 93.2 days for people with Alzheimer's and similar dementias, and they consumed 22% of total Medicare spending on end-of-life (EOL) care that year.

The statistics in the article were striking: over an eight-year period (2005-2013), 107,000 patients had been on Hospice for much longer, close to a thousand days. That's a very small subset of all those who had sought out Hospice care during those years, just 1.3%, but that relatively small group cost Medicare a huge amount: 14% of its total dollars spent on Hospice over the same period.
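Just to make the disproportion concrete, here's a quick back-of-the-envelope check of those figures (a rough sketch using only the numbers quoted above; the implied total of hospice patients is my own arithmetic, not a figure from the article):

```python
# Figures quoted from the WSJ article (2005-2013)
long_stay_patients = 107_000   # patients on hospice for close to a thousand days
share_of_patients = 0.013      # they were 1.3% of everyone who sought hospice care
share_of_spending = 0.14       # but accounted for 14% of Medicare's hospice dollars

# Implied total number of hospice patients over that period (my arithmetic)
total_patients = long_stay_patients / share_of_patients
print(f"Implied total hospice patients: {total_patients:,.0f}")             # roughly 8.2 million

# How much more Medicare spent per long-stay patient than per average patient
spending_ratio = share_of_spending / share_of_patients
print(f"Spending per long-stay patient vs. average: {spending_ratio:.1f}x")  # about 10.8x
```

In other words, on the article's own numbers each long-stay patient cost Medicare roughly ten times what the average hospice patient did.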

The program itself was originally set up for those whom physicians could certify as terminal, i.e., within six months of dying. The overall Medicare Hospice expense total in 2013 was roughly $15 billion, and the WSJ's data references a study stating that that care, at least for those who did not have cancer, actually cost Medicare 19% more than for similar patients who did not seek out Hospice care.

That's especially true for those who have dementia, but other chronic conditions (where the time to death is less predictable than it often is for patients with cancer) clearly play a role also.

As I was not surprised to find, there's often another side to the picture. I have had two examples in my extended family: one involving Paul, my son-in-law, and the other Lynnette, my wife. My son-in-law's father had Lyme disease with significant brain involvement. He went from being a distinguished systems engineer to needing help with many of the activities of daily life. It was a long time before his family was able to get him on home Hospice; when they finally did, my son-in-law said to me, "It was the difference between day and night." The family really appreciated the care given to him during his final weeks.

Recently my wife had a major stroke and lived one week, one hour and seven minutes after the onset of the bleed into her brain. I had seen her stumble in our bedroom, realized she was unable to get up, carried her to our bed and called 911. We went by ambulance to a local hospital, where a CT scan showed a considerable area of bleeding in her brain. She was immediately moved to a bigger hospital where a neurosurgeon was available. He ordered a second CT scan, which showed even more bleeding. Everything he could do would come with consequences she had always said she would not want to live with.

I was her health care surrogate and carried out her oft-expressed wishes, rejecting the surgery, so she went, several days later, to Hospice at a third hospital. The care there was exemplary, but this was clearly to be short-term, inpatient Hospice.

So when I read the WSJ article, I did so with a jaundiced eye, not from the viewpoint of a physician or a taxpayer, but from that of a family member who had experienced the positive side of Hospice.

The issue for me, as it was for my wife, is quality of life. That's a hard subject for many of us to discuss, but I think it's crucial. We had had those talks repeatedly over the past five or six years and both of us knew what the other wanted.

I don't have an answer for others, but I urge you to think about the subject and talk about it with your spouse, significant other and children: whoever might be asked to make a choice for you if you are unable to make it yourself.

There was another aspect we hadn't fully explored. Neither my daughter, who would have been the one to make choices for me if my wife was unable to do so, nor a nephew, who would have been my wife's secondary decision maker, had been party to our discussions.

So what would have happened if Lynn and I had been in a car crash, had both been severely injured and our alternate health care surrogates had to make those tough choices?

I don't know the answer, but it's clear to me now that EOL discussions should involve more than you and your spouse or significant other.

It's also clear that neither Lynn nor I would have wanted prolonged home Hospice, but we would have (I did at least) really appreciated its availability for relatively short-term care, whether in our home or, as it turned out, in a hospital setting. You may or may not feel the same way.

Please think of having those talks now with whomever in your family might be called upon to choose for you.

Even the best of us...smallpox, anthrax, influenza and the CDC

July 16th, 2014
This is our premier laboratory

The Centers for Disease Control and Prevention, AKA the CDC, America's central medical laboratory, has recently had multiple problematic episodes. I was trying to follow up on the vials of smallpox virus that were found in an old refrigerator the FDA apparently had forgotten. The question, of course, was whether the virus samples were long dead or still viable. They had been sent to the CDC to have that highly significant issue resolved.

Since then there has been a follow-up announcement, but also several articles on significant issues with procedures and safety at the CDC itself. The first was published in The New York Times, AKA the NYT (as well as in other papers, but I get the NYT daily on my iPad, so saw it there first). The startling title was "C.D.C. Closes Anthrax and Flu Labs after Accidents." The current director of the CDC, Dr. Thomas Frieden, called the lab/agency "the reference laboratory to the world," but admitted there had been a series of accidents (actually lapses in established safety procedures) in the recent past that were quite frightening.

A month ago potentially infectious samples of anthrax, a bacterium found naturally in soil that commonly affects wild and domesticated animals worldwide and causes an extremely serious but rare illness in people, were sent to labs that were not equipped to deal with them (anthrax would normally be handled only with a high level of protective biosafety gear and procedures, typically BSL-3). The CDC also has a rather simplistic YouTube video discussing anthrax's use as a potential bioterrorism weapon, but in this case 62 or more CDC employees were potentially exposed to the bacterium in the course of their work.

The good news is it appeared nobody was in danger; all those employees were given the anthrax vaccine and also started on antibiotics. The background information available online says there has never been person-to-person spread of the disease.

It appears that it's exceedingly tough to get rid of anthrax in the environment; I'll go over the classic historical example of how careful government researchers have had to be with its spores.

In the 1940s, British scientists used a small Scottish island (Gruinard) for germ warfare research. That island, thoroughly contaminated with anthrax spores, remained off-limits for forty-plus years before extraordinary efforts, begun in 1986, rendered it safe for ordinary use. The surface of the island was only 484 acres; it was sprayed with a herbicide, then all dead vegetation was burned off. Next, 200 tons of formaldehyde solution was diluted in 2,000 tons of seawater and sprayed over the entire island. Perforated tubing was used to ensure that 50 liters of solution were applied to every square meter being treated.

Later the effectiveness of the decontamination process was assessed by taking two duplicate sets of soil samples. Each was tested at two major government labs. Anthrax spores were detected only in "small quantities in a few places." These specific areas were treated in July 1987, followed by further soil sampling in October 1987. No further traces of anthrax spores were found.

Blood samples from local rabbits were also tested for anthrax antibodies. No such antibodies were found.

Following these measures, a farmer grazed part of his flock of sheep on the island for six months. The sheep were inspected monthly by the District Veterinary Officer, and returned to the mainland in October 1987 in excellent condition.

On April 24, 1990, four years after the decontamination work had been completed, a Defense Minister visited the island and removed the safety signs, indicating that the island had finally been rendered safe. Then, per agreement, the island was sold back to the heirs of the original owner for the WWII sale price of £500.

But a senior British archeologist said he still wouldn't set foot on the island; he was concerned because of potentially infectious particles found in some of his digs.

Yet another NYT piece, "Ticking Viral Bombs, Left in Boxes," this one written by a distinguished physician, Lawrence K. Altman, M.D., recalls the irony of the outcry for mass smallpox vaccination of our entire U.S. population after 9/11 (when no Iraqi supply of the deadly virus was ever located), contrasted with the recent discovery of six vials, two with live smallpox bugs, in Bethesda, almost within "spitting distance" of our center of government.

In 2011 the Birmingham Mail reviewed a tragic lab accident which led to the last known smallpox death. The city, now England's second largest, was the site of a medical research laboratory associated with the local medical school. Viral particles got into an air duct, and a photographer whose studio was one story up from the lab became the last known case of active smallpox and died from the disease in spite of having been vaccinated twelve years before.

Dr. Altman discusses the pros and cons of eradicating the last two known stocks of the virus, one at the CDC, the other in a Russian lab in Siberia. Even if the natural virus is finally and totally eliminated, a rogue group may well be able to re-establish their own supply from the known genetic sequence of smallpox.

Lastly I saw an NYT article with an even more disturbing title, "After Lapses, C.D.C. Admits a Lax Culture at Labs." CDC workers had somehow shipped a dangerous strain of avian influenza to a poultry research lab run by the Department of Agriculture. Known as H5N1, the virus had killed more than half of the 650 people who had been infected with it since 2003. Again, there were no deaths from this mistake.

After all of this recent furor, plus the historical examples, I'm heartily in favor of the idea that's been broached that such dangerous organisms should be confined to a minimal number of labs, and even those clearly need to tighten up their standards.

Smallpox: vials found in NIH lab

July 9th, 2014

I was glancing through The Wall Street Journal. this morning (that period is intentional, as I found out recently in their 150th anniversary issue) and saw an article about smallpox, that old enemy of mankind. The CDC issued a media statement saying six vials labeled with the technical name of the disease, variola, had been found in an old storage room belonging to an FDA lab on the NIH campus in Bethesda, Maryland. Forty-two years ago the FDA took over that lab, among others, and only now were those labs being moved to the main FDA location in the DC area. The vials themselves date back ~60 years and will now be tested to see if the material in them is viable (i.e., live smallpox viruses).

I reviewed the CDC's Bio-hazard Safety Levels; they range from 1 to 4, with more serious infectious agents occupying the higher levels. A BSL-3 agent can cause serious or deadly disease, but either doesn't spread from person to person (at least not easily) and/or has a known preventive or treatment. Plague, rabies virus and West Nile fit into this category. Smallpox is obviously a BSL-4 bug, the most dangerous kind, in the company of Ebola virus. A February 15, 2012 Reuters article, "How secure are labs handling the world's deadliest pathogens?" talked about the precautions used in such a lab in Galveston, Texas. The boss there gained entry by swiping a key card, was scanned by 100+ closed-circuit cameras as he opened seven additional locked doors, and then reached the lab, where another card swipe and a fingerprint scan were necessary for entry. The Washington Post article on the recently found vials has a six-minute video on BSL-4 procedures with a comment that there are three overlapping types of safety precautions: those for containment of the hazardous material, those for personal protection, and overall administrative factors.

And this may get you into BSL-3

The air flow and exhaust systems used in Galveston, the full-body suits with their own air supply and the intruder drills that are conducted all made me somewhat more comfortable. But that's in a government-funded laboratory. Even in the United States, a privately funded lab may not be subject to the same rules and regulations. Elsewhere the procedures that must be followed vary. In 2011 there were 24 known BSL-4 labs in the world, with six in the U.S. (the GAO said we had considerably more). In 2013 there was considerable protest in Boston over a proposed BSL-3 and BSL-4 lab there.

We don't see these anymore.

I've written about smallpox before, but a brief history, available online on a dermatology website, was worth reviewing. The disease likely originated in Africa about 12,000 years ago, caused a major epidemic during an Egyptian-Hittite war in 1350 B.C.E., and left typical scars on the mummy of Pharaoh Ramses V, who died in 1157 B.C.E. It got to Europe somewhere between the 5th and 7th centuries C.E.; millions died in Europe and the Western Hemisphere before Edward Jenner developed vaccination in 1796. The term came from the Latin word for cow (vacca), as Jenner used fluid from a cowpox-infected dairymaid's hand to inoculate an eight-year-old boy. In 1967 the WHO estimated there were 15 million cases of smallpox and 2 million deaths from the disease. Total smallpox deaths over the past 12,000 years have been estimated at 300-500 million, more than all the world wars combined.

By 1980 the World Health Organization declared the globe smallpox-free. In this country, we quit vaccinating the general population in 1972 and stopped giving the inoculation to new military personnel in 1990.

My wife's old shot record shows she got her first vaccination against smallpox in 1956 and her last booster in 1980. We were both assigned to bases in the Far East in the early and mid '80s. I can't find my military vaccination record from that time frame, but logically I wouldn't have had a booster after 1986, when I got back to a base in Texas. Since immunity is unlikely to last more than ten years, at this stage we'd both be vulnerable to smallpox, like most everyone else.

The only known supplies of the virus remain in government laboratories in the United States and Russia. Starting in the early 1990s there has been considerable international protest against keeping those specimens alive, but thus far neither country wants to give in to that pressure. One rationale was that the genetic structure of the virus was known, so it could conceivably be recreated by terrorists.

In 2004 the CDC said it had enough smallpox vaccine stockpiled to vaccinate everyone in the United States. I haven't found any updates on that statement. But the U.S. military was still giving those shots to personnel going to USCENTCOM areas (the Middle East and the "stans") until the middle of May 2014, to those going to Korea for another two weeks after that, and to some special mission troops beyond that with an end date unspecified.

So now it's the middle of 2014 and, in one manner or another, smallpox is still lingering, fortunately not as an active disease. The CDC is testing those re-found vials of the virus and we'll hear in a couple of weeks whether they are still viable.

Long-term acute care hospitals

June 30th, 2014

An article in The New York Times (NYT) several days ago opened a new topic and revisited an old discussion in our household. The title was telling, "At These Hospitals, Recovery is Rare, but Comfort is Not," and the piece talked about what Medicare calls long-term care hospitals (LTCHs). I had never heard of the term.

The article said there were 400 of these facilities in the United States, but lots of practicing physicians are unaware of them. I did an online search and found a 20-bed facility in this category about 15 miles from where we live and a 63-bed hospital in Denver, roughly 65 miles away. I wasn't aware of any in the southeastern part of Wyoming, 40-50 miles north of us.

The Medicare website on long-term care hospitals says they focus on those whose inpatient stay is likely to be more than 25 days. The contrast is stark, as this is an age when many surgeries are done in a technically "outpatient" fashion (the current definition of an inpatient says you're in the hospital at least two midnights). Medicare says LTCHs specialize in treating patients, even those with more than one serious medical condition, who may improve with time and eventually return home.

Yet the NYT piece talks of patients who are critically ill, may be unresponsive, even comatose and, except for those who are younger and have been in an accident, may stay for months, years, or the rest of their lives. In 2010 another NYT article discussed significant issues with some LTCHs.

At that point my wife and I both said, "Don't put me in a LTCH!" We are 73 years old, relatively healthy at the present time, and enjoy life. We have living wills and medical durable power of attorney documents naming each other as first decision makers if we can't choose for ourselves.

I've mentioned before how my parents approached this quandary. Mom had a cardiac arrest at age 74, was resuscitated by Dad who was still a practicing physician, and lived 16 more years. But when she was in her declining last four years and they had moved from totally independent living to a seniors' residence, they encountered a situation that influenced their future decisions. Mom had a minor acute illness and moved short-term into an associated facility.

She was there for a few days to get some antibiotics and nursing care, but in the next room was a woman, the wife of one of their friends, who had been in extended care for seven years. For the last four of those she no longer recognized her husband, yet he requested treatment of her bouts of pneumonia on three separate occasions. Dad and Mom each said, "Don't do that to me!" They had signed the appropriate end-of-life documents before Mom showed signs of initial dementia.

A 2011 article in Kaiser Health News stresses that making end-of-life decisions can be tough, especially if they aren't made in advance. But a professor of ethics was noted as saying more than 90% of all families who have a loved one in intensive care want to hear prognostic information that will help them make those difficult decisions.

Hospital care has changed, a lot, since I last saw inpatients. It used to be that the physician who organized your treatment was the same one you saw in her or his outpatient office. Now the primary care physicians I know, unless they are part of a residency program, don't see their long-term patients at all when they are hospitalized. Instead patients see an ER doc, a hospitalist (a physician whose focus is inpatient care) and, if they go to an ICU, an intensivist.

Intensivists are physicians who have completed specialty training, often in Internal Medicine or Anesthesia, and then take an additional two-to-three-year fellowship in critical care medicine (some are triple board certified, in lung disease (Pulmonology), for example). They are often thought of as primary critical care physicians, and in major academically oriented clinics and their associated hospitals (e.g., the Cleveland Clinic), they may provide most or all of the physician care in the ICU.

Do you need an intensivist?

The article from the NYT said we spend over $25 billion a year on long-term acute care in the United States. The path to landing in an LTCH often begins in an ICU. The major task for intensivists is keeping patients alive during critical illnesses. That often means deciding on short- or long-term ventilator support, the possibility of a tracheostomy (a surgically created hole through the front of the neck into the trachea, AKA the windpipe, to allow this), feeding tubes of several varieties or long-term intravenous access.

I don't ever want to be on a ventilator long-term. I might allow one short-term if I had a clearly treatable, potentially curable patch of pneumonia, for instance, but I would want to set a time allowance for that.

When my mother quit eating, her physician wanted to create a long-term method of feeding her, a percutaneous endoscopic gastrostomy (PEG). If someone cannot eat and needs to be fed long term, one method of doing so is to place a PEG tube through the wall of the abdomen directly into the stomach.

This could be done for someone who has had a stroke and is at risk of aspirating food if fed normally. In my Mom's case, by then she had developed significant dementia and Dad said, "We've made our decisions; she is not having a PEG tube."

She could have gone into a LTCH and lived a while longer, but Dad knew that her refusal to eat meant she had come to a logical stop point.

There are an estimated 380,000 patients in LTCHs at present. Some (roughly 10 to 15%) are there for appropriate reasons and have a reasonable chance of recovery; many are not. A study by a Duke critical care specialist found half who enter these facilities die within a year; a majority of the remainder are in "custodial care."

I don't choose to join their ranks.

So there are some decisions that you and your family may want to make. I'd suggest you read the NYT articles and think about what your choices might be. It's never easy, but a careful discussion in advance with your long-term goals in mind makes sense.

Dengue fever and its major mosquito vector

June 21st, 2014

I don't like being bitten by mosquitoes any more than the rest of you do, but worldwide the real reason to avoid them, kill them or alter them is the enormous disease burden they cause. One recent estimate, surprising to me, said "mosquitoes have been responsible for half the deaths in human history." I was aware, having lived as an Air Force physician in the Philippines and traveled in South America and Africa, that malaria was one enormously dangerous mosquito-carried disease, but there's a long list of other illnesses that contribute to the threat from these insects.

This one doesn't carry dengue

From 1690 to 1905 major epidemics of yellow fever struck parts of southern and eastern America: Boston, New York, Philadelphia and New Orleans, killing over 40,000 people. A 2006 PBS website gives short summaries of nine of the outbreaks and alludes to even larger mortality figures.

And then there's dengue, a disease primarily transmitted by the bite of infected female Aedes aegypti mosquitoes. They don't make the telltale sound that alerts you to other mosquitoes; they also strike during the daytime and may follow their human target, biting repeatedly.

Dengue attacks 400 million people every year worldwide, mostly in the tropics and subtropics. Three-fourths of those infected never develop symptoms, and of the remaining 100 million, a large majority have a mild to moderate nonspecific acute illness with a fever. But 5% can have severe, even life-threatening disease with terrible joint and muscle pain (it's called break-bone fever), hemorrhages and shock. The World Health Organization estimates 22,000 die from dengue yearly, but other estimates range from 12,000 to 50,000.

The first known case in the United States occurred in Philadelphia in 1780 and was documented by Benjamin Rush, the distinguished physician who was a signer of the Declaration of Independence.

The Centers for Disease Control and Prevention (AKA the CDC) has an entire chapter on dengue in its "Infectious Diseases Related to Travel" publication and a shorter version with links for travelers. Its maps of disease distribution focus on warmer areas in Africa, Central and South America, Asia and Oceania.

There has been no vaccine available to prevent the disease and no specific antiviral treatment for those with severe cases of dengue. Because of known bleeding complications, those who get dengue are advised to avoid taking aspirin or any of the nonsteroidal anti-inflammatory drugs, AKA NSAIDs, such as ibuprofen.

The continental United States was essentially dengue-free for over fifty years, but marked increases in dengue infection rates have occurred in our hemisphere, mostly in South America and Mexico.

Now Aedes aegypti is back in Florida, Texas and Hawaii. An article in The New Yorker mentioned a small 2009 outbreak of dengue in Key West with fewer than 30 cases, but that was the first real brush with the disease there in over seventy years. In 2010 there were twice as many cases. An entomologist (insect specialist) with the Florida Keys Mosquito Control District reminded the reporters that the manner in which the populace lived was crucial; from 1980 to 1999 there were only sixty-four cases on the Texas side of the Rio Grande and 100 times as many just across the river.

What was the difference? Likely screens on windows, cars with AC running and windows closed and how often people were exposed outdoors. Key West, in a 2013 followup, had seen no further cases, but the World Health Organization called dengue the "fastest-spreading vector-borne viral disease," saying cases had gone up thirty-fold over half a century.

Why has this happened and what can be done about it?

How can we do this?

Is this another consequence of global warming? After all, dengue has appeared in France and Croatia for the first time. But I just watched an online video by Dr. Paul Reiter, a world-famous medical entomologist, who spent much of his professional career at the CDC's Dengue Control branch. It was obvious that he does not believe in man-made global warming (I do) or that any form of global temperature change is responsible for the spread of malaria or dengue.

How about used tires? He thinks they are great incubators for mosquitoes and billions of those tires have been moved around the globe. So Aedes aegypti has adapted to the city, in part because of our habit of having water-containing used tires around the places where we live.

I don't have any old tires in my yard and I change the dog's water bowl and the bird water outside frequently.

A few new ideas are out there: a British company called Oxitec has genetically modified (GM) mosquitoes, making the males able to mate, but also giving them a gene which kills their offspring soon after they hatch. An initial field trial in Brazil was successful in markedly reducing the population of disease-carrying adult females (remember, males don't bite humans for a blood meal; females do).

Further field trials of these GM mosquitoes, named OX513A, have met with considerable opposition, and an engineer involved has published a paper examining the ethical issues. The lifespan of mosquitoes is short and they don't appear to be a major food source for other creatures; the most significant issue is likely fully informing the people in the test area, who may consider OX513A to be just another threat.

A French pharmaceutical company recently announced an experimental vaccine for dengue was moderately successful in a late-stage, placebo-controlled clinical trial involving 10,000 children in Southeast Asia, reducing dengue incidence by 56%. A similar clinical trial is underway in South America.

It's a bad disease, coming back at us, but perhaps there's some good news on the horizon.

Using (or at least minimizing) our food waste

May 21st, 2014

I recently read an article in The New York Times with the interesting title, "Recycling the Leftovers." It was written by Stephanie Strom, one of their regular correspondents, and covered a variety of programs in America for recycling food scraps. Lynnette and I have been separating our own waste streams for at least ten years and have a garbage bin, a trash sack, a recycle sack and a composting pail in our kitchen and laundry room. Our waste-collecting company keeps adding new items that can be recycled, but at present we only put out two containers for them: trash goes to the curb to be picked up weekly and recyclables go out every other week.

Composting is one approach to food waste.

Now the city of Austin, Texas has plans to markedly extend its food waste pilot project; Strom's article says 14,000 Austin residences currently have a third garbage bin, one for food scraps, collected weekly. Twenty-five years ago the city started with a "Dillo Dirt" program; the city made over a quarter million dollars last year selling the end product, compost made from yard clippings and treated sewage sludge. The newer approach, adding organic waste, currently has enrolled less than 10% of the city's ~185,000 households; the plan is for all of them to be offered the service. I'm unaware of a city-wide program here in Fort Collins for food scrap recycling; our scraps end up in a vermiculture bin that's outdoors, but in a fenced-in corner. The worms doing most of the work in turning food waste into compost have thus far survived our winters.

The concept is being highlighted nationally by the U.S. Food Waste Challenge (FWC), a joint project of the U.S. Department of Agriculture (USDA) and the Environmental Protection Agency (EPA). The goal of the FWC is to bring about a "fundamental shift" in the way we manage and even think about food and food waste. The USDA and EPA want to have 400 organizations enrolled this year and 1,000 by 2020, and they are well on their way already, with an impressive list of governmental and private partners, including companies, colleges and universities, K-12 schools and at least one major hospital, having joined.

We as individuals can't join the FWC, but there is a webpage of suggestions for consumers. Basically it says shop thoughtfully, cook with care, serve portions that you'll eat then and there, save whatever can be kept (while eating what would otherwise spoil) and, if possible, grow part of your meal. It also mentions we should shop our own refrigerators first; plan meals before we go grocery shopping so as to buy only those items we actually need; freeze, preserve or can seasonal produce; ask restaurants to bag up leftovers; and be realistic at all-you-can-eat buffets.

I was at a writers' meeting recently and drove to the event with my long-time writing mentor. She said her family almost always eats everything she buys, but even with a husband and three teenagers on board I knew she was being modest. She obviously shops carefully and plans ahead.

Our lunch yesterday featured a Quiche my wife (professionally a Jung) made that was "Jung Fun." It wasn't your typical recipe, but used up everything in the vegetable drawer that was needing to be eaten ASAP. We still occasionally have spoiled vegetables and fruits, especially when our CSA gives us more than its usual abundance, but those go into the compost bin.

We did go to the CSA a few days ago to purchase four beefsteak tomato plants. We've got a special above-ground gadget for planting tomatoes and have consistently done well with those we bought at a nursery, but, having grown up eating beefsteak tomatoes, I'm really looking forward to having an abundance of them. Our local grocery store generally has good produce, much of it grown locally or regionally, yet it's been my experience that homegrown tomatoes are several orders of magnitude better than anything I can buy at a store.

Beefsteak tomatoes are yummy!

The EPA's Food Recovery Challenge webpage has a horrifying set of statistics from 2011 (they're still collecting/collating the 2013 stats apparently, but what happened to 2012?). Almost all (96%) of the 36 million tons of food waste generated in 2011 ended up in landfills or incinerators. The food sent to landfills breaks down and releases methane, a nasty greenhouse gas twenty times as effective at increasing global temperature as CO2. More than a third of all methane released into the atmosphere comes from landfills (domesticated livestock accounts for 20% and natural gas and oil systems another 20%).

While all that food is being wasted and much of it is contributing to global climate changes, 14.9% of U.S. households were food insecure in 2011, not knowing where they'd get their next meal. Fortunately we have a strong local Food Bank serving Larimer County and their "Plant It Forward" campaign's 2014 goal is to obtain 15,000 pounds of produce donated by local gardeners.

So where are you in the nationwide quest to cut food waste?

Food Safety Issues: America in 2014

March 19th, 2014

Having written recently about China's food problems, I knew there were some remaining in the United States, but their scope amazed me. Each year forty-eight million of us suffer from food poisoning. Over 125,000 of that group are ill enough to be hospitalized and 3,000 die.

Having seen those numbers on a government website, I decided to review the modern timeline of food-related illness in America and how our laws help prevent it.

One step in meat processing

My initial thought was of Upton Sinclair's 1906 novel, The Jungle, a powerful exposé of the American meat-packing industry. After its publication, public outcry led President Theodore Roosevelt to appoint a special commission to verify Sinclair's tale of the horrors of meat production in Chicago and elsewhere, and eventually led to the Meat Inspection Act of 1906 and the Pure Food and Drug Act.

For many years a so-called "Poke and Sniff" system prevailed. The 1906 law said federal employees could inspect and approve (or disapprove) any animal carcasses which were to be sold across state lines. The inspectors could physically monitor the slaughter line, touching and smelling the animals and cuts of meat. They could remove any meat that was rotten, damaged or had abrasions or growths. Some felt that provided only minimal protection for the public, but that's what we had for over eighty years.

I grew up in Wisconsin in the 40s and 50s. My father, in addition to his medical practice, was the local Public Health Officer, and I remember going to inspect local area dairy herds with his sanitarian when I was a teenager. I don't recall major food safety issues surfacing in those decades, although there may have been some isolated cases that I didn't pay attention to.

I was in medical school from 1962 to 1966. During that time, two women died in Michigan from botulism, a rare but extremely serious paralytic disease caused by a toxin produced by a bacterium. In their case the toxin was in canned tuna fish. There were other botulism outbreaks in 1971, 1977, 1978 and 1983, with 59 people affected in the largest such episode. All were related to food being improperly canned or prepared.

In 1985 a huge outbreak of another form of food poisoning happened. This one involved at least 16,284 people (and perhaps up to 200,000) in six different states and was caused by bacterial contamination of milk.

Some new laws only applied to a few food items.

The Department of Agriculture's food safety and inspection timeline appears to skip over a considerable period of time, although a number of laws were passed to strengthen federal regulation of the food chain. The 1957 Poultry Products Inspection Act and the 1970 Egg Products Inspection Act added to the government's ability to prevent food-related illness in specific areas, but wouldn't have prevented the major food-related episodes I just mentioned.

Then in late 1992 and early 1993 an E. coli outbreak sickened 623 people and killed 4 children in four western states (Washington, Idaho, Nevada and California). It was eventually traced to contamination of under-cooked Jack in the Box hamburgers with that common bowel bacterium. Those affected developed bloody diarrhea and, in a few cases, severe kidney disease from an entity termed hemolytic-uremic syndrome (HUS). HUS is the most common cause of acute kidney failure in children and usually occurs when an infection in the digestive system produces toxic substances that destroy red blood cells, causing severe kidney injury. The CDC traced the meat back to five slaughter plants in the United States and one in Canada.

In 1998 the USDA introduced a brand-new method for inspecting meat. The Hazard Analysis and Critical Control Point (HACCP) system had been pioneered by NASA. That agency had protected our astronauts by adopting a system of critical control points: anywhere a germ, invisible to the naked eye, could find its way into a food meant for a space mission.

Pinging off the NASA approach, the USDA also mandated that inspectors could order meat plants to do microbial testing. The meat industry became responsible for establishing and submitting its own HACCP plans. The USDA would then review a plan, approve it if it seemed appropriate, and inspectors could monitor the plants' compliance with their own safety plans. The problem is the age-old one of the fox guarding the henhouse; inspectors no longer had the power to physically examine the meat on the line. The acronym HACCP was often derided as "Have a cup of coffee and pray."

On January 10th, 2014 two articles were published that changed my mind: the first, on UPI.com's website, simply said, "U.S. food safety a big issue in 2014." It mentioned that already in 2014 the U.S. Department of Agriculture had shut down a meat-processing facility in the state of Minnesota.

The other online article was written by Dr. Margaret A. Hamburg, the Commissioner of Food and Drugs, i.e., the head of the FDA. It discusses the Food Safety Modernization Act (FSMA), signed by President Obama in early January, 2011. It was a reaction to the figures I mentioned at the start of this article.

This law gave the FDA "a legislative mandate to require comprehensive, science-based preventive controls across the food supply."

But let's look at its provisions, some of which make eminent sense while others, in my opinion, ask for the impossible.

On the one hand the FSMA required food facilities to have a written preventive control plan. I agree with that idea, but note it's a complex process with multiple steps involved. Such a plan includes evaluation of possible hazards, figuring out what one has to do to markedly alleviate or totally eliminate them, noting how you will monitor these control measures (and keep appropriate records) and specifying what you will do when issues arise. Oh, and by the way, you had a year and a half to do all that.

Other parts of the FSMA involved standards for safely producing and harvesting vegetables and fruits, plus another set involving the prevention of "intentional contamination" of food. The latter may be quite difficult. As the law is written, 600 such foreign food facilities must be inspected in its first year, with the number doubling for each of five additional years. Let's see, that's 600, 1,200, 2,400, 4,800, 9,600 and 19,200. Where in the world would the FDA get enough trained inspectors? And that's assuming that the foreign countries would allow such detailed examinations of their food-producing and exporting businesses.
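To put numbers on that doubling schedule, here's a tiny Python sketch; the yearly targets match the figures I just listed, and the total is simply what they add up to over the six years:

```python
# FSMA foreign-facility inspection targets: 600 in the first year,
# doubling in each of five additional years.
yearly_targets = [600 * 2**year for year in range(6)]
print(yearly_targets)                                      # [600, 1200, 2400, 4800, 9600, 19200]
print(f"Total over six years: {sum(yearly_targets):,}")    # 37,800 inspections
```

Nearly 38,000 foreign inspections in six years, which is why the staffing question strikes me as the crux of it.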

One of every six Americans becomes ill from food-borne disease each year. Only a small fraction of them (approximately 1/4th of 1%) need to be hospitalized, and even of those who do, only 2.3% die. But another way of looking at those mortality statistics is to say the deaths are equivalent to almost 10% of the number who die from motor vehicle accidents each year in this country.
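For anyone who wants to check that arithmetic, here's my own rough version of it in Python. The food-poisoning numbers are the ones quoted at the start of this post; the motor vehicle figure (roughly 33,000 deaths a year in the U.S. around that time) is my approximation, not a number from either article:

```python
# Rough check of the food-borne illness statistics quoted above
ill = 48_000_000        # Americans with food poisoning each year
hospitalized = 125_000  # "over 125,000" hospitalized
deaths = 3_000          # deaths each year

print(f"Hospitalized: {hospitalized / ill:.2%} of those who get sick")  # ~0.26%, about 1/4 of 1%
print(f"Deaths: {deaths / hospitalized:.1%} of those hospitalized")     # ~2.4%

# Comparison with motor vehicle deaths (approximate figure, my assumption)
mva_deaths = 33_000
print(f"Food-borne deaths vs. motor vehicle deaths: {deaths / mva_deaths:.0%}")  # ~9%, almost 10%
```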