‘Weird’ remedies and the problem of ‘folklore’

“For a child that wets the bed, roast a mouse and give him the gravy to drink, and it will cure certainly”.

“For whooping cough, take a large hazel nut, bore a small hole in one end and take out the kernel; then place in the hollow a living spider, close up the hole and place to the child’s neck. When the spider dies, the child will be cured”.

“to discern the king’s evil, hold an earthworm to the aggriev’d place. If it dies it be king’s evil, otherwise not”.

These are just a few examples of what might be, and indeed often are, termed ‘folkloric’ remedies. They are taken from various Welsh sources and are typical of the sorts of animal/ritual healing receipts that commonly occur in recipe collections and in recorded oral testimony. My own academic work on Welsh medical history has tended to move away from ‘folklore’. Instead, I have looked at a broader picture: the ways that medical remedies reflect a culture of knowledge exchange, highlighting transmission and the importance of social networks. As such, I’ve concentrated less on folkloric medicine per se, for two reasons. First, so much has been written on Welsh folklore that it is difficult to say much that is new or objective. When I started my academic career, there was only one book about Welsh medicine that wasn’t largely based on folklore, and that was written in 1975. Second, I have a problem with the whole concept and terminology of ‘folklore’ – not with the remedies themselves, but with the ways that they have been interpreted. Why?

‘Folklore’ – a nineteenth-century coinage – is an extremely loaded term. It carries associations of backwardness and quaint antiquity, and almost immediately sets up an ‘other’. Folkloric medicine (I’ll stop using the single quotes now) is implicitly an antipode to regular or orthodox medicine. Until the more concentrated efforts of the past 10 or 15 years to understand folkloric medicine in a broader context (by historians such as Owen Davies and Lisa Tallis), traditional approaches were antiquarian, typified by articles with titles such as ‘Weird Welsh Remedies’ in the early 1900s, and still appearing in local history journals into the 1970s and beyond. In one article from the 1960s, for example, titled ‘An old Welsh receipt book’, the author transcribed several remedies but added her own pithy observations. You therefore have the first line of the remedy “Take a mold (sic)…” followed by the bracketed “(presumably a mole, poor beast!)” and elsewhere “take frogspawn in the spring” with the author’s side-splitting comment “I have yet to find frogspawn at any other time of the year!”. This is an extreme example, but it is this condescending approach that I find frustrating. There is still a virtual cottage industry of books about old, weird remedies.

One effect of this is to set up a tension between academic and ‘popular’ history. People love strange remedies and, let’s be honest, some of them are indeed very strange to modern eyes. Early modern cures, such as the one for a stopped heart involving the blowing of tobacco smoke up the unfortunate victim’s bottom with a pair of bellows, cannot help but seem odd and, dare I say, sometimes funny. Studying these sources is great because they constantly throw up something more unusual or extreme, usually just when you think you’ve seen the strangest one already. But, on the other side, there is a need to see remedies in their proper context, and to see them objectively as part of a broader medical culture. By laughing at the remedies we are, by extension, mocking the people who created them, and this is neither academically sound nor fair. In fact, if we look closely at even the strangest remedies, we can often see patterns emerging which are completely logical if you subscribe to the model of the body that our ancestors did.

One common ingredient in remedies for eye complaints, for example, was snails. One remedy for a web (or stye) in the eye was to impale a garden snail on a pin and let the juice drop into the eye. On the face of it this seems counter-intuitive. On the other hand, it fits perfectly well with a core belief in early modern medicine – that of the ‘doctrine of sympathies’. Put simply, if a plant, animal or substance resembled a part of the body, then it was assumed to have healing properties for that part. A snail, like the human eye, is viscous and slimy. For the same reason, herbs like ‘eyebright’ were used to treat ophthalmic conditions because their flowers were thought to resemble the human eye.

In other cases, the ‘sympathy’ might be metaphorical, as in the case of ‘oil of swallows’ to cure withered limbs. In this recipe, which occurs widely in remedy collections across early modern Britain, an oil was made by catching 20-30 live swallows, baking them to a powder and then adding a variety of other herbs to make the ointment, which was sometimes also placed in a dunghill for a period of time before it was ready. Here, again, the recipe appears to have no immediate logic. But swallows appear in summer – a time of warmth, flourishing and, in humoural terms, youth. Likewise, in physical terms, swallows are always in vigorous flight, always on the move and full of vitality. What better thing to apply to a limb that has lost its vitality and movement than something which bears the physical properties of these animals? (See Rebecca Laroche’s and Michelle DiMeo’s excellent article on the oil of swallows here: http://recipes.hypotheses.org/308)

The use of animal products is also an important point. Anything which had once been living had important properties – known as animus. It had a vital spirit which could be applied to revivify tired or ailing bodies. This might include living creatures (e.g. cutting the comb of a live cockerel and letting the fresh blood drip over a tumorous growth, or cutting a live pigeon in half and holding it to the neck to cure goitre). At the extreme end of this belief was the product known as ‘mummy’, made literally from the flesh of desiccated human remains – sometimes hanged criminals. Animal products were not weird to people at the time; they were mainstream.

This raises another question: why did people use substances and products that today would be regarded as dangerous? The answer, again, brings us back to approaches to the body and to sickness. On one level, people simply did not know or believe certain substances to be harmful. It therefore made sense to try a variety of different substances, especially ones that logically had some connection with healing. People adopted a ‘carpet-bombing’ approach to sickness, throwing whatever they could at their symptoms in the hope that something stuck.

On another level, though, a different concept was in operation about what a medicine should do. For us, today, a medicine works if it makes us better. For people in the early modern period, though, a medicine worked if it had an effect. If a medicine made you vomit, then it had worked, i.e. it had produced an effect upon the body. This was part of the process of getting better. When people who had taken a particular medicine did recover, they naturally attributed their recovery to that substance. The remedy would then be subsumed into their own personal pharmacopoeia and passed on to others as a ‘probatum’ cure, i.e. one that had been proved.

This can be seen in the longevity of some remedies and their recurrence across many centuries. Eye recipes involving snails, for example, were still in use in the nineteenth century and were reported in antiquarian articles as surviving. An interesting survey taken in the 1970s recorded herbal remedies still in use in rural Wales, many with evidence of long-term family use. In many cases, the recipes and ingredients reported can readily be found in early modern collections. In Mid-Wales up to the 1950s, for example, it was apparently common to use the herb rue in preparations for children suffering from worms; similar remedies occur in several Welsh collections of the seventeenth century. Lungwort and eyebright were still in evidence in the 1970s for respiratory and ocular conditions, respectively, and their use can be traced back almost to antiquity. Human urine was another common ingredient in a variety of seventeenth-century remedies and, within living memory, has been noted as having cosmetic value and as a treatment for ear conditions. Perhaps most interestingly, a journal article of 1906 reported that a Montgomeryshire woman who injured herself with a scythe went back to the scythe for seven days afterwards and repeated an incantation over it. This bears an extraordinary similarity to the so-called ‘weapon salve’ or ‘powder of sympathy’ associated with Paracelsus and much debated in the seventeenth century, whereby the idea was to treat the instrument that had injured somebody, rather than the wound itself.

What stands out in most cases is the extent to which apparently ‘weird’ remedies were not an alternative to mainstream medicine…they were mainstream. People at the time did not view things in terms of alternatives – they took a more holistic view and saw a bank of medical ingredients and approaches from which they could draw. Whilst practitioners such as cunning folk were certainly common, and used metaphysical means such as charms and spells to augment their healing, these too were part of a range of choices for the early modern sufferer.

The problem comes in how to refer to such remedies. In the early modern period, while the term ‘folklore’ did not exist, there was an awareness of a ‘popular’ medicine, and contemporary physicians referred to the popular errors of unlearned, common or vulgar practisers of medicine. There was perhaps a distinction between strictly orthodox Galenic medicine and a looser popular tradition, but the two intruded into one another to such an extent as to make the distinction almost arbitrary. Also, when physicians complained about non-licensed empiricks, this had more to do with the threat to their livelihoods than with concern about recipe ingredients.

Perhaps the time is right for a more concentrated study – and a redefinition – of ‘folklore’ in medicine, in order to remove some of the implicit condescension. These are fantastic sources, and it is always pleasing to see people’s reactions to them, and to have people pass on their own family remedies to me, as they often do at public lectures. As we understand more about the transmission and reception of early modern remedies, we get closer to the lived experience of sickness and to the medical worldview of our ancestors. It is also worth remembering that modern biomedicine has been around for perhaps 150 years; humoural medicine lasted for millennia.

That doesn’t mean we should head for the lettuces and grab the snails quite yet though!

Norovirus and the reporting of epidemics through history

This winter has already witnessed an unprecedented increase in cases of norovirus – the so-called ‘winter vomiting bug’. For some reason, across the globe, the infection has spread with increasing virulence and has also lingered longer than normal in parts of the world now moving from spring to summer. Norovirus is an especially durable and adaptable virus. It is perfectly suited to what it does: spreading from person to person either through airborne contact with minute particles of vomit, or through contact with contaminated surfaces, on some of which it can last for up to two weeks. Given that I have a pathological phobia of vomiting, this one is the stuff of nightmares!

In Britain, the Health Protection Agency is the public face of public health, charged with providing a virtual barometer of sickness. Its website contains a list of the current maladies doing the rounds and, in the case of flu and norovirus, weekly updates on the numbers of the stricken. The site also offers tips on how to prevent the spread of the virus and some advice (if little comfort) to those who have already succumbed.

To my mind, the information on the HPA website is extremely reminiscent of the information disseminated to the public in past times of epidemic disease – say, the seventeenth-century plagues. It strikes me that authorities throughout history have had to balance the need to provide practical details of encroaching sickness with the need to avoid spreading panic. The language of sickness reporting in fact has a long history, and shows remarkably similar patterns.

The reporting of numbers of sufferers, for example, was certainly an important element in the way the Great Plague of 1665 was recorded. In seventeenth-century London, the so-called ‘Bills of Mortality’ gave a weekly update on deaths in the city, in the form of a published pamphlet. Information for these pamphlets was gleaned from the ‘searchers of the dead’ – people (often women) who were employed to examine fresh corpses to discern the cause of their demise. Their diagnoses were diverse. In one bill dating from 1629, the causes ranged from predictable conditions such as measles, cold and cough, and gout, to other, stranger ones such as ‘teeth and worms’, ‘excessive drinking’ and ‘suddenly’!

As the plague intensified, though, the Bills of Mortality rapidly became dominated by these numbers, and Londoners pored over the pages every week to gauge the seriousness of the situation. News of the contagion was a regular topic of conversation, and people were eager to learn whether things were getting better or worse. The newly burgeoning cheap presses of the mid-seventeenth century went into action, producing everything from treatises on the causes of the plague to ‘strange newes’ about the latest outbreaks and figures, and even popular cures.

The authorities were clearly worried about the danger of epidemic sickness, and took measures to try to limit its spread. One of these was to restrict popular gatherings such as fairs, to prevent the disease from running rampant. A Royal proclamation from 1637, for example, entreated people not to attend the popular Sturbridge Fair that year, the king ‘Forseeing the danger that might arise to his subjects in generall’.

So, the authorities published the numbers of sufferers, took preventative measures against the spread of contagion and, in general, maintained a dialogue with the public, updating them on which diseases were circulating and on potential ways to avoid them. The popular press also served to stir up fears, however, and perpetuated public dialogue about infection. Disease and health have always been topics of conversation but, in times of contagion, such conversations become more concentrated, and people become more engaged in them.

Fast forward to 2013 and it is remarkable how similar the situation still is. The HPA website, for example, gives a weekly update on the numbers of norovirus sufferers, not only in terms of clinically reported cases, but also on the assumption that for every reported case there are a further 288 or so unreported cases – people who simply decide to stay home and self-medicate. Indeed, at the present time, people are being actively discouraged from attending doctors’ surgeries, and hospital wards are being closed to the public. The impression is one of a wave of contagion breaking over the British Isles and, for me at least, one that is coming to get me!

There is indeed a fine line to tread between reporting facts and sparking panic. When SARS first emerged, there was a great deal of information (and misinformation), with various ‘experts’ calling it variously a massive threat to humanity or simply the latest in the long procession of epidemics to afflict humankind. A few years ago, a virtual global panic was instigated by the apparent mutation of avian flu, or bird flu. This outbreak made ‘pandemic’ the buzzword of the late 2000s and, again, much space was devoted (and indeed still is, to some degree) to educating people on what it is, who has got it, and how to avoid it. In 2005, a UN health official warned that bird flu was capable of killing 150 million people worldwide. According to Dr David Nabarro, speaking to the BBC at the time, “It’s like a combination of global warming and HIV/Aids 10 times faster than it’s running at the moment”. The World Health Organization, perhaps seeing the potential panic that this could cause, immediately distanced itself from the comment. The fact that the outbreak was ultimately relatively mild emphasises the problem that epidemic disease causes for health officials. How to alert people without scaring them?

None of this is helped by the press who, like their seventeenth-century counterparts, are keen to give the largest mortality figures, or to emphasise the spread of diseases. In June 2012, for example, Reuters were still warning that a global bird flu pandemic could happen at any moment: http://www.reuters.com/article/2012/06/21/us-birdflu-pandemic-potential-idUSBRE85K1ES20120621

The same pattern is now playing out with norovirus – although clearly this does not carry the same levels of danger; here we are talking about contagion rather than mortality. Take the headline in the Western Mail newspaper of 20th December, though: “Norovirus: Now more wards are closing as hospitals in Wales hit”. The breathless style of this banner line emphasises rapidity: this is not just a straight report – “NOW” it’s coming. What purpose do these reports ultimately serve? Put another way, why do we need to be told? Logically, if preventative measures are possible then it makes sense to tell as many people as possible. But this is often not the purpose of newspaper copy in times of sickness, which, to me, seems at times to be deliberately provocative.

The answer seems to be a deep-seated human interest in sickness, ultimately linked to our own mortality. Even in this apparently scientific and modern age of medicine, there are still many things which are incurable, and many diseases which have the ability to wipe us out at a stroke. It is this uncomfortable reality which perhaps continues to fascinate and frighten us. We live in an age of control, but some things are still beyond our control, and it is perhaps this innate fear of disease – of our own transience – which makes these headlines ultimately so compelling.