Nadine Burke Harris, M.D. is the author of "The Deepest Well: Healing the Long-Term Effects of Childhood Adversity."

In an interview on NPR, Dr. Harris discusses what negative experiences can do to a growing child’s health. Children’s exposure to adverse childhood experiences – such as physical, emotional, or sexual abuse; physical or emotional neglect; parental mental illness; substance dependence; incarceration; parental separation or divorce; or domestic violence – can negatively affect health outcomes.

Are We Failing Our Elders? The Rise of Senior Poverty in the US

One in every six elderly Americans now lives below the federal poverty line, and over half of all ‘baby boomers’ have reported a deterioration in their quality of life over the past few years[1]. 21% of married couples and 43% of single people over the age of 65 depend on Social Security for a massive 90% of their income, and almost 62% of households headed by someone over the age of 60 are in debt[2]. 2.9 million older households in the US suffer from food insecurity[3], and between 1991 and 2007, the number of 65- to 74-year-olds filing for bankruptcy increased by a huge 178%[4].

Now, “25 million Americans aged 60 and above are economically insecure”[5], leaving them to struggle with housing, rising healthcare costs, nutrition, transportation, and savings. To make matters worse, they are often left isolated and alone, living in suburbs surrounded by other elderly people whose families have grown up and flown the nest[6].

An Aging Labor Force – Working Until You Drop

With the increasing financial difficulties faced by the older population, many continue working long after the traditional retirement age, with 40% of ‘baby boomers’ claiming that they plan to “work until they drop”[7]. Of course, for some, this is a choice – a desire to continue in their current roles or even explore alternative career options – but for too many it’s a financial necessity. In the year 2000, 4 million senior citizens continued to work after retirement age, but by 2017 that number had jumped to 9 million – the highest senior employment rate in 55 years[8]! In fact, it’s the highest senior employment rate since before retirees earned the right to healthcare and Social Security benefits in the 1960s.

To put further strain on older employees, the workforce is changing. Technology is becoming more and more prevalent, and senior employees are left to learn new skills or change the way they work[9]. This forces some out of their jobs, and as a result, many sell their properties, give up their lifestyles, and travel the country in search of seasonal work[10]. The Bureau of Labor Statistics (BLS) estimates that by the year 2022, workers aged 55 and above will make up a huge 25% of the labor force[11], and it doesn’t look like things are going to get any better any time soon. We’re failing our elders, and we need to do something about it.

Why Some People Choose to Amputate, and the Implications that Has for the Rest of Us

For the majority of us, the amputation of a limb is something from our nightmares – the result of a horrific accident or debilitating illness, perhaps a birth defect – something to be avoided at all costs. It's certainly not something we would choose to do. Approximately 185,000[1] people in the US experience the amputation of a limb every single year, but surprisingly, more and more people are opting for an elective amputation – one that is not the result of a life-threatening defect but is voluntary, undertaken for a variety of different reasons. But why do people choose to have such a life-changing procedure? And what implications does that have for the rest of us?

Elective Amputation of Problematic Limbs

Currently, the majority of elective amputees are those who are suffering from problematic – though not life-threatening – issues, such as a damaged foot or a mangled knee. Although many doctors and surgeons still oppose the idea, these patients choose amputation as a way of improving their lives. It often ends painful suffering, can improve mobility, and offers a higher quality of life in general. Hugh Herr, a double amputee and biophysicist in the biomechatronics department of the Massachusetts Institute of Technology (MIT), is one such person[2], and he believes that as time goes on, people will increasingly choose to amputate in order to replace their "heavy and stupid"[3] legs with high-concept, technological prostheses.

Should We Be Afraid of AIs?

As technology continues to improve and the development of artificial intelligence hurtles forward, it’s no wonder we find ourselves asking whether we should be worried. What will artificial intelligence be like? Will we retain control, or will something go wrong? Will it help us or hinder us? There is certainly enough science fiction to suggest the latter. Books and movies are littered with examples of rogue robots and AIs gone bad – machines that take over the world and enslave the human population or, worse, kill us all off; but that’s just fiction. And besides, we’re a long way from walking, talking AIs who are part of our everyday lives, right?

A Present Problem

Wrong. We already encounter artificial intelligence in much of the technology that we use every single day. Apple’s Siri and Microsoft’s Cortana are, after all, forms of AI – albeit more basic than those we see roaming the streets in the latest sci-fi movies. Then there are smart cars that can drive themselves, iRobot Roomba vacuum cleaners that guide themselves around your room and then return to their charging stations, and security surveillance that can track potential crime without human control. There are fraud detectors, predictors for retailers, recommendation services like those you find on Amazon, and even automated online customer service support[1].

And that’s just the beginning of it. In Japan, the tech firm SoftBank has released a best-selling humanoid robot named Pepper who can recognize emotions and respond accordingly[2]. There are sex robots currently in production (alongside a sweep of controversy), automated weapons are being developed, and more and more AIs are taking the jobs of human beings. So what seems like a future problem is actually something that we are very much embroiled in already.

The Addiction of Pleasure and All its Damning Consequences

Hedonists have long argued that the path to happiness is the pursuit of pleasure – the more pleasurable activities you participate in, the happier you are likely to be. It seems somewhat self-evident too – pleasure makes you happy, albeit for a limited time, so if you bunch together a number of happiness-inducing pleasure activities, you will ultimately be happy. However, evidence suggests the opposite. In a world abounding with pleasures – alcohol, drugs, sugar, sex, pornography, wealth like the general populace has never seen before, even social media and smartphones – we seem to be unhappier than ever. In a world of increased privilege, we are increasingly discontent, and that in itself has negative consequences we could never have foreseen. As we become addicted to the pursuit of pleasure, are we actually ruining our chances of genuine happiness? And could we potentially be sending ourselves to an early grave?

The Increase of Pleasure Activities

The drive for pleasure is evident within our culture, and it's becoming easier and easier to satisfy as our lives become less fraught with worries such as war and famine. The average American now consumes 94 grams of sugar per day – almost double the government's recommended limit of 50 grams per day, and up from 87 grams per day in 1970[1]. Not only is the availability of this 'feel good' food increasing, the desire for it is sky-rocketing too, suggesting an addictive tendency in this pleasure-seeking habit. It's not just sugar, either. Drug use has increased by almost six per cent since 2007[2], and the number of smartphone users has shot up from approximately 62 million in 2010 to 224 million in 2017[3]. Pornography, narcotics, social media use, and alcohol intake are all on the rise too. What's more, the average annual household income has increased from $49,354 in 2007 (ranging from $36,338 in Mississippi to $62,369 in New Hampshire) to $57,856 in 2015 (ranging from $40,037 to $75,675)[4], meaning that we can now pursue pleasure more quickly and easily than ever before.
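
Those comparisons are easy to verify. A quick back-of-the-envelope check in Python – using only the figures quoted above, not an independent dataset – confirms the arithmetic:

```python
# Figures as cited in the paragraph above.
daily_sugar, limit, sugar_1970 = 94, 50, 87       # grams of sugar per day

print(daily_sugar / limit)                        # 1.88 -> almost double the limit
print((daily_sugar - sugar_1970) / sugar_1970)    # ~0.08 -> an 8% rise since 1970

income_2007, income_2015 = 49_354, 57_856         # average household income, USD
print((income_2015 - income_2007) / income_2007)  # ~0.17 -> incomes up ~17%
```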

Cheat Meals: Sin or Savior?

Dieting can be really tough, especially when there is so much conflicting information out there – eat this, not that; do that, not this. The topic of cheat meals – or even cheat days – is no exception, and the debate it engenders can get a little heated. After all, both food and health are passionate subjects. What’s it all about, though? Is it really possible to have one meal a week, where you eat whatever you want, and still maintain – or even lose – weight? Cheat meal advocates say yes, and they say they’ve got the science to back them up. 

A Psychological Boost

One of the biggest benefits of cheating, advocates claim, is actually a psychological one rather than a physical one. Everyone knows how tough it can be to stick to a strict regime, and the idea is that a cheat meal will allow you to relax your regime once a week, helping you to stick to it the rest of the time. It provides that added incentive to be ‘good’, because you know that you’re earning a splurge on the weekend[1]. That’s a dangerous road, though, and it only works for some; it can, potentially, lead to that famous slippery slope. Joe Vennare, creator of The Hybrid Athlete, warns that “some people can’t make the switch from healthy to unhealthy. Once they get a taste of sweets, they binge and can’t go back. It throws off their entire diet plan, serving as a setback instead of a small break from the rules”[2].

The Eco-Conscious Consumer Part II: Closing the Loop

It's no secret that the excesses of modern-day consumption are at the heart of the current environmental crisis. Overwhelming demand for cheap, often disposable goods is rapidly depleting the earth's finite resources while filling up landfills, producing air and water pollution, and littering our oceans with chemical-laden, non-biodegradable materials. Environmental advocates and economists alike increasingly recognize that an economy built upon continued growth in consumption rates is fundamentally unsustainable. To truly reduce the environmental impact of our consumption we need to rethink our approach to consumerism altogether.

In Part I of my Eco-Conscious Consumer series I argued the importance of supporting businesses that prioritize sustainable practices and using consumer power to pressure giant corporations to operate in ways that are environmentally responsible. But truly mindful consumption requires more than just picking and choosing the companies we buy from; it requires us to examine how our own consumer behaviors contribute to the environmental crises we face today. Ask yourself: How often do you buy things you don't really need? What did it take to make those things? And what happens to those things when they're eventually discarded?

Here, I'll explore the steps consumers can take toward promoting a closed-loop system wherein the earth's precious resources are used as efficiently as possible. A note of warning: since I'm an NYC resident, many of the services and organizations I highlight are New York-based. New York is far from perfect, but we do have a strong coalition of nonprofits and city initiatives that offer a host of resources for living a low-impact lifestyle. For readers outside the Big Apple, don't hate – investigate! Find out what kind of comparable services exist near you. If the options are sparse, consider using the examples here as templates for your own eco-conscious venture.

What Exactly is so Super About Superfoods?

We all know the importance of healthy eating, and we all know the dangers that come with a bad diet and an unhealthy lifestyle, but it can certainly get confusing with all that conflicting information out there. The concept of ‘superfoods’ is no different. The term has been subject to both praise and condemnation since it was popularized by the 1990 book Superfoods by Michael Van Straten and Barbara Griggs[1], yet it still remains quite firmly in the lexicon of many health-food advocates. In fact, between 2011 and 2015, the number of food or drink products containing the word ‘superfood’, ‘superfruit’, or ‘supergrain’ doubled[2], and they claim to be stuffed full of nutrients and antioxidants that will not only make you look and feel better, but will ultimately help you to live longer. That’s quite an appealing promise, but are superfoods really as super as they claim to be?

Superfoods

Alison Rumsey at the Academy of Nutrition and Dietetics in New York City explains that superfoods are those foods which have a high content of vitamins, nutrients, and antioxidants, and they are important, she claims, because “a lot of things can cause inflammation in our bodies, and cells get oxidized, which can cause many different disease states. Antioxidants help to get rid of these free radicals that happen when you have oxidation”[3]. According to advocates, superfoods can lower your risk of chronic disease, slow the ageing process, ease depression, increase intelligence, and improve physical ability[4].

Although, as the American Heart Association points out, there are no set criteria for determining what exactly is and is not a superfood[5], there are certainly some foods that fit the description of being especially nutritious and, as a result, seem to uphold the idea that superfood advocates seek to promote. Take almonds as an example. There is solid scientific evidence that almonds are one of the richest sources of vitamin E, and research demonstrates that they can help control cholesterol and blood sugar whilst reducing inflammation. Avocados, similarly, are fantastically rich in nutrients, providing around 40% of a woman’s daily recommended intake of fiber, 25% of vitamin C, 16% of vitamin E, 39% of vitamin K, and 30% of folic acid – all of which makes avocados great for cholesterol control, for diabetes, and even as a natural sunscreen. Kale is another oft-cited superfood that has the research to back it up. At only 33 calories per 100g, kale provides 200% of your daily vitamin A intake, 134% of vitamin C, and a massive 700% of vitamin K, making it great for bone health and healthy blood clotting[6]. With evidence like that, it’s hard not to take superfood claims at face value.

Brain-Computer Interfaces: When Computers Can Read Your Mind

The idea of controlling a computer with your mind seems like something out of a sci-fi novel, something that couldn’t possibly happen any time soon, but we might be closer to the technology than we think. Brain-computer interfaces, or BCIs, are machines that read the electrical impulses our brains release, working out what you want and giving it to you immediately – without you having to lift a finger. Clicking on that mouse button could well soon become a thing of the past!
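
To make the idea concrete, here is a minimal, hypothetical sketch in Python – synthetic data only, not the code of any real BCI – of how a system might turn brain activity into a ‘click’: measure the power of the signal in a chosen frequency band and trigger a command when it crosses a calibrated threshold.

```python
import numpy as np

SAMPLE_RATE = 250   # samples per second, a typical consumer-EEG rate
THRESHOLD = 100.0   # a real system would calibrate this per user

def band_power(signal, low_hz, high_hz):
    """Average spectral power of `signal` between low_hz and high_hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / SAMPLE_RATE)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= low_hz) & (freqs <= high_hz)
    return power[mask].mean()

def detect_click(window):
    """Treat a surge of 8-12 Hz (alpha-band) power as a 'click' intent."""
    return band_power(window, 8.0, 12.0) > THRESHOLD

# Simulate one second of 'brain activity': background noise plus a 10 Hz burst.
t = np.arange(0, 1.0, 1.0 / SAMPLE_RATE)
window = 0.1 * np.random.randn(len(t)) + np.sin(2 * np.pi * 10 * t)

if detect_click(window):
    print("Click!")  # the computer responds without a finger being lifted
```

Real BCIs decode far richer signals than this, of course, but the pipeline – sense, decode, act – is essentially the same.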

There are many ongoing research projects into just this, and the technology is being developed for a number of different reasons – aiding disability, telepathy, empathy, education, enjoyment, and supplementing human intelligence[1] being just a few. Elon Musk, entrepreneur and founder of Neuralink, a company working towards the development of wireless BCIs, argues that while the technology will initially be used to treat disabilities and disorders, ultimately it will be used by everyone. “We are,” he says, “about eight to ten years away from this being usable by people with disability”[2].

Old Technology

With comments like that, it may feel like the future is fast approaching, but actually, BCIs are not as new as they seem. The technology is based on the EEG (electroencephalogram), first developed by German psychiatrist Hans Berger, who in 1924 recorded the electrical signals sent from the brain of a 17-year-old neurosurgery patient in order to produce a picture of it[3] – and EEG is still used today in identifying and diagnosing disorders and abnormalities. By 1973, Jacques Vidal was examining the possibility of using EEG-style signals to carry information from the brain to a computer, and it was he who coined the term ‘brain-computer interface’[4].

There are other examples too. Cochlear implants, for instance, have used exactly this kind of technology since their inception in 1982[5]. They bypass the parts of the ear that don’t work, take the sound waves from the air, turn them into electrical signals, and send them to the auditory nerve[6]. It’s a bit more complicated to do this for visual data, but ultimately, BCIs could do a similar thing for blind people – sending impulses to the brain from a camera, allowing the blind person to ‘see’[7].
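
As a rough illustration of that signal path, here is a toy sketch in Python – with made-up band edges and channel count, nothing like real implant firmware – showing how sound might be split into frequency bands, with one stimulation level produced per electrode channel:

```python
import numpy as np

SAMPLE_RATE = 16_000  # audio samples per second
# Hypothetical electrode channels, ordered low to high frequency (Hz).
BANDS = [(100, 400), (400, 1000), (1000, 2500), (2500, 6000)]

def channel_levels(audio):
    """Return one energy level per electrode band for a chunk of audio."""
    freqs = np.fft.rfftfreq(len(audio), d=1.0 / SAMPLE_RATE)
    magnitude = np.abs(np.fft.rfft(audio))
    return [float(magnitude[(freqs >= lo) & (freqs < hi)].sum())
            for lo, hi in BANDS]

# A 1 kHz test tone should excite the third channel most strongly.
t = np.arange(0, 0.05, 1.0 / SAMPLE_RATE)
print(channel_levels(np.sin(2 * np.pi * 1000 * t)))
```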

The Rise of Unassisted Childbirth

Pregnancy and childbirth are everyday occurrences, and the medical care that mothers and their newborns receive gets better and better all the time. Despite that, more and more people are opting for an unassisted childbirth, which can range from a homebirth with no medical practitioner present to a complete separation from the medical world, including no doctors, no midwives, no pregnancy check-ups, and no scans. It’s still relatively rare, but since the mid-1990s, the popularity of unassisted childbirth has been on the rise, and it’s now at its highest since 1975[1]. Those who choose an unassisted birth, however, face a backlash from medical organisations all around the world, which warn of the dangers of shunning medical advice and assistance. So why are more people opting for it, and is it really as dangerous as medical organisations claim it to be?

What is unassisted childbirth and why are people opting for it?

It’s worth noting that unassisted childbirth is different to a homebirth that includes an attending medical practitioner, be it a doctor, nurse, or midwife. Unassisted births are more about ‘going back to nature’ and are usually attended by a non-medical birthing partner or by family and close friends only[2]. Also called ‘freebirth’, a term coined by Jeannine Parvati Baker[3], the notion of unassisted childbirth grew out of the Natural Childbirth movement fronted by, among others, Grantly Dick-Read, which promoted the idea of childbirth without medical intervention and, in particular, without anaesthesia[4]. Dr. Amos Grunebaum, the director of obstetrics at the New York Presbyterian Hospital and Weill Cornell Medical College, explains that homebirths have risen in popularity by 79% in recent years, and of those 140,000 homebirths per year, approximately one third are unassisted[5].

The arguments that advocates of unassisted childbirth make are surprisingly simple. The medical system is negative and sterile, many say, and an unassisted pregnancy and birth is more exciting, more loving[6]. Women have been giving birth since the dawn of human existence, others argue, and all this medical intervention is a relatively recent occurrence – if women could do it before, why not now? After all, childbirth is not a medical emergency – it’s not an illness or disease or injury – so why is a hospital required[7]? Marilyn A. Moran, a proponent of unassisted childbirth, argues that childbirth is an inherently private and sexual matter[8], and Laura Kaplan Shanley argues that “birth is sexual and spiritual, magical and miraculous – but not when it’s managed, controlled, and manipulated by the medical establishment”[9]. Ultimately, then, the desire for unassisted childbirth arises from a disillusionment with the medical world and a desire to stay as natural as possible.