Designer Babies: Is Gene Selection Ever Acceptable?

The term ‘designer babies’ has been bandied around for some years now, and it refers to babies whose genetic make-up has been selected or altered. This could be to eradicate a disease or defect, or to ensure that particular genes are present. The idea has been discussed in science for many years, and in science fiction the concept has been around for much, much longer. The issue, however, is extremely controversial, and as science races to meet science fiction, the debate is becoming a serious one.

Pre-Implantation Genetic Diagnosis

Actually, the concept of ‘designer babies’ is not that far-fetched; doctors and scientists already use a form of gene selection during in-vitro fertilization (IVF) treatments. Pre-implantation genetic diagnosis, or PGD, refers to the practice of screening IVF embryos both for disease and for gender selection. Using this process, scientists can remove defective mitochondria (the ‘powerhouses’ of cells) from an embryo and replace them with healthy mitochondria from a donor egg, and in this way they can effectively ‘design’ babies without certain diseases. Of course, the process doesn’t work for all diseases, and it can also be used for non-medical preferences, such as the gender of the resulting child. Both forms of PGD are currently legal in the US, although the American Congress of Obstetricians and Gynecologists frowns upon the latter use, arguing that by allowing parents to choose the gender of their child, we run the risk of increasing sex discrimination[1].

DNA Editing

There is more to come, though. Dr. Tony Perry of the University of Bath in the UK was one of the first scientists to clone mice and pigs, and he claims that more in-depth DNA editing is on its way. It won’t be long, he says, until we can pick and choose which parts of our baby’s DNA we want to cut out and potentially replace with new pieces of gene code[2]. In fact, it’s already begun. Earlier this year, scientists in China took discarded IVF embryos and began experiments to correct the abnormal gene that causes the blood disorder beta thalassemia[3]. Even though these embryos were due to be destroyed, the experiments incited much controversy. Whilst few would argue against eradicating disease, the same technique could be used to alter healthy genes too, and the real question is: how far would be too far?

ADHD - Diagnosis, Treatment and Careful Management

In recent years, ADHD has received more and more public attention. Rates of diagnosis have increased in children nationwide – 11% of all American children had been diagnosed with ADHD as of 2011, compared to 7.8% in 2003 and 9.5% in 2007.[i] Understandably, such an increase has provoked skepticism and concern about the validity of the diagnosis, particularly as treatment for the disorder usually involves stimulant medications that can be easily abused. Perhaps this is one reason why, according to 2011 CDC data, less than a third of all US children over 6 with ADHD were receiving both medication and behavioral therapy, as is generally recommended, and nearly 20% of children with the disorder were receiving neither.[ii] Plenty of people are skeptical about whether ADHD is really that serious a problem, when many of its symptoms seem to be simply magnified versions of normal childhood behavior.

 

The Benefits of a Good Scare

It is around this time of year that scaring becomes a big deal. Houses are strung with Halloween decorations, from spiders to ghosts and ghouls, and then often splashed with ‘blood’ as well. Families and friends play tricks on each other or gather together in dark rooms to watch scary movies or tell terrifying tales. Haunted houses make a killing too – if you’ll excuse the pun! Of course, it’s not just in October that we like to be scared. In fact, all year round people partake in extreme sports and adrenaline-pumping activities, and the scare industry is big business. We make an event of being scared, eagerly anticipating it and taking a thrill from it afterwards. In short: we love it!

What’s odd about this is that should any of these activities have a genuine, real-life effect, we’d be terrified – and not in the good way intended. Most people would not want to be put into a genuinely life-threatening situation, and fear is our brain’s way of protecting us from that. It warns us of a threat and helps us to react accordingly. Yet when it comes to thrill seeking, a lot of people clearly thrive on a good scare. The question is, though: can a scare actually be good for you?

The answer? Absolutely it can, and this is why:

The High

The first and perhaps most obvious benefit of being scared is the natural high that comes with it. When faced with a potentially threatening situation, our bodies go into what is known as the fight-or-flight response. At its base, this is a release of adrenaline that allows us either to flee from the situation or to act quickly and efficiently to fight it. At a more complex level, it’s a physiological change: our heart rate increases, our breath quickens, we begin to sweat, our muscles tense, and our concentration focuses narrowly on the perceived threat. Our brains are also flooded with chemicals such as dopamine and endorphins – yes, the same ones that give you a buzz after exercising[1].

It’s not fight or flight that is, in itself, enjoyable. In a genuinely threatening situation, we wouldn’t get a buzz from this response; rather, it would simply help us react in a way that suited the circumstances. Dr. Margee Kerr, a scare specialist at Robert Morris University, says the response becomes enjoyable when we know that we are away from harm. Once we accept that we are safe, we are free to enjoy the chemical rush and the sense of relief – or even of achievement – that we feel once we get through a haunted house or scary movie[2].

 

 

Modafinil and Neuro-enhancement: Delightful or Dangerous?

What Is It?

Science fiction abounds with neuro-enhancement technologies and medications.  The Bradley Cooper film Limitless is just one example, in which characters discover a street drug that allows them to unleash 100% of their brain power, becoming not just more productive but more charming, cleaner, and more energetic.  But could such a drug ever exist?  Perhaps.  Whilst not on the same level as the drug in Limitless, Modafinil is tipped to be the first true neuro-enhancement drug suitable for healthy people.

The FDA-approved drug, which is marketed as Provigil in the US and the UK, is a Schedule IV drug, meaning that you must have a doctor’s prescription in order to legally buy or possess it[1], although there are plenty of off-label versions of the drug being sold on illegal, overseas websites.  At its base, it is a stimulant prescribed to people suffering from sleep disorders such as narcolepsy, and it is used to improve the cognitive function of people with neuropsychiatric disorders and shift-work-related sleep deprivation[2].  It has since become a nootropic, or ‘smart drug’, taken by healthy people to increase concentration, memory, alertness, energy, and motor skills, as well as to reduce sleepiness[3].

Doctor Peter Morgan from Yale University explains that it is effective because it acts on several different neurotransmitters at once.  It affects your dopamine levels, making you more alert and more interested in things.  It affects your norepinephrine, again improving alertness and focus.  It affects histamine too, which keeps you awake.  It is also believed to enhance short-term memory by as much as ten per cent by influencing the neurotransmitter glutamate[4].  It could affect other transmitters too, meaning that the reaction is different for different people. 

That’s a lot of cognitive improvement from one little pill, and the list of people taking it is impressive.  It’s prescribed to surgeons who need the boost to get them through long surgical procedures whilst maintaining a steady hand.  It’s prescribed to long-haul airline pilots and shift workers.  There are also many famous people who reportedly take it to help with day-to-day living: Tim Ferriss, author of The 4-Hour Workweek, is one; comedian and actor Joe Rogan is another.  Even President Obama is rumored to have taken it[5].  For those not so famous, the internet is littered with case studies and personal proclamations regarding the greatness of this drug and its potential for the future of neuro-enhancement.  All this, though, makes it easy to wonder: is it just too good to be true?

Does Modern Life Cause Early-Onset Dementia?

     Dementia affects millions of people worldwide; in the US alone, an estimated five million people currently suffer from age-related dementia.  If you are in America and over the age of 85, you have a one in two chance of developing some form of dementia[1].  It is the sixth leading cause of death[2].  In 2015, nearly one in five Medicare dollars will be spent on dementia, and Alzheimer’s will cost the country $226 billion.  By 2050, that cost is expected to rise to $1.1 trillion[3].  These are terrifying figures.

     Dementia is an umbrella term for disorders of mental processes caused by brain disease or injury.  There are a number of different types of dementia, but by far the most prevalent and best known is Alzheimer’s, which currently accounts for around 70% of all dementia diagnoses[4].  What’s worrying is that dementia diagnoses are increasing as the population ages, as are deaths from neurological diseases like Alzheimer’s.  What’s more worrying, though, is that dementia is affecting individuals at a younger and younger age.

Dementia and the Youth of Today

     Whilst ‘early-onset dementia’ previously referred to people in their mid to late 60s, it is now starting to refer to people diagnosed in their 30s and 40s.  That’s a frightening concept, and whilst some claim that it is simply the result of living longer and being better at curing other diseases (because, they argue, everybody has to die of something), Colin Pritchard of Bournemouth University in the UK is not so sure.  Pritchard and his team of researchers examined mortality data from the World Health Organization, looking at the changing pattern of neurological deaths across 21 western countries from as far back as 1979[5].  What they discovered was startling.

 

Gender Expression and the Umbrella of Terms

     Transgender Americans and their struggles have become much more prominent in recent times. Last year, an article about the transgender rights movement focusing on actress Laverne Cox made the cover of TIME magazine;[i] and earlier this year, Caitlyn Jenner made her transition from male to female public, discussing it in interviews with 20/20[ii] and Vanity Fair.[iii] It would be fair to say, then, that there is considerable public interest in and discussion of transgender people and the lives they lead at present. However, most of the public discourse appears to focus on a clearly delineated change: male to female, female to male. But for many people, gender can actually be a much more complicated issue than simply being one or the other.

     People who do not feel they fit in the world as either male or female will often refer to themselves as “genderqueer” or “nonbinary” rather than simply transgender.[iv] Both are umbrella terms that cover a wide range of gender expressions. Some people use the two terms interchangeably; others feel they have slightly different connotations. For the purposes of this article, I will be using the term “nonbinary” to refer to this group of people, as it came into use for this purpose more recently than “genderqueer,” and I have seen it used more often in recent discussions of gender. The word nonbinary refers to the fact that these people consider themselves to be living outside the gender binary, which is to say, the male/female dichotomy we usually think of when describing a person’s gender.

 

Fat Shaming vs. Fat Acceptance: Is It Okay to be Fat?

Fat: it’s big news.  In today’s world, everyone wants to talk about every body, be it big, little, or oddly shaped, and fat is right there at the top of the agenda.  There are the fat shamers (those whose purpose is to shame ‘fatties’ into becoming ‘thinnies’) and the fatosphere (people who write blogs for and in support of fat people).  Then there is everybody in between, and it seems that no one wants to be left out of the debate.  So with all this going on, and with the fat acceptance activists doing daily battle with the fat shamers, the real question remains: is it okay to be fat?

Fat Shaming

Fat shaming (heckling and harassing obese people) is becoming increasingly popular.  The idea is that shame will motivate overweight and obese people into doing something about their situation.  It is suggested, too, that the whole concept of fat shaming stems from the idea that people don’t lose weight because they are lazy, lack willpower, and have little or no self-discipline[1].  These ‘shamers’ come from all walks of life: from government campaigns designed to encourage people to lose weight, to the media outlets that aim to stigmatize fat people, right down to members of the general public who use things like Twitter’s #fatshamingweek hashtag in a way that makes fat shaming seem almost like a hobby, something to have fun with, rather than anything productive.  But does fat shaming actually work?  And perhaps more importantly, is it morally acceptable?

The Millennial Generation: Their Attitudes, Social Behavior, & Religious Independence

A lot of thought and effort has been expended on pinpointing what makes up the character of the millennial generation, and how they became that way. The one point that almost everyone agrees on is that they are very different from their parents and grandparents before them. Of course, some change in attitudes is inevitable with the passage of time, and millennials do still retain some aspects of their forebears’ perspective and values. Still, there is one area in particular where millennials break quite radically with their predecessors: as a whole, they are dramatically less religious than the rest of the country.

 

A recent Pew Research Center study[i] on millennial attitudes and social behavior examined how they compare to previous generations. Many of the findings are unsurprising: for instance, millennials are much more likely than any other group to have posted a selfie, or a picture of themselves (usually taken with a phone camera), on their social media, indicating how far they have integrated recent technology into their lives. Other findings are less immediately apparent. For instance, millennials tend to be more liberal than their parents, but they are far more likely to identify as independent of political organizations than previous generations did at their age, regardless of whether they lean right or left politically; essentially, they don’t identify as members of the parties they vote for.

Baby Boomers and the Elder Orphans That They Become

The Aging Population

The population of America is aging.  We’re getting older.  That may seem self-evident, but the problem is that it’s not just us as individuals.  Rather, the proportion of older people within our society is increasing and the ratio of young to old is shrinking.  In 2012, there were 43 million people aged 65 or over in the US, compared to just 35 million ten years earlier, in 2002[1].  It is estimated that by 2029, 20% of the US population will be 65 years old or over, and that by 2056 the population of over-65s will be bigger than that of under-18s[2].  This is due, in part, to the so-called baby boom generation – those born in the fertile post-war years between 1946 and 1964.  The oldest of this group turned 65 back in 2011, and the youngest will probably need health care right through to 2060.  Many chose to remain child-free, which in itself isn’t a problem, but as the population continues to age, the difficulties are beginning to show.

The Baby-Boomers and What They Become

The baby-boomers are now facing a new, and perhaps less sprightly, name: the elder orphans.  The term, coined recently, refers to older people who need care yet have no relatives at all, or none living nearby.  Dr. Maria Torroella Carney, the chief of geriatric and palliative medicine at North Shore Health System, released a paper last week discussing just that issue.  These elderly people, who are often divorced or widowed and have no children, have no support system and are effectively ‘orphaned’ during a particularly vulnerable time in their lives.  She uses case studies to demonstrate just how serious this can be and how devastating the potential consequences are[3].

Lead Poisoning and Criminal Behavior: Can there really be a link?

     The recent death of Baltimore man Freddie Gray, whose spinal cord was apparently snapped while in police custody, has not only sparked an investigation into police practices but has also reinvigorated the discussion around the potential effects of lead poisoning in children.  Although lead paint has long been known to be harmful, its effects in children’s homes are still being discovered and, surprisingly, are still being felt.  The question of the moment, though, is this: can lead poisoning in children ultimately lead to criminal behavior in adults?

Lead Paint and Its Effects

     Although the practice of putting lead into paint was banned in 1978, many homes, especially in poorer socio-economic areas, still have lead paint on their interior and exterior walls.  In time, this paint deteriorates and will chip or release dust that can be breathed in or, more likely, swallowed by children.  It wasn’t so long ago that 10 micrograms of lead per deciliter of blood was considered a safe level.  However, in 2012, following a 30-year study into the effects of lead poisoning, the Centers for Disease Control and Prevention (CDC) reduced that number to just five micrograms per deciliter.  Now, many argue that there is no safe level at all[1].  What’s even scarier is that in 2007, an estimated 25% of homes still contained deteriorating lead paint, and currently more than four per cent of children in the US have some level of lead poisoning.  These figures rise in big cities and poor urban areas.
