First gene therapy successful against human aging

American woman gets biologically younger after gene therapies.

In September 2015, Elizabeth Parrish, then the 44-year-old CEO of BioViva USA Inc., received two of her own company’s experimental gene therapies: one to protect against loss of muscle mass with age, another to battle the stem cell depletion responsible for diverse age-related diseases and infirmities.

The treatment was originally intended to demonstrate the safety of the latest generation of the therapies. But if early data is accurate, it is already the world’s first successful example of telomere lengthening via gene therapy in a human individual. Gene therapy has been used to lengthen telomeres before in cultured cells and in mice, but never in a human patient.

Telomeres are short segments of DNA which cap the ends of every chromosome, acting as ‘buffers’ against wear and tear. They shorten with every cell division, eventually getting too short to protect the chromosome, causing the cell to malfunction and the body to age.

In September 2015, telomere testing of Parrish’s white blood cells, performed by SpectraCell’s specialized clinical testing laboratory in Houston, Texas, immediately before the therapies were administered, revealed that her telomeres were unusually short for her age, leaving her vulnerable to age-associated diseases earlier in life.

In March 2016, the same tests, repeated by SpectraCell, revealed that her telomeres had lengthened from 6.71 kb to 7.33 kb, a gain corresponding to roughly 20 years of typical age-related shortening and implying that Parrish’s white blood cells (leukocytes) have become biologically younger. These findings were independently verified by the Brussels-based non-profit HEALES (Healthy Life Extension Company) and the Biogerontology Research Foundation, a UK-based charity committed to combating age-related diseases.
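As a rough sanity check on that “approximately 20 years” figure: the reported gain is 0.62 kb, and leukocyte telomeres are commonly estimated to shorten by a few tens of base pairs per year. Assuming a rate of roughly 30 bp per year (an assumption for illustration, not a number given in the article), the arithmetic works out like this:

```python
# Back-of-the-envelope check of the "~20 years" claim.
# The ~30 bp/year attrition rate is an assumed, commonly cited ballpark,
# not a figure reported by SpectraCell or BioViva.
before_kb, after_kb = 6.71, 7.33
attrition_bp_per_year = 30

gain_bp = (after_kb - before_kb) * 1000          # 620 bp gained
equivalent_years = gain_bp / attrition_bp_per_year
print(f"~{equivalent_years:.0f} years of typical attrition reversed")   # ~21 years
```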

Parrish’s reaction: “Current therapeutics offer only marginal benefits for people suffering from diseases of aging. Additionally, lifestyle modification has limited impact for treating these diseases. Advances in biotechnology are the best solution, and if these results are anywhere near accurate, we’ve made history.”

Source: First gene therapy successful against human aging | Neuroscientist News

 

How Big Data Creates False Confidence – Facts So Romantic – Nautilus

If I claimed that Americans have gotten more self-centered lately, you might just chalk me up as a curmudgeon, prone to good-ol’-days whining. But what if I said I could back that claim up by analyzing 150 billion words of text? A few decades ago, evidence on such a scale was a pipe dream. Today, though, 150 billion data points is practically passé. A feverish push for “big data” analysis has swept through biology, linguistics, finance, and every field in between.

Although no one can quite agree how to define it, the general idea is to find datasets so enormous that they can reveal patterns invisible to conventional inquiry. The data are often generated by millions of real-world user actions, such as tweets or credit-card purchases, and they can take thousands of computers to collect, store, and analyze. To many companies and researchers, though, the investment is worth it because the patterns can unlock information about anything from genetic disorders to tomorrow’s stock prices.

But there’s a problem: it’s tempting to think that with such an incredible volume of data behind them, studies relying on big data couldn’t be wrong. Yet the sheer size of the data can imbue the results with a false sense of certainty. Many of them are probably bogus, and the reasons why should give us pause about any research that blindly trusts big data.

In the case of language and culture, big data showed up in a big way in 2011, when Google released its Ngrams tool. Announced with fanfare in the journal Science, Google Ngrams allowed users to search for short phrases in Google’s database of scanned books (about 4 percent of all books ever published!) and see how the frequency of those phrases has shifted over time. The paper’s authors heralded the advent of “culturomics,” the study of culture based on reams of data. Since then, Google Ngrams has been, well, largely an endless source of entertainment, but also a goldmine for linguists, psychologists, and sociologists. They’ve scoured its millions of books to show that, for instance, yes, Americans are becoming more individualistic; that we’re “forgetting our past faster with each passing year”; and that moral ideals are disappearing from our cultural consciousness.
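For readers unfamiliar with what an Ngrams query actually returns, it is essentially the count of a phrase in each year’s scanned books, normalized by the size of that year’s corpus. A toy sketch of that normalization (all numbers are invented for illustration, not Google’s data):

```python
# Toy illustration of an Ngrams-style frequency curve:
# phrase occurrences divided by total words scanned for each year.
# All numbers are invented for illustration; they are not Google's data.
phrase_hits  = {1950: 120, 1980: 410, 2010: 980}            # hits for some phrase
corpus_words = {1950: 2.1e9, 1980: 3.8e9, 2010: 6.5e9}      # words scanned that year

for year in sorted(phrase_hits):
    frequency = phrase_hits[year] / corpus_words[year]
    print(year, f"{frequency:.2e}")
```

Whatever biases sit in the underlying sample of books are inherited by every such curve, no matter how many words it is built on, which is exactly the kind of false confidence the article warns about.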

Source: How Big Data Creates False Confidence – Facts So Romantic – Nautilus

 

The Science of Making Friends – WSJ

Beginning in early adulthood, our number of friends decreases steadily; there are ways to reverse the tide, writes Elizabeth Bernstein.

I’ve been going on a series of dates lately.

I exchanged numbers with the person sitting next to me at a Cabernet tasting at my favorite wine bar and went for a coffee with a neighbor I met walking my dog. I reached out to people from my past I haven’t seen in years, to see if they’re newly available.

I’m trying to make new friends.

A body of research shows that people with solid friendships live healthier, longer lives. Friendship decreases blood pressure and stress, reduces the risk of depression and increases longevity, in large part because someone is watching out for us.

A study published in February in the British Journal of Psychology looked at 15,000 respondents and found that people who had more social interactions with close friends reported being happier—unless they were highly intelligent. People with higher I.Q.s were less content when they spent more time with friends. Psychologists theorize that these folks keep themselves intellectually stimulated without a lot of social interaction, and often have a long-term goal they are pursuing.

Source: The Science of Making Friends – WSJ

 

Making 1 million requests with python-aiohttp

In this post I’d like to test the limits of Python’s aiohttp and check its performance in terms of requests per minute. Everyone knows that asynchronous code performs better when applied to network operations, but it’s still interesting to check this assumption and understand how exactly it is better and why. I’m going to check it by trying to make 1 million requests with the aiohttp client. How many requests per minute will aiohttp make? What kinds of exceptions and crashes can you expect when you try to make such a volume of requests with very primitive scripts? What are the main gotchas you need to think about when attempting this?
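The post builds up to exactly such a script. As a rough sketch of the shape it takes (not the author’s actual benchmark code; the URL, request count, and concurrency limit below are placeholders), a batch of GET requests can be fired through a shared aiohttp.ClientSession, with a semaphore capping how many are in flight at once:

```python
import asyncio

import aiohttp


async def fetch(session, url):
    # Perform one GET request and return the HTTP status code.
    async with session.get(url) as response:
        return response.status


async def run(url, total_requests, concurrency):
    # The semaphore caps how many requests are in flight at the same time.
    semaphore = asyncio.Semaphore(concurrency)

    async def bounded_fetch(session):
        async with semaphore:
            try:
                return await fetch(session, url)
            except aiohttp.ClientError:
                # Record failures instead of letting one error kill the whole run.
                return None

    async with aiohttp.ClientSession() as session:
        tasks = [asyncio.ensure_future(bounded_fetch(session))
                 for _ in range(total_requests)]
        return await asyncio.gather(*tasks)


if __name__ == "__main__":
    # Placeholder target and volumes; a real benchmark would point at a local
    # test server and use much larger numbers.
    loop = asyncio.get_event_loop()
    statuses = loop.run_until_complete(run("http://localhost:8080/", 10000, 1000))
    print(sum(1 for s in statuses if s == 200), "successful responses")
```

Without some cap on concurrency, a naive version of this script tends to hit operating-system limits such as the open file descriptor cap, which is the kind of gotcha the post asks about.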

Source: Making 1 million requests with python-aiohttp

 

Jobs Are Scarce for Ph.D.s – The Atlantic

Why do so many people continue to pursue doctorates?

If you’re a grad student, it’s best to read the latest report from the National Science Foundation with a large glass of single-malt whiskey in hand. Scratch that: The top-shelf whiskey is probably out of your budget. Well, Trader Joe’s “Two Buck Chuck” is good, too!

Liquid courage is a necessity when examining the data on Ph.D.s in the latest NSF report, “The Survey of Earned Doctorates,” which utilized figures from the University of Chicago’s National Opinion Research Center. The report finds that many newly minted Ph.D.s complete school after nearly 10 years of studies with significant debt and without the promise of a job. Yet few people seem to be paying attention to these findings; graduate programs are producing more Ph.D.s than ever before.

Getting a Ph.D. has always been a long haul. Despite calls for reform, the time spent in graduate programs hasn’t declined significantly in the past decade. In 2014, students spent eight years on average in graduate school programs to earn a Ph.D. in the social sciences, for example. It takes nine years to get one in the humanities, seven for science fields and engineering, and 12 for education, according to NSF. In other words, Ph.D.s are typically nearing or in their 30s by the time they begin their careers. Many of their friends have probably already banked a decade’s worth of retirement money in a 401(k) account; some may have already put a down payment on a small town house.

Source: Jobs Are Scarce for Ph.D.s – The Atlantic

 

Born for it

How the image of software developers came about

The stereotype of the socially awkward, white, male programmer has been around for a long time. Although “diversity in tech” is a much discussed topic, the numbers have not been getting any better. On the contrary, a lot of people inside and outside of the IT industry still take it for granted that this stereotype is the natural norm, and this perception is one of the things standing in the way of making the profession more inclusive and inviting. So where does this image come from? Did the demographics of the world’s programmer population really evolve naturally, because “boys just like computers more”? What shaped our perception of programmers? This text is about some possible explanations I found when reading about the history of computing.

Coders

Nathan Ensmenger is a professor at Indiana University who has specialised in the social and historical aspects of computing. In his book “The Computer Boys Take Over”, he explores the origins of our profession, and how programmers were first hired and trained:

Little has yet been written about the silent majority of computer specialists, the vast armies of largely anonymous engineers, analysts, and programmers who designed and constructed the complex systems that make possible our increasingly computerized society.

The title of the book is a reference to where it all started: with the “Computer Girls”. The women programming the ENIAC, one of the very first electronic, general-purpose digital computers, are widely considered to be the first programmers. At the time, the word “programmer”, or the concept of a program, did not even exist yet. The six women (Kay McNulty, Betty Jennings, Betty Snyder, Marlyn Wescoff, Fran Bilas and Ruth Lichterman) were hired to “setup” the ENIAC to perform “plans of computation”. More specifically, they were teaching the machine to calculate trajectories of weapons, to be used by soldiers in the field. The ENIAC women were recruited from the existing groups of women who up until then had been calculating these plans manually.

Source: Born for it

 

Scientists unveil the ‘most clever CRISPR gadget’ so far – STAT

A new CRISPR system can switch single letters of the genome efficiently, in a way that scientists say could reliably repair many disease-causing mutations.

For all the hoopla about CRISPR, the revolutionary genome-editing technology has a dirty little secret: it’s a very messy business. Scientists basically whack the famed double helix with a molecular machete, often triggering the cell’s DNA repair machinery to make all sorts of unwanted changes to the genome beyond what they intended.

On Wednesday, researchers unveiled in Nature a significant improvement — a new CRISPR system that can switch single letters of the genome cleanly and efficiently, in a way that they say could reliably repair many disease-causing mutations.

Source: Scientists unveil the ‘most clever CRISPR gadget’ so far – STAT

 

Machine Learning Meets Economics, Part 2

By using machine learning algorithms, we are increasingly able to use computers to perform intellectual tasks at a level approaching that of humans. Given that computers cost less than employees, many people are afraid that humans will therefore necessarily lose their jobs to computers. Contrary to this belief, in this article I show that even when a computer can perform a task more economically than a human, careful analysis suggests that humans and computers working together can sometimes yield even better business outcomes than simply replacing one with the other.

Specifically, I show how a classifier with a reject option can increase worker productivity for certain types of tasks, and I show how to construct and tune such a classifier from a simple scoring function by using two thresholds. I begin with a parable featuring the same characters as the one from Part 1 of this Machine Learning Meets Economics series. I recommend reading Part 1 first, as it sets up much of the terminology I use here.
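In outline, the construction described there is a scoring function plus two thresholds: scores above the upper threshold are handled automatically as positives, scores below the lower threshold as negatives, and the band in between is routed to a human. A minimal sketch of the idea (the scores and threshold values are illustrative, not taken from the article):

```python
def classify_with_reject(score, lower, upper):
    """Turn a model score into a decision with a reject option.

    Scores at or above `upper` are treated as confident positives, scores at
    or below `lower` as confident negatives, and anything in between is
    deferred to a human reviewer.
    """
    if score >= upper:
        return "auto-positive"
    if score <= lower:
        return "auto-negative"
    return "send to human"


# Illustrative thresholds; the article tunes them from the relative costs of
# false positives, false negatives, and human review time.
LOWER, UPPER = 0.2, 0.8

for score in [0.05, 0.35, 0.55, 0.83, 0.97]:
    print(score, "->", classify_with_reject(score, LOWER, UPPER))
```

The productivity gain comes from the middle band being much smaller than the full workload, so humans only spend time on the genuinely ambiguous cases.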

Source: Datacratic MLDB

 

Scientific Regress by William A. Wilson | Articles | First Things

The problem with science is that so much of it simply isn’t. Last summer, the Open Science Collaboration announced that it had tried to replicate one hundred published psychology experiments sampled from three of the most prestigious journals in the field. Scientific claims rest on the idea that experiments repeated under nearly identical conditions ought to yield approximately the same results, but until very recently, very few had bothered to check in a systematic way whether this was actually the case. The OSC was the biggest attempt yet to check a field’s results, and the most shocking. In many cases, they had used original experimental materials, and sometimes even performed the experiments under the guidance of the original researchers. Of the studies that had originally reported positive results, an astonishing 65 percent failed to show statistical significance on replication, and many of the remainder showed greatly reduced effect sizes.

Their findings made the news, and quickly became a club with which to bash the social sciences. But the problem isn’t just with psychology. There’s an unspoken rule in the pharmaceutical industry that half of all academic biomedical research will ultimately prove false, and in 2011 a group of researchers at Bayer decided to test it. Looking at sixty-seven recent drug discovery projects based on preclinical cancer biology research, they found that in more than 75 percent of cases the published data did not match up with their in-house attempts to replicate. These were not studies published in fly-by-night oncology journals, but blockbuster research featured in Science, Nature, Cell, and the like. The Bayer researchers were drowning in bad studies, and it was to this, in part, that they attributed the mysteriously declining yields of drug pipelines. Perhaps so many of these new drugs fail to have an effect because the basic research on which their development was based isn’t valid.

Source: Scientific Regress by William A. Wilson | Articles | First Things