Columns:
id: int64 (values 0 to 17.2k)
year: int64 (values 2k to 2.02k, i.e., roughly 2000 to 2020)
title: string (length 7 to 208)
url: string (length 20 to 263)
text: string (length 852 to 324k)
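The records below follow this schema, one scraped article per row. As a minimal sketch of how such a dump could be loaded and inspected with pandas (the file name articles.csv and the CSV format are assumptions for illustration, not part of the dump itself):

```python
# Minimal sketch: load a dump with the columns above and inspect it.
# "articles.csv" and the CSV format are assumptions for this example.
import pandas as pd

df = pd.read_csv("articles.csv", dtype={"id": "int64", "year": "int64"})

print(df.dtypes)                        # id/year as int64, the rest as strings (object)
print(df[["id", "year", "title"]].head())
print(df["text"].str.len().describe())  # text lengths, roughly 852 to 324k characters

# Pull the full text of one article by id.
first_article = df.loc[df["id"] == 0, "text"].iloc[0]
print(first_article[:300])
```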
id: 0
year: 2016
title: "Human-Animal Chimeras Are Gestating on U.S. Research Farms | MIT Technology Review"
url: "https://www.technologyreview.com/s/545106/human-animal-chimeras-are-gestating-on-us-research-farms"
text:
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Human-Animal Chimeras Are Gestating on U.S. Research Farms By Antonio Regalado archive page Braving a funding ban put in place by America’s top health agency, some U.S. research centers are moving ahead with attempts to grow human tissue inside pigs and sheep with the goal of creating hearts, livers, or other organs needed for transplants. The effort to incubate organs in farm animals is ethically charged because it involves adding human cells to animal embryos in ways that could blur the line between species. Last September, in a reversal of earlier policy, the National Institutes of Health announced it would not support studies involving such “human-animal chimeras” until it had reviewed the scientific and social implications more closely. The agency, in a statement, said it was worried about the chance that animals’ “cognitive state” could be altered if they ended up with human brain cells. The NIH action was triggered after it learned that scientists had begun such experiments with support from other funding sources, including from California’s state stem-cell agency. The human-animal mixtures are being created by injecting human stem cells into days-old animal embryos, then gestating these in female livestock. Based on interviews with three teams, two in California and one in Minnesota, MIT Technology Review estimates that about 20 pregnancies of pig-human or sheep-human chimeras have been established during the last 12 months in the U.S., though so far no scientific paper describing the work has been published, and none of the animals were brought to term. The extent of the research was disclosed in part during presentations made at the NIH’s Maryland campus in November at the agency’s request. One researcher, Juan Carlos Izpisua Belmonte of the Salk Institute, showed unpublished data on more than a dozen pig embryo containing human cells. Another, from the University of Minnesota, provided photographs of a 62-day-old pig fetus in which the addition of human cells appeared to have reversed a congenital eye defect. The experiments rely on a cutting-edge fusion of technologies, including recent breakthroughs in stem-cell biology and gene-editing techniques. By modifying genes, scientists can now easily change the DNA in pig or sheep embryos so that they are genetically incapable of forming a specific tissue. Then, by adding stem cells from a person, they hope the human cells will take over the job of forming the missing organ, which could then be harvested from the animal for use in a transplant operation. “We can make an animal without a heart. We have engineered pigs that lack skeletal muscles and blood vessels,” says Daniel Garry, a cardiologist who leads a chimera project at the University of Minnesota. While such pigs aren’t viable, they can develop properly if a few cells are added from a normal pig embryo. Garry says he’s already melded two pigs in this way and recently won a $1.4 million grant from the U.S. Army, which funds some biomedical research, to try to grow human hearts in swine. “The specter of an intelligent mouse stuck in a laboratory somewhere screaming ‘I want to get out’ would be very troubling to people.” Because chimeras could provide a new supply of organs for needy patients and also lead to basic discoveries, researchers including Garry say they intend to press forward despite the NIH position. 
In November, he was one of 11 authors who published a letter criticizing the agency for creating “a threat to progress” that “casts a shadow of negativity” on their work. The worry is that the animals might turn out to be a little too human for comfort, say, ending up with human reproductive cells, patches of human hair, or just higher intelligence. “We are not near the island of Dr. Moreau, but science moves fast,” NIH ethicist David Resnik said during the agency’s November meeting. “The specter of an intelligent mouse stuck in a laboratory somewhere screaming ‘I want to get out’ would be very troubling to people.” The chance of an animal gaining human consciousness is probably slim; their brains are just too different, and much smaller. Even so, as a precaution, researchers working with farm-animal chimeras haven’t yet permitted any to be born, but instead are collecting fetuses in order to gather preliminary information about how great the contribution of human cells is to the animals’ bodies. Hiromitsu Nakauchi, a stem-cell biologist at Stanford University, began trying to make human-sheep chimeras this year. He says that so far the contribution by human cells to the animals’ bodies appears to be relatively small. “If the extent of human cells is 0.5 percent, it’s very unlikely to get thinking pigs or standing sheep,” he says. “But if it’s large, like 40 percent, then we’d have to do something about that.” Other kinds of human-animal chimeras are already widely used in scientific research, including “humanized” mice endowed with a human immune system. Such animals are created by adding bits of liver and thymus from a human fetus (collected after an abortion) to a mouse after it is born. The new line of research goes further because it involves placing human cells into an animal embryo at the very earliest stage, when it is a sphere of just a dozen cells in a laboratory dish. This process, called “embryo complementation,” is significant because the human cells can multiply, specialize, and potentially contribute to any part of the animal’s body as it develops. In 2010, while working in Japan, Nakauchi used the embryo complementation method to show he could generate mice with a pancreas made entirely of rat cells. “If it works as it does in rodents,” he says, “we should be able to have a pig with a human organ.” Although Nakauchi was a star scientist, Japanese regulators were slow to approve his idea for chimeras—a “pig man” as critics put it—and by 2013 Nakauchi decided to move to the U.S., where no federal law restricts the creation of chimeras. Stanford was able to recruit him with the help of a $6 million grant from the California Institute of Regenerative Medicine, a state agency set up a decade ago to bypass political interference from Washington. While the NIH funding ban doesn’t affect Nakauchi, it has put researchers under pressure to explain the purpose of their work. “I want to show you some chimeras,” Nakauchi said when I visited his laboratory at Stanford last month. He opened the door to a small room containing incubators where the chimeric embryos are stored. Because an early embryo is almost invisible to the human eye, the room houses special microscopes equipped with micro-needles used to inject the human cells into them.
The human cells being added are called iPS cells, made from skin or blood chemically reprogrammed into more versatile stem cells using a Nobel Prize-winning formula developed by one of Nakauchi’s Japanese colleagues. Nakauchi says that as a matter of convenience, most of the iPS cells his team has been placing into animal embryos are made from his own blood, since recruiting volunteers involves too much paperwork. “We need a special consent if we’re injecting into animals,” he says sheepishly. “So I try to use my own.” The word chimera comes from the creature of Greek myth, part lion, part goat, and part snake. Nakauchi says most people at first imagine his chimeras are monsters, too. But he says attitudes change if he can explain his proposal. One reason is that if his iPS cells develop inside an animal, the resulting tissue will actually be his, a kind of perfectly matched replacement part. Desperately ill people on organ waiting lists might someday order a chimera and wait less than a year for their own custom organ to be ready. “I really don’t see much risk to society,” he says. Before that can happen, scientists will have to prove that human cells can really multiply and contribute effectively to the bodies of farm animals. That could be challenging since, unlike rats and mice, which are fairly close genetically, humans and pigs last shared an ancestor nearly 90 million years ago. To find out, researchers in 2014 decided to begin impregnating farm animals with human-animal embryos, says Pablo Ross, a veterinarian and developmental biologist at the University of California, Davis, where some of the animals are being housed. Ross says at Davis he has transferred about six sets of pig-human embryos into sows in collaboration with the Salk Institute and established another eight or 10 pregnancies of sheep-human embryos with Nakauchi. Another three dozen pig transfers have taken place outside the U.S., he says. These early efforts aren’t yet attempts to make organs, says Ross, but more “to determine the ideal conditions for generating human-animal chimeras.” The studies at Davis began only after a review by three different ethics committees, and even then, he says, the university decided to be cautious and limit the time the animals would be allowed to develop to just 28 days (a pig is born in 114 days). By then, the embryonic pig is only half an inch long, though that’s developed enough to check if human cells are contributing to its rudimentary organs. “We don’t want to grow them to stages we don’t need to, since that would be more controversial,” says Ross. “My view is that the contribution of human cells is going to be minimal, maybe 3 percent, maybe 5 percent. But what if they contributed to 100 percent of the brain? What if the embryo that develops is mostly human? It’s something that we don’t expect, but no one has done this experiment, so we can’t rule it out.”
"
id: 1
year: 2013
title: "Too Much Information | MIT Technology Review"
url: "https://www.technologyreview.com/s/522661/too-much-information"
text:
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Too Much Information By Amanda Schaffer archive page Pregnant women and their partners can already peer at an unborn child’s chromosomes: with amniocentesis, they can learn about the presence or, more likely, absence of large-scale genetic defects, often gaining peace of mind. But only a small percentage of parents-to-be take the opportunity, because the procedure is invasive and uncomfortable—a large needle is inserted into the amniotic sac—and causes miscarriage in roughly one in 400 cases. Researchers have long hoped to develop a noninvasive alternative. Ever since scientists discovered, in the 1990s, that pregnant women’s blood contains substantial amounts of fetal DNA , they’ve theorized that they could use this genetic material to test for fetal abnormalities like an extra copy of chromosome 21, which causes Down syndrome. That technology has now arrived (see “ Prenatal DNA Sequencing ,” May/June 2013). Several companies have introduced genetic tests that use blood drawn from the mother. These tests can be performed earlier in pregnancy than amniocentesis is usually done, which means that if the results suggest an abnormality, women and their partners have more time to grapple with whether to have an abortion or prepare for a child with special needs. If the results are reassuring, the cloud of anxiety dissipates sooner. Given that the risks of having blood drawn are minimal, the tests are likely to be widely used. While today fewer than 5 percent of pregnant women undergo amniocentesis, “I think we could see 50, 60, 70, 80 percent of American pregnancies getting genetic testing,” says Hank Greely, director of the Center for Law and the Biosciences at Stanford. Things Reviewed Noninvasive prenatal screening The catch, though, is that as the accuracy of these tests continues to improve, they will be able to detect a greater range of genetic variations, including some with murkier implications. For example, rather than indicating something with certainty, they could reveal elevated risks for certain diseases or disorders. These advances could collide with the politics of abortion and raise the ugly specter of eugenics. When, if ever, should parents terminate pregnancies on the basis of genetic results? Do we have the wisdom to direct our own evolution? And perhaps most important, are there limits to how much data parents should have—or want to have—about their children before birth? Corporate contenders The first noninvasive tests to reach the market have screened for the largest-scale genetic defects—namely, abnormal numbers of chromosomes. Sequenom Laboratories, Verinata Health (part of Illumina), Ariosa Diagnostics, and Natera all offer tests that look for trisomies—an extra copy of chromosomes 13, 18, or 21, which cause Patau syndrome, Edwards syndrome, and Down syndrome, respectively. Some also identify an aberrant number of sex chromosomes. This fall, Sequenom expanded its test to encompass additional trisomies as well as selected microdeletions (in which DNA is missing), including those known to cause DiGeorge syndrome, -Cri-du-chat syndrome, and Prader-Willi or Angelman syndrome. The various companies’ tests range in price from less than $1,000 to almost $3,000, though they are covered by some insurance plans. So far, these offerings have not replaced amniocentesis, which remains the gold standard for accuracy. 
But they can be performed as early as 10 weeks into pregnancy and can help identify women who may need the more invasive test. Companies will modify these tests to flag an increasing number of genetic conditions, including some that are quite rare. The trend is toward “detecting smaller and smaller mutations,” says Jonathan Sheena, chief technology officer of Natera, who predicts that noninvasive identification of inherited single-gene diseases like cystic fibrosis, Tay-Sachs, and neurofibromatosis will soon become commercial reality. In the laboratory, meanwhile, researchers have already used noninvasive methods to sequence a whole fetal genome. In 2012, geneticist Jay Shendure’s group at the University of Washington analyzed blood from the mother as well as a saliva sample from the father to reach this goal. Also in 2012, Stephen Quake’s group at Stanford used a maternal blood sample alone to derive the fetal exome, which consists of the coding parts of genes. “That’s pretty much the whole ball of wax,” Quake told me. (Shendure and Quake are advisors to Ariosa Diagnostics and Verinata, respectively.) These laboratory efforts were not cheap: Shendure says it cost him around $50,000 to do the full genome. But they represent a clear proof of principle. And as the costs of sequencing continue to plummet, far more parents-to-be will potentially have access to far more genetic data about their future children. Quake says he hopes the technology will be used to identify and manage conditions that are well defined and for which early intervention can make a difference; he points to metabolic disorders like phenylketonuria, in which children require a strict diet, and certain immune disorders that can respond to early treatment. If babies’ problems can be diagnosed prenatally, he says, “you’re not putting them in distress for the first few weeks” while everyone is “running around trying to figure out what is wrong.” Another example is a condition called dilated cardiomyopathy, in which the heart is enlarged and weakened. This disorder can go undiagnosed until its victims find themselves short of breath or have a heart attack as teenagers or young adults. By treating them from a young age with drugs, physicians can “dramatically change outcomes,” says Euan Ashley, a Stanford researcher who cofounded Personalis, a genetic screening company. Ethical conundrums But the moral quandaries are sure to intensify as well. If many more women receive information about genetic disorders like Down syndrome earlier in pregnancy, it’s likely that the number of abortions will rise. Inevitably, some people will object to the testing technology because of their opposition to abortion, says Greely. And some current parents of children with Down syndrome will worry that if fewer people are born with the disorder, medical research and public support will start to dry up. The unease deepens with less severe disorders like Klinefelter’s syndrome, which is caused by an extra X chromosome in males. Boys with this syndrome often have few noticeable symptoms early on and may not be diagnosed until later in life, when they may experience atypical sexual development, learning difficulties, and infertility. If genetic testing identified more cases prenatally, some of those pregnancies would almost surely be terminated. Even firm supporters of abortion rights may find that thought troubling. Similarly, consider achondroplasia, which is an inherited form of dwarfism.
If two parents with achondroplasia wanted a child who looked like them, “would it be wrong for them to terminate a normal-sized fetus?” Greely asks. “These are hard questions.” For now, testing for intelligence or height or other complex traits that might pique parents’ curiosity appears to be far off: researchers largely seem skeptical that they will be able to predict these traits from an individual’s genome in the foreseeable future. “We’re really bad at it right now,” says Shendure. “In 10 years we’ll probably still be pretty bad at it.” But the underlying issue will still complicate the abortion debate: to what extent should parents be able to choose the traits of their children—and should the calculus change when the traits in question, like sex or hair color or eye color, are not directly linked to disease? For the most part, we tend to trust parents to make the right decisions for their children, but that prerogative may not be absolute, especially when it comes to nonmedical factors. We can’t know how children’s lives will unfold or how important a whole range of traits might turn out to be to them. We surely don’t have the understanding to guide our own evolution, or even to understand the extent to which individuals’ genomes relate to their health or happiness. And given the disastrous history of eugenics, from forced sterilizations to the Holocaust, we should maintain a healthy fear of even small-scale efforts to select some nonmedical traits over others. This is not merely a theoretical matter: parents in India, China, and South Korea who learn their fetuses’ sex through ultrasound have disproportionately chosen abortion in the case of girls. (Arizona has already made it illegal to abort on the basis of sex or race, though introducing criminal penalties for doctors is not necessarily wise either.) Perhaps the biggest question is which information will be meaningful for parents to receive. Genetic interpretation can be a dicey game. It is well known, for instance, that mutations in the BRCA1 gene are strongly associated with breast cancer, but in a disturbingly large number of cases, patients are told they have variants of unknown significance. “It would be very unfortunate if we started delivering ‘variants of unknown significance’ results in the context of reproductive health,” Shendure says. Similarly, when it comes to complex problems like cognitive impairment, it’s not clear how useful it is to test for—or report on—variants that have been associated with disabilities. Research suggests, for instance, that people with specific duplications on chromosome 16 are at higher risk of mental impairment. Some are severely affected, but others are “absolutely, perfectly healthy, functioning normally,” according to Wendy Chung, director of clinical genetics at Columbia University. To date, there is no reliable data on what percentage of duplication carriers fall into each of these categories, meaning that prenatal testing for these variants could greatly increase parents’ anxiety while leaving them at a loss to assess the results quantitatively. Then there are girls with three copies of the X chromosome. They are also at higher risk for cognitive impairment and learning disabilities, but the risk remains small, and the vast majority of them will be normal. How should parents make sense of these possibilities?
Most of us find it hard to think about risk, and we are truly bad at predicting how future events will affect us emotionally. And on top of all that, who knows which disorders will be curable or treatable through gene therapy or some other method 20 or 30 years from now? In other words, we’re not ready for the onslaught of information the new tests seem poised to provide. Nevertheless, that information is coming, and parents will have to figure out what they want to know and how to interpret the choices they’re offered. It is critical, then, that the informed-consent process for testing be exceptionally good, says Greely. Ideally, parents should meet with a genetic counselor to discuss what exactly testing might reveal and what wrenching decisions might follow. If formal genetic counseling isn’t available, obstetricians should step in with extended, thorough conversations that take into account the parents’ values, desire for data, and tolerance for uncertainty. Genetic testing, as Greely puts it, should be made distinct from other forms of prenatal care; it should never be “just one more tube of blood” taken in the course of another whirlwind visit to the doctor. Amanda Schaffer is a freelance journalist who writes about science and medicine for Slate, the New York Times, and other publications. This story was part of our January/February 2014 issue."
id: 2
year: 2017
title: "Hacking the Biological Clock | MIT Technology Review"
url: "https://www.technologyreview.com/magazines/hacking-the-biological-clock"
text:
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Magazine View previous issues MIT News Magazine Hacking the Biological Clock Inside the quest to slow aging, extend fertility, and defeat cancer. Letter from the editor View previous issue View next issue Features Categorized in Biotechnology and health Google’s Long, Strange Life-Span Trip Why does a mole rat live 30 years but a mouse only three? With $1.5 billion in the bank, Google’s anti-aging spinout Calico is rich enough to find out. Categorized in Biotechnology and health Rejuvenating the Chance of Motherhood? An audacious startup thinks it can give 40-ish women a better shot at having children. Should desperate would-be parents believe it? Categorized in 17035 The Cancer Lottery Finding telltale mutations in tumors and targeting those cancers with precisely selected drugs is the newest front in the war on cancer. Now researchers just have to figure out why it doesn’t work for everyone. Categorized in Computing The Unacceptable Persistence of the Digital Divide Millions of Americans lack broadband access and computer skills. Can President Trump bring them into the digital economy? Categorized in Policy The Pentagon’s Innovation Experiment The U.S. Department of Defense founded a kind of startup in Silicon Valley to accelerate the development and acquisition of new technologies useful to the military. But will it survive President Trump? Categorized in Artificial intelligence Mining 24 Hours a Day with Robots Mining companies are rolling out autonomous trucks, drills, and trains, which will boost efficiency but also reduce the need for human employees. Also in this issue Ghana’s Last Mile Innovative African e-tailers are offering sought-after goods to the continent’s growing ­middle class. But logistical challenges must be worked out delivery by delivery. Categorized in 17036 Mr. Robot Killed the Hollywood Hacker The popular portrayal of computers as magic boxes capable of anything has done real societal harm. Now one TV show wants to save us. Categorized in 17036 Hotter Days Will Drive Global Inequality Rising temperatures due to climate change will strongly affect economic growth around the world, making some countries richer and some poorer. Categorized in Climate change and energy If Only AI Could Save Us from Ourselves Google has an ambitious plan to use artificial intelligence to weed out abusive comments and defang online mobs. The technology isn’t up to that challenge—but it will help the Internet’s best-behaving communities function better. Categorized in 17041 Meet the World’s First Completely Soft Robot Researchers use an ingenious design to make a soft robot that moves on its own. Categorized in Humans and technology Amazon’s Next Big Move: Take Over the Mall Unable to resist any opportunity to sell you something, the e-commerce leader is opening up brick-and-mortar bookstores. But its online prowess doesn’t yet translate into a very good retail experience. Categorized in 17036 Will the Climate Treaty Get the Money It Needs? If Donald Trump pulls back on the U.S. commitment, the entire plan could crumble. Categorized in 17037 37 Years Ago: How to Fix Democracy A computer scientist who saw congressional decision-making up close in 1980 found it insufficient to the task of solving big problems. 
"
id: 3
year: 2018
title: "Six things to do with your data before you die | MIT Technology Review"
url: "https://www.technologyreview.com/2018/10/23/239999/six-things-to-do-with-your-data-before-you-die"
text:
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Six things to do with your data before you die How to make sure your loved ones can get into all your accounts. Or, alternatively—how to cover your tracks. By Simson Garfinkel ’87, PhD ’05 archive page Illustration of skeleton arm lifting corner of doormat shaped like an iphone, revealing a key Benedikt Luft What would happen to your digital estate if you died, suddenly, before finishing this paragraph? Would your survivors be able to find what you left behind? There is nothing hypothetical about this for many people: the problem emerges, wholly formed, when tragedy strikes. What’s worse, more than half of Americans don’t have a will, let alone one that’s up to date, according to a 2016 Gallup Poll. As a result, most survivors lack a road map to the deceased’s assets (physical and digital) or even, in some cases, the legal authority to proceed. Fortunately, there are many things you can do now, without a lawyer, to make things easier for your survivors. #1 Build a back door Fifteen years ago, if you died and your next of kin got your laptop, that person was pretty much guaranteed access to your data. Then, in 2003, Apple introduced full disk encryption, designed to protect your data from a thief, but also keeping it out of the reach of your survivors. Cryptocurrencies pose a similar problem: if no one has access to your digital wallet, then any value there is lost—there’s no Bitcoin central control to complain to. Today there’s a debate as to whether tech companies should put back doors in their crypto technology so law enforcement can get access to data on devices they seize during an investigation. Short of that, it’s easy to back-door your encryption yourself: just write down your hard drive’s master password, put the paper in an envelope, and seal it. Do the same with your Bitcoin wallet. Make sure it’s well hidden but in a location that’s known to your loved ones. #2 Sign up for Inactive Account Manager If you have a Gmail account, use Inactive Account Manager to specify an e-mail address that will be automatically notified three months after your Google account goes inactive. Google defines “activity” broadly: if you check Gmail, log in to a Google website, or perform a search with a Chrome browser that’s logged into your account, Google will assume you’re not dead. But when your digital heartbeat stops, this approach ensures that someone you trust can access your Gmail account, Google Photos, and other data. #3 Download your medical records Your doctor is supposed to keep copies of your test results and other records, but it’s a good idea to keep your own. Ask for copies and scan them. You might also be able to get your records directly if your health-care provider participates in the US government’s Blue Button Connector, which lets you download PDF files for yourself and a special format for other health-care providers (should you wish to give it to them). My elderly father keeps a copy of his records on a USB stick that he carries with him at all times. It comes in handy when he sees a specialist who might not have access to his primary care provider’s computer. Yes, there’s a risk the stick could fall into the wrong hands, but he’s decided that the risk of medical professionals not having access to his records is greater. #4 Use a password manager It used to be straightforward to identify the deceased’s accounts by waiting for bank statements and tax bills to arrive by snail mail. 
These days, two-thirds of Americans do their banking online (according to a 2017 survey by the American Bankers Association), and many people no longer receive paper statements. This significantly increases the chance that your bank accounts or retirement accounts might be declared “abandoned” in the event that you die. So use a password manager like 1Password or LastPass. Now make sure that your spouse, or lawyer, or children, or parents, or somebody has some way to get to your accounts (so they can, for example, save any cherished photos or easily delete your accounts after you’re gone). One simple way for couples to access each other’s accounts is to share their passwords. This is getting harder as websites implement two-factor authentication, but it’s still possible by registering multiple second factors (like a FIDO Universal 2nd Factor device) and giving one to each partner.
#5 Ponder the complexities of social media. If you are an avid user of Facebook or Twitter, take some time to read their data-after-death policies. You might not like what you find. When Facebook is notified that one of its users has become medically incapacitated or died, the company allows authorized individuals to request that the user’s account be either “memorialized” or removed. Be aware: memorialized accounts can be managed by a legacy contact (who has to be specified in advance), but that person can’t log into the Facebook account, remove or change past posts, or read private messages. In one famous case, parents of a 15-year-old German girl who died after being hit by a subway train were unsuccessful in trying to force Facebook to open the girl’s account so that they, the parents, could determine if she had experienced cyber-bullying or depression, or if her death really was a tragic accident. Twitter’s policy is similar: after you die, a family member can contact the company and ask that your account be deleted, according to a help page on its website. Twitter will also, if requested, remove specific imagery or messages sent just before or after an individual’s death. But Twitter will not give family members access to a deceased user’s private messages. So if you’re storing something on Facebook that you’d like people to have access to after you’re gone, you should download that data regularly and store it where your loved ones will have access—for example, in Google Drive.
#6 Be careful what you wish for. I gave much of this advice at a cybersecurity training seminar a few months ago, and almost everybody in the room thought I was crazy. The people there—mostly men—said they’d never share their passwords with their spouses. And maybe they’ve got a point. Family members should be careful about taking extraordinary measures to crack open these encrypted digital crypts, warns Ibrahim Baggili, associate professor of computer science at the University of New Haven and an expert in digital forensics. “This person I knew died, and his wife managed to finally break into his e-mails and iPad and found all sorts of things about him that she did not want to know,” says Baggili. “She really loved him, and it changed her whole perspective on him.” Simson Garfinkel is a science writer living in Arlington, Virginia, and coauthor of The Computer Book: From the Abacus to Artificial Intelligence, 250 Milestones in the History of Computer Science, published this November by Sterling Milestones.
This story was part of our November/December 2018 issue."
id: 4
year: 2018
title: "Your genome, on demand | MIT Technology Review"
url: "https://www.technologyreview.com/2018/10/23/1960/your-genome-on-demand"
text:
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Your genome, on demand How your detailed genetic profile can predict your risk of diseases and improve your health. By Ali Torkamani and Eric Topol archive page Photo illustration of a doll's face filled with colorful beads each containing a letter of a nucleotide. A black, unmarked bead appears in a set of tweezers above the face. Nicolas Ortega In early 2018, it was estimated that over 12 million people had had their DNA analyzed by a direct-to-­consumer genetic test. A few months later, that number had grown to 17 million. Meanwhile, geneticists and data scientists have been improving our ability to convert genetic data into useful insights—forecasting which people are at triple the average risk for heart attack, or identifying women who are at high risk for breast cancer even if they don’t have a family history or a BRCA gene mutation. Parallel advances have dramatically changed the way we search for and make sense of volumes of data, while smartphones continue their unrelenting march toward becoming the de facto portal through which we access data and make informed decisions. Taken together, these things will transform the way we acquire and use personal genetic information. Instead of getting tests reactively, on a doctor’s orders, people will use the data proactively to help them make decisions about their own health. With a few exceptions, the genetic tests used today detect only uncommon forms of disease. The tests identify rare variants in a single gene that causes the disease. But most diseases aren’t caused by variants in a single gene. Often a hundred or more changes in genetic letters collectively indicate the risk of common diseases like heart attack, diabetes, or prostate cancer. Tests for these types of changes have recently become possible, and they produce what is known as your “polygenic” risk score. Polygenic risk scores are derived from the combination of these variants, inherited from your mother and father, and can point to a risk not manifest in either parent’s family history. We’ve learned from studies of many polygenic risk scores for different diseases that they provide insights we can’t get from traditional, known risk factors such as smoking or high cholesterol (in the case of heart attack). Your polygenic score doesn’t represent an unavoidable fate—many people who live into their 80s and 90s may harbor the risk for a disease without ever actually getting it. Still, these scores could change how we view certain diseases and help us understand our risk of contracting them. A polygenic risk score might tell you that you’re at high risk for breast cancer and spur you to get more intensive screening. Genetic tests for rare forms of disease caused by a single gene typically give a simple yes or no result. Polygenic risk scores, in contrast, are on a spectrum of probability from very low risk to very high risk. Since they’re derived from combinations of genome letter changes that are common in the general population, they’re relevant to everybody. The question is whether we’ll find a way to make proper use of the information we get from them. Can they inform us about changes to our lifestyle, or point to medications we should take or a screening test we should get, that might improve our chances of staying healthy? Statin drugs are a good case study for this. 
Statins are widely used, even though 95% of the people taking them who haven’t had heart disease or stroke get no benefit aside from a nice cholesterol lab test. We can use a polygenic risk score to reduce unnecessary statin use, which not only is expensive but also carries health risks such as diabetes. We know that if you are in the top 20% of polygenic risk for heart attack, you’re more than twice as likely to benefit from statins as people in the bottom 20%; these people can also benefit greatly from improving their lifestyle (stop smoking, exercise more, eat more vegetables). So knowing your polygenic risk might cause you to take statins but also make some lifestyle changes. (And a recent large-scale study in Finland showed that people with high heart-risk scores responded with lifestyle improvements at a much higher rate than those with low risk scores.) And it’s not just about heart disease. A polygenic risk score might tell you that you’re at high risk for breast cancer and spur you to get more intensive screening and avoid certain lifestyle risks. It might tell you that you’re at high risk for colon cancer, and therefore you should avoid eating red meat. It might tell you that you’re at high risk for type 2 diabetes, and therefore you should watch your weight. Yet despite growing evidence that polygenic risk scores are important, until recently there was no service allowing people to determine their own scores, even if they had invested in their own personal direct-to-consumer genetic profiling. We’re attempting to remedy that through the development of MyGeneRank, a free mobile app that estimates users’ polygenic risk for heart attack and stroke from their own genetic data. It also allows them to participate in a clinical trial to measure the influence of polygenic risk information on people’s behavior, as reported by them, and their health data, captured by mobile sensors linked to their smartphones. There are still some issues and controversies we need to deal with. Equal access is one major concern—especially given that the majority of genetic studies have been performed in populations of European ancestry. For now, it appears that the more powerful the predictions become, the less accurate they become with other populations. In addition, genetic risk information is likely to make some people feel anxious or fatalistic (or might give others a false sense of security). Previous studies suggest that genetic risk information has a minimal influence on these psychological states, but many of those studies were done when the variations in risk you could get via polygenic factors were marginal. As our ability to separate people into increasingly different classes of genetic risk gets better, these issues may become more prominent. Another challenge will be to convince people to forgo or delay medical interventions if they have a low risk of a certain condition. This will require them to agree that they’re better off accepting a very low risk of a catastrophic outcome rather than needlessly exposing themselves to a medical treatment that has its own risks. People tend to overestimate the likelihood of catastrophic events, so if polygenic scores are to achieve their full impact on health outcomes and health-care spending, we’ll need to find a way to effectively communicate those trade-offs. And finally, there are the privacy concerns.
We need to maintain our current protections against genetic discrimination so that people can benefit from their own genetic information without having to worry that insurance companies will get access to that information and use it to raise their rates or deny coverage. You can’t change your genetic risk. But you can use lifestyle and medical interventions to offset that risk. We can accelerate breast cancer screening for women with a high risk for the disease, and help people with borderline risk of heart disease to make decisions about whether to take statins or not. If we deliver and track the response to polygenic risk information, we can collect real-world evidence on how to optimize the use of that data to give safe and effective health advice. In the near future your smartphone might feature technologies that monitor your physiological, genetic, environmental, and behavioral characteristics. And this information could be linked to virtual medical coaches and AI systems that can synthesize all that information and deliver you insights about your own health, on demand. Ali Torkamani is director of genomic informatics at the Scripps Research Translational Institute. Eric Topol is a cardiologist and the author of books including the upcoming Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again. This story was part of our November/December 2018 issue."
id: 5
year: 2018
title: "Genes I wish they would find | MIT Technology Review"
url: "https://www.technologyreview.com/2018/10/23/1936/genes-i-wish-they-would-find"
text:
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Genes I wish they would find Enough with the useless genes. Here are some that would actually come in handy. By Sarah Cooper archive page 1) Knowing when I will need an umbrella; knowing when I will completely regret bringing an umbrella 2) Ability to talk my way out of a speeding ticket 3) Tolerance for small talk at networking events 4) Ability to mentally drown out the sound of my upstairs neighbor 5) Never late 6) Ability to talk my way up to first class 7) Perfect poker face 8) Jeans always fit perfectly 9) Eyes never closed in pictures 10) Ability to only eat one potato chip hide by Sarah Cooper Share linkedinlink opens in a new window twitterlink opens in a new window facebooklink opens in a new window emaillink opens in a new window This story was part of our November/December 2018 issue. Popular This new data poisoning tool lets artists fight back against generative AI Melissa Heikkilä Everything you need to know about artificial wombs Cassandra Willyard Deepfakes of Chinese influencers are livestreaming 24/7 Zeyi Yang How to fix the internet Katie Notopoulos Deep Dive Uncategorized The Download: how to fight pandemics, and a top scientist turned-advisor Plus: Humane's Ai Pin has been unveiled By Rhiannon Williams archive page The race to destroy PFAS, the forever chemicals Scientists are showing these damaging compounds can be beat. By John Wiegand archive page How scientists are being squeezed to take sides in the conflict between Israel and Palestine Tensions over the war are flaring on social media—with real-life ramifications. By Antonio Regalado archive page These new tools could make AI vision systems less biased Two new papers from Sony and Meta describe novel methods to make bias detection fairer. By Melissa Heikkilä archive page Stay connected Illustration by Rose Wong Get the latest updates from MIT Technology Review Discover special offers, top stories, upcoming events, and more. Enter your email Thank you for submitting your email! It looks like something went wrong. We’re having trouble saving your preferences. Try refreshing this page and updating them one more time. If you continue to get this message, reach out to us at [email protected] with a list of newsletters you’d like to receive. The latest iteration of a legacy Advertise with MIT Technology Review © 2023 MIT Technology Review About About us Careers Custom content Advertise with us International Editions Republishing MIT News Help Help & FAQ My subscription Editorial guidelines Privacy policy Terms of Service Write for us Contact us twitterlink opens in a new window facebooklink opens in a new window instagramlink opens in a new window rsslink opens in a new window linkedinlink opens in a new window "
id: 6
year: 2018
title: "AI can’t replace doctors. But it can make them better. | MIT Technology Review"
url: "https://www.technologyreview.com/2018/10/23/139414/ai-cant-replace-doctors-but-it-can-make-them-better"
text:
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts AI can’t replace doctors. But it can make them better. A machine can collate environmental data, genetic data, and patient history way better than I can. By Rahul Parikh archive page Child's drawing of a doctor's office showing the child on the exam table and the doctor at the computer. Drawing by Ag, Age 7, copyright Thomas G. Murphy MD 2011 Several years ago Vinod Khosla, the Silicon Valley investor, wrote a provocative article titled “Do We Need Doctors or Algorithms?” Khosla argued that doctors were no match for artificial intelligence. Doctors banter with patients, gather a few symptoms, hunt around the body for clues, and send the patient off with a prescription. This sometimes (accidentally, maybe) leads to the correct treatment, but doctors are acting on only a fraction of the available information. An algorithm, he wrote, could do better. I’m a pediatric and adolescent physician in the San Francisco Bay Area, where entrepreneurs like Khosla have been knocking on the doors of doctors for years with their pilot technologies and software and hardware. I can say with some authority that Khosla’s is the voice of a savvy outsider who knows what he knows—which isn’t health care. Yes, AI could help us diagnose and treat disease. It can collate and serve up broad swaths of data in a clear and concise way, cutting down on the imprecise judgments that doctors make because of the pressures and complexity of our practices. There’s no doubt that for certain doctors, whose work is highly focused on diagnosis (radiologists or pathologists, for example), that breakthrough may prove an existential threat. A decade ago, for example, researchers showed that AI was as good as radiologists at detecting breast cancer. But for physicians like me in primary care, managing 1,500 to 2,000 patients, AI presents an opportunity. I went to medical school to connect with people and make a difference. Today I often feel like an overpaid bookkeeper instead, taking in information and spitting it back to patients, prescribing drugs and adjusting doses, ordering tests. But AI in the exam room opens up the chance to recapture the art of medicine. It could let me get to know my patients better, learn how a disease uniquely affects them, and give me time to coach them toward a better outcome. Consider what AI could do for asthma, the most common chronic medical disease in childhood. Six million American kids suffer from it. In 2013, they collectively missed 14 million days of school. The cost of medications, visits to the doctor and emergency room, and hospitalizations nears $60 billion a year. I diagnose asthma via a rule of thumb that’s been handed down over time: if you’ve had three or more wheezing episodes and the medicines for asthma help, you have the disease. Once it’s diagnosed, I ask the parents to remember—as best they can—how often they administer medicines to their child. I ask: What seems to trigger episodes? Is the child exposed to anyone who smokes at home? I can also review their records to count how many visits to the emergency room they’ve had, or the number of times they’ve refilled their prescriptions. But even with the most accurate recall by parents and patients, and the most accurate electronic records, it’s still just retrospective knowledge. There’s no proactive, predictive strategy. It’s not that we don’t have the data; it’s just that it’s messy. 
It’s not that we don’t have the data; it’s just that it’s messy. Reams of data clog the physician’s in-box. It comes in many forms and from disparate directions: objective information such as lab results and vital signs, subjective concerns that come in the form of phone messages or e-mails from patients. It’s all fragmented, and we spend a great deal of our time as physicians trying to make sense of it. Technology companies and fledgling startups want to open the data spigot even further by letting their direct-to-consumer devices—phone, watch, blood-pressure cuff, blood-sugar meter—send continuous streams of numbers directly to us. We struggle to keep up with it, and the rates of burnout among doctors continue to rise. How can AI fix this? Let’s start with diagnosis. While the clinical manifestations of asthma are easy to spot, the disease is much more complex at a molecular and cellular level. The genes, proteins, enzymes, and other drivers of asthma are highly diverse, even if their environmental triggers overlap. A number of experts now think of asthma in the same way they think of cancer—an umbrella term for a disease that varies according to the tumor’s location and cellular characteristics. Ian Adcock of the National Heart & Lung Institute at Imperial College, London, studies the link between asthma and the environment. He and his team have been collecting biological samples from asthma patients’ blood, urine, and lung tissue and organizing the genetic and molecular markers he finds into subtypes of asthma. The hypothesis is that with that kind of knowledge, patients can be given the drug that works best for them. AI might also help to manage asthma flares. For many patients, asthma gets worse as air pollution levels rise, as happened this past summer when brush fires swept through Northern California. AI could let us take environmental information and respond proactively. In 2015, researchers published a study showing they could predict the number of asthma-related emergency room visits to a Dallas–Fort Worth hospital. They pulled data from patient records, along with air pollution data from EPA sensors, Google searches, and tweets that used terms like “wheezing” or “asthma.” The Google and Twitter data were tied to the user’s location data. If I had this kind of data I could say, “Alexa, tell me which asthma patients I need to worry about today.” I could give a heads-up to the affected families. And if I also had some genetic data like Adcock’s, I could diagnose asthma before the patient suffered three bouts of wheezing, by ordering blood tests and comparing the results against those molecular markers. This kind of time-saving intelligence frees me to spend more time with my patients. One study showed that asthmatic children only took or received their inhaled medications about half of the time. AI might allow me more time to personally interact with those kids, and get better results. Lots of questions lie ahead. Are patients willing to share more of their personal data with us? If the AI shows your care is better one way, but you or your doctor feel differently, will an insurance company accept it? What if the algorithm misses something or is applied incorrectly? Who is liable, the doctor or the machine’s maker? Not long ago, in the Journal of the American Medical Association, I saw a colorful picture drawn by a child in crayon.
It portrayed her pediatrician, eyes glued to the computer, while she sat on the exam table, looking wide-eyed. I hope that AI will soon allow me to turn my attention back to that little girl. Rahul Parikh is a pediatrician in the San Francisco Bay area. This story was part of our November/December 2018 issue. "
7
2,018
"The skeptic: What precision medicine revolution? | MIT Technology Review"
"https://www.technologyreview.com/2018/10/23/139408/the-skeptic-what-precision-medicine-revolution"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts The skeptic: What precision medicine revolution? The benefits of genomic drugs are exaggerated, hurting patients and the practice of medicine, says one high-profile oncologist. By Stephen S. Hall archive page John Clark Vinay Prasad is relatively young (35) and still climbing the academic ladder (he’s an associate professor of medicine at Oregon Health & Sciences University in Portland), but he has already established an outsize reputation as a “professional scold” for his sharp critiques of contemporary biomedical research, including personalized medicine. In commentaries in high-profile medical and scientific journals, and in a Twitter account with some 25,000 followers, Prasad has questioned the evidence (or lack thereof) to support the use of precision oncology, the practice of selecting drugs for patients on the basis of specific mutations in their tumors. He has also criticized the inflated cost of cancer drugs and the financial conflicts of interests bedeviling contemporary research. Prasad brings several unique perspectives to the role of medical scold. Born in Euclid, Ohio, outside Cleveland, to an immigrant couple from India, he developed an interest in philosophy in college before attending medical school at the University of Chicago. As a practicing oncologist, the prolific Prasad has generated a boatload of peer-reviewed papers, gathering evidence to suggest, among other things, that genomic-based evidence hasn’t made much of an impact on cancer patients. As a sometimes prickly online persona, he has been faulted for unleashing expletive-laden putdowns but has also attracted a robust audience for what he calls “tweetorials,” which dissect the design of high-profile studies and the data they generate. In the following conversation with veteran medical writer Stephen S. Hall, he takes aim at “precision oncology,” the gaps in direct-to-consumer genetic testing, and what it really costs to bring a new drug to market. Proponents have been promising a revolution in personalized medicine for decades. What’s the reality? I would say, and I think many people will agree, that the promises that were made around the time of the Human Genome Project have largely not materialized, and that the impact of personalized medicine has probably been exaggerated. What’s the danger of exaggerating the promises? I think we have a schizophrenia in science and medicine. On the one hand, people who are good scientists understand that science is difficult. You should not be, nor will you be, having breakthroughs all the time. Breakthroughs are rare. Science is hard. It takes years of slogging to understand very fundamental pathways. On the other hand, we often are tempted to—and I see experts continue to—make grandiose promises, and have a lofty, unrealistic vision for what might be achieved in the next few years. That harms the public understanding of science, because the public comes to believe that unless you guys and gals are producing breakthroughs all the time, we shouldn’t be funding this. That’s wrong, because science needs more funding—needs a lot more funding than what we’re currently investing. Does it hurt the patient? I would say inflated rhetoric about the value of medical practices, technologies, or science harms patients because it distorts their understanding of what a therapy or intervention might do. And by distorting the understanding, it robs them of autonomy. I’ll give you just one example. 
Sometimes cancer patients are on medications that add real side effects to their life, but they believe that there’s going to be some survival benefit by taking this medicine. Every person is making kind of a daily decision: Do I stick with this medicine or not? Are the side effects worth it to me or not? And if that decision is made in a very impartial way, with a good understanding of what the drug does, that’s the right way. But if that decision is made under the cloud of hype, when it’s surrounded and marinated in hype and misinformation, then I think what we’re really doing is that we’re preventing the person from making the decision compatible with their wishes. We’re kind of taking away that choice. And I do fear that that happens quite often. You recently published a study indicating that most cancer patients don’t benefit from personalized genomic medicine, even though it’s been in practice since at least 2006. Why do you think that’s the case? Some people have said that study is pessimistic. It’s neither pessimistic nor optimistic; it is simply the most realistic estimate of how many people have benefited from genome-driven therapies. There clearly are some situations in cancer where drugging a single cancer-causing gene is important, and that should not be taken away. Those clearly do exist. The problem is that they simply don’t exist for the majority of patients who will be diagnosed with metastatic cancer. The purpose of our paper was to document what that number is, and what has been the change over time. I’ve heard the rhetoric that we’re reaching exponential growth, or that [precision oncology] is taking off, or there’s an inflection point. We simply don’t see that evidence if you look objectively at the data. Does that mean you’re reluctant to use them in your own practice? Of course I use genome therapies. I love [them]. Where they work, they work well. In fact, I would increase the funding to research them. But at the same time, I think we should be realistic about their prospects. We’re also doing that same kind of analysis right now for immunotherapy drugs and cytotoxic drugs and different kinds of drugs. Can we more accurately compare what has been the impact of these different types of therapies? In a recent article, you suggested that if adopted prematurely, the use of precision medicine might actually increase the risk of inappropriate medical care. How so? Every day there are new potential treatments or therapies or strategies to treat any disease, and they all have some degree of bio-plausibility. When it comes to a new cancer drug, bio-plausibility is just not enough. You should also test it and prove that it does what you think it does. Precision medicine should be held to the same standard. One of the differences is that precision medicine is very, very seductive. Some of its bio-plausibility is just such a compelling story that I think we do see this temptation by proponents that it shouldn’t be assessed in the same way. It’s so plausible, it should just be adopted—that kind of attitude. That kind of attitude might paradoxically lead us to adopt potentially more things that ultimately turn out not to do what you think they should do. Do you think direct-to-consumer marketing by companies like 23andMe has made it seem as though personalized medicine has arrived already?
Yes, I think the constant rhetoric that this is wonderful has shifted the public perception. In terms of the direct-to-consumer advertising, we actually have a paper on the BRCA breast cancer gene test that appeared in [the Journal of the American Medical Association] about a month or two ago. It points out that there are some limitations to that direct-to-consumer BRCA testing. The test is actually only for three mutations that are very common in the Ashkenazi Jewish population, but not perhaps the most common BRCA mutations among all people with deleterious mutations. And thus there are some unintended consequences. A woman with a family history who may be worried will send off that test, get a negative result, and feel reassured. But that person may have a deleterious BRCA mutation. It may actually be counterproductive. If genomic testing and these other aspects of personalized medicine are not currently predictive of outcomes for individual patients, are the drug companies and medical institutions taking advantage of consumers by pushing these methods? It’s a big category, and there are some things that are very well validated. But I think there are some things that are not. And the consumer doesn’t always know which ones are which, and that’s the challenge. Even some of the people in the field apparently seem to forget which ones are which, and that’s what I try to remind them of. When you remind them, it sounds like you get pretty strong pushback. I appreciate pushback when it’s about the technical merits of any of these arguments. Where I think pushback is counterproductive is when pushback becomes personal or when pushback is about the intention. There are a number of people who have voiced concern that one or more precision therapies don’t have the data. And sometimes I feel as if the argument devolves into the people who want that therapy saying, “Well, we want what’s best for patients. And you people who are saying that we don’t have data, you apparently don’t want what’s best for patients.” I think we have to recognize we all want what’s best for patients. This is an argument about the evidence. And I get personally frustrated when I see people try to pervert the argument in that way. You’ve also criticized the high cost of drugs, and you recently argued that industry estimates of the cost of bringing a new drug to market are wildly exaggerated. What does it really cost? I think that the cleanest estimate that I’ve seen—and I’m a little bit personally biased—is the estimate that Sham Mailankody and I put out in JAMA Internal Medicine, where we estimate that it costs something like $800 million in R&D to bring a cancer drug to market. The industry estimate is $2.6 billion. There’s a big difference there. But at the end of the day, this is one of those few things in life where you don’t have to settle for estimates. Since the industry repeatedly uses the cost of R&D as a justification for the high price—and unsustainable price—of drugs, I think it’s probably fair game for governments to ask them to show the data. Let’s just put all the data on the table and let’s see what it really costs. One of the other things you’ve suggested is that the expert panels that advise the FDA have financial conflicts of interest. Is that compromising the quality of medicines that consumers are getting? I just want to clarify my view here, which is that I wholeheartedly support collaboration between academic investigators and for-profit companies.
The additional complexity and challenge is when you have payments made to physicians personally. I think those payments—and they’ve been shown to—do affect our perception of products. If you’re receiving a lot of money from a manufacturer, you may not view their product as impartially as you would if you were not receiving that money. That’s the concern. I think we should try to curb the financial conflicts of for-profit companies in the healthcare space. There are some legitimate questions here about the role of financial conflicts in this space. Does it distort the impartiality around adjudicating medical practices? I fear it does. Given the implications of the kind of critiques that you have been publishing pretty prolifically, why aren’t more people saying the same thing? I ask myself that all the time. These questions feel very obvious to me. There are a lot of people who do care. A lot of them are general internal-medicine folks. I think we see it a little less in the specialties. And I think we see it much more in the younger crop of physicians than the older crop, in the sense that people who have done this, practiced for many years in this environment and who have found their niche in the environment, they’re comfortable where they are, and they don’t really feel the urge to comment about these more problematic areas. But people who are younger, and approach this field with fresh eyes, feel as if these things are problematic. You don’t always sound like a scold. I’m very optimistic about science, that we will improve outcomes. I just think that we would benefit from a lot more empiricism and impartiality in the process. That’s what I feel is missing—empiricism, impartiality, and more modest rhetoric. I think those three things would go like 90% of the way. Is it true, as reported by The Cancer Letter, that you’ve closed your Twitter account? No, it’s not true at all! I’m on Twitter, @VPplenarysesh. I believe that there are a number of inaccuracies in the Cancer Letter stories about me. I’ll save that for another day. This story was part of our November/December 2018 issue.
"
8
2,018
"Look how far precision medicine has come | MIT Technology Review"
"https://www.technologyreview.com/2018/10/23/139378/look-how-far-precision-medicine-has-come"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Look how far precision medicine has come Skeptics say drugs based on genetic insights have underdelivered. But look carefully and they’re everywhere. By Antonio Regalado archive page Personalized Medicine Coalition Sometime this fall, the number of people who have spit in a tube and sent their DNA to the largest consumer DNA testing companies, like Ancestry and 23andMe, is likely to top 20 million. The list by now is certain to include some of your classmates and neighbors. If you are just tuning in, this figure will seem huge. And you might wonder: how did we get here? The answer is little by little. The number of people getting DNA reports has been doubling, roughly, every year since 2010. The figures are now growing by a million each month, and the DNA repositories are so big that they’re enabling surprising new applications. Consumers are receiving scientific predictions about whether they’ll go bald or get cancer. Investigators this year started using consumer DNA data to capture criminals. Vast gene hunts are under way into the causes of insomnia and intelligence. And 23andMe made a $300 million deal this summer with drug company GlaxoSmithKline to develop personalized drugs, starting with treatments for Parkinson’s disease. The notion is that targeted medicines could help the small subset of Parkinson’s patients with a particular gene error, which 23andMe can easily find in its database. Ever since the Human Genome Project—the 13-year, $3 billion effort to decipher the human genetic code—researchers and doctors have been predicting the arrival of “precision medicine.” It’s a term with no agreed-upon definition, although it suggests most strongly just the kinds of medicines that Glaxo and 23andMe are pursuing: more targeted and more effective because they take into account a person’s particular genetic makeup. President Bill Clinton, at the unveiling of the genome’s first draft back in June 2000, said the data would “revolutionize the diagnosis, prevention, and treatment of most, if not all, human diseases.” Seeking better drugs The proportion of patients who actually benefit fron a best-selling drug in each category. Almost two decades after those big promises, it is in vogue to question why precision medicine has not delivered more. A report in the New York Times this summer, noting that deaths from cancer still outnumber cures by a wide margin, asked: “Are We Being Misled About Precision Medicine?” One reason for this seemingly slow progress is that not all precision medicine involves drugs. As gene hunts gain in scope—the latest involve comparisons of more than a million people’s DNA and health records—an inconvenient fact about many common diseases has emerged: they don’t, by and large, have singular causes. Instead, many hundreds of genes play small roles, and there is no obvious point at which to intervene with a pill. So instead of drugs, we are seeing a new predictive science in which genetic risk profiles may say which people should lower their blood pressure, which should steel themselves for Alzheimer’s, and which cancer patients aren’t going to benefit from chemotherapy and can skip the ordeal. To be sure, these sorts of prognostics aren’t widely accepted, and it’s hard to get people to change their behavior. Yet for many people, these predictions may begin to offer a concrete route to precision health and increased knowledge of their own biology. 
(Chart: Genetic information explodes. Left: cost of sequencing a genome. Right: number of people who have bought consumer DNA tests.) Look beyond cancer, and some definitive cures have arrived. As with those growing millions sending in their DNA, it’s easy to miss the change before it’s everywhere. Here are just two medications of note: a drug that mops up hepatitis C in 90% of those who take it and an experimental gene therapy that is curing a rare, fatal, and previously untreatable childhood disease, spinal muscular atrophy. Though these treatments come from different corners of biology, it’s what they have in common that’s important: each benefits from detailed understanding of genetic information and tools to control it. To our thinking, these drugs display real precision. The hep C pill, called Sovaldi, consists of a chemical that is irresistible to the replicating virus, but when the drug comes in contact with the virus’s genome, replication quickly grinds to a halt. The treatment for spinal muscular atrophy, meanwhile, is a genetic replacement part. With gene therapy, doctors can add fresh DNA instructions to the child’s nerve cells. The dozen or so kids who’ve gotten the therapy at a young age don’t develop the disease. All this traces back to even before the Human Genome Project. Think instead of the foundational act of the biotechnology industry, 40 years ago. On September 6, 1978, Genentech announced “the successful laboratory production of human insulin.” Before then, diabetics had injected insulin from pigs. It took around two tons of pig parts to extract eight ounces (227 grams) of pure insulin. But Genentech had found a way to splice the human version of the insulin-producing gene into E. coli bacteria, which then manufactured the hormone. Genentech still keeps the 40-year-old press release online. To the pharmaceutical houses of the 20th century, with their roots in commercial dye making and synthetic chemistry, these new biotech drugs looked at first like a sideshow. They were hard to make and inconvenient to take (by injection, mostly). The pharma giants could easily believe their way of doing things would always dominate. Until well into the 1990s, a single drug company, Merck, was more valuable than all biotech companies combined. It probably seemed as if biotech would never arrive—until it did. Of the 10 best-selling drugs in the US during 2017, seven (including the top seller, the arthritis drug Humira) are biotech drugs based on antibodies. Antibodies embody biological precision too. These tiny blood proteins, normally part of our immune response, fit—like a key in a lock—onto other molecules, like those dotting the surface of a cancer cell. And just like insulin, they’re often constructed using DNA code retrieved from our bodies. (Chart: Drugs based on DNA. Left: percentage of drugs in development that may be tailored to a person’s genetic profile. Right: number of the 10 best-selling drugs in the US that are biological molecules.) Insulin and antibodies are meant to work the same way on everyone. But no two people’s genomes are exactly the same—about 1% of the DNA letters differ between any two of us. Those differences can explain why one person is ill and another isn’t, or why one person’s version of diabetes is different from another’s. Drugs that take into account these differences in genetic information are called “targeted” drugs. The cancer drug Herceptin, an antibody that reached the market in 1998, was among the first.
It was effective, but mostly in people whose newly diagnosed breast cancer was growing because of specific genetic damage—about 20% of cases. It depended on the genome of the tumor itself. Herceptin came to market with the admonition that, to get it, you should first have a test to see if you would benefit. According to the US National Cancer Institute, there are now more than 80 such targeted medicines for cancer on the market. Critics argue rightly enough that such medications still do too little for too few people at too great a cost (often $10,000 a month). In fact, on the whole, those who survive cancer still owe little to targeted drugs. “The single biggest determinant of who survives cancer is who has insurance,” Greg Simon, who leads the Biden Cancer Initiative, has said—not whether there’s a drug to match their mutation. Some think we are spending too much time searching under the lamplight shed by genetic tools. “Perhaps we had been seduced by the technology of gene sequencing—by the sheer wizardry of being able to look at a cancer’s genetic core,” a Pulitzer Prize-winning cancer doctor, Siddhartha Mukherjee, wrote this summer. (Timeline: Big questions need big data. Studies are using DNA from more people than ever. 2002: Japanese scientists use a new approach—the genome-wide association study—to hunt for the causes of heart attack. 2005: A gene hunt reveals critical mutations that increase the risk of macular degeneration, a common cause of blindness. 2010: Consumer test company 23andMe contributes user data to a search for Parkinson’s genes. 2013: The FDA cracks down on consumer test companies offering genetic health predictions from DNA, calling the results unreliable. 2015: Why are some people fatter than others? Clues from a genetic study are quickly offered to consumers in the form of “DNA diet” tests. 2017: A massive trove of gene data from the UK Biobank permits simultaneous analysis of 2,000 human traits and diseases. 2018: Researchers identify genes linked to educational success. They warn against using the results as a “DNA IQ test.” A search for the genes behind insomnia is the largest genetic study ever. It relies heavily on the consumer DNA database of 23andMe.) He’s right that the impulse toward precision medicine, cost be damned, springs from new technology. It’s what it can do. And so you can be sure even more personalization is on the horizon. Genentech (which created Herceptin) now imagines what it calls “cancer vaccines,” tailored not just to broad subtypes of people but to the unique signature of a person’s tumor. The new approach involves collecting information about the peculiarities of a person’s cancer through high-speed genome sequencing; using software to analyze and predict what a custom biological drug would look like (they will be reverse images of antibodies, known as antigens, that stimulate the immune system); and then quickly manufacturing it. No two of these vaccines would be alike. Also, note this: if and when the US Food and Drug Administration approves these vaccines, it won’t be greenlighting a particular compound. Instead, it will approve a computerized process for turning DNA information into drugs. Medicine as programmatic and predictable as a computer? The idea has begun to exert a potent appeal in Silicon Valley, where some of tech’s biggest names now see biology as “just a code” they can crack. Marc Andreessen (best known for inventing the web browser) is one of them.
The venture fund he cofounded, Andreessen Horowitz or a16z, has set aside a total of $650 million since 2015 to put into biotech investments. As the firm’s blog states with awe, “You don’t just read the code of biology but you can also write, or design, with it.” Welcome to biotech, a16z. Yet they’re on to something. Even 40 years after Genentech’s insulin press release, genetic engineering is a marvel worth rediscovering. The ability to see, understand, and manipulate human genes and the proteins they make is the great advance that is still unfolding in all its immense complexity four decades later. Biology isn’t anywhere as neat as a computer program, but little by little, we’re learning how to control it. To enzymes and antibodies we’ve added gene therapy and gene editing. We haven’t sequenced one genome—we’ve sequenced a million. An astute observer might realize we’ve already come a long way. This story was part of our November/December 2018 issue. "
9
2,018
"Designer babies aren’t futuristic. They’re already here. | MIT Technology Review"
"https://www.technologyreview.com/2018/10/22/139478/are-we-designing-inequality-into-our-genes"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Designer babies aren’t futuristic. They’re already here. Are we designing inequality into our genes? By Laura Hercher archive page Illustration of a fetus between the profiles of a man and a woman. The umbilical cord is forming a dollar sign. Benedikt Luft At first, Matthew assumed the weakness in his knee was the sort of orthopedic nuisance that happens when you turn 30. It was weeks before he consulted a doctor, and months before it occurred to him that there could be a connection between his worsening limp and a cousin’s shoulder problem when they were kids. DNA testing confirmed it: Matthew, like his cousin, had a genetic form of dystonia, a condition where muscles contract uncontrollably. Their grandfather most likely had dystonia as well. I’d met Matthew only a few months earlier, when he’d married my friend’s daughter, Olivia, in one of those hip old New York hotels with an elegant downtown vibe. Since I was the only genetic counselor of their acquaintance, they brought their questions to me. With their permission, I am sharing their story. I have changed their names to preserve their privacy. Matthew was lucky. His was a mild version of DYT1 dystonia, and injections of Botox in his knee helped. But the genetic mutation can cause severe symptoms: contractures in joints or deformities in the spine. Many patients are put on psychoactive medications, and some require surgery for deep brain stimulation. Their kids, Matthew and Olivia were told, might not be as lucky. They would have a 50–50 chance of inheriting the gene variant that causes dystonia and, if they did, a 30% chance of developing the disease. The risk of a severely affected child was fairly small, but not insignificant. My friends learned there was an alternative. They could undergo in vitro fertilization and have their embryos genetically tested while still in a laboratory dish. Using a technology called pre-implantation genetic testing, they could pick the embryos that had not inherited the DYT1 mutation. It would be expensive—costs for IVF in the US average over $20,000 for each try, and testing can add $10,000 or more. And it would require an unpleasant two-week process of ovarian stimulation and egg harvesting. “It wasn’t the way I saw myself making a baby,” Olivia told me. But they wanted what the procedure could offer them: a guarantee that dystonia was eliminated for the next generation, and beyond. Matthew and Olivia don’t think of themselves as having a “designer baby.” That term has negative associations, suggesting something trivial, discretionary, or unethical. They weren’t choosing eye color or trying to boost their kid’s SAT score. They were looking out for the health and well-­being of their future child, as parents should. We risk creating a society where some groups, because of culture or geography or poverty, bear a greater burden of genetic disease. Public opinion on the use of assisted reproductive technology consistently draws a distinction between preventing disease and picking traits. 
The Johns Hopkins Genetics and Public Policy Center, which contacted over 6,000 people through surveys and focus groups from 2002 to 2004, summed up its findings this way: “In general, Americans approve of using reproductive genetic tests to prevent fatal childhood disease, but do not approve of using the same tests to identify or select for traits like intelligence or strength.” The dystonia gene is in a gray zone—some people born with it live perfectly healthy lives—yet presumably few parents would criticize Matthew and Olivia’s choice to weed it out. All embryo testing does fit the “designer” label in one important way, however: it is not available to everybody. Matthew and Olivia opted in to what is a quiet but significant trend. Although the number of couples using this technology remains small, it is growing rapidly. According to the Society for Assisted Reproductive Technology, the number of US IVF attempts with single-gene testing rose from 1,941 in 2014 to 3,271 in 2016, an increase of almost 70%. This is only the beginning. As the price of genetic testing of all kinds drops, more adults are learning about their genetic makeup as part of routine medical care and discovering specific genetic risks before pregnancy. But these people are still most likely to be affluent and educated, like Olivia and Matthew. While they consulted with IVF clinics, Olivia’s own brother and his wife got news of a gene that increased risk for cancer in their kids. “If you could get rid of it, why wouldn’t you?” he asked. Cost was not a concern for these couples, but it is an obstacle for many Americans. The Centers for Disease Control and Prevention (CDC) estimates that 1.7% of babies born in the US today are conceived using IVF. It’s much higher in countries that publicly fund assisted reproductive technology: 4% in Belgium, 5.9% in Denmark. A 2009 study found that 76% of the medical need for assisted reproduction in the US is unmet. Insurance doesn’t normally cover IVF in the US, except for a handful of states where coverage is mandated. Even policies that cover fertility treatment are inconsistent in what they reimburse. Coverage for pre-implantation genetic testing is downright Kafkaesque. Under many policies, testing the embryos is covered, but the IVF procedure itself is not, because the couples are not infertile. “The analogy I like to use,” says James Grifo, director of the Division of Reproductive Endocrinology and Infertility at NYU Langone Health, “is if you were having coronary bypass surgery and they didn’t pay for cracking the chest.” At least part of the reason the IVF industry is growing is not that more people can afford it but that those who can are paying for new kinds of services. Egg banking, for example, is now aggressively marketed to younger women as an insurance policy against age-related infertility. In 2011, egg banking did not even exist as a category in the CDC’s annual report on IVF; by 2016, storing eggs or embryos was the purpose of 25% of all IVF cycles. Elite companies like Facebook offer egg freezing as a perk, but for most people it remains a luxury. Cost isn’t the only barrier. Reproductive technology is less acceptable in racial, ethnic, and religious groups where being seen as infertile carries a stigma. Language barriers can reduce awareness and referrals. Geography also plays a role, since IVF clinics cluster in areas of greatest demand. Presumably, many people would make the same decision as Matthew and Olivia if given the option, but many don’t have that choice. 
Our discomfort around designer babies has always had to do with the fact that it makes the playing field less level—taking existing inequities and turning them into something inborn. If the use of pre-implantation testing grows and we don’t address these disparities, we risk creating a society where some groups, because of culture or geography or poverty, bear a greater burden of genetic disease. What could change society more profoundly than to take genetic disease—something that has always epitomized our shared humanity—and turn it into something that only happens to some people? This story was part of our November/December 2018 issue. "
10
2,018
"Your next doctor’s appointment might be with an AI | MIT Technology Review"
"https://www.technologyreview.com/2018/10/16/139443/your-next-doctors-appointment-might-be-with-an-ai"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Your next doctor’s appointment might be with an AI A new wave of chatbots are replacing physicians and providing frontline medical advice—but are they as good as the real thing? By Will Douglas Heaven archive page Illustration of medical equipment and ipad “My stomach is killing me!” “I’m sorry to hear that,” says a female voice. “Are you happy to answer a few questions?” And so the consultation begins. Where’s the pain? How bad is it? Does it come and go? There’s some deliberation before you get an opinion. “This sounds like dyspepsia to me. Dyspepsia is doctor-speak for indigestion.” Doctor-speak, maybe, but it’s not a doctor speaking. The female voice belongs to Babylon, part of a wave of new AI apps designed to relieve your doctor of needless paperwork and office visits—and reduce the time you have to wait for medical advice. If you’re feeling unwell, instead of calling a doctor, you use your phone to chat with an AI. The idea is to make seeking advice about a medical condition as simple as Googling your symptoms, but with many more benefits. Unlike self-diagnosis online, these apps lead you through a clinical-grade triage process—they’ll tell you if your symptoms need urgent attention or if you can treat yourself with bed rest and ibuprofen instead. The tech is built on a grab bag of AI techniques: language processing to allow users to describe their symptoms in a casual way, expert systems to mine huge medical databases, machine learning to string together correlations between symptom and condition. Babylon Health, a London-based digital-first health-care provider, has a mission statement it likes to share in a big, bold font: to put an accessible and affordable health service in the hands of every person on earth. The best way to do this, says the company’s founder, Ali Parsa, is to stop people from needing to see a doctor. When in doubt, the apps will always recommend seeking a second, human opinion. But by placing themselves between us and medical professionals, they shift the front line of health care. When the Babylon Health app started giving advice on ways to self-treat, half the company’s patients stopped asking for an appointment, realizing they didn’t need one. Babylon is not the only app of its kind—others include Ada, Your.MD, and Dr. AI. But Babylon is the front-­runner because it’s been integrated with the UK’s National Health Service (NHS), showing how such tech could change the way health services are run and paid for. Last year Babylon started a trial with a hospital trust in London in which calls to the NHS’s non-­emergency 111 advice line are handled partly by Babylon’s AI. Callers are asked if they want to wait for a human to pick up or download the Babylon-powered “NHS Online: 111” app instead. Around 40,000 people have already opted for the app. Between late January and early October 2017, 40% of those who used the app were directed to self-treatment options rather than a doctor—around three times the proportion of people who spoke to a human operator. But both the AI and the humans staffing the phone line told the same proportion of people to seek emergency care (21%). When the app started giving advice on ways to self-treat, half of patients stopped asking for an appointment, realizing they didn’t need one. Now Babylon has also co-launched the UK’s first digital doctor’s practice, called GP at Hand. 
People in London can register with the service as they would with their local doctor. But instead of waiting for an appointment slot and taking time off work to see a physician in person, patients can either chat with the app or talk to a GP at Hand doctor on a video link. And in many cases the call isn’t needed. The human doctor becomes your last resort rather than your first. GP at Hand has proved popular; some 50,000 people registered in the first few months, among them Matt Hancock, the UK health minister. Babylon now wants to expand across the UK. The service is also available in Rwanda, where 20% of the adult population has already signed up, according to Mobasher Butt, a doctor and a member of Babylon’s founding team. And it’s setting up services in Canada, with plans to do the same in the US, the Middle East, and China. Your doctor is overloaded For 70 years, the NHS has provided free medical care to anyone who needs it, paid for by UK taxpayers. But it is showing signs of strain. Two generations ago there were 50 million Britons, and their average life expectancy was not much over 60 years. There are now 66 million, and most can expect to live into their 80s. That stretches the resources of a system that has never been flush with cash. On average, people in the UK see a doctor six times a year, twice as often as a decade ago. From 2011 to 2015, the average GP clinic’s patient list grew by 10% and its number of contacts with patients (by phone or in person) grew by 15.4%, according to a survey by the King’s Fund. In a survey by the British Medical Association in 2016, 84% of general practitioners said they found their workload either “unmanageable” or “excessive,” with “a direct impact on the quality” of care they gave their patients. In turn, people often have to wait days to get a non-urgent consultation. Many show up at hospital emergency departments instead, adding even more strain to the system. “We have the perception that it’s older people who turn up [at the emergency room],” says Lee Dentith, CEO and founder of the Now Healthcare Group, a health-tech company based in Manchester, UK. “But it’s not. It’s the 18- to 35-year-olds who are unwilling to wait a week for an appointment.” Population and life expectancy will continue to grow. By 2040, it is estimated, the UK will have more than 70 million people, one in four of whom will be over 65. Most other rich countries are also getting older. At the same time, the next few decades will see more people living with long-term illnesses such as diabetes and heart disease. And better treatment for diseases like cancer means millions more people will be living with or recovering from them. Of course, the UK is not alone. Whether because of prohibitive costs in the US or the lack of medical professionals in Rwanda, “all health systems around the world are stretched,” says Butt. “There’s not enough clinical resources. There’s not enough money.” Which is where companies like Babylon come in. A chatbot can act as a gatekeeper to overworked doctors. Freeing up even more of the doctor’s time, the AI can also handle paperwork and prescriptions, and even monitor care at home. A chatbot can also direct people to the right provider. “A GP is not always the best person to see,” says Naureen Bhatti, a general practitioner in East London. “A nurse might be better at dressing a wound, and a pharmacist might be better for advice about a repeat prescription. 
Anything that helps unload a very overloaded system, allowing doctors to do what they are best at, is always welcome.” Sometimes AI is just better Bhatti remembers how upset lots of doctors were when patients first started bringing in printouts from their own web searches. “How dare they try and diagnose themselves! Don’t think you can negate my six years at medical school with your one hour on the internet.” But she likes to see it from the patients’ perspective: “Well, don’t think you can negate my six years of living with this illness with your one-hour lecture at medical school.” When a patient does meet a doctor face to face, the AI can still help by suggesting diagnoses and possible treatments. This is useful even when a doctor is highly skilled, says Butt, and it’s “really critical” in poorer countries with a shortage of competent doctors. AI can also help spot serious conditions early. “By the time most diseases are diagnosed, a £10 problem has become a £1,000 one,” says Parsa. “We wait until we break down before going to a doctor.” Catching a disease early slashes the cost of treating it. These apps first hit the market as private health services. Now they are starting to integrate with national health-care providers and insurers. For example, Ada users can share their chatbot sessions with their NHS doctor, and the company is now working with a handful of GP practices to enable the chatbot to refer them to the doctor. Another app, Now Patient, provides video consultations with your existing doctor, and it also acts as an AI pharmacist. Users can buy their drugs from the Now Healthcare Group’s drug-delivery service. It’s a kind of Amazon for medicines. “This is a service that patients really want, that they didn’t previously have, and that is now being provided to them through the NHS 365 days a year, 24 hours a day, for free,” Butt says of Babylon. “And the brilliant thing is it doesn’t cost the NHS a single penny more to deliver that.” Not only will the AI in these apps get smarter; it will get to know its users better. “We’re building in the ability for patients to manage their health not only when they’re sick, but also when they’re not sick,” says Butt. The apps will become constant companions for millions of us, advising us and coaxing us through everyday health choices. Death by chatbot? Not everyone is happy about all this. For a start, there are safety concerns. Parsa compares what Babylon does with your medical data to what Facebook does with your social activities—amassing information, building links, drawing on what it knows about you to prompt some action. Suggesting you make a new friend won’t kill you if it’s a bad recommendation, but the stakes are a lot higher for a medical app. According to Babylon, its chatbot can identify medical conditions as well as human doctors do, and give treatment advice that’s safer. In a study posted online in June and coauthored with researchers at Imperial College London, Stanford University, and the Northeastern Medical Group, Babylon put its AI through a version of the final exam of the Royal College of General Practitioners (RCGP), which British GPs must pass in order to practice unsupervised. Babylon’s AI scored 81%, 9% higher than the average grade achieved by UK medical students. The RCGP was quick to distance itself from Babylon’s hype, however.
“The potential of technology to support doctors to deliver the best possible patient care is fantastic, but at the end of the day, computers are computers, and GPs are highly trained medical professionals: the two can’t be compared and the former may support but will never replace the latter,” said RCGP vice chair Martin Marshall in a statement. “No app or algorithm will be able to do what a GP does.” Others level far more serious charges, suggesting that Babylon has focused on making its service accessible and affordable at the expense of patients’ safety. One Twitter user with the handle DrMurphy11 (he’s an NHS consultant who told me he needs to remain anonymous because of the corporate culture there) has coined the hashtag #DeathByChatbot. In videos showing interactions with the app, DrMurphy11 suggests that Babylon’s AI misses obvious diagnoses and fails to ask the right questions. “I have no concerns about health tech or AI in general,” he says. “No doctor wants to make mistakes, and any system that helps minimize the risk of harm from human error will be welcomed.” But he’s worried that companies are misleading doctors and the public with marketing claims that vastly oversell their current tech. Babylon has also met with criticism in Rwanda, where it runs the Babyl service, for not taking local epidemiology into account. In an interview with the BBC, Rwanda’s minister of health claimed that the Babyl app included no questions about malaria, for example (although Babylon disputes this). Still, while Babylon may not be as good as a real doctor (and such apps are always careful to recommend you see a real doctor when in doubt), playing it too safe would defeat the purpose. “We wanted to re-create the same pragmatic approach that a clinician takes,” says Butt. “If we just had a group of nonclinical people building the service, they might have gone for something that was 100 percent safe, but that could mean you send everyone to hospital, which is not what a real doctor or nurse would do.” Another fear is that digital-first services will create a two-tiered health-care system. For example, GP at Hand advises people with serious medical issues to think twice about signing up to a practice that offers mostly remote access to doctors. That might seem prudent, but it has led to accusations that GP at Hand is effectively cherry-picking younger patients with less complex—and less expensive—health-care needs. Since British GP practices get per-patient funding from the NHS, cherry-picking would mean the rest of the health-care system is left to do more with less. For some GPs, this isn’t acceptable. “We take everybody,” says Bhatti. But Oliver Michelson, a spokesperson for the NHS, accepts that GP at Hand has to issue some form of caveat—it can’t realistically welcome everyone. “They are not denying people access but saying that if you’re going to need to come into your GP regularly, a digital-first service may not be the best place to be,” he says. And Butt insists that they exclude nobody. “The service is available to everyone,” he says; it just may not suit some people, such as those with severe learning difficulties or visual impairments, who would struggle with the app. People still come in handy For Bhatti, having a local doctor who knows you is a crucial part of the health system. “Knowing your doctor saves lives,” she says. “Doctors will pick up things because there’s continuity.” She thinks this is just as much an issue for doctors as for patients.
“How do we make this a job people want to do?” she says. “I don’t think people working flexibly, consulting from their kitchen, is why people come to medicine. They come to meet patients.” Not even Butt envisions chatbots replacing human doctors entirely. “Care is not just about diagnosing or prescribing medicine,” he says. “It’s about knowing your patient is going to be able to cope with the chemotherapy you’re proposing for them, knowing that their family will be able to offer them the support that they’re going to need for the next few months. Currently there is no software that’s going to be able to replace that.” by Will Douglas Heaven This story was part of our November/December 2018 issue. "
11
2,018
"The smartphone app that can tell you’re depressed before you know it yourself | MIT Technology Review"
"https://www.technologyreview.com/2018/10/15/66443/the-smartphone-app-that-can-tell-youre-depressed-before-you-know-it-yourself"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts The smartphone app that can tell you’re depressed before you know it yourself Analyzing the way you type and scroll can reveal as much as a psychological test. By Rachel Metz archive page Photo of Paul Dagum0 There are about 45 million people in the US alone with a mental illness, and those illnesses and their courses of treatment can vary tremendously. But there is something most of those people have in common: a smartphone. A startup founded in Palo Alto, California, by a trio of doctors, including the former director of the US National Institute of Mental Health, is trying to prove that our obsession with the technology in our pockets can help treat some of today’s most intractable medical problems: depression, schizophrenia, bipolar disorder, post-traumatic stress disorder, and substance abuse. Mindstrong Health is using a smartphone app to collect measures of people’s cognition and emotional health as indicated by how they use their phones. Once a patient installs Mindstrong’s app, it monitors things like the way the person types, taps, and scrolls while using other apps. This data is encrypted and analyzed remotely using machine learning, and the results are shared with the patient and the patient’s medical provider. The assessment included classic neuropsychological tests that have been used for decades, like a so-called timed trail-tracing test. The seemingly mundane minutiae of how you interact with your phone offers surprisingly important clues to your mental health, according to Mindstrong’s research—revealing, for example, a relapse of depression. With details gleaned from the app, Mindstrong says, a patient’s doctor or other care manager gets an alert when something may be amiss and can then check in with the patient by sending a message through the app (patients, too, can use it to message their care provider). For years now, countless companies have offered everything from app-based therapy to games that help with mood and anxiety to efforts to track smartphone activities or voice and speech for signs of depression. But Mindstrong is different, because it’s considering how users’ physical interactions with the phones—not what they do, but how they do it—can point to signs of mental illness. That may lead to far more accurate ways to track these problems over time. If Mindstrong’s method works, it could be the first that manages to turn the technology in your pocket into the key to helping patients with a wide range of chronic brain disorders—and may even lead to ways to diagnose them before they start. Digital fingerprints Before starting Mindstrong, Paul Dagum , its founder and CEO, paid for two Bay Area–based studies to figure out whether there might be a systemic measure of cognitive ability—or disability—hidden in how we use our phones. One hundred and fifty research subjects came into a clinic and underwent a standardized neurocognitive assessment that tested things like episodic memory (how you remember events) and executive function (mental skills that include the ability to control impulses, manage time, and focus on a task)—the kinds of high-order brain functions that are weakened in people with mental illnesses. The assessment included neuropsychological tests that have been used for decades, like a so-called timed trail-­tracing test, where you have to connect scattered letters and numbers in the proper order—a way to measure how well people can shift between tasks. 
People who have a brain disorder that weakens their attention may have a harder time with this. Subjects went home with an app that measured the ways they touched their phone’s display (swipes, taps, and keyboard typing), which Dagum hoped would be an unobtrusive way to log these same kinds of behavior on a smartphone. For the next year, it ran in the background, gathering data and sending it to a remote server. Then the subjects came back for another round of neurocognitive tests. As it turns out, the behaviors the researchers measured can tell you a lot. “There were signals in there that were measuring, correlating—predicting, in fact, not just correlating with—the neurocognitive function measures that the neuropsychologist had taken,” Dagum says. For instance, memory problems, which are common hallmarks of brain disorders, can be spotted by looking at things including how rapidly you type and what errors you make (such as how frequently you delete characters), as well as by how fast you scroll down a list of contacts. (Mindstrong can first determine your baseline by looking at how you use your handset and combining those characteristics with general measures.) Even when you’re just using the smartphone’s keyboard, Dagum says, you’re switching your attention from one task to another all the time—for example, when you’re inserting punctuation into a sentence. He became convinced the connections presented a new way to investigate human cognition and behavior over time, in a way that simply isn’t possible with typical treatment like regularly visiting a therapist or getting a new medication, taking it for a month, and then checking back in with a doctor. Brain-disorder treatment has stalled in part because doctors simply don’t know that someone’s having trouble until it’s well advanced; Dagum believes Mindstrong can figure it out much sooner and keep an eye on it 24 hours a day. In 2016, Dagum visited Verily, Alphabet’s life sciences company, where he pitched his work to a group including Tom Insel , a psychiatrist who had spent 13 years as director of the National Institute of Mental Health before he joined Verily in 2015. Verily was trying to figure out how to use phones to learn about depression or other mental health conditions. But Insel says that at first, what Dagum presented—more a concept than a show of actual data—didn’t seem like a big deal. “The bells didn’t go off about what he had done,” he says. Over several meetings, however, Insel realized that Dagum could do something he believed nobody in the field of mental health had yet been able to accomplish. He had figured out smartphone signals that correlated strongly with a person’s cognitive performance—the kind of thing usually possible only through those lengthy lab tests. What’s more, he was collecting these signals for days, weeks, and months on end, making it possible, in essence, to look at a person’s brain function continuously and objectively. “It’s like having a continuous glucose monitor in the world of diabetes,” Insel says. Why should anyone believe that what Mindstrong is doing can actually work? Dagum says that thousands of people are using the app, and the company now has five years of clinical study data to confirm its science and technology. It is continuing to perform numerous studies, and this past March it began working with patients and doctors in clinics. In its current form, the Mindstrong app that patients see is fairly sparse. 
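Mindstrong has not published its feature definitions or its models, so the following is only a rough sketch of the general approach described above: summarize passively logged touch interactions into a few candidate signals and check how strongly each one tracks a score measured in the clinic. The event format, the feature names (inter-key gaps, backspace rate, scroll distance), and the plain Pearson correlation are illustrative assumptions, not the company's method.

```python
from dataclasses import dataclass
from statistics import mean
from typing import List

@dataclass
class TouchEvent:
    t: float                 # timestamp, seconds
    kind: str                # "key", "backspace", or "scroll"
    scroll_px: float = 0.0   # pixels moved, only meaningful for scroll events

def interaction_features(events: List[TouchEvent]) -> dict:
    """Summarize one user's passively logged touches into candidate signals."""
    keys = [e for e in events if e.kind in ("key", "backspace")]
    gaps = [b.t - a.t for a, b in zip(keys, keys[1:])]
    scrolls = [e for e in events if e.kind == "scroll"]
    return {
        "mean_interkey_gap": mean(gaps) if gaps else 0.0,
        "backspace_rate": sum(e.kind == "backspace" for e in keys) / len(keys) if keys else 0.0,
        "mean_scroll_px": mean(e.scroll_px for e in scrolls) if scrolls else 0.0,
    }

def pearson(xs: List[float], ys: List[float]) -> float:
    """Correlation between a phone-derived feature and a clinic-measured score."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy) if sx and sy else 0.0

# Invented example: two users' touch logs and their executive-function scores.
logs = {
    "user_a": [TouchEvent(0.0, "key"), TouchEvent(0.6, "backspace"), TouchEvent(1.4, "key")],
    "user_b": [TouchEvent(0.0, "key"), TouchEvent(0.2, "key"), TouchEvent(0.5, "key")],
}
clinic_scores = {"user_a": 42.0, "user_b": 55.0}

gap_feature = [interaction_features(ev)["mean_interkey_gap"] for ev in logs.values()]
scores = [clinic_scores[u] for u in logs]
print(pearson(gap_feature, scores))   # with only two users this is trivially +/-1
```

A real pipeline would presumably use far richer features and cross-validated models over many months of data, but the shape is the point: passive touch logs in, continuous estimates of cognition out.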
There’s a graph that updates daily with five different signals collected from your smartphone swipes and taps. Four of these signals are measures of cognition that are tightly tied to mood disorders (such as the ability to make goal-based decisions), and the other measures emotions. There’s also an option to chat with a clinician. We don’t know how many different illnesses are in the category of depression. Insel hopes Mindstrong can use patient data to find out. For now, Insel says, the company is working mainly with seriously ill people who are at risk of relapse for problems like depression, schizophrenia, and substance abuse. “This is meant for the most severely disabled people, who are really needing some innovation,” he says. “There are people who are high utilizers of health care and they’re not getting the benefits, so we’ve got to figure out some way to get them something that works better.” Actually predicting that a patient is headed toward a downward spiral is a harder task, but Dagum believes that having more people using the app over time will help cement patterns in the data. There are thorny issues to consider, of course. Privacy, for one: while Mindstrong says it protects users’ data, collecting such data at all could be a scary prospect for many of the people it aims to help. Companies may be interested in, say, including it as part of an employee wellness plan, but most of us wouldn’t want our employers anywhere near our mental health data, no matter how well protected it may be. Spotting problems before they start A study in the works at the University of Michigan is looking at whether Mindstrong may be beneficial for people who do not have a mental illness but do have a high risk for depression and suicide. Led by Srijan Sen, a professor of psychiatry and neuroscience, the study tracks the moods of first-year doctors across the country—a group that is known to experience intense stress, frequent sleep deprivation, and very high rates of depression. Participants log their mood each day and wear a Fitbit activity tracker to log sleep, activity, and heart-rate data. About 1,500 of the 2,000 participants also let a Mindstrong keyboard app run on their smartphones to collect data about the ways they type and figure out how their cognition changes throughout the year. Sen hypothesizes that people’s memory patterns and thinking speed change in subtle ways before they realize they’re depressed. But he says he doesn’t know how long that lag will be, or what cognitive patterns will be predictive of depression. Insel also believes Mindstrong may lead to more precise diagnoses than today’s often broadly defined mental health disorders. Right now, for instance, two people with a diagnosis of major depressive disorder might share just one of numerous symptoms: they could both feel depressed, but one might feel like sleeping all the time, while the other is hardly sleeping at all. We don’t know how many different illnesses are in the category of depression, Insel says. But over time Mindstrong may be able to use patient data to find out. The company is exploring how learning more about these distinctions might make it possible to tailor drug prescriptions for more effective treatment. Insel says it’s not yet known if there are specific digital markers of, say, auditory hallucinations that someone with schizophrenia might experience, and the company is still working on how to predict future problems like post-traumatic stress disorder. 
But he is confident that the phone will be the key to figuring it out discreetly. “We want to be able to do this in a way that just fits into somebody’s regular life,” he says. by Rachel Metz This story was part of our November/December 2018 issue. "
12
2,018
"DNA databases are too white. This man aims to fix that. | MIT Technology Review"
"https://www.technologyreview.com/2018/10/15/139472/dna-databases-are-too-white-this-man-aims-to-fix-that"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts DNA databases are too white. This man aims to fix that. Carlos D. Bustamante’s hunt for genetic variations between populations should help us better understand and treat disease. By David Rotman archive page Photo of Carlos D. Bustamante In the 15 years since the Human Genome Project first exposed our DNA blueprint, vast amounts of genetic data have been collected from millions of people in many different parts of the world. Carlos D. Bustamante’s job is to search that genetic data for clues to everything from ancient history and human migration patterns to the reasons people with different ancestries are so varied in their response to common diseases. Bustamante’s career has roughly spanned the period since the Human Genome Project was completed. A professor of genetics and biomedical data science at Stanford and 2010 winner of a MacArthur genius award, he has helped to tease out the complex genetic variation across different populations. These variants mean that the causes of diseases can vary greatly between groups. Part of the motivation for Bustamante, who was born in Venezuela and moved to the US when he was seven, is to use those insights to lessen the medical disparities that still plague us. But while it’s an area ripe with potential for improving medicine, it’s also fraught with controversies over how to interpret genetic differences between human populations. In an era still obsessed with race and ethnicity—and marred by the frequent misuse of science in defining the characteristics of different groups—Bustamante remains undaunted in searching for the nuanced genetic differences that these groups display. Perhaps his optimism is due to his personality—few sentences go by without a “fantastic” or “extraordinarily exciting.” But it is also his recognition as a population geneticist of the incredible opportunity that understanding differences in human genomes presents for improving health and fighting disease. David Rotman, MIT Technology Review ’s editor at large, discussed with Bustamante why it’s so important to include more people in genetic studies and understand the genetics of different populations. How good are we at making sure that the genomic data we’re collecting is inclusive? I’m optimistic, but it’s not there yet. In our 2011 paper, the statistic we had was that more than 96% of participants in genome-wide association studies were of European descent. In the follow-up in 2016, the number went from 96% to around 80%. So that’s getting better. Unfortunately, or perhaps fortunately, a lot of that is due to the entry of China into genetics. A lot of that was due to large-scale studies in Chinese and East Asian populations. Hispanics, for example, make up less than 1% of genome-wide association studies. So we need to do better. Ultimately, we want precision medicine to benefit everybody. Aside from a fairness issue, why is diversity in genomic data important? What do we miss without it? First of all, it has nothing to do with political correctness. It has everything to do with human biology and the fact that human populations and the great diaspora of human migrations have left their mark on the human genome. The genetic underpinnings of health and disease have shared components across human populations and things that are unique to different populations. How does that play out? Diabetes is a great example. 
If we look at the genetics of diabetes, they are different in different parts of the world. In the early 2010s, the Broad [Institute of MIT and Harvard] did a study with the National Institute of Genomic Medicine in Mexico to study the genetics of diabetes. Sure enough, they found a genetic variant that has a 25% frequency in Mexico that you don’t see in European, East Asian, or African populations. It is largely seen only in the Americas, and it underscores a large part of ethnic disparity in diabetes. “We can’t use genetics for the purpose of trying to define the stories we tell about ourselves.” We’ve done research on seemingly innocuous traits like blond hair. There is no more striking phenotype. Some people have blond hair and some people don’t. And the cause of blond hair in Melanesia is completely different from the cause in Europe—and that’s blond hair. So why do you think diabetes, heart disease, all these other complex traits will have identical causes in all humans? It doesn’t make sense. It turns out the highest prevalence of asthma [in the US] is in individuals of Puerto Rican ancestry, followed by individuals of African-American ancestry, followed by European ancestry. The people with the lowest rate of asthma are those of Mexican ancestry. You have two of the Hispanic populations at the opposite ends of the spectrum. Why is detailing these genetic differences helpful for medicine? If the genetic etiology of disease is different, it gives us an opportunity to discover new drug targets. It gives us new biology that then can be used even for those that don’t necessarily suffer from the disease in that way. It’s important for drug discovery. If you think of it like looking for oil, we’ve only been looking for oil in the North Sea. There are plenty of other places to search, and that benefits everyone. Secondly, we’re finding that polygenic risk scores [disease-risk predictions based on genetic tests] for European ancestry don’t translate easily into other populations. If we don’t have broad representation in medical and population genetics, then we run the risk of widening health disparities, which will be a terrible outcome for precision medicine and precision health. So aren’t you disappointed by the lack of progress in including more populations in genomic data? I’m actually super-excited. We’ve done a great job of mining for drug targets in Europe. Iceland led the way, Britain led the way, and now Finland. So we’re tapping all those resources—awesome. But what about Latin America? What about Africa? What about South Asia? All of those places have tons to contribute to our understanding of health and disease. It is both a moral obligation and a missed scientific opportunity if we don’t go to work in those populations. Many genetic researchers have long argued that race has no basis in science. But the debate doesn’t seem to go away. In a global context there is no model of three, or five, or even 10 human races. There is a broad continuum of genetic variation that is structured, and there are pockets of isolated populations. Three, five, or 10 human races is just not an accurate model; it is far more of a continuum model. Humans are a beautifully diverse species both phenotypically and genetically. This is very classic population genetics. If I walk from Cape Horn all the way to the top of Finland, every village looks like the village next to it, but at the extremes people are different. But as a population geneticist? 
I don’t find race a meaningful way to characterize people. You walk a tricky line, though, don’t you? You’re pointing out the importance of variance between different populations, but you don’t want to reinforce old categories of race. We can’t use genetics for the purpose of trying to define the stories we tell about ourselves. Social determinants of health are often far more important than genetic determinants of health, but that doesn’t mean genetic determinants aren’t important. So you’ve got to embrace the complexity and figure out how we translate this to a broad general public. I’m actually an optimist. I think the world is becoming a less racist place. If you talk to the next generation of people, millennials on down, those abhorrent ideologies are thrown away. That means it gives us a space to now think about what role does genetics play in health and diseases and human evolution in ways that we can soberly understand and bring to bear on important problems. We can’t allow genetics to get hijacked by identity politics. If you begin to allow politics and other interests to come in, you just muddy the waters. You need to let the data lead. You need to let outcomes lead. And the rest will follow. Data bias in DNA studies Precision medicine is getting more precise for some but leaving many others behind. And those left behind are often people with Latin American, African, Native American, and other ancestries that are underrepresented in genomic databases. By far, most of the data in genome-wide association studies, which have been critical in spotting genetic variants tied to common diseases, comes from people with European ancestry. In 2011, Carlos D. Bustamante and his colleagues called out the disparities and the resulting threat that genomic medicine “will largely benefit a privileged few.” In subsequent years, the collection of genomic data has exploded, but the disparities remain. In 2016, Alice Popejoy, who was a PhD student at the University of Washington and is now a postdoc in Bustamante’s lab, updated the results in the journal Nature, finding little progress for most population groups. One result of this lack of data is that genetic tests may be less relevant and accurate for people from underrepresented groups. Increasingly popular consumer genetic tests can be misleading or just plain wrong, and medical genetic tests for some common diseases are often inconclusive. Likewise, Popejoy says, false positives and false negatives in genetic diagnoses are more common in people with non-European ancestry, because the results are interpreted using databases that are incomplete or biased toward European ancestry. 
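The interview and the sidebar both lean on polygenic risk scores without spelling out what one is. Mechanically, a simple score is just a weighted sum: count how many copies of each risk allele a person carries and weight each count by the effect size estimated in a discovery study. The sketch below uses invented variant IDs, weights, and genotypes; it is only meant to show why portability breaks down when weights estimated in one ancestry group are applied unchanged elsewhere.

```python
# Illustrative only: variant IDs, effect sizes, and genotypes are invented.
# In practice effect_weights would come from a GWAS in a "discovery" population.
effect_weights = {          # log-odds per copy of the risk allele
    "rs0001": 0.12,
    "rs0002": -0.05,
    "rs0003": 0.30,
}

def polygenic_risk_score(genotype: dict) -> float:
    """Weighted sum of risk-allele counts (0, 1, or 2 copies per variant).

    Variants missing from a person's panel are silently skipped, one of several
    reasons a score trained in one ancestry group can mislead in another.
    """
    return sum(effect_weights[v] * genotype.get(v, 0) for v in effect_weights)

person_one = {"rs0001": 1, "rs0002": 2, "rs0003": 0}
person_two = {"rs0001": 2, "rs0003": 2}   # rs0002 absent from this panel

print(round(polygenic_risk_score(person_one), 2))   # 0.02
print(round(polygenic_risk_score(person_two), 2))   # 0.84
```

If those weights came from a European-ancestry study, applying them unchanged to the second genotype quietly assumes the same variants matter in the same way, which, as the interview argues, often is not true.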
by David Rotman This story was part of our November/December 2018 issue. "
13
2,023
"Humane officially launches the AI Pin, its OpenAI-powered wearable - The Verge"
"https://www.theverge.com/2023/11/9/23953901/humane-ai-pin-launch-date-price-openai"
"The Verge homepage The Verge homepage The Verge The Verge logo. / Tech / Reviews / Science / Entertainment / More Menu Expand Menu Gadgets / Tech / Artificial Intelligence Humane officially launches the AI Pin, its OpenAI-powered wearable Humane officially launches the AI Pin, its OpenAI-powered wearable / It’s a gadget designed for interacting with large language models, not apps, and for talking instead of typing. But it’s not yet entirely clear what you’re supposed to use it for. By David Pierce , editor-at-large and Vergecast co-host with over a decade of experience covering consumer tech. Previously, at Protocol, The Wall Street Journal, and Wired. | Share this story On Thursday, after months of demos and hints about what the AI-powered future of gadgets might look like, Humane finally took the wraps off of its first device: the AI Pin. The device, as we revealed yesterday , is a $699 wearable in two parts: a square device and a battery pack that magnetically attaches to your clothes or other surfaces. In addition to that price, there’s also the $24 monthly fee for a Humane subscription, which gets you a phone number and data coverage through T-Mobile’s network. The company told Wired the device will start shipping in early 2024 and that preorders begin November 16th. The AI Pin is powered by a Snapdragon processor — though it’s not clear which one — and you control it with a combination of voice control, a camera, gestures, and a small built-in projector. The Pin itself weighs about 34 grams, and the “battery booster” adds another 20. The built-in camera takes 13-megapixel photos and will capture video as well after a software update. Related Humane’s AI Pin: all the news about the new AI-powered wearable Unlike a device like the Rewind Pendant , it’s not meant to be always recording, and it’s not even listening for a wake word. You’ll have to activate the device manually by tapping and dragging on the touchpad, and the Pin’s “Trust Light” blinks to let you and supposedly everyone else know it’s collecting data. Introducing Humane Ai Pin from Humane, Inc. on Vimeo. The Pin’s primary job is to connect to AI models through software the company calls AI Mic. Humane’s press release mentions both Microsoft and OpenAI, and previous reports suggested that the Pin was primarily powered by GPT-4 — Humane says that ChatGPT access is actually one of the device’s core features. Its operating system, called Cosmos, is designed to route your queries to the right tools automatically rather than asking you to download and manage apps. The Pin’s primary job is to connect to AI models What Humane is trying to do with the Pin is essentially strip away all the interface cruft from your technology. It won’t have a homescreen or lots of settings and accounts to manage; the idea is that you can just talk to or touch the Pin, say what you want to do or know, and it’ll happen automatically. Over the last year, we’ve seen a huge amount of functionality become available through a simple text command to a chatbot; Humane’s trying to build a gadget in the same spirit. The question, then, is what this thing can actually do. Most of the features Humane mentions in its announcement today are the ones co-founder Imran Chaudhri showed off during a demo at TED earlier this year: voice-based messaging and calling; a “catch me up” feature that can summarize your email inbox; holding up food to the camera to get nutritional information; and real-time translation. 
Beyond that, though, it seems the device’s primary purpose is as something of a wearable LLM-powered search engine. The company did tell Wired it intends to add navigation and shopping capabilities, though, and plans to give developers ways to build tools of their own. Humane seems to view the AI Pin as the beginning of a larger project, which is probably correct: it will get better as the underlying models get better, and seemingly the whole tech industry is hard at work looking for new things to do with AI. Humane may hope its device evolves the way the smartphone did: better hardware improves the user experience over time, but the real revolution comes from what you can do with the device. There’s a lot of work left to do on that front, but Humane’s apparently ready to get started. "
14
2,022
"This startup plans to create realistic human embryos | MIT Technology Review"
"https://www.technologyreview.com/2022/08/04/1056633/startup-wants-copy-you-embryo-organ-harvesting"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts This startup wants to copy you into an embryo for organ harvesting With plans to create realistic synthetic embryos, grown in jars, Renewal Bio is on a journey to the horizon of science and ethics. By Antonio Regalado archive page Ms Tech In a search for novel forms of longevity medicine, a biotech company based in Israel says it intends to create embryo-stage versions of people in order to harvest tissues for use in transplant treatments. The company, Renewal Bio, is pursuing recent advances in stem-cell technology and artificial wombs demonstrated by Jacob Hanna, a biologist at the Weizmann Institute of Science in Rehovot. Earlier this week, Hanna showed that starting with mouse stem cells, his lab could form highly realistic-looking mouse embryos and keep them growing in a mechanical womb for several days until they developed beating hearts, flowing blood, and cranial folds. It’s the first time such an advanced embryo has been mimicked without sperm, eggs, or even a uterus. Hanna’s report was published in the journal Cell on Monday. “This experiment has huge implications,” says Bernard Siegel, a patient advocate and founder of the World Stem Cell Summit. “One wonders what mammal could be next in line.” The answer is humans. Hanna tells MIT Technology Review he is already working to replicate the technology starting with human cells and hopes to eventually produce artificial models of human embryos that are the equivalent of a 40- to 50-day-old pregnancy. At that stage basic organs are formed, as well as tiny limbs and fingers. Related Story “We view the embryo as the best 3D bio printer,” says Hanna. “It’s the best entity to make organs and proper tissue.” Researchers can already print or grow simple tissues, like cartilage or bone, but making more complex cell types and organs has proved difficult. An embryo, however, starts building the body naturally. “The vision of the company is ‘Can we use these organized embryo entities that have early organs to get cells that can be used for transplantation?’ We view it as perhaps a universal starting point,” says Hanna. Embryonic blood cells might be collected, multiplied, and transferred to an elderly person in order to reboot the immune system. Another concept is to grow embryonic copies of women with age-related infertility. Researchers could then collect the model embryo’s gonads, which could be further matured, either in the lab or via transplant into the woman’s body, to produce youthful eggs. The startup, funded so far with seed capital from the venture firm NFX , has been briefing other investors, and its pitch materials state that its mission is “renewing humanity—making all of us young and healthy.” Now humans Renewal Bio’s precise technical plan remains under wraps, and the company’s website is just a calling card. “It’s very low on details for a reason. We don’t want to overpromise, and we don’t want to freak people out,” says Omri Amirav-Drory, a partner at NFX who is acting as CEO of the new company. “The imagery is sensitive here.” Some scientists say it will be difficult to grow human embryo models to an advanced stage and that it would be better to avoid the controversy raised by imitating real embryos too closely. “It’s absolutely not necessary, so why would you do it?” says Nicolas Rivron, a stem-cell scientist at the Institute of Molecular Biotechnology in Vienna. 
He argues that scientists should only create “the minimal embryonic structure necessary” to yield cells of interest. For his part, Amirav-Drory says he hasn’t seen a technology with so much potential since CRISPR gene-editing technology first emerged. “The ability to create a synthetic embryo from cells—no egg, no sperm, no uterus—it’s really amazing,” he says. “We think it can be a massive, transformative platform technology that can be applied to both fertility and longevity.” Mechanical womb To create the succession of breakthroughs, Hanna’s lab has been combining advanced stem-cell science with new types of bioreactors. A year ago, the stem-cell specialist first showed off a “ mechanical womb ” in which he managed to grow natural mouse embryos outside of a female mouse for several days. The system involves spinning jars that keep the embryos bathed in nutritious blood serum and oxygen. In the new research published this week, Hanna used the same mechanical womb, but this time to grow look-alike embryos created from stem cells. Remarkably, when stem cells are grown together in specially shaped containers, they will spontaneously join and try to assemble an embryo , producing structures that are called embryoids, blastoids, or synthetic embryo models. Many researchers insist that despite appearances, these structures have limited relation to real embryos and zero potential to develop completely. By adding these synthetic mouse embryos to his mechanical womb, however, Hanna managed to grow them further than ever before, to the point where hearts started beating, blood began moving, and there was the start of a brain and a tail. “The embryos really look great,” says Hanna, whose report this week provoked awe among other scientists. “They are really, really similar to natural embryos.” Analyses show the synthetic versions are about 95% similar to normal mouse embryos, based on the mix of cell types inside each. Even so, techniques for growing synthetic embryos remain inefficient. Fewer than 1 in 100 attempts to mimic a mouse embryo was successful, and even the model embryos that developed for the longest time eventually suffered abnormalities, including heart problems, perhaps because they couldn’t grow any further without a proper blood supply. Mini-Me In a next set of experiments, Hanna is using his own blood or skin cells (and those of a few other volunteers) as the starting point for making synthetic human embryos. It means his lab could soon be swimming in hundreds or thousands of tiny mini-mes—all genetic clones of himself. Related Story Researchers are growing embryos outside the womb for longer than has ever been possible. Hanna is not troubled by the idea. Despite the startling fact that he’s able to mimic the beginnings of mammals in test tubes, he views these as entities without a future. They’re probably not viable, he says. Plus, right now there is no way to graduate from jar life to real life. Without a placenta and an umbilical cord connected to a mother, no synthetic embryo could survive if transplanted to a uterus. “We are not trying to make human beings. That is not what we are trying to do.” says Hanna. “To call a day-40 embryo a mini-me is just not true.” Still, as this technology progresses, there could be debate as to whether synthetic embryos have any rights—or if they can ethically be used as fodder for science and medicine. 
In the US, the National Institutes of Health has, in some cases, declined to fund studies on synthetic embryos that it believes would be too close to the real thing. Although Hanna doesn’t think an artificial embryo made from stem cells and kept in a lab will ever count as a human being, he has a contingency plan to make sure there is no confusion. It’s possible, for instance, to genetically engineer the starting cells so the resulting model embryo never develops a head. Restricting its potential could help avoid ethical dilemmas. “We think this is important and have invested a lot in this,” says Hanna. Genetic changes can be made that lead to “no lungs, no heart, or no brain.” The new startup, Renewal, has already hired some of Hanna’s students and licensed his technology from the Weizmann Institute. It’s going to begin spending money improving the incubators, developing sensors to track the embryoids as they develop, and coming up with ways to extend their survival time in the lab. Amirav-Drory says the company is at such an early stage that it is still learning what the technology could be used for—and which applications are the most promising. He and Hanna, who is Renewal’s scientific founder, have been approaching other scientists and doctors to learn what they would do, if they had access to large numbers of synthetic embryos developed for days, or even weeks. “We’ve been asking people, ‘Imagine if we can get to this or that milestone. What does it unlock?’ And people’s eyes light up,” he says. by Antonio Regalado 
"
15
2,021
"Israel’s “green pass” vaccine passport is an early vision of how we leave lockdown | MIT Technology Review"
"https://www.technologyreview.com/2021/03/01/1020154/israels-green-pass-is-an-early-vision-of-how-we-leave-lockdown"
"Featured Topics Newsletters Events Podcasts Featured Topics Newsletters Events Podcasts Israel’s “green pass” is an early vision of how we leave lockdown There are plans all over the world for apps and cards that would prove vaccination. But Israel’s experience suggests major caveats. By Cat Ferguson archive page Joshua Mitnick archive page Maya Alleruzzo/AP The commercial opens with a tempting vision and soaring instrumentals. A door swings wide to reveal a sunlit patio and a relaxed, smiling couple awaiting a meal. “How much have we missed going out with friends?” a voiceover asks. “With the green pass, doors simply open in front of you … We’re returning to life.” It’s an ad to promote Israel’s version of a vaccine passport , but it’s also catnip for anyone who’s been through a year in varying degrees of lockdown. Can we go back to normal life once we’ve been vaccinated? And if we can, what kind of proof should we need? Although there are still many unknowns about vaccines, and many practical issues surrounding implementation, those considering vaccine passport programs include airlines, music venues , Japan , the UK, and the European Union. Some proponents, including those on one side of a fierce debate in Thailand , have focused on ending quarantines for international travelers to stimulate the hard-hit tourism industry. Others imagine following Israel’s lead, creating a two-tiered system that allows vaccinated people to enjoy the benefits of a post-pandemic life while others wait for their shots. What is happening there gives us a glimpse of the promise—and of the difficulties such schemes face. How it works Israel’s vaccine passport was released on February 21, to help the country emerge from a month-long lockdown. Vaccinated people can download an app that displays their “green pass” when they are asked to show it. The app can also display proof that someone has recovered from covid-19. (Many proposed passport systems offer multiple ways to show you are not a danger, such as proof of a recent negative test. The Israeli government says that option will come to the app soon, which will be especially useful for children too young to receive an approved vaccine.) Officials hope the benefits of the green pass will encourage vaccination among Israelis who have been hesitant, many of whom are young. “People who get vaccinated need to know that something has changed for them, that they can ease up,” says Nadav Eyal, a prominent television journalist. “People want to know that they can have some normalcy back.” Related Story Despite the flashy ads, however, it’s still too early to tell how well Israel’s program will work in practice—or what that will mean for vaccine passports in general. Some ethicists argue that such programs may further entrench existing inequalities, and this is already happening with Israel’s pass, since few Palestinians in the occupied territories of Gaza and the West Bank have access to vaccines. The green pass is also a potential privacy nightmare, says Orr Dunkelman, a computer science professor at Haifa University and a board member of Privacy Israel. He says the pass reveals information that those checking credentials don’t need to know, such as the date a user recovered from covid or got a vaccine. The app also uses an outdated encryption library that is more vulnerable to security breaches, Orr says. Crucially, because the app is not open source, no third-party experts can vet whether these concerns are founded. 
“This is a catastrophe in the making,” says Ran Bar Zik, a software columnist for the newspaper Haaretz. Zik recommends another option currently available under the green pass program: downloading a paper vaccination certificate instead of using the app. Although that’s possible, the app is expected to become the most widespread verification method. Unnecessarily complicated In the US, developers are trying to address such privacy concerns ahead of any major rollout. Ramesh Raskar runs the PathCheck Foundation at MIT, which has partnered with the design consultancy Ideo on a low-tech solution. Their prototype uses a paper card, similar to the one people currently receive when they’re vaccinated. The paper card could offer multiple forms of verification, scannable in the form of QR codes, allowing you to show a concert gatekeeper only your vaccination status while displaying another, more information-heavy option to health-care providers. “Getting on a bus, or getting into a concert, you need to have a solution that is very easy to use and that provides a level of privacy protection,” he says. But other situations may require more information: an airline wants to know that you are who you say you are, for example, and hospitals need accurate medical records. It’s not just about making sure you don’t have to hand over personal information to get into a bar, though: privacy is also important for those who are undocumented or who mistrust the government, Raskar says. It’s important for companies not to create another “hackable repository” when they view your information, he adds. He suggests that right now commercial interests are getting in the way of creating something so simple—it wouldn’t make much money for software companies, which at least want to show off something that could be repurposed later in a more profitable form. Compared with Israel, he says, “we’re making things unnecessarily complicated in the US.” The way forward It’s unclear what the US—which, unlike Israel, doesn’t have a universal identity record or a cohesive medical records system—would need to do to implement a vaccine passport quickly. But whichever options eventually do make it into widespread use, there are also aspects of this idea that don’t get laid out in the ads. For example, proposals have been floated that would require teachers and medical staff to provide proof of vaccination or a negative test to gain admittance to their workplaces. That could be overly intrusive on individual privacy rights, says Amir Fuchs, a researcher at the Israel Democracy Institute. Still, he says, “most people understand that there is a logic in that people who are vaccinated will have less limitations.” Despite the progress in delivering vaccines, all these passport efforts are all still in the early stages. PathCheck’s idea hasn’t rolled out yet, although pilots are under discussion. In Denmark, vaccine passports are still more a promise than a plan. And even in Israel, the vision put forward by government advertising is still just an ambition: while pools and concert venues may be open to green pass holders, dining rooms and restaurants aren’t open yet—for anybody. This story is part of the Pandemic Technology Project , supported by the Rockefeller Foundation. 
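Neither Israel's green pass app nor PathCheck's card design is open for inspection here, so the sketch below only illustrates the selective-disclosure idea Raskar describes: the issuer signs two payloads of different detail, and each string could be rendered as its own QR code, one for the venue gatekeeper and one for a clinic or airline. The field names and the HMAC-based tag are placeholders; a real scheme would use public-key signatures so verifiers never hold the issuer's secret.

```python
import hashlib
import hmac
import json

ISSUER_KEY = b"demo-issuer-secret"   # placeholder; a real issuer would use proper PKI

def signed_payload(fields: dict) -> str:
    """Serialize fields and append an issuer tag so a verifier can check integrity."""
    body = json.dumps(fields, sort_keys=True)
    tag = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}|{tag}"

record = {
    "name": "A. Example",
    "vaccinated": True,
    "vaccine_date": "2021-02-10",
    "recovered": False,
}

# Minimal credential for a venue: status only, no dates or medical history.
venue_qr_data = signed_payload({"vaccinated": record["vaccinated"]})

# Fuller credential for a party that legitimately needs more detail.
clinic_qr_data = signed_payload(record)

def verify(payload: str) -> bool:
    body, tag = payload.rsplit("|", 1)
    expected = hmac.new(ISSUER_KEY, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

print(venue_qr_data)
print(verify(venue_qr_data), verify(clinic_qr_data))
```

The design point is the one Raskar makes: the gatekeeper's credential carries only what the gatekeeper needs, and anything richer is a separate, deliberately shared artifact.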
by Cat Ferguson & Joshua Mitnick "
16
2,017
"The Georgia Runoff Election Doesn't Have a Paper Trail to Safeguard Against Hacks | WIRED"
"https://www.wired.com/story/georgia-runoff-election-hack-audit-vote"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Lily Hay Newman Security The Simple Fix That'd Help Protect Georgia From Election Hacks Getty Images Save this story Save Save this story Save Early voting in the runoff for Georgia’s Sixth District congressional seat kicked off May 30; election day itself comes on June 20. The race has garnered national attention as one in which Democrats could pick up a long-held Republican seat. It has also generated scrutiny, though, for taking place in a state with some of the most lax protections against electoral fraud, at a time when Russia has meddled freely in campaigns in the US and abroad. But Georgia's voting issues aren't rooted in any specific hacking threat. The problem instead lies in the state's inability to prove if fraud or tampering happened in the first place. By not deploying a simple paper backup system, Georgia opens itself up to one of the most damaging electoral outcomes of all: uncertainty. “You have an un-provable system," says Pamela Smith, president of Verified Voting, a group that promotes best practices at the polls. “It might be right, it might not be right, and that absence of authoritative confirmation is the biggest problem. It’s corrosive.” First, the good news. Unlike dozens of other states, Georgia officials maintain that Russian hackers did not compromise the state’s election infrastructure in any way during the 2016 presidential election. Georgia voting machines also don't connect to the internet, and the state says it tests them before, during, and after voting to confirm that they aren't running unapproved software, like viruses or other malicious programs. More Election Security Security Brian Barrett Security Andy Greenberg Cyber Espionage Andy Greenberg But researchers have demonstrated that hackers can compromise machines like those used in Georgia. The state has also suffered election security lapses, including a recent incident, detailed in Politico , in which a huge amount of sensitive election data sat exposed for many months in Georgia's unified election center at Kennesaw State University. These fresh developments feed longstanding concerns about the security of Georgia’s election infrastructure overall—and the need for paper backups so the system can be audited. “This has been an ongoing issue since 2002,” says Sara Henderson, policy director at Common Cause Georgia, a non-partisan group that advocates for government transparency. “Our machines haven’t been updated since 2005, they’re running on Windows 2000. It’s ridiculous. We need to ensure that we can verify and audit our election processes here in Georgia, which we can’t do right now.” Voting in the U.S. can be cumbersome enough as it is; you don’t hear a lot of nostalgia for the punch-card systems of yesteryear. Technologies like voter registry databases, touchscreen voting machines, and digital scanners can make voting protocols easy to use for voters, poll workers, and election officials alike. And it is possible to robustly secure this digital infrastructure. But when it comes to securing the vote experts have urged states around the US to go low-tech—at least in one respect. 
The best contingency in case something goes wrong (like, say, some-nation state meddles in the election)? Good ol’ paper backups that citizens verify when they cast their votes. By keeping paper copies on hand, election officials have the option to audit election results randomly or if they suspect a mishap, and recount as much of the vote as they need to without having to worry that a digital hack has impacted their backup. “If there is uncertainty after an election, either because of the possibility of tampering or just the possibility of error or malfunction, a paperless system like Georgia’s doesn’t have any way to go back to other evidence to figure out what really happened,” says Ed Felten, one of the security researchers who showed that the voting machine model used in Georgia can be hacked, and who served as deputy US chief technology officer from 2015 until this year. “That evidence is key to being able to resolve any kind of uncertainty or dispute that might arise.” Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Georgia joins just four other states—Delaware, Louisiana, South Carolina, and New Jersey—that only use digital voting throughout, though some individual counties elsewhere offer no paper trail as well. Georgia also declined Department of Homeland Security election defense consultation during the 2016 presidential race, citing concerns that DHS wanted to inappropriately federalize the election system. A few other states, like Indiana, followed suit. As a result, if an election hack does take place either in this runoff or in the future, Georgia officials will have nothing to fall back on that they can trust. You can trace Georgia's inability to audit back to 2000, when a dramatic recount in the US presidential election spurred many states to replace “punch card” voting systems with digital upgrades. The paper ballots had yielded too many "hanging chads" that made voter intention hard to suss out. In 2002, Georgia moved to standardize its election infrastructure across all counties by distributing AccuVote TS touch-screen voting machines. At the same time, the state abolished mechanisms for creating paper-vote backups. The direct-recording electronic voting machines that Georgia uses to this day do have an option to add a paper backup mechanism, but Georgia's 2005 paper pilot program ran into technical problems. It also didn't adequately protect the privacy of voters with disabilities. 'It’s not satisfactory after your airplane lands to say well, it didn’t crash this time.' — Security researcher Ed Felten So yes, adding a paper trail to Georgia’s election infrastructure would involve a full overhaul. But it could likely use one anyway, given that the current devices are approaching the end of their 20-year max lifespan. And the potential consequences of not adding a paper audit continue to escalate. “Every [election] we cross our fingers and hope that this is not the time that there’s an incident,” Felten says. 
“It’s really not satisfactory to say afterward that we didn’t have a problem this time, just in the same way that it’s not satisfactory after your airplane lands to say well, it didn’t crash this time.” In May, voting rights advocates filed an emergency motion to compel the use of paper ballots in the upcoming 6th District runoff. Superior Court Judge Kimberly Esmond Adams dismissed the case on Friday, though, concluding that to grant it would illegally interfere with Georgia Secretary of State Brian Kemp’s authority over state elections. Since the motion named Kemp as the defendant in his role as Secretary of State, Adams also noted that he is protected by Georgia’s sovereign immunity laws. But advocates of voting-system reform say that the case has still had a positive impact. “Lawsuits close to elections are very challenging,” Verified Voting's Smith says. “But it certainly does draw attention to the issue, and you can get some testimony on the record.” Georgia officials maintain that overhauling election infrastructure would be cost-prohibitive, but advocates point out that the 2017-2018 state budget increased by 3.5 percent, with none of that money earmarked for maintaining or replacing election systems. That's not for lack of awareness of the issues. A 2015 Brennan Center for Justice investigation asked Merle King, the executive director of Georgia's Center for Election Systems, how many jurisdictions would want to purchase new voting machines if they could. "They all would," she said. And that was before the extent of Russia's electoral interference ambitions came to light. Whatever the reason for the delay, Georgia’s elections remain vulnerable to manipulation, simply because there would be no recourse if the vote were somehow tainted. The way to safeguard the vote is clear; it seemingly lacks the will to do so. Senior Writer X Topics election hacking elections hacks National Affairs Dell Cameron Andy Greenberg Dell Cameron David Gilbert Andy Greenberg Dhruv Mehrotra Matt Burgess Andy Greenberg Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights. WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast. Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia "
17
2021
"Will Future Electric Vehicles Be Powered by Deep-Sea Metals? | WIRED"
"https://www.wired.com/story/will-future-electric-vehicles-be-powered-by-deep-sea-metals"
"Open Navigation Menu To revist this article, visit My Profile, then View saved stories. Close Alert Backchannel Business Culture Gear Ideas Science Security Merch To revist this article, visit My Profile, then View saved stories. Close Alert Search Backchannel Business Culture Gear Ideas Science Security Merch Podcasts Video Artificial Intelligence Climate Games Newsletters Magazine Events Wired Insider Jobs Coupons Eric Niiler Science Will Future Electric Vehicles Be Powered by Deep-Sea Metals? Greenpeace International activists aboard the Rainbow Warrior display banners reading “Stop Deep Sea Mining” in front of the Maersk Launcher, a ship chartered by DeepGreen, one of the companies spearheading the drive to mine the barely understood deep sea ecosystem. Photograph: Marten van Dijl/Greenpeace Save this story Save Save this story Save The push to build more electric vehicles to combat climate change rests on an inconvenient truth: The metals used in EV batteries are pretty dirty. From exploited child laborers digging cobalt in the Democratic Republic of Congo to toxic waste leaking from nickel mines in Indonesia, the sources of key ingredients to power climate-friendly transportation have been assailed by activists and led to lawsuits against the tech firms that use the metals. US and European carmakers have been looking for alternative sources of these materials that would allow them to bypass some of these troublesome practices, while avoiding having to buy batteries produced by global competitor China. They also want a piece of President Joe Biden’s new plan to spend $174 billion to promote electric cars and build new charging stations. Could materials mined from the deep sea be the answer? That’s what commercial mining firms and scientists are trying to determine this month during two separate expeditions to a remote part of the Pacific Ocean known as the Clarion-Clipperton Zone (CCZ). A potential treasure chest of metals waiting to be plucked is at stake: This region of water is the size of the continental US, and its floor is littered with potato-sized metallic nodules, each containing high concentrations of cobalt, nickel, copper, and manganese, which are used in EV batteries. (Lithium, another key component, is primarily mined from Australia.) These materials would all be harvested as minerals, then refined into metals that could be used in batteries, usually by adding an oxide. Of course, the trick is getting the nodules off the bottom, which is 12,000 to 18,000 feet deep, without killing the creatures that live there or the fish that swim above. For the next few weeks, the two expeditions will be traversing the CCZ to test undersea mining technologies and how much damage they cause. A 295-foot supply ship called the Maersk Launcher is hosting Canada-based mining firm DeepGreen and a crew of independent scientists. Another expedition is operating in a separate section of the zone to test a bottom-crawling mechanical harvester called the Patania II operated by Global Sea Mineral Resource (GSR), a subsidiary of the Belgian dredging firm DEME Group. The harvester is designed to scoop up the precious minerals and is controlled from the surface vessel through a 3-mile-long tether that provides power and communication capabilities to it. The trial will test how well a smaller version of the robo-harvester can maneuver along the seafloor and pick up nodules. If successful, GSR will build a full-scale collector with a riser and lift system to bring the materials to the surface. 
A view of the Normand Energy retrieving the Patania II nodule collector visible (green), seen from the Rainbow Warrior. The vessel is chartered by Global Sea Mineral Resources (GSR), a Belgian company researching deep sea mining in the Pacific. Photograph: Marten van Dijl/Greenpeace Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Both expeditions will collect baseline environmental data on the kinds of marine organisms that live on the seafloor, the composition and chemistry of bottom sediments, and the flow of underwater currents at different depths. Knowing these control measurements will be important in determining whether such mining can be done without destroying the underwater habitat. “Our goal is to find out how much sediment the harvester will take off along with the nodules,” says Matthias Haeckel , a marine biochemist at the GEOMAR Helmholtz Centre for Ocean Research in Kiel, Germany, who is coordinating the environmental review of GSR’s activities for a project called MiningImpact. “That has never been done before.” Plumes of sediment can harm bottom-dwelling creatures like sponges and corals that form the base of the food chain in the deep-sea ecosystem. If the grit remains suspended in the water, it can also affect fish and other marine life. Haeckel and his team have about 50 different types of sensors to measure the sediment in both the water and on the seafloor surface. This will provide the first quantitative scientific evidence on the environmental consequences of nodule extraction under real-world mining scenarios, according to Haeckel. “We know that the sediment plume doesn’t rise very high, just 5 or 10 meters,” he says. “Now it's basically to understand how far the particles settle. We want to measure how thick of a layer it is and how it thins out over distance, so we can determine its impact.” DeepGreen and GSR have received exploration licenses from the International Seabed Authority , a UN-affiliated agency that controls access to the area’s mineral riches. Neither will be permitted to start actual mining until the authority adopts new environmental rules and issues extraction licenses. The agency has granted 30 exploration contracts involving 22 different countries and affiliated mining companies for deep-sea minerals. Gerard Barron, the founder and CEO of DeepGreen, says he’s committed to operating in an environmentally responsible manner. Barron says ocean minerals are a better option than sourcing from China or from mines in politically troubled regions. “Everyone realizes that moving to electric vehicles is very metal-intensive, and the question is, where the hell are they going to come from?” says Barron. 
“We represent an opportunity for America to get some independence.” Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg Barron says it takes 64 metric tons of rock to produce enough of the four minerals—a total of about 341 pounds—needed to make an EV battery and its wiring from a mine on land. But it takes only 6 tons of the polymetallic seafloor nodules to make the same amount, because the metals are more concentrated. The nodules formed over millions of years as naturally occuring minerals precipitated from both seawater and sediments and formed around cores that could have been microscopic bits of debris, rock, bone or even pieces of other nodules. They are more common in areas where there are low levels of dissolved oxygen, and under certain geological conditions, such as in the equatorial Pacific, which contains an estimated 21 billion tons of them. According to a company spokesperson, DeepGreen currently has about $570 million available to fund mining. The firm is considering sites in Texas, Quebec, and Norway for a processing plant to turn the nodules into usable materials for batteries, sites that are close to renewable energy sources as well as markets for the minerals. Barron says the processing of the seafloor nodules would be pretty simple. They are first dried in a rotary kiln, which is a type of electric furnace. “It’s the first step to separate the manganese from the nickel, cobalt, and copper,” he says. “They form a mat-like material for the battery grade material, whether it’s powders or metallic sulfates.” Of course, that processing is done on land. Operating a floating mining camp several days away from the nearest port has its own engineering uncertainties, such as bad weather that could shut down operations. And it raises several ecological questions. After the precious nodules are sucked from the harvester to the mining ship through a hose, leftover mud and sediments are released underwater. That could pose a risk to marine life, according to environmental groups. In addition, seafloor mining scars do not recover quickly. A 2019 study in the journal Nature found that seafloor tracks off the coast of Peru lasted 30 years, and that there were fewer species of plant and animal life in the disturbed areas. Another study published in 2016 found that one deep-sea octopus likes to lay its eggs on manganese nodules in that same region, a sign that mining could be a threat to those cephalopods. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg These studies indicate that not enough is known about the bottom habitat and whether it can recover from large-scale mining with mechanical harvesters, says Douglas McCauley, a professor of ocean science at the University of California, Santa Barbara. “Deep ocean ecosystems are the least resilient ecosystems on the planet,” McCauley says. “It’s a weird place, biologically speaking. The pace of life moves more slowly in the deep ocean than any other place. 
Species live a long time, and ecosystems take a long time to recover.” McCauley says the loss of habitat could destroy yet-unknown organisms that might provide new sources of biopharmaceuticals or disease-fighting compounds. “If you grind up the habitat, you are going to lose species—perhaps species we will never know,” he continues. Last month, carmakers BMW and Volvo pledged not to use EV batteries that use metals sourced from the ocean, citing the potential environmental concerns from deep-sea mining. DeepGreen’s Barron says the environmental monitoring tests will help guide development of harvesting technologies and will determine whether the effect is local or has a bigger footprint across the seafloor. He says DeepGreen will be testing its own harvesting device in 2022 with an eye to begin mining operations in 2024. All the data collected on both the DeepGreen and GSR monitoring expeditions will be published and reviewed by independent scientists. The European “ MiningImpact ” environmental monitoring project is funded by various European universities and academic labs, according to GEOMAR’s Haeckel. Scientists monitoring DeepGreen’s efforts are not paid either, and both research data sets will be shared publicly. GSR officials say they are devising ways to limit how far the sediment travels and will separate it from the nodules before they reach the surface. Commercial mining has to make both economic and environmental sense, says GSR’s head of sustainability, Samantha Smith. “If the science shows that deep-seabed mining has no advantages over the alternative, which is to rely solely on opening up new mines on land, then there won’t be any deep-sea mining industry, and we won't submit an application,” she says. Smith says that if all goes well, GSR won’t begin mining until 2028. It will take that long to do all the environmental tests as well as engineering trials. Technicians at GSR are considering varying the suction on the harvester to limit its effect on the seafloor, just as how turning down the power dial on a household vacuum cleaner changes how hard it sucks up dirt from different surfaces. Business What Sam Altman’s Firing Means for the Future of OpenAI Steven Levy Business Sam Altman’s Sudden Exit Sends Shockwaves Through OpenAI and Beyond Will Knight Gear Humanity’s Most Obnoxious Vehicle Gets an Electric (and Nearly Silent) Makeover Boone Ashworth Security The Startup That Transformed the Hack-for-Hire Industry Andy Greenberg For his part, UC Santa Barbara’s McCauley says that if the studies show that the mining can take place without significant habitat destruction, he would support it. “I want good data to answer these questions,” he says. “If it turns out that there is no harm and it’s an innocuous activity, I would have no problem with it.” Still, McCauley cautions that long-term effects of deep-sea mining might not be understood for several decades. “We don’t have those answers, and we won’t get them in the time horizon that the mining companies have for their operations,” he says. Update 4-14-2021 4:50 pm EST: This story was updated to correct information about how sediments collected by the underwater harvester would be released. 📩 The latest on tech, science, and more: Get our newsletters ! The buzzy, chatty, out-of-control rise of Clubhouse In Brazil’s favelas, esports is an unlikely source of hope Physicists learn to superfreeze antimatter ( hint: pew pew! 
) AI could enable “swarm warfare” for tomorrow's fighter jets Bed tricks, cod, and the hidden history of catfishing 👁️ Explore AI like never before with our new database 🎮 WIRED Games: Get the latest tips, reviews, and more 📱 Torn between the latest phones? Never fear—check out our iPhone buying guide and favorite Android phones Topics mining Electric Vehicles Batteries marine science oceans environment Ramin Skibba Matt Simon Matt Simon Amit Katwala Grace Browne Ramin Skibba Jim Robbins Matt Simon Facebook X Pinterest YouTube Instagram Tiktok More From WIRED Subscribe Newsletters Mattresses Reviews FAQ Wired Staff Coupons Black Friday Editorial Standards Archive Contact Advertise Contact Us Customer Care Jobs Press Center RSS Accessibility Help Condé Nast Store Do Not Sell My Personal Info © 2023 Condé Nast. All rights reserved. Use of this site constitutes acceptance of our User Agreement and Privacy Policy and Cookie Statement and Your California Privacy Rights. WIRED may earn a portion of sales from products that are purchased through our site as part of our Affiliate Partnerships with retailers. The material on this site may not be reproduced, distributed, transmitted, cached or otherwise used, except with the prior written permission of Condé Nast. Ad Choices Select international site United States LargeChevron UK Italia Japón Czech Republic & Slovakia "

AI/Tech Dataset

This dataset is a collection of AI/tech articles scraped from the web.

It is hosted on HuggingFace Datasets, which makes it easy to load and work with.

To load the dataset

1. Install HuggingFace Datasets

pip install datasets

2. Load the dataset

from datasets import load_dataset

dataset = load_dataset("siavava/ai-tech-articles")

# optionally, convert it to a pandas dataframe:
df = dataset["train"].to_pandas()
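
Once converted, the dataframe can be inspected like any other pandas object. A minimal sketch of a few sanity checks, assuming the columns suggested by the preview rows above (id, year, title, url, text); adjust the names if your copy differs:

print(df.shape)                  # number of rows and columns
print(df.columns.tolist())       # column names (assumed: id, year, title, url, text)

# count articles per year and peek at the first title
print(df["year"].value_counts().sort_index())
print(df.loc[0, "title"])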

You do not need to clone this repo. HuggingFace will download the dataset the first time you load it and cache it locally, so it will not re-download it again unless it detects a change upstream.
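
If you want to control where that cache lives, or force a fresh download after an upstream change, load_dataset accepts the standard cache_dir and download_mode arguments. A minimal sketch (the cache path here is just an example, not part of this dataset):

from datasets import load_dataset

# store the cache in a custom location and force a re-download
dataset = load_dataset(
    "siavava/ai-tech-articles",
    cache_dir="./hf_cache",               # hypothetical local cache directory
    download_mode="force_redownload",     # skip the cached copy and fetch again
)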

File Structure

  • analytics.ipynb - Notebook containing some details about the dataset.
  • example.ipynb - A minimal notebook that loads in the dataset and converts to Pandas.
  • raw.csv - The raw data, in CSV format.
  • data/*.parquet - Compressed Parquet files containing the data (see the sketch after this list for reading these files directly).
  • For raw text files, see the scraper repo on GitHub.
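
If you prefer to work with the files themselves rather than going through load_dataset, you can read them with pandas once you have a local copy (for example, after cloning the repo or downloading the files from the Hub). A minimal sketch, assuming raw.csv and the data/*.parquet shards sit in the working directory and that pandas plus pyarrow (or fastparquet) are installed:

import glob
import pandas as pd

# read the raw CSV dump
df_csv = pd.read_csv("raw.csv")

# or read the compressed Parquet shards and stitch them together
parquet_files = sorted(glob.glob("data/*.parquet"))
df_parquet = pd.concat(
    (pd.read_parquet(path) for path in parquet_files),
    ignore_index=True,
)

print(len(df_csv), len(df_parquet))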