Column                   Type              Length / values
id                       stringlengths     1 to 169
pr-title                 stringlengths     2 to 190
pr-article               stringlengths     0 to 65k
pr-summary               stringlengths     47 to 4.27k
sc-title                 stringclasses     2 values
sc-article               stringlengths     0 to 2.03M
sc-abstract              stringclasses     2 values
sc-section_names         sequencelengths   0 to 0
sc-sections              sequencelengths   0 to 0
sc-authors               sequencelengths   0 to 0
source                   stringclasses     2 values
Topic                    stringclasses     10 values
Citation                 stringlengths     4 to 4.58k
Paper_URL                stringlengths     4 to 213
News_URL                 stringlengths     4 to 119
pr-summary-and-article   stringlengths     49 to 66.1k
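A minimal sketch of loading rows with these columns via the Hugging Face datasets library follows. The preview does not name the dataset repository or its splits, so the repository ID and the "train" split below are placeholders, not real identifiers.

```python
# Hypothetical loading example: "org/press-release-dataset" and the "train" split are
# placeholders because this preview does not give the real dataset ID or split names.
from datasets import load_dataset

ds = load_dataset("org/press-release-dataset")  # replace with the actual dataset ID
row = ds["train"][0]                            # assumes a "train" split exists

# Column names follow the schema above.
print(row["pr-title"])
print(row["pr-summary"][:200])
print(row["source"], row["Topic"])
```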
652
Microsoft Targets 50,000 Jobs with LinkedIn 'Re-skilling' Effort
Microsoft announced its intent to hire 50,000 people for jobs requiring technology skills over the next three years, as part of a broader campaign with professional networking site LinkedIn to re-skill workers affected by the pandemic for new fields. Microsoft said the placements will be within its "ecosystem" of companies that utilize or help sell its products. The push began late last year as pandemic-related business closures had a greater impact on service workers than on technology and other white-collar employees who could work from home. LinkedIn offered many paid digital skills training courses for free, ranging from software development to data analysis to financial analysis. The site said it will extend the free courses until year's end, while Microsoft and LinkedIn estimate that total enrollees have reached 30.7 million, up from an expected 25 million.
[]
[]
[]
scitechnews
None
None
None
None
653
Researchers Use AI to Show Multidimensional Imaging of Biological Processes
UCLA bioengineers and colleagues have created a new imaging system that advances dynamic imaging microscopy with artificial intelligence. The new system can reveal the details of biological processes in tiny tissue samples at a resolution of two thousandths of a millimeter and in slow motion at 200 frames per second. A study outlining the advance was recently published in Nature Methods. A recurring challenge in biology has been the extraction of spatiotemporal information from cell samples, as many millisecond-long, transient cellular processes occur in 3D tissues and across long time scales in space. Dynamics, such as flowing blood cells in the developing heart chambers, or rapidly moving neurons in the brain, are difficult to acquire as they require extremely high imaging speed, which remains an unmet challenge for existing microscopy techniques. To tackle this problem, the researchers adopted a type of computational imaging tool, named light-field microscopy for 3D imaging, and enhanced it with a deep-learning neural network, which is a type of artificial intelligence-powered computing system modeled after how the human brain learns. "This new system allows us to see biological events live in what is essentially five dimensions - the three dimensions of space, plus time and the molecular level dynamics as highlighted by color spectra," said Dr. Tzung Hsiai, UCLA's Maud Cady Guthman Professor of Cardiology. "For doctors and scientists, this could reveal the fine details of what's happening in microscopic spaces and over millisecond-length time scales in a way that has never been done before. This advance can go a long way in helping find new insights to understand and treat diseases." The new tool delivered volumetric imaging at 200 cubic frames per second, revealing the transient processes inside a cell volume that measured 0.25 x 0.25 x 0.15 millimeters, or smaller than a grain of salt. "Different from conventional microscopy, the tool reconstructed the 3D biological sample based on one snapshot through post-processing instead of scanning in the captured stage. The resulting temporal resolution of the images was drastically improved," said lead author Zhaoqiang Wang, a doctoral student in bioengineering at the UCLA Samueli School of Engineering and member of Hsiai's laboratory. Compared to previously used light-field microscopy methods, Wang said the new technique also adopted a deep-learning technology to reconstruct images through a trained neural network model and thereby achieved better spatial resolution, image quality and processing throughput. A neural network was first trained using 3D image stacks and corresponding light-field snapshots. The model was then used to infer the 3D reconstruction directly from the experimental light field, which records the dynamic process. In a demonstration involving free-moving roundworms (C. elegans), the team used fluorescent tags to correlate neural signals with the roundworms' motions. The researchers also recorded cardiac dynamics of embryonic zebrafish hearts, examining the flow of blood cells in synchrony with the contracting cardiomyocytes, the cells that make up the heart muscle. The three co-corresponding authors on the study are Hsiai, Peng Fei and Shangbang Gao. Fei and Gao are both with the Huazhong University of Science and Technology in China.
Hsiai, who directs the UCLA Cardiovascular Engineering Laboratory, holds faculty appointments in the Division of Cardiology and Department of Medicine at the UCLA David Geffen School of Medicine; the VA Greater Los Angeles Healthcare System; and in the Department of Bioengineering at UCLA Samueli. Other authors on the study include Yichen Ding of the UCLA Cardiovascular Engineering Laboratory; Lanxin Zhu, Hao Zhang, Guo Li, Chengqiang Yi, Yi Li and Yicong Yang of Huazhong University of Science and Technology; and Mei Zhen of the Mount Sinai Hospital at the University of Toronto in Canada. The research was supported by the National Institutes of Health, the Department of Veterans Affairs and research agencies in China.
Researchers at the University of California, Los Angeles (UCLA), China's Huazhong University, and Canada's University of Toronto designed a high-resolution dynamic imaging microscopy system enhanced with artificial intelligence. The team combined light-field microscopy for three-dimensional imaging with a deep learning neural network. The system was able to facilitate volumetric imaging at 200 cubic frames per second, exposing the transient processes within a cell volume smaller than a grain of salt. UCLA's Tzung Hsiai said, "This new system allows us to see biological events live in what is essentially five dimensions - the three dimensions of space, plus time and the molecular level dynamics as highlighted by color spectra."
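The article describes a train-then-infer workflow: a neural network is fitted on paired 3D image stacks and light-field snapshots, then reconstructs a volume from a single new snapshot. The sketch below illustrates only that workflow; the toy architecture, tensor sizes, and random placeholder data are assumptions, not the authors' published model.

```python
import torch
import torch.nn as nn

class LF2Volume(nn.Module):
    """Toy stand-in: maps a 2D light-field image (1, H, W) to a 3D stack (D, H, W)."""
    def __init__(self, depth_planes=31):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, depth_planes, 3, padding=1),  # one output channel per depth plane
        )

    def forward(self, lf):           # lf: (B, 1, H, W)
        return self.net(lf)          # (B, D, H, W), read as a reconstructed volume

model = LF2Volume()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)

# Random placeholders standing in for paired (light-field snapshot, ground-truth 3D stack) data.
lf_batch = torch.rand(4, 1, 128, 128)
vol_batch = torch.rand(4, 31, 128, 128)

for step in range(100):                                  # training on paired examples
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(lf_batch), vol_batch)
    loss.backward()
    opt.step()

with torch.no_grad():                                    # inference: one snapshot -> one volume
    volume = model(torch.rand(1, 1, 128, 128))
print(volume.shape)                                      # torch.Size([1, 31, 128, 128])
```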
[]
[]
[]
scitechnews
None
None
None
None
654
Partisan Media Sites May Not Sway Opinions, but Erode Trust in Mainstream Press
CHAMPAIGN, Ill. - Popular wisdom suggests that the internet plays a major role in shaping consumers' political attitudes in the U.S., and some recent studies blamed partisan news outlets' coverage for the increasing polarization of the nation's electorate. However, a study of 1,037 internet users during the 2018-19 U.S. midterm election found that online partisan media may have little direct impact on consumers' political beliefs and activities. Instead, the primary consequence of greater exposure to right- or left-leaning news media is the erosion of readers' trust in the mainstream press, said communication professor JungHwan Yang of the University of Illinois Urbana-Champaign. Yang is a co-author of the study, published in the Proceedings of the National Academy of Sciences. The paper was co-written by politics and public affairs professor Andrew Guess, of Princeton University; computational political scientist Pablo Barberá, of the University of Southern California; and Simon Munzert, a professor of data science and public policy at the Hertie School. The study participants, who were recruited from the data and analytics group YouGov's Pulse panel, allowed the researchers to survey them multiple times and agreed to install passive metering software on their laptop and desktop computers or tablets so the researchers could track their online activities. The researchers collected data on more than 19 million of the participants' website visits and their Twitter posts and follows. The study was novel in its combined use of real-world experimentation and computational social science techniques, Yang said. "Past studies that have shown links between partisan media and polarization mostly relied on small-scale controlled experiments or surveys," Yang said. "So it was not only difficult to observe people's online media use accurately, but also to disentangle whether participants were selecting news sources that aligned with their partisan predispositions or if the partisan media were making people's views more extreme. "In our study, we were able to track their online activities for an extended time period as well as assess their attitudes with surveys so we could actually see what information they were consuming and its political consequences." Yang said they used a "nudgelike" approach that subtly but naturally increased participants' exposure to two partisan websites during their daily online activities to demonstrate the importance of basic digital "opt-ins" in structuring people's information consumption. For a month, one-third of the participants were asked to set the default homepage on their web browser to the conservative outlet Fox News while another one-third set theirs to the left-leaning outlet HuffPost. The remaining participants, who were not asked to change anything, were assigned to the control group. Participants in the Fox News and HuffPost groups also were asked to subscribe to affiliated newsletters. The participants were interviewed seven times from July 2018-October 2019 and were asked about their news media consumption, levels of trust in the mainstream media, their approval of then-President Donald Trump and their opinions on a variety of foreign and domestic policy issues. Participants' views on immigration were of particular interest to the researchers because immigration was a topic of contentious debate during the election. Prior to the study, participants spent less than 34 minutes per week on news-related websites, according to the study.
During the first week of using their new homepages, people in the HuffPost group visited about one additional page on that site daily, amounting to nearly 50 seconds of additional browsing time. Their counterparts in the Fox News group visited three or four additional pages on that website each day, for a total of about two additional minutes. By the eighth week of the study, people in the Fox News group visited the site an additional 3.7 times each day, while people in the HuffPost group made an additional 0.4 visits daily on average. The increased exposure sparked no major changes in either group's feelings about the political candidates or toward the parties, their voting behaviors or their perceptions of polarization, although the HuffPost users' views about immigration became more liberal over time, Yang said. However, both groups' trust and confidence in the mainstream press significantly declined, an effect that emerged during the first several weeks of increased exposure and remained detectable a year later, the researchers found. "We saw a lowering in their overall trust of the media and that can promote polarization by making people less receptive to information that challenges their beliefs," Yang said. "If consumers do not believe the mainstream media, they will look for other news sources, and there are a lot of alternative sources these days. If that trend continues, over time they will have different information and a differing understanding of what's true and what's not. And that can have negative implications for democracy." A positive outcome of participants' increased exposure to the news media was that they were better informed about current events and more adept at distinguishing events that actually occurred from fictitious events on the surveys, the researchers found. The research was funded by the Volkswagen Foundation Computational Social Science Initiative, the Princeton University Committee on Research in the Humanities and Social Sciences, and the Center for International Studies at the University of Southern California.
Greater exposure to partisan media websites may not change readers' political views, but undermines their trust in the mainstream press, according to a multi-institutional analysis led by University of Illinois Urbana-Champaign (UIUC) researchers. The team studied 1,037 Internet users during the 2018-2019 U.S. midterm election, monitored by passive metering software. Data was compiled on more than 19 million of the study participants' site visits, as well as their Twitter posts and follows. UIUC's JungHwan Yang said the team applied a "nudgelike" strategy to boost participants' exposure to two partisan websites (Fox News and HuffPost), while a control group that received no such "nudges" did not change its online behavior. Yang said the researchers observed among partisan site visitors "a lowering in their overall trust of the media, and that can promote polarization by making people less receptive to information that challenges their beliefs."
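As a rough illustration of the kind of group comparison the study reports (declining media trust in the Fox News and HuffPost groups relative to control), the sketch below fits an ordinary least-squares model on simulated survey data. The column names, the simulated effect size, and the simple specification are all assumptions; the published analysis used a far richer panel design.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "group": rng.choice(["control", "fox", "huffpost"], size=n),
    "wave": rng.integers(1, 8, size=n),          # stand-in for the seven survey waves
})
# Simulated outcome: trust in the mainstream press is lower in both treatment groups.
df["trust"] = 5 - 0.3 * df["group"].isin(["fox", "huffpost"]) + rng.normal(0, 1, n)

model = smf.ols("trust ~ C(group) + wave", data=df).fit()
print(model.summary().tables[1])   # group coefficients approximate the change in trust vs. control
```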
[]
[]
[]
scitechnews
None
None
None
None
656
Getting the Inside Track on Street Design
Pedestrian movements are tricky to track, but now the first large-scale statistical analysis of pedestrian flow using anonymous phone data collected in three European capital cities, London, Amsterdam and Stockholm, has been conducted by researchers from KAUST with Swedish colleagues from Gothenburg. Analyzing the flow of pedestrians through city streets provides insights into how city design influences walking behavior. Studies of pedestrian flow inform new urban developments, enable designers to define quieter areas and "urban buzz" zones and reveal how spaces are used at different times. "In a previous study, we found strong links between the total number of people walking on a given street in one day and certain characteristics of the urban environment," says David Bolin at KAUST. Specifically, built density type, which is a variable based on the total floor space and ground space taken up by buildings on a street, correlated with the intensity of pedestrian flow, while the relative position of each street in a city - its "centrality" or street type - explained flow variations within each area. Many similar studies have been hampered by methodological inconsistencies and small datasets, but this one had a large dataset. "We took advantage of the power of large-scale data collection to determine if these same variables (density and street type) could explain both the full-day counts in different streets and the variations in flow over the day," says Bolin. "We developed a functional ANOVA model to explore our results." Data was collected over three weeks in October 2017 from detection devices on almost 700 street segments across 53 neighborhoods. The detectors collected anonymous signals from mobile phones traveling at under 6 kilometers per hour to differentiate pedestrians from people traveling on transport. "We chose streets that provided a wide mix of street types and density types from each city," says Bolin. Daily total pedestrian counts were influenced by built density, street type and a street's "attraction variables," such as the presence of local markets or public transport stops. Built density explained the fluctuations in flow across the day but street type did not. There were also differences between each city, especially in the highest density built-up areas, making it difficult to generalize the findings to other cities. The model predicted pedestrian flow for certain parts of the three cities better than others. "The results provide insights into the importance of street and density types in designing areas with different qualities," says Bolin. "Accurate predictions for other cities would require more data from multiple cities in different seasons."
Researchers at Saudi Arabia's King Abdullah University of Science and Technology (KAUST) and Sweden's Chalmers University of Technology used anonymous phone data to measure the influence of building density and street design on pedestrian behavior in London, Amsterdam, and Stockholm. Previous research determined that built density and street type correlated with pedestrian flow intensity and flow variations. KAUST's David Bolin said, "We took advantage of the power of large-scale data collection to determine if these same variables [density and street type] could explain both the full-day counts in different streets and the variations in flow over the day." They found built density, street type, and attraction variables like local markets affected total pedestrian counts; built density, unlike street type, explained shifts in flow throughout the day, and the model forecast pedestrian flow for some areas of the cities better than others.
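Two concrete steps are described above: detections moving slower than 6 km/h are treated as pedestrians, and built-density type and street type are tested as explanations of daily counts. The sketch below illustrates those two steps on invented data with an ordinary two-way ANOVA; the study itself fit a functional ANOVA over full daily flow curves, so this is a simplification, not the authors' model.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
detections = pd.DataFrame({
    "segment": rng.integers(0, 50, size=5000),       # invented street-segment IDs
    "speed_kmh": rng.uniform(0, 30, size=5000),      # invented movement speeds
})
pedestrians = detections[detections["speed_kmh"] < 6.0]        # drop vehicle/transit signals
counts = pedestrians.groupby("segment").size().rename("daily_count").reset_index()
counts["density_type"] = rng.choice(["low", "mid", "high"], size=len(counts))
counts["street_type"] = rng.choice(["local", "neighborhood", "city"], size=len(counts))

model = smf.ols("np.log1p(daily_count) ~ C(density_type) + C(street_type)", data=counts).fit()
print(sm.stats.anova_lm(model, typ=2))   # variance attributed to each factor
```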
[]
[]
[]
scitechnews
None
None
None
None
658
New Wave of 'Hacktivism' Adds Twist to Cybersecurity Woes
(Reuters) - At a time when U.S. agencies and thousands of companies are fighting off major hacking campaigns originating in Russia and China, a different kind of cyber threat is re-emerging: activist hackers looking to make a political point. Three major hacks show the power of this new wave of "hacktivism" - the exposure of AI-driven video surveillance being conducted by the startup Verkada, a collection of Jan. 6 riot videos from the right-wing social network Parler, and disclosure of the Myanmar military junta's high-tech surveillance apparatus. And the U.S. government's response shows that officials regard the return of hacktivism with alarm. An indictment last week accused 21-year-old Tillie Kottmann, a Swiss hacker who took credit for the Verkada breach, of a broad conspiracy. "Wrapping oneself in an allegedly altruistic motive does not remove the criminal stench from such intrusion, theft and fraud," Seattle-based Acting U.S. Attorney Tessa Gorman said. According to a U.S. counter-intelligence strategy released a year ago, "ideologically motivated entities such as hacktivists, leaktivists, and public disclosure organizations," are now viewed as "significant threats," alongside five countries, three terrorist groups, and "transnational criminal organizations." Earlier waves of hacktivism, notably by the amorphous collective known as Anonymous in the early 2010s, largely faded away under law enforcement pressure. But now a new generation of youthful hackers, many angry about how the cybersecurity world operates and upset about the role of tech companies in spreading propaganda, are joining the fray. And some former Anonymous members are returning to the field, including Aubrey Cottle, who helped revive the group's Twitter presence last year in support of the Black Lives Matter protests. Anonymous followers drew attention for disrupting an app that the Dallas police department was using to field complaints about protesters by flooding it with nonsense traffic. They also wrested control of Twitter hashtags promoted by police supporters. "What's interesting about the current wave of the Parler archive and Gab hack and leak is that the hacktivism is supporting antiracist politics or antifascism politics," said Gabriella Coleman, an anthropologist at McGill University, Montreal, who wrote a book on Anonymous. Gab, a social network favored by white nationalists and other right-wing extremists, has also been hurt by the hacktivist campaign and had to shut down for brief periods after breaches. Most recently, Cottle has been focused on QAnon and hate groups. "QAnon trying to adopt Anonymous and merge itself into Anonymous proper, that was the straw that broke the camel's back," said Cottle, who has held a number of web development and engineering jobs, including a stint at Ericsson. He found email data showing that people in charge of the 8kun image board, where the persona known as Q posted, were in steady contact with major promoters of QAnon conspiracies. The new-wave hacktivists also have a preferred place for putting materials they want to make public - Distributed Denial of Secrets, a transparency site that took up the mantle of WikiLeaks with less geopolitical bias. The site's collective is led by Emma Best, an American known for filing prolific freedom of information requests. Best's two-year-old site is coordinating access by researchers and media to a hoard of posts taken from Gab by unidentified hackers.
In an essay this week, Best praised Kottmann and said leaks would keep coming, not just from hacktivists but insiders and the ransomware operators who publish files when companies don't pay them off. "Indictments like Tillie's show just how scared the government is, and just how many corporations consider embarrassment a greater threat than insecurity," Best wrote. The events covered by the Kottmann indictment took place from November 2019 through January 2021. The core allegation is that the Lucerne software developer and associates broke into a number of companies, removed computer code and published it. The indictment also said Kottmann spoke to the media about poor security practices by the victims and stood to profit, if only by selling shirts saying things like "venture anticapitalist" and "catgirl hacker." But it was only after Kottmann publicly took credit for breaching Verkada and posted alarming videos from inside big companies, medical facilities and a jail that Swiss authorities raided their home at the behest of the U.S. government. Kottmann uses non-binary pronouns. "This move by the U.S. government is clearly not only an attempt to disrupt the freedom of information, but also primarily to intimidate and silence this newly emerging wave of hacktivists and leaktivists," Kottmann said in an interview with Reuters. Kottmann and their lawyer declined to discuss the U.S. charges of wire fraud for some of Kottmann's online statements, aggravated identity theft for using employee credentials, and conspiracy, which together are enough for a lengthy prison sentence. The FBI declined an interview request. If it seeks extradition, the Swiss would determine whether Kottmann's purported actions would have violated that country's laws. Kottmann was open about their disdain for the law and corporate powers-that-be. "Like many people, I've always been opposed to intellectual property as a concept and specifically how it's used to limit our understanding of the systems that run our daily lives," Kottmann said. A European friend of Kottmann's known as "donk_enby," a reference to being non-binary in gender, is another major figure in the hacktivism revival. Donk grew angry about conspiracy theories spread by QAnon followers on the social media app Parler that drove protests against COVID-19 health measures. Following a Cottle post about a leak from Parler in November, Donk dissected the iOS version of Parler's app and found a poor design choice. Each post bore an assigned number, and she could use a program to keep adding 1 to that number and download every single post in sequence. After the Jan. 6 U.S. Capitol riots, Donk shared links to the web addresses of a million Parler video posts and asked her Twitter followers to download them before rioters who recorded themselves inside the building deleted the evidence. The trove included not just footage but exact locations and timestamps, allowing members of Congress to catalogue the violence and the FBI to identify more suspects. Popular with far-right figures, Parler has struggled to stay online after being dropped by Google and Amazon. Donk's actions alarmed users who thought some videos would remain private, hindering its attempt at a comeback. In the meantime, protesters in Myanmar asked Donk for help, leading to file dumps that prompted Google to pull its blogging platform and email accounts from leaders of the Feb. 1 coup.
Donk's identification of numerous other military contractors helped fuel sanctions that continue to pile up. One big change from the earlier era of hacktivism is that hackers can now make money legally by reporting the security weaknesses they find to the companies involved, or taking jobs with cybersecurity firms. But some view so-called bug bounty programs, and the hiring of hackers to break into systems to find weaknesses, as mechanisms for protecting companies that should be exposed. "We're not going to hack and help secure anyone we think is doing something extremely unethical," said John Jackson, an American researcher who works with Cottle on above-ground projects. "We're not going to hack surveillance companies and help them secure their infrastructure." (This story corrects spelling to Kottmann from Hottmann, paragraphs 3, 16, 18-25)
Activist hackers looking to make political statements constitute emerging threats to U.S. cybersecurity. The U.S. government charged non-binary Swiss hacker Tillie Kottmann with conspiracy for their claimed exposure of artificial intelligence-powered corporate video surveillance by the startup Verkada. Hacktivists also exposed January 6 Capitol riot videos from the right-wing social network Parler, which Gabriella Coleman at Canada's McGill University said indicated support for antiracist or antifascism politics. Emma Best of the Distributed Denial of Secrets website said indictments like Kottmann's "show just how scared the government is, and just how many corporations consider embarrassment a greater threat than insecurity."
[]
[]
[]
scitechnews
None
None
None
None
660
Researchers Developed Backpack System to Guide Vision-Impaired Wearers
Researchers at the University of Georgia have developed a backpack system to help vision-impaired wearers understand and navigate their surroundings. The backpack uses a Luxonis OAK-D spatial camera, which has an on-chip edge AI processor and uses Intel's Movidius image processing tech. The 4K camera, which captures depth information as well as color images, is packed inside a vest or fanny pack. The system uses Intel's OpenVINO toolkit for inferencing, and it can run for up to eight hours, using a pocket-sized battery housed in the fanny pack. The backpack holds a lightweight computing device with a GPS unit. The researchers say their system can detect obstacles (including overhead ones) and tell the wearer where they are through audio prompts. It can also read traffic signs and identify changes in elevation. It can, for instance, inform the wearer that there's a stop sign by a crosswalk or let them know when there's a curb in front of them. A Bluetooth earpiece allows the wearers to control the system with their voice. They can ask it to describe the surroundings or save GPS locations with a specific name. The researchers plan to open source the project. They suggest that the system is unobtrusive and wouldn't attract attention when used in public. The downside is having to carry a backpack everywhere. Perhaps in the not-too-distant future, researchers will figure out a way to pack this kind of tech into a pair of smart glasses.
An artificial intelligence (AI) -powered backpack developed by University of Georgia (UGA) researchers can help vision-impaired wearers navigate their environment. The tool employs a Luxonis OAK-D spatial 4K camera equipped with an on-chip edge AI processor and Intel's Movidius image processing technology. The camera is bundled inside a vest or fanny pack, while Intel's OpenVINO toolkit is used for inferencing; the system can operate for up to eight hours via a pocket-sized battery, while the backpack also holds a lightweight computer with a global positioning system unit. The UGA researchers said the system can detect obstacles and relay the wearer's whereabouts through audio prompts, as well as reading traffic signs and identifying changes in elevation.
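The summary and article describe detections plus depth being turned into spoken prompts. The sketch below shows one plausible shape for that last step only: get_detections() is a hypothetical stub standing in for the OAK-D/OpenVINO pipeline, and pyttsx3 is just one offline text-to-speech option, not necessarily what the UGA system uses.

```python
import pyttsx3  # offline text-to-speech; the real system speaks through a Bluetooth earpiece

def get_detections():
    """Hypothetical stub for the camera/inference pipeline: (label, depth in meters, bearing)."""
    return [("stop sign", 4.0, "ahead"), ("curb", 1.2, "ahead"), ("person", 0.8, "left")]

def describe(detections, warn_within_m=1.5):
    prompts = []
    for label, depth_m, bearing in detections:
        if depth_m <= warn_within_m:
            prompts.append(f"Caution: {label} {depth_m:.1f} meters {bearing}")
        else:
            prompts.append(f"{label} {bearing}, about {depth_m:.0f} meters")
    return prompts

engine = pyttsx3.init()
for prompt in describe(get_detections()):
    engine.say(prompt)
engine.runAndWait()
```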
[]
[]
[]
scitechnews
None
None
None
None
661
Robot Learns to Tie Knots Using Only Two Fingers on Each Hand
Tetsuya Ogata and colleagues at Japan's Waseda University have taught an artificial intelligence-powered robot to tie knots around a box using just two fingers on each hand. The team first directed a two-armed robot via remote control to manually knot a piece of rope dozens of times, then combined data recorded by the arms with information from an overhead camera and proximity sensors on the fingers; half the rope was colored red and the other half blue to aid identification. The Waseda researchers used the combined data to train a neural network to replicate the task, and the robot was 95% successful in tying a bowknot with the colored rope, and 90% successful with a white rope for which it was not trained.
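The description above amounts to behavior cloning: teleoperated demonstrations (camera, fingertip proximity, arm state) are used to train a network that outputs the next motion. The sketch below shows that setup in miniature; the observation and action sizes and the random placeholder data are assumptions, and the real system likely uses a recurrent model over sensor sequences rather than this simple feed-forward policy.

```python
import torch
import torch.nn as nn

OBS_DIM = 64 + 4 + 14   # hypothetical: image features + proximity sensors + current joint angles
ACT_DIM = 14            # hypothetical: next joint command for both arms

policy = nn.Sequential(
    nn.Linear(OBS_DIM, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, ACT_DIM),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

# Random placeholders standing in for the recorded knot-tying demonstrations.
obs = torch.randn(1024, OBS_DIM)
act = torch.randn(1024, ACT_DIM)

for epoch in range(50):                      # supervised imitation of the demonstrations
    loss = nn.functional.mse_loss(policy(obs), act)
    opt.zero_grad()
    loss.backward()
    opt.step()

with torch.no_grad():
    next_command = policy(obs[:1])           # at run time: observation -> next joint command
```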
[]
[]
[]
scitechnews
None
None
None
None
662
'Smart Clothes' Can Measure Your Movements
In recent years there have been exciting breakthroughs in wearable technologies, like smartwatches that can monitor your breathing and blood oxygen levels. But what about a wearable that can detect how you move as you do a physical activity or play a sport, and could potentially even offer feedback on how to improve your technique? And, as a major bonus, what if the wearable were something you'd actually already be wearing, like a shirt or a pair of socks? That's the idea behind a new set of MIT-designed clothing that uses special fibers to sense a person's movement via touch. Among other things, the researchers showed that their clothes can actually determine things like whether someone is sitting, walking, or doing particular poses. The group from MIT's Computer Science and Artificial Intelligence Lab (CSAIL) says that their clothes could be used for athletic training and rehabilitation. With patients' permission they could even help passively monitor the health of residents in assisted-care facilities and determine if, for example, someone has fallen or is unconscious. The researchers have developed a range of prototypes, from socks and gloves to a full vest. The team's "tactile electronics" use a mix of more typical textile fibers alongside a small amount of custom-made functional fibers that sense pressure from the person wearing the garment. According to CSAIL graduate student Yiyue Luo, a key advantage of the team's design is that, unlike many existing wearable electronics, theirs can be incorporated into traditional large-scale clothing production. The machine-knitted tactile textiles are soft, stretchable, breathable, and can take a wide range of forms. "Traditionally it's been hard to develop a mass-production wearable that provides high-accuracy data across a large number of sensors," says Luo, lead author on a new paper about the project that is appearing in this month's edition of Nature Electronics. "When you manufacture lots of sensor arrays, some of them will not work and some of them will work worse than others, so we developed a self-correcting mechanism that uses a self-supervised machine learning algorithm to recognize and adjust when certain sensors in the design are off base." The team's clothes have a range of capabilities. Their socks predict motion by looking at how different sequences of tactile footprints correlate to different poses as the user transitions from one pose to another. The full-sized vest can also detect the wearers' pose, activity, and the texture of the contacted surfaces. The authors imagine a coach using the sensor to analyze people's postures and give suggestions on improvement. It could also be used by an experienced athlete to record their posture so that beginners can learn from them. In the long term, they even imagine that robots could be trained to learn how to do different activities using data from the wearables. "Imagine robots that are no longer tactilely blind, and that have 'skins' that can provide tactile sensing just like we have as humans," says corresponding author Wan Shou, a postdoc at CSAIL. "Clothing with high-resolution tactile sensing opens up a lot of exciting new application areas for researchers to explore in the years to come." The paper was co-written by MIT professors Antonio Torralba, Wojciech Matusik and Tomás Palacios, alongside PhD students Yunzhu Li, Pratyusha Sharma and Beichen Li, postdoc Kui Wu, and research engineer Michael Foshey. The work was partially funded by Toyota Research Institute.
Researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Lab (CSAIL) have developed clothing that incorporates special fibers to detect the wearer's movements. The clothes feature "tactile electronics" that can pinpoint whether the wearer is sitting, walking, or performing particular poses. Using these tactile textiles, the researchers have developed prototypes that range from socks and gloves to a full vest, which could be used for athletic training and rehabilitation or to determine whether patients in assisted-care facilities have fallen or are unconscious, among other things. CSAIL's Wan Shou said, "Clothing with high-resolution tactile sensing opens up a lot of exciting new application areas for researchers to explore in the years to come."
[]
[]
[]
scitechnews
None
None
None
None
Researchers at the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Lab (CSAIL) have developed clothing that incorporates special fibers to detect the wearer's movements. The clothes feature "tactile electronics" that can pinpoint whether the wearer is sitting, walking, or performing particular poses. Using these tactile textiles, the researchers have developed prototypes that range from socks and gloves to a full vest, which could be used for athletic training and rehabilitation or to determine whether patients in assisted-care facilities have fallen or are unconscious, among other things. CSAIL's Wan Shou said, "Clothing with high-resolution tactile sensing opens up a lot of exciting new application areas for researchers to explore in the years to come." In recent years there have been exciting breakthroughs in wearable technologies, like smartwatches that can monitor your breathing and blood oxygen levels. But what about a wearable that can detect how you move as you do a physical activity or play a sport, and could potentially even offer feedback on how to improve your technique? And, as a major bonus, what if the wearable were something you'd actually already be wearing, like a shirt or a pair of socks? That's the idea behind a new set of MIT-designed clothing that uses special fibers to sense a person's movement via touch. Among other things, the researchers showed that their clothes can actually determine things like whether someone is sitting, walking, or doing particular poses. The group from MIT's Computer Science and Artificial Intelligence Lab (CSAIL) says that their clothes could be used for athletic training and rehabilitation. With patients' permission they could even help passively monitor the health of residents in assisted-care facilities and determine if, for example, someone has fallen or is unconscious. The researchers have developed a range of prototypes, from socks and gloves to a full vest. The team's "tactile electronics" use a mix of more typical textile fibers alongside a small amount of custom-made functional fibers that sense pressure from the person wearing the garment. According to CSAIL graduate student Yiyue Luo, a key advantage of the team's design is that, unlike many existing wearable electronics, theirs can be incorporated into traditional large-scale clothing production. The machine-knitted tactile textiles are soft, stretchable, breathable, and can take a wide range of forms. "Traditionally it's been hard to develop a mass-production wearable that provides high-accuracy data across a large number of sensors," says Luo, lead author on a new paper about the project that is appearing in this month's edition of Nature Electronics. "When you manufacture lots of sensor arrays, some of them will not work and some of them will work worse than others, so we developed a self-correcting mechanism that uses a self-supervised machine learning algorithm to recognize and adjust when certain sensors in the design are off base." The team's clothes have a range of capabilities. Their socks predict motion by looking at how different sequences of tactile footprints correlate to different poses as the user transitions from one pose to another. The full-sized vest can also detect the wearers' pose, activity, and the texture of the contacted surfaces. The authors imagine a coach using the sensor to analyze people's postures and give suggestions on improvement. 
It could also be used by an experienced athlete to record their posture so that beginners can learn from them. In the long term, they even imagine that robots could be trained to learn how to do different activities using data from the wearables. "Imagine robots that are no longer tactilely blind, and that have 'skins' that can provide tactile sensing just like we have as humans," says corresponding author Wan Shou, a postdoc at CSAIL. "Clothing with high-resolution tactile sensing opens up a lot of exciting new application areas for researchers to explore in the years to come." The paper was co-written by MIT professors Antonio Torralba, Wojciech Matusik and Tomás Palacios, alongside PhD students Yunzhu Li, Pratyusha Sharma and Beichen Li, postdoc Kui Wu, and research engineer Michael Foshey. The work was partially funded by Toyota Research Institute.
663
Fire-Simulating Tool Could Improve In-Flight Fire Safety
Some of the most dangerous fires are the ones you don't see coming. That goes not only for fires in buildings but for those kilometers off the ground, aboard commercial airliners. Many aircraft have systems to detect fires early on, but fires that spark in their attics, or overhead compartments - spaces with curved ceilings, filled with air ducts, electrical wiring and structural elements - could potentially sneak past them. "Attic fires are less likely to occur than elsewhere in a plane, but they are hard to detect," said Haiqing Guo, a contract fire research scientist at the Federal Aviation Administration (FAA). "By the time you see it, it's too late." Fire detector placement in overhead compartments is particularly challenging for fire protection engineers as it is unclear how to predict where smoke will travel amid the irregularly shaped clutter. A fire-simulating computer model developed at the National Institute of Standards and Technology (NIST) could now offer some much-needed guidance thanks to recent updates. In a new study , a team of NIST and FAA researchers tested the tool against a real-world scenario, where fires burned inside a grounded airliner, and found that the software closely replicated measured temperatures and correctly identified hot spots in the attic. NIST's Fire Dynamics Simulator , or FDS, simulates the flow of heat and smoke produced by fires. Since its launch in 2000, the software has been used by engineers across the globe to design fire protection systems for buildings and for forensic reconstructions of real-life fires. In both cases, engineers use the software to learn how a fire would or did burn without having to perform full-scale tests first, which are costly and sometimes impractical to run. FDS can reliably model the behavior of a fire in the presence of flat surfaces and block-like objects. This capability is good enough for the lion's share of scenarios, as most rooms are rectangular in shape. But curved surfaces, such as uneven terrain outdoors or the ceilings of trains and planes, have sometimes thrown the software for a loop. To manage this limitation, engineers using past iterations of FDS would approximate curved surfaces with small boxes, but a new version can do better. A recent update allows FDS to understand smoother surfaces made of triangles, bringing its simulations closer to reality in certain cases. While researching aircraft fire safety at FAA, Guo became intimately familiar with the puzzle of detecting attic fires. Upon learning of FDS and its new capabilities, he believed he had found a tool that could help crack the case, and he reached out to NIST's fire researchers. NIST and FAA formed a team to test the software by comparing its simulated data to real data collected in the overhead space of a commercial airliner parked at the FAA William J. Hughes Technical Center . The team placed a gas burner in either the front or rear of the space and - with five firefighters on standby - lit a small flame, which represented an attic fire in its earliest stages. They also arranged 50 temperature sensors throughout the space to capture how the hot smoke traversed the attic's complex terrain. The study's authors produced a map of the overhead compartment via Light Detection and Ranging, or lidar, a technique that employs laser light to measure distances in three dimensions. 
With the lidar information as a blueprint and hundreds of thousands of triangles as digital building blocks, they constructed a digital version of the space as a setting for the FDS simulations. The team ran and compared both the experiments and simulations, finding a general agreement between the two. A layer of hot gas took shape near the ceiling in both scenarios, with the same pockets of hot air having formed between the metal ribs that lined the ceiling above the gas burner. The initial jumps in temperature occurred at almost the same time between data sets. The temperature values themselves were similar too, with the simulated heat near the ceiling landing within 5 C (9 F) of the measured values on average. "This level of disagreement between model and experiment is typical for full-scale tests, so the model results are reasonable," said NIST chemical engineer Randall McDermott, a co-author of the report. "Ultimately, our goal is to be within experimental uncertainty. So, a bit more work is needed to track down the sources of error in this particular case." These results show that the new FDS can capture several traits of a real overhead compartment fire and suggest it could, with further development, become a reliable tool for fire protection engineers designing aircraft systems in the future. The team seeks to take FDS down that path, testing it against fires in differently shaped attics and examining whether the tool can replicate other aspects of fire in these spaces, such as smoke concentration, another important metric for fire detection. Further down the road, the researchers say FDS could be useful for learning not only how to detect fires, but how to put them out as well. By modeling fire suppression systems, such as fire sprinklers, engineers could gather valuable details about how to extinguish or slow the spread of fires. FDS may also demonstrate how prospective fire suppression agents would flow and mix with smoke in irregularly shaped spaces. Performing these virtual tests would help researchers to identify new chemicals or systems well suited for the job and provide insights on how to best implement them.
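As an illustration of the kind of model-versus-experiment comparison described above, the following sketch computes the average absolute deviation between simulated and measured temperature traces at matching sensor locations. The data here are synthetic stand-ins; the actual study compared FDS output against 50 thermocouples in the aircraft's overhead space.

```python
# Illustrative comparison of simulated vs. measured ceiling temperatures.
# Synthetic data only; sensor count and time base are assumptions.
import numpy as np

rng = np.random.default_rng(1)

n_sensors, n_times = 50, 600                # 50 sensors, one sample per second
t = np.linspace(0.0, 600.0, n_times)

# Toy "measured" traces: ambient 20 C rising toward a hot-gas-layer plateau.
measured = 20.0 + 60.0 * (1.0 - np.exp(-t / 120.0))[None, :] \
           + rng.normal(0.0, 1.0, (n_sensors, n_times))

# Toy "simulated" traces: same trend with a small systematic offset.
simulated = measured + rng.normal(2.0, 2.0, (n_sensors, n_times))

# Mean absolute deviation per sensor, then averaged over the array.
mad_per_sensor = np.abs(simulated - measured).mean(axis=1)
print(f"average |T_sim - T_meas| over all sensors: {mad_per_sensor.mean():.1f} C")
print(f"worst sensor: {mad_per_sensor.max():.1f} C")
```

In the reported experiments this kind of comparison landed within about 5 C (9 F) on average near the ceiling; the code above only shows the bookkeeping, not the physics.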
Scientists at the U.S. National Institute of Standards and Technology (NIST) and Federal Aviation Administration (FAA) tested a fire-simulating computer model against a real-world scenario for a grounded commercial airliner. The Fire Dynamics Simulator (FDS) models the flow of heat and smoke generated by fires, and the NIST-FAA team built a digital version of the airliner space as an environment for FDS simulations. Real-world experiments and FDS simulations generally correlated in terms of measured temperatures and hot spots. The results indicated the upgraded FDS can capture several properties of an actual overhead compartment fire, and suggested it could be further developed into a reliable tool for fire protection engineers designing aircraft systems.
[]
[]
[]
scitechnews
None
None
None
None
Scientists at the U.S. National Institute of Standards and Technology (NIST) and Federal Aviation Administration (FAA) tested a fire-simulating computer model against a real-world scenario for a grounded commercial airliner. The Fire Dynamics Simulator (FDS) models the flow of heat and smoke generated by fires, and the NIST-FAA team built a digital version of the airliner space as an environment for FDS simulations. Real-world experiments and FDS simulations generally correlated in terms of measured temperatures and hot spots. The results indicated the upgraded FDS can capture several properties of an actual overhead compartment fire, and suggested it could be further developed into a reliable tool for fire protection engineers designing aircraft systems. Some of the most dangerous fires are the ones you don't see coming. That goes not only for fires in buildings but for those kilometers off the ground, aboard commercial airliners. Many aircraft have systems to detect fires early on, but fires that spark in their attics, or overhead compartments - spaces with curved ceilings, filled with air ducts, electrical wiring and structural elements - could potentially sneak past them. "Attic fires are less likely to occur than elsewhere in a plane, but they are hard to detect," said Haiqing Guo, a contract fire research scientist at the Federal Aviation Administration (FAA). "By the time you see it, it's too late." Fire detector placement in overhead compartments is particularly challenging for fire protection engineers as it is unclear how to predict where smoke will travel amid the irregularly shaped clutter. A fire-simulating computer model developed at the National Institute of Standards and Technology (NIST) could now offer some much-needed guidance thanks to recent updates. In a new study , a team of NIST and FAA researchers tested the tool against a real-world scenario, where fires burned inside a grounded airliner, and found that the software closely replicated measured temperatures and correctly identified hot spots in the attic. NIST's Fire Dynamics Simulator , or FDS, simulates the flow of heat and smoke produced by fires. Since its launch in 2000, the software has been used by engineers across the globe to design fire protection systems for buildings and for forensic reconstructions of real-life fires. In both cases, engineers use the software to learn how a fire would or did burn without having to perform full-scale tests first, which are costly and sometimes impractical to run. FDS can reliably model the behavior of a fire in the presence of flat surfaces and block-like objects. This capability is good enough for the lion's share of scenarios, as most rooms are rectangular in shape. But curved surfaces, such as uneven terrain outdoors or the ceilings of trains and planes, have sometimes thrown the software for a loop. To manage this limitation, engineers using past iterations of FDS would approximate curved surfaces with small boxes, but a new version can do better. A recent update allows FDS to understand smoother surfaces made of triangles, bringing its simulations closer to reality in certain cases. While researching aircraft fire safety at FAA, Guo became intimately familiar with the puzzle of detecting attic fires. Upon learning of FDS and its new capabilities, he believed he had found a tool that could help crack the case, and he reached out to NIST's fire researchers. 
NIST and FAA formed a team to test the software by comparing its simulated data to real data collected in the overhead space of a commercial airliner parked at the FAA William J. Hughes Technical Center . The team placed a gas burner in either the front or rear of the space and - with five firefighters on standby - lit a small flame, which represented an attic fire in its earliest stages. They also arranged 50 temperature sensors throughout the space to capture how the hot smoke traversed the attic's complex terrain. The study's authors produced a map of the overhead compartment via Light Detection and Ranging, or lidar, a technique that employs laser light to measure distances in three dimensions. With the lidar information as a blueprint and hundreds of thousands of triangles as digital building blocks, they constructed a digital version of the space as a setting for the FDS simulations. The team ran and compared both the experiments and simulations, finding a general agreement between the two. A layer of hot gas took shape near the ceiling in both scenarios, with the same pockets of hot air having formed between the metal ribs that lined the ceiling above the gas burner. The initial jumps in temperature occurred at almost the same time between data sets. The temperature values themselves were similar too, with the simulated heat near the ceiling landing within 5 C (9 F) of the measured values on average. "This level of disagreement between model and experiment is typical for full-scale tests, so the model results are reasonable," said NIST chemical engineer Randall McDermott, a co-author of the report. "Ultimately, our goal is to be within experimental uncertainty. So, a bit more work is needed to track down the sources of error in this particular case." These results show that the new FDS can capture several traits of a real overhead compartment fire and suggest it could, with further development, become a reliable tool for fire protection engineers designing aircraft systems in the future. The team seeks to take FDS down that path, testing it against fires in differently shaped attics and examining whether the tool can replicate other aspects of fire in these spaces, such as smoke concentration, another important metric for fire detection. Further down the road, the researchers say FDS could be useful for learning not only how to detect fires, but how to put them out as well. By modeling fire suppression systems, such as fire sprinklers, engineers could gather valuable details about how to extinguish or slow the spread of fires. FDS may also demonstrate how prospective fire suppression agents would flow and mix with smoke in irregularly shaped spaces. Performing these virtual tests would help researchers to identify new chemicals or systems well suited for the job and provide insights on how to best implement them.
664
NJIT, Ben-Gurion University of the Negev Launch Institute for Future Technologies
New Jersey Governor Phil Murphy, President Daniel Chamovitz of Ben-Gurion University of the Negev (BGU) and President Joel S. Bloom of New Jersey Institute of Technology (NJIT) have unveiled a partnership that will create a world-class Institute for Future Technologies in New Jersey. Two powerhouse universities in the fields of cyber technologies and environmental engineering will come together to offer dual degrees and exciting new research opportunities. The Institute looks forward to receiving support and seed funding from the State of New Jersey. "NJIT is one of the state's premier STEM-focused universities, and BGU is one of the driving forces behind the success of Israel's technology economy," said Governor Murphy. "By joining together in this groundbreaking venture, NJIT and BGU will combine their expertise and track records in technological research and development to help strengthen the economic opportunity and tech leadership that I have long envisioned for our state." Acting Consul General of Israel in New York Israel Nitzan added, "This exciting partnership is another expression to the vibrant and fruitful relations of Israel and New Jersey. We share many commonalities, among them our spirit of innovation and creativity. We are proud of this collaboration between two top-notch academic institutions that will conquer the future of cybersecurity and environmental engineering." The Institute for Future Technologies will combine the academic and research capacities of two global institutions, creating the region's next hub of technological innovation. The NJIT-BGU partnership aims to provide bespoke cyber technologies and civil and environmental engineering education, conduct applied research and development, and support innovation and entrepreneurship through technological commercialization efforts. The Institute's mission is to deliver: 1. Education - Offerings ranging from dual NJIT-BGU graduate (Ph.D. and M.S./M.Sc.) degrees for local students to corporate training programs 2. Applied Research - Opportunities for Ph.D. students and research staff, based on corporate, government, and defense R&D projects and funding 3. Innovation and Entrepreneurship - Promoting technology transfer and commercialization of R&D and other intellectual property from NJIT, BGU and other sources, including launching ventures and spinoffs Operating out of both NJIT's satellite location in the waterfront section of Jersey City and its main campus in Newark, the Institute will be easily accessible from the World Trade Center and the Financial District in Lower Manhattan. As companies move and expand operations across the Hudson River into Jersey City and Newark, the Institute is positioned to serve the entire metropolitan region. One main component of the NJIT-BGU agreement is collaboration in civil and environmental engineering, including research in structures, buildings, materials, infrastructures, energy and environmentally conscious construction, water resources and air quality. Both BGU and NJIT have considerable interest and expertise in the development of systems and materials with minimal environmental impact, as well as the development, use, and regulation of natural or engineered systems for the remediation of contaminated environments (water, air, soil). This is in addition to design and preparation against earthquakes and other extreme events, including man-made and natural catastrophes. 
These civil engineering and infrastructure/water efforts intersect with the cybersecurity effort in protecting aquatic environments and other infrastructure systems from malicious actors and cyber attacks. The Institute represents BGU's first foray into the U.S. higher education system and signals its determination to offer the valuable research and insights it produces for the benefit of students outside of Israel. "Over the past five decades, our expertise and approaches developed in the Negev Desert have become increasingly relevant globally," said BGU President Prof. Daniel Chamovitz. "BGU and NJIT tackle the world's greatest challenges through our problem-oriented approaches. We are excited to offer students in the U.S. the opportunity to get a BGU-NJIT education in New Jersey and to welcome new faculty to the Institute." As the tristate region's largest generator of tech talent, NJIT has an annual economic impact of more than $2.8 billion on the State of New Jersey. The Institute for Future Technologies will be NJIT's next contribution to the region's rapidly expanding tech sector. "NJIT continuously is evolving to preserve and improve New Jersey's leadership in technological innovation," said NJIT President Joel S. Bloom. "International partnerships, especially with world-renowned and tech-driven universities, are a natural step in this direction and represent a major opportunity for NJIT and New Jersey. We look forward to importing some Israeli 'chutzpah' and Startup Nation culture to the region as we build our joint Institute together." BGU has an impressive international reputation in the fields of cybersecurity and data science. "I am excited about this new collaboration. It is wonderful to be able to share the academic excellence and unique characteristics of BGU and NJIT to create new synergy and partnerships. We look forward to a fruitful collaboration leading to many scientific breakthroughs and to the success of our graduates who will gain a dual international degree," said BGU's Vice President for Global Engagement, Prof. Limor Aharonson-Daniel. Craig Gotsman, distinguished professor and dean of NJIT's Ying Wu College of Computing, added, "The last year of COVID-driven reality has proven that computing, digital and cyber technologies are now more important than ever before. Leadership and innovation in this field can be the key to significant economic development, and NJIT is uniquely positioned to make it happen in New Jersey. We are fortunate to have a strong partner in BGU to help us achieve this goal as we learn from their achievements in Israel." "The launch of the Institute for Future Technologies solidifies New Jersey's status as a technology hub," said Jose Lozano, President & CEO, Choose New Jersey. "Our thriving ecosystem is leading the way in offering world-class education, developing top talent, and advancing research. The State of Innovation and the Startup Nation have always enjoyed a close relationship, and this collaboration will only strengthen our economic and cultural ties." Andrew H. Gross, Executive Director of the New Jersey-Israel Commission, added, "Today's partnership stands as a major achievement and opens a new chapter in New Jersey's growing and strategic relationship with Israel. This announcement between two premier and global academic institutions charts us even further on a common course towards innovation and academic leadership that will shape the future and bring significant benefits to both New Jersey and Israel." 
About NJIT One of only 35 polytechnic universities in the United States, New Jersey Institute of Technology (NJIT) prepares students to become leaders in the technology-dependent economy of the 21st century. NJIT's multidisciplinary curriculum and computing-intensive approach to education provide technological proficiency, business acumen and leadership skills. NJIT is one of only 131 universities rated an R1 research university by the Carnegie Classification, which indicates the highest level of research activity. NJIT conducts more than $160 million in research activity each year and has a $2.8 billion annual economic impact on the State of New Jersey. NJIT is ranked No. 1 nationally by Forbes for the upward economic mobility of its lowest-income students and is ranked in the top 100 colleges and universities nationally for the mid-career earnings of graduates, according to PayScale.com. NJIT also is ranked third in New Jersey and 74th among colleges and universities nationwide by the QS World University Ranking ® 2020. About BGU Ben-Gurion University of the Negev (BGU) is the fastest growing research university in Israel. With 20,000 students, 6,000 staff and faculty members, and three campuses in Beer-Sheva, Sde Boker and Eilat, BGU is an agent of change, fulfilling the vision of David Ben-Gurion, Israel's legendary first prime minister, who envisaged the future of Israel emerging from the Negev Desert. International students coming from over 75 countries are an important component on its vibrant campuses. The University is at the heart of Be'er-Sheva's transformation into an innovation district, where leading multinational corporations and start-ups eagerly leverage BGU's expertise to generate innovative R&D. BGU effects change, locally, regionally and internationally. With faculties in Engineering Sciences; Health Sciences; Natural Sciences; Humanities and Social Sciences; Business and Management; and Desert Studies, the University is a recognized national and global leader in many fields, actively encouraging multi-disciplinary collaborations with government and industry, and nurturing entrepreneurship and innovation in all its forms. BGU is also a university with a conscience, active both on the frontiers of science and in the community. Over a third of our students participate in one of the world's most developed community action programs. For more information, visit the BGU website.
The New Jersey Institute of Technology (NJIT) and Israel's Ben-Gurion University of the Negev (BGU) have partnered to launch the Institute for Future Technologies in New Jersey. The NJIT-BGU collaboration aims to deliver bespoke cyber technologies, civil and environmental engineering education, applied research and development, and innovation and entrepreneurship via technological commercialization. The Institute will operate from NJIT's main campus in Newark, as well as a satellite location in Jersey City. In announcing the partnership, New Jersey Gov. Phil Murphy said, "NJIT and BGU will combine their expertise and track records in technological research and development to help strengthen the economic opportunity and tech leadership that I have long envisioned for our state."
[]
[]
[]
scitechnews
None
None
None
None
The New Jersey Institute of Technology (NJIT) and Israel's Ben-Gurion University of the Negev (BGU) have partnered to launch the Institute for Future Technologies in New Jersey. The NJIT-BGU collaboration aims to deliver bespoke cyber technologies, civil and environmental engineering education, applied research and development, and innovation and entrepreneurship via technological commercialization. The Institute will operate from NJIT's main campus in Newark, as well as a satellite location in Jersey City. In announcing the partnership, New Jersey Gov. Phil Murphy said, "NJIT and BGU will combine their expertise and track records in technological research and development to help strengthen the economic opportunity and tech leadership that I have long envisioned for our state." New Jersey Governor Phil Murphy, President Daniel Chamovitz of Ben-Gurion University of the Negev (BGU) and President Joel S. Bloom of New Jersey Institute of Technology (NJIT) have unveiled a partnership that will create a world-class Institute for Future Technologies in New Jersey. Two powerhouse universities in the fields of cyber technologies and environmental engineering will come together to offer dual degrees and exciting new research opportunities. The Institute looks forward to receiving support and seed funding from the State of New Jersey. "NJIT is one of the state's premier STEM-focused universities, and BGU is one of the driving forces behind the success of Israel's technology economy," said Governor Murphy. "By joining together in this groundbreaking venture, NJIT and BGU will combine their expertise and track records in technological research and development to help strengthen the economic opportunity and tech leadership that I have long envisioned for our state." Acting Consul General of Israel in New York Israel Nitzan added, "This exciting partnership is another expression to the vibrant and fruitful relations of Israel and New Jersey. We share many commonalities, among them our spirit of innovation and creativity. We are proud of this collaboration between two top-notch academic institutions that will conquer the future of cybersecurity and environmental engineering." The Institute for Future Technologies will combine the academic and research capacities of two global institutions, creating the region's next hub of technological innovation. The NJIT-BGU partnership aims to provide bespoke cyber technologies and civil and environmental engineering education, conduct applied research and development, and support innovation and entrepreneurship through technological commercialization efforts. The Institute's mission is to deliver: 1. Education - Offerings ranging from dual NJIT-BGU graduate (Ph.D. and M.S./M.Sc.) degrees for local students to corporate training programs 2. Applied Research - Opportunities for Ph.D. students and research staff, based on corporate, government, and defense R&D projects and funding 3. Innovation and Entrepreneurship - Promoting technology transfer and commercialization of R&D and other intellectual property from NJIT, BGU and other sources, including launching ventures and spinoffs Operating out of both NJIT's satellite location in the waterfront section of Jersey City and its main campus in Newark, the Institute will be easily accessible from the World Trade Center and the Financial District in Lower Manhattan. 
As companies move and expand operations across the Hudson River into Jersey City and Newark, the Institute is positioned to serve the entire metropolitan region. One main component of the NJIT-BGU agreement is collaboration in civil and environmental engineering, including research in structures, buildings, materials, infrastructures, energy and environmentally conscious construction, water resources and air quality. Both BGU and NJIT have considerable interest and expertise in the development of systems and materials with minimal environmental impact, as well as the development, use, and regulation of natural or engineered systems for the remediation of contaminated environments (water, air, soil). This is in addition to design and preparation against earthquakes and other extreme events, including man-made and natural catastrophes. These civil engineering and infrastructure/water efforts intersect with the cybersecurity effort in protecting aquatic environments and other infrastructure systems from malicious actors and cyber attacks. The Institute represents BGU's first foray into the U.S. higher education system and signals its determination to offer the valuable research and insights it produces for the benefit of students outside of Israel. "Over the past five decades, our expertise and approaches developed in the Negev Desert have become increasingly relevant globally," said BGU President Prof. Daniel Chamovitz. "BGU and NJIT tackle the world's greatest challenges through our problem-oriented approaches. We are excited to offer students in the U.S. the opportunity to get a BGU-NJIT education in New Jersey and to welcome new faculty to the Institute." As the tristate region's largest generator of tech talent, NJIT has an annual economic impact of more than $2.8 billion on the State of New Jersey. The Institute for Future Technologies will be NJIT's next contribution to the region's rapidly expanding tech sector. "NJIT continuously is evolving to preserve and improve New Jersey's leadership in technological innovation," said NJIT President Joel S. Bloom. "International partnerships, especially with world-renowned and tech-driven universities, are a natural step in this direction and represent a major opportunity for NJIT and New Jersey. We look forward to importing some Israeli 'chutzpah' and Startup Nation culture to the region as we build our joint Institute together." BGU has an impressive international reputation in the fields of cybersecurity and data science. "I am excited about this new collaboration. It is wonderful to be able to share the academic excellence and unique characteristics of BGU and NJIT to create new synergy and partnerships. We look forward to a fruitful collaboration leading to many scientific breakthroughs and to the success of our graduates who will gain a dual international degree," said BGU's Vice President for Global Engagement, Prof. Limor Aharonson-Daniel. Craig Gotsman, distinguished professor and dean of NJIT's Ying Wu College of Computing, added, "The last year of COVID-driven reality has proven that computing, digital and cyber technologies are now more important than ever before. Leadership and innovation in this field can be the key to significant economic development, and NJIT is uniquely positioned to make it happen in New Jersey. We are fortunate to have a strong partner in BGU to help us achieve this goal as we learn from their achievements in Israel." 
"The launch of the Institute for Future Technologies solidifies New Jersey's status as a technology hub," said Jose Lozano, President & CEO, Choose New Jersey. "Our thriving ecosystem is leading the way in offering world-class education, developing top talent, and advancing research. The State of Innovation and the Startup Nation have always enjoyed a close relationship, and this collaboration will only strengthen our economic and cultural ties." Andrew H. Gross, Executive Director of the New Jersey-Israel Commission, added, "Today's partnership stands as a major achievement and opens a new chapter in New Jersey's growing and strategic relationship with Israel. This announcement between two premier and global academic institutions charts us even further on a common course towards innovation and academic leadership that will shape the future and bring significant benefits to both New Jersey and Israel." About NJIT One of only 35 polytechnic universities in the United States, New Jersey Institute of Technology (NJIT) prepares students to become leaders in the technology-dependent economy of the 21st century. NJIT's multidisciplinary curriculum and computing-intensive approach to education provide technological proficiency, business acumen and leadership skills. NJIT is one of only 131 universities rated an R1 research university by the Carnegie Classification, which indicates the highest level of research activity. NJIT conducts more than $160 million in research activity each year and has a $2.8 billion annual economic impact on the State of New Jersey. NJIT is ranked No. 1 nationally by Forbes for the upward economic mobility of its lowest-income students and is ranked in the top 100 colleges and universities nationally for the mid-career earnings of graduates, according to PayScale.com. NJIT also is ranked third in New Jersey and 74th among colleges and universities nationwide by the QS World University Ranking ® 2020. About BGU Ben-Gurion University of the Negev (BGU) is the fastest growing research university in Israel. With 20,000 students, 6,000 staff and faculty members, and three campuses in Beer-Sheva, Sde Boker and Eilat, BGU is an agent of change, fulfilling the vision of David Ben-Gurion, Israel's legendary first prime minister, who envisaged the future of Israel emerging from the Negev Desert. International students coming from over 75 countries are an important component on its vibrant campuses. The University is at the heart of Be'er-Sheva's transformation into an innovation district, where leading multinational corporations and start-ups eagerly leverage BGU's expertise to generate innovative R&D. BGU effects change, locally, regionally and internationally. With faculties in Engineering Sciences; Health Sciences; Natural Sciences; Humanities and Social Sciences; Business and Management; and Desert Studies, the University is a recognized national and global leader in many fields, actively encouraging multi-disciplinary collaborations with government and industry, and nurturing entrepreneurship and innovation in all its forms. BGU is also a university with a conscience, active both on the frontiers of science and in the community. Over a third of our students participate in one of the world's most developed community action programs. For more information, visit the BGU website.
665
TikTok Does Not Pose Overt Threat to U.S. National Security, Researchers Say
HONG KONG - The computer code underlying TikTok doesn't pose a national security threat to the U.S., according to a new study by university cybersecurity researchers. Released Monday by the University of Toronto cybersecurity group Citizen Lab, the report comes after government officials in multiple countries, including in the administration of former President Donald Trump , suggested the popular Chinese-owned short-video app could aid Beijing in spying overseas.
Cybersecurity researchers at the University of Toronto's Citizen Lab in Canada said TikTok's underlying computer code does not pose a national security threat to the U.S. The researchers said a technical analysis of the app, owned by China's ByteDance Ltd., found no evidence of "overtly malicious behavior." Although they determined that TikTok's data collection practices are no more intrusive than Facebook's, the researchers acknowledged there could be security issues they did not uncover. Further, ByteDance could be forced to turn data over to the Chinese government under the country's national security laws. ByteDance said it was committed to working with authorities to resolve their concerns.
[]
[]
[]
scitechnews
None
None
None
None
Cybersecurity researchers at the University of Toronto's Citizen Lab in Canada said TikTok's underlying computer code does not pose a national security threat to the U.S. The researchers said a technical analysis of the app, owned by China's ByteDance Ltd., found no evidence of "overtly malicious behavior." Although they determined that TikTok's data collection practices are no more intrusive than Facebook's, the researchers acknowledged there could be security issues they did not uncover. Further, ByteDance could be forced to turn data over to the Chinese government under the country's national security laws. ByteDance said it was committed to working with authorities to resolve their concerns. HONG KONG - The computer code underlying TikTok doesn't pose a national security threat to the U.S., according to a new study by university cybersecurity researchers. Released Monday by the University of Toronto cybersecurity group Citizen Lab, the report comes after government officials in multiple countries, including in the administration of former President Donald Trump , suggested the popular Chinese-owned short-video app could aid Beijing in spying overseas.
666
Computer Model Tracks Cellphone Data to Predict Covid Spread
Last year, Arti Ramesh and Anand Seetharam - both assistant professors in the Department of Computer Science at Binghamton University's Thomas J. Watson College of Engineering and Applied Science - published several studies that used data-mining and machine learning models to respond to the COVID-19 pandemic. One study used coronavirus data collected by Johns Hopkins University to show how and where infections could spread on a global scale. Another study used anonymous cell-phone data to track how residents of Rio de Janeiro traveled throughout the city before, during and after its strictest lockdown protocols. In their latest research, Ramesh and Seetharam have blended the two ideas into a new algorithm that narrows the geographic scope of their COVID predictions, making it more useful for regional and local officials looking to curb the spread of the virus. Returning their attention to Brazil's second-largest city, they correlated cell-phone data with coronavirus infection rates in Rio's municipal districts to build a mathematical model that predicts how cases would change in the next seven days for the different municipalities. The projections are based on current COVID rates combined with mobility trends to and from the districts with the most infections. "This is one of the first studies that has quantified mobility in a manner that it can be used to demonstrate how cases are going to spread," Seetharam said. "It's not just the number of cases in a particular region that contributes to future cases in that region." "These municipalities are units where the data can be recorded and collected easily," Ramesh said. "Every city or jurisdiction has these kinds of units where the numbers of cases are recorded separately or that are governed separately from other municipalities. That action is very helpful in converting the observations we are making into policies." Ramesh and Seetharam believe their prediction model could be mapped onto larger regions such as counties, states or even countries, giving political leaders and health officials the ability to better predict where COVID could spread the fastest. Short-term solutions might include partial lockdowns of certain neighborhoods or towns, such as New York restricting movement into and out of New Rochelle last March when the virus arrived in the U.S. "We wanted to build a model that can be translated easily into policy, one with subregions that are managed through independent jurisdictions," Ramesh said. "That way, someone can actually impose a lockdown on that level, as well as collect data about cases and predict on that level." Longer-term policies also could be made using the Binghamton team's prediction algorithm. "If you know there are a lot of cases in a particular region, you could think of a future world where this model could be used for some kind of city planning," Seetharam said. "You could create traffic flows in such a way that people try to avoid that region because of the inconvenience involved. That would lessen the spread of infection. It need not be anything invasive, but minor tweaks could be beneficial." Ramesh and Seetharam's study is called "Mobility-aware COVID-19 Case Prediction using Cellular Network Logs."
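The following is a minimal sketch, not the authors' actual model, of what a mobility-aware case predictor of this kind can look like: next-week cases in each district are regressed on that district's current cases plus mobility-weighted cases flowing in from other districts. The district names, mobility volumes, and case counts are invented for illustration.

```python
# Toy mobility-aware case prediction (illustrative only; not the paper's model).
import numpy as np

rng = np.random.default_rng(2)
districts = ["Centro", "Zona Sul", "Zona Norte", "Zona Oeste"]   # hypothetical
n = len(districts)

cases_now = np.array([120.0, 80.0, 200.0, 150.0])

# mobility[i, j]: relative volume of trips from district j into district i
mobility = rng.uniform(0.0, 0.2, (n, n))
np.fill_diagonal(mobility, 0.0)

# Features: own current cases and imported "mobility-weighted" cases.
X = np.column_stack([cases_now, mobility @ cases_now])

# Toy ground truth for next week (in practice: counts observed 7 days later).
cases_next = 1.1 * cases_now + 0.8 * (mobility @ cases_now) + rng.normal(0, 5, n)

# Fit the two coefficients by least squares, then predict.
coef, *_ = np.linalg.lstsq(X, cases_next, rcond=None)
pred = X @ coef
for d, p in zip(districts, pred):
    print(f"{d:10s} predicted next-week cases: {p:6.1f}")
```

The real study works with cellular network logs aggregated per municipal district and a richer model, but the core idea is the same: future cases depend on both local prevalence and the inflow of people from highly affected areas.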
Binghamton University's Arti Ramesh and Anand Seetharam have designed an algorithm to predict the spread of Covid-19 by tracking cellphone data. The scientists matched cellphone data with coronavirus infection rates in Rio de Janeiro's municipal districts to construct a mathematical model that predicts how cases would shift the next week for Rio's municipal districts; forecasts were based on current Covid rates blended with mobility trends between districts with the most cases. Seetharam said, "This is one of the first studies that has quantified mobility in a manner that it can be used to demonstrate how cases are going to spread. It's not just the number of cases in a particular region that contributes to future cases in that region."
[]
[]
[]
scitechnews
None
None
None
None
Binghamton University's Arti Ramesh and Anand Seetharam have designed an algorithm to predict the spread of Covid-19 by tracking cellphone data. The scientists matched cellphone data with coronavirus infection rates in Rio de Janeiro's municipal districts to construct a mathematical model that predicts how cases would shift the next week for Rio's municipal districts; forecasts were based on current Covid rates blended with mobility trends between districts with the most cases. Seetharam said, "This is one of the first studies that has quantified mobility in a manner that it can be used to demonstrate how cases are going to spread. It's not just the number of cases in a particular region that contributes to future cases in that region." Last year, Arti Ramesh and Anand Seetharam - both assistant professors in the Department of Computer Science at Binghamton University's Thomas J. Watson College of Engineering and Applied Science - published several studies that used data-mining and machine learning models to respond to the COVID-19 pandemic. One study used coronavirus data collected by Johns Hopkins University to show how and where infections could spread on a global scale. Another study used anonymous cell-phone data to track how residents of Rio de Janeiro traveled throughout the city before, during and after its strictest lockdown protocols. In their latest research, Ramesh and Seetharam have blended the two ideas into a new algorithm that narrows the geographic scope of their COVID predictions, making it more useful for regional and local officials looking to curb the spread of the virus. Returning their attention to Brazil's second-largest city, they correlated cell-phone data with coronavirus infection rates in Rio's municipal districts to build a mathematical model that predicts how cases would change in the next seven days for the different municipalities. The projections are based on current COVID rates combined with mobility trends to and from the districts with the most infections. "This is one of the first studies that has quantified mobility in a manner that it can be used to demonstrate how cases are going to spread," Seetharam said. "It's not just the number of cases in a particular region that contributes to future cases in that region." "These municipalities are units where the data can be recorded and collected easily," Ramesh said. "Every city or jurisdiction has these kinds of units where the numbers of cases are recorded separately or that are governed separately from other municipalities. That action is very helpful in converting the observations we are making into policies." Ramesh and Seetharam believe their prediction model could be mapped onto larger regions such as counties, states or even countries, giving political leaders and health officials the ability to better predict where COVID could spread the fastest. Short-term solutions might include partial lockdowns of certain neighborhoods or towns, such as New York restricting movement into and out of New Rochelle last March when the virus arrived in the U.S. "We wanted to build a model that can be translated easily into policy, one with subregions that are managed through independent jurisdictions," Ramesh said. 
"That way, someone can actually impose a lockdown on that level, as well as collect data about cases and predict on that level." Longer-term policies also could be made using the Binghamton team's prediction algorithm. "If you know there are a lot of cases in a particular region, you could think of a future world where this model could be used for some kind of city planning," Seetharam said. "You could create traffic flows in such a way that people try to avoid that region because of the inconvenience involved. That would lessen the spread of infection. It need not be anything invasive, but minor tweaks could be beneficial." Ramesh and Seetharam's study is called "Mobility-aware COVID-19 Case Prediction using Cellular Network Logs."
668
UCLA Researchers Develop Noninvasive AI Method to Inspect Live Cells, Gain Critical Data
Researchers at the UCLA Samueli School of Engineering have discovered a new artificial intelligence-based method to discern the properties of live biological cells without destroying them. The advance could enable laboratories to conduct drug-safety screening faster and more efficiently while improving quality control for cell therapies. The research was published today in Nature's Scientific Reports. "We want to know if a batch of live biological cells can be both viable and able to perform the functions we want them to. This noninvasive, AI-backed technique can infer the quality of those cells while keeping the entire batch intact," said study leader Neil Lin, an assistant professor of mechanical and aerospace engineering. "We envision this method could be widely adopted by many academic and industrial cell biology labs. And it could be especially important in cell therapies, where the cells themselves are valuable." Currently, cells are often characterized using antibody staining. This method requires cells to be isolated and dyed with fluorescent tags that light up when a target protein is present. Not only is such a process time-consuming - taking up to a full day to prepare, process and analyze the cells - it also kills the cells used for the analysis. The UCLA researchers' new technique allows for instant cell assessments while keeping the entire batch of cells intact. Employing an AI-powered deep-learning model, cells are viewed and a snapshot taken under a light-based microscope, also known as a brightfield microscope. While the model is basically the same as one that has been used in the movie industry to alter and enhance images, such as artificially aging a movie character, the UCLA team adapted the model and trained it to infer and identify antibody-labeled fluorescent images of cells. The optimized model analyzes the subtle differences in size and shape of cells, qualities not readily visible to human eyes, and utilizes that information for predicting the levels of proteins present. The processed image unveils information on existence of proteins and their whereabouts, much like a traditional stained sample would without sacrificing the cells. Moreover, this method may provide a more accurate assessment of the cells. While traditional staining methods can usually label a few different proteins, the new AI tool can predict as many proteins as the machine-learning model has been trained to identify. The group focused on mesenchymal stem cells that are essential for biological tissue regeneration in cell therapies as they can orchestrate multiple types of cells to form new tissues. These cells also hold promise for treating various inflammatory disorders as they can modulate the human immune system. The researchers noted the imaging technique could be further refined if they can obtain more training data on a wide range of cell types and features, such as the age of cells and their changes following drug use. The lead authors on the study are Sara Imboden, a visiting graduate student in mechanical engineering, and UCLA computer science graduate student Xuanqing Li. The other UCLA authors are bioengineering undergraduate student Brandon Lee, mechanical engineering graduate student Marie Payne and Cho-Jui Hsieh - an assistant professor of computer science who works on machine-learning algorithms. Lin leads the Living Soft Materials Research Laboratory at UCLA. 
He also has a faculty appointment in the bioengineering department and is a member of the Institute for Quantitative and Computational Biosciences. The research was supported by a UCLA SPORE in Prostate Cancer grant, the Eli and Edythe Broad Center of Regenerative Medicine and Stem Cell Research at UCLA, the California NanoSystems Institute at UCLA and the National Science Foundation.
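For readers who want a concrete picture of the underlying machine-learning setup, the sketch below frames brightfield-to-fluorescence prediction as image-to-image regression with a small convolutional network in PyTorch. This is not the authors' architecture; the layer sizes, the number of predicted protein channels, and the random stand-in images are assumptions for illustration only.

```python
# Conceptual sketch of brightfield-to-fluorescence prediction as
# image-to-image regression (illustrative; not the study's architecture).
import torch
import torch.nn as nn

class BrightfieldToFluorescence(nn.Module):
    def __init__(self, n_proteins=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_proteins, 3, padding=1),  # one channel per predicted protein
        )

    def forward(self, x):
        return self.net(x)

model = BrightfieldToFluorescence(n_proteins=3)
loss_fn = nn.MSELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-ins: 8 brightfield patches and matching 3-channel "stains".
brightfield = torch.rand(8, 1, 64, 64)
fluorescence = torch.rand(8, 3, 64, 64)

for step in range(5):                      # a few toy optimization steps
    pred = model(brightfield)
    loss = loss_fn(pred, fluorescence)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"step {step}: MSE = {loss.item():.4f}")
```

In practice, such a model would be trained on registered pairs of brightfield images and antibody-stained fluorescent images, so that at inference time the stain can be predicted without touching the cells.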
At the University of California, Los Angeles (UCLA) Samueli School of Engineering, researchers have developed a noninvasive artificial intelligence (AI) -based technique to analyze live biological cells. Using a deep learning model, the cells are viewed and a snapshot captured under a brightfield microscope. The UCLA team trained the model to deduce and identify antibody-labeled fluorescent cellular images, and to note subtle distinctions in size and shape in order to predict protein levels. This analysis yields data on protein concentrations and location without destroying the sample, and the AI tool can predict as many proteins as the model has been trained to identify. UCLA's Neil Lin said this method could be of use to academic and industrial cell biology laboratories, and "it could be especially important in cell therapies, where the cells themselves are valuable."
[]
[]
[]
scitechnews
None
None
None
None
At the University of California, Los Angeles (UCLA) Samueli School of Engineering, researchers have developed a noninvasive artificial intelligence (AI) -based technique to analyze live biological cells. Using a deep learning model, the cells are viewed and a snapshot captured under a brightfield microscope. The UCLA team trained the model to deduce and identify antibody-labeled fluorescent cellular images, and to note subtle distinctions in size and shape in order to predict protein levels. This analysis yields data on protein concentrations and location without destroying the sample, and the AI tool can predict as many proteins as the model has been trained to identify. UCLA's Neil Lin said this method could be of use to academic and industrial cell biology laboratories, and "it could be especially important in cell therapies, where the cells themselves are valuable." Researchers at the UCLA Samueli School of Engineering have discovered a new artificial intelligence-based method to discern the properties of live biological cells without destroying them. The advance could enable laboratories to conduct drug-safety screening faster and more efficiently while improving quality control for cell therapies. The research was published today in Nature's Scientific Reports. "We want to know if a batch of live biological cells can be both viable and able to perform the functions we want them to. This noninvasive, AI-backed technique can infer the quality of those cells while keeping the entire batch intact," said study leader Neil Lin, an assistant professor of mechanical and aerospace engineering. "We envision this method could be widely adopted by many academic and industrial cell biology labs. And it could be especially important in cell therapies, where the cells themselves are valuable." Currently, cells are often characterized using antibody staining. This method requires cells to be isolated and dyed with fluorescent tags that light up when a target protein is present. Not only is such a process time-consuming - taking up to a full day to prepare, process and analyze the cells - it also kills the cells used for the analysis. The UCLA researchers' new technique allows for instant cell assessments while keeping the entire batch of cells intact. Employing an AI-powered deep-learning model, cells are viewed and a snapshot taken under a light-based microscope, also known as a brightfield microscope. While the model is basically the same as one that has been used in the movie industry to alter and enhance images, such as artificially aging a movie character, the UCLA team adapted the model and trained it to infer and identify antibody-labeled fluorescent images of cells. The optimized model analyzes the subtle differences in size and shape of cells, qualities not readily visible to human eyes, and utilizes that information for predicting the levels of proteins present. The processed image unveils information on existence of proteins and their whereabouts, much like a traditional stained sample would without sacrificing the cells. Moreover, this method may provide a more accurate assessment of the cells. While traditional staining methods can usually label a few different proteins, the new AI tool can predict as many proteins as the machine-learning model has been trained to identify. The group focused on mesenchymal stem cells that are essential for biological tissue regeneration in cell therapies as they can orchestrate multiple types of cells to form new tissues. 
These cells also hold promise for treating various inflammatory disorders, as they can modulate the human immune system. The researchers noted the imaging technique could be further refined if they obtain more training data on a wide range of cell types and features, such as the age of cells and their changes following drug use. The lead authors on the study are Sara Imboden, a visiting graduate student in mechanical engineering, and UCLA computer science graduate student Xuanqing Li. The other UCLA authors are bioengineering undergraduate student Brandon Lee, mechanical engineering graduate student Marie Payne, and Cho-Jui Hsieh, an assistant professor of computer science who works on machine-learning algorithms. Lin leads the Living Soft Materials Research Laboratory at UCLA. He also holds a faculty appointment in the bioengineering department and is a member of the Institute for Quantitative and Computational Biosciences. The research was supported by a UCLA SPORE in Prostate Cancer grant, the Eli and Edythe Broad Center of Regenerative Medicine and Stem Cell Research at UCLA, the California NanoSystems Institute at UCLA, and the National Science Foundation.
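For readers who want a concrete sense of the approach, the sketch below shows the general idea in PyTorch: a small convolutional network trained with a pixel-wise loss to map brightfield snapshots to fluorescence channels. The architecture, layer sizes, and random stand-in data are illustrative assumptions only; this is not the UCLA team's published model.

```python
# Minimal sketch (assumption): predict fluorescence channels from brightfield
# images with a small convolutional network. Random tensors stand in for real
# microscopy data; this is NOT the published UCLA model.
import torch
import torch.nn as nn

class BrightfieldToFluorescence(nn.Module):
    def __init__(self, n_proteins=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_proteins, 3, padding=1),  # one output channel per protein stain
        )

    def forward(self, x):
        return self.net(x)

model = BrightfieldToFluorescence(n_proteins=3)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in batch: 8 grayscale brightfield crops and matching 3-channel stain images.
brightfield = torch.rand(8, 1, 64, 64)
fluorescence = torch.rand(8, 3, 64, 64)

for step in range(100):
    optimizer.zero_grad()
    pred = model(brightfield)
    loss = loss_fn(pred, fluorescence)  # pixel-wise regression toward the stain images
    loss.backward()
    optimizer.step()
```

In a real setting the random tensors would be replaced by paired brightfield and antibody-stained images, and the number of output channels would match the number of proteins the model is trained to predict.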
669
Optimization for Resource Management Using Multi-Agent Systems
Urban development is therefore becoming more important. The efficiency of existing real-world lift transport and road network systems can be measured with surveillance equipment such as cameras and global positioning system trackers. However, when new, never-before-seen solutions are to be built and deployed, the data required to design them does not yet exist and must be estimated. Simple mathematical models can be used for fast estimates of cars' energy consumption, the traffic flows of simple road networks and lift travelling schedules, but real-world situations are often much more sophisticated, and these simple models are unrealistic. For instance, they do not take into account uncertainties such as human behaviour and sudden changes in the environment. Hence, the results of such simple models can be far from reality. In this thesis, we use multi-agent systems to simulate real-world actors such as car drivers, creating smart artificial driver agents that simulate traffic in road networks. The agents behave as a driver would behave in the real world: for instance, they take the shortest route to their destination but may choose to re-route if they meet traffic congestion. In the artificial environment, it is easy to change the conditions of the road network system and observe the immediate effect on traffic flow. In this work, we performed two such case studies. First, we studied the effect of Singapore's drivers on the country's energy consumption, assuming that every car has become an electric car. Second, we studied the effect of removing bridges across Joensuu's Pielisjoki river on the traffic load. We observed that removing Pekkalansilta significantly increased traffic on both Sirkkalansilta and Suvantosilta, but surprisingly caused a slight decrease in the use of Itäsilta. Removing Itäsilta or Suvantosilta, however, did not affect the use of Pekkalansilta at all. Removing Itäsilta burdened Suvantosilta the most, while removing Suvantosilta would increase traffic on all of the other bridges. In Singapore there are many buildings of 40 storeys and higher, each with two to three elevators. These elevators serve families with children, elderly people in wheelchairs, and working-class people. The lifts have to run very efficiently to transport as many people as quickly as possible while consuming as little energy as possible. The lifts also need to be clever enough to work cohesively with one another to get the transporting job done. Real estate developers need to model this situation when they construct new buildings, but they often do not know enough about what kind of people will actually live in them, and yet the lifts need to operate efficiently for all kinds of residents from day one onwards. The results of the thesis are used as part of building Singapore as a Smart Nation. Many parts of Singapore's infrastructure will be digitally transformed to be intelligent and able to communicate with one another. However, no one has done this before, and there is no telling what the outcome will be. Therefore, it is all the more important to model and study all aspects of the urban environment, including traffic, to gain insight into plausible and possible outcomes. Singapore's Smart Nation initiatives and efforts are described in my most recent publication, 'Innovating services and digital economy in Singapore', published in 'Communications of the ACM', volume 6, page 58-59, 2020.
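As an illustration of the agent logic described above, the following minimal sketch (not the thesis code) simulates drivers on an invented road graph who take the shortest route and re-route when the segment ahead is congested. The graph, capacities, and congestion rule are assumptions for illustration only.

```python
# Toy illustration (assumption): driver agents on a small road graph take the
# shortest path and re-route when the next road segment is congested. The
# graph, capacities and congestion rule are invented for this sketch.
import networkx as nx

G = nx.DiGraph()
for u, v, length in [("A", "B", 2), ("B", "D", 2), ("A", "C", 3), ("C", "D", 3), ("B", "C", 1)]:
    G.add_edge(u, v, length=length, capacity=3, load=0)

class Driver:
    def __init__(self, origin, destination):
        self.pos = origin
        self.destination = destination
        self.route = nx.shortest_path(G, origin, destination, weight="length")

    def step(self):
        if self.pos == self.destination:
            return
        nxt = self.route[self.route.index(self.pos) + 1]
        edge = G[self.pos][nxt]
        if edge["load"] >= edge["capacity"]:
            # Congestion ahead: penalise the busy segment and look for a detour.
            edge["length"] += 1
            self.route = nx.shortest_path(G, self.pos, self.destination, weight="length")
            nxt = self.route[1]
            edge = G[self.pos][nxt]
        edge["load"] += 1
        self.pos = nxt

drivers = [Driver("A", "D") for _ in range(10)]
for tick in range(5):
    for e in G.edges:
        G.edges[e]["load"] = 0          # reset per-tick occupancy
    for d in drivers:
        d.step()
    print(tick, {e: G.edges[e]["load"] for e in G.edges})
```

The accumulating length penalty acts as a crude congestion price, so drivers gradually spill over onto the longer alternative route, which is the kind of emergent effect the thesis measures on real city networks.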
Currently, the candidate works as Head of Singapore's National Research Foundation (NRF), an organization directly under Singapore's Prime Minister's Office. Thomas Ho serves in the Services and Digital Economy directorate at NRF. His main role is to bridge research with businesses and government agencies, helping Singapore's universities and research laboratories find companies and government agencies for their technology deployments. Conversely, he also helps companies and government agencies build new capabilities by connecting them with academics and professors, to whom they can bring real-world problems. In both cases, the government may fund the translation work and capability development. Hence, the resource management work of the PhD helps Thomas Ho ensure that government resources are wisely spent. The doctoral dissertation of MSc Thomas Ho Chee Tat, entitled Optimization for resource management using multi-agent systems, will be examined at the Faculty of Science and Forestry on the 30th of March 2021 at 10 am online. The opponent in the public examination will be Head of People Flow Planning, Dr. Janne Sorsa, KONE Industrial Ltd, and the custos will be Professor Pasi Fränti, University of Eastern Finland. The public examination will be held in English.
Researchers at the University of Eastern Finland (UEF), testing the thesis that multi-agent systems can be used to optimize resource management, modeled car drivers as artificially intelligent agents to simulate traffic in road networks. The agents behave as drivers would behave in real-world situations, for example by taking the shortest route to their destination but possibly opting to reroute if they encounter congestion. The UEF team applied this approach to explore two scenarios: how drivers in Singapore would impact the country's energy consumption, assuming every vehicle is electric, and the effect of removing bridges across the Finnish city of Joensuu's Pielisjoki river on traffic load. In Singapore, the thesis results are being applied to help digitally transform the city's infrastructure.
[]
[]
[]
scitechnews
None
None
None
None
Researchers at the University of Eastern Finland (UEF), testing the thesis that multi-agent systems can be used to optimize resource management, modeled car drivers as artificially intelligent agents to simulate traffic in road networks. The agents behave as drivers would behave in real-world situations, for example by taking the shortest route to their destination but possibly opting to reroute if they encounter congestion. The UEF team applied this approach to explore two scenarios: how drivers in Singapore would impact the country's energy consumption, assuming every vehicle is electric, and the effect of removing bridges across the Finnish city of Joensuu's Pielisjoki river on traffic load. In Singapore, the thesis results are being applied to help digitally transform the city's infrastructure. Urban development is therefore becoming more important. The efficiency of existing real-world lift transport and road network systems can be measured with surveillance equipment such as cameras and global positioning system trackers. However, when new, never-before-seen solutions are to be built and deployed, the data required to design them does not yet exist and must be estimated. Simple mathematical models can be used for fast estimates of cars' energy consumption, the traffic flows of simple road networks and lift travelling schedules, but real-world situations are often much more sophisticated, and these simple models are unrealistic. For instance, they do not take into account uncertainties such as human behaviour and sudden changes in the environment. Hence, the results of such simple models can be far from reality. In this thesis, we use multi-agent systems to simulate real-world actors such as car drivers, creating smart artificial driver agents that simulate traffic in road networks. The agents behave as a driver would behave in the real world: for instance, they take the shortest route to their destination but may choose to re-route if they meet traffic congestion. In the artificial environment, it is easy to change the conditions of the road network system and observe the immediate effect on traffic flow. In this work, we performed two such case studies. First, we studied the effect of Singapore's drivers on the country's energy consumption, assuming that every car has become an electric car. Second, we studied the effect of removing bridges across Joensuu's Pielisjoki river on the traffic load. We observed that removing Pekkalansilta significantly increased traffic on both Sirkkalansilta and Suvantosilta, but surprisingly caused a slight decrease in the use of Itäsilta. Removing Itäsilta or Suvantosilta, however, did not affect the use of Pekkalansilta at all. Removing Itäsilta burdened Suvantosilta the most, while removing Suvantosilta would increase traffic on all of the other bridges. In Singapore there are many buildings of 40 storeys and higher, each with two to three elevators. These elevators serve families with children, elderly people in wheelchairs, and working-class people. The lifts have to run very efficiently to transport as many people as quickly as possible while consuming as little energy as possible. The lifts also need to be clever enough to work cohesively with one another to get the transporting job done.
Real estate developers need to model this situation when they construct new buildings, but they often do not know enough about what kind of people will actually live in them, and yet the lifts need to operate efficiently for all kinds of residents from day one onwards. The results of the thesis are used as part of building Singapore as a Smart Nation. Many parts of Singapore's infrastructure will be digitally transformed to be intelligent and able to communicate with one another. However, no one has done this before, and there is no telling what the outcome will be. Therefore, it is all the more important to model and study all aspects of the urban environment, including traffic, to gain insight into plausible and possible outcomes. Singapore's Smart Nation initiatives and efforts are described in my most recent publication, 'Innovating services and digital economy in Singapore', published in 'Communications of the ACM', volume 6, page 58-59, 2020. Currently, the candidate works as Head of Singapore's National Research Foundation (NRF), an organization directly under Singapore's Prime Minister's Office. Thomas Ho serves in the Services and Digital Economy directorate at NRF. His main role is to bridge research with businesses and government agencies, helping Singapore's universities and research laboratories find companies and government agencies for their technology deployments. Conversely, he also helps companies and government agencies build new capabilities by connecting them with academics and professors, to whom they can bring real-world problems. In both cases, the government may fund the translation work and capability development. Hence, the resource management work of the PhD helps Thomas Ho ensure that government resources are wisely spent. The doctoral dissertation of MSc Thomas Ho Chee Tat, entitled Optimization for resource management using multi-agent systems, will be examined at the Faculty of Science and Forestry on the 30th of March 2021 at 10 am online. The opponent in the public examination will be Head of People Flow Planning, Dr. Janne Sorsa, KONE Industrial Ltd, and the custos will be Professor Pasi Fränti, University of Eastern Finland. The public examination will be held in English.
670
New U.K. Currency Honors Alan Turing, Pioneering Computer Scientist and Code-Breaker
The Bank of England (BoE) has unveiled a new £50 note featuring pioneering mathematician, code-breaker, and computer scientist Alan Turing, which will enter circulation on June 23, his birthday. The BoE's Alan Bailey said Turing's work in computing and artificial intelligence "has had an enormous impact on how we all live today." The bill is one of a series of polymer banknotes that are harder to counterfeit, and Turing's nephew Dermot Turing said his uncle would have especially appreciated the currency for highlighting his computer science achievements. Said Dermot Turing, "I think Alan Turing would have wanted us to think about things like underrepresentation of women in science subjects, underrepresentation of Black and ethnic minority kids in STEM [science, technology, engineering, and math] subjects at school, and why they're not being given the opportunities that they should have and why that's bad for all of us."
[]
[]
[]
scitechnews
None
None
None
None
The Bank of England (BoE) has unveiled a new £50 note featuring pioneering mathematician, code-breaker, and computer scientist Alan Turing, which will enter circulation on June 23, his birthday. The BoE's Alan Bailey said Turing's work in computing and artificial intelligence "has had an enormous impact on how we all live today." The bill is one of a series of polymer banknotes that are harder to counterfeit, and Turing's nephew Dermot Turing said his uncle would have especially appreciated the currency for highlighting his computer science achievements. Said Dermot Turing, "I think Alan Turing would have wanted us to think about things like underrepresentation of women in science subjects, underrepresentation of Black and ethnic minority kids in STEM [science, technology, engineering, and math] subjects at school, and why they're not being given the opportunities that they should have and why that's bad for all of us."
671
Diversity Training Steps Into the Future With Virtual Reality
It's hard to understand an experience you've never had. Still, in an era marked by heightened social awareness on race, expressing empathy and realizing what other people go through can be a powerful catalyst for change. This is especially true in the workplace, where we often interact with people from different backgrounds. Companies have traditionally responded to this with unconscious bias training, which typically involves PowerPoint presentations or click-through courses for employees to check off. Maybe that's coupled with a Zoom session with experts on diversity and inclusion. But it's often easy to get through those tasks without paying full attention, and the impact of those efforts is tough to measure. With that gap in mind, the curriculum development startup Praxis Labs launched Pivotal Experiences, a VR-based tool meant to take diversity and inclusion training to the next level. The platform lets employees experience what it's like to face bias and discrimination in the workplace and teaches them how best to respond. Users are asked to speak aloud to other avatars and reflect on what happened for a more impactful experience. "By providing perspective-taking and immersive experiences that build empathy, we're helping to build understanding," said Elise Smith, co-founder and chief executive of Praxis Labs. "By providing opportunities to practice interventions, we're helping to change how people actually act in the workplace." Last month, the New York City-based firm raised $3.2 million in seed funding from backers, including SoftBank's SB Opportunity Fund. Uber, eBay, Amazon and Google were among the company's early test partners. It's now hiring to expand the platform to other partners. The platform is launching at a critical time for companies in the United States as the pandemic spotlights disparities across the nation's labor force. "If the last 12 months have shown us anything, it has brought to light what has been around for a very long time," said Kavitha Mariappan, an executive who leads diversity efforts for Zscaler, a cloud security platform. "There's a certain level of corporate urgency around having to act on diversity and inclusion rather than just being aware." The software works on smartphones and computers, but the magic seems to happen in virtual reality, where each month, employees are assigned an avatar facing a specific issue at work. The digital scenarios reflect insights gathered from employees with a wide range of backgrounds. For instance, it could be someone facing implicit bias, ageism or other forms of discrimination at work. The avatar might also be a bystander witnessing someone who's a target of unfair treatment, so it gives workers a chance to practice being an ally. The startup designed the avatars to be representative of a global workforce. If you look into a mirror in the virtual space, you'll see someone else's image reflected back at you. They could be of a different race, gender or body size. They may be an executive or a lower-ranking employee. Users are required to respond out loud as if they are that person "to get as close as you can to experiencing the perspective of someone else," Smith said. It's a subscription-based service. Companies are signing up for six-month to yearlong commitments. While no amount of training can change everyone, data suggests that virtual reality experiences can leave a lasting impression and alter perceptions. 
Researchers from the University of Barcelona found that men who committed domestic violence showed more signs of emotional empathy after virtually putting themselves in the victim's shoes. Other studies have shown that VR scenarios are just as likely to increase empathy as "embodied" experiences, in which people physically re-create someone else's lived experience. "By putting people on the scene, at a real situation, these invisible situations suddenly become visible," said Nonny de la Peña, a pioneer in empathy VR and founder of Emblematic Group. Praxis Labs' strategy is to create a feedback loop. The software asks the employee how they might react to certain situations in real life and then offers approaches for best dealing with the scenario. Aggregated insights are shared with the user's employer, while individualized data is shared with the trainee, who can continue to learn over time. "Even if we can see someone is experiencing bias or discrimination or there's something truly inequitable happening, it's really hard to speak up. And the only way to change that is by building that muscle," Smith said. Empathy training in VR doesn't solve everything. Lack of inclusivity is a deep, complex problem that starts from the top down. But it does give organizations a new tool that might have a wider impact, according to Jennifer Mackin, chief executive of the Leadership Pipeline Institute, a workplace consulting firm. Diversity and inclusion experts champion the idea, saying it would probably be appealing to Generation Z, whose members, studies show, are more likely to stay with organizations they perceive as having a diverse and inclusive workforce. "The generation entering the workforce today is going to be so much more comfortable with this form of learning than something static or prerecorded," Mariappan said.
Praxis Labs, a curriculum development startup, is offering a virtual reality (VR) tool for diversity and inclusion training that aims to help employees understand what it is like to experience bias and discrimination in the workplace and the best ways to respond. The startup's VR-based Pivotal Experiences tool assigns employees an avatar facing a specific issue at work, such as implicit bias or ageism, or a bystander witnessing unfair treatment in the workplace. Users are asked to respond aloud as though they are that person. They receive individualized data on their performance, while employers are given aggregated insights. Praxis Labs' Elise Smith said, "By providing perspective-taking and immersive experiences that build empathy, we're helping to build understanding. By providing opportunities to practice interventions, we're helping to change how people actually act in the workplace."
[]
[]
[]
scitechnews
None
None
None
None
Praxis Labs, a curriculum development startup, is offering a virtual reality (VR) tool for diversity and inclusion training that aims to help employees understand what it is like to experience bias and discrimination in the workplace and the best ways to respond. The startup's VR-based Pivotal Experiences tool assigns employees an avatar facing a specific issue at work, such as implicit bias or ageism, or a bystander witnessing unfair treatment in the workplace. Users are asked to respond aloud as though they are that person. They receive individualized data on their performance, while employers are given aggregated insights. Praxis Labs' Elise Smith said, "By providing perspective-taking and immersive experiences that build empathy, we're helping to build understanding. By providing opportunities to practice interventions, we're helping to change how people actually act in the workplace." It's hard to understand an experience you've never had. Still, in an era marked by heightened social awareness on race, expressing empathy and realizing what other people go through can be a powerful catalyst for change. This is especially true in the workplace, where we often interact with people from different backgrounds. Companies have traditionally responded to this with unconscious bias training, which typically involves PowerPoint presentations or click-through courses for employees to check off. Maybe that's coupled with a Zoom session with experts on diversity and inclusion. But it's often easy to get through those tasks without paying full attention, and the impact of those efforts is tough to measure. With that gap in mind, the curriculum development startup Praxis Labs launched Pivotal Experiences, a VR-based tool meant to take diversity and inclusion training to the next level. The platform lets employees experience what it's like to face bias and discrimination in the workplace and teaches them how best to respond. Users are asked to speak aloud to other avatars and reflect on what happened for a more impactful experience. "By providing perspective-taking and immersive experiences that build empathy, we're helping to build understanding," said Elise Smith, co-founder and chief executive of Praxis Labs. "By providing opportunities to practice interventions, we're helping to change how people actually act in the workplace." Last month, the New York City-based firm raised $3.2 million in seed funding from backers, including SoftBank's SB Opportunity Fund. Uber, eBay, Amazon and Google were among the company's early test partners. It's now hiring to expand the platform to other partners. The platform is launching at a critical time for companies in the United States as the pandemic spotlights disparities across the nation's labor force. "If the last 12 months have shown us anything, it has brought to light what has been around for a very long time," said Kavitha Mariappan, an executive who leads diversity efforts for Zscaler, a cloud security platform. "There's a certain level of corporate urgency around having to act on diversity and inclusion rather than just being aware." The software works on smartphones and computers, but the magic seems to happen in virtual reality, where each month, employees are assigned an avatar facing a specific issue at work. The digital scenarios reflect insights gathered from employees with a wide range of backgrounds. For instance, it could be someone facing implicit bias, ageism or other forms of discrimination at work. 
The avatar might also be a bystander witnessing someone who's a target of unfair treatment, so it gives workers a chance to practice being an ally. The startup designed the avatars to be representative of a global workforce. If you look into a mirror in the virtual space, you'll see someone else's image reflected back at you. They could be of a different race, gender or body size. They may be an executive or a lower-ranking employee. Users are required to respond out loud as if they are that person "to get as close as you can to experiencing the perspective of someone else," Smith said. It's a subscription-based service. Companies are signing up for six-month to yearlong commitments. While no amount of training can change everyone, data suggests that virtual reality experiences can leave a lasting impression and alter perceptions. Researchers from the University of Barcelona found that men who committed domestic violence showed more signs of emotional empathy after virtually putting themselves in the victim's shoes. Other studies have shown that VR scenarios are just as likely to increase empathy as "embodied" experiences, in which people physically re-create someone else's lived experience. "By putting people on the scene, at a real situation, these invisible situations suddenly become visible," said Nonny de la Peña, a pioneer in empathy VR and founder of Emblematic Group. Praxis Labs' strategy is to create a feedback loop. The software asks the employee how they might react to certain situations in real life and then offers approaches for best dealing with the scenario. Aggregated insights are shared with the user's employer, while individualized data is shared with the trainee, who can continue to learn over time. "Even if we can see someone is experiencing bias or discrimination or there's something truly inequitable happening, it's really hard to speak up. And the only way to change that is by building that muscle," Smith said. Empathy training in VR doesn't solve everything. Lack of inclusivity is a deep, complex problem that starts from the top down. But it does give organizations a new tool that might have a wider impact, according to Jennifer Mackin, chief executive of the Leadership Pipeline Institute, a workplace consulting firm. Diversity and inclusion experts champion the idea, saying it would probably be appealing to Generation Z, whose members, studies show, are more likely to stay with organizations they perceive as having a diverse and inclusive workforce. "The generation entering the workforce today is going to be so much more comfortable with this form of learning than something static or prerecorded," Mariappan said.
673
KFC, Taco Bell, Pizza Hut to Start Taking Orders Via Text
Yum Brands Inc., which owns KFC, Pizza Hut, and Taco Bell, is acquiring Israeli startup Tictuk Technologies Inc. to take advantage of software that enables food orders to be submitted to restaurants through text message and social media apps like Facebook Messenger and WhatsApp. Tests of the technology in about 900 KFC, Pizza Hut, and Taco Bell restaurants in 35 countries showed an increase in sales. Yum's Clay Johnson said the technology allows customer orders to be turned around in as fast as 60 seconds. The move comes as fast-food companies seek to boost sales as sit-down restaurants reopen amid the pandemic.
[]
[]
[]
scitechnews
None
None
None
None
Yum Brands Inc., which owns KFC, Pizza Hut, and Taco Bell, is acquiring Israeli startup Tictuk Technologies Inc. to take advantage of software that enables food orders to be submitted to restaurants through text message and social media apps like Facebook Messenger and WhatsApp. Tests of the technology in about 900 KFC, Pizza Hut, and Taco Bell restaurants in 35 countries showed an increase in sales. Yum's Clay Johnson said the technology allows customer orders to be turned around in as fast as 60 seconds. The move comes as fast-food companies seek to boost sales as sit-down restaurants reopen amid the pandemic.
674
Stay on Track! Support System to Help the Visually Impaired Navigate Tactile Paving
A support system to help the visually impaired navigate tactile paving developed by scientists at Japan's Shibaura Institute of Technology includes an image processing algorithm designed to make paving detection independent of pre-defined color thresholds. The algorithm uses a Hough line transform to easily find the paving's borders, then studies the color distribution in a small area near the path's center. After statistically determining an appropriate threshold for the current frame and producing an image mask that marks the tactile paving, the algorithm de-noises the data to generate a clear image. Shibaura's Chinthaka Premachandra said, "The proposed system correctly detected tactile paving 91.65% of the time in both indoor and outdoor environments under varying lighting conditions, which is a markedly higher accuracy than previous camera-based methods with fixed thresholds."
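A rough sketch of the described pipeline, written with OpenCV, is shown below: Hough lines for the borders, an adaptive color threshold derived from a small patch near the path center, a mask, and morphological denoising. It is not the Shibaura team's implementation; the parameter values, the fixed central sampling window, and the input image are assumptions for illustration.

```python
# Rough sketch (assumption) of the described pipeline: find paving borders with
# a Hough line transform, derive a colour threshold from a small patch near the
# path centre, build a mask, then de-noise it. Parameters are illustrative.
import cv2
import numpy as np

frame = cv2.imread("walkway.jpg")            # any street-level image (assumed filename)
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)

# 1. Border candidates via the probabilistic Hough transform.
lines = cv2.HoughLinesP(edges, 1, np.pi / 180, threshold=80,
                        minLineLength=100, maxLineGap=10)
if lines is not None:
    for x1, y1, x2, y2 in lines[:, 0]:
        cv2.line(frame, (x1, y1), (x2, y2), (0, 255, 0), 2)   # visualise candidate borders

# 2. Sample the colour distribution in a small window near the path centre.
h, w = frame.shape[:2]
patch = frame[h // 2 - 20: h // 2 + 20, w // 2 - 20: w // 2 + 20]
mean, std = cv2.meanStdDev(patch)
lower = np.clip(mean - 2 * std, 0, 255).astype(np.uint8).flatten()
upper = np.clip(mean + 2 * std, 0, 255).astype(np.uint8).flatten()

# 3. Per-frame statistical threshold -> mask of likely tactile paving.
mask = cv2.inRange(frame, lower, upper)

# 4. Morphological opening and closing to remove speckle noise.
kernel = np.ones((5, 5), np.uint8)
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)
```

Because the threshold is recomputed from the current frame rather than fixed in advance, this style of pipeline is less sensitive to lighting changes, which is the property the article highlights.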
[]
[]
[]
scitechnews
None
None
None
None
A support system to help the visually impaired navigate tactile paving developed by scientists at Japan's Shibaura Institute of Technology includes an image processing algorithm designed to make paving detection independent of pre-defined color thresholds. The algorithm uses a Hough line transform to easily find the paving's borders, then studies the color distribution in a small area near the path's center. After statistically determining an appropriate threshold for the current frame and producing an image mask that marks the tactile paving, the algorithm de-noises the data to generate a clear image. Shibaura's Chinthaka Premachandra said, "The proposed system correctly detected tactile paving 91.65% of the time in both indoor and outdoor environments under varying lighting conditions, which is a markedly higher accuracy than previous camera-based methods with fixed thresholds."
675
Tiny Swimming Robots Reach Their Target Faster Thanks to AI Nudges
By Chris Stokel-Walker Being buffeted by fluid particles can be a problem for tiny swimming robots Shutterstock/Volodimir Zozulinskyi Machine learning could help tiny microrobots swim through a fluid and reach their goal without being knocked off target by the random motion of particles they encounter on their journey. Microrobotic "swimmers" are often designed to mimic the way bacteria can propel themselves through a fluid - but bacteria have one key advantage over the robots. "A real bacterium can sense where to go and decide that it goes in that direction because it wants food," says Frank Cichos at the University of Leipzig, Germany. It is difficult for the bacteria-sized microrobots to stay on course because their small size - some are just 2 micrometres across - means they are buffeted by particles in the fluid. Unlike the bacteria, they can't correct their direction of travel, and so they tend to follow a random path described by Brownian motion . Cichos and his colleagues decided to give their microrobot swimmers a "brain": a machine learning algorithm that rewards "good" movements in the direction of a desired target. "We decided it would be good to combine [the swimming microrobots] with machine learning, which is a bit like what we do in life," says Cichos. "We experience our environment, and depending on the success of what we do, we keep that in memory or not." Their microrobot is a blob of melamine resin, with gold nanoparticles covering 30 per cent of its surface. Directing a narrow laser beam on one point on the microrobot's surface heats the gold nanoparticles there, and the temperature difference drives the microrobot through the fluid. The machine learning algorithm - the microrobot's "brain" - operates on a nearby computer. It keeps track of the robot's movement and instructs the laser to fire at a precise point on its surface to move it closer to its goal. If this instruction moves the microrobot closer to its target, the algorithm is rewarded; if the instruction moves the microrobot further from the target, the algorithm receives a penalty. Over time, the algorithm learns from these rewards and penalties which instructions are best for getting the microrobot to its target quickly and efficiently. After 7 hours of training, the system managed to reduce the number of instructions needed for the microrobot to reach a goal from 600 to 100. "The study of the movement of microscopic living organisms is important across a variety of the biological and biomedical sciences," says Jonathan Aitken at the University of Sheffield, UK. "The movement of these microscopic organisms is difficult to mimic, yet this mimicry is important to understand more about their properties, and their effect within the environment." Although the controlling system for the swimmers is located outside the microrobot now, Cichos hopes to introduce chemically powered signalling - similar to our bodies - so the microrobots can "think" for themselves in the future. Journal reference: Science Robotics , DOI: 10.1126/scirobotics.abd9285
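The reward-and-penalty scheme described above can be illustrated with a toy reinforcement-learning loop. The sketch below is a simplified stand-in, not the Leipzig group's system: the state and action discretization, step sizes, and noise level are assumptions, and a generic "push" action replaces the laser-heating mechanism.

```python
# Toy stand-in (assumption) for the reward scheme described above: a Q-learning
# agent learns which direction to "push" a noisy 2-D swimmer so it approaches a
# target. The real experiments steer the particle with a laser and a far more
# capable learner; states, actions and rewards here are simplified.
import numpy as np

rng = np.random.default_rng(0)
actions = np.array([[np.cos(a), np.sin(a)]
                    for a in np.linspace(0, 2 * np.pi, 8, endpoint=False)])
q = np.zeros((8, 8))                  # Q[state, action]; state = quantised angle to target
alpha, gamma, eps = 0.2, 0.9, 0.1

def state_of(pos, target):
    dx, dy = target - pos
    angle = np.arctan2(dy, dx)        # direction from swimmer to target
    return int(((angle + 2 * np.pi) % (2 * np.pi)) / (2 * np.pi) * 8) % 8

for episode in range(500):
    pos, target = rng.uniform(-1, 1, 2), np.zeros(2)
    for step in range(200):
        s = state_of(pos, target)
        a = rng.integers(8) if rng.random() < eps else int(np.argmax(q[s]))
        old_dist = np.linalg.norm(target - pos)
        pos = pos + 0.05 * actions[a] + 0.02 * rng.normal(size=2)   # push plus Brownian kick
        reward = old_dist - np.linalg.norm(target - pos)            # progress toward the goal
        s2 = state_of(pos, target)
        q[s, a] += alpha * (reward + gamma * q[s2].max() - q[s, a])
        if np.linalg.norm(target - pos) < 0.05:
            break
```

Rewarding only the reduction in distance, as in the article, lets the learner cope with the random Brownian kicks: actions that usually help get reinforced even when an individual step is blown off course.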
A machine learning algorithm developed by researchers at Germany's University of Leipzig could help microrobots swim toward a goal without being knocked off course by the random motion of particles in the fluid. Swimming microrobots generally follow a random path and cannot correct their direction - unlike the bacteria they are designed to mimic, which move toward food sources. The researchers used a narrow laser beam to move a microrobot comprised of melamine resin with gold nanoparticles covering 30% of its surface. The algorithm tracked the microrobot's movement and ordered the laser to fire at a particular point on its surface. The algorithm was rewarded if the instruction moved the microrobot toward the goal, and penalized if it moved the microrobot away from the target. The number of instructions necessary for the microrobot to reach its goal was reduced nearly 85% after seven hours of such training.
[]
[]
[]
scitechnews
None
None
None
None
A machine learning algorithm developed by researchers at Germany's University of Leipzig could help microrobots swim toward a goal without being knocked off course by the random motion of particles in the fluid. Swimming microrobots generally follow a random path and cannot correct their direction - unlike the bacteria they are designed to mimic, which move toward food sources. The researchers used a narrow laser beam to move a microrobot comprised of melamine resin with gold nanoparticles covering 30% of its surface. The algorithm tracked the microrobot's movement and ordered the laser to fire at a particular point on its surface. The algorithm was rewarded if the instruction moved the microrobot toward the goal, and penalized if it moved the microrobot away from the target. The number of instructions necessary for the microrobot to reach its goal was reduced nearly 85% after seven hours of such training. By Chris Stokel-Walker Being buffeted by fluid particles can be a problem for tiny swimming robots Shutterstock/Volodimir Zozulinskyi Machine learning could help tiny microrobots swim through a fluid and reach their goal without being knocked off target by the random motion of particles they encounter on their journey. Microrobotic "swimmers" are often designed to mimic the way bacteria can propel themselves through a fluid - but bacteria have one key advantage over the robots. "A real bacterium can sense where to go and decide that it goes in that direction because it wants food," says Frank Cichos at the University of Leipzig, Germany. It is difficult for the bacteria-sized microrobots to stay on course because their small size - some are just 2 micrometres across - means they are buffeted by particles in the fluid. Unlike the bacteria, they can't correct their direction of travel, and so they tend to follow a random path described by Brownian motion . Cichos and his colleagues decided to give their microrobot swimmers a "brain": a machine learning algorithm that rewards "good" movements in the direction of a desired target. "We decided it would be good to combine [the swimming microrobots] with machine learning, which is a bit like what we do in life," says Cichos. "We experience our environment, and depending on the success of what we do, we keep that in memory or not." Their microrobot is a blob of melamine resin, with gold nanoparticles covering 30 per cent of its surface. Directing a narrow laser beam on one point on the microrobot's surface heats the gold nanoparticles there, and the temperature difference drives the microrobot through the fluid. The machine learning algorithm - the microrobot's "brain" - operates on a nearby computer. It keeps track of the robot's movement and instructs the laser to fire at a precise point on its surface to move it closer to its goal. If this instruction moves the microrobot closer to its target, the algorithm is rewarded; if the instruction moves the microrobot further from the target, the algorithm receives a penalty. Over time, the algorithm learns from these rewards and penalties which instructions are best for getting the microrobot to its target quickly and efficiently. After 7 hours of training, the system managed to reduce the number of instructions needed for the microrobot to reach a goal from 600 to 100. "The study of the movement of microscopic living organisms is important across a variety of the biological and biomedical sciences," says Jonathan Aitken at the University of Sheffield, UK. 
"The movement of these microscopic organisms is difficult to mimic, yet this mimicry is important to understand more about their properties, and their effect within the environment." Although the controlling system for the swimmers is located outside the microrobot now, Cichos hopes to introduce chemically powered signalling - similar to our bodies - so the microrobots can "think" for themselves in the future. Journal reference: Science Robotics , DOI: 10.1126/scirobotics.abd9285
676
More Transparency, Understanding Into Machine Behaviors
Explaining, interpreting, and understanding the human mind presents a unique set of challenges. Doing the same for the behaviors of machines, meanwhile, is a whole other story. As artificial intelligence (AI) models are increasingly used in complex situations - approving or denying loans, helping doctors with medical diagnoses, assisting drivers on the road, or even taking complete control - humans still lack a holistic understanding of their capabilities and behaviors. Existing research focuses mainly on the basics: How accurate is this model? Oftentimes, centering on the notion of simple accuracy can lead to dangerous oversights. What if the model makes mistakes with very high confidence? How would the model behave if it encountered something previously unseen, such as a self-driving car seeing a new type of traffic sign? In the quest for better human-AI interaction, a team of researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have created a new tool called Bayes-TrEx that allows developers and users to gain transparency into their AI model. Specifically, it does so by finding concrete examples that lead to a particular behavior. The method makes use of "Bayesian posterior inference," a widely-used mathematical framework to reason about model uncertainty. In experiments, the researchers applied Bayes-TrEx to several image-based datasets, and found new insights that were previously overlooked by standard evaluations focusing solely on prediction accuracy. "Such analyses are important to verify that the model is indeed functioning correctly in all cases," says MIT CSAIL PhD student Yilun Zhou, co-lead researcher on Bayes-TrEx. "An especially alarming situation is when the model is making mistakes, but with very high confidence. Due to high user trust over the high reported confidence, these mistakes might fly under the radar for a long time and only get discovered after causing extensive damage." For example, after a medical diagnosis system finishes learning on a set of X-ray images, a doctor can use Bayes-TrEx to find images that the model misclassified with very high confidence, to ensure that it doesn't miss any particular variant of a disease. Bayes-TrEx can also help with understanding model behaviors in novel situations. Take autonomous driving systems, which often rely on camera images to take in traffic lights, bike lanes, and obstacles. These common occurrences can be easily recognized with high accuracy by the camera, but more complicated situations can provide literal and metaphorical roadblocks. A zippy Segway could potentially be interpreted as something as big as a car or as small as a bump on the road, leading to a tricky turn or costly collision. Bayes-TrEx could help address these novel situations ahead of time, and enable developers to correct any undesirable outcomes before potential tragedies occur. In addition to images, the researchers are also tackling a less-static domain: robots. Their tool, called "RoCUS," inspired by Bayes-TrEx, uses additional adaptations to analyze robot-specific behaviors. While still in a testing phase, experiments with RoCUS point to new discoveries that could be easily missed if the evaluation was focused solely on task completion. For example, a 2D navigation robot that used a deep learning approach preferred to navigate tightly around obstacles, due to how the training data was collected. Such a preference, however, could be risky if the robot's obstacle sensors are not fully accurate. 
For a robot arm reaching a target on a table, the asymmetry in the robot's kinematic structure showed larger implications on its ability to reach targets on the left versus the right. "We want to make human-AI interaction safer by giving humans more insight into their AI collaborators," says MIT CSAIL PhD student Serena Booth, co-lead author with Zhou. "Humans should be able to understand how these agents make decisions, to predict how they will act in the world, and - most critically - to anticipate and circumvent failures." Booth and Zhou are coauthors on the Bayes-TrEx work alongside MIT CSAIL PhD student Ankit Shah and MIT Professor Julie Shah. They presented the paper virtually at the AAAI conference on Artificial Intelligence. Along with Booth, Zhou, and Shah, MIT CSAIL postdoc Nadia Figueroa Fernandez has contributed work on the RoCUS tool.
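To make the idea of "finding examples that lead to a particular behavior" concrete, the toy sketch below runs a Metropolis-Hastings sampler toward inputs that a stand-in classifier labels with roughly 99% confidence. It illustrates only the posterior-inference framing; the two-feature model, prior, and constants are invented, and this is not the Bayes-TrEx code.

```python
# Toy illustration (assumption) of posterior-style example search: sample
# inputs x for which a classifier is highly confident, by treating that
# behaviour as a likelihood and running Metropolis-Hastings over x. The
# two-feature "model" and all constants are invented for this sketch.
import numpy as np

rng = np.random.default_rng(1)

def confidence(x):
    """Stand-in classifier: confidence of class 1 for a 2-D input."""
    logit = 3.0 * x[0] - 2.0 * x[1]
    return 1.0 / (1.0 + np.exp(-logit))

def log_target(x, want=0.99, sharpness=200.0):
    log_prior = -0.5 * np.sum(x ** 2)                      # standard normal prior over inputs
    log_like = -sharpness * (confidence(x) - want) ** 2    # peaks where behaviour matches target
    return log_prior + log_like

x = np.zeros(2)
samples = []
for step in range(5000):
    proposal = x + 0.2 * rng.normal(size=2)
    if np.log(rng.random()) < log_target(proposal) - log_target(x):
        x = proposal                                        # accept the move
    samples.append(x.copy())

# The tail of the chain concentrates on inputs the model is ~99% sure about.
print(np.mean([confidence(s) for s in samples[-1000:]]))
```

Swapping the target behavior, for example "high confidence but wrong label," would steer the sampler toward the kind of high-confidence mistakes the article warns about, which is the spirit of the approach rather than its actual implementation.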
Researchers at the Massachusetts Institute of Technology (MIT) have developed a tool for instilling transparency into artificial intelligence (AI) models, by identifying concrete examples that yield a specific behavior. The Bayes-TrEx tool applies Bayesian posterior inference, a popular mathematical framework for reasoning about model uncertainty. The MIT researchers applied Bayes-TrEx to image-based datasets, uncovering insights previously missed by standard techniques focusing exclusively on prediction accuracy. Bayes-TrEx also can be used to understand model behaviors in novel situations, and the tool has inspired an adaptation, RoCUS, for the analysis of robot-specific behaviors. MIT's Serena Booth said, "We want to make human-AI interaction safer by giving humans more insight into their AI collaborators."
[]
[]
[]
scitechnews
None
None
None
None
Researchers at the Massachusetts Institute of Technology (MIT) have developed a tool for instilling transparency into artificial intelligence (AI) models, by identifying concrete examples that yield a specific behavior. The Bayes-TrEx tool applies Bayesian posterior inference, a popular mathematical framework for reasoning about model uncertainty. The MIT researchers applied Bayes-TrEX to image-based datasets, uncovering insights previously missed by standard techniques focusing exclusively on prediction accuracy. Bayes-TrEX also can understand model behaviors in unique situations, and the tool has inspired an adaptation, RoCUS, for the analysis of robot-specific behaviors. MIT's Serena Booth said, "We want to make human-AI interaction safer by giving humans more insight into their AI collaborators." Explaining, interpreting, and understanding the human mind presents a unique set of challenges. Doing the same for the behaviors of machines, meanwhile, is a whole other story. As artificial intelligence (AI) models are increasingly used in complex situations - approving or denying loans, helping doctors with medical diagnoses, assisting drivers on the road, or even taking complete control - humans still lack a holistic understanding of their capabilities and behaviors. Existing research focuses mainly on the basics: How accurate is this model? Oftentimes, centering on the notion of simple accuracy can lead to dangerous oversights. What if the model makes mistakes with very high confidence? How would the model behave if it encountered something previously unseen, such as a self-driving car seeing a new type of traffic sign? In the quest for better human-AI interaction, a team of researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) have created a new tool called Bayes-TrEx that allows developers and users to gain transparency into their AI model. Specifically, it does so by finding concrete examples that lead to a particular behavior. The method makes use of "Bayesian posterior inference," a widely-used mathematical framework to reason about model uncertainty. In experiments, the researchers applied Bayes-TrEx to several image-based datasets, and found new insights that were previously overlooked by standard evaluations focusing solely on prediction accuracy. "Such analyses are important to verify that the model is indeed functioning correctly in all cases," says MIT CSAIL PhD student Yilun Zhou, co-lead researcher on Bayes-TrEx. "An especially alarming situation is when the model is making mistakes, but with very high confidence. Due to high user trust over the high reported confidence, these mistakes might fly under the radar for a long time and only get discovered after causing extensive damage." For example, after a medical diagnosis system finishes learning on a set of X-ray images, a doctor can use Bayes-TrEx to find images that the model misclassified with very high confidence, to ensure that it doesn't miss any particular variant of a disease. Bayes-TrEx can also help with understanding model behaviors in novel situations. Take autonomous driving systems, which often rely on camera images to take in traffic lights, bike lanes, and obstacles. These common occurrences can be easily recognized with high accuracy by the camera, but more complicated situations can provide literal and metaphorical roadblocks. 
A zippy Segway could potentially be interpreted as something as big as a car or as small as a bump on the road, leading to a tricky turn or costly collision. Bayes-TrEx could help address these novel situations ahead of time, and enable developers to correct any undesirable outcomes before potential tragedies occur. In addition to images, the researchers are also tackling a less-static domain: robots. Their tool, called "RoCUS," inspired by Bayes-TrEx, uses additional adaptations to analyze robot-specific behaviors. While still in a testing phase, experiments with RoCUS point to new discoveries that could be easily missed if the evaluation was focused solely on task completion. For example, a 2D navigation robot that used a deep learning approach preferred to navigate tightly around obstacles, due to how the training data was collected. Such a preference, however, could be risky if the robot's obstacle sensors are not fully accurate. For a robot arm reaching a target on a table, the asymmetry in the robot's kinematic structure showed larger implications on its ability to reach targets on the left versus the right. "We want to make human-AI interaction safer by giving humans more insight into their AI collaborators," says MIT CSAIL PhD student Serena Booth, co-lead author with Zhou. "Humans should be able to understand how these agents make decisions, to predict how they will act in the world, and - most critically - to anticipate and circumvent failures." Booth and Zhou are coauthors on the Bayes-TrEx work alongside MIT CSAIL PhD student Ankit Shah and MIT Professor Julie Shah. They presented the paper virtually at the AAAI conference on Artificial Intelligence. Along with Booth, Zhou, and Shah, MIT CSAIL postdoc Nadia Figueroa Fernandez has contributed work on the RoCUS tool.
678
Want a Vaccination Appointment? It Helps to Know a Python Programmer
A boutique online community of computer programmers has emerged to help family and friends gain a competitive advantage in securing vaccination appointments. These programmers write simple scripts that scrape individual state or pharmacy websites every few seconds for open appointments, and send a text message when one is available. Dozens of these scripts have been uploaded on GitHub. Some are concerned whether this activity is unethical or illegal. Brooklyn, N.Y., attorney Tor Ekeland said, "Scraping data from public-facing servers that aren't using any kind of authentication protocols, like usernames or passwords? They're fine. There's generally a recognition that data-scraping is a huge component of our economy and our lives. We depend on it for our price information, for news, for communications in our social networks."
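The kind of script described here can be illustrated with a short polling loop. In the hypothetical sketch below, the endpoint URL, the JSON shape, and the notification webhook are all assumptions; real sites differ, and their terms of service and rate limits should be respected.

```python
# Illustrative sketch only: poll a (hypothetical) public appointment endpoint
# and alert when slots appear. The URL, the JSON shape and the notification
# hook are assumptions; real sites differ, and their terms of service and
# rate limits should be respected.
import time
import requests

FEED_URL = "https://example.org/api/appointments"   # placeholder endpoint
WEBHOOK = "https://example.org/notify"              # placeholder SMS/webhook gateway

def open_slots(payload):
    # Assumed response shape: {"locations": [{"name": ..., "available": ...}]}
    return [site["name"] for site in payload.get("locations", []) if site.get("available")]

while True:
    try:
        resp = requests.get(FEED_URL, timeout=10)
        slots = open_slots(resp.json())
        if slots:
            requests.post(WEBHOOK, json={"message": f"Openings at: {', '.join(slots)}"}, timeout=10)
            break
    except requests.RequestException as err:
        print("check failed:", err)
    time.sleep(30)                                   # poll politely rather than hammering the site
```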
[]
[]
[]
scitechnews
None
None
None
None
A boutique online community of computer programmers has emerged to help family and friends gain a competitive advantage in securing vaccination appointments. These programmers write simple scripts that scrape individual state or pharmacy websites every few seconds for open appointments, and send a text message when one is available. Dozens of these scripts have been uploaded on GitHub. Some are concerned whether this activity is unethical or illegal. Brooklyn, N.Y., attorney Tor Ekeland said, "Scraping data from public-facing servers that aren't using any kind of authentication protocols, like usernames or passwords? They're fine. There's generally a recognition that data-scraping is a huge component of our economy and our lives. We depend on it for our price information, for news, for communications in our social networks."
680
IBM Tool Lets Users Design Quantum Chips in Minutes
Building the hardware that underpins quantum computers might not sound like everybody's cup of tea, but IBM is determined to make the idea sound less challenging. The company has announced the general availability of Qiskit Metal, an open-source platform that automates parts of the design process for quantum chips, and which IBM promised will now let "anyone" design quantum hardware. Big Blue detailed the progress made with Metal since the tool was first announced late last year as part of the company's larger Qiskit portfolio, which provides open-source tools for creating programs that can run on IBM's cloud-based quantum devices. While most of Qiskit's resources focus on building applications that can be executed on quantum machines, Metal targets a brand-new audience, providing software to help design the components that make up the hardware itself. The idea is to let users play around with pre-built components on the platform to produce state-of-the-art chips for superconducting quantum devices in a matter of minutes - a process that traditionally takes months of manual design, analysis and revisions for scientists in the lab. While automation processes are already firmly in place to accelerate the design of classical integrated circuits, the same cannot be said for quantum computers. As Zlatko Minev, IBM research staff and lead of Qiskit Metal, explains, quantum chips still require an intricate, time-consuming fabrication process. "How it normally happens is, you take a grad student like my former self, you put them in the lab for six months or so to design a chip, and out comes a chip," he tells ZDNet. "It would take a lot of work, it was very laborious. So one of the things I wanted was to make this easier for myself when I got to IBM, by automating the process." Qiskit Metal is, at first glance, fairly straightforward. The process starts with setting targets for the chip, such as a particular qubit frequency or qubit-qubit entanglement; users can then design an initial layout in a few minutes, using a library of pre-defined, customizable quantum components. Metal then carries out both a classical and a quantum analysis to predict the performance of the device. "This is all the stuff you would normally do manually, which I had to do all the time," says Minev. The platform can anticipate parameters such as qubit frequencies, convergence or entanglement, letting users go back and forth to tweak their model until the optimal design is found.
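The workflow Minev describes maps onto a few lines of code. The sketch below follows the publicly documented qiskit-metal quick-start, but module paths and option names may differ between versions, and this is not an IBM reference design.

```python
# Sketch of the workflow described above, following the publicly documented
# qiskit-metal quick-start (module paths and options may differ by version;
# this is not an IBM reference design). Two pre-built transmon components are
# placed on a planar design; Metal's analysis tools would then estimate
# parameters such as qubit frequencies before fabrication.
from qiskit_metal import designs, MetalGUI
from qiskit_metal.qlibrary.qubits.transmon_pocket import TransmonPocket

design = designs.DesignPlanar()
design.overwrite_enabled = True

q1 = TransmonPocket(design, "Q1", options=dict(pos_x="-1.5mm", pos_y="0mm"))
q2 = TransmonPocket(design, "Q2", options=dict(pos_x="+1.5mm", pos_y="0mm"))

gui = MetalGUI(design)   # interactive layout view; requires a Qt-capable environment
gui.rebuild()            # regenerate the geometry after editing component options
```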
As Minev explains, Metal is not designed to build large-scale quantum chips to compete against the experts currently developing fully-fledged quantum computers. Rather, the tool is meant to let users try their hand at designing quantum hardware, and optimize their models in ways that might benefit the entire ecosystem. "If you're trying to build quantum devices at a large scale, there is a lot more that goes into this, it's not an easy job," says Minev. "Metal is rather aimed at small-scale, rapid design and prototyping. The idea is to create new devices and optimize designs, to push and improve the scientific techniques." SEE: The EU wants to build its first quantum computer. That plan might not be ambitious enough Central to the project is the ease of access. IBM is hopeful that Metal will eventually be accessible to users with little to no programming knowledge, and encourage experts from all fields to wet their feet with quantum. This is important because improving quantum hardware will require input from experts from a host of different backgrounds - all of whom aren't necessarily trained in the physics of quantum computing. "A lot of challenges in the field are yet to be resolved," says Minev, "and it doesn't just take physicists, but also engineers and software developers, among others. We want to give a point of entry for all these folks to be able to come together through an easy interface." Like quantum computing, Metal is in its early stages, and Minev hopes that as the field grows, so will the platform incorporate increasingly sophisticated quantum hardware and modeling advances. For now, though, IBM has warned that users are likely to face a fair share of bugs to fix, and has encouraged curious users to come forward and trial the tool , sharing feedback and criticism as they go with the rest of the open-source community.
IBM has made the Qiskit Metal open-source platform generally available for the design of quantum hardware. The platform automates parts of the design process, allowing users to experiment with pre-built components to create state-of-the-art quantum chips in minutes. Users can set targets for the chip and then design an initial layout via a library of pre-defined, customizable quantum components, after which Metal performs classical and quantum analyses to predict the device's performance. Researchers at Sweden's Chalmers University of Technology designed an eight-qubit chip in a record 30 minutes using the IBM platform, while researchers at the University of Tokyo in Japan developed a five-qubit quantum processor in a few hours. IBM's Zlatko Minev said Metal is "aimed at small-scale, rapid design and prototyping. The idea is to create new devices and optimize designs, to push and improve the scientific techniques."
[]
[]
[]
scitechnews
None
None
None
None
IBM has made the Qiskit Metal open-source platform generally available for the design of quantum hardware. The platform automates parts of the design process, allowing users to experiment with pre-built components to create state-of-the art quantum chips in minutes. Users can set targets for the chip and then design an initial layout via a library of pre-defined, customizable quantum components, after which Metal performs classical and quantum analyses to predict the device's performance. Researchers at Sweden's Chalmers University of Technology designed an eight-qubit chip in a record 30 minutes using the IBM platform, while researchers at the University of Tokyo in Japan developed a five-qubit quantum processor in a few hours. IBM's Zlatko Minev said Metal is "aimed at small-scale, rapid design and prototyping. The idea is to create new devices and optimize designs, to push and improve the scientific techniques." Building the hardware that underpins quantum computers might not sound like everybody's cup of tea, but IBM is determined to make the idea sound less challenging. The company has announced the general availability of Qiskit Metal, an open-source platform that automates parts of the design process for quantum chips, and which IBM promised will now let "anyone" design quantum hardware . Big Blue detailed the progress made with Metal since the tool was first announced late last year as part of the company's larger Qiskit portfolio, which provides open-source tools for creating programs that can run on IBM's cloud-based quantum devices. SEE: Building the bionic brain (free PDF) (TechRepublic) While most of Qiskit's resources focus on building applications that can be executed on quantum machines, Metal targets a brand-new audience, providing software to help design the components that make up the hardware itself. The idea is to let users play around with pre-built components on the platform to produce state-of-the-art chips for superconducting quantum devices in a matter of minutes - a process that traditionally takes months of manual design, analysis and revisions for scientists in the lab. While automation processes are already firmly in place to accelerate the design of classical integrated circuits, the same cannot be said for quantum computers. As Zlatko Minev, IBM research staff and lead of Qiskit Metal, explains, quantum chips still require an intricate, time-consuming fabrication process. "How it normally happens is, you take a grad student like my former self, you put them in the lab for six months or so to design a chip, and out comes a chip," he tells ZDNet. "It would take a lot of work, it was very laborious. So one of the things I wanted was to make this easier for myself when I got to IBM, by automating the process." Qiskit Metal is, at first glance, fairly straightforward. The process starts with setting targets for the chip, such as a particular qubit frequency or qubit-qubit entanglement; users can then design an initial layout in a few minutes, using a library of pre-defined, customizable quantum components. Metal then carries out both a classical and a quantum analysis to predict the performance of the device. "This is all the stuff you would normally do manually, which I had to do all the time," says Minev. The platform can anticipate parameters such as qubit frequencies, convergence or entanglement, letting users go back and forth to tweak their model until the optimal design is found. 
To check the reliability of Metal, IBM's Qiskit's team partnered with Chalmers University of Technology, which already has strong experience in building quantum test chips. The researchers were able to design a competitive eight-qubit chip using IBM's platform, but in a record 30 minutes. It took another hour to run the design on a simulator, where the device performed as expected based on Chalmers' previous experiments. Another one of the early applications of Metal saw the tool deployed during a Qiskit hackathon in South Korea earlier this month. Participants gathered in teams of five or six, with the objective of designing a quantum chip from scratch in a couple of days. Using Metal, all of the teams successfully built two-qubit chips using superconducting qubits in less than 24 hours. A partnership with the University of Tokyo also produced a five-qubit quantum processor called Tsuru in just a few hours and over WebEx. As Minev explains, Metal is not designed to build large-scale quantum chips to compete against the experts currently developing fully-fledged quantum computers. Rather, the tool is meant to let users try their hand at designing quantum hardware, and optimize their models in ways that might benefit the entire ecosystem. "If you're trying to build quantum devices at a large scale, there is a lot more that goes into this, it's not an easy job," says Minev. "Metal is rather aimed at small-scale, rapid design and prototyping. The idea is to create new devices and optimize designs, to push and improve the scientific techniques." SEE: The EU wants to build its first quantum computer. That plan might not be ambitious enough Central to the project is the ease of access. IBM is hopeful that Metal will eventually be accessible to users with little to no programming knowledge, and encourage experts from all fields to wet their feet with quantum. This is important because improving quantum hardware will require input from experts from a host of different backgrounds - all of whom aren't necessarily trained in the physics of quantum computing. "A lot of challenges in the field are yet to be resolved," says Minev, "and it doesn't just take physicists, but also engineers and software developers, among others. We want to give a point of entry for all these folks to be able to come together through an easy interface." Like quantum computing, Metal is in its early stages, and Minev hopes that as the field grows, so will the platform incorporate increasingly sophisticated quantum hardware and modeling advances. For now, though, IBM has warned that users are likely to face a fair share of bugs to fix, and has encouraged curious users to come forward and trial the tool , sharing feedback and criticism as they go with the rest of the open-source community.
681
Why It Pays to Think Outside the Box on Coronavirus Tests
Last year, when the National Football League decided to stage its season in the midst of the coronavirus pandemic, it went all-in on testing. The league tested all players and personnel before they reported for summer training camp, and continued near-daily testing in the months that followed. Between Aug. 1 and the Super Bowl in early February, the N.F.L. administered almost one million tests to players and staff. Many other organizations have sought safety in mass testing. The University of Illinois is testing its students, faculty and staff twice a week and has conducted more than 1.6 million tests since July. Major corporations, from Amazon to Tyson Foods, have rolled out extensive testing programs for their own employees. Now, a new analysis suggests that schools, businesses and other organizations that want to keep themselves safe should think beyond strictly themselves. By dedicating a substantial proportion of their tests to people in the surrounding community, institutions could reduce the number of Covid-19 cases among their members by as much as 25 percent, researchers report in a new paper, which has not yet been published in a scientific journal. "It's natural in an outbreak for people to become self-serving, self-focused," said Dr. Pardis Sabeti, a computational biologist at Harvard University and the Broad Institute who led the analysis. But, she added, "If you've been in enough outbreaks, you just understand that testing in a box doesn't make sense. These things are communicable, and they're coming in from the community."
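The intuition behind the finding - that an institution can lower its own case count by spending part of its testing budget on the surrounding community - can be sanity-checked with a toy simulation. The sketch below is emphatically not the authors' model: it is a deliberately simple two-population compartment model with invented parameters, in which a fixed daily testing budget is split between a campus and its close-contact community and detected infections are isolated. Varying community_share shows how the split changes the campus case count under these assumptions.

```python
import numpy as np

def simulate(days=40, campus_n=10_000, contacts_n=5_000,
             campus_prev=0.01, contacts_prev=0.06,
             tests_per_day=1_200, community_share=0.0,
             beta_within=0.25, beta_cross=0.10, recovery=0.10):
    """Toy two-population model: a campus coupled to its close-contact community.
    A fixed daily testing budget is split between the two groups; detected
    infections are isolated. All parameters are invented for illustration."""
    N = np.array([campus_n, contacts_n], dtype=float)
    I = N * np.array([campus_prev, contacts_prev])
    S = N - I
    tests = tests_per_day * np.array([1 - community_share, community_share])
    campus_cases = I[0]
    for _ in range(days):
        # Force of infection: mixing within each group plus cross-group mixing.
        force = beta_within * I / N + beta_cross * (I / N)[::-1]
        new_inf = np.minimum(force * S, S)
        detected = np.minimum(tests * I / N, I)   # random testing finds ~I/N per test
        removed = recovery * I + detected
        S -= new_inf
        I = np.maximum(I + new_inf - removed, 0.0)
        campus_cases += new_inf[0]
    return campus_cases

for share in (0.0, 0.45):
    print(f"community share {share:.0%}: ~{simulate(community_share=share):,.0f} campus cases")
```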
An analysis by researchers at Harvard University (HU), the Broad Institute (BI), and Colorado Mesa University (CMU) suggests organizations that expand their coronavirus testing resources to the wider community would better shield themselves from Covid-19. The team used real-world data from CMU to construct a baseline scenario in which 1% of people on campus and 6% of those in the surrounding county were infected, and CMU tested 12% of its members daily. Under such conditions, CMU would have about 200 Covid-19 cases after 40 days, but the analysis determined this number would fall by 25% if the school parceled out about 45% of its tests to community members in close contact with students and staff. Said Pardis Sabeti, a computational biologist at HU and BI who led the analysis, "If you've been in enough outbreaks, you just understand that testing in a box doesn't make sense."
[]
[]
[]
scitechnews
None
None
None
None
An analysis by researchers at Harvard University (HU), the Broad Institute (BI), and Colorado Mesa University (CMU) suggests organizations that expand their coronavirus testing resources to the wider community would better shield themselves from Covid-19. The team used real-world data from CMU to construct a baseline scenario in which 1% of people on campus and 6% of those in the surrounding county were infected, and CMU tested 12% of its members daily. Under such conditions, CMU would have about 200 Covid-19 cases after 40 days, but analysis determined this number would fall by 25% if the school parceled out about 45% of its tests to community members in close contact of students and staff. Said Pardis Sabeti, a computational biologist at HU and BI who led the analysis, "If you've been in enough outbreaks, you just understand that testing in a box doesn't makes sense." Last year, when the National Football League decided to stage its season in the midst of the coronavirus pandemic, it went all-in on testing. The league tested all players and personnel before they reported for summer training camp, and continued near-daily testing in the months that followed. Between Aug. 1 and the Super Bowl in early February, the N.F.L. administered almost one million tests to players and staff. Many other organizations have sought safety in mass testing. The University of Illinois is testing its students, faculty and staff twice a week and has conducted more than 1.6 million tests since July. Major corporations, from Amazon to Tyson Foods , have rolled out extensive testing programs for their own employees. Now, a new analysis suggests that schools, businesses and other organizations that want to keep themselves safe should think beyond strictly themselves. By dedicating a substantial proportion of their tests to people in the surrounding community, institutions could reduce the number of Covid-19 cases among their members by as much as 25 percent, researchers report in a new paper , which has not yet been published in a scientific journal. "It's natural in an outbreak for people to become self-serving, self-focused," said Dr. Pardis Sabeti, a computational biologist at Harvard University and the Broad Institute who lead the analysis. But, she added, "If you've been in enough outbreaks you just understand that testing in a box doesn't makes sense. These things are communicable, and they're coming in from the community."
683
Vaccination Megasites Lean on Enterprise Tech to Keep the Line Moving
Healthcare organizations and information technology (IT) leaders say popup Covid-19 vaccination megasites are using digital enterprise technology to accommodate surging numbers of inoculations as more Americans qualify to be immunized. Vaccination megasites across the country are using cloud communications, contact center, and systems management software from software firm NWN. NWN's Jim Sullivan said the facilities' systems must run on a secure and scalable network, which taps power and capacity from the cloud to provide flexibility for expansion. Dr. James Cardon at Connecticut-based Hartford HealthCare hospital network said enterprise IT keeps lines moving at megasites, adding, "We have overbuilt capacity" to allow for room to grow.
[]
[]
[]
scitechnews
None
None
None
None
Healthcare organizations and information technology (IT) leaders say popup Covid-19 vaccination megasites are using digital enterprise technology to accommodate surging numbers of inoculations as more Americans qualify to be immunized. Vaccination megasites across the country are using cloud communications, contact center, and systems management software from software firm NWN. NWN's Jim Sullivan said the facilities' systems must run on a secure and scalable network, which taps power and capacity from the cloud to provide flexibility for expansion. Dr. James Cardon at Connecticut-based Hartford HealthCare hospital network said enterprise IT keeps lines moving at megasites, adding, "We have overbuilt capacity" to allow for room to grow.
685
Ultrasound Reads Monkey Brains, Opening Path to Controlling Machines with Thought
Researchers at the California Institute of Technology have developed a method of predicting a monkey's intended eye or hand movements using ultrasound imaging. Their findings could help people who are paralyzed to control prostheses without requiring implants in their brains (although the technique does require a small piece of skull to be removed). Since functional ultrasound provides a less direct signal than implanted electrodes, the researchers tested whether the signal provides sufficient information for a computer to interpret the intended movement by inserting ultrasound transducers into the skulls of two rhesus macaque monkeys. The researchers found the algorithm was 78% accurate in predicting monkey eye movements, and 89% accurate in predicting an arm reach.
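The decoding step described above - predicting an intended movement from imaging-derived activity patterns - is, at its core, a supervised classification problem. The sketch below is not the Caltech team's algorithm; it trains an off-the-shelf linear classifier on synthetic "activity" features standing in for functional-ultrasound voxel signals, with cross-validated accuracy as the figure of merit, analogous to the 78% and 89% figures quoted above.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Synthetic stand-in data: 200 trials x 500 voxel features, two intended directions.
n_trials, n_voxels = 200, 500
labels = rng.integers(0, 2, size=n_trials)        # 0 = left, 1 = right
signal = np.zeros((n_trials, n_voxels))
signal[:, :40] = labels[:, None] * 0.8            # a small set of informative voxels
X = signal + rng.normal(scale=1.0, size=(n_trials, n_voxels))

# Regularized linear decoder with 10-fold cross-validation, a common pattern
# in brain-machine-interface decoding studies.
decoder = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
scores = cross_val_score(decoder, X, labels, cv=10)
print(f"cross-validated decoding accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```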
[]
[]
[]
scitechnews
None
None
None
None
Researchers at the California Institute of Technology have developed a method of predicting a monkey's intended eye or hand movements using ultrasound imaging. Their findings could help people who are paralyzed to control prostheses without requiring implants in their brains (although the technique does require a small piece of skull to be removed). Since functional ultrasound provides a less direct signal than implanted electrodes, the researchers tested whether the signal provides sufficient information for a computer to interpret the intended movement by inserting ultrasound transducers into the skulls of two rhesus macaque monkeys. The researchers found the algorithm was 78% accurate in predicting monkey eye movements, and 89% accurate in predicting an arm reach.
687
Newly-Wormable Windows Botnet Ballooning in Size
Researchers say a botnet targeting Windows devices is rapidly growing in size, thanks to a new infection technique that allows the malware to spread from computer to computer. The Purple Fox malware was first spotted in 2018 spreading through phishing emails and exploit kits, a way for threat groups to infect machines using existing security flaws. But researchers Amit Serper and Ophir Harpaz at security firm Guardicore, which discovered and revealed the new infection effort in a new blog post , say the malware now targets internet-facing Windows computers with weak passwords , giving the malware a foothold to spread more rapidly. The malware does this by trying to guess weak Windows user account passwords by targeting the server message block, or SMB - a component that lets Windows talk with other devices, like printers and file servers. Once the malware gains access to a vulnerable computer, it pulls a malicious payload from a network of close to 2,000 older and compromised Windows web servers and quietly installs a rootkit, keeping the malware persistently anchored to the computer while also making it much harder to be detected or removed. Once infected, the malware then closes the ports in the firewall it used to infect the computer to begin with, likely to prevent reinfection or other threat groups hijacking the already-hacked computer, the researchers said. The malware then generates a list of internet addresses and scans the internet for vulnerable devices with weak passwords to infect further, creating a growing network of ensnared devices. Botnets are formed when hundreds or thousands of hacked devices are enlisted into a network run by criminal operators, which are often then used to launch denial-of-network attacks to pummel organizations with junk traffic with the aim of knocking them offline. But with control of these devices, criminal operators can also use botnets to spread malware and spam, or to deploy file-encrypting ransomware on the infected computers. But this kind of wormable botnet presents a greater risk as it spreads largely on its own. Serper, Guardicore's vice president of security research for North America, said the wormable infection technique is "cheaper" to run than its earlier phishing and exploit kit effort. "The fact that it's an opportunistic attack that constantly scans the internet and looks for more vulnerable machines means that the attackers can sort of 'set it and forget it'," he said. It appears to be working. Purple Fox infections have rocketed by 600% since May 2020, according to data from Guardicore's own network of internet sensors. The actual number of infections is likely to be far higher, amounting to more than 90,000 infections in the past year. Guardicore published indicators of compromise to help networks identify if they have been infected. The researchers do not know what the botnet will be used for but warned that its growing size presents a risk to organizations. "We assume that this is laying the groundwork for something in the future," said Serper.
Amit Serper and Ophir Harpaz at Israeli security firm Guardicore say a botnet targeting Windows devices is expanding, due to a new infection method that lets malware spread between computers with weak passwords. The Purple Fox malware attempts to guess Windows user account passwords by targeting the server message block that allows Windows to communicate with other devices. Upon infiltration, Purple Fox pulls a malicious payload from a network of nearly 2,000 compromised Windows Web servers and installs a rootkit, keeping the malware latched on to the computer while complicating its detection or removal. It then seals the firewall ports through which it gained access, and produces a list of Internet addresses and scans the Internet for other targets. Guardicore said Purple Fox infections have soared 600% since May 2020.
[]
[]
[]
scitechnews
None
None
None
None
Amit Serper and Ophir Harpaz at Israeli security firm Guardicore say a botnet targeting Windows devices is expanding, due to a new infection method that lets malware spread between computers with weak passwords. The Purple Fox malware attempts to guess Windows user account passwords by targeting the server message block that allows Windows to communicate with other devices. Upon infiltration, Purple Fox pulls a malicious payload from a network of nearly 2,000 compromised Windows Web servers and installs a rootkit, keeping the malware latched on to the computer while complicating its detection or removal. It then seals the firewall ports through which it gained access, and produces a list of Internet addresses and scans the Internet for other targets. Guardicore said Purple Fox infections have soared 600% since May 2020. Researchers say a botnet targeting Windows devices is rapidly growing in size, thanks to a new infection technique that allows the malware to spread from computer to computer. The Purple Fox malware was first spotted in 2018 spreading through phishing emails and exploit kits, a way for threat groups to infect machines using existing security flaws. But researchers Amit Serper and Ophir Harpaz at security firm Guardicore, which discovered and revealed the new infection effort in a new blog post , say the malware now targets internet-facing Windows computers with weak passwords , giving the malware a foothold to spread more rapidly. The malware does this by trying to guess weak Windows user account passwords by targeting the server message block, or SMB - a component that lets Windows talk with other devices, like printers and file servers. Once the malware gains access to a vulnerable computer, it pulls a malicious payload from a network of close to 2,000 older and compromised Windows web servers and quietly installs a rootkit, keeping the malware persistently anchored to the computer while also making it much harder to be detected or removed. Once infected, the malware then closes the ports in the firewall it used to infect the computer to begin with, likely to prevent reinfection or other threat groups hijacking the already-hacked computer, the researchers said. The malware then generates a list of internet addresses and scans the internet for vulnerable devices with weak passwords to infect further, creating a growing network of ensnared devices. Botnets are formed when hundreds or thousands of hacked devices are enlisted into a network run by criminal operators, which are often then used to launch denial-of-network attacks to pummel organizations with junk traffic with the aim of knocking them offline. But with control of these devices, criminal operators can also use botnets to spread malware and spam, or to deploy file-encrypting ransomware on the infected computers. But this kind of wormable botnet presents a greater risk as it spreads largely on its own. Serper, Guardicore's vice president of security research for North America, said the wormable infection technique is "cheaper" to run than its earlier phishing and exploit kit effort. "The fact that it's an opportunistic attack that constantly scans the internet and looks for more vulnerable machines means that the attackers can sort of 'set it and forget it'," he said. It appears to be working. Purple Fox infections have rocketed by 600% since May 2020, according to data from Guardicore's own network of internet sensors. 
The actual number of infections is likely to be far higher, amounting to more than 90,000 infections in the past year. Guardicore published indicators of compromise to help networks identify if they have been infected. The researchers do not know what the botnet will be used for but warned that its growing size presents a risk to organizations. "We assume that this is laying the groundwork for something in the future," said Serper.
688
Image Analysis Based on ML Reliably Identifies Hematological Malignancies Challenging for the Human Eye
Myelodysplastic syndrome (MDS) is a disease of the stem cells in the bone marrow, which disturbs the maturing and differentiation of blood cells. Annually, some 200 Finns are diagnosed with MDS, which can develop into acute leukaemia. Globally, the incidence of MDS is 4 cases per 100,000 person years. To diagnose MDS, a bone marrow sample is needed to also investigate genetic changes in bone marrow cells. The syndrome is classified into groups to determine the nature of the disorder in more detail. In the study conducted at the University of Helsinki, microscopic images of MDS patients' bone marrow samples were examined utilising an image analysis technique based on machine learning. The samples were stained with haematoxylin and eosin (H&E staining), a procedure that is part of the routine diagnostics for the disease. The slides were digitised and analysed with the help of computational deep learning models. The study was published in the Blood Cancer Discovery , a journal of the American Association for Cancer Research, and the results can also be explored with an interactive tool . By employing machine learning, the digital image dataset could be analysed to accurately identify the most common genetic mutations affecting the progression of the syndrome, such as acquired mutations and chromosomal aberrations. The higher the number of aberrant cells in the samples, the higher the reliability of the results generated by the prognostic models. One of the greatest challenges of utilising neural network models is understanding the criteria on which they base their conclusions drawn from data, such as information contained in images. The recently released study succeeded in determining what deep learning models see in tissue samples when they have been taught to look for, for example, genetic mutations related to MDS. The technique provides new information on the effects of complex diseases on bone marrow cells and the surrounding tissues. "The study confirms that computational analysis helps to identify features that elude the human eye. Moreover, data analysis helps to collect quantitative data on cellular changes and their relevance to the patient's prognosis," says Professor Satu Mustjoki .
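To make the deep learning step concrete: analyses of this kind typically tile the digitized slide into small image patches and train a convolutional network to predict a label (here, a genetic feature) per tile. The sketch below is a generic transfer-learning pattern in PyTorch, not the Helsinki group's pipeline; random tensors stand in for H&E tile images, and the two-class head is a hypothetical "mutation present / absent" label.

```python
import torch
import torch.nn as nn
from torchvision import models

# Generic tile-level classifier: pretrained ResNet-18 backbone with a new 2-class head.
device = "cuda" if torch.cuda.is_available() else "cpu"
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 2)
model = model.to(device)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# Placeholder batch: 8 RGB tiles of 224x224 pixels with random labels.
# In practice these would come from a DataLoader over digitized slide tiles.
tiles = torch.randn(8, 3, 224, 224, device=device)
labels = torch.randint(0, 2, (8,), device=device)

model.train()
for step in range(5):                      # a few dummy optimization steps
    optimizer.zero_grad()
    logits = model(tiles)
    loss = criterion(logits, labels)
    loss.backward()
    optimizer.step()
    print(f"step {step}: loss = {loss.item():.3f}")

# A sample-level prediction is then commonly obtained by aggregating tile
# probabilities, for example by averaging softmax outputs over all tiles.
```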
Since machine learning (ML)-based image analysis can spot details in tissue that may elude the human eye, researchers at Finland's University of Helsinki used the technique to analyze microscopic images of bone marrow from myelodysplastic syndrome (MDS) patients. The researchers digitized and examined sample slides using computational deep learning models, which accurately identified the most frequent genetic mutations affecting MDS progression. Olivier Elemento at Weill Cornell Medicine's Caryl and Israel Englander Institute for Precision Medicine said, "[This] study provides new insights into the pathobiology of MDS and paves the way for increased use of artificial intelligence for the assessment and diagnosis of hematological malignancies."
[]
[]
[]
scitechnews
None
None
None
None
Since machine learning (ML) -based image analysis can spot details in tissue that may elude the human eye, researchers at Finland's University of Helsinki used the technique to analyze microscopic images of bone marrow from myelodysplastic syndrome (MDS) patients. The researchers digitized and examined sample slides using computational deep learning models, which accurately identified the most frequent genetic mutations affecting MDS progression. Olivier Elemento at Weill Cornell Medicine's Caryl and Israel Englander Institute for Precision Medicine said, "[This] study provides new insights into the pathobiology of MDS and paves the way for increased use of artificial intelligence for the assessment and diagnosis of hematological malignancies." Myelodysplastic syndrome (MDS) is a disease of the stem cells in the bone marrow, which disturbs the maturing and differentiation of blood cells. Annually, some 200 Finns are diagnosed with MDS, which can develop into acute leukaemia. Globally, the incidence of MDS is 4 cases per 100,000 person years. To diagnose MDS, a bone marrow sample is needed to also investigate genetic changes in bone marrow cells. The syndrome is classified into groups to determine the nature of the disorder in more detail. In the study conducted at the University of Helsinki, microscopic images of MDS patients' bone marrow samples were examined utilising an image analysis technique based on machine learning. The samples were stained with haematoxylin and eosin (H&E staining), a procedure that is part of the routine diagnostics for the disease. The slides were digitised and analysed with the help of computational deep learning models. The study was published in the Blood Cancer Discovery , a journal of the American Association for Cancer Research, and the results can also be explored with an interactive tool . By employing machine learning, the digital image dataset could be analysed to accurately identify the most common genetic mutations affecting the progression of the syndrome, such as acquired mutations and chromosomal aberrations. The higher the number of aberrant cells in the samples, the higher the reliability of the results generated by the prognostic models. One of the greatest challenges of utilising neural network models is understanding the criteria on which they base their conclusions drawn from data, such as information contained in images. The recently released study succeeded in determining what deep learning models see in tissue samples when they have been taught to look for, for example, genetic mutations related to MDS. The technique provides new information on the effects of complex diseases on bone marrow cells and the surrounding tissues. "The study confirms that computational analysis helps to identify features that elude the human eye. Moreover, data analysis helps to collect quantitative data on cellular changes and their relevance to the patient's prognosis," says Professor Satu Mustjoki .
689
Scientist Bridges the Gap Between Quantum Simulators, Quantum Computers
A researcher from Skoltech has filled in the gaps connecting quantum simulators with more traditional quantum computers, discovering a new computationally universal model of quantum computation, the variational model. The paper was published as a Letter in the journal Physical Review A. The work made the Editors' Suggestion list. A quantum simulator is built to share properties with a target quantum system we wish to understand. Early quantum simulators were "dedicated" - that means they could not be programmed, tuned or adjusted and so could mimic one or very few target systems. Modern quantum simulators enable some control over their settings, offering more possibilities. In contrast to quantum simulators, the long-promised quantum computer is a fully programmable quantum system. While building a fully programmable quantum processor remains elusive, noisy quantum processors that can execute short quantum programs and offer limited programmability are now available in leading laboratories around the world. These quantum processors are closer to the more established quantum simulators. Despite today's prototype quantum processors suffering from noise and a general lack of controllability, we have seen amazing demonstrations of quantum computational supremacy by Google as well as scientists in China. Quantum computational supremacy shows that quantum processors can perform certain tasks dramatically faster than even the world's leading supercomputers. Quantum computational supremacy was achieved using only limited programmability: a fixed and short quantum program, or circuit, can be tuned, followed by simplistic quantum measurements. Researchers around the world are questioning how far this simplistic approach might be pushed towards applications that are more practical than quantum supremacy. "When does a quantum simulator become a quantum computer? The quantum processors at Google and elsewhere have often been described as being 'situated somewhere between a dedicated quantum simulator and a programmable quantum computer.' The ad hoc approach used by Google and others was to variationally tune a quantum circuit to minimize a cost function calculated classically. This approach turns out to represent a universal model of quantum computation, meaning that a quantum simulator only needs limited additional control to execute general quantum algorithms," Skoltech's Associate Professor Jacob Biamonte notes. Biamonte, who heads the Laboratory for Quantum Information Processing, has proved, as the editors of the journal note, "that the contemporary variational approach to quantum-enhanced algorithms enables a universal model of quantum computation." The editors went on to state, "This brings the resources required for universal quantum computation closer to contemporary quantum processors." "The study bridges the gap between a programmable quantum simulator and a universal quantum computer. The analysis provided a new means to implement quantum algorithms using a variational approach," Biamonte says.
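The variational recipe Biamonte refers to - classically tune the parameters of a fixed, shallow quantum circuit so that a measured cost function is minimized - can be illustrated with a tiny simulated example. The sketch below is a generic two-qubit variational minimization written in plain NumPy (statevector simulation, no quantum hardware, and a toy Hamiltonian chosen for illustration); it demonstrates the variational approach in general, not the specific construction in the Physical Review A paper.

```python
import numpy as np
from scipy.optimize import minimize

# Single-qubit gates and helpers for a 2-qubit statevector simulation.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Toy problem Hamiltonian: H = Z(x)Z + 0.5 * X(x)I
H = np.kron(Z, Z) + 0.5 * np.kron(X, I2)

def ansatz_state(params):
    """Fixed, shallow parameterized circuit: Ry layer, entangling CNOT, Ry layer."""
    t1, t2, t3, t4 = params
    state = np.zeros(4, dtype=complex)
    state[0] = 1.0                                   # start in |00>
    state = np.kron(ry(t1), ry(t2)) @ state
    state = CNOT @ state
    state = np.kron(ry(t3), ry(t4)) @ state
    return state

def cost(params):
    """Classically evaluated cost: the energy <psi|H|psi> of the prepared state."""
    psi = ansatz_state(params)
    return float(np.real(np.conj(psi) @ H @ psi))

# The classical outer loop tunes the circuit parameters to minimize the cost.
result = minimize(cost, x0=np.random.default_rng(1).uniform(0, 2 * np.pi, 4),
                  method="COBYLA")
exact = np.linalg.eigvalsh(H).min()
print(f"variational minimum: {result.fun:.4f}   exact ground energy: {exact:.4f}")
```

On real hardware the cost would come from repeated measurements of the tuned circuit rather than from a statevector, but the division of labor is the same: a limited, tunable quantum device in the inner loop, and a classical optimizer outside it.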
Jacob Biamonte at Russia's Skolkovo Institute of Science and Technology has bridged the gap between programmable quantum simulators and traditional quantum computers with his discovery of a computationally universal model of quantum computation. Biamonte cited the technique of variationally tuning a quantum circuit used by Google and others in order to minimize a classically calculated cost function, explaining, "This approach turns out to represent a universal model of quantum computation, meaning that a quantum simulator only needs limited additional control to execute general quantum algorithms." Biamonte said the analysis provided "a new means to implement quantum algorithms using a variational approach."
[]
[]
[]
scitechnews
None
None
None
None
Jacob Biamonte at Russia's Skolkovo Institute of Science and Technology has bridged the gap between programmable quantum simulators and traditional quantum computers with his discovery of a computationally universal model of quantum computation. Biamonte cited the technique of variationally tuning a quantum circuit used by Google and others in order to minimize a classically calculated cost function, explaining, "This approach turns out to represent a universal model of quantum computation, meaning that a quantum simulator only needs limited additional control to execute general quantum algorithms." Biamonte said the analysis provided "a new means to implement quantum algorithms using a variational approach." A researcher from Skoltech has filled in the gaps connecting quantum simulators with more traditional quantum computers, discovering a new computationally universal model of quantum computation, the variational model. The paper was published as a Letter in the journal Physical Review A. The work made the Editors' Suggestion list. A quantum simulator is built to share properties with a target quantum system we wish to understand. Early quantum simulators were "dedicated" - that means they could not be programmed, tuned or adjusted and so could mimic one or very few target systems. Modern quantum simulators enable some control over their settings, offering more possibilities. In contrast to quantum simulators, the long-promised quantum computer is a fully programmable quantum system. While building a fully programmable quantum processor remains elusive, noisy quantum processors that can execute short quantum programs and offer limited programmability are now available in leading laboratories around the world. These quantum processors are closer to the more established quantum simulators. Despite today's prototype quantum processors suffering from noise and a general lack of controllability, we have seen amazing demonstrations of quantum computational supremacy by Google as well as scientists in China. Quantum computational supremacy shows that quantum processors can perform certain tasks dramatically faster than even the world's leading supercomputers. Quantum computational supremacy was achieved using only limited programmability: a fixed and short quantum program, or circuit, can be tuned, followed by simplistic quantum measurements. Researchers around the world are questioning how far this simplistic approach might be pushed towards applications that are more practical than quantum supremacy. "When does a quantum simulator become a quantum computer? The quantum processors at Google and elsewhere have often been described as being "situated somewhere between a dedicated quantum simulator and a programmable quantum computer." The ad hoc approach used by Google and others was to variational tune a quantum circuit to minimize a cost function calculated classically. This approach turns out to represent a universal model of quantum computation, meaning that a quantum simulator only needs limited additional control to execute general quantum algorithms," Skoltech's Associate Professor Jacob Biamonte notes. Biamonte, who heads the Laboratory for Quantum Information Processing, has proved, as the editors of the journal note, "that the contemporary variational approach to quantum-enhanced algorithms enables a universal model of quantum computation." The editors went on to state, "This brings the resources required for universal quantum computation closer to contemporary quantum processors." 
"The study bridges the gap between a programmable quantum simulator and a universal quantum computer. The analysis provided a new means to implement quantum algorithms using a variational approach," Biamonte says. Contact information: Skoltech Communications +7 (495) 280 14 81 *protected email*
690
How U.K., South Africa Coronavirus Variants Escape Immunity
All viruses mutate as they make copies of themselves to spread and thrive. SARS-CoV-2, the virus that causes COVID-19, is proving to be no different. There are currently more than 4,000 variants of COVID-19, which has already killed more than 2.7 million people worldwide during the pandemic. The UK variant, also known as B.1.1.7, was first detected in September 2020, and is now causing 98 percent of all COVID-19 cases in the United Kingdom. And it appears to be gaining a firm grip in about 100 other countries it has spread to in the past several months, including France, Denmark, and the United States. The World Health Organization says B.1.1.7 is one of several variants of concern along with others that have emerged in South Africa and Brazil. "The UK, South Africa, and Brazil variants are more contagious and escape immunity easier than the original virus," said Victor Padilla-Sanchez, a research scientist at The Catholic University of America. "We need to understand why they are more infectious and, in many cases, more deadly." All three variants have undergone changes to their spike protein - the part of the virus which attaches to human cells. As a result, they are better at infecting cells and spreading. In a research paper published in January 2021 in Research Ideas and Outcomes, Padilla-Sanchez discusses the UK and South African variants in detail. He presents a computational analysis of the structure of the spike glycoprotein bound to the ACE2 receptor where the mutations have been introduced. His paper outlines the reason why these variants bind better to human cells. "I've been analyzing a recently published structure of the SARS-CoV-2 spike bound to the ACE2 receptor and found why the new variants are more transmissible," he said. "These findings have been obtained using UC San Francisco Chimera software and molecular dynamics simulations using the Frontera supercomputer of the Texas Advanced Computing Center (TACC)." Padilla-Sanchez found that the UK variant has many mutations in the spike glycoprotein, but most important is one mutation, N501Y, in the receptor binding domain that interacts with the ACE2 receptor. "This N501Y mutation provides a much higher efficiency of binding, which in turn makes the virus more infectious. This variant is replacing the previous virus in the United Kingdom and is spreading in many other places in the world," he said. The South Africa variant emerged in October 2020, and has more important changes in the spike protein, making it more dangerous than the UK variant. It involves a key mutation - called E484K - that helps the virus evade antibodies and parts of the immune system that can fight coronavirus based on experience from prior infection or a vaccine. Since the variant escapes immunity, the body will not be able to fight the virus. "We're starting to see the South Africa variant here in the U.S.," he said. To obtain these findings, Padilla-Sanchez performed structural analysis of the virus's crystal structure along with molecular dynamics simulations. (Video panels: 1. UK variant, zoomed in; 2. South Africa variant, zoomed in; 3. both variants with residue labels; 4. both variants without residue labels.) The video illustrates the receptor binding domain from the coronavirus spike and the ACE2 receptor. The three amino acids are 501, 417, and 484. Amino acid 501 interacts with Y41, and 417 interacts with H34. The video shows an extended simulation so people can see the interactions in real time.
Credit: Victor Padilla-Sanchez, The Catholic University of America. "The main computational challenge while doing this research was to find a computer powerful enough to do the molecular dynamics task, which generates very big files, and requires a great amount of memory. This research would not have been possible without the Frontera supercomputer," Padilla-Sanchez said. According to Padilla-Sanchez, the current vaccines will not necessarily treat the variants. "The variants will require their own specific vaccines. We'll need as many vaccines for variants that appear." Going forward, Padilla-Sanchez will continue to research the changes taking place with SARS-CoV-2. "This was a very fast project - the computational study lasted one month," he said. "There are many other labs doing wet lab experiments, but there aren't many computational studies. That's why I decided to do this important work now." This study, called "SARS-CoV-2 Structural Analysis of Receptor Binding Domain New Variants from United Kingdom and South Africa," was published in Research Ideas and Outcomes in January 2021. The researcher who worked on this study is Victor Padilla-Sanchez from The Catholic University of America.
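The kind of structural question asked here - how close the mutated residue 501 of the spike receptor binding domain sits to Y41 of ACE2 - can be checked directly from a deposited structure. The sketch below uses Biopython against PDB entry 6M0J, a published RBD-ACE2 complex; the chain identifiers (E for the RBD, A for ACE2) are assumptions that should be verified against the entry's header, and this is a simple distance measurement, not the Chimera and molecular-dynamics workflow used in the study.

```python
from Bio.PDB import PDBList, PDBParser

# Download a SARS-CoV-2 RBD / human ACE2 complex (PDB 6M0J) and measure the
# C-alpha distance between spike residue 501 and ACE2 residue 41.
pdb_path = PDBList().retrieve_pdb_file("6M0J", file_format="pdb", pdir=".")
structure = PDBParser(QUIET=True).get_structure("rbd_ace2", pdb_path)
model = structure[0]

rbd_chain, ace2_chain = "E", "A"   # assumed chain IDs; check the PDB header
res_501 = model[rbd_chain][501]    # spike position 501 (N in the reference strain)
res_41 = model[ace2_chain][41]     # ACE2 tyrosine 41

# Bio.PDB overloads subtraction on Atom objects to return the distance in angstroms.
distance = res_501["CA"] - res_41["CA"]
print(f"{res_501.get_resname()}501 (RBD) to {res_41.get_resname()}41 (ACE2): {distance:.1f} A")
```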
Catholic University of America (CUA) researchers recently completed an analysis of immunity-resistant coronavirus variants in the U.K. and South Africa. CUA's Victor Padilla-Sanchez said the team used molecular dynamics simulations via the Texas Advanced Computing Center's Frontera supercomputer, along with the University of California, San Francisco's Chimera software. They found both the U.K. and South African variants exhibited significant mutations in the virus' spike glycoprotein, influencing the pathogen's infectiousness. Said Padilla-Sanchez, "The main computational challenge while doing this research was to find a computer powerful enough to do the molecular dynamics task, which generates very big files, and requires a great amount of memory. This research would not have been possible without the Frontera supercomputer."
[]
[]
[]
scitechnews
None
None
None
None
Catholic University of America (CUA) researchers recently completed an analysis of immunity-resistant coronavirus variants in the U.K. and South Africa. CUA's Victor Padilla-Sanchez said the team used molecular dynamics simulations via the Texas Advanced Computing Center's Frontera supercomputer, along with the University of California, San Francisco's Chimera software. They found both the U.K. and South African variants exhibited significant mutations in the virus' spike glycoprotein, influencing the pathogen's infectiousness. Said Padilla-Sanchez, "The main computational challenge while doing this research was to find a computer powerful enough to do the molecular dynamics task, which generates very big files, and requires a great amount of memory. This research would not have been possible without the Frontera supercomputer." All viruses mutate as they make copies of themselves to spread and thrive. SARS-CoV-2, the virus the causes COVID-19, is proving to be no different. There are currently more than 4,000 variants of COVID-19, which has already killed more than 2.7 million people worldwide during the pandemic. The UK variant, also known as B.1.1.7, was first detected in September 2020, and is now causing 98 percent of all COVID-19 cases in the United Kingdom. And it appears to be gaining a firm grip in about 100 other countries it has spread to in the past several months, including France, Denmark, and the United States. The World Health Organization says B.1.1.7 is one of several variants of concern along with others that have emerged in South Africa and Brazil. "The UK, South Africa, and Brazil variants are more contagious and escape immunity easier than the original virus," said Victor Padilla-Sanchez , a research scientist at The Catholic University of America. "We need to understand why they are more infectious and, in many cases, more deadly." Victor Padilla-Sanchez, Research Scientist, The Catholic University of America. All three variants have undergone changes to their spike protein - the part of the virus which attaches to human cells. As a result, they are better at infecting cells and spreading. In a research paper published in January 2021 in Research Ideas and Outcomes , Padilla-Sanchez discusses the UK and South African variants in detail. He presents a computational analysis of the structure of the spike glycoprotein bound to the ACE2 receptor where the mutations have been introduced. His paper outlines the reason why these variants bind better to human cells. "I've been analyzing a recently published structure of the SARS-CoV-2 spike bound to the ACE2 receptor and found why the new variants are more transmissible," he said. "These findings have been obtained using UC San Francisco Chimera software and molecular dynamics simulations using the Frontera supercomputer of the Texas Advanced Computing Center (TACC)." Padilla-Sanchez found that the UK variant has many mutations in the spike glycoprotein, but most important is one mutation, N501Y, in the receptor binding domain that interacts with the ACE2 receptor. "This N501Y mutation provides a much higher efficiency of binding, which in turn makes the virus more infectious. This variant is replacing the previous virus In the United Kingdom and is spreading in many other places in the world," he said. The South Africa variant emerged in October 2020, and has more important changes in the spike protein, making it more dangerous than the UK variant. 
It involves a key mutation - called E484K - that helps the virus evade antibodies and parts of the immune system that can fight coronavirus based on experience from prior infection or a vaccine. Since the variant escapes immunity the body will not be able to fight the virus. "We're starting to see the South Africa variant here in the U.S.," he said. Padilla-Sanchez performed structural analysis, which studied the virus's crystal structure; and molecular dynamics to obtain these findings. 1. UK Variant (zoom in) 2. SA Variant (zoom in) 3. Both with residue labels. 4. Both without residue labels. The video above illustrates the receptor binding domain from the coronavirus spike and the ACE2 receptor. The three amino acids are 501, 417, and 484. Amino acid 501 interacts with Y41, and 417 interacts with H34. The video shows an extended simulation so people can see the interactions in real time. Credit: Victor Padilla-Sanchez, The Catholic University of America. "The main computational challenge while doing this research was to find a computer powerful enough to do the molecular dynamics task, which generates very big files, and requires a great amount of memory. This research would not have been possible without the Frontera supercomputer," Padilla-Sanchez said. According to Padilla-Sanchez, the current vaccines will not necessarily treat the variants. "The variants will require their own specific vaccines. We'll need as many vaccines for variants that appear." Going forward, Padilla-Sanchez will continue to research the changes taking place with SARS-CoV-2. "This was a very fast project - the computational study lasted one month," he said. "There are many other labs doing wet lab experiments, but there aren't many computational studies. That's why I decided to do this important work now." This study, called "SARS-CoV-2 Structural Analysis of Receptor Binding Domain New Variants from United Kingdom and South Africa," was published in Research Ideas and Outcomes in January 2021. The researcher who worked on this study is Victor Padilla-Sanchez from The Catholic University of America.
691
More Than Words: Using AI to Map How the Brain Understands Sentences
Have you ever wondered why you are able to hear a sentence and understand its meaning - given that the same words in a different order would have an entirely different meaning? New research involving neuroimaging and A.I., describes the complex network within the brain that comprehends the meaning of a spoken sentence. "It has been unclear whether the integration of this meaning is represented in a particular site in the brain, such as the anterior temporal lobes, or reflects a more network level operation that engages multiple brain regions," said Andrew Anderson, Ph.D. , research assistant professor in the University of Rochester Del Monte Institute for Neuroscience and lead author on of the study which was published in the Journal of Neuroscience . "The meaning of a sentence is more than the sum of its parts. Take a very simple example - 'the car ran over the cat' and 'the cat ran over the car' - each sentence has exactly the same words, but those words have a totally different meaning when reordered." The study is an example of how the application of artificial neural networks, or A.I., are enabling researchers to unlock the extremely complex signaling in the brain that underlies functions such as processing language. The researchers gather brain activity data from study participants who read sentences while undergoing fMRI. These scans showed activity in the brain spanning across a network of different regions - anterior and posterior temporal lobes, inferior parietal cortex, and inferior frontal cortex. Using the computational model InferSent - an A.I. model developed by Facebook trained to produce unified semantic representations of sentences - the researchers were able to predict patterns of fMRI activity reflecting the encoding of sentence meaning across those brain regions. "It's the first time that we've applied this model to predict brain activity within these regions, and that provides new evidence that contextualized semantic representations are encoded throughout a distributed language network, rather than at a single site in the brain." Anderson and his team believe the findings could be helpful in understanding clinical conditions. "We're deploying similar methods to try to understand how language comprehension breaks down in early Alzheimer's disease. We are also interested in moving the models forward to predict brain activity elicited as language is produced. The current study had people read sentences, in the future we're interested in moving forward to predict brain activity as people might speak sentences." Additional co-authors include Edmund Lalor, Ph.D. , Rajeev Raizada, Ph.D. , and Scott Grimm, Ph.D. , with the University of Rochester, Douwe Kiela with Facebook A.I. Research, and Jeffrey Binder, M.D., Leonardo Fernandino, Ph.D., Colin Humphries, Ph.D., and Lisa Conant, Ph.D. with the Medical College of Wisconsin. The research was supported with funding from the Del Monte Institute for Neuroscience's Schimtt Program on Integrative Neuroscience and the Intelligence Advanced Research Projects Activity.
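The core analysis - predicting fMRI activity patterns from sentence embeddings - is an "encoding model": a regularized linear map from an embedding vector to voxel responses, evaluated on held-out sentences. The sketch below is a generic version of that idea using ridge regression on synthetic data; it stands in for, and is not, the exact InferSent-based pipeline used in the paper.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)

# Synthetic stand-ins: 240 sentences, 4096-dim sentence embeddings (InferSent-sized),
# and 1,000 voxels whose responses are a noisy linear function of the embeddings.
n_sentences, emb_dim, n_voxels = 240, 4096, 1000
embeddings = rng.normal(size=(n_sentences, emb_dim))
true_weights = rng.normal(scale=0.02, size=(emb_dim, n_voxels))
fmri = embeddings @ true_weights + rng.normal(scale=1.0, size=(n_sentences, n_voxels))

# Cross-validated encoding model: fit on training sentences, predict held-out activity,
# and score each voxel by the correlation between predicted and observed responses.
scores = []
for train, test in KFold(n_splits=5, shuffle=True, random_state=0).split(embeddings):
    model = Ridge(alpha=100.0).fit(embeddings[train], fmri[train])
    pred = model.predict(embeddings[test])
    observed = fmri[test]
    for v in range(n_voxels):
        scores.append(np.corrcoef(pred[:, v], observed[:, v])[0, 1])

print(f"mean prediction correlation across voxels: {np.mean(scores):.3f}")
```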
Researchers at the University of Rochester Medical Center (URMC) and the Medical College of Wisconsin combined neuroimaging and artificial intelligence (AI) to describe the brain's mechanism for understanding sentences. The team performed functional magnetic resonance imaging (fMRI) scans on study participants as they read sentences, which indicated that brain activity crossed a network of different regions. Using Facebook's InferSent AI model, the researchers could predict patterns of fMRI activity that mirrored the encoding of a sentence's meaning across those regions. URMC's Andrew Anderson said, "It's the first time that we've applied this model to predict brain activity within these regions, and that provides new evidence that contextualized semantic representations are encoded throughout a distributed language network, rather than at a single site in the brain."
[]
[]
[]
scitechnews
None
None
None
None
Researchers at the University of Rochester Medical Center (URMC) and the Medical College of Wisconsin combined neuroimaging and artificial intelligence (AI) to describe the brain's mechanism for understanding sentences. The team performed functional magnetic resonance imaging (fMRI) scans on study participants as they read sentences, which indicated that brain activity crossed a network of different regions. Using Facebook's InferSent AI model, the researchers could predict patterns of fMRI activity that mirrored the encoding of a sentence's meaning across those regions. URMC's Andrew Anderson said, "It's the first time that we've applied this model to predict brain activity within these regions, and that provides new evidence that contextualized semantic representations are encoded throughout a distributed language network, rather than at a single site in the brain." Have you ever wondered why you are able to hear a sentence and understand its meaning - given that the same words in a different order would have an entirely different meaning? New research involving neuroimaging and A.I., describes the complex network within the brain that comprehends the meaning of a spoken sentence. "It has been unclear whether the integration of this meaning is represented in a particular site in the brain, such as the anterior temporal lobes, or reflects a more network level operation that engages multiple brain regions," said Andrew Anderson, Ph.D. , research assistant professor in the University of Rochester Del Monte Institute for Neuroscience and lead author on of the study which was published in the Journal of Neuroscience . "The meaning of a sentence is more than the sum of its parts. Take a very simple example - 'the car ran over the cat' and 'the cat ran over the car' - each sentence has exactly the same words, but those words have a totally different meaning when reordered." The study is an example of how the application of artificial neural networks, or A.I., are enabling researchers to unlock the extremely complex signaling in the brain that underlies functions such as processing language. The researchers gather brain activity data from study participants who read sentences while undergoing fMRI. These scans showed activity in the brain spanning across a network of different regions - anterior and posterior temporal lobes, inferior parietal cortex, and inferior frontal cortex. Using the computational model InferSent - an A.I. model developed by Facebook trained to produce unified semantic representations of sentences - the researchers were able to predict patterns of fMRI activity reflecting the encoding of sentence meaning across those brain regions. "It's the first time that we've applied this model to predict brain activity within these regions, and that provides new evidence that contextualized semantic representations are encoded throughout a distributed language network, rather than at a single site in the brain." Anderson and his team believe the findings could be helpful in understanding clinical conditions. "We're deploying similar methods to try to understand how language comprehension breaks down in early Alzheimer's disease. We are also interested in moving the models forward to predict brain activity elicited as language is produced. The current study had people read sentences, in the future we're interested in moving forward to predict brain activity as people might speak sentences." Additional co-authors include Edmund Lalor, Ph.D. , Rajeev Raizada, Ph.D. 
, and Scott Grimm, Ph.D. , with the University of Rochester, Douwe Kiela with Facebook A.I. Research, and Jeffrey Binder, M.D., Leonardo Fernandino, Ph.D., Colin Humphries, Ph.D., and Lisa Conant, Ph.D. with the Medical College of Wisconsin. The research was supported with funding from the Del Monte Institute for Neuroscience's Schimtt Program on Integrative Neuroscience and the Intelligence Advanced Research Projects Activity.
693
Control System Helps Several Drones Team Up to Deliver Heavy Packages
Many parcel delivery drones of the future are expected to handle packages weighing five pounds or less, a restriction that would allow small, standardized UAVs to handle a large percentage of the deliveries now done by ground vehicles. But will that relegate heavier packages to slower delivery by conventional trucks and vans? A research team at the Georgia Institute of Technology has developed a modular solution for handling larger packages without the need for a complex fleet of drones of varying sizes. By allowing teams of small drones to collaboratively lift objects using an adaptive control algorithm, the strategy could allow a wide range of packages to be delivered using a combination of several standard-sized vehicles. Beyond simplifying the drone fleet, the work could provide more robust drone operations and reduce the noise and safety concerns involved in operating large autonomous UAVs in populated areas. In addition to commercial package delivery, the system might also be used by the military to resupply small groups of soldiers in the field. "A delivery truck could carry a dozen drones in the back, and depending on how heavy a particular package is, it might use as many as six drones to carry the package," said Jonathan Rogers , the Lockheed Martin Associate Professor of Avionics Integration in Georgia Tech's Daniel Guggenheim School of Aerospace Engineering . "That would allow flexibility in the weight of the packages that could be delivered and eliminate the need to build and maintain several different sizes of delivery drones." The research was supported, in part, by a National Science Foundation graduate student fellowship and by the Hives independent research and development program of the Georgia Tech Research Institute. A paper on the research has been submitted to the Journal of Aircraft . A centralized computer system developed by graduate student Kevin Webb would monitor each of the drones lifting a package, sharing information about their location and the thrust being provided by their motors. The control system would coordinate the issuance of commands for navigation and delivery of the package. "The idea is to make multi-UAV cooperative flight easy from the user perspective," Rogers said. "We take care of the difficult issues using the onboard intelligence, rather than expecting a human to precisely measure the package weight, center of gravity, and drone relative positions. We want to make this easy enough so that a package delivery driver could operate the system consistently." The challenges of controlling a group of robots connected together to lift a package is more complex in many ways than controlling a swarm of robots that fly independently. "Most swarm work involves vehicles that are not connected, but flying in formations," Rogers said. "In that case, the individual dynamics of a specific vehicle are not constrained by what the other vehicles are doing. For us, the challenge is that the vehicles are being pulled in different directions by what the other vehicles connected to the package are doing." The team of drones would autonomously connect to a docking structure attached to a package, using an infrared guidance system that eliminates the need for humans to attach the vehicles. That could come in handy for drones sent to retrieve packages that a customer is returning. By knowing how much thrust they are producing and the altitude they are maintaining, the drone teams could even estimate the weight of the package they're picking up. 
Webb and Rogers have built a demonstration in which four small quadrotor drones work together to lift a box that's 2 feet by 2 feet by 2 feet and weighs 12 pounds. The control algorithm isn't limited to four vehicles and could manage "as many vehicles as you could put around the package," Rogers said. For the military, the modular cargo system could allow squads of soldiers at remote locations to be resupplied without the cost or risk of operating a large autonomous helicopter. A military UAV package retrieval team could be made up of individual vehicles carried by each soldier. "That would distribute a big lifting capability in smaller packages, which equates to small drones that could be used to team up," Rogers said. "Putting small drones together would allow them to do bigger things than they could do individually." Bringing multiple vehicles together creates a more difficult control challenge, but Rogers argues the benefits are worth the complexity. "The idea of having multiple machines working together provides better scalability than building a larger device every time you have a larger task," he said. "We think this is the right way to fill that gap." Using multiple drones to carry a heavy package could also allow more redundancy in the delivery system. Should one of the drones fail, the others should be able to pick up the load - an issue managed by the central control system. That part of the control strategy hasn't yet been tested, but it is part of Rogers' plan for future development of the system. More research is also needed on the docking system that connects the drones to packages. The structures will have to be made strong and rigid enough to connect to and lift the packages, while being inexpensive enough to be disposable. "I think the major technologies are already here, and given an adequate investment, a system could be fielded within five years to deliver packages with multiple drones," Rogers said. "It's not a technical challenge as much as it is a regulatory issue and a question of societal acceptance."
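The weight-estimation idea mentioned above lends itself to a quick back-of-the-envelope illustration. The sketch below is not Georgia Tech's adaptive controller, which also has to account for the package's center of gravity and the drones' relative positions; it only shows how a central coordinator could infer payload mass from the hover thrusts the drones report and then split the required lift evenly. The drone masses, thrust readings, and function names are invented for illustration.

# Illustrative sketch only -- not the Georgia Tech adaptive controller.
# At a steady hover, the total thrust the team produces must support the
# drones plus the package, so the package weight can be inferred from the
# thrusts the vehicles report to the central coordinator.

G = 9.81  # m/s^2

def estimate_payload_mass(reported_thrusts_n, drone_masses_kg):
    """Estimate package mass (kg) from per-drone hover thrust reports (N)."""
    supported_mass = sum(reported_thrusts_n) / G   # everything the thrust is holding up
    return supported_mass - sum(drone_masses_kg)

def share_thrust_commands(payload_mass_kg, drone_masses_kg):
    """Naively split the required hover thrust equally across the team."""
    n = len(drone_masses_kg)
    required = (payload_mass_kg + sum(drone_masses_kg)) * G
    return [required / n] * n                      # newtons per drone

if __name__ == "__main__":
    # Four hypothetical 1.5 kg quadrotors lifting a roughly 12 lb (5.4 kg) box.
    drones = [1.5, 1.5, 1.5, 1.5]
    thrusts = [28.1, 27.9, 28.3, 28.0]             # simulated hover thrust reports (N)
    payload = estimate_payload_mass(thrusts, drones)
    print(f"Estimated payload: {payload:.2f} kg")
    print("Per-drone commands (N):", share_thrust_commands(payload, drones))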
An adaptive control algorithm developed by researchers at the Georgia Institute of Technology (Georgia Tech) helps unmanned aerial vehicles collaborate to deliver heavy parcels. Georgia Tech's Kevin Webb designed a centralized computer system to monitor each drone lifting a package, sharing data about their location and motor thrust, and coordinating navigation and delivery instructions. Georgia Tech's Jonathan Rogers said, "We take care of the difficult issues using the onboard intelligence, rather than expecting a human to precisely measure the package weight, center of gravity, and drone relative positions." The team of drones could autonomously connect to a docking structure attached to a package via an infrared guidance system, without requiring human guidance.
[]
[]
[]
scitechnews
None
None
None
None
An adaptive control algorithm developed by researchers at the Georgia Institute of Technology (Georgia Tech) helps unmanned aerial vehicles collaborate to deliver heavy parcels. Georgia Tech's Kevin Webb designed a centralized computer system to monitor each drone lifting a package, sharing data about their location and motor thrust, and coordinating navigation and delivery instructions. Georgia Tech's Jonathan Rogers said, "We take care of the difficult issues using the onboard intelligence, rather than expecting a human to precisely measure the package weight, center of gravity, and drone relative positions." The team of drones could autonomously connect to a docking structure attached to a package via an infrared guidance system, without requiring human guidance. Many parcel delivery drones of the future are expected to handle packages weighing five pounds or less, a restriction that would allow small, standardized UAVs to handle a large percentage of the deliveries now done by ground vehicles. But will that relegate heavier packages to slower delivery by conventional trucks and vans? A research team at the Georgia Institute of Technology has developed a modular solution for handling larger packages without the need for a complex fleet of drones of varying sizes. By allowing teams of small drones to collaboratively lift objects using an adaptive control algorithm, the strategy could allow a wide range of packages to be delivered using a combination of several standard-sized vehicles. Beyond simplifying the drone fleet, the work could provide more robust drone operations and reduce the noise and safety concerns involved in operating large autonomous UAVs in populated areas. In addition to commercial package delivery, the system might also be used by the military to resupply small groups of soldiers in the field. "A delivery truck could carry a dozen drones in the back, and depending on how heavy a particular package is, it might use as many as six drones to carry the package," said Jonathan Rogers , the Lockheed Martin Associate Professor of Avionics Integration in Georgia Tech's Daniel Guggenheim School of Aerospace Engineering . "That would allow flexibility in the weight of the packages that could be delivered and eliminate the need to build and maintain several different sizes of delivery drones." The research was supported, in part, by a National Science Foundation graduate student fellowship and by the Hives independent research and development program of the Georgia Tech Research Institute. A paper on the research has been submitted to the Journal of Aircraft . A centralized computer system developed by graduate student Kevin Webb would monitor each of the drones lifting a package, sharing information about their location and the thrust being provided by their motors. The control system would coordinate the issuance of commands for navigation and delivery of the package. "The idea is to make multi-UAV cooperative flight easy from the user perspective," Rogers said. "We take care of the difficult issues using the onboard intelligence, rather than expecting a human to precisely measure the package weight, center of gravity, and drone relative positions. We want to make this easy enough so that a package delivery driver could operate the system consistently." The challenges of controlling a group of robots connected together to lift a package is more complex in many ways than controlling a swarm of robots that fly independently. 
"Most swarm work involves vehicles that are not connected, but flying in formations," Rogers said. "In that case, the individual dynamics of a specific vehicle are not constrained by what the other vehicles are doing. For us, the challenge is that the vehicles are being pulled in different directions by what the other vehicles connected to the package are doing." The team of drones would autonomously connect to a docking structure attached to a package, using an infrared guidance system that eliminates the need for humans to attach the vehicles. That could come in handy for drones sent to retrieve packages that a customer is returning. By knowing how much thrust they are producing and the altitude they are maintaining, the drone teams could even estimate the weight of the package they're picking up. Webb and Rogers have built a demonstration in which four small quadrotor drones work together to lift a box that's 2 feet by 2 feet by 2 feet and weighs 12 pounds. The control algorithm isn't limited to four vehicles and could manage "as many vehicles as you could put around the package," Rogers said. For the military, the modular cargo system could allow squads of soldiers at remote locations to be resupplied without the cost or risk of operating a large autonomous helicopter. A military UAV package retrieval team could be made up of individual vehicles carried by each soldier. "That would distribute a big lifting capability in smaller packages, which equates to small drones that could be used to team up," Rogers said. "Putting small drones together would allow them to do bigger things than they could do individually." Bringing multiple vehicles together creates a more difficult control challenge, but Rogers argues the benefits are worth the complexity. "The idea of having multiple machines working together provides better scalability than building a larger device every time you have a larger task," he said. "We think this is the right way to fill that gap." Using multiple drones to carry a heavy package could also allow more redundancy in the delivery system. Should one of the drones fail, the others should be able to pick up the load - an issue managed by the central control system. That part of the control strategy hasn't yet been tested, but it is part of Rogers' plan for future development of the system. More research is also needed on the docking system that connects the drones to packages. The structures will have to be made strong and rigid enough to connect to and lift the packages, while being inexpensive enough to be disposable. "I think the major technologies are already here, and given an adequate investment, a system could be fielded within five years to deliver packages with multiple drones," Rogers said. "It's not a technical challenge as much as it is a regulatory issue and a question of societal acceptance." Research News Georgia Institute of Technology 177 North Avenue Atlanta, Georgia 30332-0181 USA Media Relations Contacts : John Toon (404-894-6986) ([email protected]) or Anne Wainscott-Sargent (404-435-5784) ([email protected]). Writer : John Toon
695
Applied Materials Tools Use AI to Catch Mistakes on Chips
Semiconductor manufacturer Applied Materials said it is using new fabrication technology that employs artificial intelligence (AI) to identify chip defects more effectively. The tools include the Enlight scanner - a highly advanced camera - to scan silicon wafers quickly for problem areas, and an electron microscope that zooms in for even-closer inspection. Electron microscopes are very slow, so the ExtractAI component checks only about 1,000 potential trouble areas on the wafers to predict where the biggest problems will be. Applied's Keith Wells said the AI-powered check takes roughly 60 minutes, adding, "It's economical for the customer to do that on every wafer. We're telling you with high confidence that these are the really killer defects."
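The workflow described above amounts to a triage problem: the optical pass flags far more candidate sites than the slow electron microscope can visit, so a model score decides which roughly 1,000 sites get a closer look. The sketch below illustrates only that selection pattern; Applied Materials has not published ExtractAI's internals, so the scoring, data layout, and numbers here are assumptions.

# A minimal sketch of classifier-guided triage of optical defect candidates.
# The CandidateSite fields and killer_score model are invented for illustration.

from dataclasses import dataclass
import random

@dataclass
class CandidateSite:
    x_um: float          # wafer coordinates reported by the optical scanner
    y_um: float
    killer_score: float  # hypothetical model probability of a yield-killing defect

def select_for_sem_review(candidates, budget=1000):
    """Return the `budget` highest-scoring sites for electron-microscope review."""
    return sorted(candidates, key=lambda c: c.killer_score, reverse=True)[:budget]

if __name__ == "__main__":
    random.seed(0)
    # Pretend the optical pass flagged 50,000 candidate sites on one wafer.
    flagged = [CandidateSite(random.uniform(0, 300_000),
                             random.uniform(0, 300_000),
                             random.random())
               for _ in range(50_000)]
    review_list = select_for_sem_review(flagged, budget=1000)
    print(len(review_list), "sites queued; top score:",
          round(review_list[0].killer_score, 3))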
[]
[]
[]
scitechnews
None
None
None
None
Semiconductor manufacturer Applied Materials said it is using new fabrication technology that employs artificial intelligence (AI) to identify chip defects more effectively. The tools include the Enlight scanner - a highly advanced camera - to scan silicon wafers quickly for problem areas, and an electron microscope that zooms in for even-closer inspection. Electron microscopes are very slow, so the ExtractAI component checks only about 1,000 potential trouble areas on the wafers to predict where the biggest problems will be. Applied's Keith Wells said the AI-powered check takes roughly 60 minutes, adding, "It's economical for the customer to do that on every wafer. We're telling you with high confidence that these are the really killer defects."
696
Lip-Reading Software Helps Users of All Abilities to Send Secure Messages
A computer science lab focused on making human-computer interaction easier for people of all abilities has developed a digital lip-reader complete with its own repair system so the software can continue learning from its user. LipType, a new invention from professor Ahmed Sabbir Arif and his lab, the Human-Computer Interaction Group, lets people send texts or emails on their computers and mobile devices and have contact-free interactions with public devices such as ATMs or other kiosks, without speaking aloud. There are other lip-readers, but they are not widely used because they are slow and often faulty, Arif said. "There are a lot of errors in talk-to-text, especially in noisy places, or for people with speech impairments or those who aren't native speakers," he said. "But LipType works for anyone. People might need to send a private message while in a public space, or in a meeting, and with LipType, they could just 'say' the words without making a sound." He and his students added various filters for different lighting conditions and a mistake-corrector based on different language models, and they found that LipType was significantly faster and made fewer errors than other lip-readers. To go along with the software testing, Arif's lab conducted a social study to see if people liked and would use such a technology. They reached out to students and people in the community, including people with disabilities, and conducted an online survey. People who tried it in various tests overwhelmingly said they would use it. "People with impairments are often concerned about standing out," Arif said. "This is one way to increase access to mobile and other devices for them in a way that they won't draw attention to themselves." The social study found that people are willing to use silent speech in public places, even when it is not as accurate as other methods. They feel the software preserves their privacy and security and allows them to do what they need to do without disturbing others around them. Computer Science and Engineering graduate student Laxmi Pandey, who works with Arif, said she is excited about the results of the tests. "LipType performed 58 percent faster and 53 percent more accurately than other state-of-the-art models in various real-world settings, including poor lighting and busy market situations," Pandey said. "The success of LipType makes me believe that it can revolutionize our interaction with computer systems and with other human beings as well." She and Arif have written papers on the social study and LipType. Both have been accepted for publication and presentation at the 2021 ACM SIGCHI Conference on Human Factors in Computing Systems, the premier international conference on human-computer interaction. LipType has a number of other applications as well. "We were thinking about surveillance situations," Arif said. "This could be very useful for law enforcement. LipType could be used for closed captioning. We're also looking at interfaces for cars." "We have a design-for-all philosophy," Arif said.
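One way to picture the "mistake-corrector based on different language models" is as a rescoring step: the visual decoder proposes several candidate transcriptions, and a language model picks the most plausible one. The sketch below shows that general pattern with a toy word-bigram model; the corpus, candidate sentences, and smoothing are invented for illustration and are not LipType's actual components.

# A minimal sketch of language-model rescoring of lip-decoder hypotheses.
# The tiny corpus and candidates below are placeholders, not LipType's data.

from collections import Counter
import math

corpus = "please send the message now please send it now send the text".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
vocab = len(unigrams)

def log_prob(sentence):
    """Add-one-smoothed bigram log-probability of a whitespace-tokenized sentence."""
    words = sentence.split()
    score = 0.0
    for prev, cur in zip(words, words[1:]):
        score += math.log((bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab))
    return score

def correct(candidates):
    """Pick the candidate transcription the language model scores highest."""
    return max(candidates, key=log_prob)

if __name__ == "__main__":
    hypotheses = ["please send the message now",   # plausible decoding
                  "please bend the message cow"]   # candidate with decoding errors
    print("Chosen:", correct(hypotheses))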
Lip-reading software developed by researchers at the University of California, Merced (UC Merced) 's Human-Computer Interaction Group can continuously learn from its users. LipType allows users to send texts or emails on their computers and mobile devices, and contactlessly engage with public devices without speaking aloud. The UC Merced team incorporated filters for different lighting conditions and a mistake-corrector based on different language models into LipType, which was faster and less error-prone than other lip-readers. UC Merced's Laxmi Pandey said, "LipType performed 58% faster and 53% more accurately than other state-of-the-art models in various real-world settings, including poor lighting and busy market situations."
[]
[]
[]
scitechnews
None
None
None
None
Lip-reading software developed by researchers at the University of California, Merced (UC Merced) 's Human-Computer Interaction Group can continuously learn from its users. LipType allows users to send texts or emails on their computers and mobile devices, and contactlessly engage with public devices without speaking aloud. The UC Merced team incorporated filters for different lighting conditions and a mistake-corrector based on different language models into LipType, which was faster and less error-prone than other lip-readers. UC Merced's Laxmi Pandey said, "LipType performed 58% faster and 53% more accurately than other state-of-the-art models in various real-world settings, including poor lighting and busy market situations." A computer science lab focused on making human-computer interaction easier for people of all abilities has developed a digital lip-reader complete with its own repair system so the software can continue learning from its user. LipType, a new invention from professor Ahmed Sabbir Arif and his lab, the Human-Computer Interaction Group , lets people send texts or emails on their computers and mobile devices and have contact-free interactions with public devices such as ATMs or other kiosks, without speaking aloud. There are other lip-readers, but they are not widely used because they are slow and often faulty, Arif said. "There are a lot of errors in talk-to-text, especially in noisy places, or for people with speech impairments or those who aren't native speakers," he said. "But LipType works for anyone. People might need to send a private message while in a public space, or in a meeting, and with LipType, they could just 'say' the words without making a sound." He and his students added various filters for different lighting conditions and a mistake-corrector based on different language models and they found that LipType was significantly faster and there were fewer errors than with other lip-readers. To go along with the software testing, Arif's lab conducted a social study to see if people liked and would use such a technology. They reached out to students and people in the community, including people with disabilities, and conducted an online survey. People who tried it in various tests overwhelmingly say they would use it. "People with impairments are often concerned about standing out," Arif said. "This is one way to increase access to mobile and other devices for them in a way that they won't draw attention to themselves." The social study found that people are willing to use silent speech in public places, even when it is not as accurate as other methods. They feel the software preserves their privacy and security and allows them to do what they need to do without disturbing others around them. Computer Science and Engineering graduate student Laxmi Pandey, who works with Arif, said she is excited about the results of the tests. "LipType performed 58 percent faster and 53 percent more accurately than other state-of-the-art models in various real-world settings, including poor lighting and busy market situations," Pandey said. "The success of LipType makes me believe that it can revolutionize our interaction with computer systems and with other human beings as well." She and Arif have written papers on the social study and LipType. 
Both have been accepted for publication and presentation at the 2021 ACM SIGCHI Conference on Human Factors in Computing Systems, the premier international conference on human-computer interaction. LipType has a number of other applications as well. "We were thinking about surveillance situations," Arif said. "This could be very useful for law enforcement. LipType could be used for closed captioning. We're also looking at interfaces for cars." "We have a design-for-all philosophy," Arif said.
698
Engineers Combine AI, Wearable Cameras in Self-Walking Robotic Exoskeletons
Robotics researchers are developing exoskeleton legs capable of thinking and making control decisions on their own using sophisticated artificial intelligence (AI) technology. The system combines computer vision and deep-learning AI to mimic how able-bodied people walk by seeing their surroundings and adjusting their movements. "We're giving robotic exoskeletons vision so they can control themselves," said Brokoslaw Laschowski , a PhD candidate in systems design engineering who leads a University of Waterloo research project called ExoNet. Exoskeleton legs operated by motors already exist, but they must be manually controlled by users via smartphone applications or joysticks. "That can be inconvenient and cognitively demanding," said Laschowski, also a student member of the Waterloo Artificial Intelligence Institute (Waterloo.ai). "Every time you want to perform a new locomotor activity, you have to stop, take out your smartphone and select the desired mode." To address that limitation, the researchers fitted exoskeleton users with wearable cameras and are now optimizing AI computer software to process the video feed to accurately recognize stairs, doors and other features of the surrounding environment. The next phase of the ExoNet research project will involve sending instructions to motors so that robotic exoskeletons can climb stairs, avoid obstacles or take other appropriate actions based on analysis of the user's current movement and the upcoming terrain. "Our control approach wouldn't necessarily require human thought," said Laschowski, who is supervised by engineering professor John McPhee , the Canada Research Chair in Biomechatronic System Dynamics, in his Motion Research Group lab. "Similar to autonomous cars that drive themselves, we're designing autonomous exoskeletons that walk for themselves." The researchers are also working to improve the energy efficiency of motors for robotic exoskeletons by using human motion to self-charge the batteries. The latest in a series of papers on the related projects, Simulation of Stand-to-Sit Biomechanics for Robotic Exoskeletons and Prostheses with Energy Regeneration , appears in the journal IEEE Transactions on Medical Robotics and Bionics. The research team also includes engineering professor Alexander Wong, the Canada Research Chair in Artificial Intelligence and Medical Imaging, and William McNally, also a PhD candidate in systems design engineering and a student member of Waterloo.ai.
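A system like the one described needs a decision layer between the vision model's per-frame predictions and the motor commands. The sketch below illustrates one simple way to do that, smoothing noisy frame-level terrain labels with a short majority-vote window before mapping them to locomotion modes; the class names, window size, and mode table are assumptions for illustration, not the ExoNet design.

# A minimal sketch of turning per-frame terrain predictions (assumed to come
# from a vision model) into a stable locomotion-mode command.

from collections import Counter, deque

TERRAIN_TO_MODE = {
    "level_ground": "walk",
    "incline_stairs": "stair_ascent",
    "decline_stairs": "stair_descent",
    "door": "stop_and_wait",
}

class ModeSelector:
    def __init__(self, window=5):
        self.recent = deque(maxlen=window)   # last few frame-level predictions

    def update(self, predicted_terrain):
        """Add one frame's prediction and return the smoothed mode command."""
        self.recent.append(predicted_terrain)
        terrain, _ = Counter(self.recent).most_common(1)[0]
        return TERRAIN_TO_MODE.get(terrain, "walk")

if __name__ == "__main__":
    selector = ModeSelector()
    frames = ["level_ground", "level_ground", "incline_stairs",
              "incline_stairs", "incline_stairs", "incline_stairs"]
    for f in frames:
        print(f, "->", selector.update(f))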
Researchers at Canada's University of Waterloo have combined computer vision and deep-learning artificial intelligence (AI) technology in an effort to develop robotic exoskeleton legs that can make decisions. Current exoskeleton legs must be controlled manually via smartphone applications or joysticks. The researchers used wearable cameras fitted to exoskeleton users and AI software to process the video feed to recognize stairs, doors, and other aspects of the surrounding environment. Waterloo's Brokoslaw Laschowski said, "Our control approach wouldn't necessarily require human thought. Similar to autonomous cars that drive themselves, we're designing autonomous exoskeletons that walk for themselves."
[]
[]
[]
scitechnews
None
None
None
None
Researchers at Canada's University of Waterloo have combined computer vision and deep-learning artificial intelligence (AI) technology in an effort to develop robotic exoskeleton legs that can make decisions. Current exoskeleton legs must be controlled manually via smartphone applications or joysticks. The researchers used wearable cameras fitted to exoskeleton users and AI software to process the video feed to recognize stairs, doors, and other aspects of the surrounding environment. Waterloo's Brokoslaw Laschowski said, "Our control approach wouldn't necessarily require human thought. Similar to autonomous cars that drive themselves, we're designing autonomous exoskeletons that walk for themselves." Robotics researchers are developing exoskeleton legs capable of thinking and making control decisions on their own using sophisticated artificial intelligence (AI) technology. The system combines computer vision and deep-learning AI to mimic how able-bodied people walk by seeing their surroundings and adjusting their movements. "We're giving robotic exoskeletons vision so they can control themselves," said Brokoslaw Laschowski , a PhD candidate in systems design engineering who leads a University of Waterloo research project called ExoNet. Exoskeleton legs operated by motors already exist, but they must be manually controlled by users via smartphone applications or joysticks. "That can be inconvenient and cognitively demanding," said Laschowski, also a student member of the Waterloo Artificial Intelligence Institute (Waterloo.ai). "Every time you want to perform a new locomotor activity, you have to stop, take out your smartphone and select the desired mode." To address that limitation, the researchers fitted exoskeleton users with wearable cameras and are now optimizing AI computer software to process the video feed to accurately recognize stairs, doors and other features of the surrounding environment. The next phase of the ExoNet research project will involve sending instructions to motors so that robotic exoskeletons can climb stairs, avoid obstacles or take other appropriate actions based on analysis of the user's current movement and the upcoming terrain. "Our control approach wouldn't necessarily require human thought," said Laschowski, who is supervised by engineering professor John McPhee , the Canada Research Chair in Biomechatronic System Dynamics, in his Motion Research Group lab. "Similar to autonomous cars that drive themselves, we're designing autonomous exoskeletons that walk for themselves." The researchers are also working to improve the energy efficiency of motors for robotic exoskeletons by using human motion to self-charge the batteries. The latest in a series of papers on the related projects, Simulation of Stand-to-Sit Biomechanics for Robotic Exoskeletons and Prostheses with Energy Regeneration , appears in the journal IEEE Transactions on Medical Robotics and Bionics. The research team also includes engineering professor Alexander Wong, the Canada Research Chair in Artificial Intelligence and Medical Imaging, and William McNally, also a PhD candidate in systems design engineering and a student member of Waterloo.ai.
699
UC Chemists Use Supercomputers to Understand Solvents
Water is a seemingly simple solvent, as anyone who has stirred sugar in their coffee can attest. "People have studied water for hundreds of years - Galileo studied the origin of flotation in water. Even with all that research, we don't have a complete understanding of the interactions in water," said UC chemist Thomas Beck. "It's amazing because it's a simple molecule but the behavior is complex." For the quantum simulation of the industrial solvent glycerol carbonate, the chemists turned to UC's Advanced Research Computing Center and the Ohio Supercomputer Center. Quantum simulations provide a tool to help chemists better understand interactions on an atomic scale. "Quantum simulations have been around for quite a while," said UC chemist Andrew Eisenhart. "But the hardware that's been evolving recently - things like graphics processing units and their acceleration when applied to these problems - creates the ability to study larger systems than we could in the past." "How do ions dissolve in this liquid compared to water? First we had to understand what the basic structure was of the liquid," Beck said. The research was funded by a grant from the National Science Foundation.
University of Cincinnati (UC) chemists Thomas Beck and Andrew Eisenhart used a supercomputer to understand the basic characteristics of an industrial solvent via quantum simulation. The researchers employed the university's Advanced Research Computing Center and the Ohio Supercomputer Center to investigate glycerol carbonate. Said Eisenhart, "Quantum simulations have been around for quite a while. But the hardware that's been evolving recently - things like graphics processing units and their acceleration when applied to these problems - creates the ability to study larger systems than we could in the past." Eisenhart said the analysis provided insights into how small modifications to molecular structure can have larger effects on the solvent overall, "and how these small changes make its interactions with very important things like ions and can have an effect on things like battery performance."
[]
[]
[]
scitechnews
None
None
None
None
University of Cincinnati (UC) chemists Thomas Beck and Andrew Eisenhart used a supercomputer to understand the basic characteristics of an industrial solvent via quantum simulation. The researchers employed the university's Advanced Research Computing Center and the Ohio Supercomputer Center to investigate glycerol carbonate. Said Eisenhart, "Quantum simulations have been around for quite a while. But the hardware that's been evolving recently - things like graphics processing units and their acceleration when applied to these problems - creates the ability to study larger systems than we could in the past." Eisenhart said the analysis provided insights into how small modifications to molecular structure can have larger effects on the solvent overall, "and how these small changes make its interactions with very important things like ions and can have an effect on things like battery performance." Water is a seemingly simple solvent, as anyone who has stirred sugar in their coffee can attest. "People have studied water for hundreds of years - Galileo studied the origin of flotation in water. Even with all that research, we don't have a complete understanding of the interactions in water," Beck said. "It's amazing because it's a simple molecule but the behavior is complex." For the quantum simulation, the chemists turned to UC's Advanced Research Computing Center and the Ohio Supercomputer Center. Quantum simulations provide a tool to help chemists better understand interactions on an atomic scale. "Quantum simulations have been around for quite a while," Eisenhart said. "But the hardware that's been evolving recently - things like graphics processing units and their acceleration when applied to these problems - creates the ability to study larger systems than we could in the past." "How do ions dissolve in this liquid compared to water? First we had to understand what the basic structure was of the liquid," Beck said. The research was funded by a grant from the National Science Foundation.
700
Technology Will Create Millions of Jobs. The Problem Will Be to Find Workers to Fill Them
New technologies will lead to tens of millions of job vacancies by 2030, according to the latest economic analysis from Boston Consulting Group (BCG). But that does little to erase the threat of unemployment spiking due to the automation of labour, BCG says. BCG's economists carried out detailed modelling of the prospective changes in the supply and demand of labour in Germany, Australia and the US, and found that job losses in the next 10 years will effectively be matched by even greater job creation. The problem is that those who find themselves out of a job won't necessarily be those that employers are looking to hire. Eliminating 10 million jobs and creating the same number of new jobs might appear to have a negligible impact, the researchers say; but in fact, doing so represents huge economic disruption, both at a national level and for the people whose jobs are at stake. SEE: Guide to Becoming a Digital Transformation Champion (TechRepublic Premium) "It is important to not just focus on the overall net surplus or shortfall because behind the net change, there are significant supply-demand mismatches across job types and geographical location," Miguel Carrasco, senior partner at BCG, tells ZDNet. "It is unrealistic to expect perfect exchangeability -- not all of the surplus capacity in the workforce can be redeployed to meet new or growing demand." To understand how employment might change to 2030, BCG's researchers assessed various factors that might affect a country's workforce. They accounted for numbers likely to enter the job market (college graduates, migration patterns) and those projected to leave (retirement and mortality rates). They then pitted those numbers against various scenarios representing how demand for labour might change, based on expected GDP growth, as well as potential technology adoption rates. In short, the higher a country's growth, the more likely it is that jobs will be created. Technology adoption, meanwhile, might cause some work to be eliminated as a result of automation, while also creating new employment opportunities. In the midrange scenario - a baseline GDP growth and a medium rate of technology adoption - all three countries were projected to experience labour shortfalls. Germany could find itself in need of three million workers by 2030, and Australia one million short. In the US, the shortfall could surpass 17 million jobs. At the same time, technology will displace many workers. In the three countries studied, the projected labour surplus is significant, reaching almost 11 million workers in the US alone. Jobs most likely to be affected will be in office and administrative support, as well as food preparation and services; in Germany, production workers will be those most affected by the automation of labour; and in Australia, job surpluses will most affect sales and related fields. There is a clear mismatch between occupations that will be lost and those that will be in demand: in all three countries, the professions with the biggest looming shortfalls are computer-related occupations and jobs in science, technology, engineering and maths. Work that requires compassionate human interaction, including healthcare, social services or teaching, will also be in high demand. This is worrying, explain BCG's specialists, who anticipate that as demand for talent is unmet, companies' financial stability and ability to compete will be affected. "Labour shortages are more worrying for businesses and governments than a surplus of labour," says Carrasco. 
"Global businesses that are competing in the global market for the same talent will have a very limited pool of candidates to choose from. For governments, the handbrake labour shortages create will have a clear impact on economic growth." The solution? To aggressively up-skill and re-train the workforce, to ensure that demand for talent is met in time. Managing the workforce transition will require equipping those most at risk of job loss with the required skills to fill roles that are set to boom. This means creating future software developers, data analysts, or cybersecurity testers - but also developing skills such as empathy, imagination or creativity, which will underpin jobs in more social sectors. The scope of the challenge is already immense: a survey conducted by the World Economic Forum in 2019 showed that only 27% of small companies and 29% of large companies believe they have the right talent for digital transformation. As workers suddenly found themselves carrying out their jobs entirely online, the COVID-19 pandemic highlighted some critical and persistent knowledge gaps when it comes to digital skills. A global survey carried out by Salesforce, for example, showed that almost two-thirds of workers wished they had more up-to-date skill sets . "The task of up-skilling and re-skilling the workforce is huge and urgent, but it is not insurmountable," argues Carrasco. "It requires a concentrated and collaborative effort between governments, businesses and individuals." Governments will need to carry out better predictions of how their workforce will change over time, identifying where gaps will be created and building up-skilling programs at scale to respond to the problem. SEE: What is Agile software development? Everything you need to know about delivering better code, faster BCG's report pointed to Singapore, where the government's SkillsFuture program is designed to provide citizens with training opportunities to up-skill themselves. The program includes a 'SkillsFuture Credit' that provides funding for a person's education over their lifetime. In 2020, the scheme reached 540,000 individual and 14,000 businesses. Similarly, the Canadian government has created a platform called planext, to help residents see which occupations might correspond to their existing skills and chart a potential future career path, complete with opportunities for training and education. The onus, however, will also be on companies to re-train their workforces by anticipating changes and providing their employees with opportunities for lifelong learning. According to the BCG report, investing in skills now will enable businesses to gain a significant competitive advantage, by ensuring that they have the right talent in the right place at the right time. There are already some examples of early efforts: Australian supermarket giant Woolworths announced earlier this year that it was investing more than AU$50 million over the next three years to train more than 60,000 staff in new tech-related skills. The company's staff - from stores, e-commerce operations, supply chain networks and support offices - will be trained in digital, data analytics, machine learning and robotics.
Economic analysis by Boston Consulting Group (BCG) indicates that new technologies will create tens of millions of jobs by 2030, but are unlikely to offset job losses from automation over the same period. Models of prospective changes in labor supply and demand in Germany, Australia, and the U.S. forecast that the next decade's job losses will be matched by even greater job creation. BCG's Miguel Carrasco said, "It is unrealistic to expect perfect exchangeability - not all of the surplus capacity in the workforce can be redeployed to meet new or growing demand." Occupations facing the biggest shortages include computer-related professions and jobs in science, technology, engineering, and math. BCG recommends aggressively upskilling and retraining the workforce, to ensure the timely fulfillment of demand for talent.
[]
[]
[]
scitechnews
None
None
None
None
Economic analysis by Boston Consulting Group (BCG) indicates that new technologies will create tens of millions of jobs by 2030, but are unlikely to offset job losses from automation over the same period. Models of prospective changes in labor supply and demand in Germany, Australia, and the U.S. forecast that the next decade's job losses will be matched by even greater job creation. BCG's Miguel Carrasco said, "It is unrealistic to expect perfect exchangeability - not all of the surplus capacity in the workforce can be redeployed to meet new or growing demand." Occupations facing the biggest shortages include computer-related professions and jobs in science, technology, engineering, and math. BCG recommends aggressively upskilling and retraining the workforce, to ensure the timely fulfillment of demand for talent. New technologies will lead to tens of millions of job vacancies by 2030, according to the latest economic analysis from Boston Consulting Group (BCG). But that does little to erase the threat of unemployment spiking due to the automation of labour, BCG says. BCG's economists carried out detailed modelling of the prospective changes in the supply and demand of labour in Germany, Australia and the US, and found that job losses in the next 10 years will effectively be matched by even greater job creation. The problem is that those who find themselves out of a job won't necessarily be those that employers are looking to hire. Eliminating 10 million jobs and creating the same number of new jobs might appear to have a negligible impact, the researchers say; but in fact, doing so represents huge economic disruption, both at a national level and for the people whose jobs are at stake. SEE: Guide to Becoming a Digital Transformation Champion (TechRepublic Premium) "It is important to not just focus on the overall net surplus or shortfall because behind the net change, there are significant supply-demand mismatches across job types and geographical location," Miguel Carrasco, senior partner at BCG, tells ZDNet. "It is unrealistic to expect perfect exchangeability -- not all of the surplus capacity in the workforce can be redeployed to meet new or growing demand." To understand how employment might change to 2030, BCG's researchers assessed various factors that might affect a country's workforce. They accounted for numbers likely to enter the job market (college graduates, migration patterns) and those projected to leave (retirement and mortality rates). They then pitted those numbers against various scenarios representing how demand for labour might change, based on expected GDP growth, as well as potential technology adoption rates. In short, the higher a country's growth, the more likely it is that jobs will be created. Technology adoption, meanwhile, might cause some work to be eliminated as a result of automation, while also creating new employment opportunities. In the midrange scenario - a baseline GDP growth and a medium rate of technology adoption - all three countries were projected to experience labour shortfalls. Germany could find itself in need of three million workers by 2030, and Australia one million short. In the US, the shortfall could surpass 17 million jobs. At the same time, technology will displace many workers. In the three countries studied, the projected labour surplus is significant, reaching almost 11 million workers in the US alone. 
Jobs most likely to be affected will be in office and administrative support, as well as food preparation and services; in Germany, production workers will be those most affected by the automation of labour; and in Australia, job surpluses will most affect sales and related fields. There is a clear mismatch between occupations that will be lost and those that will be in demand: in all three countries, the professions with the biggest looming shortfalls are computer-related occupations and jobs in science, technology, engineering and maths. Work that requires compassionate human interaction, including healthcare, social services or teaching, will also be in high demand. This is worrying, explain BCG's specialists, who anticipate that as demand for talent is unmet, companies' financial stability and ability to compete will be affected. "Labour shortages are more worrying for businesses and governments than a surplus of labour," says Carrasco. "Global businesses that are competing in the global market for the same talent will have a very limited pool of candidates to choose from. For governments, the handbrake labour shortages create will have a clear impact on economic growth." The solution? To aggressively up-skill and re-train the workforce, to ensure that demand for talent is met in time. Managing the workforce transition will require equipping those most at risk of job loss with the required skills to fill roles that are set to boom. This means creating future software developers, data analysts, or cybersecurity testers - but also developing skills such as empathy, imagination or creativity, which will underpin jobs in more social sectors. The scope of the challenge is already immense: a survey conducted by the World Economic Forum in 2019 showed that only 27% of small companies and 29% of large companies believe they have the right talent for digital transformation. As workers suddenly found themselves carrying out their jobs entirely online, the COVID-19 pandemic highlighted some critical and persistent knowledge gaps when it comes to digital skills. A global survey carried out by Salesforce, for example, showed that almost two-thirds of workers wished they had more up-to-date skill sets . "The task of up-skilling and re-skilling the workforce is huge and urgent, but it is not insurmountable," argues Carrasco. "It requires a concentrated and collaborative effort between governments, businesses and individuals." Governments will need to carry out better predictions of how their workforce will change over time, identifying where gaps will be created and building up-skilling programs at scale to respond to the problem. SEE: What is Agile software development? Everything you need to know about delivering better code, faster BCG's report pointed to Singapore, where the government's SkillsFuture program is designed to provide citizens with training opportunities to up-skill themselves. The program includes a 'SkillsFuture Credit' that provides funding for a person's education over their lifetime. In 2020, the scheme reached 540,000 individual and 14,000 businesses. Similarly, the Canadian government has created a platform called planext, to help residents see which occupations might correspond to their existing skills and chart a potential future career path, complete with opportunities for training and education. The onus, however, will also be on companies to re-train their workforces by anticipating changes and providing their employees with opportunities for lifelong learning. 
According to the BCG report, investing in skills now will enable businesses to gain a significant competitive advantage, by ensuring that they have the right talent in the right place at the right time. There are already some examples of early efforts: Australian supermarket giant Woolworths announced earlier this year that it was investing more than AU$50 million over the next three years to train more than 60,000 staff in new tech-related skills. The company's staff - from stores, e-commerce operations, supply chain networks and support offices - will be trained in digital, data analytics, machine learning and robotics.
702
'Expert' Hackers Used 11 Zerodays to Infect Windows, iOS, Android Users
A team of advanced hackers exploited no fewer than 11 zero-day vulnerabilities in a nine-month campaign that used compromised websites to infect fully patched devices running Windows, iOS, and Android, a Google researcher said. On Thursday, Project Zero researcher Maddie Stone said that, in the eight months that followed the February attacks, the same group exploited seven more previously unknown vulnerabilities, which this time also resided in iOS. As was the case in February, the hackers delivered the exploits through watering-hole attacks, which compromise websites frequented by targets of interest and add code that installs malware on visitors' devices. In all the attacks, the watering-hole sites redirected visitors to a sprawling infrastructure that installed different exploits depending on the devices and browsers visitors were using. Whereas the two servers used in February exploited only Windows and Android devices, the later attacks also exploited devices running iOS. Below is a diagram of how it worked: The ability to pierce advanced defenses built into well-fortified OSes and apps that were fully patched - for example, Chrome running on Windows 10 and Safari running on iOS - was one testament to the group's skill. Another testament was the group's abundance of zero-days. After Google patched a code-execution vulnerability the attackers had been exploiting in the Chrome renderer in February, the hackers quickly added a new code-execution exploit for the Chrome V8 engine. In a blog post published Thursday, Stone wrote: In all, Google researchers gathered: The seven zero-days were: The complex chain of exploits is required to break through layers of defenses that are built into modern OSes and apps. Typically, the series of exploits are needed to exploit code on a targeted device, have that code break out of a browser security sandbox, and elevate privileges so the code can access sensitive parts of the OS. Thursday's post offered no details on the group responsible for the attacks. It would be especially interesting to know if the hackers are part of a group that's already known to researchers or if it's a previously unseen team. Also useful would be information about the people who were targeted. The importance of keeping apps and OSes up to date and avoiding suspicious websites still stands. Unfortunately, neither of those things would have helped the victims hacked by this unknown group.
Google's Project Zero security researchers warned that a team of hackers used no fewer than 11 zeroday vulnerabilities over nine months, exploiting compromised websites to infect patched devices running the Windows, iOS, and Android operating systems. The group leveraged four zerodays in February 2020, and their ability to link multiple zerodays to expose the patched devices prompted Project Zero and Threat Analysis Group analysts to deem the attackers "highly sophisticated." Project Zero's Maddie Stone said over the ensuing eight months the hackers exploited seven more previously unknown iOS zerodays via watering-hole attacks. Blogged Stone, "Overall each of the exploits themselves showed an expert understanding of exploit development and the vulnerability being exploited."
[]
[]
[]
scitechnews
None
None
None
None
Google's Project Zero security researchers warned that a team of hackers used no fewer than 11 zeroday vulnerabilities over nine months, exploiting compromised websites to infect patched devices running the Windows, iOS, and Android operating systems. The group leveraged four zerodays in February 2020, and their ability to link multiple zerodays to expose the patched devices prompted Project Zero and Threat Analysis Group analysts to deem the attackers "highly sophisticated." Project Zero's Maddie Stone said over the ensuing eight months the hackers exploited seven more previously unknown iOS zerodays via watering-hole attacks. Blogged Stone, "Overall each of the exploits themselves showed an expert understanding of exploit development and the vulnerability being exploited." A team of advanced hackers exploited no fewer than 11 zero-day vulnerabilities in a nine-month campaign that used compromised websites to infect fully patched devices running Windows, iOS, and Android, a Google researcher said. On Thursday, Project Zero researcher Maddie Stone said that, in the eight months that followed the February attacks, the same group exploited seven more previously unknown vulnerabilities, which this time also resided in iOS. As was the case in February, the hackers delivered the exploits through watering-hole attacks, which compromise websites frequented by targets of interest and add code that installs malware on visitors' devices. In all the attacks, the watering-hole sites redirected visitors to a sprawling infrastructure that installed different exploits depending on the devices and browsers visitors were using. Whereas the two servers used in February exploited only Windows and Android devices, the later attacks also exploited devices running iOS. Below is a diagram of how it worked: The ability to pierce advanced defenses built into well-fortified OSes and apps that were fully patched - for example, Chrome running on Windows 10 and Safari running on iOS - was one testament to the group's skill. Another testament was the group's abundance of zero-days. After Google patched a code-execution vulnerability the attackers had been exploiting in the Chrome renderer in February, the hackers quickly added a new code-execution exploit for the Chrome V8 engine. In a blog post published Thursday, Stone wrote: In all, Google researchers gathered: The seven zero-days were: The complex chain of exploits is required to break through layers of defenses that are built into modern OSes and apps. Typically, the series of exploits are needed to exploit code on a targeted device, have that code break out of a browser security sandbox, and elevate privileges so the code can access sensitive parts of the OS. Thursday's post offered no details on the group responsible for the attacks. It would be especially interesting to know if the hackers are part of a group that's already known to researchers or if it's a previously unseen team. Also useful would be information about the people who were targeted. The importance of keeping apps and OSes up to date and avoiding suspicious websites still stands. Unfortunately, neither of those things would have helped the victims hacked by this unknown group.
703
France's Competition Authority Declines to Block Apple's Opt-in Consent for iOS App Tracking
France's competition authority (FCA) has rejected calls by French advertisers to block looming pro-privacy changes requiring third-party applications to obtain consumers' consent before tracking them on Apple iOS. FCA said it does not currently deem Apple's introduction of the App Tracking Transparency (ATT) feature as abuse of its dominant position. However, the regulator is still probing Apple "on the merits," and aims to ensure the company is not applying preferential rules for its own apps compared to those of third-party developers. An Apple spokesperson said, "ATT will provide a powerful user privacy benefit by requiring developers to ask users' permission before sharing their data with other companies for the purposes of advertising, or with data brokers. We firmly believe that users' data belongs to them, and that they should control when that data is shared, and with whom."
[]
[]
[]
scitechnews
None
None
None
None
France's competition authority (FCA) has rejected calls by French advertisers to block looming pro-privacy changes requiring third-party applications to obtain consumers' consent before tracking them on Apple iOS. FCA said it does not currently deem Apple's introduction of the App Tracking Transparency (ATT) feature as abuse of its dominant position. However, the regulator is still probing Apple "on the merits," and aims to ensure the company is not applying preferential rules for its own apps compared to those of third-party developers. An Apple spokesperson said, "ATT will provide a powerful user privacy benefit by requiring developers to ask users' permission before sharing their data with other companies for the purposes of advertising, or with data brokers. We firmly believe that users' data belongs to them, and that they should control when that data is shared, and with whom."
704
Novel Deep Learning Framework for Symbolic Regression
Lawrence Livermore National Laboratory (LLNL) computer scientists have developed a new framework and an accompanying visualization tool that leverages deep reinforcement learning for symbolic regression problems, outperforming baseline methods on benchmark problems. The paper was recently accepted as an oral presentation at the International Conference on Learning Representations (ICLR 2021), one of the top machine learning conferences in the world. The conference takes place virtually May 3-7. In the paper, the LLNL team describes applying deep reinforcement learning to discrete optimization - problems that deal with discrete "building blocks" that must be combined in a particular order or configuration to optimize a desired property . The team focused on a type of discrete optimization called symbolic regression - finding short mathematical expressions that fit data gathered from an experiment. Symbolic regression aims to uncover the underlying equations or dynamics of a physical process. "Discrete optimization is really challenging because you don't have gradients. Picture a child playing with Lego bricks, assembling a contraption for a particular task - you can change one Lego brick and all of a sudden the properties are entirely different," explained lead author Brenden Petersen. "But what we've shown is that deep reinforcement learning is a really powerful way to efficiently search that space of discrete objects." While deep learning has been successful in solving many complex tasks, its results are largely uninterpretable to humans, Petersen continued. "Here, we're using large models (i.e. neural networks) to search the space of small models (i.e. short mathematical expressions), so you're getting the best of both worlds. You're leveraging the power of deep learning, but getting what you really want, which is a very succinct physics equation." Symbolic regression is typically approached in machine learning and artificial intelligence with evolutionary algorithms, Petersen said. The problem with evolutionary approaches is that the algorithms aren't principled and don't scale very well, he explained. LLNL's deep learning approach is different because it's theory-backed and based on gradient information, making it much more understandable and useful for scientists, co-authors said. "These evolutionary approaches are based on random mutations, so basically at the end of the day, randomness plays a big role in finding the correct answer," said LLNL co-author Mikel Landajuela. "At the core of our approach is a neural network that is learning the landscape of discrete objects; it holds a memory of the process and builds an understanding of how these objects are distributed in this massive space to determine a good direction to follow. That's what makes our algorithm work better - the combination of memory and direction are missing from traditional approaches." The number of possible expressions in the landscape is prohibitively large, so co- author Claudio Santiago helped create different types of user-specified constraints for the algorithm that exclude expressions known to not be solutions, leading to quicker and more efficient searches. "The DSR framework allows a wide range of constraints to be considered, thereby considerably reducing the size of the search space," Santiago said. "This is unlike evolutionary approaches, which cannot easily consider constraints efficiently. 
One cannot guarantee in general that constraints will be satisfied after applying evolutionary operators, hindering them as significantly inefficient for large domains." For the paper, the team tested the algorithm on a set of symbolic regression problems, showing it outperformed several common benchmarks, including commercial software gold standards. The team has been testing it on real-world physics problems such as thin-film compression, where it is showing promising results. The authors said the algorithm is widely applicable, not just to symbolic regression, but to any kind of discrete optimization problem. They have recently started to apply it to the creation of unique amino acid sequences for improved binding to pathogens for vaccine design. Petersen said the most thrilling aspect of the work is its potential not to replace physicists, but to interact with them. To this end, the team has created an interactive visualization app for the algorithm that physicists can use to help them solve real-world problems. "It's super exciting because we've really just cracked open this new framework," Petersen said. "What really sets it apart from other methods is that it affords the ability to directly incorporate domain knowledge or prior beliefs in a very principled way. Thinking a few years down the line, we picture a physics grad student using this as a tool. As they get more information or experimental results, they can interact with the algorithm, giving it new knowledge to help it hone in on the correct answers." The work stems from a Laboratory Directed Research and Development program-funded initiative on Disruptive Research, a portfolio composed of projects considered to be high risk and high reward. Co-authors included Nathan Mundhenk, Soo Kim and Joanne Kim. LLNL machine learning researcher Ruben Glatt has since joined the team. The work also was furthered by several students from the University of California, Merced, whom Petersen mentored during LLNL's 2019 Data Science Challenge Workshop, where it was featured as a challenge problem. LLNL has released an open-source version of the code, available here.
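To make the search-by-sampling idea above concrete, the following is a deliberately simplified sketch, not LLNL's DSR implementation: expressions are built in prefix notation from a small token set, scored by how well they fit data, and a crude table of token probabilities stands in for the recurrent-network policy and risk-seeking policy gradient the paper actually uses. The token set, reward, and update rule are all illustrative assumptions.

```python
# Illustrative sketch only: sample candidate expressions, score their fit, and
# nudge the sampler toward tokens that appear in good candidates. The real DSR
# framework uses an RNN policy trained with a risk-seeking policy gradient and
# in-situ constraints; this probability table is a toy stand-in.
import numpy as np

rng = np.random.default_rng(0)

# Tokens and their arities (how many child sub-expressions each one takes).
TOKENS = ["add", "mul", "sin", "x", "one"]
ARITY = {"add": 2, "mul": 2, "sin": 1, "x": 0, "one": 0}

def sample_expression(probs, max_len=20):
    """Sample a prefix-notation expression as a list of tokens (or None)."""
    expr, open_slots = [], 1
    while open_slots > 0 and len(expr) < max_len:
        if len(expr) + open_slots >= max_len:       # running out of budget:
            tok = rng.choice(["x", "one"])          # force terminals
        else:
            tok = rng.choice(TOKENS, p=probs)
        expr.append(tok)
        open_slots += ARITY[tok] - 1
    return expr if open_slots == 0 else None

def evaluate(expr, x):
    """Evaluate a prefix expression on array x, consuming tokens left to right."""
    def rec(it):
        tok = next(it)
        if tok == "add":
            return rec(it) + rec(it)
        if tok == "mul":
            return rec(it) * rec(it)
        if tok == "sin":
            return np.sin(rec(it))
        return x if tok == "x" else np.ones_like(x)
    return rec(iter(expr))

# Synthetic "experiment": try to recover y = x*x + sin(x) from samples.
x = np.linspace(-2, 2, 100)
y = x * x + np.sin(x)

probs = np.ones(len(TOKENS)) / len(TOKENS)          # uniform starting "policy"
best_expr, best_reward = None, -np.inf

for step in range(2000):
    expr = sample_expression(probs)
    if expr is None:
        continue
    rmse = np.sqrt(np.mean((evaluate(expr, x) - y) ** 2))
    reward = 1.0 / (1.0 + rmse)                     # squashed fitness in (0, 1]
    if reward > best_reward:
        best_expr, best_reward = expr, reward
    # Crude update: shift probability mass toward tokens used in good samples.
    if reward > 0.5:
        counts = np.array([expr.count(t) for t in TOKENS], dtype=float)
        probs = 0.95 * probs + 0.05 * counts / counts.sum()
        probs /= probs.sum()

print("best expression (prefix):", best_expr, "reward:", round(best_reward, 3))
```

In the published framework, the sampler is a neural network whose weights are updated with gradients of the reward, and the user-specified constraints Santiago describes prune whole families of expressions before they are ever evaluated.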
Computer scientists at Lawrence Livermore National Laboratory (LLNL) have created a new framework and visualization tool that applies deep reinforcement learning to symbolic regression problems. Symbolic regression, a type of discrete optimization that seeks to determine the underlying equations or dynamics of a physical process, generally is approached in machine learning and artificial intelligence with evolutionary algorithms, which LLNL's Brenden Petersen said do not scale well. LLNL's Mikel Landajuela explained, "At the core of our approach is a neural network that is learning the landscape of discrete objects; it holds a memory of the process and builds an understanding of how these objects are distributed in this massive space to determine a good direction to follow." The team's algorithm outperformed several common benchmarks when tested on a set of symbolic regression problems.
[]
[]
[]
scitechnews
None
None
None
None
Computer scientists at Lawrence Livermore National Laboratory (LLNL) have created a new framework and visualization tool that applies deep reinforcement learning to symbolic regression problems. Symbolic regression, a type of discrete optimization that seeks to determine the underlying equations or dynamics of a physical process, generally is approached in machine learning and artificial intelligence with evolutionary algorithms, which LLNL's Brenden Petersen said do not scale well. LLNL's Mikel Landajuela explained, "At the core of our approach is a neural network that is learning the landscape of discrete objects; it holds a memory of the process and builds an understanding of how these objects are distributed in this massive space to determine a good direction to follow." The team's algorithm outperformed several common benchmarks when tested on a set of symbolic regression problems. Lawrence Livermore National Laboratory (LLNL) computer scientists have developed a new framework and an accompanying visualization tool that leverages deep reinforcement learning for symbolic regression problems, outperforming baseline methods on benchmark problems. The paper was recently accepted as an oral presentation at the International Conference on Learning Representations (ICLR 2021), one of the top machine learning conferences in the world. The conference takes place virtually May 3-7. In the paper, the LLNL team describes applying deep reinforcement learning to discrete optimization - problems that deal with discrete "building blocks" that must be combined in a particular order or configuration to optimize a desired property . The team focused on a type of discrete optimization called symbolic regression - finding short mathematical expressions that fit data gathered from an experiment. Symbolic regression aims to uncover the underlying equations or dynamics of a physical process. "Discrete optimization is really challenging because you don't have gradients. Picture a child playing with Lego bricks, assembling a contraption for a particular task - you can change one Lego brick and all of a sudden the properties are entirely different," explained lead author Brenden Petersen. "But what we've shown is that deep reinforcement learning is a really powerful way to efficiently search that space of discrete objects." While deep learning has been successful in solving many complex tasks, its results are largely uninterpretable to humans, Petersen continued. "Here, we're using large models (i.e. neural networks) to search the space of small models (i.e. short mathematical expressions), so you're getting the best of both worlds. You're leveraging the power of deep learning, but getting what you really want, which is a very succinct physics equation." Symbolic regression is typically approached in machine learning and artificial intelligence with evolutionary algorithms, Petersen said. The problem with evolutionary approaches is that the algorithms aren't principled and don't scale very well, he explained. LLNL's deep learning approach is different because it's theory-backed and based on gradient information, making it much more understandable and useful for scientists, co-authors said. "These evolutionary approaches are based on random mutations, so basically at the end of the day, randomness plays a big role in finding the correct answer," said LLNL co-author Mikel Landajuela. 
"At the core of our approach is a neural network that is learning the landscape of discrete objects; it holds a memory of the process and builds an understanding of how these objects are distributed in this massive space to determine a good direction to follow. That's what makes our algorithm work better - the combination of memory and direction are missing from traditional approaches." The number of possible expressions in the landscape is prohibitively large, so co- author Claudio Santiago helped create different types of user-specified constraints for the algorithm that exclude expressions known to not be solutions, leading to quicker and more efficient searches. "The DSR framework allows a wide range of constraints to be considered, thereby considerably reducing the size of the search space," Santiago said. "This is unlike evolutionary approaches, which cannot easily consider constraints efficiently. One cannot guarantee in general that constraints will be satisfied after applying evolutionary operators, hindering them as significantly inefficient for large domains." For the paper, the team tested the algorithm on a set of symbolic regression problems, showing it outperformed several common benchmarks, including commercial software gold standards. The team has been testing it on real-world physics problems such as thin-film compression, where it is showing promising results. Authors said the algorithm is widely applicable, not just to symbolic regression, but to any kind of discrete optimization problem. They have recently started to apply it to the creation of unique amino acid sequences for improved binding to pathogens for vaccine design. Petersen said the most thrilling aspect of the work is its potential not to replace physicists, but to interact with them. To this end, the team has created an interactive visualization app for the algorithm that physicists can use to help them solve real-world problems. "It's super exciting because we've really just cracked open this new framework," Petersen said. "What really sets it apart from other methods is that it affords the ability to directly incorporate domain knowledge or prior beliefs in a very principled way. Thinking a few years down the line, we picture a physics grad student using this as a tool. As they get more information or experimental results, they can interact with the algorithm, giving it new knowledge to help it hone in on the correct answers." The work stems from a Laboratory Directed Research and Development program-funded initiative on Disruptive Research, a portfolio composed of projects considered to be high risk and high reward. Co-authors included Nathan Mundhenk, Soo Kim and Joanne Kim. LLNL machine learning researcher Ruben Glatt has since joined the team. The work also was furthered by several students from the University of California, Merced whom Petersen mentored during LLNL's 2019 Data Science Challenge Workshop where it was featured as a challenge problem. LLNL has released an open-source version of the code, available here .
705
Standard Digital Camera, AI to Monitor Soil Moisture for Affordable Smart Irrigation
Researchers at the University of South Australia (UniSA) and Middle Technical University in Iraq developed a smart irrigation system that uses a standard RGB digital camera and machine learning technology to monitor soil moisture. The new method aims to make precision soil monitoring easier and more cost-effective by eliminating the need for specialized hardware and expensive thermal imaging cameras that can encounter issues in certain climatic conditions. UniSA's Ali Al-Naji said the system was found to accurately determine moisture content at different distances, times, and illumination levels. The camera was connected to an artificial neural network, which could allow the system to be trained to recognize the specific soil conditions of any location. UniSA's Javaan Chahl said, "Once the network has been trained it should be possible to achieve controlled irrigation by maintaining the appearance of the soil at the desired state."
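As a rough illustration of how an RGB camera feed can drive a moisture estimate, the sketch below trains a small neural network on synthetic color statistics in which darker soil is treated as wetter. The data-generating rule, feature choice, and network size are invented for the example; this is not the UniSA/Middle Technical University system or dataset.

```python
# A minimal sketch: predict soil moisture from simple color statistics of an
# RGB patch with a small neural network. The synthetic rule below (darker,
# less bright soil = wetter) is an invented stand-in for labelled field photos.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

def synthetic_soil_features(n=500):
    """Return (features, moisture %) pairs for fake soil patches."""
    moisture = rng.uniform(5, 45, size=n)                  # % volumetric water
    base = np.array([150.0, 110.0, 80.0])                  # "dry soil" RGB
    # Wetter soil is darker; add per-channel noise for realism.
    rgb = base * (1.0 - 0.012 * moisture)[:, None] + rng.normal(0, 5, (n, 3))
    brightness = rgb.mean(axis=1, keepdims=True)           # simple 4th feature
    return np.hstack([rgb, brightness]), moisture

X, y = synthetic_soil_features()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)

print("test R^2:", round(model.score(X_test, y_test), 3))
print("predicted moisture for first test patch: %.1f%%" % model.predict(X_test[:1])[0])
```

In a real deployment the features would come from photos of the soil surface captured at different distances, times, and illumination levels, and the trained model's output could gate an irrigation controller, as the researchers describe.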
[]
[]
[]
scitechnews
None
None
None
None
Researchers at the University of South Australia (UniSA) and Middle Technical University in Iraq developed a smart irrigation system that uses a standard RGB digital camera and machine learning technology to monitor soil moisture. The new method aims to make precision soil monitoring easier and more cost-effective by eliminating the need for specialized hardware and expensive thermal imaging cameras that can encounter issues in certain climatic conditions. UniSA's Ali Al-Naji said the system was found to accurately determine moisture content at different distances, times, and illumination levels. The camera was connected to an artificial neural network, which could allow the system to be trained to recognize the specific soil conditions of any location. UniSA's Javaan Chahl said, "Once the network has been trained it should be possible to achieve controlled irrigation by maintaining the appearance of the soil at the desired state."
706
Learning Apps Have Boomed in the Pandemic. Now Comes the Real Test.
After a tough year of toggling between remote and in-person schooling, many students, teachers and their families feel burned out from pandemic learning. But companies that market digital learning tools to schools are enjoying a coronavirus windfall. Venture and equity financing for education technology start-ups has more than doubled, surging to $12.58 billion worldwide last year from $4.81 billion in 2019, according to a report from CB Insights, a firm that tracks start-ups and venture capital. During the same period, the number of laptops and tablets shipped to primary and secondary schools in the United States nearly doubled to 26.7 million, from 14 million, according to data from Futuresource Consulting, a market research company in Britain. "We've seen a real explosion in demand," said Michael Boreham, a senior market analyst at Futuresource. "It's been a massive, massive sea change out of necessity."
The pandemic fueled demand for education technology, with CB Insights reporting a surge in venture and equity financing for education technology startups from $4.81 billion in 2019 to $12.58 billion in 2020. Futuresource Consulting reports a jump in the number of laptops and tablets shipped to U.S. primary and secondary schools from 14 million to 26.7 million over the same period. The pandemic prompted schools to use videoconferencing and other digital tools to replicate the school day for remote students, rather than implement artificial intelligence-powered apps that could tailor lessons to a child's abilities. However, apps that facilitate online interactions between students and teachers have grabbed investors' attention. The reading lesson app Newsela, for instance, is now valued at $1 billion. Whether these apps will remain popular will depend on how useful they prove to be in the classroom amid the shift back to in-person learning.
[]
[]
[]
scitechnews
None
None
None
None
The pandemic fueled demand for education technology, with CB Insights reporting a surge in venture and equity financing for education technology startups from $4.81 billion in 2019 to $12.58 billion in 2020. Futuresource Consulting reports a jump in the number of laptops and tablets shipped to U.S. primary and secondary schools from 14 million to 26.7 million over the same period. The pandemic prompted schools to use videoconferencing and other digital tools to replicate the school day for remote students, rather than implement artificial intelligence-powered apps that could tailor lessons to a child's abilities. However, apps that facilitate online interactions between students and teachers have grabbed investors' attention. The reading lesson app Newsela, for instance, is now valued at $1 billion. Whether these apps will remain popular will depend on how useful they prove to be in the classroom amid the shift back to in-person learning. After a tough year of toggling between remote and in-person schooling, many students, teachers and their families feel burned out from pandemic learning. But companies that market digital learning tools to schools are enjoying a coronavirus windfall. Venture and equity financing for education technology start-ups has more than doubled, surging to $12.58 billion worldwide last year from $4.81 billion in 2019, according to a report from CB Insights, a firm that tracks start-ups and venture capital. During the same period, the number of laptops and tablets shipped to primary and secondary schools in the United States nearly doubled to 26.7 million, from 14 million, according to data from Futuresource Consulting, a market research company in Britain. "We've seen a real explosion in demand," said Michael Boreham, a senior market analyst at Futuresource. "It's been a massive, massive sea change out of necessity."
707
Are Quantum Computers Good at Picking Stocks? This Project Tried to Find Out
Consultancy firm KPMG, together with a team of researchers from the Technical University of Denmark (DTU) and a yet-to-be-named European bank, has been piloting the use of quantum computing to determine which stocks to buy and sell for maximum return, an age-old banking operation known as portfolio optimization. The researchers ran a model for portfolio optimization on Canadian company D-Wave's 2,000-qubit quantum annealing processor, comparing the results to those obtained with classical means. They found that the quantum annealer performed better and faster than other methods, while being capable of resolving larger problems - although the study also indicated that D-Wave's technology still comes with some issues to do with ease of programming and scalability. The smart distribution of portfolio assets is a problem that stands at the very heart of banking. Theorized by economist Harry Markowitz as early as 1952, it consists of allocating a fixed budget to a collection of financial assets in a way that will produce as much return as possible over time. In other words, it is an optimization problem: an investor should look to maximize gain and minimize risk for a given financial portfolio. As the number of assets in the portfolio multiplies, the difficulty of the calculation exponentially increases, and the problem can quickly become intractable, even to the world's largest supercomputers. Quantum computing, on the other hand, offers the possibility of running multiple calculations at once thanks to a special quantum state that is adopted by quantum bits, or qubits. Quantum systems, for now, cannot support enough qubits to have a real-world impact. But in principle, large-scale quantum computers could one day solve complex portfolio optimization problems in a matter of minutes - which is why the world's largest banks are already putting their research teams to work on developing quantum algorithms. To translate Markowitz's classical model for the portfolio selection problem into a quantum algorithm, the DTU researchers formulated the equation into a quantum model called a quadratic unconstrained binary optimization (QUBO) problem, which they based on the usual criteria used for the operation, such as budget and expected return. When deciding which quantum hardware to pick to test their model, the team was faced with a number of options: IBM and Google are both working on a superconducting quantum computer, while Honeywell and IonQ are building trapped-ion devices; Xanadu is looking at photonic quantum technologies, and Microsoft is creating a topological quantum system. D-Wave's quantum annealing processor is yet another approach to quantum computing. Unlike gate-based quantum computers, a quantum annealer does not allow direct control of its qubits; instead, D-Wave's technology consists of manipulating the environment surrounding the system and letting the device find a "ground state." In this case, the ground state corresponds to the optimal portfolio selection. This approach, while limiting the scope of the problems that can be resolved by a quantum annealer, also enables D-Wave to work with many more qubits than other devices. The company's latest device counts 5,000 qubits, while IBM's quantum computer, for example, supports fewer than 100 qubits. 
The researchers explained that the maturity of D-Wave's technology prompted them to pick quantum annealing to trial the algorithm; equipped with the processor, they were able to embed and run the problem for up to 65 assets. To benchmark the performance of the processor, they also ran the Markowitz equation with classical means, called brute force. With the computational resources at their disposal, brute force could only be used for up to 25 assets, after which the problem became intractable for the method. Comparing the two methods, the scientists found that the quality of the results provided by D-Wave's processor was equal to that delivered by brute force - proving that quantum annealing can reliably be used to solve the problem. In addition, as the number of assets grew, the quantum processor overtook brute force as the fastest method. From 15 assets onwards, D-Wave's processor effectively started showing significant speed-up over brute force, as the problem got closer to becoming intractable for the classical computer. To benchmark the performance of the quantum annealer for more than 25 assets - which is beyond the capability of brute force - the researchers compared the results obtained with D-Wave's processor to those obtained with a method called simulated annealing. There again, the study shows, the quantum processor provided high-quality results. Although the experiment suggests that quantum annealing might show a computational advantage over classical devices, Ulrich Busk Hoff, a researcher at DTU who participated in the research, warns against hasty conclusions. "For small-sized problems, the D-Wave quantum annealer is indeed competitive, as it offers a speed-up and solutions of high quality," he tells ZDNet. "That said, I believe that the study is premature for making any claims about an actual quantum advantage, and I would refrain from doing that. That would require a more rigorous comparison between D-Wave and classical methods - and using the best possible classical computational resources, which was far beyond the scope of the project." DTU's team also flagged some scalability issues, highlighting that as the portfolio size increased, there was a need to fine-tune the quantum model's parameters in order to prevent a drop in result quality. "As the portfolio size was increased, a degradation in the quality of the solutions found by quantum annealing was indeed observed," says Hoff. "But after optimization, the solutions were still competitive and were more often than not able to beat simulated annealing." In addition, with the quantum industry still largely in its infancy, the researchers pointed to the technical difficulties that still come with using quantum technologies. Implementing quantum models, they explained, requires a new way of thinking; translating classical problems into quantum algorithms is not straightforward, and even D-Wave's fairly accessible software development kit cannot be described yet as "plug-and-play." The Canadian company's quantum processor nevertheless shows a lot of promise for solving problems such as portfolio optimization. Although the researchers shared doubts that quantum annealing would have as much of an impact as large-scale gate-based quantum computers, they pledged to continue to explore the capabilities of the technology in other fields. 
"I think it's fair to say that D-Wave is a competitive candidate for solving this type of problem and it is certainly worthwhile further investigation," says Hoff. KPMG, DTU's researchers and large banks are far from alone in experimenting with D-Wave's technology for near-term applications of quantum computing. For example, researchers from pharmaceutical company GlaxoSmithKline (GSK) recently trialed the use of different quantum methods to sequence gene expression, and found that quantum annealing could already compete against classical computers to start addressing life-sized problems.
Researchers from the Technical University of Denmark (DTU), consultancy firm KPMG, and an unnamed European bank are piloting the use of quantum computing for portfolio optimization. The researchers turned economist Harry Markowitz's classical model for portfolio selection into a quadratic unconstrained binary optimization problem based on budget, expected return, and other criteria. Using D-Wave's 2,000-qubit quantum annealing processor, the researchers could embed and run the problem for up to 65 assets, versus 25 for the brute force method. D-Wave's processor outperformed brute force for 15 or more assets, and the simulated annealing method for more than 25 assets. DTU's Ulrich Busk Hoff said, "As the portfolio size was increased, a degradation in the quality of the solutions found by quantum annealing was indeed observed. But after optimization, the solutions were still competitive and were more often than not able to beat simulated annealing."
[]
[]
[]
scitechnews
None
None
None
None
Researchers from the Technical University of Denmark (DTU), consultancy firm KPMG, and an unnamed European bank are piloting the use of quantum computing for portfolio optimization. The researchers turned economist Harry Markowitz's classical model for portfolio selection into a quadratic unconstrained binary optimization problem based on budget, expected return, and other criteria. Using D-Wave's 2,000-qubit quantum annealing processor, the researchers could embed and run the problem for up to 65 assets, versus 25 for the brute force method. D-Wave's processor outperformed brute force for 15 or more assets, and the simulated annealing method for more than 25 assets. DTU's Ulrich Busk Hoff said, "As the portfolio size was increased, a degradation in the quality of the solutions found by quantum annealing was indeed observed. But after optimization, the solutions were still competitive and were more often than not able to beat simulated annealing." Consultancy firm KPMG, together with a team of researchers from the Technical University of Denmark (DTU) and a yet-to-be-named European bank, has been piloting the use of quantum computing to determine which stocks to buy and sell for maximum return, an age-old banking operation known as portfolio optimization. The researchers ran a model for portfolio optimization on Canadian company D-Wave's 2,000-qubit quantum annealing processor, comparing the results to those obtained with classical means. They found that the quantum annealer performed better and faster than other methods , while being capable of resolving larger problems - although the study also indicated that D-Wave's technology still comes with some issues to do with ease of programming and scalability. The smart distribution of portfolio assets is a problem that stands at the very heart of banking. Theorized by economist Harry Markowitz as early as 1952, it consists of allocating a fixed budget to a collection of financial assets in a way that will produce as much return as possible over time. In other words, it is an optimization problem: an investor should look to maximize gain and minimize risk for a given financial portfolio. SEE: Hiring Kit: Computer Hardware Engineer (TechRepublic Premium) As the number of assets in the portfolio multiplies, the difficulty of the calculation exponentially increases, and the problem can quickly become intractable, even to the world's largest supercomputers. Quantum computing, on the other hand, offers the possibility of running multiple calculations at once thanks to a special quantum state that is adopted by quantum bits, or qubits. Quantum systems, for now, cannot support enough qubits to have a real-world impact. But in principle, large-scale quantum computers could one day solve complex portfolio optimization problems in a matter of minutes - which is why the world's largest banks are already putting their research team to work on developing quantum algorithms. To translate Markowitz's classical model for the portfolio selection problem into a quantum algorithm, the DTU's researchers formulated the equation into a quantum model called a quadratic unconstrained binary optimization (QUBO) problem, which they based on the usual criteria used for the operation such as budget and expected return. 
When deciding which quantum hardware to pick to test their model, the team was faced with a number of options: IBM and Google are both working on a superconducting quantum computer, while Honeywell and IonQ are building trapped-ion devices; Xanadu is looking at photonic quantum technologies, and Microsoft is creating a topological quantum system. D-Wave's quantum annealing processor is yet another approach to quantum computing. Unlike other systems, which are gate-based quantum computers, it is not possible to control the qubits in a quantum annealer; instead, D-Wave's technology consists of manipulating the environment surrounding the system, and letting the device find a "ground state." In this case, the ground state corresponds to the most optimal portfolio selection. This approach, while limiting the scope of the problems that can be resolved by a quantum annealer, also enable D-Wave to work with many more qubits than other devices. The company's latest device counts 5,000 qubits , while IBM's quantum computer, for example, supports less than 100 qubits. The researchers explained that the maturity of D-Wave's technology prompted them to pick quantum annealing to trial the algorithm; and equipped with the processor, they were able to embed and run the problem for up to 65 assets. To benchmark the performance of the processor, they also ran the Markowitz equation with classical means, called brute force. With the computational resources at their disposal, brute force could only be used for up to 25 assets, after which the problem became intractable for the method. Comparing between the two methods, the scientists found that the quality of the results provided by D-Wave's processor was equal to that delivered by brute force - proving that quantum annealing can reliably be used to solve the problem. In addition, as the number of assets grew, the quantum processor overtook brute force as the fastest method. From 15 assets onwards, D-Wave's processor effectively started showing significant speed-up over brute force, as the problem got closer to becoming intractable for the classical computer. To benchmark the performance of the quantum annealer for more than 25 assets - which is beyond the capability of brute force - the researchers compared the results obtained with D-Wave's processor to those obtained with a method called simulated annealing. There again, shows the study, the quantum processor provided high-quality results. Although the experiment suggests that quantum annealing might show a computational advantage over classical devices, therefore, Ulrich Busk Hoff, researcher at DTU, who participated in the research, warns against hasty conclusions. "For small-sized problems, the D-Wave quantum annealer is indeed competitive, as it offers a speed-up and solutions of high quality," he tells ZDNet. "That said, I believe that the study is premature for making any claims about an actual quantum advantage, and I would refrain from doing that. That would require a more rigorous comparison between D-Wave and classical methods - and using the best possible classical computational resources, which was far beyond the scope of the project." DTU's team also flagged some scalability issues, highlighting that as the portfolio size increased, there was a need to fine-tune the quantum model's parameters in order to prevent a drop in results quality. "As the portfolio size was increased, a degradation in the quality of the solutions found by quantum annealing was indeed observed," says Hoff. 
"But after optimization, the solutions were still competitive and were more often than not able to beat simulated annealing." SEE: The EU wants to build its first quantum computer. That plan might not be ambitious enough In addition, with the quantum industry still largely in its infancy, the researchers pointed to the technical difficulties that still come with using quantum technologies. Implementing quantum models, they explained, requires a new way of thinking; translating classical problems into quantum algorithms is not straightforward, and even D-Wave's fairly accessible software development kit cannot be described yet as "plug-and-play." The Canadian company's quantum processor nevertheless shows a lot of promise for solving problems such as portfolio optimization. Although the researchers shared doubts that quantum annealing would have as much of an impact as large-scale gate-based quantum computers, they pledged to continue to explore the capabilities of the technology in other fields. "I think it's fair to say that D-Wave is a competitive candidate for solving this type of problem and it is certainly worthwhile further investigation," says Hoff. KPMG, DTU's researchers and large banks are far from alone in experimenting with D-Wave's technology for near-term applications of quantum computing. For example, researchers from pharmaceutical company GlaxoSmithKline (GSK) recently trialed the use of different quantum methods to sequence gene expression, and found that quantum annealing could already compete against classical computers to start addressing life-sized problems.
708
Continuous Upgrades to Datacenter Virtualization Setups Key to Curbing Carbon Emissions
Ramping up the use of virtualisation technologies within European datacentres could lead to a 55% reduction in carbon emissions by 2040, whereas if current deployment levels were to remain as they are, emissions would increase by more than 250% over the next 20 years. That is according to a report compiled by market watcher Aurora Energy Research, looking into how the growing demand for cloud and online services could impact the electricity consumption and carbon emissions of datacentres across Europe. As part of this work, commissioned by VMware, the report's authors developed a Zero Progress scenario whereby predictions were made about how the emissions generated by computing would be affected if the adoption and penetration levels of virtualisation software remained "stagnant" through to the year 2040. Alongside this, they developed a Continual Improvement scenario that looked at how the level of emissions would be affected over this same time period assuming "reasonable" adoption and deployment of these same technologies. "Our analysis suggests European computing emissions would grow 250% over the next 20 years in a Zero Progress scenario, but continued improvements in virtualisation technology and its penetration, underlying the Continual Improvement scenario, can enable cumulative energy efficiency-sourced CO2 reduction of 454 million tonnes by 2040, a 55% reduction compared to the baseline," the report stated. "While a Zero Progress scenario is improbable, this scenario highlights the importance of continued innovation in computing and its impact on carbon emissions. Consequently, policy initiatives that stifle a continuation of the pace of innovation seen historically in the sector can have a direct impact on emissions." The timing of the report is important, the authors claim, given its publication comes at a time when enterprises across Europe are in the midst of digital transformation projects that may see them increase their use of compute-intensive technologies, such as machine learning, artificial intelligence and blockchain. At the same time, many organisations have had to step up their use of cloud services since the onset of the Covid-19 coronavirus pandemic to enable remote working. Expanding on this point, the report claims the pandemic has led to "tremendous decreases" in the amount of carbon generated by commuting, but the "overnight" adoption of remote working and online learning, for example, has seen the amount of energy consumed by datacentres rise. "This increasing reliance on digital technologies such as mobile, edge and private/public cloud computing has further accelerated the enormous energy consumption by datacentres - and, therefore, the urgency to act to decarbonise," the report added. In recent months, various major players within the European colocation and hyperscale cloud space have gone public with their plans to curb their carbon emissions after the European Commission called for the datacentre sector to become climate-neutral by 2030. In acknowledgement of this, the report states that business leaders and policy makers have taken steps to ensure the pandemic has not caused them to lose sight of their "aggressive sustainability and carbon emissions goals to help mitigate climate change." Even so, the modelling and predictions set out in the report highlight why it is important to ensure the sector is doing all it can to make sure its growth does not come at the expense of the environment. 
Aside from increasing the deployment of virtualisation technologies within public cloud and on-premise datacentres, the report also recommends that operators increase the number of renewably powered datacentres they run. Furthermore, it makes the case for enterprises and cloud operators to consider shifting their computing demands to datacentres where there is an abundance of lower carbon-emitting, renewable power sources. "Datacentres can help integrate renewables into the grid by utilising their backup, emergency battery storage to help balance supply and demand on the grid; optimising the cooling of IT infrastructure to provide flexibility to the grid; and engaging in demand response by reducing datacentre power consumption to better match the current local supply of renewables," the report added. Ana Barillas, head of Iberia at Aurora Energy Research, said the report serves to highlight the positive impact that investing in continuous upgrades to datacentres can have on their environmental friendliness. "The Covid-19 pandemic has made our reliance on the cloud abundantly clear. The demand for computing across Europe is only expected to grow over the next 20 years, and how we deal with its impacts on CO2 emissions will become increasingly important," said Barillas. "The continuing improvement and adoption of virtualisation technologies can enable a 40% increase in electricity consumption from European IT over the next two decades without a corresponding impact on emissions." Luigi Freguia, senior vice-president and general manager for Europe, Middle East and Africa (EMEA) at VMware, added: "This report from Aurora Energy Research demonstrates that increased deployment of, and continued improvements in virtualisation technology, allows for much more computation with less energy, and has the opportunity to reduce potential future European computing emissions 55% by 2040."
Carbon emissions tied to European datacenters could be lowered 55% by 2040 through increased use of virtualization technologies, according to a report by Aurora Energy Research commissioned by VMware. However, the report also found that maintaining current datacenter deployment levels would see emissions rise by more than 250% over the same period. The researchers said that even as the pandemic significantly reduced carbon emissions from commuting, energy consumption by datacenters increased due to the swift adoption of remote working and online learning. In addition to increased deployment of virtualization technologies, the researchers emphasized the need for more renewably powered datacenters.
[]
[]
[]
scitechnews
None
None
None
None
Carbon emissions tied to European datacenters could be lowered 55% by 2040 through increased use of virtualization technologies, according to a report by Aurora Energy Research commissioned by VMware. However, the report also found that maintaining current datacenter deployment levels would see emissions rise by more than 250% over the same period. The researchers said that even as the pandemic significantly reduced carbon emissions from commuting, energy consumption by datacenters increased due to the swift adoption of remote working and online learning. In addition to increased deployment of virtualization technologies, the researchers emphasized the need for more renewably powered datacenters. Ramping up the use of virtualisation technologies within European datacentres could lead to a 55% reduction in carbon emissions by 2040, whereas if current deployment levels were to remain as they are, emissions would increase by more than 250% over the next 20 years. That is according to a report compiled by market watcher Aurora Energy Research, looking into how the growing demand for cloud and online services could impact the electricity consumption and carbon emissions of datacentres across Europe. As part of this work, commissioned by VMware, the report's authors developed a Zero Progress scenario whereby predictions were made about how the emissions generated by computing would be affected if the adoption and penetration levels of virtualisation software remained "stagnant" through to the year 2040. Alongside this, it developed a Continual Improvement scenario that looked at how the level of emissions would be affected over this same time period assuming "reasonable" adoption and deployment of these same technologies. "Our analysis suggests European computing emissions would grow 250% over the next 20 years in a Zero Progress scenario, but continued improvements in virtualisation technology and its penetration, underlying the Continual Improvement scenario, can enable cumulative energy efficiency-sourced CO 2 reduction of 454 million tonnes by 2040, a 55% reduction compared to the baseline," the report stated. "While a Zero Progress scenario is improbable, this scenario highlights the importance of continued innovation in computing and its impact on carbon emissions. Consequently, policy initiatives that stifle a continuation of the pace of innovation seen historically in the sector can have a direct impact on emissions." The timing of the report is important, the authors claim, given its publication comes at a time when enterprises across Europe are in the midst of digital transformation projects that may see them increase their use of compute-intensive technologies, such as machine learning, artificial intelligence and blockchain. At the same time, many organisations have had to step-up their use of cloud services since the onset of the Covid-19 coronavirus pandemic to enable remote working. Expanding on this point, the report claims the pandemic has led to "tremendous decreases" in the amount of carbon generated by commuting, but the "overnight" adoption of remote working and online learning, for example, has seen the amount of energy consumed by datacentres rise. "This increasing reliance on digital technologies such as mobile, edge and private/public cloud computing has further accelerated the enormous energy consumption by datacentres - and, therefore, the urgency to act to decarbonise," the report added. 
In recent months, various major players within the European colocation and hyperscale cloud space have gone public with their plans to curb their carbon emissions after the European Commission called for the datacentre sector to become climate-neutral by 2030 . In an acknowledgement to this, the report states that business leaders and policy makers have taken steps to ensure the pandemic has not caused them to lose sight of their "aggressive sustainability and carbon emissions goals to help mitigate climate change." Even so, the modelling and predictions set out in the report highlight why it is important to ensure the sector is doing all it can to make sure its growth does not come at the expense of the environment. Aside from increasing the deployment of virtualisation technologies within public cloud and on-premise datacentres, the report also recommends that operators need to increase the number of renewably powered datacentres they run. Furthermore, it makes the case for enterprises and cloud operators to consider shifting their computing demands to datacentres where there is an abundance of lower carbon-emitting, renewable power sources. "Datacentres can help integrate renewables into the grid by utilising their backup, emergency battery story to help balance supply and demand on the grid; optimising the cooling of IT infrastructure to provide flexibility to the grid; and engaging in demand response by reducing datacentre power consumption to better match the current local supply of renewables," the report added. Ana Barillas, head of Iberia at Aurora Energy Research, said the report serves to highlight the positive impact that investing in continuous upgrades to datacentres can have on their environmental friendliness . "The Covid-19 pandemic has made our reliance on the cloud abundantly clear. The demand for computing across Europe is only expected to grow over the next 20 years, and how we deal with its impacts on CO 2 emissions will become increasingly important," said Barillas. "The continuing improvement and adoption of virtualisation technologies can enable a 40% increase in electricity consumption from European IT over the next two decades without a corresponding impact on emissions." Luigi Freguia, senior vice-president and general manager for Europe, Middle East and Africa (EMEA) at VMware, added: "This report from Aurora Energy Research demonstrates that increased deployment of, and continued improvements in virtualisation technology, allows for much more computation with less energy, and has the opportunity to reduce potential future European computing emissions 55% by 2040."
709
2 Win Abel Prize for Work That Bridged Math, Computer Science
Two mathematicians will share this year's Abel Prize - regarded as the field's equivalent of the Nobel - for advances in understanding the foundations of what can and cannot be solved with computers. The work of the winners - László Lovász, 73, of Eötvös Loránd University in Budapest, and Avi Wigderson, 64, of the Institute for Advanced Study in Princeton, N.J. - involves proving theorems and developing methods in pure mathematics, but the research has found practical use in computer science, particularly in cryptography. On Wednesday, the Norwegian Academy of Science and Letters, which administers the prize, cited Dr. Lovász and Dr. Wigderson "for their foundational contributions to theoretical computer science and discrete mathematics, and their leading role in shaping them into central fields of modern mathematics." Dr. Lovász and Dr. Wigderson will split the award money of 7.5 million Norwegian kroner, or about $880,000.
Mathematicians Avi Wigderson of Princeton University's Institute for Advanced Study and László Lovász of Hungary's Eötvös Loránd University share this year's Abel Prize - considered the Nobel Prize of mathematics - for advancing fundamental concepts in computers' problem-solving capabilities. Their research entails proving theorems and developing techniques in pure mathematics that are practically applied in computer science, especially cryptography. Lovász co-created the LLL algorithm, which has been used to uncover weaknesses in certain cryptographic systems. Meanwhile, Wigderson's work includes demonstrating that any mathematical proof could be cast as a zero-knowledge proof. The Norwegian Academy of Science and Letters cited Wigderson and Lovász "for their foundational contributions to theoretical computer science and discrete mathematics, and their leading role in shaping them into central fields of modern mathematics."
[]
[]
[]
scitechnews
None
None
None
None
Mathematicians Avi Wigderson of Princeton University's Institute for Advanced Study and László Lovász of Hungary's Eötvös Loránd University share this year's Abel Prize - considered the Nobel Prize of mathematics - for advancing fundamental concepts in computers' problem-solving capabilities. Their research entails proving theorems and developing techniques in pure mathematics that are practically applied in computer science, especially cryptography. Lovász co-created the LLL algorithm, which has been used to uncover weaknesses in certain cryptographic systems. Meanwhile, Wigderson's work includes demonstrating that any mathematical proof could be cast as a zero-knowledge proof. The Norwegian Academy of Science and Letters cited Wigderson and Lovász "for their foundational contributions to theoretical computer science and discrete mathematics, and their leading role in shaping them into central fields of modern mathematics." Two mathematicians will share this year's Abel Prize - regarded as the field's equivalent of the Nobel - for advances in understanding the foundations of what can and cannot be solved with computers. The work of the winners - László Lovász, 73, of Eötvös Loránd University in Budapest, and Avi Wigderson, 64, of the Institute for Advanced Study in Princeton, N.J. - involves proving theorems and developing methods in pure mathematics, but the research has found practical use in computer science, particularly in cryptography. On Wednesday, the Norwegian Academy of Science and Letters, which administers the prize, cited Dr. Lovász and Dr. Wigderson "for their foundational contributions to theoretical computer science and discrete mathematics, and their leading role in shaping them into central fields of modern mathematics. " Dr. Lovász and Dr. Wigderson will split the award money of 7.5 million Norwegian kroner, or about $880,000.
710
Mobility Data Used to Respond to Covid-19 Can Leave Out Older and Non-White People
Information on individuals' mobility - where they go as measured by their smartphones - has been used widely in devising and evaluating ways to respond to COVID-19, including how to target public health resources. Yet little attention has been paid to how reliable these data are and what sorts of demographic bias they possess. A new study tested the reliability and bias of widely used mobility data, finding that older and non-White voters are less likely to be captured by these data. Allocating public health resources based on such information could cause disproportionate harms to high-risk elderly and minority groups. The study, by researchers at Carnegie Mellon University (CMU) and Stanford University, appears in the Proceedings of the ACM Conference on Fairness, Accountability, and Transparency , a publication of the Association for Computing Machinery. "Older age is a major risk factor for COVID-19-related mortality, and African-American, Native-American, and Latinx communities bear a disproportionately high burden of COVID-19 cases and deaths," explains Amanda Coston, a doctoral student at CMU's Heinz College and Machine Learning Department, who led the study as a summer research fellow at Stanford University's Regulation, Evaluation, and Governance Lab. "If these demographic groups are not well represented in data that are used to inform policymaking, we risk enacting policies that fail to help those at greatest risk and further exacerbating serious disparities in the health care response to the pandemic." During the COVID-19 pandemic, mobility data have been used to analyze the effectiveness of social distancing policies, illustrate how people's travel affects transmission of the virus, and probe how different sectors of the economy have been affected by social distancing. Yet despite the high-stakes settings in which this information has been used, independent assessments of the data's reliability are lacking. In this study, the first independent audit of demographic bias of a smartphone-based mobility dataset used in the response to COVID-19, researchers assessed the validity of SafeGraph data. This widely used mobility dataset contains information from approximately 47 million mobile devices in the United States. The data come from mobile applications, such as navigation, weather, and social media apps, where users have opted in to location tracking. When COVID-19 began, SafeGraph released much of its data for free as part of the COVID-19 Data Consortium to enable researchers, nonprofits, and governments to gain insight and inform responses. As a result, SafeGraph's mobility data have been used widely in pandemic research, including by the Centers for Disease Control and Prevention, and to inform public health orders and guidelines issued by governors' offices, large cities, and counties. Researchers in this study sought to determine whether SafeGraph data accurately represent the broader population. SafeGraph has reported publicly on the representativeness of its data. But the researchers suggest that because the company's analysis examined demographic bias only at Census-aggregated levels and did not address the question of demographic bias for inferences specific to places of interest (e.g. voting places), an independent audit was necessary. A major challenge in conducting such an audit is the lack of demographic information - SafeGraph data do not contain demographics such as age and race. 
In this study, researchers showed how administrative data can provide the demographic information necessary for a bias audit, supplementing the information gathered by SafeGraph. They used North Carolina voter registration and turnout records, which typically include information on age, gender, and race, as well as voters' travel to a polling location on Election Day. Their data came from a private voter file vendor that combines publicly available voter records. In all, the study included 539,000 voters from North Carolina who voted at 558 locations during the 2018 general election. The researchers deemed this sample highly representative of all voters in that state. The study identified a sampling bias in the SafeGraph data that underrepresents two high-risk groups, which the authors called particularly concerning in the context of the COVID-19 pandemic. Specifically, older and minority voters were less likely to be captured by the mobility data. This could lead jurisdictions to under-allocate important health resources, such as pop-up testing sites and masks, to vulnerable populations. "While SafeGraph information may help people make policy decisions, auxiliary information, including prior knowledge about local populations, should also be used to make policy decisions about allocating resources," suggests Alexandra Chouldechova, assistant professor of statistics and public policy at CMU, who coauthored the study. The authors also call for more work to determine how mobility data can be more representative, including asking firms that provide this kind of data to be more transparent in including the sources of their data (e.g., identifying which smartphone applications were used to access the information). Among the study's limitations, the authors note that in the United States, voters tend to be older and include more White people than the general population, so the study's results may underestimate the sampling bias in the general population. Additionally, since SafeGraph provides researchers with an aggregated version of the data for privacy reasons, researchers could not test for bias at the individual voter level. Instead, the authors tested for bias at physical places of interest, finding evidence that SafeGraph is more likely to capture traffic to places frequented by younger, largely White visitors than to places frequented by older, largely non-White visitors. More generally, the study shows how administrative data can be used to overcome the lack of demographic information, which is a common hurdle in conducting bias audits. The study was supported by Stanford University's Institute for Human-Centered Artificial Intelligence, the Stanford RISE COVID-19 Crisis Response Faculty Seed Grant Program, CMU's K & L Gates Presidential Fellowship, and the National Science Foundation. ### Summarized from an article in Proceedings of the ACM Conference on Fairness, Accountability, and Transparency , Leveraging Administrative Data for Bias Audits: Assessing Disparate Coverage with Mobility Data for COVID-19 Policy by Coston, A (Carnegie Mellon University), Guha, N (Stanford University), Ouyang, D (Stanford University), Lu, L (Stanford University), Chouldechova, A (Carnegie Mellon University), and Ho, DE (Stanford University). Copyright 2020. All rights reserved. 
About Heinz College of Information Systems and Public Policy The Heinz College of Information Systems and Public Policy is home to two internationally recognized graduate-level institutions at Carnegie Mellon University: the School of Information Systems and Management and the School of Public Policy and Management. This unique colocation combined with its expertise in analytics set Heinz College apart in the areas of cybersecurity, health care, the future of work, smart cities, and arts & entertainment. In 2016, INFORMS named Heinz College the #1 academic program for Analytics Education. For more information, please visit www.heinz.cmu.edu .
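The audit logic described above boils down to a group-wise coverage comparison. The sketch below fakes that comparison at the individual level for simplicity - the actual study worked with place-level aggregates for privacy - using invented column names and an invented under-coverage pattern; it is not the CMU/Stanford audit code or the SafeGraph schema.

```python
# Hedged sketch: given a roster with demographics (a synthetic stand-in for
# voter records) and a flag for whether each person's polling-place visit
# shows up in a mobility dataset, compare capture rates across groups.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 50_000

voters = pd.DataFrame({
    "age_group": rng.choice(["18-39", "40-64", "65+"], size=n, p=[0.35, 0.40, 0.25]),
    "race": rng.choice(["White", "Black", "Other"], size=n, p=[0.65, 0.22, 0.13]),
})

# Simulate a smartphone panel that under-covers older and non-White voters.
base_rate = 0.30
age_effect = voters["age_group"].map({"18-39": 0.10, "40-64": 0.00, "65+": -0.12})
race_effect = voters["race"].map({"White": 0.03, "Black": -0.08, "Other": -0.06})
p_captured = (base_rate + age_effect + race_effect).clip(0.01, 0.99)
voters["captured"] = rng.random(n) < p_captured

# The audit question: does the capture rate differ by demographic group?
coverage = (voters.groupby(["age_group", "race"])["captured"]
                  .agg(rate="mean", voters="size")
                  .reset_index())
overall = voters["captured"].mean()
coverage["rate_vs_overall"] = coverage["rate"] - overall

print("overall capture rate: %.3f" % overall)
print(coverage.to_string(index=False))
```

In the published audit, the "captured" signal comes from comparing SafeGraph visit counts at each polling place against observed turnout from the North Carolina voter file, but the question asked of the data is the same: do capture rates differ systematically by age and race?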
Researchers at Carnegie Mellon (CMU) and Stanford universities assessed smartphone-based mobility data used to respond to Covid-19, and found older and non-white voters were less likely to be included. The team audited the SafeGraph mobility dataset containing information from roughly 47 million mobile devices nationwide, harvested from mobile applications for which users opted in to geolocation. To overcome a dearth of demographic information, the researchers used North Carolina voter registration and turnout records and voters' travel to polling site locations on Election Day. The researchers said the lack of older and non-white voters in the mobility data could result in under-allocation of vital health resources to vulnerable populations.
[]
[]
[]
scitechnews
None
None
None
None
Researchers at Carnegie Mellon (CMU) and Stanford universities assessed smartphone-based mobility data used to respond to Covid-19, and found older and non-white voters were less likely to be included. The team audited the SafeGraph mobility dataset containing information from roughly 47 million mobile devices nationwide, harvested from mobile applications for which users opted in to geolocation. To overcome a dearth of demographic information, the researchers used North Carolina voter registration and turnout records and voters' travel to polling site locations on Election Day. The researchers said the lack of older and non-white voters in the mobility data could result in under-allocation of vital health resources to vulnerable populations. Information on individuals' mobility - where they go as measured by their smartphones - has been used widely in devising and evaluating ways to respond to COVID-19, including how to target public health resources. Yet little attention has been paid to how reliable these data are and what sorts of demographic bias they possess. A new study tested the reliability and bias of widely used mobility data, finding that older and non-White voters are less likely to be captured by these data. Allocating public health resources based on such information could cause disproportionate harms to high-risk elderly and minority groups. The study, by researchers at Carnegie Mellon University (CMU) and Stanford University, appears in the Proceedings of the ACM Conference on Fairness, Accountability, and Transparency , a publication of the Association for Computing Machinery. "Older age is a major risk factor for COVID-19-related mortality, and African-American, Native-American, and Latinx communities bear a disproportionately high burden of COVID-19 cases and deaths," explains Amanda Coston, a doctoral student at CMU's Heinz College and Machine Learning Department, who led the study as a summer research fellow at Stanford University's Regulation, Evaluation, and Governance Lab. "If these demographic groups are not well represented in data that are used to inform policymaking, we risk enacting policies that fail to help those at greatest risk and further exacerbating serious disparities in the health care response to the pandemic." During the COVID-19 pandemic, mobility data have been used to analyze the effectiveness of social distancing policies, illustrate how people's travel affects transmission of the virus, and probe how different sectors of the economy have been affected by social distancing. Yet despite the high-stakes settings in which this information has been used, independent assessments of the data's reliability are lacking. In this study, the first independent audit of demographic bias of a smartphone-based mobility dataset used in the response to COVID-19, researchers assessed the validity of SafeGraph data. This widely used mobility dataset contains information from approximately 47 million mobile devices in the United States. The data come from mobile applications, such as navigation, weather, and social media apps, where users have opted in to location tracking. When COVID-19 began, SafeGraph released much of its data for free as part of the COVID-19 Data Consortium to enable researchers, nonprofits, and governments to gain insight and inform responses. 
As a result, SafeGraph's mobility data have been used widely in pandemic research, including by the Centers for Disease Control and Prevention, and to inform public health orders and guidelines issued by governors' offices, large cities, and counties. Researchers in this study sought to determine whether SafeGraph data accurately represent the broader population. SafeGraph has reported publicly on the representativeness of its data. But the researchers suggest that because the company's analysis examined demographic bias only at Census-aggregated levels and did not address the question of demographic bias for inferences specific to places of interest (e.g. voting places), an independent audit was necessary. A major challenge in conducting such an audit is the lack of demographic information - SafeGraph data do not contain demographics such as age and race. In this study, researchers showed how administrative data can provide the demographic information necessary for a bias audit, supplementing the information gathered by SafeGraph. They used North Carolina voter registration and turnout records, which typically include information on age, gender, and race, as well as voters' travel to a polling location on Election Day. Their data came from a private voter file vendor that combines publicly available voter records. In all, the study included 539,000 voters from North Carolina who voted at 558 locations during the 2018 general election. The researchers deemed this sample highly representative of all voters in that state. The study identified a sampling bias in the SafeGraph data that underrepresents two high-risk groups, which the authors called particularly concerning in the context of the COVID-19 pandemic. Specifically, older and minority voters were less likely to be captured by the mobility data. This could lead jurisdictions to under-allocate important health resources, such as pop-up testing sites and masks, to vulnerable populations. "While SafeGraph information may help people make policy decisions, auxiliary information, including prior knowledge about local populations, should also be used to make policy decisions about allocating resources," suggests Alexandra Chouldechova, assistant professor of statistics and public policy at CMU, who coauthored the study. The authors also call for more work to determine how mobility data can be more representative, including asking firms that provide this kind of data to be more transparent in including the sources of their data (e.g., identifying which smartphone applications were used to access the information). Among the study's limitations, the authors note that in the United States, voters tend to be older and include more White people than the general population, so the study's results may underestimate the sampling bias in the general population. Additionally, since SafeGraph provides researchers with an aggregated version of the data for privacy reasons, researchers could not test for bias at the individual voter level. Instead, the authors tested for bias at physical places of interest, finding evidence that SafeGraph is more likely to capture traffic to places frequented by younger, largely White visitors than to places frequented by older, largely non-White visitors. More generally, the study shows how administrative data can be used to overcome the lack of demographic information, which is a common hurdle in conducting bias audits. 
The study was supported by Stanford University's Institute for Human-Centered Artificial Intelligence, the Stanford RISE COVID-19 Crisis Response Faculty Seed Grant Program, CMU's K & L Gates Presidential Fellowship, and the National Science Foundation. ### Summarized from an article in Proceedings of the ACM Conference on Fairness, Accountability, and Transparency , Leveraging Administrative Data for Bias Audits: Assessing Disparate Coverage with Mobility Data for COVID-19 Policy by Coston, A (Carnegie Mellon University), Guha, N (Stanford University), Ouyang, D (Stanford University), Lu, L (Stanford University), Chouldechova, A (Carnegie Mellon University), and Ho, DE (Stanford University). Copyright 2020. All rights reserved. About Heinz College of Information Systems and Public Policy The Heinz College of Information Systems and Public Policy is home to two internationally recognized graduate-level institutions at Carnegie Mellon University: the School of Information Systems and Management and the School of Public Policy and Management. This unique colocation combined with its expertise in analytics set Heinz College apart in the areas of cybersecurity, health care, the future of work, smart cities, and arts & entertainment. In 2016, INFORMS named Heinz College the #1 academic program for Analytics Education. For more information, please visit www.heinz.cmu.edu .
711
Israeli Town Abuzz with Delivery Drones in Coordinated Airspace Test
Rural Hadera, Israel, recently was turned over to five private firms that flew drones across its airspace as national authorities tested the responses of a central control room in the city of Haifa, 56 km (35 miles) away. The purpose of the control room is to safely coordinate the unmanned aircraft with each other, as well as with planes and helicopters. The Israel Innovation Authority's Hagit Lidor said, "This is an opportunity for the regulators to learn what is needed to establish delivery drones as a daily reality, and for the drone operators to learn what is expected of them in turn."
[]
[]
[]
scitechnews
None
None
None
None
Rural Hadera, Israel, recently was turned over to five private firms that flew drones across its airspace as national authorities tested the responses of a central control room in the city of Haifa, 56 km (35 miles) away. The purpose of the control room is to safely coordinate the unmanned aircraft with each other, as well as with planes and helicopters. The Israel Innovation Authority's Hagit Lidor said, "This is an opportunity for the regulators to learn what is needed to establish delivery drones as a daily reality, and for the drone operators to learn what is expected of them in turn."
712
Malware Was Written in an Unusual Programming Language to Stop It From Being Detected
A prolific cyber-criminal hacking operation is distributing new malware that is written in a programming language rarely used to compile malicious code. Dubbed NimzaLoader by cybersecurity researchers at Proofpoint , the malware is written in Nim - and it's thought that those behind the malware have decided to develop it this way in the hopes that choosing an unexpected programming language will make it more difficult to detect and analyse. NimzaLoader malware is designed to provide cyber attackers with access to Windows computers, and with the ability to execute commands - something that could give those controlling the malware the ability to control the machine, steal sensitive information, or potentially deploy additional malware. SEE: A winning strategy for cybersecurity (ZDNet special report) | Download the report as a PDF (TechRepublic) The malware is thought to be the work of a cyber-criminal hacking group that Proofpoint refers to as TA800, a hacking operation that targets a wide range of industries across North America. The group is usually associated with BazarLoader , a form of trojan malware that creates a full backdoor onto compromised Windows machines and is known to be used to deliver ransomware attacks . Like BazarLoader, NimzaLoader is distributed using phishing emails that link potential victims to a fake PDF downloader, which, if run, will download the malware onto the machine. At least some of the phishing emails are tailored towards specific targets with customised references involving personal details like the recipient's name and the company they work for. The template of the messages and the way the attack attempts to deliver the payload is consistent with previous TA800 phishing campaigns, leading researchers to the conclusion that NimzaLoader is also the work of what was already a prolific hacking operation, which has now added another means of attack. "TA800 has often leveraged different and unique malware, and developers may choose to use a rare programming language like Nim to avoid detection, as reverse engineers may not be familiar with Nim's implementation or focus on developing detection for it, and therefore tools and sandboxes may struggle to analyse samples of it," Sherrod DeGrippo, senior director of threat research and detection at Proofpoint, told ZDNet. Like BazarLoader before it, there's the potential that NimzaLoader could be adopted as a tool that's leased out to cyber criminals as a means of distributing their own malware attacks. SEE: Security Awareness and Training policy (TechRepublic Premium) With phishing the key means of distributing NimzaLoader, it's therefore recommended that organisations ensure that their network is secured with tools that help prevent malicious emails from arriving in inboxes in the first place. It's also recommended that organisations train staff on how to spot phishing emails, particularly when campaigns like this one attempt to exploit personal details as a means of encouraging victims to let their guard down.
Researchers at cybersecurity firm Proofpoint have determined a hacking group known as TA800 is distributing new malware written in the Nim programming language, in order to make it harder to detect. The NimzaLoader malware, distributed via phishing emails that connect to a fake PDF downloader, is intended to give hackers access to Windows computers and the ability to execute commands on them. Proofpoint's Sherrod DeGrippo said, "TA800 has often leveraged different and unique malware, and developers may choose to use a rare programming language like Nim to avoid detection, as reverse-engineers may not be familiar with Nim's implementation or focus on developing detection for it, and therefore tools and sandboxes may struggle to analyze samples of it."
[]
[]
[]
scitechnews
None
None
None
None
Researchers at cybersecurity firm Proofpoint have determined a hacking group known as TA800 is distributing new malware written in the Nim programming language, in order to make it harder to detect. The NimzaLoader malware, distributed via phishing emails that connect to a fake PDF downloader, is intended to give hackers access to Windows computers and the ability to execute commands on them. Proofpoint's Sherrod DeGrippo said, "TA800 has often leveraged different and unique malware, and developers may choose to use a rare programming language like Nim to avoid detection, as reverse-engineers may not be familiar with Nim's implementation or focus on developing detection for it, and therefore tools and sandboxes may struggle to analyze samples of it." A prolific cyber-criminal hacking operation is distributing new malware that is written in a programming language rarely used to compile malicious code. Dubbed NimzaLoader by cybersecurity researchers at Proofpoint , the malware is written in Nim - and it's thought that those behind the malware have decided to develop it this way in the hopes that choosing an unexpected programming language will make it more difficult to detect and analyse. NimzaLoader malware is designed to provide cyber attackers with access to Windows computers, and with the ability to execute commands - something that could give those controlling the malware the ability to control the machine, steal sensitive information, or potentially deploy additional malware. SEE: A winning strategy for cybersecurity (ZDNet special report) | Download the report as a PDF (TechRepublic) The malware is thought to be the work of a cyber-criminal hacking group that Proofpoint refers to as TA800, a hacking operation that targets a wide range of industries across North America. The group is usually associated with BazarLoader , a form of trojan malware that creates a full backdoor onto compromised Windows machines and is known to be used to deliver ransomware attacks . Like BazarLoader, NimzaLoader is distributed using phishing emails that link potential victims to a fake PDF downloader, which, if run, will download the malware onto the machine. At least some of the phishing emails are tailored towards specific targets with customised references involving personal details like the recipient's name and the company they work for. The template of the messages and the way the attack attempts to deliver the payload is consistent with previous TA800 phishing campaigns, leading researchers to the conclusion that NimzaLoader is also the work of what was already a prolific hacking operation, which has now added another means of attack. "TA800 has often leveraged different and unique malware, and developers may choose to use a rare programming language like Nim to avoid detection, as reverse engineers may not be familiar with Nim's implementation or focus on developing detection for it, and therefore tools and sandboxes may struggle to analyse samples of it," Sherrod DeGrippo, senior director of threat research and detection at Proofpoint, told ZDNet. Like BazarLoader before it, there's the potential that NimzaLoader could be adopted as a tool that's leased out to cyber criminals as a means of distributing their own malware attacks. 
SEE: Security Awareness and Training policy (TechRepublic Premium) With phishing the key means of distributing NimzaLoader, it's therefore recommended that organisations ensure that their network is secured with tools that help prevent malicious emails from arriving in inboxes in the first place. It's also recommended that organisations train staff on how to spot phishing emails, particularly when campaigns like this one attempt to exploit personal details as a means of encouraging victims to let their guard down.
713
Faster Drug Discovery Through Machine Learning
Drugs can only work if they stick to their target proteins in the body. Assessing that stickiness is a key hurdle in the drug discovery and screening process. New research combining chemistry and machine learning could lower that hurdle. The new technique, dubbed DeepBAR, quickly calculates the binding affinities between drug candidates and their targets. The approach yields precise calculations in a fraction of the time compared to previous state-of-the-art methods. The researchers say DeepBAR could one day quicken the pace of drug discovery and protein engineering. "Our method is orders of magnitude faster than before, meaning we can have drug discovery that is both efficient and reliable," says Bin Zhang, the Pfizer-Laubach Career Development Professor in Chemistry at MIT, an associate member of the Broad Institute of MIT and Harvard, and a co-author of a new paper describing the technique. The research appears today in the Journal of Physical Chemistry Letters. The study's lead author is Xinqiang Ding, a postdoc in MIT's Department of Chemistry. The affinity between a drug molecule and a target protein is measured by a quantity called the binding free energy - the smaller the number, the stickier the bind. "A lower binding free energy means the drug can better compete against other molecules," says Zhang, "meaning it can more effectively disrupt the protein's normal function." Calculating the binding free energy of a drug candidate provides an indicator of a drug's potential effectiveness. But it's a difficult quantity to nail down. Methods for computing binding free energy fall into two broad categories, each with its own drawbacks. One category calculates the quantity exactly, eating up significant time and computer resources. The second category is less computationally expensive, but it yields only an approximation of the binding free energy. Zhang and Ding devised an approach to get the best of both worlds. Exact and efficient DeepBAR computes binding free energy exactly, but it requires just a fraction of the calculations demanded by previous methods. The new technique combines traditional chemistry calculations with recent advances in machine learning. The "BAR" in DeepBAR stands for "Bennett acceptance ratio," a decades-old algorithm used in exact calculations of binding free energy. Using the Bennett acceptance ratio typically requires knowledge of two "endpoint" states (e.g., a drug molecule bound to a protein and a drug molecule completely dissociated from a protein), plus knowledge of many intermediate states (e.g., varying levels of partial binding), all of which bog down calculation speed. DeepBAR slashes those in-between states by deploying the Bennett acceptance ratio in machine-learning frameworks called deep generative models. "These models create a reference state for each endpoint, the bound state and the unbound state," says Zhang. These two reference states are similar enough that the Bennett acceptance ratio can be used directly, without all the costly intermediate steps. In using deep generative models, the researchers were borrowing from the field of computer vision. "It's basically the same model that people use to do computer image synthesis," says Zhang. "We're sort of treating each molecular structure as an image, which the model can learn. So, this project is building on the effort of the machine learning community." While adapting a computer vision approach to chemistry was DeepBAR's key innovation, the crossover also raised some challenges.
"These models were originally developed for 2D images," says Ding. "But here we have proteins and molecules - it's really a 3D structure. So, adapting those methods in our case was the biggest technical challenge we had to overcome." A faster future for drug screening In tests using small protein-like molecules, DeepBAR calculated binding free energy nearly 50 times faster than previous methods. Zhang says that efficiency means "we can really start to think about using this to do drug screening, in particular in the context of Covid. DeepBAR has the exact same accuracy as the gold standard, but it's much faster." The researchers add that, in addition to drug screening, DeepBAR could aid protein design and engineering, since the method could be used to model interactions between multiple proteins. DeepBAR is "a really nice computational work" with a few hurdles to clear before it can be used in real-world drug discovery, says Michael Gilson, a professor of pharmaceutical sciences at the University of California at San Diego, who was not involved in the research. He says DeepBAR would need to be validated against complex experimental data. "That will certainly pose added challenges, and it may require adding in further approximations." In the future, the researchers plan to improve DeepBAR's ability to run calculations for large proteins, a task made feasible by recent advances in computer science. "This research is an example of combining traditional computational chemistry methods, developed over decades, with the latest developments in machine learning," says Ding. "So, we achieved something that would have been impossible before now." This research was funded, in part, by the National Institutes of Health.
The DeepBAR technique developed by Massachusetts Institute of Technology (MIT) researchers combines chemistry and machine learning to rapidly calculate the binding affinities between drug candidates and targets. DeepBAR completes calculations of binding free energy in a fraction of the time required by previous methods. The technique utilizes the binding free energy-calculating Bennett acceptance ratio (BAR) algorithm in deep generative models, creating two reference states for each endpoint with sufficient resemblance to enable direct BAR usage without intermediate steps. DeepBAR calculated binding free energy nearly 50 times faster than previous methods in tests with small protein-like molecules. MIT's Bin Zhang says this efficiency means "we can really start to think about using this to do drug screening, in particular in the context of Covid."
[]
[]
[]
scitechnews
None
None
None
None
The DeepBAR technique developed by Massachusetts Institute of Technology (MIT) researchers combines chemistry and machine learning to rapidly calculate the binding affinities between drug candidates and targets. DeepBAR completes calculations of binding free energy in a fraction of the time required by previous methods. The technique utilizes the binding free energy-calculating Bennett acceptance ratio (BAR) algorithm in deep generative models, creating two reference states for each endpoint with sufficient resemblance to enable direct BAR usage without intermediate steps. DeepBAR calculated binding free energy nearly 50 times faster than previous methods in tests with small protein-like molecules. MIT's Bin Zhang says this efficiency means "we can really start to think about using this to do drug screening, in particular in the context of Covid." Drugs can only work if they stick to their target proteins in the body. Assessing that stickiness is a key hurdle in the drug discovery and screening process. New research combining chemistry and machine learning could lower that hurdle. The new technique, dubbed DeepBAR, quickly calculates the binding affinities between drug candidates and their targets. The approach yields precise calculations in a fraction of the time compared to previous state-of-the-art methods. The researchers say DeepBAR could one day quicken the pace of drug discovery and protein engineering. "Our method is orders of magnitude faster than before, meaning we can have drug discovery that is both efficient and reliable," says Bin Zhang, the Pfizer-Laubach Career Development Professor in Chemistry at MIT, an associate member of the Broad Institute of MIT and Harvard, and a co-author of a new paper describing the technique. The research appears today in the Journal of Physical Chemistry Letters. The study's lead author is Xinqiang Ding, a postdoc in MIT's Department of Chemistry. The affinity between a drug molecule and a target protein is measured by a quantity called the binding free energy - the smaller the number, the stickier the bind. "A lower binding free energy means the drug can better compete against other molecules," says Zhang, "meaning it can more effectively disrupt the protein's normal function." Calculating the binding free energy of a drug candidate provides an indicator of a drug's potential effectiveness. But it's a difficult quantity to nail down. Methods for computing binding free energy fall into two broad categories, each with its own drawbacks. One category calculates the quantity exactly, eating up significant time and computer resources. The second category is less computationally expensive, but it yields only an approximation of the binding free energy. Zhang and Ding devised an approach to get the best of both worlds. Exact and efficient DeepBAR computes binding free energy exactly, but it requires just a fraction of the calculations demanded by previous methods. The new technique combines traditional chemistry calculations with recent advances in machine learning. The "BAR" in DeepBAR stands for "Bennett acceptance ratio," a decades-old algorithm used in exact calculations of binding free energy. Using the Bennett acceptance ratio typically requires knowledge of two "endpoint" states (e.g., a drug molecule bound to a protein and a drug molecule completely dissociated from a protein), plus knowledge of many intermediate states (e.g., varying levels of partial binding), all of which bog down calculation speed. 
DeepBAR slashes those in-between states by deploying the Bennett acceptance ratio in machine-learning frameworks called deep generative models. "These models create a reference state for each endpoint, the bound state and the unbound state," says Zhang. These two reference states are similar enough that the Bennett acceptance ratio can be used directly, without all the costly intermediate steps. In using deep generative models, the researchers were borrowing from the field of computer vision. "It's basically the same model that people use to do computer image synthesis," says Zhang. "We're sort of treating each molecular structure as an image, which the model can learn. So, this project is building on the effort of the machine learning community." While adapting a computer vision approach to chemistry was DeepBAR's key innovation, the crossover also raised some challenges. "These models were originally developed for 2D images," says Ding. "But here we have proteins and molecules - it's really a 3D structure. So, adapting those methods in our case was the biggest technical challenge we had to overcome." A faster future for drug screening In tests using small protein-like molecules, DeepBAR calculated binding free energy nearly 50 times faster than previous methods. Zhang says that efficiency means "we can really start to think about using this to do drug screening, in particular in the context of Covid. DeepBAR has the exact same accuracy as the gold standard, but it's much faster." The researchers add that, in addition to drug screening, DeepBAR could aid protein design and engineering, since the method could be used to model interactions between multiple proteins. DeepBAR is "a really nice computational work" with a few hurdles to clear before it can be used in real-world drug discovery, says Michael Gilson, a professor of pharmaceutical sciences at the University of California at San Diego, who was not involved in the research. He says DeepBAR would need to be validated against complex experimental data. "That will certainly pose added challenges, and it may require adding in further approximations." In the future, the researchers plan to improve DeepBAR's ability to run calculations for large proteins, a task made feasible by recent advances in computer science. "This research is an example of combining traditional computational chemistry methods, developed over decades, with the latest developments in machine learning," says Ding. "So, we achieved something that would have been impossible before now." This research was funded, in part, by the National Institutes of Health.
715
Robots Increase Gender Pay Gap Despite Raising Wages Overall
Researchers at King's College London in the U.K. studied the impact of automation on 20 European countries and found that automation pushed up all wages on average, but widened the gender pay gap. The study determined that the number of robots per 10,000 workers rose an average 47% from 2006 to 2014, and correlated a 10% increase in robot workers to a 1.8% boost in the gender pay gap. The researchers attribute the increased pay discrepancy to the fact that more men hold medium- and high-skilled jobs that disproportionately benefit from automation. Countries with already high gender inequality and less support for women in the workforce saw a bigger increase in the gender pay gap due to automation, according to the research, which also found no statistically significant impact on the gender pay gap in countries with low gender inequality.
[]
[]
[]
scitechnews
None
None
None
None
Researchers at King's College London in the U.K. studied the impact of automation on 20 European countries and found that automation pushed up all wages on average, but widened the gender pay gap. The study determined that the number of robots per 10,000 workers rose an average 47% from 2006 to 2014, and correlated a 10% increase in robot workers to a 1.8% boost in the gender pay gap. The researchers attribute the increased pay discrepancy to the fact that more men hold medium- and high-skilled jobs that disproportionately benefit from automation. Countries with already high gender inequality and less support for women in the workforce saw a bigger increase in the gender pay gap due to automation, according to the research, which also found no statistically significant impact on the gender pay gap in countries with low gender inequality.
716
Smart Device Push Brings IT, R&D Teams Together
Recent products include a smart toothbrush introduced last year that is designed to make recommendations on how a person could brush better. Colgate R&D staffers worked on the brush head, sensors and other aspects of the physical product, while members of the IT team developed the underlying application, including the machine learning that analyzes the data to make recommendations. "It is really about maximizing the use of the combined skills," said Mr. Crowe. Although IT and R&D have worked closely in electronics and manufacturing, which have strong engineering traditions, it is a relatively recent occurrence in other industries, said Erik Roth, a senior partner at McKinsey & Co. and the leader of the firm's innovation and growth practice. The increased adoption of the Internet of Things, where sensors transmitting data in real time are embedded in various devices, as well as artificial intelligence to analyze that data, has extended its reach. So, too, has the expectation among more companies for IT to deliver more value. Within the consumer-goods industry, some 70% of CIOs reported an increase in business leaders asking their departments to work on higher-value, more-strategic projects as a result of the pandemic, according to a 2021 Gartner CIO Agenda survey. Those projects include adding digital experiences to physical products, said Michelle Duerst, vice president, analyst at Gartner. "The IT function as a separate entity, operating as a service or a central center of competence alone, doesn't work," said Procter & Gamble Co. CIO Vittorio Cretella. Like Colgate, the consumer-packaged goods company also has its IT and R&D teams working together, creating its own smart toothbrush as well as the Olay Skin Advisor personalized skin-care analyzer, and the Gillette Style Advisor facial-hair style assistant. "Our partnership is now getting very, very pervasive in everything we do," said Victor Aguilar, P&G's chief research, development and innovation officer. How P&G and Colgate bring IT and R&D teams together is somewhat flexible and can vary depending on the project. The IT side often provides the software engineers who can build platforms and applications, data scientists with expertise in artificial intelligence, and Internet of Things specialists with connectivity skills. The R&D department, which may also have data scientists, will enlist clinical researchers who can advise the team on the usefulness and safety of a device as well as industrial designers and product developers who can plan and build a first, basic model of the product. The teams work together on initial research and create prototypes to prove the concept has legs, the companies' executives said. But teaming up IT and R&D isn't without its challenges, McKinsey's Mr. Roth said. "It's not a natural pairing," Mr. Roth said, adding that the two units are staffed by people coming from different cultures. For instance, IT traditionally runs and maintains equipment while R&D's role has been to deliver research and products, he said. To help the two teams break down barriers and build trust, Colgate says it uses agile management, a common software development methodology, in which participants break development into small tasks, develop various functions and features in short sprints and quickly adjust if something isn't working or if a better idea surfaces. 
As IT and R&D departments work more closely together, they can learn from each other and get a better appreciation for the types of innovative products their companies might want to develop, said Mr. Roth. The teams could gain, he said, "a much greater appreciation, and role, in thinking about their work [being] directly tied to value creation." Write to John McCormick at [email protected]
Information Technology (IT) staffers at consumer packaged goods companies increasingly are working with research and development (R&D) teams to develop more connected devices. To develop Colgate's new smart toothbrush, the R&D team worked on the brush head and sensors, while the IT team developed the underlying application, which uses machine learning to analyze data collected by the toothbrush to make recommendations for brushing better. Colgate's Mike Crowe said, "It is really about maximizing the use of the combined skills." The IT and R&D teams at Procter & Gamble (P&G) also have collaborated on a smart toothbrush, a personalized skin-care analyzer, and a facial-hair style assistant. McKinsey's Erik Roth said teaming IT and R&D is not without its challenges. "It's not a natural pairing," Roth said, because each unit is staffed by people from a different work culture.
[]
[]
[]
scitechnews
None
None
None
None
Information Technology (IT) staffers at consumer packaged goods companies increasingly are working with research and development (R&D) teams to develop more connected devices. To develop Colgate's new smart toothbrush, the R&D team worked on the brush head and sensors, while the IT team developed the underlying application, which uses machine learning to analyze data collected by the toothbrush to make recommendations for brushing better. Colgate's Mike Crowe said, "It is really about maximizing the use of the combined skills." The IT and R&D teams at Procter & Gamble (P&G) also have collaborated on a smart toothbrush, a personalized skin-care analyzer, and a facial-hair style assistant. McKinsey's Erik Roth said teaming IT and R&D is not without its challenges. "It's not a natural pairing," Roth said, because each unit is staffed by people from a different work culture. Recent products include a smart toothbrush introduced last year that is designed to make recommendations on how a person could brush better. Colgate R&D staffers worked on the brush head, sensors and other aspects of the physical product, while members of the IT team developed the underlying application, including the machine learning that analyzes the data to make recommendations. "It is really about maximizing the use of the combined skills," said Mr. Crowe. Although IT and R&D have worked closely in electronics and manufacturing, which have strong engineering traditions, it is a relatively recent occurrence in other industries, said Erik Roth, a senior partner at McKinsey & Co. and the leader of the firm's innovation and growth practice. The increased adoption of the Internet of Things, where sensors transmitting data in real time are embedded in various devices, as well as artificial intelligence to analyze that data, has extended its reach. So, too, has the expectation among more companies for IT to deliver more value. Within the consumer-goods industry, some 70% of CIOs reported an increase in business leaders asking their departments to work on higher-value, more-strategic projects as a result of the pandemic, according to a 2021 Gartner CIO Agenda survey. Those projects include adding digital experiences to physical products, said Michelle Duerst, vice president, analyst at Gartner. "The IT function as a separate entity, operating as a service or a central center of competence alone, doesn't work," said Procter & Gamble Co. CIO Vittorio Cretella. Like Colgate, the consumer-packaged goods company also has its IT and R&D teams working together, creating its own smart toothbrush as well as the Olay Skin Advisor personalized skin-care analyzer, and the Gillette Style Advisor facial-hair style assistant. "Our partnership is now getting very, very pervasive in everything we do," said Victor Aguilar, P&G's chief research, development and innovation officer. How P&G and Colgate bring IT and R&D teams together is somewhat flexible and can vary depending on the project. The IT side often provides the software engineers who can build platforms and applications, data scientists with expertise in artificial intelligence, and Internet of Things specialists with connectivity skills. The R&D department, which may also have data scientists, will enlist clinical researchers who can advise the team on the usefulness and safety of a device as well as industrial designers and product developers who can plan and build a first, basic model of the product. 
The teams work together on initial research and create prototypes to prove the concept has legs, the companies' executives said. But teaming up IT and R&D isn't without its challenges, McKinsey's Mr. Roth said. "It's not a natural pairing," Mr. Roth said, adding that the two units are staffed by people coming from different cultures. For instance, IT traditionally runs and maintains equipment while R&D's role has been to deliver research and products, he said. To help the two teams break down barriers and build trust, Colgate says it uses agile management, a common software development methodology, in which participants break development into small tasks, develop various functions and features in short sprints and quickly adjust if something isn't working or if a better idea surfaces. As IT and R&D departments work more closely together, they can learn from each other and get a better appreciation for the types of innovative products their companies might want to develop, said Mr. Roth. The teams could gain, he said, "a much greater appreciation, and role, in thinking about their work [being] directly tied to value creation." Write to John McCormick at [email protected]
717
How AI Can Help Curb Traffic Accidents in Cities
Despite pandemic-driven restrictions on movement, there were over 12,000 accidents in Madrid in 2020, leading to 31 fatalities. In Barcelona, there were more than 5,700 collisions, causing 14 deaths. Pedestrian and vehicle safety is a priority, which is why a research project at the Universitat Oberta de Catalunya (UOC) is harnessing artificial intelligence (AI) to make decisions that will make cities safer. The researchers have looked into the correlation between the complexity of certain urban areas and the likelihood of an accident occurring there. According to the researchers, the data they have gathered can be used to train neural networks to detect probable hazards in an area and work out patterns associated with this high risk potential. The researchers, headed by Cristina Bustos and Javier Borge, are working with algorithms that will aid traffic authorities in reducing the likelihood of accidents in urban environments. The interdisciplinary study was carried out by two UOC research groups - Complex Systems @ IN3 (CoSIN3), from the Internet Interdisciplinary Institute (IN3), and the Scene Understanding and Artificial Intelligence Lab (SUNAI), from the Faculty of Computer Science, Multimedia and Telecommunications, in collaboration with Spain's National Traffic Authority (DGT), the city councils of Madrid and Barcelona, academic affiliates from the Massachusetts Institute of Technology (MIT) and researcher Àlex Arenas from the Department of Computer Engineering and Mathematics at Universitat Rovira i Virgili (URV). Accidents and the urban scene, what is the connection? According to the researchers, the visual layout of what they call the "urban scene" influences the likelihood of an accident occurring. Cristina Bustos, a member of CoSIN3 and first author of a scientific article recently published on the project, said: "Our findings show that there are certain patterns in the scene layout that may affect the accident rate." For the researcher, key factors such as the arrangement of street furniture, the location of parked cars, advertisements and façades increase driver distraction. "Our findings suggest that we've got more than just a hypothesis on our hands," said Javier Borge, lead researcher of CoSIN3. "What seems clear is that the number of distinct elements in a scene correlates with the number of accidents that have taken place there." Understanding the reason behind this correlation is the crux of the matter. Borge said: "The AI pinpoints places that are potentially hazardous, but it doesn't tell us why. That's why we turn to certain interpretation techniques, such as those used in this study, which bring us closer to an answer. Although we need to pursue this research line further, there's no doubt that traffic accidents happen for many reasons and a combination thereof. Our study shows that scene layout is a factor to bear in mind." According to Borge, he and his fellow researchers hypothesise that human cognitive limitations are affected by the complexity of the scene. He said: "If a scene is very complex, there is more strain on my cognitive system, possibly dampening my ability to steer clear of unexpected events." This is where the outside help of artificial intelligence comes in, applying algorithms to identify complex urban patterns. Using algorithms to reduce the likelihood of accidents Artificial intelligence has stepped up its possibilities, especially since the appearance of technologies such as neural networks and machine learning. 
The former is a computational model that has evolved from knowledge of the brain's plasticity, while the latter is a branch of AI that allows machines to learn without being specifically programmed to do so. The technology employed by the UOC research group is based precisely on these concepts. Cristina Bustos said: "We use deep learning [a type of machine learning based on a set of machine learning algorithms] applied to computer image processing." According to the researcher, "the purpose of these algorithms is to identify patterns in photos or videos in order to perform a specific task, for example recognizing the objects that appear and where they are or identifying the general context of the image, or even more complex tasks, such as recognizing the emotion that an image or video evokes in a person." The researchers employ convolutional neural networks, so named because they apply an operation called "convolution" on the input image and throughout the network layers. "Applying this operation," Bustos said, "the network learns to discern simple patterns in the top layers, such as lines, edges, textures, colours and corners, and becomes more complex the deeper it goes. In the end, the network is able to identify complex patterns such as a person's face or a car." This type of network needs to train to perform a task, repeating the processes over and over while the researchers indicate whether it has performed well or not. Cristina said: "We don't train the network from scratch, rather we use one that has already been instructed for another task, such as recognizing people or animals, and we take advantage of this knowledge to teach it to recognize hazardous objects and patterns that may lead to accidents." AI, a city planner's greatest ally "One of the challenges of neural networks is that, given their deep, non-linear and complex nature, we don't have control over what patterns they are learning," said Bustos. "That's why we have turned to other deep learning techniques, such as image segmentation and class activation mapping." The former, she clarified, pinpoints objects in an image through their pixels, while the latter maps out the regions in the image that the network is to look at to obtain results. Javier Borge pointed out that "artificial intelligence strikes us as a very powerful tool for finding out where problems might occur, but it's not going to solve them on its own." Thus, the team has developed a heuristic method for improving urban scenes which, according to Borge, "is worthless without a human behind it," such as an urban planner, an architect or an engineer who is able to validate and implement changes based on the algorithm-driven data. With artificial intelligence on their side, the researchers are looking at multiple hazardous urban patterns. Bustos said: "Right now we are analysing how the visual scene affects drivers' stress." Accordingly, the researchers believe that this type of technology can be of great use to bodies such as the DGT, with a view to designing safer cities for vehicles and pedestrians. To conclude, Borge said, "Our biggest hurdle is data availability: the analysis requires a rich collection of street view images and open, geolocalized data on accident rates with details of those involved, which are not currently easy to obtain." This UOC research supports Sustainable Development Goal (SDG) 11, to make cities inclusive, safe, resilient and sustainable. 
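As an illustration of the transfer-learning recipe Bustos describes - reusing a network pretrained on a generic vision task and fine-tuning it to score street scenes - here is a minimal PyTorch sketch. It is a hypothetical stand-in, not the UOC group's actual model; the StreetSceneDataset class and its labels are assumed for the example.

```python
# A minimal transfer-learning sketch: fine-tune a pretrained CNN to score street
# scenes for accident risk. Illustrative only; dataset and labels are hypothetical.
import torch
import torch.nn as nn
from torchvision import models

# Start from a ResNet-18 pretrained on ImageNet (people, animals, everyday objects).
# Older torchvision versions use models.resnet18(pretrained=True) instead.
model = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the early layers: they already encode lines, edges, textures and corners.
for param in model.parameters():
    param.requires_grad = False

# Replace the final classification layer with a single "accident-risk" output.
model.fc = nn.Linear(model.fc.in_features, 1)  # new layer, trainable by default

criterion = nn.BCEWithLogitsLoss()  # high-risk vs. low-risk scene
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# Hypothetical loader yielding (image_tensor, risk_label) batches built from
# street-view images geolocated against accident records, e.g.:
# loader = torch.utils.data.DataLoader(
#     StreetSceneDataset("streetview/", "accidents.csv"), batch_size=32, shuffle=True)

def train_one_epoch(loader):
    model.train()
    for images, labels in loader:
        optimizer.zero_grad()
        logits = model(images).squeeze(1)
        loss = criterion(logits, labels.float())
        loss.backward()
        optimizer.step()
```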
Related article Bustos, C.; Rhoads, D.; Solé-Ribalta, A.; Masip, D.; Arenas, A.; Lapedriza, A.; Borge-Holthoefer, J. (2021). Explainable, automated urban interventions to improve pedestrian and vehicle safety, Transportation Research Part C: Emerging Technologies , Volume 125, 103018, ISSN 0968-090X, DOI: https://doi.org/10.1016/j.trc.2021.103018 The project "Espacio Persona: Big Data para la mejora de la seguridad vial urbana" (ref. SPIP 2017-02263) was funded by the Spanish Directorate-General for Traffic (DGT). UOC R&I The UOC's research and innovation (R&I) is helping overcome pressing challenges faced by global societies in the 21st century, by studying interactions between technology and human & social sciences with a specific focus on the network society, e-learning and e-health. Over 500 researchers and 51 research groups work among the University's seven faculties and two research centres: the Internet Interdisciplinary Institute (IN3) and the eHealth Center (eHC). The United Nations' 2030 Agenda for Sustainable Development and open knowledge serve as strategic pillars for the UOC's teaching, research and innovation. More information: research.uoc.edu . #UOC25years
An interdisciplinary research team led by Spain's Universitat Oberta de Catalunya (UOC) is leveraging artificial intelligence to reduce traffic accidents in cities. The researchers trained neural networks to detect probable hazards in an area and found that certain patterns in urban scene layout - such as the arrangement of street furniture and the location of parked cars, advertisements, and façades - may impact accident rates. The researchers developed a heuristic method to improve urban scenes, but it requires urban planners, architects, or engineers to implement changes based on the algorithm-driven data. UOC's Javier Borge said, "Our biggest hurdle is data availability: the analysis requires a rich collection of street view images and open, geolocalized data on accident rates with details of those involved, which are not currently easy to obtain."
[]
[]
[]
scitechnews
None
None
None
None
An interdisciplinary research team led by Spain's Universitat Oberta de Catalunya (UOC) is leveraging artificial intelligence to reduce traffic accidents in cities. The researchers trained neural networks to detect probable hazards in an area and found that certain patterns in urban scene layout - such as the arrangement of street furniture and the location of parked cars, advertisements, and façades - may impact accident rates. The researchers developed a heuristic method to improve urban scenes, but it requires urban planners, architects, or engineers to implement changes based on the algorithm-driven data. UOC's Javier Borge said, "Our biggest hurdle is data availability: the analysis requires a rich collection of street view images and open, geolocalized data on accident rates with details of those involved, which are not currently easy to obtain." Despite pandemic-driven restrictions on movement, there were over 12,000 accidents in Madrid in 2020, leading to 31 fatalities. In Barcelona, there were more than 5,700 collisions, causing 14 deaths. Pedestrian and vehicle safety is a priority, which is why a research project at the Universitat Oberta de Catalunya (UOC) is harnessing artificial intelligence (AI) to make decisions that will make cities safer. The researchers have looked into the correlation between the complexity of certain urban areas and the likelihood of an accident occurring there. According to the researchers, the data they have gathered can be used to train neural networks to detect probable hazards in an area and work out patterns associated with this high risk potential. The researchers, headed by Cristina Bustos and Javier Borge, are working with algorithms that will aid traffic authorities in reducing the likelihood of accidents in urban environments. The interdisciplinary study was carried out by two UOC research groups - Complex Systems @ IN3 (CoSIN3), from the Internet Interdisciplinary Institute (IN3), and the Scene Understanding and Artificial Intelligence Lab (SUNAI), from the Faculty of Computer Science, Multimedia and Telecommunications, in collaboration with Spain's National Traffic Authority (DGT), the city councils of Madrid and Barcelona, academic affiliates from the Massachusetts Institute of Technology (MIT) and researcher Àlex Arenas from the Department of Computer Engineering and Mathematics at Universitat Rovira i Virgili (URV). Accidents and the urban scene, what is the connection? According to the researchers, the visual layout of what they call the "urban scene" influences the likelihood of an accident occurring. Cristina Bustos, a member of CoSIN3 and first author of a scientific article recently published on the project, said: "Our findings show that there are certain patterns in the scene layout that may affect the accident rate." For the researcher, key factors such as the arrangement of street furniture, the location of parked cars, advertisements and façades increase driver distraction. "Our findings suggest that we've got more than just a hypothesis on our hands," said Javier Borge, lead researcher of CoSIN3. "What seems clear is that the number of distinct elements in a scene correlates with the number of accidents that have taken place there." Understanding the reason behind this correlation is the crux of the matter. Borge said: "The AI pinpoints places that are potentially hazardous, but it doesn't tell us why. 
That's why we turn to certain interpretation techniques, such as those used in this study, which bring us closer to an answer. Although we need to pursue this research line further, there's no doubt that traffic accidents happen for many reasons and a combination thereof. Our study shows that scene layout is a factor to bear in mind." According to Borge, he and his fellow researchers hypothesise that human cognitive limitations are affected by the complexity of the scene. He said: "If a scene is very complex, there is more strain on my cognitive system, possibly dampening my ability to steer clear of unexpected events." This is where the outside help of artificial intelligence comes in, applying algorithms to identify complex urban patterns. Using algorithms to reduce the likelihood of accidents Artificial intelligence has stepped up its possibilities, especially since the appearance of technologies such as neural networks and machine learning. The former is a computational model that has evolved from knowledge of the brain's plasticity, while the latter is a branch of AI that allows machines to learn without being specifically programmed to do so. The technology employed by the UOC research group is based precisely on these concepts. Cristina Bustos said: "We use deep learning [a type of machine learning based on a set of machine learning algorithms] applied to computer image processing." According to the researcher, "the purpose of these algorithms is to identify patterns in photos or videos in order to perform a specific task, for example recognizing the objects that appear and where they are or identifying the general context of the image, or even more complex tasks, such as recognizing the emotion that an image or video evokes in a person." The researchers employ convolutional neural networks, so named because they apply an operation called "convolution" on the input image and throughout the network layers. "Applying this operation," Bustos said, "the network learns to discern simple patterns in the top layers, such as lines, edges, textures, colours and corners, and becomes more complex the deeper it goes. In the end, the network is able to identify complex patterns such as a person's face or a car." This type of network needs to train to perform a task, repeating the processes over and over while the researchers indicate whether it has performed well or not. Cristina said: "We don't train the network from scratch, rather we use one that has already been instructed for another task, such as recognizing people or animals, and we take advantage of this knowledge to teach it to recognize hazardous objects and patterns that may lead to accidents." AI, a city planner's greatest ally "One of the challenges of neural networks is that, given their deep, non-linear and complex nature, we don't have control over what patterns they are learning," said Bustos. "That's why we have turned to other deep learning techniques, such as image segmentation and class activation mapping." The former, she clarified, pinpoints objects in an image through their pixels, while the latter maps out the regions in the image that the network is to look at to obtain results. Javier Borge pointed out that "artificial intelligence strikes us as a very powerful tool for finding out where problems might occur, but it's not going to solve them on its own." 
Thus, the team has developed a heuristic method for improving urban scenes which, according to Borge, "is worthless without a human behind it," such as an urban planner, an architect or an engineer who is able to validate and implement changes based on the algorithm-driven data. With artificial intelligence on their side, the researchers are looking at multiple hazardous urban patterns. Bustos said: "Right now we are analysing how the visual scene affects drivers' stress." Accordingly, the researchers believe that this type of technology can be of great use to bodies such as the DGT, with a view to designing safer cities for vehicles and pedestrians. To conclude, Borge said, "Our biggest hurdle is data availability: the analysis requires a rich collection of street view images and open, geolocalized data on accident rates with details of those involved, which are not currently easy to obtain." This UOC research supports Sustainable Development Goal (SDG) 11, to make cities inclusive, safe, resilient and sustainable. Related article Bustos, C.; Rhoads, D.; Solé-Ribalta, A.; Masip, D.; Arenas, A.; Lapedriza, A.; Borge-Holthoefer, J. (2021). Explainable, automated urban interventions to improve pedestrian and vehicle safety, Transportation Research Part C: Emerging Technologies, Volume 125, 103018, ISSN 0968-090X, DOI: https://doi.org/10.1016/j.trc.2021.103018 The project "Espacio Persona: Big Data para la mejora de la seguridad vial urbana" (ref. SPIP 2017-02263) was funded by the Spanish Directorate-General for Traffic (DGT). UOC R&I The UOC's research and innovation (R&I) is helping overcome pressing challenges faced by global societies in the 21st century, by studying interactions between technology and human & social sciences with a specific focus on the network society, e-learning and e-health. Over 500 researchers and 51 research groups work among the University's seven faculties and two research centres: the Internet Interdisciplinary Institute (IN3) and the eHealth Center (eHC). The United Nations' 2030 Agenda for Sustainable Development and open knowledge serve as strategic pillars for the UOC's teaching, research and innovation. More information: research.uoc.edu. #UOC25years
719
U.S. Grid at Rising Risk to Cyberattack, Says GAO
An analysis by the U.S. Government Accountability Office (GAO) determined that distribution systems within the country's electrical grid are increasingly vulnerable to cyberattack, "in part due to the introduction of and reliance on monitoring and control technologies." The GAO found this vulnerability is growing because of industrial control systems, which increasingly are accessed remotely. The study said the systems overall are not covered by federal cybersecurity standards, but in some instances have taken independent action based on those standards. The report urges the Secretary of Energy to work with state officials, industry figures, and the Department of Homeland Security to better mitigate distribution system risks.
[]
[]
[]
scitechnews
None
None
None
None
An analysis by the U.S. Government Accountability Office (GAO) determined that distribution systems within the country's electrical grid are increasingly vulnerable to cyberattack, "in part due to the introduction of and reliance on monitoring and control technologies." The GAO found this vulnerability is growing because of industrial control systems, which increasingly are accessed remotely. The study said the systems overall are not covered by federal cybersecurity standards, but in some instances have taken independent action based on those standards. The report urges the Secretary of Energy to work with state officials, industry figures, and the Department of Homeland Security to better mitigate distribution system risks.
720
Robot Adjusts Length of Its Legs When Stepping From Grass to Concrete
The four-legged robot Dyret can adjust the length of its legs to adapt the body to the surface. Along the way, it learns what works best. This way it is better equipped the next time it encounters an unknown environment. The name Dyret (Norwegian for "The Animal") is an acronym for Dynamic Robot for Embodied Testing. "We have shown the benefits of allowing a robot to continuously adapt its body shape. Our physical robot also proves that this can easily be done with today's technology," says senior lecturer Tønnes Nygaard at UiO's Department of Informatics. In the case of Dyret, changing the body shape means that it adjusts the length of the legs. "We have seen that a mechanism to adapt the body shape is useful for our robot, and we believe this could apply to other robot designs too," says associate professor Kyrre Glette. Previously, they have shown that the robot can adapt to different environments in controlled conditions indoors. Then Nygaard spent half a year with other robot researchers at The Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Australia, where they specialize in testing self-learning robots outdoors. "This was previously considered too hard to achieve in the real world. Through the robot and our experiments, we've definitely shown that it's possible," Nygaard says to Titan.uio.no. The results are published today in the journal Nature Machine Intelligence. By changing the length of its legs it can automatically transition to different body shapes. Morphologically adaptive robots can operate in unpredictable environments and solve new tasks without having to be redesigned or rebuilt every time they face something unexpected. For us humans, it may be hard to imagine how difficult it is for a robot to walk from, for example, concrete to grass. Just remember that you have years of experience and quite a few senses compared to a robot. "The robot uses a camera to see how rough the terrain is, and it uses sensors in the legs to feel how hard the walking surface is," Nygaard explains. "The robot continuously learns about the environment it's walking on and, combined with the knowledge it gained indoors in the controlled environment, uses this to adapt its body." When Dyret was asked to walk on grass, it had never seen grass before. It had trained on only gravel, sand and concrete. Still, it quickly learned how to walk on the Australian grass and what was the ideal leg length. "Shorter legs give better stability, while longer legs allow for a higher walking speed if the ground is sufficiently predictable," Glette says. A flat lawn may not be the biggest of challenges, but grass in nature is full of tufts and holes that can trap a long-legged robot, so Dyret will shorten its legs. On concrete, it can stretch the legs and "run" away. The robot is also able to adapt if it gets damaged when facing unforeseen obstacles. "Using our technology, the robot is able to adapt to one of its legs becoming weaker or breaking. It can learn how to recover, whether through limping or decreasing the length of the other three legs," Nygaard says. Currently, Dyret is not ready to take on major tasks. The purpose of Nygaard's doctoral degree has been to develop the technology and find suitable materials, and to prove that it is possible. Still, he can see several possible uses in the future. "This is beneficial in environments where the robot might face many unexpected challenges.
This includes search and rescue operations, but also agriculture where there is a wide range of challenging surfaces and weather conditions," Nygaard says. He also mentions exploration of mines where it is difficult for people to get to. "One could also imagine robots at different scales, for example small pipe inspection robots, being able to benefit from such technology in the future," Glette says. "We hope this idea, that one can change the body shape, will sound convincing to other researchers and that it may be incorporated into other types of robots. Maybe even a trip to Mars or some other missions in space. "I think that space robots in principle could easily have had this technology because they often will face unforeseen tasks. If they had the opportunity to change or repair themselves, it would have been good," Glette says. This will of course take a while. The space industry does not apply new technology overnight. After all, they must be absolutely sure that everything works after landing. "If you send a robot to Mars, it better work," Nygaard says. Each time Dyret manages to adapt to a new surface, it will be even better equipped to meet additional new surfaces. That is the great advantage of self-learning. If scientists were to program it to work on different surfaces, it is not certain that they would actually choose what is ideal for the robot. "We considered gravel, for instance, to be a hard surface, but the robot did not experience it like that, which is something it was able to learn on its own," Nygaard says. "When learning through its own experiences, it is able to break free from the assumptions and traditions that we humans make, often erroneously." This means that it must be allowed to fail from time to time. Like a child learning to walk. "You have to allow the robot to try some bad solutions first," Nygaard says. He brought a suitcase with spare parts to the outdoor testing in Australia. "But luckily I didn't have to use more than a few." Nygaard now works at the Norwegian Defence Research Establishment with other, fully developed robots. Maybe they can benefit from adapting their body? He will also continue to work with Dyret at the Department of Informatics. "I am now working with master students to explore other ways for the robot to learn using new and exotic methods I didn't have the time to pursue myself," he says. Anyone can take advantage of the new technology. "We have released all parts of the project as open-source. Anyone can take what we've made and use it for whatever purpose they want. They can, of course, download the robot design and build their own, but I think most people will be more inclined to use parts of our solutions as inspiration in their own work," Nygaard says. Nygaard, Glette and their colleagues have at least reached their first goal. "We have now tested the robot in unseen outdoor terrains, and we have equipped it with a machine learning algorithm collecting data and adapting the body to the new terrains," Glette says. "Bring on the unknown environments! The robot is ready," Nygaard says. Tønnes Nygaard, Charles P. Martin, Jim Tørresen, Kyrre Glette and David Howard: Real-world Embodied AI Through a Morphologically Adaptive Quadruped Robot, Nature Machine Intelligence, March 2021
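As a purely illustrative sketch of the adaptation idea described in the article (this is not Dyret's actual controller or learning algorithm, and the formula, thresholds and length range are invented), a leg length can be chosen from the two sensed quantities mentioned above, terrain roughness from the camera and surface hardness from the leg sensors, preferring shorter legs on rough or soft ground and longer legs on smooth, hard ground.

# Hypothetical mapping from sensed terrain to leg length; all values are invented.
def choose_leg_length(roughness, hardness, min_len=0.6, max_len=0.95):
    """roughness and hardness are normalised to [0, 1]; returns a leg-length factor."""
    terrain_score = (1.0 - roughness) * hardness   # smooth and hard -> close to 1
    return min_len + (max_len - min_len) * terrain_score

print(choose_leg_length(roughness=0.8, hardness=0.3))  # tufty grass -> shorter legs
print(choose_leg_length(roughness=0.1, hardness=0.9))  # concrete -> longer legs

In the real system this mapping is learned from the robot's own experience rather than hand-coded, which is exactly the point Nygaard makes about breaking free from human assumptions.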
The quadruped Dynamic Robot for Embodied Testing (Dyret) engineered by researchers at Norway's University of Oslo (UiO) can adjust the length of its legs to adapt to different terrains, learning the most efficient configuration for each on the fly. Former UiO researcher Tonnes Nygaard said, "The robot uses a camera to see how rough the terrain is, and it uses sensors in the legs to feel how hard the walking surface is. The robot continuously learns about the environment it's walking on and, combined with the knowledge it gained indoors in the controlled environment, uses this to adapt its body." Nygaard added that when learning through experience, the robot "is able to break free from the assumptions and traditions that we humans make, often erroneously."
[]
[]
[]
scitechnews
None
None
None
None
The quadruped Dynamic Robot for Embodied Testing (Dyret) engineered by researchers at Norway's University of Oslo (UiO) can adjust the length of its legs to adapt to different terrains, learning the most efficient configuration for each on the fly. Former UiO researcher Tonnes Nygaard said, "The robot uses a camera to see how rough the terrain is, and it uses sensors in the legs to feel how hard the walking surface is. The robot continuously learns about the environment it's walking on and, combined with the knowledge it gained indoors in the controlled environment, uses this to adapt its body." Nygaard added that when learning through experience, the robot "is able to break free from the assumptions and traditions that we humans make, often erroneously." The four-legged robot Dyret can adjust the length of its legs to adapt the body to the surface. Along the way, it learns what works best. This way it is better equipped the next time it encounters an unknown environment. The name Dyret (Norwegian for "The Animal") is an acronym for Dynamic Robot for Embodied Testing. "We have shown the benefits of allowing a robot to continuously adapt its body shape. Our physical robot also proves that this can easily be done with today's technology," says senior lecturer Tønnes Nygaard at UiO's Department of Informatics. In the case of Dyret, changing the body shape means that it adjusts the length of the legs. You can also read this article in Norwegian "We have seen that a mechanism to adapt the body shape is useful for our robot, and we believe this could apply to other robot designs too," says associate professor Kyrre Glette . Previously, they have shown that the robot can adapt to different environments in controlled conditions indoors. Then Nygaard spent half a year with other robot researchers at The Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Australia, where they specialize in testing self-learning robots outdoors. "This was previously considered too hard to achieve in the real world. Through the robot and our experiments, we've definitely shown that it's possible," Nygaard says to Titan.uio.no. The results are published today in the journal Nature Machine Intelligence . By changing the length of its legs it can automatically transition to different body shapes. Morphologically adaptive robots can operate in unpredictable environments and solve new tasks without having to be redesigned or rebuilt every time they face something unexpected. For us humans, it may be hard to imagine how difficult it is for a robot to walk from, for example, concrete to grass. Just remember that you have years of experience and quite a few senses compared to a robot. "The robot uses a camera to see how rough the terrain is, and it uses sensors in the legs to feel how hard the walking surface is," Nygaard explains. "The robot continuously learns about the environment it's walking on and, combined with the knowledge it gained indoors in the controlled environment, uses this to adapt its body." When Dyret was asked to walk on grass, it had never seen grass before. It had trained on only gravel, sand and concrete. Still, it quickly learned how to walk on the Australian grass and what was the ideal leg length. "Shorter legs give better stability, while longer legs allow for a higher walking speed if the ground is sufficiently predictable," Glette says. 
A flat lawn may not be the biggest of challenges, but grass in nature is full of tufts and holes that can trap a long-legged robot, so Dyret will shorten its legs. On concrete, it can stretch the legs and "run" away. The robot is also able to adapt if it gets damaged when facing unforeseen obstacles. "Using our technology, the robot is able to adapt to one of its legs becoming weaker or breaking. It can learn how to recover, whether through limping or decreasing the length of the other three legs," Nygaard says. Currently, Dyret is not ready to take on major tasks. The purpose of Nygaard's doctoral degree has been to develop the technology and find suitable materials, and to prove that it is possible. Still, he can see several possible uses in the future. "This is beneficial in environments where the robot might face many unexpected challenges. This includes search and rescue operations, but also agriculture where there is a wide range of challenging surfaces and weather conditions," Nygaard says. He also mentions exploration of mines where it is difficult for people to get to. "One could also imagine robots at different scales, for example small pipe inspection robots, being able to benefit from such technology in the future," Glette says. "We hope this idea, that one can change the body shape, will sound convincing to other researchers and that it may be incorporated into other types of robots. Maybe even a trip to Mars or some other missions in space. "I think that space robots in principle could easily have had this technology because they often will face unforeseen tasks. If they had the opportunity to change or repair themselves, it would have been good," Glette says. This will of course take a while. The space industry does not apply new technology overnight. After all, they must be absolutely sure that everything works after landing. "If you send a robot to Mars, it better work," Nygaard says. Each time Dyret manages to adapt to a new surface, it will be even better equipped to meet additional new surfaces. That is the great advantage of self-learning. If scientists were to program it to work on different surfaces, it is not certain that they would actually choose what is ideal for the robot. "We considered gravel, for instance, to be a hard surface, but the robot did not experience it like that, which is something it was able to learn on its own," Nygaard says. "When learning through its own experiences, it is able to break free from the assumptions and traditions that we humans make, often erroneously." This means that it must be allowed to fail from time to time. Like a child learning to walk. "You have to allow the robot to try some bad solutions first," Nygaard says. He brought a suitcase with spare parts to the outdoor testing in Australia. "But luckily I didn't have to use more than a few." Nygaard now works at the Norwegian Defence Research Establishment with other and fully developed robots. Maybe they can benefit from adapting their body? He will also continue to work with Dyret at the Department of Informatics. "I am now working with master students to explore other ways for the robot to learn using new and exotic methods I didn't have the time to pursue myself," he says. Anyone can take advantage of the new technology. "We have released all parts of the project as open-source. Anyone can take what we've made and use it for whatever purpose they want. 
They can, of course, download the robot design and build their own, but I think most people will be more inclined to use parts of our solutions as inspiration in their own work," Nygaard says. Nygaard, Glette and their colleagues have at least reached their first goal. "We have now tested the robot in unseen outdoor terrains, and we have equipped it with a machine learning algorithm collecting data and adapting the body to the new terrains," Glette says. "Bring on the unknown environments! The robot is ready," Nygaard says. Tønnes Nygaard, Charles P. Martin, Jim Tørresen, Kyrre Glette and David Howard: Real-world Embodied AI Through a Morphologically Adaptive Quadruped Robot , Nature Machine Intelligence , March 2020
721
Algorithm Could Reduce Complexity of Big Data
Whenever a scientific experiment is conducted, the results are turned into numbers, often producing huge datasets. In order to reduce the size of the data, computer programmers use algorithms that can find and extract the principal features that represent the most salient statistical properties. But many such algorithms cannot be applied directly to these large volumes of data. Reza Oftadeh, doctoral student in the Department of Computer Science and Engineering at Texas A&M University, advised by Dylan Shell, faculty in the department, developed an algorithm applicable to large datasets that can directly order features from most salient to least. "There are many ad hoc ways to extract these features using machine learning algorithms, but we now have a fully rigorous theoretical proof that our model can find and extract these prominent features from the data simultaneously, doing so in one pass of the algorithm," Oftadeh said. Their paper describing the research was published in the proceedings from the 2020 International Conference on Machine Learning. A subfield of machine learning deals with component analysis, the problem of identifying and extracting a raw dataset's features to help reduce its dimensionality. Once identified, the features are used to make annotated samples of the data for further analysis or other machine learning tasks such as classification, clustering, visualization and modeling based on those features. The work to find or develop these types of algorithms has been going on for the past century, but what sets this era apart from the others is the existence of big data, which can contain many millions of sample points with tens of thousands of attributes. Analyzing these massive datasets is a complicated, time-consuming process for human programmers, so artificial neural networks (ANNs) have come to the forefront in recent years. As one of the main tools of machine learning, ANNs are computational models that are designed to simulate how the human brain analyzes and processes information. They are typically made of dozens to millions of artificial neurons, called units, arranged in a series of layers that it uses to make sense of the information it's given. ANNs can be used in various ways, but they are most commonly used to identify the unique features that best represent the data and classify them into different categories based on that information. "There are many ANNs that work very well, and we use them every day on our phones and computers," Oftadeh said. "For example, applications like Alexa, Siri and Google Translate utilize ANNs that are trained to recognize what different speech patterns, accents and voices are saying." But not all features are equally significant, and they can be placed in order from most to least important. Previous approaches use a specific type of ANN called an autoencoder to extract them, but they cannot tell exactly where the features are located or which are more important than the others. "For example, if you have hundreds of thousands of dimensions and want to find only 1,000 of the most prominent and order those 1,000, it is theoretically possible to do but not feasible in practice because the model would have to be run repeatedly on the dataset 1,000 times," Oftadeh said. To make a more intelligent algorithm, the researchers propose adding a new cost function to the network that provides the exact location of the features directly ordered by their relative importance. 
Once incorporated, their method results in a more efficient processing that can be fed bigger datasets to perform classic data analysis. Currently, the algorithm can only be applied to one-dimensional data samples, but the team is interested in extending their algorithm's abilities to handle even more complex structured data. The next step of their work is to generalize their method in a way that provides a unified framework to produce other machine learning methods that can find the underlying structure of a dataset and/or extract its features by setting a small number of specifications. Other contributors to this research include Jiayi Shen, doctoral student in the computer science and engineering department, and Zhangyang "Atlas" Wang, assistant professor in the electrical and computer engineering department at The University of Texas at Austin. Also instrumental in identifying the research problem, and guiding Oftadeh, was Boris Hanin, assistant professor in the department of mathematics at Princeton University. This research was funded by the National Science Foundation and U.S. Army Research Office Young Investigator Award.
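As a hedged illustration of how a cost function can induce an ordering of features, the sketch below trains a linear autoencoder so that every prefix of the latent code must also reconstruct the input, which pushes the most salient directions into the earliest latent dimensions. This is a generic ordered-reconstruction scheme written for illustration, not the authors' exact cost function, and the toy data and layer sizes are assumptions.

import torch

def ordered_autoencoder_loss(x, encoder, decoder):
    """Sum reconstruction errors over all prefixes of the latent code, so earlier
    latent dimensions are pressured to carry the most important features."""
    z = encoder(x)                      # (batch, k) latent code
    k = z.shape[1]
    loss = 0.0
    for j in range(1, k + 1):
        mask = torch.zeros(k)
        mask[:j] = 1.0                  # keep only the first j latent dimensions
        loss = loss + torch.mean((decoder(z * mask) - x) ** 2)
    return loss / k

# Toy usage on random data; real inputs would be high-dimensional samples.
torch.manual_seed(0)
d, k = 20, 5
encoder = torch.nn.Linear(d, k, bias=False)
decoder = torch.nn.Linear(k, d, bias=False)
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-2)
x = torch.randn(256, d)
for _ in range(200):
    opt.zero_grad()
    loss = ordered_autoencoder_loss(x, encoder, decoder)
    loss.backward()
    opt.step()

After training, the first latent dimension explains the largest share of the reconstruction, the second the next largest, and so on, giving the kind of most-to-least-salient ordering the article describes, in a single training pass.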
Researchers at Texas A&M University, the University of Texas at Austin, and Princeton University have developed a machine learning (ML) algorithm that can find and extract the most prominent features of a dataset in one pass. The algorithm can add a new cost function to an artificial neural network that provides the exact location of salient features directly ordered by their relative importance, which increases processing efficiency. While currently only applicable to one-dimensional data samples, the research team hopes to extend the capabilities of the algorithm to manage more complex structured data.
[]
[]
[]
scitechnews
None
None
None
None
Researchers at Texas A&M University, the University of Texas at Austin, and Princeton University have developed a machine learning (ML) algorithm that can find and extract the most prominent features of a dataset in one pass. The algorithm can add a new cost function to an artificial neural network that provides the exact location of salient features directly ordered by their relative importance, which increases processing efficiency. While currently only applicable to one-dimensional data samples, the research team hopes to extend the capabilities of the algorithm to manage more complex structured data. Whenever a scientific experiment is conducted, the results are turned into numbers, often producing huge datasets. In order to reduce the size of the data, computer programmers use algorithms that can find and extract the principal features that represent the most salient statistical properties. But many such algorithms cannot be applied directly to these large volumes of data. Reza Oftadeh, doctoral student in the Department of Computer Science and Engineering at Texas A&M University, advised by Dylan Shell, faculty in the department, developed an algorithm applicable to large datasets that can directly order features from most salient to least. "There are many ad hoc ways to extract these features using machine learning algorithms, but we now have a fully rigorous theoretical proof that our model can find and extract these prominent features from the data simultaneously, doing so in one pass of the algorithm," Oftadeh said. Their paper describing the research was published in the proceedings from the 2020 International Conference on Machine Learning. A subfield of machine learning deals with component analysis, the problem of identifying and extracting a raw dataset's features to help reduce its dimensionality. Once identified, the features are used to make annotated samples of the data for further analysis or other machine learning tasks such as classification, clustering, visualization and modeling based on those features. The work to find or develop these types of algorithms has been going on for the past century, but what sets this era apart from the others is the existence of big data, which can contain many millions of sample points with tens of thousands of attributes. Analyzing these massive datasets is a complicated, time-consuming process for human programmers, so artificial neural networks (ANNs) have come to the forefront in recent years. As one of the main tools of machine learning, ANNs are computational models that are designed to simulate how the human brain analyzes and processes information. They are typically made of dozens to millions of artificial neurons, called units, arranged in a series of layers that it uses to make sense of the information it's given. ANNs can be used in various ways, but they are most commonly used to identify the unique features that best represent the data and classify them into different categories based on that information. "There are many ANNs that work very well, and we use them every day on our phones and computers," Oftadeh said. "For example, applications like Alexa, Siri and Google Translate utilize ANNs that are trained to recognize what different speech patterns, accents and voices are saying." But not all features are equally significant, and they can be placed in order from most to least important. 
Previous approaches use a specific type of ANN called an autoencoder to extract them, but they cannot tell exactly where the features are located or which are more important than the others. "For example, if you have hundreds of thousands of dimensions and want to find only 1,000 of the most prominent and order those 1,000, it is theoretically possible to do but not feasible in practice because the model would have to be run repeatedly on the dataset 1,000 times," Oftadeh said. To make a more intelligent algorithm, the researchers propose adding a new cost function to the network that provides the exact location of the features directly ordered by their relative importance. Once incorporated, their method results in a more efficient processing that can be fed bigger datasets to perform classic data analysis. Currently, the algorithm can only be applied to one-dimensional data samples, but the team is interested in extending their algorithm's abilities to handle even more complex structured data. The next step of their work is to generalize their method in a way that provides a unified framework to produce other machine learning methods that can find the underlying structure of a dataset and/or extract its features by setting a small number of specifications. Other contributors to this research include Jiayi Shen, doctoral student in the computer science and engineering department, and Zhangyang "Atlas" Wang, assistant professor in the electrical and computer engineering department at The University of Texas at Austin. Also instrumental in identifying the research problem, and guiding Oftadeh, was Boris Hanin, assistant professor in the department of mathematics at Princeton University. This research was funded by the National Science Foundation and U.S. Army Research Office Young Investigator Award.
723
Ford Partners with U-M on Robotics Research, Building
ANN ARBOR, Mich. (AP) - Digit marches on two legs across the floor of the University of Michigan's Ford Motor Co. Robotics Building, while Mini-Cheetah - staccato-like - does the same on four and the yellow-legged Cassie steps deliberately side-to-side. A grand opening was held Tuesday for the four-story, $75 million, 134,000-square-foot (11,429-square-meter) complex. Three floors house classrooms and research labs for robots that fly, walk, roll and augment the human body. On the top floor are Ford researchers and engineers and the automaker's first robotics and mobility research lab on a university campus. Together, they will work to develop robots and roboticists that help make lives better, keep people safer and build a more equitable society, the school and automaker announced Tuesday. "As we all drive and use our vehicles and go about our day-to-day lives, I'm sure all of us have moments in our day where we could use a little help or a little assistance," said Ken Washington, Ford's chief technology officer. "We are going to be working on drone technology, walking robots, roving robots, all types of robots in this facility and the ways in which they can make people's lives better," Washington added. "And we'll do it in a way that addresses questions and fears around safety and security. The more people see how these robots can interact with society and interact with humans, the more comfortable they'll get with them." The building on the university's Ann Arbor campus brings together researchers from 23 buildings and 10 programs into one space. Those working on two-legged disaster response robots can test them on a 30-mph (48-kph) treadmill studded with obstacles or on a stair-stepped "robot playground" designed with the help of artificial intelligence. Biomedical engineers are looking at developing lighter, more stable prosthetic legs. Ford engineers are exploring how upright Digit robots can work in human spaces. "We want them to be able to operate in realistic situations ... you get out in the real world where there's rolling, twigs," said Jessy Grizzle, the Robotics Institute director. "There's rocks. There's boulders. There's holes that you can't see because the grass is cut flat, and then you want your robots to respond well and stay upright just like a human would." Dearborn, Michigan-based Ford and other automakers are investing billions of dollars in autonomous vehicles, and robotics is expected to play a major role in their development. Ford announced in February that it was increasing its autonomous vehicle investment to $7 billion, from sensing systems to specific research into applications such as Digit, a spokesman said. In November, Ford revealed plans to transform a long-vacant Detroit book warehouse into a hub for automobile innovation. Detroit's Corktown neighborhood is the site of Ford's planned $740 million project to create a place where new transportation and mobility ideas are nurtured and developed. People one day may see a robot similar to Digit emerge from a driverless vehicle, stroll across their lawn and leave a package at the door of homes in their neighborhood, according to Washington. "This is an exciting proposition, especially in this post-COVID era where the promise of doing shopping online has become just sort of the norm," he said. "As you think about a future where package delivery is going to be part of daily life, this is a real opportunity for us to pair a robot with an autonomous vehicle to help solve the problem of package delivery at scale."
"It's not here today, but you can be pretty certain that it's coming in the not-too-distant future," Washington said. Researchers working together in the building are designing robots for people, said Alec Gallimore, dean of Engineering at the University of Michigan. "Robots aren't people and people aren't robots, but we think - together - there can be synergy," Gallimore said. "So, we're designing robots that are going to help you. First responders for example. Can we put robots in harm's way so we don't have to have people there?" Ford contributed about $37 million to the cost of the robotics building which also features a three-story, indoor fly zone to test drones and other autonomous aerial vehicles indoors; a yard designed with input from scientists at the university and NASA to test vehicles and landing concepts on a landscape mimicking the surface of Mars. The University of Michigan and Ford also are working with two historically Black colleges in Atlanta, Morehouse and Spelman, allowing students there to enroll remotely in a pilot robotics course.
The University of Michigan (U-M) and car manufacturer Ford Motor have partnered to develop robots and roboticists to improve lives, human safety, and society. Tuesday's grand opening of U-M's Ford Robotics Building highlights this collaboration, as robots that fly, walk, roll, and enhance the human body are being designed at the facility. U-M's Alec Gallimore said researchers are collaborating on robots for people, to create human-robotic synergy. Said Gallimore, "Robots aren't people and people aren't robots, but we think - together - there can be synergy. So, we're designing robots that are going to help you. First responders for example. Can we put robots in harm's way so we don't have to have people there?"
[]
[]
[]
scitechnews
None
None
None
None
The University of Michigan (U-M) and car manufacturer Ford Motor have partnered to develop robots and roboticists to improve lives, human safety, and society. Tuesday's grand opening of U-M's Ford Robotics Building highlights this collaboration, as robots that fly, walk, roll, and enhance the human body are being designed at the facility. U-M's Alec Gallimore said researchers are collaborating on robots for people, to create human-robotic synergy. Said Gallimore, "Robots aren't people and people aren't robots, but we think - together - there can be synergy. So, we're designing robots that are going to help you. First responders for example. Can we put robots in harm's way so we don't have to have people there?" ANN ARBOR, Mich. (AP) - Digit marches on two legs across the floor of the University of Michigan's Ford Motor Co. Robotics Building, while Mini-Cheetah - staccato-like - does the same on four and the yellow-legged Cassie steps deliberately side-to-side. A grand opening was held Tuesday for the four-story, $75 million, 134,000-square-foot (11,429-square-meter) complex. Three floors house classrooms and research labs for robots that fly, walk, roll and augment the human body. On the top floor are Ford researchers and engineers and the automaker's first robotics and mobility research lab on a university campus. Together, they will work to develop robots and roboticists that help make lives better, keep people safer and build a more equitable society, the school and automaker announced Tuesday. "As we all drive and use our vehicles and go about our day-to-day lives, I'm sure all of us have moments in our day where we could use a little help or a little assistance," said Ken Washington, Ford's chief technology officer. "We are going to be working on drone technology, walking robots, roving robots, all types of robots in this facility and the ways in which they can make people's lives better," Washington added. "And we'll do it in a way that addresses questions and fears around safety and security. The more people see how these robots can interact with society and interact with humans, the more comfortable they'll get with them." The building on the university's Ann Arbor campus brings together researchers from 23 buildings and 10 programs into one space. Those working on two-legged disaster response robots can test them on a 30-mph (48-kph) treadmill studded with obstacles or on a stair-stepped "robot playground" designed with the help of artificial intelligence. Biomedical engineers are looking at developing lighter, more stable prosthetic legs. Ford engineers are exploring how upright Digit robots can work in human spaces. "We want them to be able to operate in realistic situations ... you get out in the real world where there's rolling, twigs," said Jessy Grizzle, the Robotics Institute director. "There's rocks. There's boulders. There's holes that you can't see because the grass is cut flat, and then you want your robots to respond well and stay upright just like a human would." Dearborn, Michigan-based Ford and other automakers are investing billions of dollars in autonomous vehicles. and robotics is expected to play a major role in their development. Ford announced in February that it was autonomous vehicle investment to $7 billion, from sensing systems to specific research into applications such as Digit, a spokesman said. In November, Ford revealed plans to transform a long-vacant Detroit book warehouse into a hub for automobile innovation. 
Detroit's Corktown neighborhood is the site of Ford's planned $740 million project to create a place where new transportation and mobility ideas are nurtured and developed. People one day may see a robot similar to Digit emerge from a driverless vehicle, stroll across their lawn and leave a package at the door of homes in their neighborhood, according to Washington. "This is an exciting proposition, especially in this post-COVID era where the promise of doing shopping online has become just sort of the norm," he said. "As you think about a future where package delivery is going to be part of daily life, this is a real opportunity for us to pair a robot with an autonomous vehicle to help solve the problem of package delivery at scale." "It's not here today, but you can be pretty certain that it's coming in the not-too-distant future," Washington said. Researchers working together in the building are designing robots for people, said Alec Gallimore, dean of Engineering at the University of Michigan. "Robots aren't people and people aren't robots, but we think - together - there can be synergy," Gallimore said. "So, we're designing robots that are going to help you. First responders for example. Can we put robots in harm's way so we don't have to have people there?" Ford contributed about $37 million to the cost of the robotics building which also features a three-story, indoor fly zone to test drones and other autonomous aerial vehicles indoors; a yard designed with input from scientists at the university and NASA to test vehicles and landing concepts on a landscape mimicking the surface of Mars. The University of Michigan and Ford also are working with two historically Black colleges in Atlanta, Morehouse and Spelman, allowing students there to enroll remotely in a pilot robotics course.
724
California Passes Regulation Banning 'Dark Patterns' Under Landmark Privacy Law
New rules enacted under California's Consumer Privacy Act (CCPA) will bar so-called dark patterns, or underhanded practices used by websites or applications to get users to behave atypically. Examples include website visitors suddenly being redirected to a subscription page, even when they have no interest in the product being marketed. According to an infographic from the California Attorney General's office, dark-pattern strategies rely on "confusing language or unnecessary steps such as forced clicking or scrolling through multiple screens or listening to why you shouldn't opt out of their data sale." The new CCPA regulations will further add a Privacy Options icon, which Internet users can use as a visual cue to opt out of the sale of their personal data.
[]
[]
[]
scitechnews
None
None
None
None
New rules enacted under California's Consumer Privacy Act (CCPA) will bar so-called dark patterns, or underhanded practices used by websites or applications to get users to behave atypically. Examples include website visitors suddenly being redirected to a subscription page, even when they have no interest in the product being marketed. According to an infographic from the California Attorney General's office, dark-pattern strategies rely on "confusing language or unnecessary steps such as forced clicking or scrolling through multiple screens or listening to why you shouldn't opt out of their data sale." The new CCPA regulations will further add a Privacy Options icon, which Internet users can use as a visual cue to opt out of the sale of their personal data.
726
Virtual Reality at Your Fingertips
Virtual reality technology is advancing into new and different areas, ranging from pilot training in flight simulators to spatial visualisations, e.g., in architecture and increasingly life-like video games. The possibilities afforded by simulating environments in combination with technology such as VR glasses are practically endless. However, VR systems are still rarely used in everyday applications. "Today, VR is used mainly to consume content. In the case of productivity applications such as in office scenarios, VR still has much potential for development to replace current desktop computers," says Christian Holz, a professor at ETH Zurich's Institute for Intelligent Interactive Systems. There is enormous potential indeed: if content were to be no longer limited to a screen, users would be able to leverage the nature of three-dimensional environments, interacting with great flexibility and intuitively with their hands. What's preventing this from becoming a reality? Holz thinks the main problem lies in the interaction between humans and technology. For example, most of today's VR applications are either operated with controllers that are held in the user's hand or with hands in the air, so that the position can be captured by a camera. The user is also typically standing during interaction. "If you have to hold your arms up all the time, it quickly becomes tiring," says Holz. "This currently prevents normal work processes from becoming possible, as they require interaction with applications for multiple hours." Typing on a virtual keyboard, for example, presents another problem: the fingers move only slightly and cameras cannot capture the movement as precisely as current mechanical keyboards do. With in-air typing, the usual haptic feedback is also lacking. For this reason, it's clear to Holz's research team that passive interfaces will remain important for the viable and productive adoption of VR technology. That could be a classic tabletop, a wall or a person's own body. For optimal use, the researchers developed a sensory technology called "TapID," which they will present at the IEEE VR conference at the end of March. The prototype embeds several acceleration sensors in a normal rubber wristband. These sensors detect when the hand touches a surface and which finger the person has used. The researchers found that their novel sensor design can detect tiny differences in the vibration profile on the wrist in order to differentiate between each characteristic finger movement. A custom machine learning pipeline the researchers developed processes the collected data in real time. In combination with the camera system built into a set of VR glasses, which captures the position of the hands, TapID generates extremely precise input. The researchers have demonstrated this in several applications that they programmed for their development, including a virtual keyboard and a piano (see video).
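As a hedged sketch of the general idea rather than TapID's actual pipeline (the window size, features, labels and classifier below are all assumptions), a short window of wrist-accelerometer samples around a detected tap can be summarised by a few vibration features and fed to an off-the-shelf classifier that predicts which finger struck the surface.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

def tap_features(window):
    """window: (n_samples, 3) accelerometer readings around a detected tap."""
    mag = np.linalg.norm(window, axis=1)
    return np.array([mag.max(), mag.mean(), mag.std(), np.abs(np.diff(mag)).mean()])

# Toy training data: random stand-ins for tap windows labelled by finger (0=thumb ... 4=pinky).
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 64, 3)) * rng.uniform(0.5, 2.0, size=(200, 1, 1))
labels = rng.integers(0, 5, size=200)

X = np.stack([tap_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print(clf.predict(X[:3]))   # predicted finger for the first three taps

In a full system, the predicted finger would then be fused with the hand position tracked by the headset's cameras to decide which virtual key or piano note was pressed.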
Researchers at ETH Zurich in Switzerland have developed a dual-sensor wristband that facilitates intuitive free-hand interaction within virtual productivity spaces. The prototype TapID technology incorporates two acceleration sensors in a rubber wristband, which detect when the hand touches a surface and which finger the user has employed. This design senses tiny differences in the vibration profile on the wrist and differentiates between each unique finger movement, while a custom machine learning pipeline processes the data in real time. TapID generates extremely precise input when used with cameras embedded within virtual reality (VR) glasses, which capture hand positions. The researchers designed a virtual keyboard and piano to demonstrate TapID's capabilities, and ETH Zurich's Christian Holz said the portable technology "has the potential to make VR systems suitable for productivity work on the go."
[]
[]
[]
scitechnews
None
None
None
None
Researchers at ETH Zurich in Switzerland have developed a dual-sensor wristband that facilitates intuitive free-hand interaction within virtual productivity spaces. The prototype TapID technology incorporates two acceleration sensors in a rubber wristband, which detect when the hand touches a surface and which finger the user has employed. This design senses tiny differences in the vibration profile on the wrist and differentiates between each unique finger movement, while a custom machine learning pipeline processes the data in real time. TapID generates extremely precise input when used with cameras embedded within virtual reality (VR) glasses, which capture hand positions. The researchers designed a virtual keyboard and piano to demonstrate TapID's capabilities, and ETH Zurich's Christian Holz said the portable technology "has the potential to make VR systems suitable for productivity work on the go." Virtual reality technology is advancing into new and different areas, ranging from pilot training in flight simulators to spatial visualisations, e.g., in architecture and increasingly life-like video games. The possibilities afforded by simulating environments in combination with technology such as VR glasses are practically endless. However, VR systems are still rarely used in everyday applications. "Today, VR is used mainly to consume content. In the case of productivity applications such as in office scenarios, VR still has much potential for development to replace current desktop computers," says Christian Holz, a professor at ETH Zurich's Institute for Intelligent Interactive Systems. There is enormous potential indeed: if content were to be no longer limited to a screen, users would be able to leverage the nature of three-dimensional environments, interacting with great flexibility and intuitively with their hands. What's preventing this from becoming a reality? Holz thinks the main problem lies in the interaction between humans and technology. For example, most of today's VR applications are either operated with controllers that are held in the user's hand or with hands in the air, so that the position can be captured by a camera. The user is also typically standing during interaction. "If you have to hold your arms up all the time, it quickly becomes tiring," says Holz. "This currently prevents normal work processes from becoming possible, as they require interaction with applications for multiple hours." Typing on a virtual keyboard, for example, presents another problem: the fingers move only slightly and cameras cannot capture the movement as precisely as current mechanical keyboards do. With in-air typing, the usual haptic feedback is also lacking. For this reason, it's clear to Holz's research team that passive interfaces will remain important for the viable and productive adoption of VR technology. That could be a classic tabletop, a wall or a person's own body. For optimal use, the researchers developed a sensory technology called "TapID," which they will present at the IEEE VR conference call_made at the end of March. The prototype embeds several acceleration sensors in a normal rubber wristband. These sensors detect when the hand touches a surface and which finger the person has used. The researchers found that their novel sensor design can detect tiny differences in the vibration profile on the wrist in order to differentiate between each characteristic finger movement. 
A custom machine learning pipeline the researchers developed processes the collected data in real time. In combination with the camera system built into a set of VR glasses, which captures the position of the hands, TapID generates extremely precise input. The researchers have demonstrated this in several applications that they programmed for their development, including a virtual keyboard and a piano (see video).
727
Researchers Find Better Way to Measure Consciousness
Millions of people are administered general anesthesia each year in the United States alone, but it's not always easy to tell whether they are actually unconscious. A small proportion of those patients regain some awareness during medical procedures, but a new study of the brain activity that represents consciousness could prevent that potential trauma. It may also help both people in comas and scientists struggling to define which parts of the brain can claim to be key to the conscious mind. "What has been shown for 100 years in an unconscious state like sleep are these slow waves of electrical activity in the brain," says Yuri Saalmann, a University of Wisconsin-Madison psychology and neuroscience professor. "But those may not be the right signals to tap into. Under a number of conditions - with different anesthetic drugs, in people that are suffering from a coma or with brain damage or other clinical situations - there can be high-frequency activity as well." UW-Madison researchers recorded electrical activity in about 1,000 neurons surrounding each of 100 sites throughout the brains of a pair of monkeys at the Wisconsin National Primate Research Center during several states of consciousness: under drug-induced anesthesia, light sleep, resting wakefulness, and roused from anesthesia into a waking state through electrical stimulation of a spot deep in the brain (a procedure the researchers described in 2020). "With data across multiple brain regions and different states of consciousness, we could put together all these signs traditionally associated with consciousness - including how fast or slow the rhythms of the brain are in different brain areas - with more computational metrics that describe how complex the signals are and how the signals in different areas interact," says Michelle Redinbaugh, a graduate student in Saalmann's lab and co-lead author of the study, published today in the journal Cell Systems. To sift out the characteristics that best indicate whether the monkeys were conscious or unconscious, the researchers used machine learning. They handed their large pool of data over to a computer, told the computer which state of consciousness had produced each pattern of brain activity, and asked the computer which areas of the brain and patterns of electrical activity corresponded most strongly with consciousness. The results pointed away from the frontal cortex, the part of the brain typically monitored to safely maintain general anesthesia in human patients and the part most likely to exhibit the slow waves of activity long considered typical of unconsciousness. "In the clinic now, they may put electrodes on the patient's forehead," says Mohsen Afrasiabi, the other lead author of the study and an assistant scientist in Saalmann's lab. "We propose that the back of the head is a more important place for those electrodes, because we've learned the back of the brain and the deep brain areas are more predictive of state of consciousness than the front." And while both low- and high-frequency activity can be present in unconscious states, it's complexity that best indicates a waking mind. "In an anesthetized or unconscious state, those probes in 100 different sites record a relatively small number of activity patterns," says Saalmann, whose work is supported by the National Institutes of Health. A larger - or more complex - range of patterns was associated with the monkey's awake state.
"You need more complexity to convey more information, which is why it's related to consciousness," Redinbaugh says. "If you have less complexity across these important brain areas, they can't convey very much information. You're looking at an unconscious brain." More accurate measurements of patients undergoing anesthesia is one possible outcome of the new findings, and the researchers are part of a collaboration supported by the National Science Foundation working on applying the knowledge of key brain areas. "Beyond just detecting the state of consciousness, these ideas could improve therapeutic outcomes from people with consciousness disorders," Saalmann says. "We could use what we've learned to optimize electrical patterns through precise brain stimulation and help people who are, say, in a coma maintain a continuous level of consciousness." This research was supported by grants from the National Institutes of Health (R01MH110311 and P51OD011106), the Binational Science Foundation, and the Wisconsin National Primate Research Center.
Analysis of neural signals in monkeys by University of Wisconsin-Madison (UWM) researchers combined traditional telltales of consciousness with computational metrics describing the signals' complexities and interaction in different brain regions. The authors used machine learning to determine whether the monkeys were conscious or not and the activity levels of their brain areas by processing those signals through a computer. UWM's Mohsen Afrasiabi said the results indicated the back of the brain and the deep brain areas are more predictive of states of consciousness than the front. UWM's Yuri Saalmann said, "We could use what we've learned to optimize electrical patterns through precise brain stimulation and help people who are, say, in a coma maintain a continuous level of consciousness."
[]
[]
[]
scitechnews
None
None
None
None
Analysis of neural signals in monkeys by University of Wisconsin-Madison (UWM) researchers combined traditional telltales of consciousness with computational metrics describing the signals' complexities and interaction in different brain regions. The authors used machine learning to determine whether the monkeys were conscious or not and the activity levels of their brain areas by processing those signals through a computer. UWM's Mohsen Afrasiabi said the results indicated the back of the brain and the deep brain areas are more predictive of states of consciousness than the front. UWM's Yuri Saalmann said, "We could use what we've learned to optimize electrical patterns through precise brain stimulation and help people who are, say, in a coma maintain a continuous level of consciousness." Millions of people are administered general anesthesia each year in the United States alone, but it's not always easy to tell whether they are actually unconscious. A small proportion of those patients regain some awareness during medical procedures, but a new study of the brain activity that represents consciousness could prevent that potential trauma. It may also help both people in comas and scientists struggling to define which parts of the brain can claim to be key to the conscious mind. "What has been shown for 100 years in an unconscious state like sleep are these slow waves of electrical activity in the brain," says Yuri Saalmann , a University of Wisconsin-Madison psychology and neuroscience professor. "But those may not be the right signals to tap into. Under a number of conditions - with different anesthetic drugs, in people that are suffering from a coma or with brain damage or other clinical situations - there can be high-frequency activity as well." UW-Madison researchers recorded electrical activity in about 1,000 neurons surrounding each of 100 sites throughout the brains of a pair of monkeys at the Wisconsin National Primate Research Center during several states of consciousness: under drug-induced anesthesia, light sleep, resting wakefulness, and roused from anesthesia into a waking state through electrical stimulation of a spot deep in the brain (a procedure the researchers described in 2020 ). "With data across multiple brain regions and different states of consciousness, we could put together all these signs traditionally associated with consciousness - including how fast or slow the rhythms of the brain are in different brain areas - with more computational metrics that describe how complex the signals are and how the signals in different areas interact," says Michelle Redinbaugh, a graduate student in Saalman's lab and co-lead author of the study, published today in the journal Cell Systems . To sift out the characteristics that best indicate whether the monkeys were conscious or unconscious, the researchers used machine learning. They handed their large pool of data over to a computer, told the computer which state of consciousness had produced each pattern of brain activity, and asked the computer which areas of the brain and patterns of electrical activity corresponded most strongly with consciousness. The results pointed away from the frontal cortex, the part of the brain typically monitored to safely maintain general anesthesia in human patients and the part most likely to exhibit the slow waves of activity long considered typical of unconsciousness. 
"In the clinic now, they may put electrodes on the patient's forehead," says Mohsen Afrasiabi, the other lead author of the study and an assistant scientist in Saalmann's lab. "We propose that the back of the head is a more important place for those electrodes, because we've learned the back of the brain and the deep brain areas are more predictive of state of consciousness than the front." And while both low- and high-frequency activity can be present in unconscious states, it's complexity that best indicates a waking mind. "In an anesthetized or unconscious state, those probes in 100 different sites record a relatively small number of activity patterns," says Saalmann, whose work is supported by the National Institutes of Health. A larger - or more complex - range of patterns was associated with the monkey's awake state. "You need more complexity to convey more information, which is why it's related to consciousness," Redinbaugh says. "If you have less complexity across these important brain areas, they can't convey very much information. You're looking at an unconscious brain." More accurate measurements of patients undergoing anesthesia is one possible outcome of the new findings, and the researchers are part of a collaboration supported by the National Science Foundation working on applying the knowledge of key brain areas. "Beyond just detecting the state of consciousness, these ideas could improve therapeutic outcomes from people with consciousness disorders," Saalmann says. "We could use what we've learned to optimize electrical patterns through precise brain stimulation and help people who are, say, in a coma maintain a continuous level of consciousness." This research was supported by grants from the National Institutes of Health (R01MH110311 and P51OD011106), the Binational Science Foundation, and the Wisconsin National Primate Research Center.
728
Simulation of Self-Driving Fleets Brings Their Deployment in Cities Closer
Imperial researchers have simulated self-driving fleet impacts using real-world data, providing suggestions for their optimal deployment in cities. The simulations show the potential impact of fleets of autonomous vehicles (AV) on congestion, emissions, public transport and ride-sharing services. The team analysed tens of thousands of possible deployment scenarios using real-world data and a range of service parameters and fleet management algorithms. The aim is to ensure that such services will run efficiently and profitably while reducing knock-on effects to other modes of transport, such as active and sustainable travel, whilst bringing their deployment on city streets worldwide a step closer. Researchers from the Transport Systems and Logistics Laboratory (TSL), part of the Department of Civil and Environmental Engineering at Imperial College London, have joined forces with AV software company Oxbotica in a project called SHIFT, funded by a £1.58 million grant awarded by the Centre for Connected and Autonomous Vehicles and delivered through Innovate UK. Transport for London (TfL) were also consortium members, providing data to aid understanding of how the deployment of AVs could vary across different areas of London and the need for any such deployment to work towards supporting the central goal of the Mayor of London's Transport Strategy: by 2041, 80 per cent of all trips in London are to be made on foot, by cycle or using public transport and that London's air quality is improved. The project partners have today published the SHIFT Autonomous Deployment Report , which includes details of the Imperial team's simulations, and provides first-of-its-kind driver safety guidelines, an AV build manual and a data infrastructure framework to help operators take AV demonstrations to larger-scale service deployments in the UK. Dr Panagiotis Angeloudis , Reader in Transport Systems & Logistics at Imperial College London, said: "The deployment of autonomous vehicle technologies has the potential to revolutionise mobility in cities around the world. Through the SHIFT project, we had an opportunity to study their potential impacts on the rest of the transport network in an unprecedented level of detail. "Using the tools that we developed, stakeholders can now plan better for the deployment of autonomous vehicle technologies and be better prepared for the future." The team modelled the impact of AV fleets using data from existing road networks and real-world travel demand patterns and built upon decades of research on passenger behaviour, such as how likely someone is to choose a mode of transport over others based on price, convenience and travel times. In time, as AVs are likely to operate throughout the day, without the need for driver breaks, there is the potential for them to waste energy and increase congestion by completing 'empty miles'. Algorithms that the team has developed can help optimise the fleet, making sure only enough vehicles are operating in the right areas to meet demand without wasting energy. The team also simulated the impact of electrifying AVs, showing how the emissions from road transportation could be reduced. Dr Marc Stettler , Senior Lecturer in Transport & Environment, and one of the Imperial investigators in this project said: "Proper management of autonomous vehicle fleets is essential to minimising energy consumption and environmental impacts. 
We want to take advantage of the potential to increase passenger occupancy and avoid vehicles cruising around looking for passengers, thereby reducing vehicle kilometres travelled." The team say their simulations have the advantage of providing a 'digital twin' of the operating environment, allowing the testing of different scenarios without waiting for the availability and deployment of new technologies. Therefore, it could be applied to other transport problems, such as how to electrify other forms of public transport, such as city buses, and even future developments like the deployment of autonomous air taxis. Dr Angeloudis said: "The motivation for deploying large fleets of AVs in cities is to reduce individual car ownership, freeing up space on the roads. For the public to choose this option, the services provided by such fleets must be reasonably priced and to accommodate their needs, responding to demand. "To do this, the vehicles should be managed as efficiently as possible, with their deployment aligned with user demand while having minimal impact on factors like congestion and emissions. Our model will help fleet operators meet these conditions, providing multiple benefits for city dwellers." Minister for Investment, Lord Grimstone, said: "The potential economic and environmental benefits of automated vehicles are huge, but deploying the technology safely in the UK's busy cities is absolutely vital in ensuring they are a success when they are brought to our streets. "SHIFT's guidelines help pave the way for self-driving vehicles on our roads, and are further evidence of the UK's global leadership in shaping the automotive industry of the future as we build back greener from the pandemic." The TSL team will now focus on applying their simulation technology to making AV deployments even safer as part of another ongoing project called D-RISK, which will focus on developing the world's first virtual driving test for self-driving cars.
Researchers at the U.K.'s Imperial College London (ICL) have modeled the impact of self-driving fleets based on real-world data, to suggest optimal approaches for urban deployment. The team analyzed tens of thousands of potential deployment scenarios, and a range of service parameters and fleet management algorithms, as part of a project called SHIFT. The algorithms can help optimize such a fleet and ensure that only enough autonomous vehicles (AVs) are running in the right areas to meet demand, while minimizing energy consumption. The report provides driver safety guidelines, an AV build manual, and a data infrastructure framework to help operators scale up AV demonstrations to service deployments.
[]
[]
[]
scitechnews
None
None
None
None
Researchers at the U.K.'s Imperial College London (ICL) have modeled the impact of self-driving fleets based on real-world data, to suggest optimal approaches for urban deployment. The team analyzed tens of thousands of potential deployment scenarios, and a range of service parameters and fleet management algorithms, as part of a project called SHIFT. The algorithms can help optimize such a fleet and ensure that only enough autonomous vehicles (AVs) are running in the right areas to meet demand, while minimizing energy consumption. The report provides driver safety guidelines, an AV build manual, and a data infrastructure framework to help operators scale up AV demonstrations to service deployments. Imperial researchers have simulated self-driving fleet impacts using real-world data, providing suggestions for their optimal deployment in cities. The simulations show the potential impact of fleets of autonomous vehicles (AV) on congestion, emissions, public transport and ride-sharing services. The team analysed tens of thousands of possible deployment scenarios using real-world data and a range of service parameters and fleet management algorithms. The aim is to ensure that such services will run efficiently and profitably while reducing knock-on effects to other modes of transport, such as active and sustainable travel, whilst bringing their deployment on city streets worldwide a step closer. Researchers from the Transport Systems and Logistics Laboratory (TSL), part of the Department of Civil and Environmental Engineering at Imperial College London, have joined forces with AV software company Oxbotica in a project called SHIFT, funded by a £1.58 million grant awarded by the Centre for Connected and Autonomous Vehicles and delivered through Innovate UK. Transport for London (TfL) were also consortium members, providing data to aid understanding of how the deployment of AVs could vary across different areas of London and the need for any such deployment to work towards supporting the central goal of the Mayor of London's Transport Strategy: by 2041, 80 per cent of all trips in London are to be made on foot, by cycle or using public transport and that London's air quality is improved. The project partners have today published the SHIFT Autonomous Deployment Report , which includes details of the Imperial team's simulations, and provides first-of-its-kind driver safety guidelines, an AV build manual and a data infrastructure framework to help operators take AV demonstrations to larger-scale service deployments in the UK. Dr Panagiotis Angeloudis , Reader in Transport Systems & Logistics at Imperial College London, said: "The deployment of autonomous vehicle technologies has the potential to revolutionise mobility in cities around the world. Through the SHIFT project, we had an opportunity to study their potential impacts on the rest of the transport network in an unprecedented level of detail. "Using the tools that we developed, stakeholders can now plan better for the deployment of autonomous vehicle technologies and be better prepared for the future." The team modelled the impact of AV fleets using data from existing road networks and real-world travel demand patterns and built upon decades of research on passenger behaviour, such as how likely someone is to choose a mode of transport over others based on price, convenience and travel times. 
In time, as AVs are likely to operate throughout the day, without the need for driver breaks, there is the potential for them to waste energy and increase congestion by completing 'empty miles'. Algorithms that the team has developed can help optimise the fleet, making sure only enough vehicles are operating in the right areas to meet demand without wasting energy. The team also simulated the impact of electrifying AVs, showing how the emissions from road transportation could be reduced. Dr Marc Stettler , Senior Lecturer in Transport & Environment, and one of the Imperial investigators in this project said: "Proper management of autonomous vehicle fleets is essential to minimising energy consumption and environmental impacts. We want to take advantage of the potential to increase passenger occupancy and avoid vehicles cruising around looking for passengers, thereby reducing vehicle kilometres travelled." The team say their simulations have the advantage of providing a 'digital twin' of the operating environment, allowing the testing of different scenarios without waiting for the availability and deployment of new technologies. Therefore, it could be applied to other transport problems, such as how to electrify other forms of public transport, such as city buses, and even future developments like the deployment of autonomous air taxis. Dr Angeloudis said: "The motivation for deploying large fleets of AVs in cities is to reduce individual car ownership, freeing up space on the roads. For the public to choose this option, the services provided by such fleets must be reasonably priced and to accommodate their needs, responding to demand. "To do this, the vehicles should be managed as efficiently as possible, with their deployment aligned with user demand while having minimal impact on factors like congestion and emissions. Our model will help fleet operators meet these conditions, providing multiple benefits for city dwellers." Minister for Investment, Lord Grimstone, said: "The potential economic and environmental benefits of automated vehicles are huge, but deploying the technology safely in the UK's busy cities is absolutely vital in ensuring they are a success when they are brought to our streets. "SHIFT's guidelines help pave the way for self-driving vehicles on our roads, and are further evidence of the UK's global leadership in shaping the automotive industry of the future as we build back greener from the pandemic." The TSL team will now focus on applying their simulation technology to making AV deployments even safer as part of another ongoing project called D-RISK, which will focus on developing the world's first virtual driving test for self-driving cars.
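One core step in any such fleet-management algorithm is deciding which idle vehicle should serve which request so that "empty miles" are kept to a minimum. The sketch below is a minimal illustration of that single step, assuming made-up vehicle and request coordinates; it is not the SHIFT simulator, which layers demand forecasting, rebalancing and mode-choice modelling on top.

```python
# Minimal sketch of one fleet-management step, under assumed coordinates and demand:
# match idle vehicles to ride requests so that the total "empty" distance driven to
# pickups is minimized.
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
vehicles = rng.uniform(0, 10, size=(8, 2))   # hypothetical idle-AV positions on a 10 km grid
requests = rng.uniform(0, 10, size=(5, 2))   # hypothetical pickup locations

empty_km = cdist(vehicles, requests)         # cost matrix: deadheading distance to each pickup
veh_idx, req_idx = linear_sum_assignment(empty_km)

for v, r in zip(veh_idx, req_idx):
    print(f"vehicle {v} -> request {r}  ({empty_km[v, r]:.2f} km empty)")
print("total empty km:", round(empty_km[veh_idx, req_idx].sum(), 2))
```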
729
Drive-Throughs That Predict Your Order? Restaurants Are Thinking Fast
Many restaurants expect digital ordering and drive-throughs to remain key business channels, and some are testing artificial intelligence (AI) to predict and suggest personalized orders. McDonald's acquired Israeli AI company Dynamic Yield to boost sales via personalized digital promotions. Burger King is modernizing its drive-through with its Deep Flame AI system to suggest foods based on daily popularity, and testing Bluetooth technology to identify loyal customers and show their previous orders to calculate their probability of ordering the same items. Restaurant Brands International (RBI) hopes to deploy predictive personalized systems at more than 10,000 of its North American restaurants by mid-2022. RBI's Duncan Fulton envisions customers having "the ability to automatically reorder things and pay for the items at the board, which, ultimately, speeds up the window time, allowing you to collect your food and go on your way."
[]
[]
[]
scitechnews
None
None
None
None
Many restaurants expect digital ordering and drive-throughs to remain key business channels, and some are testing artificial intelligence (AI) to predict and suggest personalized orders. McDonald's acquired Israeli AI company Dynamic Yield to boost sales via personalized digital promotions. Burger King is modernizing its drive-through with its Deep Flame AI system to suggest foods based on daily popularity, and testing Bluetooth technology to identify loyal customers and show their previous orders to calculate their probability of ordering the same items. Restaurant Brands International (RBI) hopes to deploy predictive personalized systems at more than 10,000 of its North American restaurants by mid-2022. RBI's Duncan Fulton envisions customers having "the ability to automatically reorder things and pay for the items at the board, which, ultimately, speeds up the window time, allowing you to collect your food and go on your way."
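A minimal sketch of the reordering idea - estimating, from a returning customer's own history, how likely they are to order each item again - might look like the following. The function name, the menu items and the simple frequency-based estimate are illustrative assumptions, not any chain's actual system.

```python
# Illustrative sketch only: estimate how likely a returning customer is to reorder
# each item, based on their own past orders, and surface the most likely picks.
from collections import Counter

def reorder_suggestions(order_history, top_n=3):
    """order_history: list of past orders, each a list of item names."""
    counts = Counter(item for order in order_history for item in order)
    total_orders = len(order_history)
    # probability an item appears in a given order, estimated from past frequency
    probs = {item: round(n / total_orders, 2) for item, n in counts.items()}
    return sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

history = [["Whopper", "fries"], ["Whopper", "soda"], ["chicken sandwich", "fries"]]
print(reorder_suggestions(history))   # [('Whopper', 0.67), ('fries', 0.67), ('soda', 0.33)]
```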
730
Fire Safety App Simulates Wildfires, Shows Route to Avoid Them
Researchers at the CYENS Center of Excellence in Cyprus have built a mobile wildfire simulation application that provides personalized evacuation routes to anyone in the path of a fire. The app connects to a Web server running the simulation program, which uses publicly available data to update predictions of the spread of fires every 15 minutes. A fire management tool allows local fire departments to quickly tag when and where a fire starts, which is applied to real-time simulations. The app then employs the global positioning system (GPS) location of each user to map out potential escape routes, selecting optimal routes by comparing how fast each route gets them to safety against how close it brings them to the fire's path. The algorithm displays the best option either as turn-by-turn directions or as a route overlaid on a regional map.
[]
[]
[]
scitechnews
None
None
None
None
Researchers at the CYENS Center of Excellence in Cyprus have built a mobile wildfire simulation application that provides personalized evacuation routes to anyone in the path of a fire. The app connects to a Web server running the simulation program, which uses publicly available data to update predictions of the spread of fires every 15 minutes. A fire management tool allows local fire departments to quickly tag when and where a fire starts, which is applied to real-time simulations. The app then employs the global positioning system (GPS) location of each user to map out potential escape routes, selecting optimal routes by comparing how fast each route gets them to safety against how close it brings them to the fire's path. The algorithm displays the best option either as turn-by-turn directions or as a route overlaid on a regional map.
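The route-selection trade-off described above - how quickly a route reaches safety versus how close it passes to the fire's predicted path - can be illustrated with a simple scoring function. Everything in the sketch (the penalty weight, the coordinates, the candidate routes) is an assumption made for illustration, not the app's actual algorithm.

```python
# Hedged sketch of the route-scoring idea: prefer routes that reach safety quickly
# while staying far from the predicted fire front.
import math

def min_distance_to_fire(route, fire_cells):
    return min(math.dist(p, f) for p in route for f in fire_cells)

def pick_route(routes, fire_cells, fire_penalty=2.0):
    """routes: list of (travel_minutes, [waypoints]); lower score is better."""
    def score(route):
        minutes, waypoints = route
        return minutes + fire_penalty / max(min_distance_to_fire(waypoints, fire_cells), 0.1)
    return min(routes, key=score)

fire = [(2.0, 2.0), (2.5, 2.1)]                       # predicted fire front (km coordinates)
candidates = [
    (12, [(0, 0), (1, 1), (3, 0)]),                   # faster route that skirts the fire
    (18, [(0, 0), (0, 3), (3, 4)]),                   # slower route that keeps clear
]
print("chosen route:", pick_route(candidates, fire))
```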
732
IBM Develops AI to Invent Antibiotics - and It's Made Two Already
Hiding behind the current COVID-19 pandemic, another serious public health threat is looming - the rise of antibiotic-resistant "superbugs." New antibiotics are needed to help turn the tide, but developing them takes time. Now, IBM Research has put AI to work on the task, producing two promising new drug candidates very quickly. The discovery of penicillin was one of the most important scientific breakthroughs of the 20th century, as previously deadly infections became easily treatable. But decades on, those benefits are beginning to falter. Like all organisms, bacteria evolve in response to their environment - so when we pump their environments (ie, our bodies) with drugs, it's only a matter of time before some of them figure out how to defend themselves. Given enough time and antibiotic use, the only microbes remaining will be those that are genetically immune to the drugs. That's the situation we're increasingly finding ourselves in. We're now down to our last line of defense - and worryingly, even those are starting to fail . Without new antibiotics or other treatments , scientists predict that once-minor infections could claim up to 10 million lives a year by 2050. Worse still, developing new drugs takes years and involves a huge amount of trial and error, with potential molecules being made up of countless possible chemical combinations. Thankfully, that's just the kind of work that artificial intelligence excels at, so IBM has developed a new system to sift through the numbers for us. The IBM Research team created an AI system that's much faster at exploring the entire possibility space for molecular configurations. First, the researchers started with a model called a deep generative autoencoder, which essentially examines a range of peptide sequences, captures important information about their function and the molecules that make them up, and looks for similarities to other peptides. Next, a system called Controlled Latent attribute Space Sampling (CLaSS) is applied. This system uses the data gathered and generates new peptide molecules with specific, desired properties. In this case, that's antimicrobial effectiveness. But of course, the ability to kill bacteria isn't the only requirement for an antibiotic - it also needs to be safe for human use, and ideally work across a range of classes of bacteria. So the AI-generated molecules are then run through deep learning classifiers to weed out ineffective or toxic combinations. Over the course of 48 days, the AI system identified, synthesized and experimented with 20 new antibiotic peptide candidates. Two of them in particular turned out to be particularly promising - they were highly potent against a range of bacteria from the two main classes (Gram-positive and Gram-negative), by punching holes in the bugs' outer membranes. In cell cultures and mouse tests, they also had low toxicity, and seemed very unlikely to lead to further drug resistance in E. coli. The two new antibiotic candidates are exciting enough by themselves, but the process through which they were discovered is the real breakthrough. Being able to develop and test new antibiotics quickly and more efficiently could help prevent the nightmare scenario of returning to a time before antibiotics. The research was published in the journal Nature . Source: IBM Research
IBM Research is using artificial intelligence (AI) to develop new antibiotics more quickly, and has already produced two promising drug candidates. Potential molecules are comprised of countless possible chemical combinations, which is why drug development generally takes years. To speed up the process, the researchers used a deep generative autoencoder model to examine a range of peptide sequences, collecting data about their function and the molecules within them and searching for similarities to other peptides. The researchers then used a Controlled Latent attribute Space Sampling (CLaSS) system to generate new peptide molecules with specific properties based on the data gathered by the model. The AI system identified, synthesized, and experimented with 20 new antibiotic peptide candidates over 48 days, producing two that were effective against a range of Gram-positive and Gram-negative bacteria.
[]
[]
[]
scitechnews
None
None
None
None
IBM Research is using artificial intelligence (AI) to develop new antibiotics more quickly, and has already produced two promising drug candidates. Potential molecules are comprised of countless possible chemical combinations, which is why drug development generally takes years. To speed up the process, the researchers used a deep generative autoencoder model to examine a range of peptide sequences, collecting data about their function and the molecules within them and searching for similarities to other peptides. The researchers then used a Controlled Latent attribute Space Sampling (CLaSS) system to generate new peptide molecules with specific properties based on the data gathered by the model. The AI system identified, synthesized, and experimented with 20 new antibiotic peptide candidates over 48 days, producing two that were effective against a range of Gram-positive and Gram-negative bacteria. Hiding behind the current COVID-19 pandemic, another serious public health threat is looming - the rise of antibiotic-resistant "superbugs." New antibiotics are needed to help turn the tide, but developing them takes time. Now, IBM Research has put AI to work on the task, producing two promising new drug candidates very quickly. The discovery of penicillin was one of the most important scientific breakthroughs of the 20th century, as previously deadly infections became easily treatable. But decades on, those benefits are beginning to falter. Like all organisms, bacteria evolve in response to their environment - so when we pump their environments (ie, our bodies) with drugs, it's only a matter of time before some of them figure out how to defend themselves. Given enough time and antibiotic use, the only microbes remaining will be those that are genetically immune to the drugs. That's the situation we're increasingly finding ourselves in. We're now down to our last line of defense - and worryingly, even those are starting to fail . Without new antibiotics or other treatments , scientists predict that once-minor infections could claim up to 10 million lives a year by 2050. Worse still, developing new drugs takes years and involves a huge amount of trial and error, with potential molecules being made up of countless possible chemical combinations. Thankfully, that's just the kind of work that artificial intelligence excels at, so IBM has developed a new system to sift through the numbers for us. The IBM Research team created an AI system that's much faster at exploring the entire possibility space for molecular configurations. First, the researchers started with a model called a deep generative autoencoder, which essentially examines a range of peptide sequences, captures important information about their function and the molecules that make them up, and looks for similarities to other peptides. Next, a system called Controlled Latent attribute Space Sampling (CLaSS) is applied. This system uses the data gathered and generates new peptide molecules with specific, desired properties. In this case, that's antimicrobial effectiveness. But of course, the ability to kill bacteria isn't the only requirement for an antibiotic - it also needs to be safe for human use, and ideally work across a range of classes of bacteria. So the AI-generated molecules are then run through deep learning classifiers to weed out ineffective or toxic combinations. Over the course of 48 days, the AI system identified, synthesized and experimented with 20 new antibiotic peptide candidates. 
Two of them in particular turned out to be particularly promising - they were highly potent against a range of bacteria from the two main classes (Gram-positive and Gram-negative), by punching holes in the bugs' outer membranes. In cell cultures and mouse tests, they also had low toxicity, and seemed very unlikely to lead to further drug resistance in E. coli. The two new antibiotic candidates are exciting enough by themselves, but the process through which they were discovered is the real breakthrough. Being able to develop and test new antibiotics quickly and more efficiently could help prevent the nightmare scenario of returning to a time before antibiotics. The research was published in the journal Nature . Source: IBM Research
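The overall shape of the pipeline - sample candidates from a generative model's latent space, then screen them with attribute classifiers - can be sketched as follows. The logistic-regression "classifiers," the synthetic latent data and the assumed attribute structure are stand-ins for illustration only; IBM's actual system trains a deep autoencoder on real peptide sequences and decodes the surviving latent points back into molecules for synthesis.

```python
# Rough sketch of sampling-then-screening (not IBM's CLaSS code): draw candidate points
# from a latent prior, then keep only those predicted to be antimicrobial AND non-toxic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(2)
latent_dim = 16

# Synthetic training data standing in for encoded peptides with known labels.
Z_train = rng.normal(size=(500, latent_dim))
antimicrobial = (Z_train[:, 0] > 0.3).astype(int)   # assumed attribute structure
toxic = (Z_train[:, 1] > 0.8).astype(int)

clf_activity = LogisticRegression().fit(Z_train, antimicrobial)
clf_toxicity = LogisticRegression().fit(Z_train, toxic)

# Rejection-sample the latent prior, keeping candidates with the desired attributes.
candidates = rng.normal(size=(2000, latent_dim))
keep = (clf_activity.predict(candidates) == 1) & (clf_toxicity.predict(candidates) == 0)
print(f"{keep.sum()} of {len(candidates)} sampled latent points pass both screens")
```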
733
Coming 'Vaccine Passports' Aim for Simplicity
Developers of the first digital "vaccine passports" for post-pandemic travel said the applications are designed for ease of use, and engineered to eventually mate with other travel platforms. The Clear trusted-traveler program is evaluating a Covid-19 test or vaccination-verification app on certain flights into Hawaii. Meanwhile, organizations like the nonprofit Commons Project Foundation and the International Air Transport Association (IATA) are unveiling apps that allow cross-border travelers to demonstrate they have had a COVID-19 test or a vaccination. Health pass apps aim to navigate users through data entry, while minimizing the information they must input, partly through functions like passport-chip scanning. IATA's Alan Murray Hayden thinks gradual adoption of such apps will provide a kind of herd immunity for lines at the airport.
[]
[]
[]
scitechnews
None
None
None
None
Developers of the first digital "vaccine passports" for post-pandemic travel said the applications are designed for ease of use, and engineered to eventually mate with other travel platforms. The Clear trusted-traveler program is evaluating a Covid-19 test or vaccination-verification app on certain flights into Hawaii. Meanwhile, organizations like the nonprofit Commons Project Foundation and the International Air Transport Association (IATA) are unveiling apps that allow cross-border travelers to demonstrate they have had a COVID-19 test or a vaccination. Health pass apps aim to navigate users through data entry, while minimizing the information they must input, partly through functions like passport-chip scanning. IATA's Alan Murray Hayden thinks gradual adoption of such apps will provide a kind of herd immunity for lines at the airport.
734
Cybersecurity Report: 'Smart Farms' Are Hackable Farms
Some have dubbed this the era of " smart agriculture " - with farms around the world scaling up their use of the Internet, IoT , big data, cloud computing and artificial intelligence to increase yields and sustainability. Yet with so much digital technology, naturally, also comes heightened potential cybersecurity vulnerabilities. There's no scaling back smart agriculture either. By the end of this decade we will need the extra food it produces - with world's population projected to cross 8.5 billion , and more than 840 million people affected by acute hunger. Unless smart agriculture can dramatically increase the global food system's efficiency, the prospect of reducing global malnutrition and hunger - let alone the ambitious goal of zero hunger by 2030 - appears very difficult indeed. Agriculture 4.0 (as smart ag is also called) aims not just at growing more food but also at increased efficiency, more powerful data analysis, and more intelligent automation and decision-making. Although smart agriculture has been extensively studied, "the security issues [around] smart agriculture have not," says Xing Yang from Nanjing Agricultural University in China. Research in the field to date, he adds, has mostly involved applying conventional cybersecurity wisdom to agricultural tech. Agricultural cybersecurity, by contrast, he says, is not given enough attention. Yang and his colleagues surveyed the different kinds of smart agriculture, as well as the key technologies and applications specific to them. Agricultural IoT applications have unique characteristics that give rise to security issues, which the authors enumerate and suggest counter-measures for. (Their research was published in a recent issue of the journal IEEE/CAA Journal of Automatica Sinica .) For example, while field agriculture might be subject to threats from damage to the facility, poultry and livestock breeding may face sensor failures, and greenhouse cultivation could face control system intrusions. All of these could result in damage to the IoT architecture, both hardware and software, leading to failure or malfunction in farming operations. Plus, there are threats to data acquisition technologies - malicious attacks, unauthorized access, privacy leaks, and so on - while blockchain technologies can be vulnerable to access control failure and unsafe consensus agreement. In Yang's opinion, the most pressing security problems in smart agriculture involve the physical environment, such as plant factory control system intrusion and unmanned aerial vehicle (UAV) false positioning. "The network for rural areas is not as good as that of cities," he says, "which means that the network signals in some areas are poor, which leads to...false base station signals." The researchers also paid extra attention to agricultural equipment as potential security threats, something that recent studies have not done. "Considering that the deployment of IoT devices in farmland is relatively sparse and cannot be effectively supervised, how to ensure the physical security of these devices is a challenge," Yang says. "In addition, the delay caused by long-distance signal transmission also increases the risk of Sybil attacks [which is] transmitting malicious data through virtual nodes." In their experiments with solar insecticidal lamps, for instance, they found that the lamp's high voltage pulse affects the data transmission from Zigbee -based IoT devices and data acquisition sensors. 
Thus, Yang says, to minimize unnecessary losses, it's important to study each device in the context of how it's actually deployed in the field, including the possible safety risks of specific agricultural equipment. The study also summarizes existing security and privacy countermeasures suitable for smart agriculture, including authentication and access control protocols, privacy-preserving frameworks, robust intrusion detection systems, and cryptography and key management. Yang is optimistic that the application of existing technologies - such as edge computing, artificial intelligence and blockchain - can be used to mitigate some of the existing problems. He says that AI algorithms can be developed that might detect the presence of malicious users, while existing industrial security standards can be applied to design a targeted security scheme for agricultural IoT. This represents a significant research challenge, he says, because current datasets used in deep-learning approaches are not based on smart agriculture environments. Therefore, new datasets are required to build network intrusion detectors in a smart agriculture environment. "These [new] technologies can help the development of smart agriculture and solve some of the existing security problems," Yang says, "but they have loopholes, so they also bring new security issues."
Researchers at China's Nanjing Agricultural University (NAU) surveyed smart farming and its underlying technologies and utilities, and discovered unique cybersecurity issues stemming from agricultural Internet of Things (IoT) applications. Possible threats to IoT integrity include facility damage, sensor failures in poultry and livestock breeding, and control system intrusions in greenhouses. NAU's Xing Yang said the most pressing vulnerability in smart agriculture concerns the physical environment, like plant factory control system intrusion and unmanned aerial vehicle false positioning; for example, rural areas are prone to poor network signals, which Yang said leads to false base station signals. Yang and his colleagues suggested the use of countermeasures in response, including artificial intelligence to detect malicious users, and the application of existing industrial security standards to design a targeted security framework for agricultural IoT.
[]
[]
[]
scitechnews
None
None
None
None
Researchers at China's Nanjing Agricultural University (NAU) surveyed smart farming and its underlying technologies and utilities, and discovered unique cybersecurity issues stemming from agricultural Internet of Things (IoT) applications. Possible threats to IoT integrity include facility damage, sensor failures in poultry and livestock breeding, and control system intrusions in greenhouses. NAU's Xing Yang said the most pressing vulnerability in smart agriculture concerns the physical environment, like plant factory control system intrusion and unmanned aerial vehicle false positioning; for example, rural areas are prone to poor network signals, which Yang said leads to false base station signals. Yang and his colleagues suggested the use of countermeasures in response, including artificial intelligence to detect malicious users, and the application of existing industrial security standards to design a targeted security framework for agricultural IoT. Some have dubbed this the era of " smart agriculture " - with farms around the world scaling up their use of the Internet, IoT , big data, cloud computing and artificial intelligence to increase yields and sustainability. Yet with so much digital technology, naturally, also comes heightened potential cybersecurity vulnerabilities. There's no scaling back smart agriculture either. By the end of this decade we will need the extra food it produces - with world's population projected to cross 8.5 billion , and more than 840 million people affected by acute hunger. Unless smart agriculture can dramatically increase the global food system's efficiency, the prospect of reducing global malnutrition and hunger - let alone the ambitious goal of zero hunger by 2030 - appears very difficult indeed. Agriculture 4.0 (as smart ag is also called) aims not just at growing more food but also at increased efficiency, more powerful data analysis, and more intelligent automation and decision-making. Although smart agriculture has been extensively studied, "the security issues [around] smart agriculture have not," says Xing Yang from Nanjing Agricultural University in China. Research in the field to date, he adds, has mostly involved applying conventional cybersecurity wisdom to agricultural tech. Agricultural cybersecurity, by contrast, he says, is not given enough attention. Yang and his colleagues surveyed the different kinds of smart agriculture, as well as the key technologies and applications specific to them. Agricultural IoT applications have unique characteristics that give rise to security issues, which the authors enumerate and suggest counter-measures for. (Their research was published in a recent issue of the journal IEEE/CAA Journal of Automatica Sinica .) For example, while field agriculture might be subject to threats from damage to the facility, poultry and livestock breeding may face sensor failures, and greenhouse cultivation could face control system intrusions. All of these could result in damage to the IoT architecture, both hardware and software, leading to failure or malfunction in farming operations. Plus, there are threats to data acquisition technologies - malicious attacks, unauthorized access, privacy leaks, and so on - while blockchain technologies can be vulnerable to access control failure and unsafe consensus agreement. In Yang's opinion, the most pressing security problems in smart agriculture involve the physical environment, such as plant factory control system intrusion and unmanned aerial vehicle (UAV) false positioning. 
"The network for rural areas is not as good as that of cities," he says, "which means that the network signals in some areas are poor, which leads to...false base station signals." The researchers also paid extra attention to agricultural equipment as potential security threats, something that recent studies have not done. "Considering that the deployment of IoT devices in farmland is relatively sparse and cannot be effectively supervised, how to ensure the physical security of these devices is a challenge," Yang says. "In addition, the delay caused by long-distance signal transmission also increases the risk of Sybil attacks [which is] transmitting malicious data through virtual nodes." In their experiments with solar insecticidal lamps, for instance, they found that the lamp's high voltage pulse affects the data transmission from Zigbee -based IoT devices and data acquisition sensors. Thus, Yang says, to minimize unnecessary losses, it's important to study each device in the context of how it's actually deployed in the field, including the possible safety risks of specific agricultural equipment. The study also summarizes existing security and privacy countermeasures suitable for smart agriculture, including authentication and access control protocols, privacy-preserving frameworks, robust intrusion detection systems, and cryptography and key management. Yang is optimistic that the application of existing technologies - such as edge computing, artificial intelligence and blockchain - can be used to mitigate some of the existing problems. He says that AI algorithms can be developed that might detect the presence of malicious users, while existing industrial security standards can be applied to design a targeted security scheme for agricultural IoT. This represents a significant research challenge, he says, because current datasets used in deep-learning approaches are not based on smart agriculture environments. Therefore, new datasets are required to build network intrusion detectors in a smart agriculture environment. "These [new] technologies can help the development of smart agriculture and solve some of the existing security problems," Yang says, "but they have loopholes, so they also bring new security issues."
735
Faulty Software Snarls Vaccine Sign-Ups
Persistent flaws in software for setting up Covid-19 vaccination appointments online threaten to slow the U.S. vaccine rollout, with many states switching software providers originally recommended by the U.S. Centers for Disease Control and Prevention (CDC), with little improvement. The CDC recommended multinational Deloitte's Vaccine Administration Management System (VAMS) software, which Deloitte said was originally intended for smaller groups at early stages of state rollouts. Health experts blamed glitches on multiple impediments, including developers condensing work that would normally take years into weeks, localities' individual eligibility requirements, and the inability for different scheduling systems to communicate with each other. Some frustrated states switched from VAMS to Maryland's PrepMod software, which has been undermined by myriad shortcomings, including failures to reserve appointment slots as people filled out their information. PrepMod's Tiffany Tate has faulted healthcare workers, and not the software itself, for such difficulties.
[]
[]
[]
scitechnews
None
None
None
None
Persistent flaws in software for setting up Covid-19 vaccination appointments online threaten to slow the U.S. vaccine rollout, with many states switching software providers originally recommended by the U.S. Centers for Disease Control and Prevention (CDC), with little improvement. The CDC recommended multinational Deloitte's Vaccine Administration Management System (VAMS) software, which Deloitte said was originally intended for smaller groups at early stages of state rollouts. Health experts blamed glitches on multiple impediments, including developers condensing work that would normally take years into weeks, localities' individual eligibility requirements, and the inability for different scheduling systems to communicate with each other. Some frustrated states switched from VAMS to Maryland's PrepMod software, which has been undermined by myriad shortcomings, including failures to reserve appointment slots as people filled out their information. PrepMod's Tiffany Tate has faulted healthcare workers, and not the software itself, for such difficulties.
736
Robots Can Use Eye Contact to Draw Out Reluctant Participants in Group
Researchers at KTH Royal Institute of Technology published results of experiments in which robots led a Swedish word game with individuals whose proficiency in the Nordic language was varied. They found that by redirecting its gaze to less proficient players, a robot can elicit involvement from even the most reluctant participants. Doctoral students Sarah Gillet and Ronald Cumbal say the results offer evidence that robots could play a productive role in educational settings. Calling on someone by name isn't always the best way to elicit engagement, Gillet says. "Gaze can by nature influence very dynamically how much people are participating, especially if there is this natural tendency for imbalance - due to the differences in language proficiency," she says. "If someone is not inclined to participate for some reason, we showed that gaze is able to overcome this difference and help everyone to participate." Cumbal says that studies have shown that robots can support group discussion, but this is the first study to examine what happens when a robot uses gaze in a group interaction that isn't balanced - when it is dominated by one or more individuals. The experiment involved pairs of players - one fluent in Swedish and one who is learning Swedish. The players were instructed to give the robot clues in Swedish so that it could guess the correct term. The face of the robot was an animated projection on a specially designed plastic mask. While it would be natural for a fluent speaker to dominate such a scenario, Cumbal says, the robot was able to prompt the participation of the less fluent player by redirecting its gaze naturally toward them and silently waiting for them to hazard an attempt. "Robot gaze can modify group dynamics - what role people take in a situation," he says. "Our work builds on that and shows further that even when there is an imbalance in skills required for the activity, the gaze of a robot can still influence how the participants contribute." The study was published at the ACM/IEEE International Conference on Human-Robot Interaction '21, and subsequently nominated for Best Paper Award at the conference. Funding for the research was provided by the Swedish Research Council, the Swedish Foundation for Strategic Research and the Jacobs Foundation. David Callahan Update: on March 11, the paper was named Best Paper at the HRI 2021 conference. -Ed.
Researchers at Sweden's KTH Royal Institute of Technology have demonstrated that robots can encourage participation in group settings by making eye contact with reluctant participants. The study involved robots leading a Swedish word game with a pair of participants, one fluent in Swedish and one learning the language. The game required the players to give clues in Swedish to the robot, whose face was an animated projection on a plastic mask. The robot would redirect its gaze to the less-fluent player to encourage their participation. KTH's Ronald Cumbal said the study's results demonstrate "that even when there is an imbalance in skills required for the activity, the gaze of a robot can still influence how the participants contribute."
[]
[]
[]
scitechnews
None
None
None
None
Researchers at Sweden's KTH Royal Institute of Technology have demonstrated that robots can encourage participation in group settings by making eye contact with reluctant participants. The study involved robots leading a Swedish word game with a pair of participants, one fluent in Swedish and one learning the language. The game required the players to give clues in Swedish to the robot, whose face was an animated projection on a plastic mask. The robot would redirect its gaze to the less-fluent player to encourage their participation. KTH's Ronald Cumbal said the study's results demonstrate "that even when there is an imbalance in skills required for the activity, the gaze of a robot can still influence how the participants contribute." Researchers at KTH Royal Institute of Technology published results of experiments in which robots led a Swedish word game with individuals whose proficiency in the Nordic language was varied. They found that by redirecting its gaze to less proficient players, a robot can elicit involvement from even the most reluctant participants. Doctoral students Sarah Gillet and Ronald Cumbal say the results offer evidence that robots could play a productive role in educational settings. Calling on someone by name isn't always the best way to elicit engagement, Gillet says. "Gaze can by nature influence very dynamically how much people are participating, especially if there is this natural tendency for imbalance - due to the differences in language proficiency," she says. "If someone is not inclined to participate for some reason, we showed that gaze is able to overcome this difference and help everyone to participate." Cumbal says that studies have shown that robots can support group discussion, but this is the first study to examine what happens when a robot uses gaze in a group interaction that isn't balanced - when it is dominated by one or more individuals. The experiment involved pairs of players - one fluent in Swedish and one who is learning Swedish. The players were instructed to give the robot clues in Swedish so that it could guess the correct term. The face of the robot was an animated projection on a specially designed plastic mask. While it would be natural for a fluent speaker to dominate such a scenario, Cumbal says, the robot was able to prompt the participation of the less fluent player by redirecting its gaze naturally toward them and silently waiting for them to hazard an attempt. "Robot gaze can modify group dynamics - what role people take in a situation," he says. "Our work builds on that and shows further that even when there is an imbalance in skills required for the activity, the gaze of a robot can still influence how the participants contribute." The study was published at the ACM/IEEE International Conference on Human-Robot Interaction '21, and subsequently nominated for Best Paper Award at the conference. Funding for the research was provided by the Swedish Research Council, the Swedish Foundation for Strategic Research and the Jacobs Foundation. David Callahan Update: on March 11, the paper was named Best Paper at the HRI 2021 conference. -Ed.
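A minimal sketch of the gaze policy, under the assumption that participation is tracked as accumulated speaking time per player, is shown below. The real system described in the paper drives an animated face and times its silences; one plausible core selection step is simply to look toward whoever has contributed least.

```python
# Minimal sketch, assuming participation is measured as speaking time per player:
# the robot directs its gaze at the least-active participant and waits for a clue.
def choose_gaze_target(speaking_seconds):
    """speaking_seconds: dict mapping player name -> accumulated speaking time."""
    return min(speaking_seconds, key=speaking_seconds.get)

participation = {"fluent_player": 74.0, "language_learner": 21.5}
print("robot gazes at:", choose_gaze_target(participation))   # -> language_learner
```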
737
New Approach Found for Energy-Efficient AI Applications
Most new achievements in artificial intelligence (AI) require very large neural networks. They consist of hundreds of millions of neurons arranged in several hundred layers, i.e. they have very "deep" network structures. These large, deep neural networks consume a lot of energy in the computer. Those neural networks that are used in image classification (e.g. face and object recognition) are particularly energy-intensive, since they have to send very many numerical values from one neuron layer to the next with great accuracy in each time cycle. Computer scientist Wolfgang Maass, together with his PhD student Christoph Stöckl, has now found a design method for artificial neural networks that paves the way for energy-efficient high-performance AI hardware (e.g. chips for driver assistance systems, smartphones and other mobile devices). The two researchers from the Institute of Theoretical Computer Science at Graz University of Technology (TU Graz) have optimized artificial neuronal networks in computer simulations for image classification in such a way that the neurons - similar to neurons in the brain - only need to send out signals relatively rarely and those that they do are very simple. The proven classification accuracy of images with this design is nevertheless very close to the current state of the art of current image classification tools. Maass and Stöckl were inspired by the way the human brain works. It processes several trillion computing operations per second, but only requires about 20 watts. This low energy consumption is made possible by inter-neuronal communication by means of very simple electrical impulses, so-called spikes. The information is thereby encoded not only by the number of spikes, but also by their time-varying patterns. "You can think of it like Morse code. The pauses between the signals also transmit information," Maass explains. That spike-based hardware can reduce the energy consumption of neural network applications is not new. However, so far this could not be realized for the very deep and large neural networks that are needed for really good image classification. In the design method of Maass and Stöckl, the transmission of information now depends not only on how many spikes a neuron sends out, but also on when the neuron sends out these spikes. The time or the temporal intervals between the spikes practically encode themselves and can therefore transmit a great deal of additional information. "We show that with just a few spikes - an average of two in our simulations - as much information can be conveyed between processors as in more energy-intensive hardware," Maass said. With their results, the two computer scientists from TU Graz provide a new approach for hardware that combines few spikes and thus low energy consumption with state-of-the-art performances of AI applications. The findings could dramatically accelerate the development of energy-efficient AI applications and are described in the journal Nature Machine Intelligence . This research work is anchored in the Fields of Expertise " Human and Biotechnology " and " Information, Communication & Computing ," two of the five Fields of Expertise of TU Graz. It was funded by the European Human Brain Project , which combines neuroscience, medicine and the development of brain-inspired technologies. The researchers at the Institute for the Theoretical Computer Science have also recently attracted attention with other research successes on a new learning algorithm and a biological programming language .
Researchers at Austria's Graz University of Technology (TU Graz) have demonstrated a new approach to energy-efficient artificial intelligence that needs very few signals to function, and that assigns meaning to pauses between signals. TU Graz's Wolfgang Maass and Christoph Stöckl optimized artificial neural networks in computer models for image classification so the neurons only have to transmit extremely simple signals occasionally, achieving an accuracy similar to that of state-of-the-art tools. The data transmission model relies not only on how many spikes a neuron sends out, but also on when the neuron transmits the spikes. Maass said, "With just a few spikes - an average of two in our simulations - as much information can be conveyed between processors as in more energy-intensive hardware."
[]
[]
[]
scitechnews
None
None
None
None
Researchers at Austria's Graz University of Technology (TU Graz) have demonstrated a new approach to energy-efficient artificial intelligence that needs very few signals to function, and that assigns meaning to pauses between signals. TU Graz's Wolfgang Maass and Christoph Stöckl optimized artificial neural networks in computer models for image classification so the neurons only have to transmit extremely simple signals occasionally, achieving an accuracy similar to that of state-of-the-art tools. The data transmission model relies not only on how many spikes a neuron sends out, but also on when the neuron transmits the spikes. Maass said, "With just a few spikes - an average of two in our simulations - as much information can be conveyed between processors as in more energy-intensive hardware." Most new achievements in artificial intelligence (AI) require very large neural networks. They consist of hundreds of millions of neurons arranged in several hundred layers, i.e. they have very "deep" network structures. These large, deep neural networks consume a lot of energy in the computer. Those neural networks that are used in image classification (e.g. face and object recognition) are particularly energy-intensive, since they have to send very many numerical values from one neuron layer to the next with great accuracy in each time cycle. Computer scientist Wolfgang Maass, together with his PhD student Christoph Stöckl, has now found a design method for artificial neural networks that paves the way for energy-efficient high-performance AI hardware (e.g. chips for driver assistance systems, smartphones and other mobile devices). The two researchers from the Institute of Theoretical Computer Science at Graz University of Technology (TU Graz) have optimized artificial neuronal networks in computer simulations for image classification in such a way that the neurons - similar to neurons in the brain - only need to send out signals relatively rarely and those that they do are very simple. The proven classification accuracy of images with this design is nevertheless very close to the current state of the art of current image classification tools. Maass and Stöckl were inspired by the way the human brain works. It processes several trillion computing operations per second, but only requires about 20 watts. This low energy consumption is made possible by inter-neuronal communication by means of very simple electrical impulses, so-called spikes. The information is thereby encoded not only by the number of spikes, but also by their time-varying patterns. "You can think of it like Morse code. The pauses between the signals also transmit information," Maass explains. That spike-based hardware can reduce the energy consumption of neural network applications is not new. However, so far this could not be realized for the very deep and large neural networks that are needed for really good image classification. In the design method of Maass and Stöckl, the transmission of information now depends not only on how many spikes a neuron sends out, but also on when the neuron sends out these spikes. The time or the temporal intervals between the spikes practically encode themselves and can therefore transmit a great deal of additional information. "We show that with just a few spikes - an average of two in our simulations - as much information can be conveyed between processors as in more energy-intensive hardware," Maass said. 
With their results, the two computer scientists from TU Graz provide a new approach for hardware that combines few spikes and thus low energy consumption with state-of-the-art performances of AI applications. The findings could dramatically accelerate the development of energy-efficient AI applications and are described in the journal Nature Machine Intelligence . This research work is anchored in the Fields of Expertise " Human and Biotechnology " and " Information, Communication & Computing ," two of the five Fields of Expertise of TU Graz. It was funded by the European Human Brain Project , which combines neuroscience, medicine and the development of brain-inspired technologies. The researchers at the Institute for the Theoretical Computer Science have also recently attracted attention with other research successes on a new learning algorithm and a biological programming language .
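The idea that spike timing itself can carry information - "the pauses between the signals also transmit information" - can be made concrete with a toy latency code. The two-spike scheme, the 10-millisecond window and the encode/decode functions below are illustrative assumptions, not the TU Graz conversion method.

```python
# Toy illustration: encode an analog activation in the timing of just two spikes -
# a reference spike plus one whose delay carries the value - and decode it back.
T_MAX_MS = 10.0   # assumed coding window

def encode(value, t0=0.0):
    """Return spike times (ms) for a value in [0, 1]; larger values spike sooner."""
    return [t0, t0 + (1.0 - value) * T_MAX_MS]

def decode(spike_times):
    interval = spike_times[1] - spike_times[0]
    return 1.0 - interval / T_MAX_MS

for v in (0.2, 0.55, 0.9):
    spikes = encode(v)
    print(f"value {v:.2f} -> spikes at {spikes} ms -> decoded {decode(spikes):.2f}")
```

The point of the toy code is only that a single inter-spike interval can stand in for a numerical value that would otherwise be transmitted with many high-precision signals.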
738
Computing Clean Water
Water is perhaps Earth's most critical natural resource. Given increasing demand and increasingly stretched water resources, scientists are pursuing more innovative ways to use and reuse existing water, as well as to design new materials to improve water purification methods. Synthetically created semi-permeable polymer membranes used for contaminant solute removal can provide a level of advanced treatment and improve the energy efficiency of treating water; however, existing knowledge gaps are limiting transformative advances in membrane technology. One basic problem is learning how the affinity, or the attraction, between solutes and membrane surfaces impacts many aspects of the water purification process. "Fouling - where solutes stick to and gunk up membranes - significantly reduces performance and is a major obstacle in designing membranes to treat produced water," said M. Scott Shell , a chemical engineering professor at UC Santa Barbara, who conducts computational simulations of soft materials and biomaterials. "If we can fundamentally understand how solute stickiness is affected by the chemical composition of membrane surfaces, including possible patterning of functional groups on these surfaces, then we can begin to design next-generation, fouling-resistant membranes to repel a wide range of solute types." Now, in a paper published in the Proceedings of the National Academy of Sciences (PNAS), Shell and lead author Jacob Monroe, a recent Ph.D. graduate of the department and a former member of Shell's research group, explain the relevance of macroscopic characterizations of solute-to-surface affinity. "Solute-surface interactions in water determine the behavior of a huge range of physical phenomena and technologies, but are particularly important in water separation and purification, where often many distinct types of solutes need to be removed or captured," said Monroe, now a postdoctoral researcher at the National Institute of Standards and Technology (NIST). "This work tackles the grand challenge of understanding how to design next-generation membranes that can handle huge yearly volumes of highly contaminated water sources, like those produced in oilfield operations, where the concentration of solutes is high and their chemistries quite diverse." Solutes are frequently characterized as spanning a range from hydrophilic, which can be thought of as water-liking and dissolving easily in water, to hydrophobic, or water-disliking and preferring to separate from water, like oil. Surfaces span the same range; for example, water beads up on hydrophobic surfaces and spreads out on hydrophilic surfaces. Hydrophilic solutes like to stick to hydrophilic surfaces, and hydrophobic solutes stick to hydrophobic surfaces. Here, the researchers corroborated the expectation that "like sticks to like," but also discovered, surprisingly, that the complete picture is more complex. "Among the wide range of chemistries that we considered, we found that hydrophilic solutes also like hydrophobic surfaces, and that hydrophobic solutes also like hydrophilic surfaces, though these attractions are weaker than those of like to like," explained Monroe, referencing the eight solutes the group tested, ranging from ammonia and boric acid, to isopropanol and methane. The group selected small-molecule solutes typically found in produced waters to provide a fundamental perspective on solute-surface affinity. 
The computational research group developed an algorithm to repattern surfaces by rearranging surface chemical groups in order to minimize or maximize the affinity of a given solute to the surface, or alternatively, to maximize the surface affinity of one solute relative to that of another. The approach relied on a genetic algorithm that "evolved" surface patterns in a way similar to natural selection, optimizing them toward a particular functional goal. Through simulations, the team discovered that surface affinity was poorly correlated with conventional measures of solute hydrophobicity, such as how soluble a solute is in water. Instead, they found a stronger connection between surface affinity and the way that water molecules near a surface or near a solute change their structures in response. In some cases, these neighboring waters were forced to adopt structures that were unfavorable; by moving closer to hydrophobic surfaces, solutes could then reduce the number of such unfavorable water molecules, providing an overall driving force for affinity. "The missing ingredient was understanding how the water molecules near a surface are structured and move around it," said Monroe. "In particular, water structural fluctuations are enhanced near hydrophobic surfaces, compared to bulk water, or the water far away from the surface. We found that fluctuations drove the stickiness of every small solute type that we tested." The finding is significant because it shows that in designing new surfaces, researchers should focus on the response of water molecules around them and avoid being guided by conventional hydrophobicity metrics. Based on their findings, Monroe and Shell say that surfaces composed of different types of molecular chemistries may be the key to achieving multiple performance goals, such as preventing an assortment of solutes from fouling a membrane. "Surfaces with multiple types of chemical groups offer great potential. We showed that not only the presence of different surface groups, but their arrangement or pattern, influence solute-surface affinity," Monroe said. "Just by rearranging the spatial pattern, it becomes possible to significantly increase or decrease the surface affinity of a given solute, without changing how many surface groups are present." According to the team, their findings show that computational methods can contribute in significant ways to next-generation membrane systems for sustainable water treatment. "This work provided detailed insight into the molecular-scale interactions that control solute-surface affinity," said Shell, the John E. Myers Founder's Chair in Chemical Engineering. "Moreover, it shows that surface patterning offers a powerful design strategy in engineering membranes that are resistant to fouling by a variety of contaminants and that can precisely control how each solute type is separated out. As a result, it offers molecular design rules and targets for next-generation membrane systems capable of purifying highly contaminated waters in an energy-efficient manner." Most of the surfaces examined were model systems, simplified to facilitate analysis and understanding. The researchers say that the natural next step will be to examine increasingly complex and realistic surfaces that more closely mimic actual membranes used in water treatment.
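The genetic-algorithm step described above can be sketched in a few lines of Python. This is a minimal, hedged illustration only: the grid representation, mutation and crossover settings, and the scoring function are all assumptions standing in for the study's molecular simulations of solute-surface affinity.

# Minimal genetic-algorithm sketch of surface repatterning. The surface is a
# flat grid of hydrophilic (1) / hydrophobic (0) groups, and `affinity` is a
# placeholder objective, not the paper's simulated solute-surface affinity.
import random

GRID = 8              # surface is GRID x GRID chemical groups
POP, GENS = 40, 200   # population size and number of generations

def random_surface():
    return [random.randint(0, 1) for _ in range(GRID * GRID)]

def affinity(surface):
    # Stand-in score: reward horizontal clustering of like groups.
    score = 0
    for i in range(GRID):
        for j in range(GRID - 1):
            score += surface[i * GRID + j] == surface[i * GRID + j + 1]
    return score

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(s, rate=0.02):
    return [1 - g if random.random() < rate else g for g in s]

population = [random_surface() for _ in range(POP)]
for _ in range(GENS):
    population.sort(key=affinity, reverse=True)   # "natural selection"
    parents = population[: POP // 2]
    children = [mutate(crossover(*random.sample(parents, 2)))
                for _ in range(POP - len(parents))]
    population = parents + children

print("best stand-in affinity score:", max(affinity(s) for s in population))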
Another important step to bring the modeling closer to membrane design will be to move beyond understanding merely how sticky a membrane is for a solute and toward computing the rates at which solutes move through membranes. The research was performed as part of the Center for Materials for Water and Energy Systems (M-WET), an Energy Frontier Research Center supported by the U.S. Department of Energy. The collaborative partnership includes researchers at UCSB, the University of Texas at Austin, and the Lawrence Berkeley National Laboratory.
Researchers at the University of California, Santa Barbara (UCSB), the University of Texas at Austin, and the U.S. Department of Energy's Lawrence Berkeley National Laboratory have computationally modeled affinity between solutes and membrane surfaces, to characterize their effects on water purification. The research team developed a genetic algorithm to repattern surfaces by reconfiguring surface chemical groups to minimize or maximize a given solute's affinity for the surface, or to maximize a solute's surface affinity relative to that of another. Simulations demonstrated a stronger link between surface affinity and how molecules near a surface or a solute reorganize in response. Former UCSB researcher Jacob Monroe said, "This work tackles the grand challenge of understanding how to design next-generation membranes that can handle huge yearly volumes of highly contaminated water sources."
[]
[]
[]
scitechnews
None
None
None
None
Researchers at the University of California, Santa Barbara (UCSB), the University of Texas at Austin, and the U.S. Department of Energy's Lawrence Berkeley National Laboratory have computationally modeled affinity between solutes and membrane surfaces, to characterize their effects on water purification. The research team developed a genetic algorithm to repattern surfaces by reconfiguring surface chemical groups to minimize or maximize a given solute's affinity for the surface, or to maximize a solute's surface affinity relative to that of another. Simulations demonstrated a stronger link between surface affinity and how molecules near a surface or a solute reorganize in response. Former UCSB researcher Jacob Monroe said, "This work tackles the grand challenge of understanding how to design next-generation membranes that can handle huge yearly volumes of highly contaminated water sources." Water is perhaps Earth's most critical natural resource. Given increasing demand and increasingly stretched water resources, scientists are pursuing more innovative ways to use and reuse existing water, as well as to design new materials to improve water purification methods. Synthetically created semi-permeable polymer membranes used for contaminant solute removal can provide a level of advanced treatment and improve the energy efficiency of treating water; however, existing knowledge gaps are limiting transformative advances in membrane technology. One basic problem is learning how the affinity, or the attraction, between solutes and membrane surfaces impacts many aspects of the water purification process. "Fouling - where solutes stick to and gunk up membranes - significantly reduces performance and is a major obstacle in designing membranes to treat produced water," said M. Scott Shell , a chemical engineering professor at UC Santa Barbara, who conducts computational simulations of soft materials and biomaterials. "If we can fundamentally understand how solute stickiness is affected by the chemical composition of membrane surfaces, including possible patterning of functional groups on these surfaces, then we can begin to design next-generation, fouling-resistant membranes to repel a wide range of solute types." Now, in a paper published in the Proceedings of the National Academy of Sciences (PNAS), Shell and lead author Jacob Monroe, a recent Ph.D. graduate of the department and a former member of Shell's research group, explain the relevance of macroscopic characterizations of solute-to-surface affinity. "Solute-surface interactions in water determine the behavior of a huge range of physical phenomena and technologies, but are particularly important in water separation and purification, where often many distinct types of solutes need to be removed or captured," said Monroe, now a postdoctoral researcher at the National Institute of Standards and Technology (NIST). "This work tackles the grand challenge of understanding how to design next-generation membranes that can handle huge yearly volumes of highly contaminated water sources, like those produced in oilfield operations, where the concentration of solutes is high and their chemistries quite diverse." Solutes are frequently characterized as spanning a range from hydrophilic, which can be thought of as water-liking and dissolving easily in water, to hydrophobic, or water-disliking and preferring to separate from water, like oil. 
Surfaces span the same range; for example, water beads up on hydrophobic surfaces and spreads out on hydrophilic surfaces. Hydrophilic solutes like to stick to hydrophilic surfaces, and hydrophobic solutes stick to hydrophobic surfaces. Here, the researchers corroborated the expectation that "like sticks to like," but also discovered, surprisingly, that the complete picture is more complex. "Among the wide range of chemistries that we considered, we found that hydrophilic solutes also like hydrophobic surfaces, and that hydrophobic solutes also like hydrophilic surfaces, though these attractions are weaker than those of like to like," explained Monroe, referencing the eight solutes the group tested, ranging from ammonia and boric acid, to isopropanol and methane. The group selected small-molecule solutes typically found in produced waters to provide a fundamental perspective on solute-surface affinity. The computational research group developed an algorithm to repattern surfaces by rearranging surface chemical groups in order to minimize or maximize the affinity of a given solute to the surface, or alternatively, to maximize the surface affinity of one solute relative to that of another. The approach relied on a genetic algorithm that "evolved" surface patterns in a way similar to natural selection, optimizing them toward a particular function goal. Through simulations, the team discovered that surface affinity was poorly correlated to conventional methods of solute hydrophobicity, such as how soluble a solute is in water. Instead, they found a stronger connection between surface affinity and the way that water molecules near a surface or near a solute change their structures in response. In some cases, these neighboring waters were forced to adopt structures that were unfavorable; by moving closer to hydrophobic surfaces, solutes could then reduce the number of such unfavorable water molecules, providing an overall driving force for affinity. "The missing ingredient was understanding how the water molecules near a surface are structured and move around it," said Monroe. "In particular, water structural fluctuations are enhanced near hydrophobic surfaces, compared to bulk water, or the water far away from the surface. We found that fluctuations drove the stickiness of every small solute types that we tested. " The finding is significant because it shows that in designing new surfaces, researchers should focus on the response of water molecules around them and avoid being guided by conventional hydrophobicity metrics. Based on their findings, Monroe and Shell say that surfaces comprised of different types of molecular chemistries may be the key to achieving multiple performance goals, such as preventing an assortment of solutes from fouling a membrane. "Surfaces with multiple types of chemical groups offer great potential. We showed that not only the presence of different surface groups, but their arrangement or pattern, influence solute-surface affinity," Monroe said. "Just by rearranging the spatial pattern, it becomes possible to significantly increase or decrease the surface affinity of a given solute, without changing how many surface groups are present." According to the team, their findings show that computational methods can contribute in significant ways to next-generation membrane systems for sustainable water treatment. "This work provided detailed insight into the molecular-scale interactions that control solute-surface affinity," said Shell, the John E. 
Myers Founder's Chair in Chemical Engineering. "Moreover, it shows that surface patterning offers a powerful design strategy in engineering membranes are resistant to fouling by a variety of contaminants and that can precisely control how each solute type is separated out. As a result, it offers molecular design rules and targets for next-generation membrane systems capable of purifying highly contaminated waters in an energy-efficient manner." Most of the surfaces examined were model systems, simplified to facilitate analysis and understanding. The researchers say that the natural next step will be to examine increasingly complex and realistic surfaces that more closely mimic actual membranes used in water treatment. Another important step to bring the modeling closer to membrane design will be to move beyond understanding merely how sticky a membrane is for a solute and toward computing the rates at which solutes move through membranes. The research was performed as part of the Center for Materials for Water and Energy Systems (M-WET), an Energy Frontier Research Center supported by the U.S. Department of Energy. The collaborative partnership includes researchers at UCSB, the University of Texas at Austin, and the Lawrence Berkeley National Laboratory.
739
NTSB Asks NHTSA for More Self-Driving Car Rules, Citing Tesla's 'Full Self-Driving' Beta
One of the things that Tesla likes to do is give the features of its cars friendly-sounding, easy-to-remember names. Ludicrous mode, fart mode, and Autopilot are examples of this. For the most part, this is fine, but sometimes -- and this is the case with Autopilot -- those names can give people a false sense of confidence. Of course, the latest and probably most egregious example of this is with the "Full Self-Driving" (FSD) option that Tesla has been pushing for years, but which has only recently been getting out into the world as a semipublic beta test. Now, Tesla boss Elon Musk has admitted to regulators that the FSD beta is really just an advanced driver-assistance system, but now some of those regulators are concerned that the lack of regulation in naming and testing these systems could be a recipe for disaster, according to a report published Friday by CNBC. Specifically, the National Transportation Safety Board has reached out to its sister agency, the National Highway Traffic Safety Administration, in the form of a letter asking it to lay down much stricter guidelines when it comes to advanced driver-assistance systems and self-driving car development and testing on public roads. The letter, written by NTSB Chair Robert Sumwalt, calls Tesla out for testing its Level 2 ADAS system on public roads with regular drivers while calling it Full Self-Driving, noting that the testing has proceeded with "limited oversight or reporting requirements" and that "NHTSA's hands-off approach to oversight of [autonomous vehicle] testing poses a potential risk to motorists and other road users." He mentions Tesla a staggering 16 times in the letter, though the regulations he seems to be asking for would also likely directly affect other companies explicitly focused on developing autonomous vehicles (Level 4 and 5) like Cruise and Waymo. OK, so why is the chair of the NTSB writing to NHTSA asking for help instead of doing something about it? The answer to that lies in what each organization's role is. NTSB is responsible for investigating vehicle crashes, looking for their underlying cause, and often making recommendations to the government and the auto industry based on its findings. On the other hand, NHTSA is responsible for things like vehicle crash testing, vehicle recalls and keeping the book on vehicle safety standards. What will be the result of this letter, in the best-case scenario? Well, ideally, we'd see NHTSA change its historically hands-off position on regulating self-driving vehicle development and issue some hard and fast rules, not only about what kind of testing is allowed on public roads, but also establishing some clear nomenclature that makes it easier for customers to understand what their vehicles' ADAS systems can and can't do and prevent manufacturers from using marketing to embellish the truth about their systems' capabilities. If you're curious about the entire contents of the letter from Chairperson Sumwalt, you can read it below:
The U.S. National Transportation Safety Board (NTSB) has requested tougher rules for advanced driver-assistance systems (ADAS) and self-driving car development and testing on public roads from the National Highway Traffic Safety Administration (NHTSA). NTSB's Robert Sumwalt cited electric-vehicle company Tesla's beta test of its Level 2 ADAS on public roads with regular drivers, called Full Self-Driving, as "having limited oversight or reporting requirements." Sumwalt said, "NHTSA's hands-off approach to oversight of [autonomous vehicle] testing poses a potential risk to motorists and other road users." Ideally, NHTSA would not only impose stronger rules for ADAS testing on public roads, but also provide clear terminology that improves customer understanding of ADAS' capabilities, and ban manufacturers' use of marketing to exaggerate those capabilities.
[]
[]
[]
scitechnews
None
None
None
None
The U.S. National Transportation Safety Board (NTSB) has requested tougher rules for advanced driver-assistance systems (ADAS) and self-driving car development and testing on public roads from the National Highway Traffic Safety Administration (NHTSA). NTSB's Robert Sumwalt cited electric-vehicle company Tesla's beta test of its Level 2 ADAS on public roads with regular drivers, called Full Self-Driving, as "having limited oversight or reporting requirements." Sumwalt said, "NHTSA's hands-off approach to oversight of [autonomous vehicle] testing poses a potential risk to motorists and other road users." Ideally, NHTSA would not only impose stronger rules for ADAS testing on public roads, but also provide clear terminology that improves customer understanding of ADAS' capabilities, and ban manufacturers' use of marketing to exaggerate those capabilities. One of the things that Tesla likes to do is give the features of its cars friendly sounding, easy-to-remember names. Ludicrous mode, fart mode, and Autopilot are examples of this. For the most part, this is fine, but sometimes -- and this is the case with Autopilot -- those names can give people a false sense of confidence . Of course, the latest and probably most egregious example of this is with the "Full Self-Driving" (FSD) option that Tesla has been pushing for years, but which has only recently been getting out into the world as a semipublic beta test . Now, Tesla boss Elon Musk has admitted to regulators that the FSD beta is really just an advanced driver-assistance system , but now some of those regulators are concerned that the lack of regulation in naming and testing these systems could be a recipe for disaster, according to a report published Friday by CNBC . Specifically, the National Transportation Safety Board has reached out to its sister agency, the National Highway Traffic Safety Administration, in the form of a letter asking it to lay down much more strict guidelines when it comes to advanced driver-assistance systems and self-driving car development and testing on public roads. The letter, written by NTSB Chair Robert Sumwalt, calls Tesla out for testing its Level 2 ADAS system on public roads with regular drivers while calling it Full Self-Driving, stating: He mentions Tesla a staggering 16 times in the letter, though the regulations he seems to be asking for would also likely directly affect other companies explicitly focused on developing autonomous vehicles (Level 4 and 5) like Cruise and Waymo . OK, so why is the chair of the NTSB writing to NHTSA asking for help instead of doing something about it? The answer to that lies in what each organization's role is. NTSB is responsible for investigating vehicle crashes, looking for their underlying cause, and often making recommendations to the government and the auto industry based on its findings. On the other hand, NHTSA is responsible for things like vehicle crash testing, vehicle recalls and keeping the book on vehicle safety standards. What will be the result of this letter, in the best-case scenario? Well, ideally, we'd see NHTSA change its historically hands-off position on regulating self-driving vehicle development and issue some hard and fast rules, not only about what kind of testing is allowed on public roads, but also establishing some clear nomenclature that makes it easier for customers to understand what their vehicles' ADAS systems can and can't do and prevent manufacturers from using marketing to embellish the truth about their systems' capabilities. 
If you're curious about the entire contents of the letter from Chairperson Sumwalt to NHTSA, you can read it below:
740
Deep Learning Enables Real-Time 3D Holograms on Smartphone
Using artificial intelligence, scientists can now rapidly generate photorealistic color 3D holograms even on a smartphone. And according to a new study, this new technology could find use in virtual reality (VR) and augmented reality (AR) headsets and other applications. A hologram is an image that essentially resembles a 2D window looking onto a 3D scene. The pixels of each hologram scatter light waves falling onto them, making these waves interact with each other in ways that generate an illusion of depth. Holographic video displays create 3D images that people can view without feeling eye strain, unlike conventional 3D displays that produce the illusion of depth using 2D images. However, although companies such as Samsung have recently made strides toward developing hardware that can display holographic video, it remains a major challenge actually generating the holographic data for such devices to display. Each hologram encodes an extraordinary amount of data in order to create the illusion of depth throughout an image. As such, generating holographic video has often required a supercomputer's worth of computing power. In order to bring holographic video to the masses, scientists have tried a number of different strategies to cut down the amount of computation needed - for example, replacing complex physics simulations with simple lookup tables. However, these often come at the cost of image quality. Now researchers at MIT have developed a new way to produce holograms nearly instantly - a deep-learning based method so efficient, it can generate holograms on a laptop in a blink of an eye. They detailed their findings this week , which were funded in part by Sony, online in the journal Nature . "Everything worked out magically, which really exceeded all of our expectations," says study lead author Liang Shi, a computer scientist at MIT. Using physics simulations for computer-generated holography involves calculating the appearance of many chunks of a hologram and then combining them to get the final hologram, Shi notes. Using lookup tables is like memorizing a set of frequently used chunks of hologram, but this sacrifices accuracy and still requires the combination step, he says. In a way, computer-generated holography is a bit like figuring out how to cut a cake, Shi says. Using physics simulations to calculate the appearance of each point in space is a time-consuming process that resembles using eight precise cuts to produce eight slices of cake. Using lookup tables for computer-generated holography is like marking the boundary of each slice before cutting. Although this saves a bit of time by eliminating the step of calculating where to cut, carrying out all eight cuts still takes up a lot of time. In contrast, the new technique uses deep learning to essentially figure out how to cut a cake into eight slices using just three cuts , Shi says. The convolutional neural network - a system that roughly mimics how the human brain processes visual data - learns shortcuts to generate a complete hologram without needing to separately calculate how each chunk of it appears, "which will reduce total operations by orders of magnitude," he says. A visualization of 3-D hologram computation. (Left) A 3-D model. (Middle) A color image that includes depth data. (Right) A simulation of the scattered light patterns generating a 3-D hologram. Image: MIT The researchers first built a custom database of 4,000 computer-generated images, which each included color and depth information for each pixel. 
This database also included a 3D hologram corresponding to each image. Using this data, the convolutional neural network learned how best to generate holograms from the images. It could then produce new holograms from images with depth information, which is provided with typical computer-generated images and can be calculated from a multi-camera setup or from lidar sensors, both of which are standard on some new iPhones. The new system requires less than 620 kilobytes of memory, and can generate 60 color 3D holograms per second with a resolution of 1,920 by 1,080 pixels on a single consumer-grade GPU. The researchers could run it on an iPhone 11 Pro at a rate of 1.1 holograms per second and on a Google Edge TPU at a rate of 2 holograms per second, suggesting it could one day generate holograms in real time on future virtual-reality (VR) and augmented-reality (AR) mobile headsets. Real-time 3D holography might also help enhance so-called volumetric 3D printing techniques, which create 3D objects by projecting images onto vats of liquid and can generate complex hollow structures. The scientists note their technique could also find use in optical and acoustic tweezers useful for manipulating matter on a microscopic level, as well as holographic microscopes that can analyze cells and conventional static holograms for use in art, security, data storage and other applications. Future research might add eye-tracking technology to speed up the system by creating holograms that are high-resolution only where the eyes are looking, Shi says. Another direction is to generate holograms with a person's visual acuity in mind, so users with eyeglasses don't need special VR headsets matching their eye prescription, he adds.
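To make the overall pipeline concrete, here is a hedged PyTorch sketch that maps an RGB-D image (color plus depth, four channels) to a two-channel output interpreted as the real and imaginary parts of a hologram. The layer count, channel widths, and test resolution are illustrative assumptions and do not reproduce the MIT team's published network or its training procedure.

# Hedged sketch of an RGB-D-to-hologram convolutional network (illustrative
# architecture only, not the MIT model).
import torch
import torch.nn as nn

class HologramNet(nn.Module):
    def __init__(self, width=32, layers=6):
        super().__init__()
        blocks = [nn.Conv2d(4, width, 3, padding=1), nn.ReLU()]
        for _ in range(layers - 2):
            blocks += [nn.Conv2d(width, width, 3, padding=1), nn.ReLU()]
        blocks += [nn.Conv2d(width, 2, 3, padding=1)]        # real + imaginary
        self.net = nn.Sequential(*blocks)

    def forward(self, rgbd):                                  # (N, 4, H, W)
        out = self.net(rgbd)
        return torch.complex(out[:, 0], out[:, 1])            # (N, H, W) complex

model = HologramNet()
rgbd = torch.rand(1, 4, 270, 480)      # a reduced-resolution RGB-D test frame
with torch.no_grad():
    hologram = model(rgbd)
print(hologram.shape, hologram.dtype)  # torch.Size([1, 270, 480]) torch.complex64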
Massachusetts Institute of Technology (MIT) scientists can generate photorealistic three-dimensional (3D) holograms in color on a smartphone, in real time. The technique utilizes a deep learning convolutional neural network (CNN) to generate holograms without requiring separate calculations for how each chunk of the image appears. The MIT researchers compiled a database of 4,000 computer-generated images, each with color and depth information for each pixel, as well as a corresponding 3D hologram for each image. The CNN tapped this data to calculate an optimal hologram generation process, then produced new holograms from images with depth information calculated from a multi-camera setup or from LiDAR sensors included in certain iPhones. The system can generate 60 color 3D holograms per second with a resolution of 1,920 by 1,080 pixels on one consumer-grade graphics processing unit (GPU), using less than 620 kilobytes of memory.
[]
[]
[]
scitechnews
None
None
None
None
Massachusetts Institute of Technology (MIT) scientists can generate photorealistic three-dimensional (3D) holograms in color on a smartphone, in real time. The technique utilizes a deep learning convolutional neural network (CNN) to generate holograms without requiring separate calculations for how each chunk of the image appears. The MIT researchers compiled a database of 4,000 computer-generated images, each with color and depth information for each pixel, as well as a corresponding 3D hologram for each image. The CNN tapped this data to calculate an optimal hologram generation process, then produced new holograms from images with depth information calculated from a multi-camera setup or from LiDAR sensors included in certain iPhones. The system can generate 60 color 3D holograms per second with a resolution of 1,920 by 1,080 pixels on one consumer-grade graphics processing unit (GPU), using less than 620 kilobytes of memory. Using artificial intelligence, scientists can now rapidly generate photorealistic color 3D holograms even on a smartphone. And according to a new study, this new technology could find use in virtual reality (VR) and augmented reality (AR) headsets and other applications. A hologram is an image that essentially resembles a 2D window looking onto a 3D scene. The pixels of each hologram scatter light waves falling onto them, making these waves interact with each other in ways that generate an illusion of depth. Holographic video displays create 3D images that people can view without feeling eye strain, unlike conventional 3D displays that produce the illusion of depth using 2D images. However, although companies such as Samsung have recently made strides toward developing hardware that can display holographic video, it remains a major challenge actually generating the holographic data for such devices to display. Each hologram encodes an extraordinary amount of data in order to create the illusion of depth throughout an image. As such, generating holographic video has often required a supercomputer's worth of computing power. In order to bring holographic video to the masses, scientists have tried a number of different strategies to cut down the amount of computation needed - for example, replacing complex physics simulations with simple lookup tables. However, these often come at the cost of image quality. Now researchers at MIT have developed a new way to produce holograms nearly instantly - a deep-learning based method so efficient, it can generate holograms on a laptop in a blink of an eye. They detailed their findings this week , which were funded in part by Sony, online in the journal Nature . "Everything worked out magically, which really exceeded all of our expectations," says study lead author Liang Shi, a computer scientist at MIT. Using physics simulations for computer-generated holography involves calculating the appearance of many chunks of a hologram and then combining them to get the final hologram, Shi notes. Using lookup tables is like memorizing a set of frequently used chunks of hologram, but this sacrifices accuracy and still requires the combination step, he says. In a way, computer-generated holography is a bit like figuring out how to cut a cake, Shi says. Using physics simulations to calculate the appearance of each point in space is a time-consuming process that resembles using eight precise cuts to produce eight slices of cake. Using lookup tables for computer-generated holography is like marking the boundary of each slice before cutting. 
Although this saves a bit of time by eliminating the step of calculating where to cut, carrying out all eight cuts still takes up a lot of time. In contrast, the new technique uses deep learning to essentially figure out how to cut a cake into eight slices using just three cuts , Shi says. The convolutional neural network - a system that roughly mimics how the human brain processes visual data - learns shortcuts to generate a complete hologram without needing to separately calculate how each chunk of it appears, "which will reduce total operations by orders of magnitude," he says. A visualization of 3-D hologram computation. (Left) A 3-D model. (Middle) A color image that includes depth data. (Right) A simulation of the scattered light patterns generating a 3-D hologram. Image: MIT The researchers first built a custom database of 4,000 computer-generated images, which each included color and depth information for each pixel. This database also included a 3D hologram corresponding to each image. Using this data, the convolutional neural network learned how to calculate how best to generate holograms from the images. It could then produce new holograms from images with depth information, which is provided with typical computer-generated images and can be calculated from a multi-camera setup or from lidar sensors, both of which are standard on some new iPhones. The new system requires less than 620 kilobytes of memory, and can generate 60 color 3D holograms per second with a resolution of 1,920 by 1,080 pixels on a single consumer-grade GPU. The researchers could run it an iPhone 11 Pro at a rate of 1.1 holograms per second and on a Google Edge TPU at a rate of 2 holograms per second, suggesting it could one day generate holograms in real-time on future virtual-reality (VR) and augmented-reality (AR) mobile headsets. Real-time 3D holography might also help enhance so-called volumetric 3D printing techniques , which create 3D objects by projecting images onto vats of liquid and can generate complex hollow structures. The scientists note their technique could also find use in optical and acoustic tweezers useful for manipulating matter on a microscopic level, as well as holographic microscopes that can analyze cells and conventional static holograms for use in art, security, data storage and other applications. Future research might add eye-tracking technology to speed up the system by creating holograms that are high-resolution only where the eyes are looking, Shi says. Another direction is to generate holograms with a person's visual acuity in mind, so users with eyeglasses don't need special VR headsets matching their eye prescription, he adds.
741
License-Plate Scans Aid Crime-Solving But Spur Little Privacy Debate
Law enforcement agencies increasingly are using data gathered by the vast network of automated license-plate scanners to solve crimes. The scanners initially were placed on telephone poles, police cars, toll plazas, bridges, and in parking lots but now can be found on tow trucks and municipal garbage trucks as well. License-plate scans were instrumental in the arrests of several suspected rioters at the U.S. Capitol. However, there are concerns about abuse, misidentification, and the scope of data collection, given that, for instance, some systems read a plate's number but not its state. Electronic Frontier Foundation's Dave Maass said, "License-plate readers are mass surveillance technology. They are collecting data on everyone regardless of whether there is a connection to a crime, and they are storing that data for long periods of time."
[]
[]
[]
scitechnews
None
None
None
None
Law enforcement agencies increasingly are using data gathered by the vast network of automated license-plate scanners to solve crimes. The scanners initially were placed on telephone poles, police cars, toll plazas, bridges, and in parking lots but now can be found on tow trucks and municipal garbage trucks as well. License-plate scans were instrumental in the arrests of several suspected rioters at the U.S. Capitol. However, there are concerns about abuse, misidentification, and the scope of data collection, given that, for instance, some systems read a plate's number but not its state. Electronic Frontier Foundation's Dave Maass said, "License-plate readers are mass surveillance technology. They are collecting data on everyone regardless of whether there is a connection to a crime, and they are storing that data for long periods of time."
742
Hackers Act Differently if Accessing Male or Female Facebook Profiles
By Chris Stokel-Walker Facebook logins can be traded by hackers Shutterstock/TY Lim Cybercriminals seem to behave differently depending on the age and gender listed on the Facebook accounts they hack into, although questions have been raised about the ethics of the study that has revealed this. Jeremiah Onaolapo at the University of Vermont and his colleagues, including some at Facebook, created 1008 realistic Facebook accounts, populating them with fake information, photos and posts. They then leaked the login details for 672 of these accounts on websites used by hackers to trade compromised credentials, including Pastebin, Paste.org.ru, ...
University of Vermont and Facebook researchers found that hackers on the social media platform display different behavior depending on the age and gender listed on the hacked Facebook account. The researchers created 1,008 realistic Facebook accounts and leaked the login details for 672 of them on websites used by hackers to trade compromised credentials. They used the other accounts to populate the friendship groups of the leaked accounts to monitor them over a six-month period. The researchers found that 46% of the leaked accounts were accessed 322 times combined. They also determined that hackers messaged the friends of younger profiles more than those of older profiles, and that in many cases male accounts - but never female accounts - were vandalized.
[]
[]
[]
scitechnews
None
None
None
None
University of Vermont and Facebook researchers found that hackers on the social media platform display different behavior depending on the age and gender listed on the hacked Facebook account. The researchers created 1,008 realistic Facebook accounts and leaked the login details for 672 of them on websites used by hackers to trade compromised credentials. They used the other accounts to populate the friendship groups of the leaked accounts to monitor them over a six-month period. The researchers found that 46% of the leaked accounts were accessed 322 times combined. They also determined that hackers messaged the friends of younger profiles more than those of older profiles, and that in many cases male accounts - but never female accounts - were vandalized. By Chris Stokel-Walker Facebook logins can be traded by hackers Shutterstock/TY Lim Cybercriminals seem to behave differently depending on the age and gender listed on the Facebook accounts they hack into, although questions have been raised about the ethics of the study that has revealed this. Jeremiah Onaolapo at the University of Vermont and his colleagues, including some at Facebook, created 1008 realistic Facebook accounts, populating them with fake information, photos and posts. They then leaked the login details for 672 of these accounts on websites used by hackers to trade compromised credentials, including Pastebin, Paste.org.ru, ...
744
Cloud Computing Could Prevent 1B Metric Tons of CO2 Emissions by 2024
The world's biggest cloud computing providers have promised to pursue "green IT," and new research from IDC suggests there is a big opportunity to prevent carbon emissions via the adoption of cloud computing. However, the impact that cloud computing could have on overall emissions depends largely on how datacenters are built over the next few years. Between 2021 and 2024, the move to cloud computing should, at minimum, prevent 629 million metric tons of carbon dioxide (CO2) emissions, IDC says. If all datacenters in 2024 were designed for sustainability, as much as 1.6 billion metric tons could be saved. All told, IDC expects about 60 percent of datacenters to adopt "smarter" sustainability practices by 2024, saving more than 1 billion metric tons of emissions. The projection is based on IDC data on server distribution, as well as cloud and on-premises software use. IDC also used third-party information on datacenter power usage, CO2 emissions per kilowatt-hour and emission comparisons of cloud and non-cloud datacenters. Cloud computing can prevent CO2 emissions, given the efficiency gained from aggregating compute resources. Large-scale datacenters, in comparison to discrete enterprise datacenters, can more efficiently manage power capacity, optimize cooling, leverage power-efficient servers and increase server utilization rates. Emissions could be reduced even further if workloads are shifted to locations that optimize the use of renewable energy sources. Most of Big Tech has pledged to do its part to reduce carbon emissions. Earlier this year, IBM laid out its plan to achieve net-zero carbon dioxide emissions by 2030. Microsoft has also reported that net carbon emissions from it and its supply chain would be negative by 2030. Facebook says it will reach net-zero CO2 emissions by 2030, too. Amazon says it will reach net-zero CO2 emissions by 2040. Google has been carbon neutral since 2007, and it's aiming to run its whole business carbon free by 2030. Meanwhile, Apple is aiming for it and its supply chain to be carbon neutral by 2030.
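As a back-of-envelope illustration of how this kind of projection is assembled - energy saved by migrated workloads multiplied by a grid emission factor - the short Python sketch below runs the arithmetic; every number in it is an assumed placeholder, not an IDC input.

# Rough emissions-avoidance arithmetic with illustrative assumptions only.
ON_PREM_KWH_PER_SERVER_YEAR = 7000.0   # assumed: ~500 W average draw, PUE ~1.6
CLOUD_KWH_PER_SERVER_YEAR   = 2500.0   # assumed: higher utilization, PUE ~1.1
GRID_KG_CO2_PER_KWH         = 0.4      # assumed average grid emission factor

def co2_avoided_million_tonnes(servers_migrated, years):
    kwh_saved = (ON_PREM_KWH_PER_SERVER_YEAR - CLOUD_KWH_PER_SERVER_YEAR) \
                * servers_migrated * years
    return kwh_saved * GRID_KG_CO2_PER_KWH / 1e9   # kg -> million metric tons

# e.g. 100 million server-equivalents of workload migrated over four years
print(f"{co2_avoided_million_tonnes(100e6, 4):.0f} million metric tons of CO2 avoided")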
A new report from market research firm IDC found that shifting to cloud computing should prevent at least 629 million metric tons of carbon dioxide (CO2) emissions between 2021 and 2024, and as much as 1.6 billion metric tons if all datacenters in 2024 were designed for sustainability.
[]
[]
[]
scitechnews
None
None
None
None
A new report from market research firm IDC found that shifting to cloud computing should prevent at least 629 million metric tons of carbon dioxide (CO2) emissions between 2021 and 2024, and as much as 1.6 billion metric tons if all datacenters in 2024 were designed for sustainability. The world's biggest cloud computing providers have promised to pursue "green IT," and new research from IDC suggests there is a big opportunity to prevent carbon emissions via the adoption of cloud computing. However, the impact that cloud computing could have on overall emissions depends largely on how datacenters are built over the next few years. Between 2021 and 2024, the move to cloud computing should, at minimum, prevent 629 million metric tons of carbon dioxide (CO2) emissions, IDC says. If all datacenters in 2024 were designed for sustainability, as much as 1.6 billion metric tons could be saved. All told, IDC expects about 60 percent of datacenters to adopt "smarter" sustainability practices by 2024, saving more than 1 billion metric tons of emissions. The projection is based on IDC data on server distribution, as well as cloud and on-premises software use. IDC also used third-party information on datacenter power usage, CO2 emissions per kilowatt-hour and emission comparisons of cloud and non-cloud datacenters. Cloud computing can prevent CO2 emissions, given the efficiency gained from aggregating compute resources. Large-scale datacenters, in comparison to discrete enterprise datacenters, can more efficiently manage power capacity, optimize cooling, leverage power-efficient servers and increase server utilization rates. Emissions could be reduced even further if workloads are shifted to locations that optimize the use of renewable energy sources. Most of Big Tech has pledged to do its part to reduce carbon emissions. Earlier this year, IBM laid out its plan to achieve net-zero carbon dioxide emissions by 2030. Microsoft has also reported that net carbon emissions from it and its supply chain would be negative by 2030. Facebook says it will reach net-zero CO2 emissions by 2030, too. Amazon says it will reach net-zero CO2 emissions by 2040. Google has been carbon neutral since 2007, and it's aiming to run its whole business carbon free by 2030. Meanwhile, Apple is aiming for it and its supply chain to be carbon neutral by 2030.
747
Double-Masking Benefits Are Limited, Japan Supercomputer Finds
Double-masking, as recommended by the U.S. Centers for Disease Control and Prevention, yields limited benefits in preventing the spread of droplets that could transmit Covid-19 compared to a single well-fitted disposable mask, according to an analysis conducted with a Japanese supercomputer. Researchers at Japan's Riken research institute and Kobe University used Fugaku, the world's fastest supercomputer, to model droplet dispersal. The simulation demonstrated that wearing just one tightly-fitted disposable mask prevented the spread of 85% of virus-bearing particles, while wearing two masks prevented 89% - only a marginal improvement. One well-fitted mask captured 81% of the droplets, compared to 69% by one loosely-fitted mask. The researchers observed that a tight fit and avoiding gaps in the mask were essential to blocking droplet spread.
[]
[]
[]
scitechnews
None
None
None
None
Double-masking, as recommended by the U.S. Centers for Disease Control and Prevention, yields limited benefits in preventing the spread of droplets that could transmit Covid-19 compared to a single well-fitted disposable mask, according to an analysis conducted with a Japanese supercomputer. Researchers at Japan's Riken research institute and Kobe University used Fugaku, the world's fastest supercomputer, to model droplet dispersal. The simulation demonstrated that wearing just one tightly-fitted disposable mask prevented the spread of 85% of virus-bearing particles, while wearing two masks prevented 89% - only a marginal improvement. One well-fitted mask captured 81% of the droplets, compared to 69% by one loosely-fitted mask. The researchers observed that a tight fit and avoiding gaps in the mask were essential to blocking droplet spread.
749
CredChain: Take Control of Your Own Digital Identity ... and Keep That Valuable Bitcoin Password Safe
The CredChain Self-Sovereign Identity platform architecture developed by researchers at Australia's University of New South Wales (UNSW) School of Computer Science and Engineering uses blockchain to create, share, and verify cryptocurrency credentials securely. UNSW's Helen Paik and Salil Kanhere said CredChain could offer Key Sharding, the process of splitting complicated passwords into meaningless shards stored in different locations that can only be validated when recombined. Kanhere said, "If or when the key is lost, the owner can present enough pieces of the keys to the system to prove his identity and recover the original." Paik said CredChain offers decentralized identity authority via the blockchain, and "also ensures that when a credential is shared, the user can redact parts of the credential to minimize the private data being shared, while maintaining the validity of the credential."
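To make the Key Sharding idea concrete, here is a minimal Shamir-style secret-sharing sketch in Python in which any three of five shards recover the key. It is a generic textbook scheme shown for illustration; it is not CredChain's actual implementation, and the key here is just a random integer standing in for a wallet's private key.

# Minimal k-of-n secret sharing (Shamir's scheme over a prime field).
import random

PRIME = 2**127 - 1   # field modulus; must exceed the secret

def split(secret, n=5, k=3):
    """Return n (x, y) shards; any k of them recover `secret`."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def poly(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, poly(x)) for x in range(1, n + 1)]

def reconstruct(shards):
    """Lagrange interpolation at x = 0 over the prime field."""
    secret = 0
    for i, (xi, yi) in enumerate(shards):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shards):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = random.randrange(2**64)            # stand-in for a private key
shards = split(key, n=5, k=3)
assert reconstruct(random.sample(shards, 3)) == key
print("key recovered from 3 of 5 shards")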
[]
[]
[]
scitechnews
None
None
None
None
The CredChain Self-Sovereign Identity platform architecture developed by researchers at Australia's University of New South Wales (UNSW) School of Computer Science and Engineering uses blockchain to create, share, and verify cryptocurrency credentials securely. UNSW's Helen Paik and Salil Kanhere said CredChain could offer Key Sharding, the process of splitting complicated passwords into meaningless shards stored in different locations that can only be validated when recombined. Kanhere said, "If or when the key is lost, the owner can present enough pieces of the keys to the system to prove his identity and recover the original." Paik said CredChain offers decentralized identity authority via the blockchain, and "also ensures that when a credential is shared, the user can redact parts of the credential to minimize the private data being shared, while maintaining the validity of the credential."
750
Trading Chicken Parts is Going Digital
Ashley Honey at New Zealand-based Nui hopes to use his company's electronic trading platforms to automate trading of meat and poultry products. The goal is to remove intermediate brokers or redistributors from the supply chain and lower costs with streaming platforms that centralize supply and provide access to smaller industry players at better prices. Agricultural giant Tyson Foods also aims to upgrade technology to simplify sales transactions, in order to reduce the cost of processing and distributing food across the U.S. To this end, the firm is deploying robotic arms to package poultry, and implementing digital platforms to help sales teams recognize consumption trends.
[]
[]
[]
scitechnews
None
None
None
None
Ashley Honey at New Zealand-based Nui hopes to use his company's electronic trading platforms to automate trading of meat and poultry products. The goal is to remove intermediate brokers or redistributors from the supply chain and lower costs with streaming platforms that centralize supply and provide access to smaller industry players at better prices. Agricultural giant Tyson Foods also aims to upgrade technology to simplify sales transactions, in order to reduce the cost of processing and distributing food across the U.S. To this end, the firm is deploying robotic arms to package poultry, and implementing digital platforms to help sales teams recognize consumption trends.
752
Bug Bounties: More Hackers Spotting Vulnerabilities Across Web, Mobile, IoT
HackerOne's 2021 Hacker Report reveals a 63% jump in the number of hackers submitting vulnerabilities to bug bounty programs during the last year. Earnings for ethical hackers disclosing vulnerabilities to the HackerOne bug bounty program more than doubled to $40 million in 2020, from $19 million in 2019. Most of the hackers focus on Web applications, but submissions of vulnerabilities associated with Android devices, Internet of Things devices, and application programming interfaces also increased last year. Said HackerOne's Jobert Abma, "We're seeing huge growth in vulnerability submissions across all categories and an increase in hackers specializing across a wider variety of technologies."
[]
[]
[]
scitechnews
None
None
None
None
HackerOne's 2021 Hacker Report reveals a 63% jump in the number of hackers submitting vulnerabilities to bug bounty programs during the last year. Earnings for ethical hackers disclosing vulnerabilities to the HackerOne bug bounty program more than doubled to $40 million in 2020, from $19 million in 2019. Most of the hackers focus on Web applications, but submissions of vulnerabilities associated with Android devices, Internet of Things devices, and application programming interfaces also increased last year. Said HackerOne's Jobert Abma, "We're seeing huge growth in vulnerability submissions across all categories and an increase in hackers specializing across a wider variety of technologies."
753
Anti-Feminist YouTube, Reddit Content a Gateway to the Alt-Right
Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) have determined that Reddit and YouTube users who engage with anti-feminist content can become alt-right converts. EPFL's Manoel Ribeiro and colleagues analyzed 300 million comments on 115 Reddit forums and 526 YouTube channels from 2006 to 2018, tracking the type of subject matter each user engaged with: general news, content from communities that expressed hate towards women (sometimes called the "manosphere"), and alt-right material. They also checked people who in 2016 commented on YouTube videos classified as anti-feminist and on general news videos, without engaging with alt-right videos, compared to 2018. Members of the male-separatist group Men Going Their Own Way were most likely to later engage with alt-right content, while overall the migration from the manosphere to the alt-right was higher on Reddit than YouTube.
[]
[]
[]
scitechnews
None
None
None
None
Researchers at the Swiss Federal Institute of Technology in Lausanne (EPFL) have determined that Reddit and YouTube users who engage with anti-feminist content can become alt-right converts. EPFL's Manoel Ribeiro and colleagues analyzed 300 million comments on 115 Reddit forums and 526 YouTube channels from 2006 to 2018, tracking the type of subject matter each user engaged with: general news, content from communities that expressed hate towards women (sometimes called the "manosphere"), and alt-right material. They also checked people who in 2016 commented on YouTube videos classified as anti-feminist and on general news videos, without engaging with alt-right videos, compared to 2018. Members of the male-separatist group Men Going Their Own Way were most likely to later engage with alt-right content, while overall the migration from the manosphere to the alt-right was higher on Reddit than YouTube.
755
Classic Math Conundrum Solved: Danish Computer Scientist Developed Algorithm for Finding the Shortest Route
When heading somewhere new, most of us leave it to computer algorithms to help us find the best route, whether by using a car's GPS, or public transport and map apps on their phone. Still, there are times when a proposed route doesn't quite align with reality. This is because road networks, public transportation networks and other networks aren't static. The best route can suddenly be the slowest, e.g. because a queue has formed due to roadworks or an accident. People probably don't think about the complicated math behind routing suggestions in these types of situations. The software being used is trying to solve a variant for the classic algorithmic "shortest path" problem, the shortest path in a dynamic network. For 40 years, researchers have been working to find an algorithm that can optimally solve this mathematical conundrum. Now, Christian Wulff-Nilsen of the University of Copenhagen's Department of Computer Science has succeeded in cracking the nut along with two colleagues. "We have developed an algorithm, for which we now have mathematical proof, that it is better than every other algorithm up to now - and the closest thing to optimal that will ever be, even if we look 1000 years into the future," says Associate Professor Wulff-Nilsen. The results were presented at the prestigious FOCS 2020 conference. Optimally, in this context, refers to an algorithm that spends as little time and as little computer memory as possible to calculate the best route in a given network. This is not just true of road and transportation networks, but also the internet or any other type of network. The researchers represent a network as a so-called dynamic graph." In this context, a graph is an abstract representation of a network consisting of edges, roads for example, and nodes, representing intersections, for example. When a graph is dynamic, it means that it can change over time. The new algorithm handles changes consisting of deleted edges - for example, if the equivalent of a stretch of a road suddenly becomes inaccessible due to roadworks. "The tremendous advantage of seeing a network as an abstract graph is that it can be used to represent any type of network. It could be the internet, where you want to send data via as short a route as possible, a human brain or the network of friendship relations on Facebook. This makes graph algorithms applicable in a wide variety of contexts," explains Christian Wulff-Nilsen. Traditional algorithms assume that a graph is static, which is rarely true in the real world. When these kinds of algorithms are used in a dynamic network, they need to be rerun every time a small change occurs in the graph - which wastes time. Finding better algorithms is not just useful when travelling. It is necessary in virtually any area where data is produced, as Christian Wulff-Nilsen points out: "We are living in a time when volumes of data grow at a tremendous rate and the development of hardware simply can't keep up. In order to manage all of the data we produce, we need to develop smarter software that requires less running time and memory. That's why we need smarter algorithms," he says. He hopes that it will be possible to use this algorithm or some of the techniques behind it in practice, but stresses that this is theoretical evidence and first requires experimentation.
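To illustrate the problem setting only - not the new algorithm itself, which is far more sophisticated - the Python sketch below computes shortest paths with Dijkstra's algorithm on a toy road network and then recomputes them from scratch after an edge is deleted, which is exactly the wasted work a dynamic algorithm avoids.

# Naive handling of a dynamic graph: rerun Dijkstra after every edge deletion.
import heapq

def dijkstra(graph, source):
    """graph: {node: {neighbor: weight}}; returns shortest distances from source."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

roads = {                        # toy road network, travel times in minutes
    "A": {"B": 4, "C": 2},
    "B": {"D": 5},
    "C": {"B": 1, "D": 8},
    "D": {},
}
print(dijkstra(roads, "A")["D"])   # 8, via A -> C -> B -> D

del roads["C"]["B"]                # roadworks close the C -> B link
print(dijkstra(roads, "A")["D"])   # 9, rerouted via A -> B -> D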
Computer scientists at Denmark's University of Copenhagen (UCPH) have solved the classic algorithmic problem of calculating the shortest path between two points when the route traverses a changing network. UCPH's Christian Wulff-Nilsen and colleagues represent a network as a dynamic graph, and their new algorithm accommodates changes consisting of deleted edges. Wulff-Nilsen said, "The tremendous advantage of seeing a network as an abstract graph is that it can be used to represent any type of network. It could be the Internet, where you want to send data via as short a route as possible, a human brain, or the network of friendship relations on Facebook." Wulff-Nilsen described the algorithm as "better than every other algorithm up to now - and the closest thing to optimal that will ever be, even if we look 1,000 years into the future."
[]
[]
[]
scitechnews
None
None
None
None
Computer scientists at Denmark's University of Copenhagen (UCPH) have solved the classic algorithmic problem of calculating the shortest path between two points when the route traverses a changing network. UCPH's Christian Wulff-Nilsen and colleagues represent a network as a dynamic graph, and their new algorithm accommodates changes consisting of deleted edges. Wulff-Nilsen said, "The tremendous advantage of seeing a network as an abstract graph is that it can be used to represent any type of network. It could be the Internet, where you want to send data via as short a route as possible, a human brain, or the network of friendship relations on Facebook." Wulff-Nilsen described the algorithm as "better than every other algorithm up to now - and the closest thing to optimal that will ever be, even if we look 1,000 years into the future." When heading somewhere new, most of us leave it to computer algorithms to help us find the best route, whether by using a car's GPS, or public transport and map apps on their phone. Still, there are times when a proposed route doesn't quite align with reality. This is because road networks, public transportation networks and other networks aren't static. The best route can suddenly be the slowest, e.g. because a queue has formed due to roadworks or an accident. People probably don't think about the complicated math behind routing suggestions in these types of situations. The software being used is trying to solve a variant for the classic algorithmic "shortest path" problem, the shortest path in a dynamic network. For 40 years, researchers have been working to find an algorithm that can optimally solve this mathematical conundrum. Now, Christian Wulff-Nilsen of the University of Copenhagen's Department of Computer Science has succeeded in cracking the nut along with two colleagues. "We have developed an algorithm, for which we now have mathematical proof, that it is better than every other algorithm up to now - and the closest thing to optimal that will ever be, even if we look 1000 years into the future," says Associate Professor Wulff-Nilsen. The results were presented at the prestigious FOCS 2020 conference. Optimally, in this context, refers to an algorithm that spends as little time and as little computer memory as possible to calculate the best route in a given network. This is not just true of road and transportation networks, but also the internet or any other type of network. The researchers represent a network as a so-called dynamic graph." In this context, a graph is an abstract representation of a network consisting of edges, roads for example, and nodes, representing intersections, for example. When a graph is dynamic, it means that it can change over time. The new algorithm handles changes consisting of deleted edges - for example, if the equivalent of a stretch of a road suddenly becomes inaccessible due to roadworks. "The tremendous advantage of seeing a network as an abstract graph is that it can be used to represent any type of network. It could be the internet, where you want to send data via as short a route as possible, a human brain or the network of friendship relations on Facebook. This makes graph algorithms applicable in a wide variety of contexts," explains Christian Wulff-Nilsen. Traditional algorithms assume that a graph is static, which is rarely true in the real world. When these kinds of algorithms are used in a dynamic network, they need to be rerun every time a small change occurs in the graph - which wastes time. 
Finding better algorithms is not just useful when travelling. It is necessary in virtually any area where data is produced, as Christian Wulff-Nilsen points out: "We are living in a time when volumes of data grow at a tremendous rate and the development of hardware simply can't keep up. In order to manage all of the data we produce, we need to develop smarter software that requires less running time and memory. That's why we need smarter algorithms," he says. He hopes that it will be possible to use this algorithm or some of the techniques behind it in practice, but stresses that this is theoretical evidence and first requires experimentation.
756
Drones Used in Search and Rescue Trials at North Sea Wind Farm
A range of tests using drones at a wind farm in the North Sea have been completed, offering another glimpse of how the technology could have an important role to play in the renewables sector. The tests were carried out at the 309 megawatt Rentel offshore wind farm by DEME Offshore and Sabca, a Belgian aerospace firm. According to an announcement from DEME Offshore earlier this week, the trial focused on several areas including turbine inspections, environmental surveys and parcel deliveries. One part of the pilot involved an automated drone being deployed to carry out a search and rescue demonstration, in which it used infrared detection to locate its target before dropping a life buoy into the sea. "We are convinced that these innovative, advanced technologies, which focus on fully autonomous operations without the need for any vessels and people offshore, have a game-changing potential to increase safety, lower the impact on the environment in the O&M phase of a project and reduce the overall costs," Bart De Poorter, general manager at DEME Offshore, said in a statement. The term "O&M" refers to operations and maintenance. Details of the trial at the Rentel facility follow an announcement last week that researchers in the U.K. were attempting to find sites for marine energy installations using drone technology. The 12-month project is headed up by scientists from the University of the Highlands and Islands in Scotland and will also involve researchers from Bangor University and Swansea University in Wales. According to UHI, drones will be used to "film the movement of water then apply algorithms to determine its speed." The team will undertake tests in a range of weather conditions in Ramsey Sound, Wales, and Pentland Firth, Scotland. The broad idea behind the pilot is that drones could offer a cheaper and more streamlined approach to finding potential spots for the installation of tidal turbines compared to current techniques, which use seabed sensors and survey vessels. The use of drones within the energy sector is well established. Back in 2019, researchers in the U.K. said they had developed autonomous drones that could inspect offshore energy sites. And in 2018, Air Control Entech and the Oil & Gas Technology Centre launched three drones which could live stream offshore inspections and undertake three-dimensional laser scanning and ultrasonic testing.
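The UHI project's pipeline ("film the movement of water then apply algorithms to determine its speed") is not described in detail here. As a rough, generic illustration of that kind of computation only, the snippet below runs OpenCV's Farnebäck dense optical flow on two synthetic frames and converts the mean pixel displacement into a speed. The frame data, the metres-per-pixel scale, and the frame rate are all assumed values for illustration, not project parameters.

```python
# Generic sketch (not the UHI project's actual code): estimate apparent surface
# speed from two consecutive video frames using dense optical flow.
import numpy as np
import cv2

# Two synthetic greyscale frames standing in for drone footage of the sea surface;
# the second frame shifts the texture a few pixels to mimic flowing water.
rng = np.random.default_rng(0)
frame1 = (rng.random((240, 320)) * 255).astype(np.uint8)
frame1 = cv2.GaussianBlur(frame1, (15, 15), 0)     # smooth the noise into "texture"
frame2 = np.roll(frame1, shift=3, axis=1)          # 3-pixel horizontal drift

# Positional arguments: prev, next, flow, pyr_scale, levels, winsize,
# iterations, poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(frame1, frame2, None,
                                    0.5, 3, 15, 3, 5, 1.2, 0)

# Convert mean pixel displacement into a speed, given assumed values for
# ground resolution (metres per pixel) and frame rate (frames per second).
METRES_PER_PIXEL = 0.05      # assumption: 5 cm per pixel
FRAMES_PER_SECOND = 25.0     # assumption: 25 fps footage
pixels_per_frame = np.linalg.norm(flow, axis=2).mean()
speed_m_per_s = pixels_per_frame * METRES_PER_PIXEL * FRAMES_PER_SECOND
print(f"estimated surface speed: {speed_m_per_s:.2f} m/s")
```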
The international Dredging, Environmental, and Marine Engineering (DEME) Offshore conglomerate and Belgian aerospace firm Sabca have completed tests of drones in various scenarios at a wind farm in the North Sea. DEME Offshore said the pilot program involved using drones for turbine inspections, environmental surveys, and parcel deliveries, as well as a search and rescue demonstration involving an automated drone using infrared detection to locate its target before dropping a life buoy into the sea. DEME Offshore's Bart De Poorter said, "We are convinced that these innovative, advanced technologies, which focus on fully autonomous operations without the need for any vessels and people offshore, have a game-changing potential to increase safety, lower the impact on the environment in the [operations and maintenance] phase of a project, and reduce the overall costs."
[]
[]
[]
scitechnews
None
None
None
None
The international Dredging, Environmental, and Marine Engineering (DEME) Offshore conglomerate and Belgian aerospace firm Sabca has completed tests of drones in various scenarios at a wind farm in the North Sea. DEME Offshore said the pilot program involved using drone for turbine inspections, environmental surveys, and parcel deliveries, as well as a search and rescue operation involving an automated drone using infrared detection to locate its target before dropping a life buoy into the sea. DEME Offshore's Bart De Poorter said, "We are convinced that these innovative, advanced technologies, which focus on fully autonomous operations without the need for any vessels and people offshore, have a game-changing potential to increase safety, lower the impact on the environment in the [operations and management] phase of a project, and reduce the overall costs." A range of tests using drones at a wind farm in the North Sea have been completed, offering another glimpse of how the technology could have an important role to play in the renewables sector. The tests were carried out at the 309 megawatt Rentel offshore wind farm by DEME Offshore and Sabca, a Belgian aerospace firm. According to an announcement from DEME Offshore earlier this week, the trial focused on several areas including turbine inspections, environmental surveys and parcel deliveries. One part of the pilot involved an automated drone being deployed to carry out a search and rescue demonstration, in which it used infrared detection to locate its target before dropping a life buoy into the sea. "We are convinced that these innovative, advanced technologies, which focus on fully autonomous operations without the need for any vessels and people offshore, have a game-changing potential to increase safety, lower the impact on the environment in the O&M phase of a project and reduce the overall costs," Bart De Poorter, general manager at DEME Offshore, said in a statement. The term "O&M" refers to operations and maintenance. Details of the trial at the Rentel facility follow an announcement last week that researchers in the U.K. were attempting to find sites for marine energy installations using drone technology. The 12-month project is headed up by scientists from the University of the Highlands and Islands in Scotland and will also involve researchers from Bangor University and Swansea University in Wales. According to UHI, drones will be used to "film the movement of water then apply algorithms to determine its speed." The team will undertake tests in a range of weather conditions in Ramsey Sound, Wales, and Pentland Firth, Scotland. The broad idea behind the pilot is that drones could offer a cheaper and more streamlined approach to finding potential spots for the installation of tidal turbines compared to current techniques, which use seabed sensors and survey vessels. The use of drones within the energy sector is well established. Back in 2019, researchers in the U.K. said they had developed autonomous drones that could inspect offshore energy sites. And in 2018, Air Control Entech and the Oil & Gas Technology Centre launched three drones which could live stream offshore inspections and undertake three-dimensional laser scanning and ultrasonic testing.
758
Robots Designed to Avoid Environmental Dangers, Deliver Data Quickly
A University of Texas at Dallas research group has developed an autonomous robotic team of devices that can be used at hazardous or difficult-to-reach sites to make surveys and collect data - providing more and faster insights than human beings are able to deliver. Dr. David Lary , professor of physics in the School of Natural Sciences and Mathematics , said his group's robot teams - composed of autonomous devices that gather data on the ground, in the air and in water - would be ideally suited for hazardous environmental situations and/or for holistic environmental surveys of ecosystems. "An autonomous team like this could do a survey and rapidly sample what's in the air and the water so that people could be kept out of harm's way," Lary said. "In another context, the robots could provide a general survey of ecosystems, or they could look at situations such as harmful algal blooms in lakes." Lary said the autonomous robotic teams are also useful for real-time decision support in areas such as agriculture and infrastructure inspection. A recent demonstration in the field showed how the autonomous robotic team can rapidly learn the characteristics of environments it has never seen before. Lary and his colleagues deployed the robots in a test run in Plano, Texas, to demonstrate their data-gathering capabilities. He said he hopes the robot team prototype can be a model for changing the methods that are used to survey disaster sites, waterways and extreme environments. The rapid acquisition of holistic data by coordinated robotic team members facilitates transparent, data-driven decisions. The approach allows for more real-time data to be gathered more rapidly and for streamlined software updates for the machines, Lary said. The multirobot, multisensor team can include various combinations of devices, such as a robot boat that carries sensors to measure water composition, as well as sonar to track objects below the surface and to provide aquatic remote sensing. At the same time, an overflying aerial robot collects hyperspectral images, providing an entire spectrum for every pixel in the image. Using the remotely sensed information, the devices - through machine learning - can rapidly construct wide area maps of the environmental state. "Not only do we get depth information, we also can measure the height of any vegetation that's in the water. We can determine what is at the base of a pool, pond or estuary and the kinds of fish in the vicinity. With the sonar we can count and size the individual fish and get the total biomass in a vertical profile," Lary said. In addition to the boat, the robot team includes an unmanned aerial drone that carries several cameras, an array of onboard sensors and a downwelling irradiance spectrometer, which gathers data about the radiation directed toward the Earth from the sun or the atmosphere. In addition, a ground vehicle can collect soil samples and utilize ground-penetrating radar. Satellite data can be added to the team to provide photos and measurements from space. Besides allowing access to areas that are typically inaccessible to or dangerous for humans, the robot team approach significantly enhances the amount of data that can be collected. "In just a few minutes we can collect many thousands of data records," Lary said. "So, if you were to deploy a robot team multiple times over several locations in a period of about a month, you could get hundreds of thousands - even millions - of records. 
It's the rapid acquisition of relevant data that can help keep people out of harm's way, which is the point." In addition to gathering large amounts of correlated data rapidly, another way that the robot team is improving the survey process is the ease by which software updates are provided to the machines. "Just like Tesla vehicles, they simply receive over-the-air updates that enhance their capabilities. It works well for Tesla, and we envision the same thing for our robotic team," he said. Lary said the robot team is just one aspect of his research, which is focused on developing comprehensive, turnkey sensing systems that connect with back-end data systems to turn streams of information into actionable insights. "The single driving goal of everything I do is preemptive human protection. It's trying to keep people out of harm's way and to have a suite of sentinels that can give us real-time information," he said. "I want this capability to be available to municipalities, health departments, corporations and individuals, through an extensible store where, like Lego blocks, you can get the individual sensing systems that can help with disaster response or just routine planning." The research was funded in part by the Texas National Security Network Excellence Fund award for Environmental Sensing Security Sentinels and the SOFWERX award for Machine Learning. Lary's research was assisted by the Texas Research and Education Cyberinfrastructure Services center, led by Dr. Christopher Simmons, director in the Office of Information Technology. Supported by a grant from the National Science Foundation, the center provides computing support for various research projects at UT Dallas and other UT System schools.
An autonomous team of robotic devices developed by researchers at the University of Texas at Dallas (UT Dallas) can be deployed to perform a general survey of ecosystems or at hazardous or hard-to-reach sites for real-time decision support. The devices collect thousands of data records while on the ground, in the air, or in water within minutes. The autonomous devices include robot boats to measure water composition, sonar to detect objects below the water's surface, aerial drones with multiple onboard sensors, and a ground vehicle to collect soil samples and deploy ground-penetrating radar. UT Dallas' David Lary said, "An autonomous team like this could do a survey and rapidly sample what's in the air and the water so that people could be kept out of harm's way."
[]
[]
[]
scitechnews
None
None
None
None
An autonomous team of robotic devices developed by researchers at the University of Texas at Dallas (UT Dallas) can be deployed to perform a general survey of ecosystems or at hazardous or hard-to-reach sites for real-time decision support. The devices collect thousands of data records while on the ground, in the air, or in water within minutes. The autonomous devices include robot boats to measure water composition, sonar to detect objects below the water's surface, aerial drones with multiple onboard sensors, and a ground vehicle to collect soil samples and deploy ground-penetrating radar. UT Dallas' David Lary said, "An autonomous team like this could do a survey and rapidly sample what's in the air and the water so that people could be kept out of harm's way." A University of Texas at Dallas research group has developed an autonomous robotic team of devices that can be used at hazardous or difficult-to-reach sites to make surveys and collect data - providing more and faster insights than human beings are able to deliver. Dr. David Lary , professor of physics in the School of Natural Sciences and Mathematics , said his group's robot teams - composed of autonomous devices that gather data on the ground, in the air and in water - would be ideally suited for hazardous environmental situations and/or for holistic environmental surveys of ecosystems. "An autonomous team like this could do a survey and rapidly sample what's in the air and the water so that people could be kept out of harm's way," Lary said. "In another context, the robots could provide a general survey of ecosystems, or they could look at situations such as harmful algal blooms in lakes." Lary said the autonomous robotic teams are also useful for real-time decision support in areas such as agriculture and infrastructure inspection. A recent demonstration in the field showed how the autonomous robotic team can rapidly learn the characteristics of environments it has never seen before. Lary and his colleagues deployed the robots in a test run in Plano, Texas, to demonstrate their data-gathering capabilities. He said he hopes the robot team prototype can be a model for changing the methods that are used to survey disaster sites, waterways and extreme environments. The rapid acquisition of holistic data by coordinated robotic team members facilitates transparent, data-driven decisions. The approach allows for more real-time data to be gathered more rapidly and for streamlined software updates for the machines, Lary said. The multirobot, multisensor team can include various combinations of devices, such as a robot boat that carries sensors to measure water composition, as well as sonar to track objects below the surface and to provide aquatic remote sensing. At the same time, an overflying aerial robot collects hyperspectral images, providing an entire spectrum for every pixel in the image. Using the remotely sensed information, the devices - through machine learning - can rapidly construct wide area maps of the environmental state. "Not only do we get depth information, we also can measure the height of any vegetation that's in the water. We can determine what is at the base of a pool, pond or estuary and the kinds of fish in the vicinity. With the sonar we can count and size the individual fish and get the total biomass in a vertical profile," Lary said. 
In addition to the boat, the robot team includes an unmanned aerial drone that carries several cameras, an array of onboard sensors and a downwelling irradiance spectrometer, which gathers data about the radiation directed toward the Earth from the sun or the atmosphere. In addition, a ground vehicle can collect soil samples and utilize ground-penetrating radar. Satellite data can be added to the team to provide photos and measurements from space. Besides allowing access to areas that are typically inaccessible to or dangerous for humans, the robot team approach significantly enhances the amount of data that can be collected. "In just a few minutes we can collect many thousands of data records," Lary said. "So, if you were to deploy a robot team multiple times over several locations in a period of about a month, you could get hundreds of thousands - even millions - of records. It's the rapid acquisition of relevant data that can help keep people out of harm's way, which is the point." In addition to gathering large amounts of correlated data rapidly, another way that the robot team is improving the survey process is the ease by which software updates are provided to the machines. "Just like Tesla vehicles, they simply receive over-the-air updates that enhance their capabilities. It works well for Tesla, and we envision the same thing for our robotic team," he said. Lary said the robot team is just one aspect of his research, which is focused on developing comprehensive, turnkey sensing systems that connect with back-end data systems to turn streams of information into actionable insights. "The single driving goal of everything I do is preemptive human protection. It's trying to keep people out of harm's way and to have a suite of sentinels that can give us real-time information," he said. "I want this capability to be available to municipalities, health departments, corporations and individuals, through an extensible store where, like Lego blocks, you can get the individual sensing systems that can help with disaster response or just routine planning." The research was funded in part by the Texas National Security Network Excellence Fund award for Environmental Sensing Security Sentinels and the SOFWERX award for Machine Learning. Lary's research was assisted by the Texas Research and Education Cyberinfrastructure Services center, led by Dr. Christopher Simmons, director in the Office of Information Technology. Supported by a grant from the National Science Foundation, the center provides computing support for various research projects at UT Dallas and other UT System schools.
760
Hackers Breach Thousands of Security Cameras, Exposing Tesla, Jails, Hospitals
Hackers say they have compromised data from as many as 150,000 surveillance cameras, including footage from electric vehicle company Tesla. An international hacking collective executed the breach to demonstrate the ease of exposing video surveillance by targeting camera data provided by enterprise security startup Verkada. In addition to footage from Tesla factories and warehouses, the hackers exposed footage from the offices of software provider Cloudflare, and from hospitals, schools, jails, and police stations. Tillie Kottmann, one of the hackers claiming credit for the breach, said the collective obtained root access to cameras, enabling them to execute their own code; they exploited a Super Admin account to access the cameras, and found a username and password for an administrator account online. A Verkada spokesperson said the company has disabled all internal administrator accounts to block unauthorized access.
[]
[]
[]
scitechnews
None
None
None
None
Hackers say they have compromised data from as many as 150,000 surveillance cameras, including footage from electric vehicle company Tesla. An international hacking collective executed the breach to demonstrate the ease of exposing video surveillance by targeting camera data provided by enterprise security startup Verkada. In addition to footage from Tesla factories and warehouses, the hackers exposed footage from the offices of software provider Cloudflare, and from hospitals, schools, jails, and police stations. Tillie Kottmann, one of the hackers claiming credit for the breach, said the collective obtained root access to cameras, enabling them to execute their own code; they exploited a Super Admin account to access the cameras, and found a username and password for an administrator account online. A Verkada spokesperson said the company has disabled all internal administrator accounts to block unauthorized access.
761
Smart Speakers Can Detect Abnormal Heart Rhythms, Researchers Find
University of Washington (UW) researchers have developed a contactless method of screening for irregular heartbeats using smart speakers and an artificial intelligence-powered system that employs sonar. UW's Arun Sridhar said the goal was to use existing appliances to advance edge cardiology and health monitoring. The system emits audio signals into a room at a volume undetectable to humans, and an algorithm identifies heartbeat vibrations from a person's chest wall as the pulses bounce back to the speaker; a second algorithm measures inter-beat intervals. The UW researchers trained the speakers to detect regular and irregular heart rhythms, and their readings were relatively accurate in comparison to those of medical-grade electrocardiogram monitors.
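The article does not spell out how the second algorithm turns inter-beat intervals into a rhythm classification, so the following is only a generic sketch of the idea: given beat timestamps recovered from the sonar signal, compute the inter-beat intervals and flag high variability. The coefficient-of-variation check and its 0.15 threshold are illustrative assumptions, not the UW system's method or a clinical criterion.

```python
# Illustration only (not the UW system): flag an irregular rhythm from a list of
# detected beat times by looking at how much the inter-beat intervals vary.
import numpy as np

def interbeat_intervals(beat_times_s):
    """Differences between consecutive beat timestamps, in seconds."""
    return np.diff(np.asarray(beat_times_s, dtype=float))

def looks_irregular(beat_times_s, cv_threshold=0.15):
    """Crude check: coefficient of variation of the intervals above a threshold.
    The 0.15 threshold is an arbitrary illustrative value, not a clinical one."""
    ibi = interbeat_intervals(beat_times_s)
    return (ibi.std() / ibi.mean()) > cv_threshold

regular   = [0.0, 0.8, 1.6, 2.4, 3.2, 4.0]   # steady ~75 bpm
irregular = [0.0, 0.6, 1.7, 2.1, 3.4, 3.9]   # erratic spacing
print(looks_irregular(regular))      # False
print(looks_irregular(irregular))    # True
```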
[]
[]
[]
scitechnews
None
None
None
None
University of Washington (UW) researchers have developed a contactless method of screening for irregular heartbeats using smart speakers and an artificial intelligence-powered system that employs sonar. UW's Arun Sridhar said the goal was to use existing appliances to advance edge cardiology and health monitoring. The system emits audio signals into a room at a volume undetectable to humans, and an algorithm identifies heartbeat vibrations from a person's chest wall as the pulses bounce back to the speaker; a second algorithm measures inter-beat intervals. The UW researchers trained the speakers to detect regular and irregular heart rhythms, and their readings were relatively accurate in comparison to those of medical-grade electrocardiogram monitors.
762
Magnetic Boost Helps Squeeze More Data Onto Computer Hard Disks
By Matthew Sparkes The computer hard discs of the future will have a higher data-storage capacity through the clever use of heating or microwave energy. Researchers at Toshiba have discovered a stepping-stone solution that may help pave the way to those next-generation discs. A hard disc consists of spinning platters covered in microscopic magnetic particles known as grains. The magnetic orientation of a small cluster of grains determines whether a single bit - the smallest unit of computational information - is a 0 or a 1. These grain clusters have become smaller and smaller as manufacturers have sought higher data density. But if they are too small, then very little energy is needed to change their magnetic orientation, leaving bits susceptible to accidental flipping from 0 to 1, or vice versa, and damaging the data on the disc. To fix this problem, manufacturers have used materials with stronger magnetic properties for the grains, meaning they are more likely to hold on to their orientation without flipping. But this leads to a new problem: the components in the hard disc that encode data by flipping the magnetic grains, known as the read/write head, are themselves becoming smaller to increase data density, and these components risk becoming too small and underpowered to flip the strongly magnetic grains. In the next generation of hard discs, heat or microwaves will help give the read/write head the extra energy required to flip the magnetic grains, but this will require redesigning the spinning platters using new materials. Now Hirofumi Suto and his colleagues at Toshiba have discovered a technology for the short term that uses microwaves with existing platter materials. While working with experimental microwave-assisted switching (MAS) devices, they identified an approach called flux control that improves the ability to flip grains, albeit to a limited extent, by amplifying the magnetic field from the read/write head. They realised that this approach works without having to create special materials for the platters, which is essential in a full MAS system. Using this approach, the Toshiba team created a commercial hard disc in a helium-filled enclosure that is now on sale in capacities of up to 18 terabytes. No one at Toshiba was made available to discuss the new work before publication of this story. Siva Sivaram at rival hard disc maker Western Digital says the technology is a stepping stone, but that engineering problems are now forcing a total rethink of traditional designs. "Now it's getting tougher and tougher, you have to think of this whole thing holistically. Heat is the way forward for everybody for the long term. In the middle of the decade, 2024, 2026, you're going to have heat. But it adds a lot of cost and complexity and reliability issues," he says. Journal reference: Journal of Applied Physics, DOI: 10.1063/5.0041561
A short-term technology developed by researchers at Japan's Toshiba may help clear a path toward next-generation computer hard disks by utilizing microwaves with existing platter material. Next-generation disks are expected to have higher data-storage capacity as a result of microwave-assisted switching, and Toshiba's Hirofumi Suto and colleagues' approach works by amplifying the magnetic field from the read/write head. The team used this method to fabricate a commercial hard disk in a helium-filled container that is being sold in capacities of up to 18 terabytes. Siva Sivaram at hard-disc manufacturer Western Digital said future disks will utilize heat as a long-term measure for boosting data storage, "[b]ut it adds a lot of cost and complexity and reliability issues."
[]
[]
[]
scitechnews
None
None
None
None
A short-term technology developed by researchers at Japan's Toshiba may help clear a path toward next-generation computer hard disks by utilizing microwaves with existing platter material. Next-generation disks are expected to have higher data-storage capacity as a result of microwave-assisted switching, and Toshiba's Hirofumi Suto and colleagues' approach works by amplifying the magnetic field from the read/write head. The team used this method to fabricate a commercial hard disk in a helium-filled container that is being sold in capacities of up to 18 terabytes. Siva Sivaram at hard-disc manufacturer Western Digital said future disks will utilize heat as a long-term measure for boosting data storage, "[b]ut it adds a lot of cost and complexity and reliability issues." By Matthew Sparkes A conventional computer hard disc drive Stefan Dinse / Alamy The computer hard discs of the future will have a higher data-storage capacity through the clever use of heating or microwave energy. Researchers at Toshiba have discovered a stepping-stone solution that may help pave the way to those next-generation discs. A hard disc consists of spinning platters covered in microscopic magnetic particles known as grains. The magnetic orientation of a small cluster of grains determines whether a single bit - the smallest unit of computational information - is a 0 or a 1. These grain clusters have become smaller and smaller as manufacturers have sought higher data density. But if they are too small, then very little energy is needed to change their magnetic orientation, leaving bits susceptible to accidental flipping from 0 to 1, or vice versa, and damaging the data on the disc. To fix this problem, manufacturers have used materials with stronger magnetic properties for the grains, meaning they are more likely to hold on to their orientation without flipping. But this leads to a new problem: the components in the hard disc that encode data by flipping the magnetic grains, known as the read/write head, are themselves becoming smaller to increase data density, and these components risk becoming too small and underpowered to flip the strongly magnetic grains. In the next generation of hard discs, heat or microwaves will help give the read/write head the extra energy required to flip the magnetic grains, but this will require redesigning the spinning platters using new materials. Now Hirofumi Suto and his colleagues at Toshiba have discovered a technology for the short term that uses microwaves with existing platter materials. While working with experimental microwave-assisted switching (MAS) devices, they identified an approach called flux control that improves the ability to flip grains, albeit to a limited extent, by amplifying the magnetic field from the read/write head. They realised that this approach works without having to create special materials for the platters, which is essential in a full MAS system. Using this approach, the Toshiba team created a commercial hard disc in a helium-filled enclosure that is now on sale in capacities of up to 18 terabytes. No one at Toshiba was made available to discuss the new work before publication of this story. Siva Sivaram at rival hard disc maker Western Digital says the technology is a stepping stone, but that engineering problems are now forcing a total rethink of traditional designs. "Now it's getting tougher and tougher, you have to think of this whole thing holistically. Heat is the way forward for everybody for the long term. 
In the middle of the decade, 2024, 2026, you're going to have heat. But it adds a lot of cost and complexity and reliability issues," he says. Journal reference: Journal of Applied Physics , DOI: 10.1063/5.0041561
763
Drones vs. Hungry Moths: Dutch Use Tech to Protect Crops
MONSTER, Netherlands (AP) - Dutch cress grower Rob Baan has enlisted high-tech helpers to tackle a pest in his greenhouses: palm-sized drones seek and destroy moths that produce caterpillars that can chew up his crops. "I have unique products where you don't get certification to spray chemicals and I don't want it," Baan said in an interview in a greenhouse bathed in the pink glow of LED lights that help his seedlings grow. His company, Koppert Cress, exports aromatic seedlings, plants and flowers to top-end restaurants around the world. A keen adopter of innovative technology in his greenhouses, Baan turned to PATS Indoor Drone Solutions, a startup that is developing autonomous drone systems as greenhouse sentinels, to add another layer of protection for his plants. The drones themselves are basic, but they are steered by smart technology aided by special cameras that scan the airspace in greenhouses. The drones instantly kill the moths by flying into them, destroying them in midair. "So it sees the moth flying by, it knows where the drone is ... and then it just directs the drone towards the moth," said PATS chief technical officer Kevin van Hecke. There weren't any moths around on a recent greenhouse visit by The Associated Press, but the company has released video shot in a controlled environment that shows how one bug is instantly pulverized by a drone rotor. The drones form part of an array of pest control systems in Baan's greenhouses that also includes other bugs, pheromone traps and bumblebees. The drone system is the brainchild of former students from the Technical University in Delft who thought up the idea after wondering if they might be able to use drones to kill mosquitos buzzing around their rooms at night. Baan says the drone control system is smart enough to distinguish between good and bad critters. "You don't want to kill a ladybug, because a ladybug is very helpful against aphids," he said. "So they should kill the bad ones, not the good ones. And the good ones are sometimes very expensive - I pay at least 50 cents for one bumblebee, so I don't want them to kill my bumblebees." The young company is still working to perfect the technology. "It's still a development product, but we ... have very good results. We are targeting moths and we are taking out moths every night in an autonomous way without human intervention," said PATS CEO Bram Tijmons. "I think that's a good step forward." Baan also acknowledges that the system still needs refining. "I think they still need too many drones ... but it will be manageable, it will be less," he said. "I think they can do this greenhouse in the future maybe with 50 small drones, and then it's very beneficial."
Dutch farmers are adopting palm-sized drones from autonomous systems developer PATS Indoor Drone Solutions to protect their crops against moths. The drones patrol greenhouses and kill moths by flying into them, directed by smart technology that uses special cameras to scan the airspace. PATS' Kevin van Hecke said the drone "sees the moth flying by, it knows where the drone is ... and then it just directs the drone towards the moth." Dutch cress grower Rob Baan said the system can distinguish between helpful and destructive insects. PATS' Bram Tijmons said, "We are targeting moths and we are taking out moths every night in an autonomous way without human intervention."
[]
[]
[]
scitechnews
None
None
None
None
Dutch farmers are adopting palm-sized drones from autonomous systems developer PATS Indoor Drone Solutions to protect their crops against moths. The drones patrol greenhouses and kill moths by flying into them, directed by smart technology that uses special cameras to scan the airspace. PATS' Kevin van Hecke said the drone "sees the moth flying by, it knows where the drone is ... and then it just directs the drone towards the moth." Dutch cress grower Rob Baan said the system can distinguish between helpful and destructive insects. PATS' Bram Tijmons said, "We are targeting moths and we are taking out moths every night in an autonomous way without human intervention." MONSTER, Netherlands (AP) - Dutch cress grower Rob Baan has enlisted high-tech helpers to tackle a pest in his greenhouses: palm-sized drones seek and destroy moths that produce caterpillars that can chew up his crops. "I have unique products where you don't get certification to spray chemicals and I don't want it," Baan said in an interview in a greenhouse bathed in the pink glow of LED lights that help his seedlings grow. His company, Koppert Cress, exports aromatic seedlings, plants and flowers to top-end restaurants around the world. A keen adopter of innovative technology in his greenhouses, Baan turned to PATS Indoor Drone Solutions, a startup that is developing autonomous drone systems as greenhouse sentinels, to add another layer of protection for his plants. The drones themselves are basic, but they are steered by smart technology aided by special cameras that scan the airspace in greenhouses. The drones instantly kill the moths by flying into them, destroying them in midair. "So it sees the moth flying by, it knows where the drone is ... and then it just directs the drone towards the moth," said PATS chief technical officer Kevin van Hecke. There weren't any moths around on a recent greenhouse visit by The Associated Press, but the company has released video shot in a controlled environment that shows how one bug is instantly pulverized by a drone rotor. The drones form part of an array of pest control systems in Baan's greenhouses that also includes other bugs, pheromone traps and bumblebees. The drone system is the brainchild of former students from the Technical University in Delft who thought up the idea after wondering if they might be able to use drones to kill mosquitos buzzing around their rooms at night. Baan says the drone control system is smart enough to distinguish between good and bad critters. "You don't want to kill a ladybug, because a ladybug is very helpful against aphids," he said. "So they should kill the bad ones, not the good ones. And the good ones are sometimes very expensive - I pay at least 50 cents for one bumblebee, so I don't want them to kill my bumblebees." The young company is still working to perfect the technology. "It's still a development product, but we ... have very good results. We are targeting moths and we are taking out moths every night in an autonomous way without human intervention," said PATS CEO Bram Tijmons. "I think that's a good step forward." Baan also acknowledges that the system still needs refining. "I think they still need too many drones ... but it will be manageable, it will be less," he said. "I think they can do this greenhouse in the future maybe with 50 small drones, and then it's very beneficial."
764
Facebook Researchers Report Advance in Computer Vision
Facebook AI, the artificial-intelligence research arm of Facebook Inc., said it has developed a software tool kit that will allow companies to create highly accurate computer vision software in less time than it takes to build the systems with the methods commonly used today. The tool kit, called Vissl, will allow companies to use an emerging AI technique known as self-supervised learning, where AI models train themselves on large sets of data without the need for external labels. As applied to computer vision, where machines...
A software toolkit developed by Facebook's artificial intelligence (AI) research arm could enable companies to create highly accurate computer vision software more quickly. Facebook AI's Vissl toolkit leverages self-supervised learning, in which AI models train themselves on large datasets without external labels. Facebook's Yann LeCun said the techniques "allow you to basically reduce the amount of labeled data that is required to reach reasonable performance." Gartner's Carlton Sapp said the time required to build computer vision systems potentially could be halved using such self-supervised learning methods. LeCun, named 2018 ACM A.M. Turing Award laureate for his work on deep neural networks, said the technique also will boost the accuracy of computer vision systems by allowing analysis of more items in an image. In tests on the ImageNet database, Facebook's techniques achieved 85% accuracy, compared to 80% for computer vision systems trained with supervised learning.
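As a rough picture of what self-supervised pretraining means in practice (and explicitly not the Vissl API or Facebook's training recipe), the sketch below trains a small PyTorch encoder so that two randomly augmented views of the same unlabeled image produce matching embeddings. The architecture, augmentations, temperature, and random stand-in data are all illustrative assumptions.

```python
# Minimal sketch of self-supervised (contrastive) pretraining in PyTorch.
# This is NOT the Vissl API; it only illustrates the idea the article describes:
# the model supervises itself by matching two augmented views of the same
# unlabeled image, so no human labels are needed.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256),
                        nn.ReLU(), nn.Linear(256, 64))
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def augment(x):
    # Stand-in augmentation: add noise and randomly flip horizontally.
    x = x + 0.1 * torch.randn_like(x)
    return torch.flip(x, dims=[3]) if torch.rand(1).item() < 0.5 else x

images = torch.rand(8, 3, 32, 32)     # a batch of unlabeled images

for step in range(10):
    z1 = F.normalize(encoder(augment(images)), dim=1)   # view-1 embeddings
    z2 = F.normalize(encoder(augment(images)), dim=1)   # view-2 embeddings
    logits = z1 @ z2.t() / 0.1                           # cosine similarity / temperature
    targets = torch.arange(len(images))                  # i-th view 1 matches i-th view 2
    loss = F.cross_entropy(logits, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

After pretraining like this on unlabeled data, the encoder can be fine-tuned on a much smaller labeled set, which is the labeled-data reduction LeCun describes.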
[]
[]
[]
scitechnews
None
None
None
None
A software toolkit developed by Facebook's artificial intelligence (AI) research arm could enable companies to create highly accurate computer vision software more quickly. Facebook AI's Vissl toolkit leverages self-supervised learning, in which AI models train themselves on large datasets without external labels. Facebook's Yann LeCun said the techniques "allow you to basically reduce the amount of labeled data that is required to reach reasonable performance." Gartner's Carlton Sapp said the time required to build computer vision systems potentially could be halved using such self-supervised learning methods. LeCun, named 2018 ACM A.M. Turing Award laureate for his work on deep neural networks, said the technique also will boost the accuracy of computer vision systems by allowing analysis of more items in an image. In tests on the ImageNet database, Facebook's techniques achieved 85% accuracy, compared to 80% for computer vision systems trained with supervised learning. Facebook AI, the artificial-intelligence research arm of Facebook Inc., said it has developed a software tool kit that will allow companies to create highly accurate computer vision software in less time than it takes to build the systems with the methods commonly used today. The tool kit, called Vissl, will allow companies to use an emerging AI technique known as self-supervised learning, where AI models train themselves on large sets of data without the need for external labels. As applied to computer vision, where machines...
765
Researchers Are Peering Inside Computer Brains. What They've Found Will Surprise You
Researchers at artificial intelligence (AI) research company OpenAI developed new techniques to examine the inner workings of neural networks to help interpret their decision-making. As neuroscientists have found in studies of the human brain, the researchers found individual neurons in a large neural network used to identify and categorize images can encode a particular concept. This finding is important given the challenges of understanding the rationale behind decisions made by neural networks. The researchers used reverse-engineering techniques to determine what most activated a particular artificial neuron. Among other things, the researchers identified a bias that could enable someone to trick the AI into making incorrect identifications. Said OpenAI's Gabriel Goh, "I think you definitely see a lot of stereotyping in the model."
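The "reverse-engineering" referred to here is commonly done by activation maximization: start from a noise image and adjust it by gradient ascent until a chosen neuron fires strongly. The sketch below shows that basic recipe on a tiny stand-in network; it is not OpenAI's model or their full feature-visualization tooling, and the network, neuron index, and hyperparameters are invented for illustration.

```python
# Basic activation-maximization sketch (stand-in model, not OpenAI's): start from
# noise and nudge the input image so that one chosen neuron fires as strongly as
# possible, revealing what that neuron "looks for."
import torch
import torch.nn as nn

model = nn.Sequential(                      # toy stand-in for a vision model
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),
)
model.eval()

neuron_index = 3                            # the unit we want to probe
image = torch.randn(1, 3, 64, 64, requires_grad=True)
optimizer = torch.optim.Adam([image], lr=0.05)

for step in range(200):
    activation = model(image)[0, neuron_index]
    loss = -activation                      # maximize activation = minimize its negative
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():
        image.clamp_(-2, 2)                 # keep pixel values in a plausible range

print(f"final activation of neuron {neuron_index}: "
      f"{model(image)[0, neuron_index].item():.3f}")
```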
[]
[]
[]
scitechnews
None
None
None
None
Researchers at artificial intelligence (AI) research company OpenAI developed new techniques to examine the inner workings of neural networks to help interpret their decision-making. As neuroscientists have found in studies of the human brain, the researchers found individual neurons in a large neural network used to identify and categorize images can encode a particular concept. This finding is important given the challenges of understanding the rationale behind decisions made by neural networks. The researchers used reverse-engineering techniques to determine what most activated a particular artificial neuron. Among other things, the researchers identified a bias that could enable someone to trick the AI into making incorrect identifications. Said OpenAI's Gabriel Goh, "I think you definitely see a lot of stereotyping in the model."
767
Tiny Computers Reveal How Wild Bats Hunt So Efficiently
The bats' echolocation is more advanced than previously thought. Bats muffle their screams almost to a whisper when hunting, so echoes from trees and buildings do not drown out echoes from the prey. This is revealed by an international research team led by Aarhus University using miniature computers that they have put on the backs of wild bats. 2021.03.03 | Peter F. Gammelby og Laura Stidsholt An international research team has taken a seat on the back of wild bats to experience their world through echoes recorded on-board the bats by 3-gram computers. In a new paper published in Science Advances , the research team from Aarhus University and the Max Planck Institute of Ornithology attached echo- and motion-recording tags to wild greater mouse-eared bats in Bulgaria. "We experienced the world through the ears of the bats by recording their echoes directly on-board while they were hunting for insects at night," said Dr. Laura Stidsholt, postdoc at Aarhus University and leading author of the study. Facts: - The study was conducted on 10 female greater mouse-eared bats at the Orlova Chuka cave in Bulgaria. These bats can weigh up to 35 grams, and they were not bothered by the electronic backpacks - which, incidentally, they were all freed from again. Albeit with some difficulty, for they are quite hard to catch. - The study was funded by a Carlsberg Semper Ardens grant to Peter Teglberg Madsen and the Emmy Noether program of Deutsche Forschungsgemeinschaft to Holger R. Goerlitz. The tags recorded the echolocation calls and the movement of each bat in three dimensions, but most importantly, also the echoes returning from their environment during one full night of foraging. This allowed the research team to tap into the sensory scenes of a hunting animal. "We wanted to use the tags to find out how bats control what they "see" when they hunt tiny insects on the wing on superfast timescales. We used the sound recordings to find and track echoes from prey and vegetation, and to our surprise, we found that the bats are guided by extremely weak prey echoes that would be like a whisper to us," said Dr. Laura Stidsholt. The bats themselves control the strength of their returning echoes by calling louder or weaker. So why would they choose these weak echoes, if they could increase the levels by calling louder? To answer this, the researchers quantified the volume of air in which bats could potentially detect an echo for each echolocation call. The bats controlled the size of these sensory volumes by adjusting the strength of their calls. "We found that hunting bats narrow their sensory volumes by more than a thousand times to only focus on the prey and thereby reduce the clutter from other echoes. It's like an acoustic version of a tunnel vision that briefly makes their world much simpler," said Dr. Holger Goerlitz of Max Planck Institute of Ornithology, a co-author of the study. He continued: "The weak prey echoes might therefore be a consequence of the small sensory volumes shaped to hunt close to background clutter." To protect these weak echoes from interference, the research team also showed that the bats used their flight patterns to separate the prey echoes from the background e.g. by flying parallel to trees. "When the bats are hunting, they stay at least a prey detection distance away from the vegetation. We think they do this to avoid masking of the weak prey echoes by the loud echoes from vegetation. 
By continually adjusting both their flight patterns and their sensory volumes during the hunt, the bats simplify the information they need to process," said senior author Professor Peter Teglberg Madsen of Aarhus University. Peter Teglberg Madsen suggests that the bats dedicate their attention and brain to the most essential information - getting their next meal - and that this might be one of the reasons that bats are such efficient hunters. The miniature tags were designed and developed by Associate Professor Mark Johnson at Aarhus Institute of Advanced Sciences. "It was a real challenge to make a computer so small that it could work on a flying bat and still be sensitive enough to pick up these weak sounds," said Mark Johnson. Echolocating bats account for 20 percent of all mammal species and play important roles in ecosystems across the planet. "We think that this strategy has expanded the niches available to look for insects and is one of the reasons that bats have become such versatile hunters, but we don't know if they are versatile enough to cope with all the changes humans are making to the environment," said Professor Peter Teglberg Madsen.
Researchers at Denmark's Aarhus University and Germany's Max Planck Institute of Ornithology used 3-gram computers attached to wild greater mouse-eared bats in Bulgaria to study how they hunt. The miniature tags record each bat's echolocation calls and movement in three dimensions. The institute's Holger Goerlitz said, "We found that hunting bats narrow their sensory volumes by more than a thousand times to only focus on the prey, and thereby reduce the clutter from other echoes. It's like an acoustic version of a tunnel vision that briefly makes their world much simpler." Said Aarhus University's Mark Johnson, who developed the tags, "It was a real challenge to make a computer so small that it could work on a flying bat and still be sensitive enough to pick up these weak sounds."
[]
[]
[]
scitechnews
None
None
None
None
Researchers at Denmark's Aarhus University and Germany's Max Planck Institute of Ornithology used 3-gram computers attached to wild greater mouse-eared bats in Bulgaria to study how they hunt. The miniature tags record each bat's echolocation calls and movement in three dimensions. The institute's Holger Goerlitz said, "We found that hunting bats narrow their sensory volumes by more than a thousand times to only focus on the prey, and thereby reduce the clutter from other echoes. It's like an acoustic version of a tunnel vision that briefly makes their world much simpler." Said Aarhus University's Mark Johnson, who developed the tags, "It was a real challenge to make a computer so small that it could work on a flying bat and still be sensitive enough to pick up these weak sounds." The bats' echolocation is more advanced than previously thought. Bats muffle their screams almost to a whisper when hunting, so echoes from trees and buildings do not drown out echoes from the prey. This is revealed by an international research team led by Aarhus University using miniature computers that they have put on the backs of wild bats. 2021.03.03 | Peter F. Gammelby og Laura Stidsholt An international research team has taken a seat on the back of wild bats to experience their world through echoes recorded on-board the bats by 3-gram computers. In a new paper published in Science Advances , the research team from Aarhus University and the Max Planck Institute of Ornithology attached echo- and motion-recording tags to wild greater mouse-eared bats in Bulgaria. "We experienced the world through the ears of the bats by recording their echoes directly on-board while they were hunting for insects at night," said Dr. Laura Stidsholt, postdoc at Aarhus University and leading author of the study. Facts: - The study was conducted on 10 female greater mouse-eared bats at the Orlova Chuka cave in Bulgaria. These bats can weigh up to 35 grams, and they were not bothered by the electronic backpacks - which, incidentally, they were all freed from again. Albeit with some difficulty, for they are quite hard to catch. - The study was funded by a Carlsberg Semper Ardens grant to Peter Teglberg Madsen and the Emmy Noether program of Deutsche Forschungsgemeinschaft to Holger R. Goerlitz. The tags recorded the echolocation calls and the movement of each bat in three dimensions, but most importantly, also the echoes returning from their environment during one full night of foraging. This allowed the research team to tap into the sensory scenes of a hunting animal. "We wanted to use the tags to find out how bats control what they "see" when they hunt tiny insects on the wing on superfast timescales. We used the sound recordings to find and track echoes from prey and vegetation, and to our surprise, we found that the bats are guided by extremely weak prey echoes that would be like a whisper to us," said Dr. Laura Stidsholt. The bats themselves control the strength of their returning echoes by calling louder or weaker. So why would they choose these weak echoes, if they could increase the levels by calling louder? To answer this, the researchers quantified the volume of air in which bats could potentially detect an echo for each echolocation call. The bats controlled the size of these sensory volumes by adjusting the strength of their calls. "We found that hunting bats narrow their sensory volumes by more than a thousand times to only focus on the prey and thereby reduce the clutter from other echoes. 
It's like an acoustic version of a tunnel vision that briefly makes their world much simpler," said Dr. Holger Goerlitz of Max Planck Institute of Ornithology, a co-author of the study. He continued: "The weak prey echoes might therefore be a consequence of the small sensory volumes shaped to hunt close to background clutter." To protect these weak echoes from interference, the research team also showed that the bats used their flight patterns to separate the prey echoes from the background e.g. by flying parallel to trees. "When the bats are hunting, they stay at least a prey detection distance away from the vegetation. We think they do this to avoid masking of the weak prey echoes by the loud echoes from vegetation. By continually adjusting both their flight patterns and their sensory volumes during the hunt, the bats simplify the information they need to process," said senior author Professor Peter Teglberg Madsen of Aarhus University. Peter Teglberg Madsen suggests that the bats dedicate their attention and brain to the most essential information - getting their next meal - and that this might be one of the reasons that bats are such efficient hunters. The miniature tags were designed and developed by Associate Professor Mark Johnson at Aarhus Institute of Advanced Sciences. In the video below, Laura Stidsholt tells about the work of eavesdropping on bats' hunting technique: "It was a real challenge to make a computer so small that it could work on a flying bat and still be sensitive enough to pick up these weak sounds," said Mark Johnson. Echolocating bats account for 20 percent of all mammal species and play important roles in ecosystems across the planet. "We think that this strategy has expanded the niches available to look for insects and is one of the reasons that bats have become such versatile hunters, but we don't know if they are versatile enough to cope with all the changes humans are making to the environment," said Professor Peter Teglberg Madsen. Postdoc Laura Stidsholt Department of Biologi - Zoophysiologi Aarhus University Mail: [email protected] Mobile: +45 2871 7824 Professor Peter Teglberg Madsen Department of Biologi - Zoophysiologi Aarhus University Mail: [email protected] Mobile: +45 5177 8771
768
Algorithm Could Reduce Complexity of Big Data
Researchers at Texas A&M University, the University of Texas at Austin, and Princeton University have developed an algorithm that can be applied to large datasets, with the ability to extract and directly order features from most to least salient. Texas A&M's Reza Oftadeh said, "There are many ad hoc ways to extract these features using machine learning algorithms, but we now have a fully rigorous theoretical proof that our model can find and extract these prominent features from the data simultaneously, doing so in one pass of the algorithm." The algorithm adds a new cost function to an artificial neural network to provide the exact location of features directly ordered by their relative performance, allowing it to perform classic data analysis on larger datasets more efficiently.
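To make "features ordered from most to least salient" concrete, here is a minimal sketch of the classical one-pass route, singular value decomposition, whose components come out ranked by explained variance. It only illustrates the concept the quote refers to; the team's neural-network cost function is not reproduced here, and the dataset is synthetic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: 1,000 samples whose variance is concentrated in a few directions.
latent = rng.normal(size=(1000, 3)) * np.array([5.0, 2.0, 0.5])
X = latent @ rng.normal(size=(3, 10)) + 0.1 * rng.normal(size=(1000, 10))

# One pass of SVD on the centered data returns components already ordered
# from most to least salient (largest to smallest explained variance).
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

explained = S**2 / np.sum(S**2)
for i, frac in enumerate(explained[:5], start=1):
    print(f"feature {i}: {frac:.1%} of the variance")
```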
[]
[]
[]
scitechnews
None
None
None
None
Researchers at Texas A&M University, the University of Texas at Austin, and Princeton University have developed an algorithm that can be applied to large datasets, with the ability to extract and directly order features from most to least salient. Texas A&M's Reza Oftadeh said, "There are many ad hoc ways to extract these features using machine learning algorithms, but we now have a fully rigorous theoretical proof that our model can find and extract these prominent features from the data simultaneously, doing so in one pass of the algorithm." The algorithm adds a new cost function to an artificial neural network to provide the exact location of features directly ordered by their relative performance, allowing it to perform classic data analysis on larger datasets more efficiently.
769
Storing the Declaration of Independence in a Single Molecule
Just how much space would you need to store all of the world's data? A building? A block? A city? The amount of global data is estimated to be around 44 zettabytes. A 15-million-square-foot warehouse can hold 1 billion gigabytes, or .001 zettabyte. So you would need 44,000 such warehouses - which would cover nearly the entire state of West Virginia. John Chaput is hoping to change all that. A professor of pharmaceutical sciences at UCI with appointments in chemistry and molecular biology & biochemistry, he and his lab team are striving to improve a technique that's already on the bleeding edge of synthetic biology and data storage. By employing an artificial variation of DNA, Chaput is transforming the field of semipermanent data storage. "Unnatural genetic polymers offer a nice paradigm for developing novel soft materials that are capable of low-energy, high-density information storage without the liabilities of DNA," he says. Genetic data encoding is relatively new. Scientists have only been able to effectively record data on and recover data from DNA for about eight years, with the most significant advances happening over the past two. While the process is quickly becoming more cost- and time-effective, other setbacks still inhibit the long-term practicality of the method. For example, DNA is an inherently fragile molecule, susceptible to degradation from numerous naturally occurring enzymes, sunlight, and a slew of acids and bases. For a more robust medium of genetic storage, Chaput chose threose nucleic acid. TNA is much hardier and less prone to degradation from physical factors, including enzymes and acids and bases, but it is not indestructible. TNA can be damaged or destroyed by biological contamination, though this is uncommon. What makes genetic storage so effective is the intrinsic complexity of each molecule versus digital techniques, which use a binary coding system of ones and zeros. Computers convert every symbol, image and sound into a binary sequence and transcribe it to a magnetic or solid-state drive. This process has made incredible leaps and bounds over the past few decades, but soon it may not be enough. "At some time, we will start making more info than we can store," Chaput says. "What do we do then?" By employing the four-letter nucleotide code used in DNA, rather than the binary system, Chaput's team can effectively transcribe data to a strand of DNA, which is made up of four components: adenine, thymine, cytosine and guanine, referred to as A, T, C and G. By sequentially assigning each nucleotide a specific binary number, the researchers can essentially write a binary sequence using these nucleotides. When retrieval of the genetic code is required, a special enzyme that connects the two sequences is added, and the genomic sequence is converted back into the original binary form. TNA also comprises A, T, C and G, but it's a synthetic genetic polymer created by organic chemist Albert Eschenmoser and modified by Chaput to carry information. It's one of several improvements developed by humans to address the innate fragility of DNA. Made using an artificial sugar called threose, TNA has quickly become an important synthetic genetic polymer because of its ability to base pair with other sequences of DNA and RNA, as well as its 100 percent biostability and lack of degrading enzymes. Chaput and his team have already tested this mechanism by transcribing the Declaration of Independence and the UCI seal to a solution of TNA and recovering them. 
He has theorized that - due to the medium's incredible complexity - all of human history, every book ever written, every song ever sung and every Instagram brunch photo ever taken could be stored in half a cup of liquid TNA. "These systems open the door to new possibilities," Chaput says. They're "quite different than the ones used by nature."
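The encoding arithmetic described above is simple enough to sketch in a few lines. The snippet below is purely illustrative: the specific bit-to-base assignment (A=00, C=01, G=10, T=11) is an assumption, and it says nothing about the error correction or chemistry Chaput's lab actually uses.

```python
# Illustrative 2-bits-per-base codec; the A/C/G/T assignment here is an assumption.
BASE_FOR_BITS = {"00": "A", "01": "C", "10": "G", "11": "T"}
BITS_FOR_BASE = {base: bits for bits, base in BASE_FOR_BITS.items()}

def text_to_bases(text: str) -> str:
    bits = "".join(f"{byte:08b}" for byte in text.encode("utf-8"))
    return "".join(BASE_FOR_BITS[bits[i:i + 2]] for i in range(0, len(bits), 2))

def bases_to_text(bases: str) -> str:
    bits = "".join(BITS_FOR_BASE[b] for b in bases)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8)).decode("utf-8")

message = "When in the Course of human events"
strand = text_to_bases(message)
print(strand[:40] + "...")                # first 40 bases of the encoded text
assert bases_to_text(strand) == message   # the round trip recovers the original
print(f"{len(message)} characters -> {len(strand)} bases (4 bases per byte)")
```

Every byte becomes four bases, which is why a nucleic-acid strand packs information so much more densely than any macroscopic medium.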
Researchers at the University of California, Irvine (UCI) are using an artificial variation of DNA for data storage. The researchers are using the four-letter nucleotide code in DNA instead of the binary system to transcribe data to a DNA strand. They sequentially assign each nucleotide a specific binary number, which allows them to write a binary sequence using the nucleotides. A special enzyme that connects the two sequences is added when the genetic code must be retrieved. For their experiment, the researchers chose threose nucleic acid (TNA), a synthetic genetic polymer less prone to degradation from physical factors. The researchers were able to transcribe the Declaration of Independence and the UCI seal to a solution of TNA, and recover them. UCI's John Chaput suggested all data generated during all of human history could be stored in just a half-cup of liquid TNA.
[]
[]
[]
scitechnews
None
None
None
None
Researchers at the University of California, Irvine (UCI) are using an artificial variation of DNA for data storage. The researchers are using the four-letter nucleotide code in DNA instead of the binary system to transcribe data to a DNA strand. They sequentially assign each nucleotide a specific binary number, which allows them to write a binary sequence using the nucleotides. A special enzyme that connects the two sequences is added when the genetic code must be retrieved. For their experiment, the researchers chose threose nucleic acid (TNA), a synthetic genetic polymer less prone to degradation from physical factors. The researchers were able to transcribe the Declaration of Independence and the UCI seal to a solution of TNA, and recover them. UCI's John Chaput suggested all data generated during all of human history could be stored in just a half-cup of liquid TNA. Just how much space would you need to store all of the world's data? A building? A block? A city? The amount of global data is estimated to be around 44 zettabytes. A 15-million-square-foot warehouse can hold 1 billion gigabytes, or .001 zettabyte. So you would need 44,000 such warehouses - which would cover nearly the entire state of West Virginia. John Chaput is hoping to change all that. A professor of pharmaceutical sciences at UCI with appointments in chemistry and molecular biology & biochemistry, he and his lab team are striving to improve a technique that's already on the bleeding edge of synthetic biology and data storage. By employing an artificial variation of DNA, Chaput is transforming the field of semipermanent data storage. "Unnatural genetic polymers offer a nice paradigm for developing novel soft materials that are capable of low-energy, high-density information storage without the liabilities of DNA," he says. Genetic data encoding is relatively new. Scientists have only been able to effectively record data on and recover data from DNA for about eight years, with the most significant advances happening over the past two. While the process is quickly becoming more cost- and time-effective, other setbacks still inhibit the long-term practicality of the method. For example, DNA is an inherently fragile molecule, susceptible to degradation from numerous naturally occurring enzymes, sunlight, and a slew of acids and bases. For a more robust medium of genetic storage, Chaput chose threose nucleic acid. TNA is much hardier and less prone to degradation from physical factors, including enzymes and acids and bases, but it is not indestructible. TNA can be damaged or destroyed by biological contamination, though this is uncommon. What makes genetic storage so effective is the intrinsic complexity of each molecule versus digital techniques, which use a binary coding system of ones and zeros. Computers convert every symbol, image and sound into a binary sequence and transcribe it to a magnetic or solid-state drive. This process has made incredible leaps and bounds over the past few decades, but soon it may not be enough. "At some time, we will start making more info than we can store," Chaput says. "What do we do then?" By employing the four-letter nucleotide code used in DNA, rather than the binary system, Chaput's team can effectively transcribe data to a strand of DNA, which is made up of four components: adenine, thymine, cytosine and guanine, referred to as A, T, C and G. 
By sequentially assigning each nucleotide a specific binary number, the researchers can essentially write a binary sequence using these nucleotides. When retrieval of the genetic code is required, a special enzyme that connects the two sequences is added, and the genomic sequence is converted back into the original binary form. TNA also comprises A, T, C and G, but it's a synthetic genetic polymer created by organic chemist Albert Eschenmoser and modified by Chaput to carry information. It's one of several improvements developed by humans to address the innate fragility of DNA. Made using an artificial sugar called threose, TNA has quickly become an important synthetic genetic polymer because of its ability to base pair with other sequences of DNA and RNA, as well as its 100 percent biostability and lack of degrading enzymes. Chaput and his team have already tested this mechanism by transcribing the Declaration of Independence and the UCI seal to a solution of TNA and recovering them. He has theorized that - due to the medium's incredible complexity - all of human history, every book ever written, every song ever sung and every Instagram brunch photo ever taken could be stored in half a cup of liquid TNA. "These systems open the door to new possibilities," Chaput says. They're "quite different than the ones used by nature."
770
Engineering Platform Offers Collaborative Cloud Options for Sustainable Manufacturing
WEST LAFAYETTE, Ind. - A Purdue University engineering innovator has developed a cloud-based platform aimed at mapping inter-industry dependence networks for materials and waste generation among manufacturers in sectors such as chemicals, pharmaceuticals and other industries tied to biobased economies. Shweta Singh , an assistant professor in Purdue's College of Engineering , led a team that developed a new method for automated creation of physical input-output tables to track flows in manufacturing networks. PIOTs provide a detailed mapping of inter-industrial dependence to meet a manufacturing target that can help determine economic and environmental outcomes for various manufacturing pathways based on material requirements. "Unlike current technologies that use tiered data flow systems or time series approximations to fill data bandgaps, our new platform allows for dynamic changes in manufacturing network via mechanistic models developed as computer codes or simulation systems to update network structure for industrial interactions," Singh said. "The goal of this technology is to assist manufacturers to track the materials flow and supply network demand to optimize the process and reduce overall waste, as well as assist in the decision-making process to pick the most sustainable and resilient technology in any supplier network." Singh said the algorithms in this bottom-up modular model help to map resource flows with enhanced accuracy and reliability as data can be more easily reconciled based on mechanistic models through this approach. She added that the platform offers a collaborative, cloud-based environment that may have applications for the pharmaceutical, food and chemical processing industries. Singh leads Purdue's Sustainable Industrial Natural Coupled Systems (SINCS) Group. The mission of the SINCS group is to enable sustainable growth of biobased economies and industries along with ensuring local and global ecological sustainability. The work is supported by the National Science Foundation's Division of Chemical, Bioengineering, Environmental and Transport System. This research was supported by the National Science Foundation under grant No. CBET- 1805741. Singh and her team worked with the Purdue Research Foundation Office of Technology Commercialization to patent the technology. The innovators are looking for partners to continue developing and commercializing their platform technology for various industries. For more information on licensing and other opportunities, contact Abhi Karve of OTC at [email protected] and mention track code 2020-SING-68956. A video explaining the technology is available here . About Purdue Research Foundation Office of Technology Commercialization The Purdue Research Foundation Office of Technology Commercialization operates one of the most comprehensive technology transfer programs among leading research universities in the U.S. Services provided by this office support the economic development initiatives of Purdue University and benefit the university's academic activities through commercializing, licensing and protecting Purdue intellectual property. The office is located in the Convergence Center for Innovation and Collaboration in Discovery Park District , adjacent to the Purdue campus. In fiscal year 2020, the office reported 148 deals finalized with 225 technologies signed, 408 disclosures received and 180 issued U.S. patents. 
The office is managed by the Purdue Research Foundation, which received the 2019 Innovation and Economic Prosperity Universities Award for Place from the Association of Public and Land-grant Universities. In 2020, IPWatchdog Institute ranked Purdue third nationally in startup creation and in the top 20 for patents. The Purdue Research Foundation is a private, nonprofit foundation created to advance the mission of Purdue University. Contact [email protected] for more information. About Purdue University Purdue University is a top public research institution developing practical solutions to today's toughest challenges. Ranked the No. 5 Most Innovative University in the United States by U.S. News & World Report, Purdue delivers world-changing research and out-of-this-world discovery. Committed to hands-on and online, real-world learning, Purdue offers a transformative education to all. Committed to affordability and accessibility, Purdue has frozen tuition and most fees at 2012-13 levels, enabling more students than ever to graduate debt-free. See how Purdue never stops in the persistent pursuit of the next giant leap at purdue.edu . Writer: Chris Adam, [email protected] Source: Shweta Singh, [email protected]
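To see what a physical input-output table does, here is a minimal sketch using the classical input-output accounting that PIOTs build on, with made-up flows between three sectors; it is not the Purdue platform's code, and the sector names and numbers are hypothetical.

```python
import numpy as np

sectors = ["feedstock", "chemicals", "pharma"]

# Hypothetical physical flows in tonnes: Z[i, j] = material from sector i consumed by sector j.
Z = np.array([
    [0.0, 40.0, 10.0],
    [5.0,  0.0, 30.0],
    [0.0,  2.0,  0.0],
])
final_demand = np.array([20.0, 15.0, 50.0])   # tonnes leaving the network as final products
total_output = Z.sum(axis=1) + final_demand   # each sector's total physical output

# Technical coefficients A[i, j]: tonnes of input i needed per tonne of output from sector j.
A = Z / total_output                          # broadcasting divides each column j by x_j

# Leontief inverse: total (direct plus indirect) output needed to serve a new final demand.
new_demand = np.array([0.0, 0.0, 80.0])       # e.g. increase only the pharma deliveries
x_needed = np.linalg.solve(np.eye(len(sectors)) - A, new_demand)
for name, x in zip(sectors, x_needed):
    print(f"{name}: {x:.1f} tonnes of total output required")
```

The platform's contribution, per the release, is generating and updating such tables automatically from mechanistic process models rather than assembling them by hand.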
A new cloud-based platform designed by Purdue University researchers aims to map inter-industry dependencies for materials and waste generation among manufacturers in sectors linked to bio-based economies. Purdue's Shweta Singh and colleagues invented a method for automatically generating physical input-output tables to track flows in these networks. Said Singh, "Our new platform allows for dynamic changes in manufacturing network via mechanistic models developed as computer codes or simulation systems to update network structure for industrial interactions. The goal of this technology is to assist manufacturers to track the materials flow and supply network demand to optimize the process and reduce overall waste, as well as assist in the decision-making process to pick the most sustainable and resilient technology in any supplier network."
[]
[]
[]
scitechnews
None
None
None
None
A new cloud-based platform designed by Purdue University researchers aims to map inter-industry dependencies for materials and waste generation among manufacturers in sectors linked to bio-based economies. Purdue's Shweta Singh and colleagues invented a method for automatically generating physical input-output tables to track flows in these networks. Said Singh, "Our new platform allows for dynamic changes in manufacturing network via mechanistic models developed as computer codes or simulation systems to update network structure for industrial interactions. The goal of this technology is to assist manufacturers to track the materials flow and supply network demand to optimize the process and reduce overall waste, as well as assist in the decision-making process to pick the most sustainable and resilient technology in any supplier network." WEST LAFAYETTE, Ind. - A Purdue University engineering innovator has developed a cloud-based platform aimed at mapping inter-industry dependence networks for materials and waste generation among manufacturers in sectors such as chemicals, pharmaceuticals and other industries tied to biobased economies. Shweta Singh , an assistant professor in Purdue's College of Engineering , led a team that developed a new method for automated creation of physical input-output tables to track flows in manufacturing networks. PIOTs provide a detailed mapping of inter-industrial dependence to meet a manufacturing target that can help determine economic and environmental outcomes for various manufacturing pathways based on material requirements. "Unlike current technologies that use tiered data flow systems or time series approximations to fill data bandgaps, our new platform allows for dynamic changes in manufacturing network via mechanistic models developed as computer codes or simulation systems to update network structure for industrial interactions," Singh said. "The goal of this technology is to assist manufacturers to track the materials flow and supply network demand to optimize the process and reduce overall waste, as well as assist in the decision-making process to pick the most sustainable and resilient technology in any supplier network." Singh said the algorithms in this bottom-up modular model help to map resource flows with enhanced accuracy and reliability as data can be more easily reconciled based on mechanistic models through this approach. She added that the platform offers a collaborative, cloud-based environment that may have applications for the pharmaceutical, food and chemical processing industries. Singh leads Purdue's Sustainable Industrial Natural Coupled Systems (SINCS) Group. The mission of the SINCS group is to enable sustainable growth of biobased economies and industries along with ensuring local and global ecological sustainability. The work is supported by the National Science Foundation's Division of Chemical, Bioengineering, Environmental and Transport System. This research was supported by the National Science Foundation under grant No. CBET- 1805741. Singh and her team worked with the Purdue Research Foundation Office of Technology Commercialization to patent the technology. The innovators are looking for partners to continue developing and commercializing their platform technology for various industries. For more information on licensing and other opportunities, contact Abhi Karve of OTC at [email protected] and mention track code 2020-SING-68956. A video explaining the technology is available here . 
About Purdue Research Foundation Office of Technology Commercialization The Purdue Research Foundation Office of Technology Commercialization operates one of the most comprehensive technology transfer programs among leading research universities in the U.S. Services provided by this office support the economic development initiatives of Purdue University and benefit the university's academic activities through commercializing, licensing and protecting Purdue intellectual property. The office is located in the Convergence Center for Innovation and Collaboration in Discovery Park District , adjacent to the Purdue campus. In fiscal year 2020, the office reported 148 deals finalized with 225 technologies signed, 408 disclosures received and 180 issued U.S. patents. The office is managed by the Purdue Research Foundation, which received the 2019 Innovation and Economic Prosperity Universities Award for Place from the Association of Public and Land-grant Universities. In 2020, IPWatchdog Institute ranked Purdue third nationally in startup creation and in the top 20 for patents. The Purdue Research Foundation is a private, nonprofit foundation created to advance the mission of Purdue University. Contact [email protected] for more information. About Purdue University Purdue University is a top public research institution developing practical solutions to today's toughest challenges. Ranked the No. 5 Most Innovative University in the United States by U.S. News & World Report, Purdue delivers world-changing research and out-of-this-world discovery. Committed to hands-on and online, real-world learning, Purdue offers a transformative education to all. Committed to affordability and accessibility, Purdue has frozen tuition and most fees at 2012-13 levels, enabling more students than ever to graduate debt-free. See how Purdue never stops in the persistent pursuit of the next giant leap at purdue.edu . Writer: Chris Adam, [email protected] Source: Shweta Singh, [email protected]
772
Helping Soft Robots Turn Rigid on Demand
Imagine a robot. Perhaps you've just conjured a machine with a rigid, metallic exterior. While robots armored with hard exoskeletons are common, they're not always ideal. Soft-bodied robots, inspired by fish or other squishy creatures, might better adapt to changing environments and work more safely with people. Roboticists generally have to decide whether to design a hard- or soft-bodied robot for a particular task. But that tradeoff may no longer be necessary. Working with computer simulations, MIT researchers have developed a concept for a soft-bodied robot that can turn rigid on demand. The approach could enable a new generation of robots that combine the strength and precision of rigid robots with the fluidity and safety of soft ones. "This is the first step in trying to see if we can get the best of both worlds," says James Bern, the paper's lead author and a postdoc in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). Bern will present the research at the IEEE International Conference on Soft Robotics next month. Bern's advisor, Daniela Rus, who is the CSAIL director and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, is the paper's other author. Roboticists have experimented with myriad mechanisms to operate soft robots, including inflating balloon-like chambers in a robot's arm or grabbing objects with vacuum-sealed coffee grounds. However, a key unsolved challenge for soft robotics is control - how to drive the robot's actuators in order to achieve a given goal. Until recently, most soft robots were controlled manually, but in 2017 Bern and his colleagues proposed that an algorithm could take the reins. Using a simulation to help control a cable-driven soft robot, they picked a target position for the robot and had a computer figure out how much to pull on each of the cables in order to get there. A similar sequence happens in our bodies each time we reach for something: A target position for our hand is translated into contractions of the muscles in our arm. Now, Bern and his colleagues are using similar techniques to ask a question that goes beyond the robot's movement: "If I pull the cables in just the right way, can I get the robot to act stiff?" Bern says he can - at least in a computer simulation - thanks to inspiration from the human arm. While contracting the biceps alone can bend your elbow to a certain degree, contracting the biceps and triceps simultaneously can lock your arm rigidly in that position. Put simply, "you can get stiffness by pulling on both sides of something," says Bern. So, he applied the same principle to his robots. 
The researchers' paper lays out a way to simultaneously control the position and stiffness of a cable-driven soft robot. The method takes advantage of the robots' multiple cables - using some to twist and turn the body, while using others to counterbalance each other to tweak the robot's rigidity. Bern emphasizes that the advance isn't a revolution in mechanical engineering, but rather a new twist on controlling cable-driven soft robots. "This is an intuitive way of expanding how you can control a soft robot," he says. "It's just encoding that idea [of on-demand rigidity] into something a computer can work with." Bern hopes his roadmap will one day allow users to control a robot's rigidity as easily as its motion. On the computer, Bern used his roadmap to simulate movement and rigidity adjustment in robots of various shapes. He tested how well the robots, when stiffened, could resist displacement when pushed. Generally, the robots remained rigid as intended, though they were not equally resistant from all angles. "Dual-mode materials that can change stiffness are always fascinating," says Muhammad Hussain, an electrical engineer at the University of California at Berkeley, who was not involved with the research. He suggested potential applications in health care, where soft robots could one day travel through the bloodstream and then stiffen to perform microsurgery at a particular site in the body. Hussain says Bern's demonstration "shows a viable path toward that future." Bern is building a prototype robot to test out his rigidity-on-demand control system. But he hopes to one day take the technology out of the lab. "Interacting with humans is definitely a vision for soft robotics," he says. Bern points to potential applications in caring for human patients, where a robot's softness could enhance safety, while its ability to become rigid could allow for lifting when necessary. "The core message is to make it easy to control robots' stiffness," says Bern. "Let's start making soft robots that are safe but can also act rigid on demand, and expand the spectrum of tasks robots can perform."
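Bern's biceps-and-triceps analogy can be boiled down to a toy calculation for a single cable-driven joint. The sketch below assumes a simple model in which net torque depends on the difference between two opposing cable tensions and stiffness grows with their sum; the pulley radius and stiffness constant are arbitrary, and the paper's actual controller, which solves this over a full simulated soft body, is not reproduced here.

```python
def antagonistic_tensions(desired_torque, desired_stiffness,
                          pulley_radius=0.02, stiffness_per_newton=4.0):
    """Split a torque/stiffness request across two opposing cables on one joint.

    Toy model assumptions (not from the paper):
      net torque      = pulley_radius * (t_pull - t_oppose)
      joint stiffness = stiffness_per_newton * (t_pull + t_oppose)
    so co-contraction (raising both tensions) stiffens the joint without moving it.
    """
    total = desired_stiffness / stiffness_per_newton   # t_pull + t_oppose
    diff = desired_torque / pulley_radius              # t_pull - t_oppose
    t_pull, t_oppose = 0.5 * (total + diff), 0.5 * (total - diff)
    if min(t_pull, t_oppose) < 0:
        raise ValueError("infeasible: cables can only pull; raise stiffness or lower torque")
    return t_pull, t_oppose

# Same pose (same net torque), two very different stiffness settings.
print(antagonistic_tensions(desired_torque=0.1, desired_stiffness=40.0))    # compliant
print(antagonistic_tensions(desired_torque=0.1, desired_stiffness=400.0))   # rigid
```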
Massachusetts Institute of Technology (MIT) researchers used computer models to design a soft-bodied, cable-driven robot that turns rigid on demand by simultaneously controlling its position and stiffness. MIT's James Bern said, "It's just encoding that idea [of on-demand rigidity] into something a computer can work with." Bern used this roadmap to model the tuning of movement and rigidity in robots of various shapes, and tested their ability, when stiffened, to resist displacement when pushed. In simulation, the robots generally remained rigid as intended, but were not equally resistant from all angles. Bern is constructing a prototype robot to test the control system, and said his hope is to "start making soft robots that are safe but can also act rigid on demand, and expand the spectrum of tasks robots can perform."
[]
[]
[]
scitechnews
None
None
None
None
Massachusetts Institute of Technology (MIT) researchers used computer models to design a soft-bodied, cable-driven robot that turns rigid on demand by simultaneously controlling its position and stiffness. MIT's James Bern said, "It's just encoding that idea [of on-demand rigidity] into something a computer can work with." Bern used this roadmap to model the tuning of movement and rigidity in robots of various shapes, and tested their ability, when stiffened, to resist displacement when pushed. In simulation, the robots generally remained rigid as intended, but were not equally resistant from all angles. Bern is constructing a prototype robot to test the control system, and said his hope is to "start making soft robots that are safe but can also act rigid on demand, and expand the spectrum of tasks robots can perform." Imagine a robot. Perhaps you've just conjured a machine with a rigid, metallic exterior. While robots armored with hard exoskeletons are common, they're not always ideal. Soft-bodied robots, inspired by fish or other squishy creatures, might better adapt to changing environments and work more safely with people. Roboticists generally have to decide whether to design a hard- or soft-bodied robot for a particular task. But that tradeoff may no longer be necessary. Working with computer simulations, MIT researchers have developed a concept for a soft-bodied robot that can turn rigid on demand. The approach could enable a new generation of robots that combine the strength and precision of rigid robots with the fluidity and safety of soft ones. "This is the first step in trying to see if we can get the best of both worlds," says James Bern, the paper's lead author and a postdoc in MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). Bern will present the research at the IEEE International Conference on Soft Robotics next month. Bern's advisor, Daniela Rus, who is the CSAIL director and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science, is the paper's other author. Roboticists have experimented with myriad mechanisms to operate soft robots, including inflating balloon-like chambers in a robot's arm or grabbing objects with vacuum-sealed coffee grounds. However, a key unsolved challenge for soft robotics is control - how to drive the robot's actuators in order to achieve a given goal. Until recently, most soft robots were controlled manually, but in 2017 Bern and his colleagues proposed that an algorithm could take the reins. Using a simulation to help control a cable-driven soft robot, they picked a target position for the robot and had a computer figure out how much to pull on each of the cables in order to get there. A similar sequence happens in our bodies each time we reach for something: A target position for our hand is translated into contractions of the muscles in our arm. Now, Bern and his colleagues are using similar techniques to ask a question that goes beyond the robot's movement: "If I pull the cables in just the right way, can I get the robot to act stiff?" Bern says he can - at least in a computer simulation - thanks to inspiration from the human arm. While contracting the biceps alone can bend your elbow to a certain degree, contracting the biceps and triceps simultaneously can lock your arm rigidly in that position. Put simply, "you can get stiffness by pulling on both sides of something," says Bern. So, he applied the same principle to his robots. 
The researchers' paper lays out a way to simultaneously control the position and stiffness of a cable-driven soft robot. The method takes advantage of the robots' multiple cables - using some to twist and turn the body, while using others to counterbalance each other to tweak the robot's rigidity. Bern emphasizes that the advance isn't a revolution in mechanical engineering, but rather a new twist on controlling cable-driven soft robots. "This is an intuitive way of expanding how you can control a soft robot," he says. "It's just encoding that idea [of on-demand rigidity] into something a computer can work with." Bern hopes his roadmap will one day allow users to control a robot's rigidity as easily as its motion. On the computer, Bern used his roadmap to simulate movement and rigidity adjustment in robots of various shapes. He tested how well the robots, when stiffened, could resist displacement when pushed. Generally, the robots remained rigid as intended, though they were not equally resistant from all angles. "Dual-mode materials that can change stiffness are always fascinating," says Muhammad Hussain, an electrical engineer at the University of California at Berkeley, who was not involved with the research. He suggested potential applications in health care, where soft robots could one day travel through the bloodstream and then stiffen to perform microsurgery at a particular site in the body. Hussain says Bern's demonstration "shows a viable path toward that future." Bern is building a prototype robot to test out his rigidity-on-demand control system. But he hopes to one day take the technology out of the lab. "Interacting with humans is definitely a vision for soft robotics," he says. Bern points to potential applications in caring for human patients, where a robot's softness could enhance safety, while its ability to become rigid could allow for lifting when necessary. "The core message is to make it easy to control robots' stiffness," says Bern. "Let's start making soft robots that are safe but can also act rigid on demand, and expand the spectrum of tasks robots can perform."
773
White House Cites 'Active Threat,' Urges Action Despite Microsoft Patch
The White House has advised computer network operators to further efforts to determine whether their systems were targeted by an attack on Microsoft's Outlook email program, warning of serious vulnerabilities still unresolved. Although Microsoft issued a patch to correct flaws in Outlook's software, a back door that can allow access to compromised servers remains open; a White House official called this "an active threat still developing." A source informed Reuters that more than 20,000 U.S. organizations had been compromised by the hack, which Microsoft attributed to China; although for now only a small percentage of infected networks have been compromised via the back door, more attacks are anticipated. Said the White House official, "Patching and mitigation is not remediation if the servers have already been compromised, and it is essential that any organization with a vulnerable server take measures to determine if they were already targeted."
[]
[]
[]
scitechnews
None
None
None
None
The White House has advised computer network operators to further efforts to determine whether their systems were targeted by an attack on Microsoft's Outlook email program, warning of serious vulnerabilities still unresolved. Although Microsoft issued a patch to correct flaws in Outlook's software, a back door that can allow access to compromised servers remains open; a White House official called this "an active threat still developing." A source informed Reuters that more than 20,000 U.S. organizations had been compromised by the hack, which Microsoft attributed to China; although for now only a small percentage of infected networks have been compromised via the back door, more attacks are anticipated. Said the White House official, "Patching and mitigation is not remediation if the servers have already been compromised, and it is essential that any organization with a vulnerable server take measures to determine if they were already targeted."
774
Ghana Using Drones to Deliver Coronavirus Vaccines to Rural Communities
San Francisco-based startup Zipline is delivering coronavirus vaccines to rural communities in Ghana via autonomous drones as part of the World Health Organization's COVAX program. The Ghanaian service started March 2 with Zipline's deployment of 4,500 doses across the Ashanti Region in 36 separate flights, in partnership with Ghana's government and United Parcel Service. Each six-foot-long drone's flight is monitored from its distribution center, and the aircraft can fly nearly 100 miles round-trip on a single battery charge while conveying four pounds of cargo. Orders can be scheduled ahead or placed on demand for just-in-time delivery, and drones can be dispatched within seven minutes of an order's receipt. The drones drop their cargo to destination sites via parachute.
[]
[]
[]
scitechnews
None
None
None
None
San Francisco-based startup Zipline is delivering coronavirus vaccines to rural communities in Ghana via autonomous drones as part of the World Health Organization's COVAX program. The Ghanaian service started March 2 with Zipline's deployment of 4,500 doses across the Ashanti Region in 36 separate flights, in partnership with Ghana's government and United Parcel Service. Each six-foot-long drone's flight is monitored from its distribution center, and the aircraft can fly nearly 100 miles round-trip on a single battery charge while conveying four pounds of cargo. Orders can be scheduled ahead or placed on demand for just-in-time delivery, and drones can be dispatched within seven minutes of an order's receipt. The drones drop their cargo to destination sites via parachute.
775
From Vote to Virus, Misinformation Campaign Targets Latinos
Campaigns to target Latinos with misinformation on topics ranging from the presidential election to the coronavirus vaccine highlight how social media and other technologies are being exploited to such a degree that countermeasures cannot keep up. Examples include doctored videos and images, quotes taken out of context, and conspiracy theories. Former Democratic Party chairman Tom Perez said, "The volume and sources of Spanish-language information are exceedingly wide-ranging and that should scare everyone." An academic study found that most false narratives in the Spanish-language community "were translated from English and circulated via prominent platforms like Facebook, Twitter, and YouTube, as well as in closed group chat platforms like WhatsApp, and efforts often appeared coordinated across platforms." Watchers of Spanish-language content online point to a shortage of reliable sources with sufficiently large followings to consistently dispel falsehoods.
[]
[]
[]
scitechnews
None
None
None
None
Campaigns to target Latinos with misinformation on topics ranging from the presidential election to the coronavirus vaccine highlight how social media and other technologies are being exploited to such a degree that countermeasures cannot keep up. Examples include doctored videos and images, quotes taken out of context, and conspiracy theories. Former Democratic Party chairman Tom Perez said, "The volume and sources of Spanish-language information are exceedingly wide-ranging and that should scare everyone." An academic study found that most false narratives in the Spanish-language community "were translated from English and circulated via prominent platforms like Facebook, Twitter, and YouTube, as well as in closed group chat platforms like WhatsApp, and efforts often appeared coordinated across platforms." Watchers of Spanish-language content online point to a shortage of reliable sources with sufficiently large followings to consistently dispel falsehoods.
776
Rapid 3D Printing Method Moves Toward 3D-Printed Organs
BUFFALO, N.Y. - It looks like science fiction: A machine dips into a shallow vat of translucent yellow goo and pulls out what becomes a life-sized hand. But the seven-second video, which is sped up from 19 minutes, is real. The hand, which would take six hours to create using conventional 3D printing methods, demonstrates what University at Buffalo engineers say is progress toward 3D-printed human tissue and organs - biotechnology that could eventually save countless lives lost due to the shortage of donor organs. "The technology we've developed is 10-50 times faster than the industry standard, and it works with large sample sizes that have been very difficult to achieve previously," says the study's co-lead author Ruogang Zhao, PhD, associate professor of biomedical engineering. The work is described in a study published Feb. 15 in the journal Advanced Healthcare Materials. It centers on a 3D printing method called stereolithography and jelly-like materials known as hydrogels, which are used to create, among other things, diapers, contact lenses and scaffolds in tissue engineering. The latter application is particularly useful in 3D printing, and it's something the research team spent a major part of its effort optimizing to achieve its incredibly fast and accurate 3D printing technique.
Researchers at the University at Buffalo (UB), Syracuse University, the U.S. Department of Veterans Affairs Western New York Healthcare System, and the Roswell Park Comprehensive Cancer Center have developed a three-dimensional (3D) printing method that is reportedly 10 to 50 times faster than the industry standard. The technique combines stereolithography and hydrogels, the latter of which required optimization to enhance speed and accuracy. UB's Chi Zhou said the process "significantly reduces part deformation and cellular injuries caused by the prolonged exposure to the environmental stresses you commonly see in conventional 3D printing methods." The researchers said the technique is especially conducive to printing cells with embedded blood vessel networks, which is expected to be critical to the manufacture of 3D-printed human tissue and organs.
[]
[]
[]
scitechnews
None
None
None
None
Researchers at the University at Buffalo (UB), Syracuse University, the U.S. Department of Veterans Affairs Western New York Healthcare System, and the Roswell Park Comprehensive Cancer Center have developed a three-dimensional (3D) printing method that is reportedly 10 to 50 times faster than the industry standard. The technique combines stereolithography and hydrogels, the latter of which required optimization to enhance speed and accuracy. UB's Chi Zhou said the process "significantly reduces part deformation and cellular injuries caused by the prolonged exposure to the environmental stresses you commonly see in conventional 3D printing methods." The researchers said the technique is especially conducive to printing cells with embedded blood vessel networks, which is expected to be critical to the manufacture of 3D-printed human tissue and organs. BUFFALO, N.Y. - It looks like science fiction: A machine dips into a shallow vat of translucent yellow goo and pulls out what becomes a life-sized hand. But the seven-second video, which is sped up from 19 minutes, is real. The hand, which would take six hours to create using conventional 3D printing methods, demonstrates what University at Buffalo engineers say is progress toward 3D-printed human tissue and organs - biotechnology that could eventually save countless lives lost due to the shortage of donor organs. "The technology we've developed is 10-50 times faster than the industry standard, and it works with large sample sizes that have been very difficult to achieve previously," says the study's co-lead author Ruogang Zhao, PhD, associate professor of biomedical engineering. The work is described in a study published Feb. 15 in the journal Advanced Healthcare Materials. It centers on a 3D printing method called stereolithography and jelly-like materials known as hydrogels, which are used to create, among other things, diapers, contact lenses and scaffolds in tissue engineering. The latter application is particularly useful in 3D printing, and it's something the research team spent a major part of its effort optimizing to achieve its incredibly fast and accurate 3D printing technique.
777
The Robots Are Coming for Phil in Accounting
The trend - quietly building for years, but accelerating to warp speed since the pandemic - goes by the sleepy moniker "robotic process automation." And it is transforming workplaces at a pace that few outsiders appreciate. Nearly 8 in 10 corporate executives surveyed by Deloitte last year said they had implemented some form of R.P.A. Another 16 percent said they planned to do so within three years. Most of this automation is being done by companies you've probably never heard of. UiPath, the largest stand-alone automation firm, is valued at $35 billion - roughly the size of eBay - and is slated to go public later this year. Other companies like Automation Anywhere and Blue Prism, which have Fortune 500 companies like Coca-Cola and Walgreens Boots Alliance as clients, are also enjoying breakneck growth, and tech giants like Microsoft have recently introduced their own automation products to get in on the action. Executives generally spin these bots as being good for everyone, "streamlining operations" while "liberating workers" from mundane and repetitive tasks. But they are also liberating plenty of people from their jobs. Independent experts say that major corporate R.P.A. initiatives have been followed by rounds of layoffs, and that cutting costs, not improving workplace conditions, is usually the driving factor behind the decision to automate. Craig Le Clair, an analyst with Forrester Research who studies the corporate automation market, said that for executives, much of the appeal of R.P.A. bots is that they are cheap, easy to use and compatible with their existing back-end systems. He said that companies often rely on them to juice short-term profits, rather than embarking on more expensive tech upgrades that might take years to pay for themselves. "It's not a moonshot project like a lot of A.I., so companies are doing it like crazy," Mr. Le Clair said. "With R.P.A., you can build a bot that costs $10,000 a year and take out two to four humans."
Many U.S. companies are adopting software to perform tasks ranging from simple accounting to more sophisticated cognitive work that previously involved teams of employees, and white-collar workers are concerned. This robotic process automation (RPA) trend is growing rapidly, and independent experts warn that layoffs can follow major corporate RPA efforts, which are implemented to reduce costs rather than improve workplace conditions. Craig Le Clair at advisory firm Forrester Research said RPA bots' affordability, ease of use, and compatibility with back-end systems are their key selling points to executives, who would rather boost short-term profits than make expensive, time-consuming upgrades. Research by Massachusetts Institute of Technology and Boston University economists indicated that task automation has outpaced the creation of new jobs since the late 1980s, possibly because of popular "so-so technologies" that are sufficient to replace human beings, but do not boost productivity or job creation.
[]
[]
[]
scitechnews
None
None
None
None
Many U.S. companies are adopting software to perform tasks ranging from simple accounting to more sophisticated cognitive work that previously involved teams of employees, and white-collar workers are concerned. This robotic process automation (RPA) trend is growing rapidly, and independent experts warn that layoffs can follow major corporate RPA efforts, which are implemented to reduce costs rather than improve workplace conditions. Craig Le Clair at advisory firm Forrester Research said RPA bots' affordability, ease of use, and compatibility with back-end systems are their key selling points to executives, who would rather boost short-term profits than make expensive, time-consuming upgrades. Research by Massachusetts Institute of Technology and Boston University economists indicated that task automation has outpaced the creation of new jobs since the late 1980s, possibly because of popular "so-so technologies" that are sufficient to replace human beings, but do not boost productivity or job creation. The trend - quietly building for years, but accelerating to warp speed since the pandemic - goes by the sleepy moniker "robotic process automation." And it is transforming workplaces at a pace that few outsiders appreciate. Nearly 8 in 10 corporate executives surveyed by Deloitte last year said they had implemented some form of R.P.A. Another 16 percent said they planned to do so within three years. Most of this automation is being done by companies you've probably never heard of. UiPath, the largest stand-alone automation firm, is valued at $35 billion - roughly the size of eBay - and is slated to go public later this year. Other companies like Automation Anywhere and Blue Prism, which have Fortune 500 companies like Coca-Cola and Walgreens Boots Alliance as clients, are also enjoying breakneck growth, and tech giants like Microsoft have recently introduced their own automation products to get in on the action. Executives generally spin these bots as being good for everyone, "streamlining operations" while "liberating workers" from mundane and repetitive tasks. But they are also liberating plenty of people from their jobs. Independent experts say that major corporate R.P.A. initiatives have been followed by rounds of layoffs, and that cutting costs, not improving workplace conditions, is usually the driving factor behind the decision to automate. Craig Le Clair, an analyst with Forrester Research who studies the corporate automation market, said that for executives, much of the appeal of R.P.A. bots is that they are cheap, easy to use and compatible with their existing back-end systems. He said that companies often rely on them to juice short-term profits, rather than embarking on more expensive tech upgrades that might take years to pay for themselves. "It's not a moonshot project like a lot of A.I., so companies are doing it like crazy," Mr. Le Clair said. "With R.P.A., you can build a bot that costs $10,000 a year and take out two to four humans."
778
Honda Launches Advanced Self-Driving Cars in Japan
Japanese automaker Honda has launched a limited roll-out of its new Legend, which it calls the most advanced driverless vehicle licensed for the road, in Japan. The Legend's capabilities include adaptive driving in lanes, passing and switching lanes in certain conditions, and an emergency stop function if a driver is unresponsive to handover warnings. The Legend's autonomy is rated Level 3 on a scale of 0 to 5; analysts said a true Level 4 vehicle, in which a car no longer requires a driver at all, is a long time off. Experts said the Legend's limited rollout would help gauge demand for more autonomous vehicles.
[]
[]
[]
scitechnews
None
None
None
None
Japanese automaker Honda has launched a limited roll-out of its new Legend, which it calls the most advanced driverless vehicle licensed for the road, in Japan. The Legend's capabilities include adaptive driving in lanes, passing and switching lanes in certain conditions, and an emergency stop function if a driver is unresponsive to handover warnings. The Legend's autonomy is rated Level 3 on a scale of 0 to 5; analysts said a true Level 4 vehicle, in which a car no longer requires a driver at all, is a long time off. Experts said the Legend's limited rollout would help gauge demand for more autonomous vehicles.
780
Autonomous Underwater Robot Saves People From Drowning
A research team at Germany's Fraunhofer Institute for Optronics, System Technologies, and Image Exploitation has developed an underwater rescue robot with the help of the city of Halle's water rescue service. The autonomous system will aid lifeguards and lifesavers, and rescue swimmers from drowning. Surveillance cameras register the movements and position of a distressed swimmer and transmit coordinates to the robot, which is dispatched from a docking station on the pool floor. Upon reaching the swimmer, the robot carries them to the surface, with a mechanism for fixing the swimmer in place preventing him/her from sliding down as they surface. The robot requires acoustic sensors to rescue people in lakes with limited visibility; a test under such conditions showed the robot could successfully rescue a dummy swimmer in just over two minutes.
[]
[]
[]
scitechnews
None
None
None
None
A research team at Germany's Fraunhofer Institute for Optronics, System Technologies, and Image Exploitation has developed an underwater rescue robot with the help of the city of Halle's water rescue service. The autonomous system will aid lifeguards and lifesavers, and rescue swimmers from drowning. Surveillance cameras register the movements and position of a distressed swimmer and transmit coordinates to the robot, which is dispatched from a docking station on the pool floor. Upon reaching the swimmer, the robot carries them to the surface, with a mechanism for fixing the swimmer in place preventing him/her from sliding down as they surface. The robot requires acoustic sensors to rescue people in lakes with limited visibility; a test under such conditions showed the robot could successfully rescue a dummy swimmer in just over two minutes.
782
Meet the Israeli Robot That 'Hears' Through Dead Locust's Ear
Researchers at Israel's Tel Aviv University (TAU) have connected a dead locust's auditory organ to a robot, which responds to sounds picked up by the ear. TAU's Ben M. Maoz said, "Our task was to replace the robot's electronic microphone with a dead insect's ear, use the ear's ability to detect the electrical signals from the environment - in this case vibrations in the air - and, using a special chip, convert the insect input to that of the robot." In a demonstration, the locust's ear caused the robot to move forward in response to a single hand clap, and move backward when the researchers clapped twice. Said Maoz, "This initiative opens the door to sensory integrations between robots and insects - and may make much more cumbersome and expensive developments in the field of robotics redundant."
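The one-clap/two-clap demonstration comes down to a small piece of glue logic between the ear's signal and the motor commands. The sketch below is a generic clap counter over an amplitude envelope with arbitrary threshold and timing values; it is not the Tel Aviv team's chip or code, only an illustration of the kind of mapping described.

```python
def count_claps(envelope, sample_rate_hz=1000, threshold=0.5, refractory_s=0.15):
    """Count short loud bursts in an amplitude envelope (one value per sample)."""
    refractory = int(refractory_s * sample_rate_hz)
    claps, last_onset = 0, -refractory
    for i, level in enumerate(envelope):
        if level >= threshold and i - last_onset >= refractory:
            claps += 1
            last_onset = i
    return claps

def command_for(claps):
    return {1: "forward", 2: "backward"}.get(claps, "stop")

# Synthetic test: two 30 ms bursts, 300 ms apart, in one second of quiet.
envelope = [0.0] * 1000
for start in (200, 500):
    envelope[start:start + 30] = [1.0] * 30

n = count_claps(envelope)
print(n, "clap(s) ->", command_for(n))   # expected: 2 clap(s) -> backward
```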
[]
[]
[]
scitechnews
None
None
None
None
Researchers at Israel's Tel Aviv University (TAU) have connected a dead locust's auditory organ to a robot, which responds to sounds picked up by the ear. TAU's Ben M. Maoz said, "Our task was to replace the robot's electronic microphone with a dead insect's ear, use the ear's ability to detect the electrical signals from the environment - in this case vibrations in the air - and, using a special chip, convert the insect input to that of the robot." In a demonstration, the locust's ear caused the robot to move forward in response to a single hand clap, and move backward when the researchers clapped twice. Said Maoz, "This initiative opens the door to sensory integrations between robots and insects - and may make much more cumbersome and expensive developments in the field of robotics redundant."