| Unnamed: 0 (int64, 0-10k) | input (string, 9.18k-112k chars) | output (string, 136-194k chars) | instruction (string, 1 class) |
---|---|---|---|
637 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>As a complex microecological system, the gut microbiota plays crucial roles in many aspects, including immunology, physiology and development. The specific function and mechanism of the gut microbiota in birds are distinct due to their extremely special body structure, physiological attributes and life history. Data on the gut microbiota of the common kestrel, a second-class protected animal species in China, are currently scarce.</ns0:p><ns0:p>With high-throughput sequencing technology, we characterized the bacterial community of the gut from 9 fecal samples from a wounded common kestrel by sequencing the V3-V4 region of the 16S ribosomal RNA gene in this study. Our results showed that Proteobacteria (41.078%), Firmicutes (40.923%) and Actinobacteria (11.191%) were the most predominant phyla. Lactobacillus (20.563%) was the most dominant genus, followed by Escherichia-Shigella (17.588%) and Acinetobacter (5.956%). Our results could also offer fundamental data and novel strategies for the protection of wild animals.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>With the rapid progress of sequencing techniques, more and more research on the gut microbiota has revealed its important roles in immunology, physiology and development <ns0:ref type='bibr' target='#b33'>(Guarner &amp; Malagelada 2003;</ns0:ref><ns0:ref type='bibr' target='#b54'>Nicholson et al. 2005)</ns0:ref>, as well as in several basic and critical processes, such as nutrient absorption, vitamin synthesis and diseases in both humans and animals <ns0:ref type='bibr' target='#b27'>(Fukuda &amp; Ohno 2014;</ns0:ref><ns0:ref type='bibr' target='#b37'>Kau et al. 2011;</ns0:ref><ns0:ref type='bibr' target='#b55'>Omahony et al. 2015)</ns0:ref>. The analysis of the gut microbiota of wild animals is gradually becoming a new method that could potentially inform animal conservation and husbandry. Reports concerning the gut microbiota of other avian species, such as Cooper's hawk (Accipiter cooperii) <ns0:ref type='bibr' target='#b67'>(Taylor et al. 2019)</ns0:ref>, bar-headed geese (Anser indicus) <ns0:ref type='bibr' target='#b75'>(Wang et al. 2017)</ns0:ref>, hooded crane (Grus monacha) <ns0:ref type='bibr' target='#b80'>(Zhao et al. 2017)</ns0:ref>, Western Gull (Larus occidentalis) <ns0:ref type='bibr' target='#b13'>(Cockerham et al. 2019)</ns0:ref>, herring gulls (Larus argentatus) <ns0:ref type='bibr' target='#b26'>(Fuirst et al. 2018</ns0:ref>) and black-legged kittiwakes (Rissa tridactyla) <ns0:ref type='bibr' target='#b71'>(van Dongen et al. 2013)</ns0:ref>, have increased rapidly. Although these species have been studied in regard to their microbiomes, there is not a positive trend for each of these species.</ns0:p><ns0:p>The specific function and mechanism of the gut microbiota in birds are distinct due to their body structure, physiological attributes and life history. For instance, a stable body temperature higher than the ambient temperature ensures a high metabolic rate for birds, which meets the requirements of flight. The streamlined body, highly efficient breathing and relatively short gastrointestinal tract are their other special attributes. Meanwhile, the ability to fly brings additional unique differences for birds compared with other animals, as well as changes in their intestinal microbiota to some extent. However, although they are a research focus, data on the gut microbiota of the common kestrel are currently very scarce.</ns0:p><ns0:p>The common kestrel (Falco tinnunculus) is a small raptor that belongs to Falconidae, which is a family of diurnal birds of prey, including falcons and kestrels. A total of 12 subspecies of the common kestrel are distributed widely from the Palearctic to Oriental regions <ns0:ref type='bibr' target='#b20'>(Cramp &amp; Brooks 1992)</ns0:ref>. Although listed in the least concern (LC) class by the International Union for Conservation of Nature (IUCN) (BirdLife International. 2016), the common kestrel is listed as a state second-class protected animal (defined by the LAW OF THE PEOPLE'S REPUBLIC OF CHINA ON THE PROTECTION OF WILDLIFE, Chapter II, Article 9) in China. The common kestrel is a typical opportunistic forager that catches small and medium-sized animals, including small mammals, birds, reptiles and some invertebrates <ns0:ref type='bibr' target='#b7'>(Anthony 1993;</ns0:ref><ns0:ref type='bibr' target='#b8'>Aparicio 2000;</ns0:ref><ns0:ref type='bibr' target='#b73'>Village 2010</ns0:ref>). 
Insects such as grasshoppers and dragonflies were also identified in the diet of the common kestrel <ns0:ref type='bibr' target='#b28'>(Geng et al. 2009</ns0:ref>). As generalist predators, common kestrels choose distinct</ns0:p></ns0:div>
<ns0:div><ns0:head>DNA extraction and PCR amplification</ns0:head><ns0:p>Microbial DNA was extracted from fresh fecal samples using an E.Z.N.A.® Stool DNA Kit (Omega Bio-tek, Norcross, GA, U.S.) according to the manufacturer's protocols. The V3-V4 region of the bacterial 16S ribosomal RNA gene was amplified by PCR (95 °C for 3 min; followed by 25 cycles at 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 30 s; and a final extension at 72 °C for 5 min) using the primers 338F (5'-barcode-ACTCCTACGGGAGGCAGCAG-3') and 806R (5'-GGACTACHVGGGTWTCTAAT-3'), where the barcode is an eight-base sequence unique to each sample. PCRs were performed in triplicate in a 20 μL mixture containing 4 μL of 5 × FastPfu Buffer, 2 μL of 2.5 mM dNTPs, 0.8 μL of each primer (5 μM), 0.4 μL of FastPfu Polymerase, and 10 ng of template DNA.</ns0:p></ns0:div>
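As an illustrative sketch only (not part of the study's pipeline), the snippet below shows how a sample-specific eight-base barcode would be prepended to the 338F primer and how many concrete sequences the degenerate 806R primer represents; the barcode used here is a made-up placeholder, and the IUPAC codes H, V and W stand for A/C/T, A/C/G and A/T.

```python
# Illustrative only: barcoded 338F primer assembly and expansion of the degenerate
# 806R primer. The barcode below is a placeholder, not one used in the study.
from itertools import product

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "H": "ACT", "V": "ACG", "W": "AT"}

def expand_degenerate(primer):
    """Yield every concrete sequence encoded by a primer with IUPAC degenerate bases."""
    for combo in product(*(IUPAC[base] for base in primer)):
        yield "".join(combo)

PRIMER_338F = "ACTCCTACGGGAGGCAGCAG"
PRIMER_806R = "GGACTACHVGGGTWTCTAAT"
example_barcode = "ACGTACGT"  # hypothetical eight-base sample barcode

barcoded_338f = example_barcode + PRIMER_338F
variants_806r = list(expand_degenerate(PRIMER_806R))
print(barcoded_338f, len(variants_806r))  # 806R expands to 3 * 3 * 2 = 18 sequences
```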
<ns0:div><ns0:head>Illumina MiSeq sequencing</ns0:head><ns0:p>Amplicons were extracted from 2% agarose gels and purified using an AxyPrep DNA Gel Extraction Kit (Axygen Biosciences, Union City, CA, U.S.) according to the manufacturer's instructions and quantified using QuantiFluor™ -ST (Promega, U.S.). Purified amplicons were pooled in equimolar amounts and paired-end sequenced (2 × 250) on an Illumina MiSeq platform according to standard protocols.</ns0:p></ns0:div>
<ns0:div><ns0:head>Processing of sequencing data</ns0:head><ns0:p>Raw fastq files were demultiplexed and quality-filtered using QIIME (version 1.17) <ns0:ref type='bibr' target='#b11'>(Caporaso et al. 2010)</ns0:ref> with the following criteria. (i) The 300 bp reads were truncated at any site receiving an average quality score &lt;20 over a 50 bp sliding window, discarding the truncated reads that were shorter than 50 bp. (ii) Exact barcode matching, 2 nucleotide mismatches in primer matching, and reads containing ambiguous characters were removed. (iii) Only sequences that overlapped longer than 10 bp were assembled according to their overlap sequence. Reads that could not be assembled were discarded.</ns0:p><ns0:p>Operational taxonomic units (OTUs) were clustered with a 97% similarity cutoff using UPARSE (version 7.1 http://drive5.com/uparse/), and chimeric sequences were identified and removed using UCHIME <ns0:ref type='bibr' target='#b22'>(Edgar et al. 2011)</ns0:ref>. The taxonomy of each 16S rRNA gene sequence was analyzed by RDP Classifier (http://rdp.cme.msu.edu/) against the SILVA (SSU115) 16S rRNA database using a confidence threshold of 70% <ns0:ref type='bibr' target='#b5'>(Amato et al. 2013)</ns0:ref>.</ns0:p></ns0:div>
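To make criterion (i) above concrete, here is a minimal Python sketch of a sliding-window quality trim. It is an illustration under stated assumptions (Phred scores already decoded to integers), not the actual QIIME 1.17 implementation.

```python
# Minimal sketch of criterion (i): truncate a read at the first 50 bp window whose
# mean Phred quality drops below 20, then discard reads shorter than 50 bp.
# Assumes quality scores are already decoded to integers; not the QIIME code itself.
def sliding_window_trim(seq, quals, window=50, min_mean_q=20, min_len=50):
    for start in range(0, len(seq) - window + 1):
        if sum(quals[start:start + window]) / window < min_mean_q:
            seq, quals = seq[:start], quals[:start]
            break
    return (seq, quals) if len(seq) >= min_len else None  # None means "discard"
```

For example, a 300 bp read whose quality collapses after position 120 would be truncated there and kept, whereas one collapsing before position 50 would be discarded.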
<ns0:div><ns0:head>Data analysis</ns0:head><ns0:p>All the indices of alpha diversity, including Chao, ACE, Shannon, Simpson, and coverage, and the analysis of beta diversity were calculated with QIIME. The rarefaction curves, rank abundance curves, and stacked histogram of relative abundance were displayed with R (version 2.15.3) (R Core Team. 2013).</ns0:p><ns0:p>The hierarchical clustering trees were built using UPGMA (unweighted pair-group method with arithmetic mean) based on weighted and unweighted distance matrices at different levels.</ns0:p><ns0:p>Principal coordinate analysis (PCoA) was calculated and displayed using QIIME and R, as well as hierarchical clustering trees.</ns0:p><ns0:p>This study was performed in accordance with the recommendations of the Animal Ethics Review Committee of Beijing Normal University (approval reference number: CLE-EAW-2019-026).</ns0:p></ns0:div>
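The alpha- and beta-diversity steps described above can be sketched as follows. This is only an illustration with a made-up OTU table: the study itself used QIIME and R, and its UniFrac distances require a phylogenetic tree, so the sketch substitutes Bray-Curtis distances; the Shannon, Simpson and Good's coverage formulas, the UPGMA ('average') linkage and the classical PCoA (metric MDS) are standard.

```python
# Illustrative re-implementation of the diversity analyses with a made-up OTU table;
# the study used QIIME/R and UniFrac distances (which need a phylogenetic tree),
# so Bray-Curtis is used here only as a stand-in.
import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist, squareform

otu_table = np.array([[120, 30, 1, 5],    # hypothetical counts, samples x OTUs
                      [80, 60, 10, 2],
                      [40, 90, 25, 1]])

def shannon(counts):
    p = counts[counts > 0] / counts.sum()
    return float(-(p * np.log(p)).sum())

def simpson(counts):
    p = counts / counts.sum()
    return float(1.0 - (p ** 2).sum())

def goods_coverage(counts):
    return float(1.0 - (counts == 1).sum() / counts.sum())

alpha = [(shannon(r), simpson(r), goods_coverage(r)) for r in otu_table]

# Beta diversity: UPGMA ('average' linkage) tree and classical PCoA on the distances.
condensed = pdist(otu_table, metric="braycurtis")
upgma_tree = linkage(condensed, method="average")

def pcoa(dist, n_axes=2):
    """Classical metric MDS of a square distance matrix."""
    d2 = dist ** 2
    n = d2.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # double-centering matrix
    b = -0.5 * j @ d2 @ j
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:n_axes]
    return vecs[:, order] * np.sqrt(np.clip(vals[order], 0, None))

coords = pcoa(squareform(condensed))
```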
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Overall sequencing data</ns0:head><ns0:p>A total of 28 phyla, 70 classes, 183 orders, 329 families and 681 genera were detected among the gastrointestinal bacterial communities. There were altogether 389,474 reads obtained and classified into 1673 OTUs at the 0.97 sequence identity cut-off in 9 fecal samples from a common kestrel.</ns0:p><ns0:p>Alpha diversity indices (including Sobs, Shannon, Simpson, ACE, Chao and coverage) of each sample are shown in Table <ns0:ref type='table'>1</ns0:ref>. The Sobs and Shannon indices of all samples are shown in Fig. <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>. Additionally, the rarefaction curves (A) and the rank abundance curves (B) are shown in Fig. <ns0:ref type='figure' target='#fig_3'>S1</ns0:ref>, which indicated that the number of OTUs for further analysis was reasonable, as well as the abundance of species in common kestrel feces. The total sequences, total bases and OTU distributions of all samples are shown in Table <ns0:ref type='table'>S4 and Table S5</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Bacterial composition and relative abundance</ns0:head><ns0:p>At the phylum level of the gut microbiota in the common kestrel, the most predominant phylum was Proteobacteria (41.078%), followed by Firmicutes (40.923%), <ns0:ref type='bibr'>Actinobacteria (11.191%)</ns0:ref> and Bacteroidetes (3.821%). In addition to Tenericutes (0.178%) and Verrucomicrobia (0.162%), Patescibacteria (0.543%) and Deinococcus-Thermus (0.504%) were also ranked in the top 10 species in the common kestrel fecal microbiota (Table <ns0:ref type='table'>2</ns0:ref>).</ns0:p><ns0:p>The top 5 families in the gut microbiota were Lactobacillaceae (20.563%), Enterobacteriaceae (18.346%), Moraxellaceae (6.733%), Bifidobacteriaceae (5.624%) and Burkholderiaceae (4.752%).</ns0:p><ns0:p>At the genus level, Lactobacillus (20.563%), Escherichia-Shigella (17.588%) and Acinetobacter (5.956%) were the most dominant genera. These were followed by Bifidobacterium (5.624%) and Enterococcus (4.024%) (Table <ns0:ref type='table'>3</ns0:ref>). These five genera in the total gut microbiota of several samples accounted for a small proportion, such as for E5 (28.755%) and E6 (10.905%) and especially for E4 (2.861%), while the largest proportion was 98.416% in E1.</ns0:p><ns0:p>The stacked histogram of relative abundance for species is also demonstrated in Fig. <ns0:ref type='figure' target='#fig_4'>2</ns0:ref> at the phylum (A) and genus (B) levels, which could intuitively represent the basic bacterial composition and relative abundance. The community structures of E1 and E9 were more similar than those of the other feces samples at both levels.</ns0:p><ns0:p>The hierarchical clustering trees showed the similarity of community structure among different samples, which were generated by UPGMA (unweighted pair-group method with arithmetic mean) with the unweighted UniFrac (Fig. <ns0:ref type='figure'>3A</ns0:ref>) and weighted UniFrac (Fig. <ns0:ref type='figure'>3B</ns0:ref>) distance matrixes.</ns0:p></ns0:div>
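The percentages reported in this section can be derived from an OTU count table in a few lines. The sketch below uses made-up counts and pools reads across samples before normalizing, which is one common convention; the manuscript does not state whether pooled or per-sample-averaged proportions were used.

```python
# Hypothetical example of deriving phylum-level relative abundances (percent) from
# a counts table; the numbers are invented and only the mechanics are illustrated.
import pandas as pd

counts = pd.DataFrame(
    {"E1": [500, 300, 50], "E2": [200, 600, 100], "E3": [350, 250, 80]},
    index=["Proteobacteria", "Firmicutes", "Actinobacteria"],
)
relative = counts.sum(axis=1) / counts.values.sum() * 100
print(relative.sort_values(ascending=False).round(3))
```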
<ns0:div><ns0:p>Although the fecal samples were collected from the common kestrel in chronological order (E1-E9) of therapy treatments, no distinct or obvious clustering relationships are discernable in Fig. 3.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discrepancy of community composition</ns0:head><ns0:p>To further demonstrate the differences in community composition among the nine samples, principal coordinates analysis (PCoA) was applied and depicted in Fig. <ns0:ref type='figure'>4</ns0:ref>. For PCoA, we chose the same two distance matrices (unweighted UniFrac in Fig. <ns0:ref type='figure'>4A</ns0:ref> and weighted UniFrac in Fig. <ns0:ref type='figure'>4B</ns0:ref>) as above to analyze the discrepancies. The results in Fig. <ns0:ref type='figure'>4</ns0:ref> were similar to those in Fig. <ns0:ref type='figure'>3</ns0:ref>, in which all samples scattered dispersedly, suggesting that the variation in the composition of the gut microbiota of the common kestrel was not obvious in this case over time.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Knowledge and comprehension concerning the gut microbiota have continued to progressively develop with relevant techniques over the past decade <ns0:ref type='bibr' target='#b31'>(Guarner 2014;</ns0:ref><ns0:ref type='bibr' target='#b46'>Li et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b61'>Qin et al. 2010</ns0:ref>). The analysis of intestinal microecology is also a research focus in the field of wild animal protection.</ns0:p><ns0:p>The common kestrel (Falco tinnunculus) is listed as a second-class protected animal species in China. Although research concerning avian species, including the common kestrel, has been increasing gradually, the available data on the gut microbiota in the common kestrel are currently unknown.</ns0:p><ns0:p>We characterized the basic composition and structure of the gut microbiota from a wounded common kestrel in this study, which was rescued by the Beijing Raptor Rescue Center (BRRC).</ns0:p><ns0:p>In general, the overall community structure of the gut microbiota in this common kestrel was in accordance with previous relevant characterizations in avian species, such as Cooper's hawks <ns0:ref type='bibr' target='#b67'>(Taylor et al. 2019)</ns0:ref>, bar-headed geese <ns0:ref type='bibr' target='#b75'>(Wang et al. 2017)</ns0:ref>, hooded cranes <ns0:ref type='bibr' target='#b80'>(Zhao et al. 2017)</ns0:ref> and swan geese <ns0:ref type='bibr' target='#b74'>(Wang et al. 2016)</ns0:ref>, which included Proteobacteria, Firmicutes, Actinobacteria and Bacteroidetes.</ns0:p><ns0:p>The most predominant phylum in the fecal gut microbiota of the common kestrel was Proteobacteria (41.078%), which ranked after Firmicutes in other birds, such as cockatiels (Nymphicus hollandicus) <ns0:ref type='bibr' target='#b2'>(Alcaraz et al. 2016</ns0:ref>) and black-legged kittiwakes <ns0:ref type='bibr' target='#b71'>(van Dongen et al. 2013)</ns0:ref>. This crucial phylum plays many valuable roles. For instance, Proteobacteria is beneficial for the giant panda, which can degrade lignin in its major food resource <ns0:ref type='bibr' target='#b23'>(Fang et al. 2012</ns0:ref>).</ns0:p><ns0:p>Additionally, it has been reported that Proteobacteria is also the most dominant phylum in obese dogs <ns0:ref type='bibr' target='#b58'>(Park et al. 2015)</ns0:ref>. The specific function of this phylum could be distinct in birds due to their unique physiological traits, as well as their developmental strategies <ns0:ref type='bibr' target='#b40'>(Kohl 2012</ns0:ref>). However, the high relative abundance of Proteobacteria in the total bacterial community was observed mainly in several samples that were collected during surgeries or drug treatments, such as E1 and E4. Sample E1 was collected on 23 June, the day after the kestrel was rescued from the wild. On 22 June, the kestrel was bandaged with silver sulfadiazine cream (SSD) and given lactated Ringer's solution (LRS), 10 ml subcutaneously and 4 ml orally. An increased level of Proteobacteria has been associated with some cardiovascular events, inflammation and inflammatory bowel disease <ns0:ref type='bibr' target='#b3'>(Amar et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b12'>Carvalho et al. 2012)</ns0:ref>. Although the kestrel's weight had increased by 34 grams when E4 was collected, it had eaten only a mouse's head. 
Considering the kestrel's status when it was rescued, we speculated that the increased proportion of Proteobacteria may reflect its food consumption or gastrointestinal status to some extent. Environmental factors, as well as dietary changes, should also be considered important factors that could result in variations in the relative abundance of species in the gut microbiota <ns0:ref type='bibr' target='#b21'>(De Filippo et al. 2010;</ns0:ref><ns0:ref type='bibr' target='#b63'>Scott et al. 2013)</ns0:ref>.</ns0:p><ns0:p>Furthermore, the dominant genera within Proteobacteria in our study were Escherichia-Shigella (17.588%), Acinetobacter (5.956%), Paracoccus (2.904%) and Burkholderia-Caballeronia-Paraburkholderia (2.408%). Escherichia-Shigella is a common pathogenic bacterium that can cause diarrhea in humans <ns0:ref type='bibr' target='#b35'>(Hermes et al. 2009</ns0:ref>). The main cause of the high relative abundance of Escherichia-Shigella was the E1 sample (88.610%), which suggested indirectly that the physical condition of the common kestrel was not normal when it was rescued by staff from the BRRC. This result was also consistent with the actual state of this wounded common kestrel that we observed (Table <ns0:ref type='table'>S3</ns0:ref>).</ns0:p><ns0:p>Although Firmicutes (40.923%) ranked after Proteobacteria, its relative abundance was only slightly lower than that of Proteobacteria. As a common phylum of the gut microbiota, Firmicutes exists widely in both mammals and birds, and this ancient symbiosis may be linked to the common ancestor of amniotes <ns0:ref type='bibr' target='#b17'>(Costello et al. 2010;</ns0:ref><ns0:ref type='bibr' target='#b40'>Kohl 2012)</ns0:ref>. Firmicutes can provide energy for the host by catabolizing complex carbohydrates and sugars, and some species can even digest fiber <ns0:ref type='bibr' target='#b15'>(Costa et al. 2012;</ns0:ref><ns0:ref type='bibr' target='#b25'>Flint et al. 2008;</ns0:ref><ns0:ref type='bibr' target='#b30'>Guan et al. 2017)</ns0:ref>.</ns0:p><ns0:p>The dominant genera in Firmicutes were Lactobacillus (20.563%), Enterococcus (4.024%) and Clostridium_sensu_stricto_1 (3.586%). The relative abundance of Enterococcus in E5 (15.026%) contributed to the highest ranking of this genus. Enterococcus is not regarded as a particularly pathogenic bacterium due to its harmlessness and can even be used as a normal food additive in related industries <ns0:ref type='bibr' target='#b24'>(Fisher &amp; Phillips 2009;</ns0:ref><ns0:ref type='bibr' target='#b51'>Moreno et al. 2006)</ns0:ref>. However, Enterococcus species are also considered common nosocomial pathogens that can cause a high death rate <ns0:ref type='bibr' target='#b48'>(Lopes et al. 2005)</ns0:ref>. Meanwhile, these species are also associated with various kinds of infections, including neonatal infections, intraabdominal and pelvic infections, as well as nosocomial infections and superinfections <ns0:ref type='bibr' target='#b52'>(Murray 1990</ns0:ref>). Coincidentally, before sample E5 was collected, the kestrel underwent surgery under anesthesia to deal with the wound on its right tarsometatarsus. The kestrel's right digit tendon was exposed and had lost its function. 
Although sterile conditions were ensured, we inferred that the kestrel was infected by certain bacteria during the surgery.</ns0:p><ns0:p>The BRRC might be a specific location similar to hospitals for raptors to some extent, which could explain the high proportion of Enterococcus in the fecal samples of this common kestrel.</ns0:p><ns0:p>However, this genus should be given sufficient attention in subsequent studies with additional samples from different individuals. The abundance of Clostridium increases as more protein is digested <ns0:ref type='bibr' target='#b49'>(Lubbs et al. 2009</ns0:ref>). Some species belonging to Clostridium, such as Clostridium difficile, have been reported to be related to certain diseases, such as diarrhea and severe life-threatening pseudomembranous colitis <ns0:ref type='bibr' target='#b42'>(Kuijper et al. 2006;</ns0:ref><ns0:ref type='bibr' target='#b60'>Pepin et al. 2004</ns0:ref>). The high relative abundance of this genus also resulted primarily from certain samples (E8, 28.177%), similar to the Enterococcus mentioned above. More remarkably, sample E8 was collected in the same situation as E5: on 13th July, the kestrel also underwent surgery under anesthesia.</ns0:p><ns0:p>As when E5 was collected, the kestrel's status was still normal according to the relevant records. These results indicated that a high relative abundance of certain pathogens may not be accompanied by any symptoms of illness in the kestrel. In general, the abnormal situation of E5 and E8 still needs sufficient attention. Moreover, to minimize the influence of individual differences, more samples from different individuals should be collected for further study.</ns0:p><ns0:p>The third dominant phylum in the gut microbiota in our study was Actinobacteria (11.191%), which was also detected in other species, such as turkeys (Meleagris gallopavo) <ns0:ref type='bibr' target='#b77'>(Wilkinson et al. 2017</ns0:ref>) and Leach's storm petrel (Oceanodroma leucorhoa) <ns0:ref type='bibr' target='#b59'>(Pearce et al. 2017)</ns0:ref>. The abundance of Actinobacteria varied in different species, such as house cats (7.30%) and dogs (1.8%) <ns0:ref type='bibr' target='#b34'>(Handl et al. 2011</ns0:ref>), but only accounted for 0.53% in wolves <ns0:ref type='bibr' target='#b78'>(Wu et al. 2017)</ns0:ref>. Within this phylum, Bifidobacterium (5.624%) and Glutamicibacter (1.840%) were the primary genera. The presence of Bifidobacterium is closely related to the utilization of glycans produced by the host, as well as oligosaccharides in human milk <ns0:ref type='bibr' target='#b64'>(Sela et al. 2008;</ns0:ref><ns0:ref type='bibr' target='#b70'>Turroni et al. 2010)</ns0:ref>. Noticeably, Bifidobacterium thermophilum was reported to have been administered orally to chickens to resist E. coli infection <ns0:ref type='bibr' target='#b39'>(Kobayashi et al. 2002)</ns0:ref>. The detection and application of Bifidobacterium, especially for the rescue of many rare avian species, would be worth considering for curing various diseases in the future.</ns0:p><ns0:p>Additionally, the relative abundance of Bacteroidetes was 3.821% in this study, which consisted mainly of Sphingobacterium. 
Bacteroidetes is another important component of the gut microbiota that can degrade relevant carbohydrates from secretions of the gut, as well as high molecular weight substances <ns0:ref type='bibr' target='#b68'>(Thoetkiattikul et al. 2013)</ns0:ref>. The proportion of Bacteroidetes, which was stable in most samples we collected except E5 (18.166%), has been reported to increase correspondingly with weight loss in mice or with changes in the fiber content of rural children's daily diet <ns0:ref type='bibr' target='#b21'>(De Filippo et al. 2010;</ns0:ref><ns0:ref type='bibr' target='#b45'>Ley et al. 2006;</ns0:ref><ns0:ref type='bibr' target='#b69'>Turnbaugh et al. 2008)</ns0:ref>. However, the weight of the kestrel was increasing when E5 and E8 were collected. Additionally, although the kestrel underwent surgery on 4th July, the reason for the high proportion of Bacteroidetes in sample E5 remains unknown. To characterize the basic composition and structure of the gut microbiota of the common kestrel more accurately, additional fresh fecal samples from healthy individuals should be collected in follow-up studies.</ns0:p><ns0:p>Furthermore, additional attention should be paid to the high ranking of Patescibacteria (0.543%) and Deinococcus-Thermus (0.504%) at the phylum level. Patescibacteria might be related to the basic biosynthesis of amino acids, nucleotides and so on <ns0:ref type='bibr' target='#b43'>(Lemos et al. 2019)</ns0:ref>. Members of Deinococcus-Thermus are known mainly for their capability to resist extreme radiation, including ultraviolet radiation, as well as oxidizing agents <ns0:ref type='bibr' target='#b18'>(Cox &amp; Battista 2005;</ns0:ref><ns0:ref type='bibr' target='#b29'>Griffiths &amp; Gupta 2007)</ns0:ref>. The specific function of certain species in these phyla for the common kestrel should be studied by controlled experiments, detailed observations or more advanced approaches, as molecular biological techniques are developed.</ns0:p><ns0:p>In addition to the quantity of samples, the living environment, age, sex and individual differentiation should also be considered as influencing factors, which could cause a degree of discrepancy at all levels in the gut microbiota. In addition, a comparison of the bacterial composition of the intestinal microbiota between wounded and healthy samples is another essential research direction that may provide additional information for wild animal rescue, such as important biomarkers that indirectly indicate potential diseases.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion</ns0:head></ns0:div>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>In summary, using high-throughput sequencing technology in this study, we first characterized the elementary bacterial composition and structure of the gut microbiota for a wounded common kestrel in the BRRC, which could provide valuable basic data for future studies. Further research on Enterococcus, Patescibacteria and Deinococcus-Thermus should be conducted in the future with additional samples. The integration of other auxiliary techniques or disciplines, such as metagenomics and transcriptomics, could offer a deeper understanding of the function and mechanism of the gut microbiota, as well as the protection of wild animals.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,255.37,525.00,200.25' type='bitmap' /></ns0:figure>
</ns0:body>
" | "Dear Editor and Reviewers,
Thanks a lot for reviewing our manuscript and for the valuable suggestions. We have revised our manuscript carefully and answered the questions you mentioned.
We look forward to your further comments and decision.
Sincerely yours,
Yu Guan
Reviewer 1
Basic reporting
Guan et al. conducted an interesting and important study of the Common Kestrel gut microbiome. This study will be useful for the species' conservation. I recommend this paper be accepted after revision according to my comments. Moreover, I strongly suggest the authors find a native English speaker and an expert in gut microbiota to improve the language.
Reply: Although we polished our article before submitting it to PeerJ, we will continue to improve the language in the revised manuscript.
Experimental design
Materials & Methods
The limitation is the samples from just 1 bird.
L 91. How could the fecal samples reflect the health condition? I am confused.
Reply: Actually, by 'real health condition' we meant the actual state of the kestrel's health, and we have corrected it.
L134-136. PCA, PCOA and NMDS are redundant. You just need one approach.
Reply: We kept the PCoA and deleted the PCA and NMDS finally.
L137-138. Rewrite.
Reply: We have rewritten these sentences.
Validity of the findings
No comment.
Comments for the author
General: The abstract should be rewritten.
L 35. Change composition and structure to community and delete microbiota.
Reply: We have revised it.
L40. Move “A total of 28 phyla, 70 classes, 183 orders, 329 families and 681 genera were detected among the gastrointestinal bacterial communities.” to the first sentence of results.
Reply: We have moved it.
L42. Remove this sentence “Further research....”.
Reply: We have removed it.
L43. Remove this sentence “In addition....”
Reply: We have removed it.
Introduction.
Rewrite this sentence in lines 80-81.
Reply: We have rewritten it.
Discussion
1. This section should be rewritten and the language should be further improved.
Reply: We have rewritten it.
2. The author mainly discussed the bacterial composition of each phylum and speculated the potential function. I think the author should also compare the results with the recently published paper of birds on mBio.
Reply: We have revised this section and added some references.
3. The author collected 9 samples from one animal on different days. And of course, the author observed bacterial differences, so why did this happen? I guess this may be linked to the health condition and treatment. This should be discussed.
Reply: We have discussed it as you suggested.
Reviewer 2
Basic reporting
This manuscript uses clear, unambiguous, English language, but uses certain terms quite casually without support. For example, in line 76 you reference the birds to have “special” attributes, but then do not follow up with what they are or how they are unique and thus having a different importance for microbiome studies.
Reply: Thanks for your correction and we have revised it.
Your introduction needs to start more broadly and the first paragraph should flow better instead of jumping from fact to fact when talking about kestrel natural history and biology. You should consider starting more broadly about the microbiome and then honing in on the species specific information. Expand upon the knowledge gap being filled.
Reply: We have rewritten this section according to your suggestions.
In lines 72 through 76 you suggest that studies on these species have increased, however, after a preliminary literature review it appears that this is not the case. Please rephrase to state that in recent years, these species have been studied in regards to their microbiome, but that there isn’t a positive trend for each of these species.
Reply: We have revised this section.
For lines 72 through 76 consider adding Cockerham et al. 2019 and Fuirst et al. 2018 as an additional recent microbiome studies on Western and herring gulls. Especially for referencing microbial associations with urban and non-urban environments.
Reply: We have added these studies.
Please consider italicizing “Falconidae” in line 51
Reply: We have revised it.
Where is your IUCN citation for lines 53-55
Reply: We have added the citation here.
In line 80 do not use the word “elementary”
Reply: We have deleted it.
I would suggest replacing Figure 1 with a plot that shows the alpha diversity values for all metrics so that we can see the variation among samples. Rarefaction curves should be supplemental material.
Reply: We accept your suggestion. About Fig. 1, we move Rarefaction Curves (A) and Rank Abundance Curves (B) to supplementary material, and show the alpha diversity result instead. Here we give the Sobs index and the Shannon index results. For more detail, we give exact values of Sobs index, Shannon index, Simpson index, Ace index, Chao index and Coverage index of every sample in table 1.
The text in Figure 4 appears stretched and blurry.
Reply: We have revised it.
Experimental design
Molecular operational taxonomic units (OTUs) are often used for microbiome studies, but have been quickly shifting to using ASVs. OTUs use a 97% threshold, but you could avoid this and its ecological limitations, by using amplicon sequence variants (ASVs). Especially for a more descriptive microbiome study, using the most recent analytical methods is important. I would recommend following the protocol from Callahan et al. 2017. ASVs are a threshold-free metric and can be selected against databases using the DADA2 pipeline for example.
Reply: Thanks for your good suggestion. In fact, we have indeed studied papers about ASVs and realized the limitations of OTUs. However, for a long time we have had to give our samples to a company for sequencing and basic analysis, because there is no sequencing equipment in our lab and no one in our group is good at bioinformatics. Unfortunately, for now, no company is willing to use ASVs for our data analysis. Our lab is now planning to seek cooperation with researchers who are capable of such data analysis, and trying to train a researcher to become a bioinformatics specialist. For now, for this manuscript, we have no choice but to use OTUs to tell our story. We hope to be understood and to be allowed to use OTUs in this manuscript.
I do not see any mention of removing contaminants from samples or running negative controls. Was this done? If so, please state this explicitly.
Reply: We did take care about contaminants. Throughout the whole process of our sample collection, we strictly followed standard procedures to avoid any possible contaminants. In the sequencing part, the company we chose is professional and responsible in microbiome sequencing. It was comforting to know that no contaminant was found in our data, so no additional step was taken to remove contaminants from the data.
In line 129 please explicitly state your rarefication depth.
Reply: We here add two tables as supplementary materials (S2) to show more information on the total sequences, total bases, and OTU distributions in every sample. To our knowledge, compared to most published papers, more than 30 thousand sequences are enough for our samples in this study.
Validity of the findings
The validity of these findings is quite limited due to small sample size. Please consider increasing the sample size for final publication.
Reply: We are aware of the limitation of our findings. We have considered your suggestion and will increase the sample size as the number of rescued individuals increases in following studies with other kestrels. We have also added discussion at the appropriate location.
The overall impact of this research is not mentioned in detail. The results are detailed and acceptable, but there needs to be more emphasis on the significance of this study other than it being another avian microbiome study.
Reply: We have added discussion about the significance of our study.
There should be more detail on where each of the nine birds were retrieved from and how that geographic and environmental origin could influence variation in the microbiome diversity and composition.
Reply: We have added the detailed information as you suggested. All nine fecal samples were collected from the same common kestrel at the BRRC.
Analysis of this data is statistically sound, but I would highly encourage revising the analysis to use ASVs instead of OTUs. OTUs are not replicable between studies, and as a descriptive study, using ASVs would provide more detail and replicable methods for future research. See Callahan et al. 2017 as previously mentioned.
Reply: Thanks for your good suggestion. In fact, we have indeed studied papers about ASVs and realized the limitations of OTUs. However, for a long time we have had to give our samples to a company for sequencing and basic analysis, because there is no sequencing equipment in our lab and no one in our group is good at bioinformatics. Unfortunately, for now, no company is willing to use ASVs for our data analysis. Our lab is now planning to seek cooperation with researchers who are capable of such data analysis, and trying to train a researcher to become a bioinformatics specialist. For now, for this manuscript, we have no choice but to use OTUs to tell our story. We hope to be understood and to be allowed to use OTUs in this manuscript.
Comments for the author
I commend the authors for their thorough research and expansive analyses from these samples. The manuscript is clearly outlined and written well. However, given that this is a more surface-level microbiome study, there should be much more emphasis on the importance for this. There is an overabundance of comparisons of taxonomy to other studies, but barely any link to why we need this information in the first place. Additionally, please consider revising the analysis to follow DADA2 ASVs rather than OTUs. Lastly, a descriptive microbiome study would definitely be strengthened by a larger sample size.
Reply: Thanks for all your valuable suggestions and we have revised our manuscripts.
Reviewer 3
Basic reporting
Although there are phrasing issues to improve clarity, the manuscript is clearly written, unambiguous, and professional English without grammatical errors is present throughout. The authors appear to have written the paper carefully.
The literature cited is robust and relevant. However, the introduction and background do not show the proper context, especially for a journal with a wide-ranging audience such as PeerJ. The introduction begins by introducing the study organism, the kestrel. However, the methods of the paper are much more far reaching- the kestrel here serves to show the importance of describing gut microbiota for general conservation and husbandry purposes, as well as to illustrate effects of surgery and drugs on an animal. The authors do nothing to place the unique value of their data in this context. This results in a paper that is less novel.
Reply: We have revised the introduction and relevant sections in the manuscript.
The structure of the paper is fine and follows PeerJ standard format. Figures are clear and concise, although I am not sure if the authors utilize colorblind friendly palettes.
Reply: We have considered it and produced the figures as friendly as possible.
Experimental design
No experiments are performed; the paper serves to report on gut microbiota in the kestrel. The research question is defined, and a knowledge gap, notably the lack of gut microbiota data in birds despite its potential importance, is defined. However, once again I think the authors miss the context of their own study- this isn't just reporting on a bird that was captured and sampled. It is reporting on a bird that was injured, recovered, had surgery performed on it, and changed its regimen from life in the wild to captivity. But none of these aspects of the bird are highlighted with a context or relevant citations. I think that the unique background of the kestrel is what makes the study special- in addition to what the authors already provide- that birds are unique animals with an “extremely special body structure.” So while a knowledge gap is identified, I think another one is out there that is very beneficial to address.
Reply: We have revised the whole manuscript to highlight the theme of our study. Meanwhile, we will compare the gut microbiota of sick and healthy individuals in following studies.
The methods on the molecular and bioinformatic work are provided with sufficient detail to replicate. However, more detail on the sample collection would improve the manuscript. The kestrel’s unique history is not clear. The reader knows that the common kestrel was “carefully treated with several surgeries and drug therapies.” The details of these events could have a major impact on the microbiome, but we don’t know anything about what happened. For example, were antibiotics a part of the treatments?
Reply: We have added some details of surgeries and drug therapies.
We also need to be provided information about the samples. We don’t know anything about when they were collected relative to the life history of the kestrel. Information is vague- lines 90-92: “nine fecal samples that may reflect the real health condition were collected from the common kestrel after relevant treatments on different days.” This makes it sound to the reader as if all treatments were collected after different types of treatment, with the treatments taking place during different days. However, in the discussion, a few small details about sample collection are provided, lines 209-210 say “observed mainly in several samples that were collected during surgeries or drug treatments, such as E1 and E4.” This changes things, and implies that certain samples occur during treatments, but that others did not. If so, what was the duration of time between the last treatments? What treatments were received? Were any sampled before treatments? Any after treatment on the bird was complete?
Reply: We have added some details of surgeries and drug therapies.
Validity of the findings
A lot of data are provided in Tables, and data are available via FigShare. The speculation and claims in the discussion are appropriate. However, as with the introduction, I think the discussion should be framed within the context the authors provide- but also- with the link to learning about the microbiota of a bird with a unique life history. This will require the use of different citations to give context to this different point.
Reply: We have revised the manuscript.
Comments for the author
Introduction:
General comment: I think that the introduction’s format should be completely redone. Instead of starting general, and narrowing down to the study at hand, this introduction starts specific, and the context of the study is only given at the very end. The power of this study is that it characterized the gut microbiota as a way to potentially inform conservation efforts and husbandry of kestrels. It shows that gut microbiota analyses are a tool, that is often overlooked, during conservation. However, the introduction doesn’t get this point across- instead it starts with kestrels. The scope and impact of this study could reach far beyond kestrels to the general application of gut microbiota analyses coupled with conservation, however this more far-reaching point is lost in the introduction. I would flip, or reverse, the section.
Reply: We have revised it.
Line 51: PeerJ is a journal with a general audience, may wanted to specify Aves or at least Falconiformes.
Reply: We have revised it.
Line 54: Readers are likely not familiar with China’s scheme for classifying protected species. It is worth defining what second-class protected means. I do not have a context for this definition.
Reply: We have revised it. The protection level of wild animals is defined by the LAW OF THE PEOPLE'S REPUBLIC OF CHINA ON THE PROTECTION OF WILDLIFE.
Details: Chapter II Protection of Wildlife
Article 9
The State shall give special protection to the species of wildlife which are rare or near extinction. The wildlife under special state protection shall consist of two classes: wildlife under first class protection and wildlife under second class protection. Lists or revised lists of wildlife under special state protection shall be drawn up by the department of wildlife administration under the State Council and announced after being submitted to and approved by the State Council. The wildlife under special local protection, being different from the wildlife under special state protection, refers to the wildlife specially protected by provinces, autonomous regions or municipalities directly under the Central Government. Lists of wildlife under special local protection shall be drawn up and announced by the governments of provinces, autonomous regions or municipalities directly under the Central Government and shall be submitted to the State Council for the record. Lists or revised lists of terrestrial wildlife under state protection, which are beneficial or of important economic or scientific value, shall be drawn up and announced by the department of wildlife administration under the State Council.
Line 55: I don’t think the kestrel is a typical opportunistic forager, I’m not sure what this means. Its hovering ability is exceptional.
Reply: It means that “This raptor is considered to be an opportunistic forager catching on what is locally available”. Generalist predators prey on the most common or most easily caught prey occurring in their hunting area.
References:
1. Costantini D, Casagrande S, Lieto GD, Fanfani A, Dellomo G. 2005. Consistent differences in feeding habits between neighbouring breeding kestrels. Behaviour 142:1403-1415.
2. Village, A. 1990. The kestrel. - T. & A. D. Poyser, London.
Line 59: avoid the use of the term “wintering” as it is difficult to understand for readers based in more tropical latitudes or the southern hemisphere. “Non-breeding” is more appropriate. Also, the distinct predatory strategies at different times of the year here are mentioned, but not described. Would be good to describe the difference between the strategies.
Reply: We have revised it.
Line 61: urban and rural kestrels have differences- but what are they? It’s better for the reader to know the characteristics of the differences (which has higher reproductive success, hunting success?) than to simply know that the differences exist in an undefined direction.
Reply: We have deleted this paragraph and kept the next paragraph.
Line 65: probably don’t need this many citations. What does “comparatively comprehensive” mean? I don’t understand what was compared here. Are all these many studies comparing rural and urban?
Reply: We have deleted this paragraph and “comparatively”.
Line 68: what is a research hotspot? May consider rephrasing.
Reply: We have revised it.
Line 70: can be phrased better. “With the rapid progress of sequencing techniques, reports on gut microbiota have suggested important roles in…”
Reply: We have revised it.
Line 73: probably don’t need to pluralize these species. Also, the species could be organized phylogenetically. Are there more raptors to add to this list (could remove non-raptors so that there are not too many citations)?
Reply: Thanks for your suggestion; we also considered this part. The main reason we chose these species is that the available data on the gut microbiota of avian species are limited, especially for raptors.
Line 76: “specialized body structure” could remove the phrase “extremely special.” Also, what about birds are extremely special? This is a general audience that is being addressed and may need more concrete detail.
Reply: We have revised it.
Line 77: Additional used twice in this sentence. Overall this sentence’s phrasing could be revamped.
Reply: We have revised it.
Methods:
Line 87-93: Definitely needs to be addressed: for sample collection, I think it is incredibly important to know what the kestrel’s diet was in captivity. Were any samples acquired before the kestrel was fed in captivity (and so would reflect the diet of a wild bird?). What subspecies of kestrel was used in this study?
Reply: Here we added the details of the kestrel's diet, as well as of the treatment and surgeries. However, because of the protection status of the kestrel in China and other limitations of our team, wild fecal samples of the kestrel are hard to obtain. Of course, the collection of wild fecal samples under natural conditions is an important step for our next research.
Because the number of individuals was limited and each individual is valuable, the analysis of the gut microbiota in the kestrel was carried out only at the species level. Another next step for us is to compare the differences in gut microbiota between different raptor species.
Line 93: this is more of a descriptive study so I am not sure “experiments” is the correct word.
Reply: We have revised it to “bacterial studies”.
Line 130: I don’t think this is the correct way (“Team”) to cite R.
Reply: We have revised it.
Results:
Line 141-147: Better to not have a single sentence paragraph, all one paragraph.
Reply: We have revised it.
Line 150: Again, better to not have a single sentence paragraph. Can merge here.
Reply: We have revised it.
Line 154: I don’t understand why this is remarkable.
Reply: We have deleted it.
Line 163: In which samples did these genera account for the largest proportion? I think it is sensible to compare these here.
Reply: We have revised it
Line 169-172: I think you mean “the hierarchical clustering trees showed”
Reply: We have revised it.
Discussion:
General: I think you should start your discussion by interpreting the results, then zoom outwards to explain the context. Here, the discussion starts with the context. It would be better to save this until the end. So, I would invert the way that both the introduction and discussion are written.
Reply: We have rewritten the sections you mentioned carefully.
Line 187-189: Very general sentences that don’t say much. Could be supported by citations.
Reply: We have added citations.
Line 193: limited, or unknown? If limited, need a citation here.
Reply: We have revised it to unknown.
Line 202: first mention of cockatiels, requires Latin binomial.
Reply: We have added the Latin binomial.
Line 224: better phrasing “and an ancient symbiosis may be linked to the common ancestor of amniotes” or something of that nature
Reply: We have rephrased it.
Line 234: I’m not sure what this sentence means- a hospital like environment contributed to the detection of this genus?
Reply: We mean that the BRRC has complicated conditions similar to those of a hospital, in that there are many patients. So the possibility of detecting this common nosocomial genus is also increased.
Line 237: The word “moreover” implies a connection to the next sentence. But here, you’re switching topics to a new genus.
Reply: We have deleted it.
Line 241: First mention of turkey, storm-petrel, requires binomial
Reply: We have added the Latin binomial.
Line 245: Do you mean the “presence of Bifidobacterium is closely related to”?
Reply: We have rephrased it.
" | Here is a paper. Please give your review comments after reading it. |
638 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>As a complex microecological system, the gut microbiota plays crucial roles in many aspects, including immunology, physiology and development. The specific function and mechanism of the gut microbiota in birds are distinct due to their body structure, physiological attributes and life history. Data on the gut microbiota of the common kestrel, a second-class protected animal species in China, are currently scarce. With high-throughput sequencing technology, we characterized the bacterial community of the gut from 9 fecal samples from a wounded common kestrel by sequencing the V3-V4 region of the 16S ribosomal RNA gene. Our results showed that Proteobacteria (41.078%), Firmicutes (40.923%) and Actinobacteria (11.191%) were the most predominant phyla.</ns0:p><ns0:p>Lactobacillus (20.563%) was the most dominant genus, followed by Escherichia-Shigella (17.588%) and Acinetobacter (5.956%). Our results offer fundamental data and novel strategies for the protection of wild animals.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Sequencing technology has developed rapidly in recent years. Meanwhile, more and more research on gut microbiota has revealed its important roles in immunology, physiology and development <ns0:ref type='bibr' target='#b34'>(Guarner &amp; Malagelada 2003;</ns0:ref><ns0:ref type='bibr' target='#b55'>Nicholson et al. 2005)</ns0:ref>, as well as in several basic and critical processes, such as nutrient absorption, vitamin synthesis and diseases in both humans and animals <ns0:ref type='bibr' target='#b28'>(Fukuda &amp; Ohno 2014;</ns0:ref><ns0:ref type='bibr' target='#b38'>Kau et al. 2011;</ns0:ref><ns0:ref type='bibr' target='#b56'>Omahony et al. 2015)</ns0:ref>. Gut microbiota analysis of wild animals is becoming a new method that may provide information for animal protection.</ns0:p></ns0:div>
<ns0:div><ns0:head>DNA extraction and PCR amplification</ns0:head><ns0:p>Microbial DNA was extracted from fresh fecal samples using an E.Z.N.A.® Stool DNA Kit (Omega Bio-tek, Norcross, GA, U.S.) according to the manufacturer's protocols. The V3-V4 region of the bacterial 16S ribosomal RNA gene was amplified by PCR (95 °C for 3 min; followed by 25 cycles at 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 30 s; and a final extension at 72 °C for 5 min) using the primers 338F (5'-barcode-ACTCCTACGGGAGGCAGCAG-3') and 806R (5'-GGACTACHVGGGTWTCTAAT-3'), where the barcode is an eight-base sequence unique to each sample. PCRs were performed in triplicate in a 20 μL mixture containing 4 μL of 5 × FastPfu Buffer, 2 μL of 2.5 mM dNTPs, 0.8 μL of each primer (5 μM), 0.4 μL of FastPfu Polymerase, and 10 ng of template DNA.</ns0:p></ns0:div>
<ns0:div><ns0:head>Illumina MiSeq sequencing</ns0:head><ns0:p>Amplicons were extracted from 2% agarose gels and purified using an AxyPrep DNA Gel Extraction Kit (Axygen Biosciences, Union City, CA, U.S.) according to the manufacturer's instructions and quantified using QuantiFluor™ -ST (Promega, U.S.). Purified amplicons were pooled in equimolar amounts and paired-end sequenced (2 × 250) on an Illumina MiSeq platform according to standard protocols.</ns0:p></ns0:div>
<ns0:div><ns0:head>Processing of sequencing data</ns0:head><ns0:p>Raw fastq files were demultiplexed and quality-filtered using QIIME (version 1.17) <ns0:ref type='bibr' target='#b11'>(Caporaso et al. 2010)</ns0:ref> with the following criteria. (i) The 300 bp reads were truncated at any site receiving an average quality score &lt;20 over a 50 bp sliding window, discarding the truncated reads that were shorter than 50 bp. (ii) Exact barcode matching, 2 nucleotide mismatches in primer matching, and reads containing ambiguous characters were removed. (iii) Only sequences that overlapped longer than 10 bp were assembled according to their overlap sequence. Reads that could not be assembled were discarded.</ns0:p><ns0:p>Operational taxonomic units (OTUs) were clustered with a 97% similarity cutoff using UPARSE (version 7.1 http://drive5.com/uparse/), and chimeric sequences were identified and removed using UCHIME <ns0:ref type='bibr' target='#b23'>(Edgar et al. 2011)</ns0:ref>. The taxonomy of each 16S rRNA gene sequence was analyzed by RDP Classifier (http://rdp.cme.msu.edu/) against the SILVA (SSU115) 16S rRNA database using a confidence threshold of 70% <ns0:ref type='bibr' target='#b5'>(Amato et al. 2013)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data analysis</ns0:head><ns0:p>All the indices of alpha diversity, including Chao, ACE, Shannon, Simpson, and coverage, and the analysis of beta diversity were calculated with QIIME. The rarefaction curves, rank abundance curves, and stacked histogram of relative abundance were displayed with R (version 2.15.3) <ns0:ref type='bibr' target='#b21'>(Dalgaard 2010)</ns0:ref>.</ns0:p><ns0:p>The hierarchical clustering trees were built using UPGMA (unweighted pair-group method with arithmetic mean) based on weighted and unweighted distance matrices at different levels.</ns0:p><ns0:p>Principal coordinate analysis (PCoA) was calculated and displayed using QIIME and R, as well as hierarchical clustering trees.</ns0:p><ns0:p>This study was performed in accordance with the recommendations of the Animal Ethics Review Committee of Beijing Normal University (approval reference number: CLS-EAW-2019-026).</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Overall sequencing data</ns0:head><ns0:p>Additionally, the rarefaction curves (A) and the rank abundance curves (B) are shown in Fig. <ns0:ref type='figure' target='#fig_4'>S1</ns0:ref>, which indicated that the number of OTUs for further analysis was reasonable, as well as the abundance of species in common kestrel feces. The total sequences, total bases and OTU distributions of all samples are shown in Table <ns0:ref type='table'>S4 and Table S5</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Bacterial composition and relative abundance</ns0:head><ns0:p>At the phylum level of the gut microbiota in the common kestrel, the most predominant phylum was Proteobacteria (41.078%), followed by Firmicutes (40.923%), <ns0:ref type='bibr'>Actinobacteria (11.191%)</ns0:ref> and Bacteroidetes (3.821%). In addition to Tenericutes (0.178%) and Verrucomicrobia (0.162%), Patescibacteria (0.543%) and Deinococcus-Thermus (0.504%) were also ranked in the top 10 species in the common kestrel fecal microbiota (Table <ns0:ref type='table'>2</ns0:ref>).</ns0:p><ns0:p>The top 5 families in the gut microbiota were Lactobacillaceae (20.563%), Enterobacteriaceae (18.346%), Moraxellaceae (6.733%), Bifidobacteriaceae (5.624%) and Burkholderiaceae (4.752%).</ns0:p><ns0:p>At the genus level, Lactobacillus (20.563%), Escherichia-Shigella (17.588%) and Acinetobacter (5.956%) were the most dominant genera. These were followed by Bifidobacterium (5.624%) and Enterococcus (4.024%) (Table <ns0:ref type='table'>3</ns0:ref>). These five genera in the total gut microbiota of several samples accounted for a small proportion, such as for E5 (28.755%) and E6 (10.905%) and especially for E4 (2.861%), while the largest proportion was 98.416% in E1.</ns0:p><ns0:p>The stacked histogram of relative abundance for species is also demonstrated in Fig. <ns0:ref type='figure' target='#fig_5'>2</ns0:ref> at the phylum (A) and genus (B) levels, which could intuitively represent the basic bacterial composition and relative abundance. The community structures of E1 and E9 were more similar than those of the other feces samples at both levels.</ns0:p><ns0:p>The hierarchical clustering trees showed the similarity of community structure among different samples, which were generated by UPGMA (unweighted pair-group method with arithmetic mean) with the unweighted UniFrac (Fig. <ns0:ref type='figure' target='#fig_6'>3A</ns0:ref>) and weighted UniFrac (Fig. <ns0:ref type='figure' target='#fig_6'>3B</ns0:ref>) distance matrixes.</ns0:p><ns0:p>PeerJ reviewing <ns0:ref type='table'>PDF | (2020:01:45085:2:0:NEW 9 Aug 2020)</ns0:ref> Manuscript to be reviewed Although the fecal samples were collected from the common kestrel in chronological order (E1-E9) of therapy treatments, no distinct or obvious clustering relationships are discernable in Fig.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discrepancy of community composition</ns0:head><ns0:p>To further demonstrate the differences in community composition among the nine samples, principal coordinates analysis (PCoA) was applied (Fig. <ns0:ref type='figure'>4</ns0:ref>). For PCoA, we chose the same two distance matrices (unweighted UniFrac in Fig. <ns0:ref type='figure'>4A</ns0:ref> and weighted UniFrac in Fig. <ns0:ref type='figure'>4B</ns0:ref>) as above to analyze the discrepancies. The results in Fig. <ns0:ref type='figure'>4</ns0:ref> were similar to those in Fig. <ns0:ref type='figure' target='#fig_6'>3</ns0:ref>, in which all samples scattered dispersedly, suggesting that variation in the composition of the gut microbiota of the common kestrel was not obvious over time.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Knowledge and comprehension concerning the gut microbiota have continued to develop progressively with the relevant techniques over the past decade <ns0:ref type='bibr' target='#b33'>(Guarner 2014;</ns0:ref><ns0:ref type='bibr' target='#b47'>Li et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b61'>Qin et al. 2010</ns0:ref>). The analysis of intestinal microecology also continues to be a research focus in the field of wild animal protection.</ns0:p><ns0:p>The common kestrel (Falco tinnunculus) is listed as a second-class protected animal species in China. Although research concerning avian species, including the common kestrel, has been increasing gradually, almost no data on the gut microbiota of the common kestrel were available.</ns0:p><ns0:p>In this study, we characterized the basic composition and structure of the gut microbiota of a wounded common kestrel that was rescued by the Beijing Raptor Rescue Center (BRRC).</ns0:p><ns0:p>In general, the overall community structure of the gut microbiota in this common kestrel was in accordance with previous relevant characterizations in birds, such as Cooper's hawks <ns0:ref type='bibr' target='#b67'>(Taylor et al. 2019)</ns0:ref>, bar-headed geese <ns0:ref type='bibr' target='#b76'>(Wang et al. 2017)</ns0:ref>, hooded cranes <ns0:ref type='bibr' target='#b80'>(Zhao et al. 2017)</ns0:ref> and swan geese <ns0:ref type='bibr' target='#b75'>(Wang et al. 2016)</ns0:ref>, which included Proteobacteria, Firmicutes, Actinobacteria and Bacteroidetes.</ns0:p><ns0:p>The most predominant phylum in the fecal gut microbiota of the common kestrel was Proteobacteria (41.078%), which ranked after Firmicutes in other birds, such as cockatiels (Nymphicus hollandicus) <ns0:ref type='bibr' target='#b2'>(Alcaraz et al. 2016</ns0:ref>) and black-legged kittiwakes <ns0:ref type='bibr' target='#b71'>(van Dongen et al. 2013)</ns0:ref>. This crucial phylum plays many valuable roles. For instance, Proteobacteria is beneficial for the giant panda, as it can degrade lignin in its major food resource <ns0:ref type='bibr' target='#b24'>(Fang et al. 2012</ns0:ref>).</ns0:p><ns0:p>Additionally, it has been reported that Proteobacteria is also the most dominant phylum in obese dogs <ns0:ref type='bibr' target='#b58'>(Park et al. 2015)</ns0:ref>. The specific function of this phylum could be distinct in birds due to their unique physiological traits, as well as their developmental strategies <ns0:ref type='bibr' target='#b41'>(Kohl 2012</ns0:ref>). However, the high relative abundance of Proteobacteria in the total bacterial community was observed mainly in several samples that were collected during surgeries or drug treatments, such as E1 and E4. Sample E1 was collected on 23rd June, the day after the kestrel was rescued from the wild. On 22nd June, the kestrel was bandaged with silver sulfadiazine cream (SSD) and was given lactated Ringer's solution (LRS), 10 ml subcutaneously and 4 ml orally. An increased level of Proteobacteria has been associated with cardiovascular events, inflammation and inflammatory bowel disease <ns0:ref type='bibr' target='#b3'>(Amar et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b12'>Carvalho et al. 2012)</ns0:ref>. Although the kestrel's weight had increased by 34 grams when E4 was collected, it ate only a mouse's head.
Combined with the status when kestrel was rescued, we speculated that the increased proportion of Proteobacteria may reflect its food consumption or gastrointestinal status to some extent. Environmental influential factors, as well as dietary changes, should also be considered an important index that could result in variations in the relative abundance of species in the gut microbiota <ns0:ref type='bibr' target='#b22'>(De Filippo et al. 2010;</ns0:ref><ns0:ref type='bibr' target='#b64'>Scott et al. 2013)</ns0:ref>.</ns0:p><ns0:p>Furthermore, the dominant genera within Proteobacteria in our study were Escherichia-Shigella (17.588%), Acinetobacter (5.956%), Paracoccus (2.904%) and Burkholderia-Caballeronia-Paraburkholderia (2.408%). Escherichia-Shigella is a common pathogenic bacterium that can PeerJ reviewing PDF | (2020:01:45085:2:0:NEW 9 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed cause diarrhea in humans <ns0:ref type='bibr' target='#b36'>(Hermes et al. 2009</ns0:ref>). The main cause for the high relative abundance of Escherichia-Shigella was the E1 (88.610%) sample, which suggested indirectly that the physical condition of the common kestrel was not normal when it was rescued by staff from the BRRC. This result was also consistent with the actual state of this wounded common kestrel that we observed (Table <ns0:ref type='table'>S3</ns0:ref>).</ns0:p><ns0:p>Although Firmicutes (40.923%) ranked after Proteobacteria, its actual relative abundance was only slightly lower than that in the common kestrel. As a common phylum of the gut microbiota, Firmicutes exists widely in both mammals and birds, and this ancient symbiosis may be linked to the common ancestor of amniotes <ns0:ref type='bibr' target='#b17'>(Costello et al. 2010;</ns0:ref><ns0:ref type='bibr' target='#b41'>Kohl 2012)</ns0:ref>. Firmicutes can provide certain energy for the host through catabolizing complex carbohydrates, sugar, and even by digesting fiber in some species <ns0:ref type='bibr' target='#b15'>(Costa et al. 2012;</ns0:ref><ns0:ref type='bibr' target='#b26'>Flint et al. 2008;</ns0:ref><ns0:ref type='bibr' target='#b32'>Guan et al. 2017)</ns0:ref>.</ns0:p><ns0:p>The dominant genera in Firmicutes were Lactobacillus (20.563%), Enterococcus (4.024%) and Clostridium_sensu_stricto_1 (3.586%). The relative abundance of Enterococcus in E5 (15.026%) contributed to the highest ranking of this genus. Enterococcus is not regarded as a pathogenic bacterium due to its harmlessness and can even be used as a normal food additive in related industries <ns0:ref type='bibr' target='#b25'>(Fisher & Phillips 2009;</ns0:ref><ns0:ref type='bibr' target='#b52'>Moreno et al. 2006)</ns0:ref>. Enterococcus species are also considered common nosocomial pathogens that can cause a high death rate <ns0:ref type='bibr' target='#b49'>(Lopes et al. 2005)</ns0:ref>.</ns0:p><ns0:p>Meanwhile, these species are also associated with certain infections, including neonatal infections, intraabdominal and pelvic infections, as well as the nosocomial infections and superinfections <ns0:ref type='bibr' target='#b53'>(Murray 1990)</ns0:ref>. Coincidentally, prior to the collection of sample E5, kestrel was anesthetized for the treatment of the right tarsometatarsus injury. The right digit tendon of kestrel was exposed before managing the wound, without any function. 
Although ensuring the sterile conditions, we inferred that the kestrel was infected by certain bacteria during the surgery.</ns0:p><ns0:p>The BRRC could be regarded as a specific hospital for raptor, which could explain the high proportion of Enterococcus in the fecal samples of this common kestrel. However, this genus should be given sufficient attention in subsequent studies with additional samples from different PeerJ reviewing PDF | (2020:01:45085:2:0:NEW 9 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed individuals. The abundance of Clostridium increases as more protein is digested <ns0:ref type='bibr' target='#b50'>(Lubbs et al. 2009)</ns0:ref>. Clostridium difficile has been reported to be associated with certain diseases, such as diarrhea and severely life-threatening pseudomembranous colitis <ns0:ref type='bibr' target='#b44'>(Kuijper et al. 2006;</ns0:ref><ns0:ref type='bibr' target='#b60'>Pepin et al. 2004</ns0:ref>). The high relative abundance of this genus also resulted primarily from certain samples (E8, 28.177%), similar to the Enterococcus mentioned above. And it's remarkable that the collection of sample E8 was in the same situation as E5. On 13th July, the kestrel also underwent surgery under anesthesia. While E5 was collected, the kestrel's status was still normal according to relevant records. These results indicated that the high relative abundance of certain pathogens may not show any symptoms of illness for kestrel. In general, the abnormal situation of E5 and E8 still need to be paid enough attention. Moreover, to minimize the influences due to the individual differences, more samples from different individuals should be collected for further study.</ns0:p><ns0:p>The third dominant phylum in the gut microbiota in our study was Actinobacteria (11.191%), which was also detected in other species, such as turkeys (Meleagris gallopavo) <ns0:ref type='bibr' target='#b77'>(Wilkinson et al. 2017</ns0:ref>) and Leach's storm petrel (Oceanodroma leucorhoa) <ns0:ref type='bibr' target='#b59'>(Pearce et al. 2017)</ns0:ref>. The relative abundance of Actinobacteria varied in different species, such as house cats (7.30%) and dogs (1.8%) <ns0:ref type='bibr' target='#b35'>(Handl et al. 2011</ns0:ref>), but only accounted for 0.53% in wolves <ns0:ref type='bibr' target='#b78'>(Wu et al. 2017)</ns0:ref>. Within this phylum, Bifidobacterium (5.624%) and Glutamicibacter (1.840%) were the primary genera. The presence of Bifidobacterium is closely related to the utilization of glycans produced by the host, as well as oligosaccharides in human milk <ns0:ref type='bibr' target='#b65'>(Sela et al. 2008;</ns0:ref><ns0:ref type='bibr' target='#b70'>Turroni et al. 2010)</ns0:ref>. Noticeably, Bifidobacterium thermophilum was reported to be used through oral administration for chickens to resist E. coli infection <ns0:ref type='bibr' target='#b40'>(Kobayashi et al. 2002)</ns0:ref>. The detection and application of Bifidobacterium, especially for the rescue of many rare avian species, would be worth considering for curing various diseases in the future.</ns0:p><ns0:p>Additionally, the relative abundance of Bacteroidetes was 3.821% in this study, which consisted mainly of Sphingobacterium. Bacteroidetes is another important component of the gut Manuscript to be reviewed microbiota that can degrade relevant carbohydrates from secretions of the gut, as well as high molecular weight substances <ns0:ref type='bibr' target='#b68'>(Thoetkiattikul et al. 2013)</ns0:ref>. 
The proportion of Bacteroidetes, which was stable in most samples we collected except E5 (18.166%), would increase correspondingly with weight loss for mice or changes in fiber content in rural children's daily diet <ns0:ref type='bibr' target='#b22'>(De Filippo et al. 2010;</ns0:ref><ns0:ref type='bibr' target='#b46'>Ley et al. 2006;</ns0:ref><ns0:ref type='bibr' target='#b69'>Turnbaugh et al. 2008)</ns0:ref>. However, the weight of kestrel was increasing during the collection of E5 and E8. Additionally, although kestrel underwent surgery on 4th July, the reason for the high proportion of Bacteroidetes in its fecal sample E5 were unclear. To characterize the basic composition and structure of the gut microbiota for the common kestrel more accurately, additional fresh fecal samples from healthy individuals should be collected in follow-up studies.</ns0:p><ns0:p>Furthermore, additional attention should be paid to the high ranking of Patescibacteria (0.543%) and Deinococcus-Thermus (0.504%) at the phylum level. Patescibacteria might be related to basic biosynthesis of amino acids, nucleotides and so on <ns0:ref type='bibr' target='#b45'>(Lemos et al. 2019)</ns0:ref>. Members of Deinococcus-Thermus are known mainly for their capability to resist extreme radiation, including ultraviolet radiation, as well as oxidizing agents <ns0:ref type='bibr' target='#b18'>(Cox & Battista 2005;</ns0:ref><ns0:ref type='bibr' target='#b30'>Griffiths & Gupta 2007)</ns0:ref>. The specific function of certain species in these phyla for the common kestrel should be studied by controlled experiments, detailed observations or more advanced approaches, as molecular biological techniques are developed.</ns0:p><ns0:p>In addition to the quantity of samples, living environment, age, sex and individual differentiation should also be considered as influencing factors, which would cause a degree of discrepancies at all levels in the gut microbiota. In addition, A comparison of wounded and healthy samples for the bacterial composition in the intestinal microbiota is another essential research direction that may provide additional information for wild animal rescue, such as important biomarkers that indirectly indicate potential diseases. Manuscript to be reviewed </ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion</ns0:head></ns0:div>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>In summary, using high-throughput sequencing technology in this study, we first characterized the elementary bacterial composition and structure of the gut microbiota for a wounded common kestrel in the BRRC, which could provide valuable basic data for future studies. Further research on Enterococcus, Patescibacteria and Deinococcus-Thermus should be conducted in the future with additional samples. The integration of other auxiliary techniques or disciplines, such as metagenomics and transcriptomics, could offer a deeper understanding of the function and mechanism of the gut microbiota, as well as the protection of wild animals.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 3 The</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,255.37,525.00,200.25' type='bitmap' /></ns0:figure>
</ns0:body>
" | "Dear Editor and Reviewer,
Thanks a lot for reviewing our manuscript and the valuable suggestions. We have revised our manuscript carefully and answered the questions you mentioned again.
We are also sorry for all the trouble caused by the English problems. In future research, we will apply stricter standards to our routine paper writing.
We look forward to your comments and final decision.
Sincerely yours,
Yu Guan
Reviewer 3
Basic reporting
In my initial review, I discussed that the original manuscript contained clear, unambiguous, and professional English throughout. However, the new edited portions contain English that is written with many more issues than the original paper itself. In this way, the paper has actually regressed from its original submission. Basic mistakes (such as the use of '23th' for example) show that the edits likely did not receive the same amount of scrutiny as the original paper.
As I mentioned in my original review, the literature cited is robust.
Figures are shared properly. The paper is self-contained. It does not pose direct hypotheses as it is more of an exploratory study.
Experimental design
While the authors did incorporate my suggestion of mentioning the history of the kestrel, I do not believe that the changes are sufficient, and perhaps I was not clear about what I was asking for. The kestrel has a strange history, of being found injured and emaciated, as well as undergoing multiple treatments with different types of chemicals and surgeries. While the authors provide some anecdotally framed information about relevant samplings in the discussion, this is too challenging for the reader to piece together. I suggest mentioning these treatments clearly in the Methods, and mentioning all treatments corresponding to all sampling periods. This information would be greatly enhanced by a supplementary table detailing the sampling period, date, and relevant surgery, chemicals, or other notes (kestrel ate a mouse, for example) for that day. This would allow the reader to come to their own conclusions about the findings, rather than relying on the author’s selective interpretations in the discussion.
Reply: Due to the complexity of the treatment and feeding process, we have included it in the supplementary table (Table S2 and S3) for readers' reference.
Additionally, the authors continue to use what I consider to be a comparatively weaker knowledge gap in the lack of knowledge about bird microbiota. I think the more interesting angle here is a medical one, regarding the kestrel's health and surgeries. However, if the editor and other peer reviewers do not find this to be a problem it could be published in the current context.
Validity of the findings
This is all fine.
Comments for the author
General:
The authors may have misinterpreted my question about subspecies. Could the authors please identify the subspecies of kestrel they are observing? This is essential. I am not asking for a comparison of subspecies- just labelling the current kestrel that has been worked on.
Reply: Here we discussed only the common kestrel (Falco tinnunculus), in line with previous avian research, and did not address the subspecies for now. We will identify the subspecies in the next study as the number of rescued individuals increases.
Overall my main comment is that the care that went into the original paper is not reflected in the edits. The edited content is plagued by poor English and poor formatting (see literature cited).
Reply: We have rephrased many sections you mentioned.
Also very little context is given to the findings as a whole. The authors do a good job of explaining where specific findings fit into the field (this bacteria that we found also does this, and this, and so on) but not where their findings, in summation, fit into the field. Placing emphasis on this in the beginning of the discussion would be good. Again, I think that this study being performed on a captive bird that underwent surgery is a good place to start.
Line edits corresponding to the numbering system in the 'track changes' version of the document.
Line 33:
Replace “extremely special” to “specialized”
Reply: We have changed the manuscript.
Line 38:
Eliminate “in this study”
Reply: We have eliminated it.
Line 41:
Eliminate “could also”
Reply: We have eliminated it.
Line 48:
“Research” not “researches”
Reply: We have corrected it.
Line 48:
First sentence of new intro is a run on sentence, could break into two sentences
Reply: We have broken it into two sentences.
Line 53:
“Animals” plural. “has gradually become a” This sentence is not using proper English.
Reply: We have corrected it.
Line 70-72:
Re-read reviewer 2’s comment regarding this sentence. It still insinuates that studies on the species themselves have increased, not on birds as a whole. It still needs to be rephrased.
Reply: We have rephrased this sentence.
Line 72:
“With regard.”
Reply: We have corrected it.
Line 73:
Avoid casual abbreviations in scientific writing: “isn’t.” What does “positive trend” mean? Confusing.
Reply: We have corrected it.
Line 74:
You changed this sentence here, now change it in the Abstract.
Reply: We have corrected the Abstract.
Line 75:
This sentence is also not using proper English and needs to be edited. Also, not all birds are warmer than ambient temperature, for example in extremely hot desert environments.
Reply: We have rephrased this sentence.
Line 75-81:
I appreciate the additional information but this section is using poor English. I can’t put in the time to correct all of the English, the authors need to consult someone.
Reply: We have rephrased this section.
Line 86:
Should “BirdLife International” citation include a period in it?
Line 108:
Eliminate “the” in “the common raptors”
Reply: We have eliminated it.
Line 109:
Eliminate “the” in “the food chains.” Eliminate the in “the common kestrels.” Is the kestrel a top predator? Presumably other larger raptors can prey on it.
Reply: We have eliminated “the” and changed “top” into “important”.
Line 118:
“the injured common kestrel could not fly and was found…” Eliminate “first.” First is implied with “found”
Reply: We have eliminated “first”.
Line 228:
Should “R Core Team” citation include a period in it?
Reply: We have corrected these citations.
Line 239-241:
Probably a mistake that could be eliminated.
Reply: We have eliminated the mistake.
Line 247:
“Are shown” as opposed to “were shown”
Reply: We have corrected it.
Line 282:
May want to relist these results with the smallest percentage first- it does the best job of illustrating your point, so may want to put it up front.
Reply: Based on the relevant references and our previously published papers, we believe that the existing ranking method can better illustrate our point.
Line 304:
Eliminate “and depicted in”
Reply: We have eliminated it.
Line 307:
“the variation” Eliminate “the”
Reply: We have eliminated it.
Line 308:
Eliminate “in this case” - it is implied.
Reply: We have eliminated it.
Line 312:
Eliminate “the”
Reply: We have eliminated it.
Line 314:
“continues to be” Eliminate “was”
Reply: We have rephrased it.
Line 318:
“were” past tense, because you have now characterized it.
Reply: We have rephrased it.
Line 324:
Can probably just say “birds” here
Reply: We have rephrased it.
Line 349:
“23rd” “June, the day after”
Reply: We have corrected it.
Line 350:
“22nd” June. This document needs to be reviewed by a native English speaker.
Reply: We have corrected it.
Line 351:
Unclear, revise English
Reply: We have rephrased it.
Line 355:
“combined with”
Reply: We have corrected it.
Line 369:
Was “only” slightly lower. Otherwise using “although” in this sentence does not makes sense.
Reply: We have corrected it.
Line 374:
“Carbohydrates, sugar, and even by digesting fiber in some species”
Reply: We have corrected it.
Line 379:
Eliminate “special” ?
Reply: We have eliminated it.
Line 382-390:
Again, information is appreciated but I don’t have time (or think it is my job) to make every English correction. Someone else needs to review the manuscript generally, but especially these new edited sections.
Reply: We have rephrased it.
Line 393:
Don’t need to specify that Clostridium difficile belongs to Clostridium
Reply: We have rephrased it.
Line 393-394:
Again, English issues.
Reply: We have rephrased it.
Line 397:
“Remarkably.” Sentence is vague, need to back up with some numbers
Reply: We have rephrased it.
Line 398:
Eliminate “the” in “the surgery” “While E5 was collected”
Reply: We have rephrased it.
Line 414:
This is indicating relative abundance correct?
Reply: We mean relative abundance here.
Line 419:
Do you mean “notably”?
Reply: We did mean “noticeably” here.
Line 430-432:
Again, poor English. “Although the kestrel underwent surgery on 4th July, the reason for the high proportion…”
Reply: We have rephrased it.
Literature cited:
There is inconsistent italicization and species capitalization of Falco tinnunculus in the literature cited.
Reply: We have corrected it.
I am not sure that the BirdLife International citation is formatted correctly, as I do not think a reader could retrieve information based on this citation.
Reply: We have corrected this citation according to the official format (https://www.iucnredlist.org/species/22696362/93556429).
The new citations are added very sloppily, without proper page numbers, often including URL links, and are inconsistent.
Reply: We have corrected the citations.
" | Here is a paper. Please give your review comments after reading it. |
639 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>As a complex microecological system, the gut microbiota plays crucial roles in many aspects, including immunology, physiology and development. The specific function and mechanism of the gut microbiota in birds are distinct due to their body structure, physiological attributes and life history. Data on the gut microbiota of the common kestrel, a second-class protected animal species in China, are currently scarce. With high-throughput sequencing technology, we characterized the bacterial community of the gut from 9 fecal samples from a wounded common kestrel by sequencing the V3-V4 region of the 16S ribosomal RNA gene. Our results showed that Proteobacteria (41.078%), Firmicutes (40.923%) and Actinobacteria (11.191%) were the most predominant phyla.</ns0:p><ns0:p>Lactobacillus (20.563%) was the most dominant genus, followed by Escherichia-Shigella (17.588%) and Acinetobacter (5.956%). Our results would offer fundamental data and direction for wildlife rescue.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Recent research on host-associated gut microbial communities have revealed their important roles in immunology, physiology and development <ns0:ref type='bibr' target='#b34'>(Guarner & Malagelada 2003;</ns0:ref><ns0:ref type='bibr' target='#b58'>Nicholson et al. 2005)</ns0:ref>, as well as several basic and critical processes, such as nutrient absorption and vitamins synthesis in both human and animals <ns0:ref type='bibr' target='#b28'>(Fukuda & Ohno 2014;</ns0:ref><ns0:ref type='bibr' target='#b38'>Kau et al. 2011;</ns0:ref><ns0:ref type='bibr' target='#b60'>Omahony et al. 2015)</ns0:ref> . Gut microbiota analysis of wild animals is becoming a new method that may provide information for wildlife rescue and animal husbandry. Reports concerning the gut microbiota of Manuscript to be reviewed other avian species, such as Cooper's hawk (Accipiter cooperii) <ns0:ref type='bibr' target='#b74'>(Taylor et al. 2019)</ns0:ref>, bar-headed geese (Anser indicus) <ns0:ref type='bibr' target='#b82'>(Wang et al. 2017)</ns0:ref>, hooded crane (Grus monacha) <ns0:ref type='bibr' target='#b89'>(Zhao et al. 2017)</ns0:ref>, Western Gull (Larus occidentalis) <ns0:ref type='bibr' target='#b14'>(Cockerham et al. 2019)</ns0:ref>, herring gull (Larus argentatus) <ns0:ref type='bibr' target='#b27'>(Fuirst et al. 2018</ns0:ref>) and black-legged kittiwake (Rissa tridactyla) <ns0:ref type='bibr' target='#b78'>(van Dongen et al. 2013)</ns0:ref>, have increased rapidly. The specific function and mechanism of the gut microbiota in birds are distinct due to their body structure, physiological attributes and life history <ns0:ref type='bibr' target='#b42'>(Kobayashi 1969;</ns0:ref><ns0:ref type='bibr' target='#b84'>Williams & Tieleman 2005;</ns0:ref><ns0:ref type='bibr' target='#b86'>Winter et al. 2006)</ns0:ref>. For example, for most birds, a stable body temperature above ambient temperature ensures a high metabolic rate for the birds needed for flight <ns0:ref type='bibr' target='#b59'>(O'Mara et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b69'>Schleucher 2002;</ns0:ref><ns0:ref type='bibr' target='#b72'>Smit et al. 2016)</ns0:ref>. Streamlined bodies, efficient breathing patterns and relatively short gastrointestinal tracts are also special attributes <ns0:ref type='bibr' target='#b40'>(Klasing 1999;</ns0:ref><ns0:ref type='bibr' target='#b61'>Orosz & Lichtenberger 2011)</ns0:ref>. Meanwhile, the birds' ability to fly sets them apart from other animals, altering their intestinal microbiota to some extent. However, as a research focus, data on the gut microbiota of the common kestrel are currently very scarce.</ns0:p><ns0:p>The common kestrel (Falco tinnunculus) is a small raptor that belongs to Falconidae, which is a family of diurnal birds of prey, including falcons and kestrels. A total of 12 subspecies for common kestrel are distributed widely from the Palearctic to Oriental regions <ns0:ref type='bibr' target='#b21'>(Cramp & Brooks 1992)</ns0:ref>. Although listed in the least concern (LC) class by the International Union for Conservation of Nature (IUCN) (BirdLife International. 2016), the common kestrel was listed as state second-class protected animals (Defined by the LAW OF THE PEOPLE'S REPUBLIC OF CHINA ON THE PROTECTION OF WILDLIFE, Chapter II, Article 9) in China. 
The common kestrel is a typical opportunistic forager that catches small and medium-sized animals, including small mammals, birds, reptiles and some invertebrates <ns0:ref type='bibr' target='#b8'>(Anthony 1993;</ns0:ref><ns0:ref type='bibr' target='#b9'>Aparicio 2000;</ns0:ref><ns0:ref type='bibr' target='#b80'>Village 2010</ns0:ref>). Insects such as grasshoppers and dragonflies were also identified in the diet of the common kestrel <ns0:ref type='bibr' target='#b29'>(Geng et al. 2009</ns0:ref>). As generalist predators, common kestrels choose distinct predatory strategies when non-breeding and breeding to minimize the expenditure of energy, Microbial DNA was extracted from fresh fecal samples using an E.Z.N.A.® Stool DNA Kit (Omega Bio-tek, Norcross, GA, U.S.) according to the manufacturer's protocols. The V3-V4 region of the bacterial 16S ribosomal RNA gene was amplified by PCR (95 °C for 3 min; followed by 25 cycles at 95 °C for 30 s, 55 °C for 30 s, and 72 °C for 30 s; and a final extension at 72 °C for 5 min) using the primers 338F (5'-barcode-ACTCCTACGGGAGGCAGCAG-3') and 806R (5'-GGACTACHVGGGTWTCTAAT-3'), where the barcode is an eight-base sequence unique to each sample. PCRs were performed in triplicate in a 20 μL mixture containing 4 μL of 5 × FastPfu Buffer, 2 μL of 2.5 mM dNTPs, 0.8 μL of each primer (5 μM), 0.4 μL of FastPfu Polymerase, and 10 ng of template DNA.</ns0:p></ns0:div>
<ns0:div><ns0:head>Illumina MiSeq sequencing</ns0:head><ns0:p>Amplicons were extracted from 2% agarose gels and purified using an AxyPrep DNA Gel Extraction Kit (Axygen Biosciences, Union City, CA, U.S.) according to the manufacturer's instructions and quantified using QuantiFluor™ -ST (Promega, U.S.). Purified amplicons were pooled in equimolar amounts and paired-end sequenced (2 × 250) on an Illumina MiSeq platform according to standard protocols.</ns0:p></ns0:div>
<ns0:div><ns0:head>Processing of sequencing data</ns0:head><ns0:p>Raw fastq files were demultiplexed and quality-filtered using QIIME (version 1.17) <ns0:ref type='bibr' target='#b12'>(Caporaso et al. 2010)</ns0:ref> with the following criteria: (i) the 300 bp reads were truncated at any site receiving an average quality score <20 over a 50 bp sliding window, discarding the truncated reads that were shorter than 50 bp; (ii) exact barcode matching was required, no more than 2 nucleotide mismatches were allowed in primer matching, and reads containing ambiguous characters were removed; (iii) only sequences that overlapped by more than 10 bp were assembled according to their overlap sequence, and reads that could not be assembled were discarded.</ns0:p><ns0:p>Operational taxonomic units (OTUs) were clustered with a 97% similarity cutoff using UPARSE (version 7.1, http://drive5.com/uparse/), and chimeric sequences were identified and removed using UCHIME <ns0:ref type='bibr' target='#b23'>(Edgar et al. 2011)</ns0:ref>. The taxonomy of each 16S rRNA gene sequence was analyzed by RDP Classifier (http://rdp.cme.msu.edu/) against the SILVA (SSU115) 16S rRNA database using a confidence threshold of 70% <ns0:ref type='bibr' target='#b7'>(Amato et al. 2013)</ns0:ref>.</ns0:p></ns0:div>
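To make filtering criterion (i) above concrete, the short sketch below implements the rule as stated: scan 50 bp windows, truncate the read before the first window whose mean quality drops below 20, and discard the read if fewer than 50 bp remain. This is only an illustration of the written rule, not the actual QIIME code, and the handling of the read end may differ in the real pipeline; the example read and quality values are invented.

```python
def truncate_by_quality(seq, quals, window=50, min_mean_q=20, min_len=50):
    """Truncate a read at the first 50 bp window with mean quality below the
    threshold; return None if the retained part is shorter than min_len."""
    assert len(seq) == len(quals)
    for start in range(len(quals) - window + 1):
        if sum(quals[start:start + window]) / window < min_mean_q:
            seq = seq[:start]          # cut the read before the failing window
            break
    return seq if len(seq) >= min_len else None

# toy read: 120 high-quality bases followed by a low-quality tail
quals = [35] * 120 + [10] * 80
read = 'A' * 200
kept = truncate_by_quality(read, quals)
print(len(kept) if kept else 'discarded')   # prints the retained read length
```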
<ns0:div><ns0:head>Data analysis</ns0:head><ns0:p>All the indices of alpha diversity, including Chao, ACE, Shannon, Simpson, and coverage, and the analysis of beta diversity were calculated with QIIME. The rarefaction curves, rank abundance curves, and stacked histogram of relative abundance were displayed with R (R Core Team, 2015).</ns0:p><ns0:p>The hierarchical clustering trees were built using UPGMA (unweighted pair-group method with arithmetic mean) based on weighted and unweighted distance matrices at different levels.</ns0:p><ns0:p>Principal coordinate analysis (PCoA) was calculated and displayed using QIIME and R, as well as hierarchical clustering trees.</ns0:p><ns0:p>This study was performed in accordance with the recommendations of the Animal Ethics Review Committee of Beijing Normal University (approval reference number: CLS-EAW-2019-026).</ns0:p><ns0:p>Additionally, the rarefaction curves (A) and the rank abundance curves (B) are shown in Fig. <ns0:ref type='figure' target='#fig_9'>S1</ns0:ref>, which indicated that the number of OTUs for further analysis was reasonable, as well as the abundance of species in common kestrel feces. The total sequences, total bases and OTU distributions of all samples are shown in Table <ns0:ref type='table'>S4 and Table S5</ns0:ref>.</ns0:p></ns0:div>
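As a companion note on the UPGMA clustering and PCoA steps described above, the sketch below derives both from a precomputed between-sample distance matrix using NumPy and SciPy. The UniFrac distances themselves (which require the OTU phylogeny) are assumed to be available already; the 9 x 9 matrix here is random placeholder data standing in for samples E1-E9, so the printed numbers are illustrative only and do not correspond to the study's results.

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)
n = 9                                      # nine fecal samples, E1-E9
d = rng.random((n, n))
d = (d + d.T) / 2                          # make the placeholder matrix symmetric
np.fill_diagonal(d, 0.0)                   # zero self-distances (stands in for UniFrac)

# UPGMA = average-linkage hierarchical clustering on the condensed distances;
# the result can be drawn with scipy.cluster.hierarchy.dendrogram
upgma_tree = linkage(squareform(d, checks=False), method='average')

# Classical PCoA: Gower-centre the squared distances, then eigendecompose
a = -0.5 * d ** 2
row_means = a.mean(axis=1, keepdims=True)
g = a - row_means - row_means.T + a.mean()
eigvals, eigvecs = np.linalg.eigh(g)
order = np.argsort(eigvals)[::-1]          # largest eigenvalues first
coords = eigvecs[:, order[:2]] * np.sqrt(np.clip(eigvals[order[:2]], 0, None))
print(coords)                              # PCo1/PCo2 coordinates for each sample
```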
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Overall sequencing data</ns0:head><ns0:p>A total of 28 phyla, 70 classes, 183 orders, 329 families and 681 genera were detected among the gastrointestinal bacterial communities. There were altogether 389,474 reads obtained and classified into 1673 OTUs at the 0.97 sequence identity cut-off in 9 fecal samples from a common kestrel. Alpha diversity indices (including Sobs, Shannon, Simpson, ACE, Chao and coverage) of each sample are shown in Table 1. The Sobs and Shannon index of all samples are shown in Fig. 1.</ns0:p></ns0:div>
<ns0:div><ns0:head>Bacterial composition and relative abundance</ns0:head><ns0:p>At the phylum level of the gut microbiota in the common kestrel, the most predominant phylum was Proteobacteria (41.078%), followed by Firmicutes (40.923%), <ns0:ref type='bibr'>Actinobacteria (11.191%)</ns0:ref> and Bacteroidetes (3.821%). In addition to Tenericutes (0.178%) and Verrucomicrobia (0.162%), Patescibacteria (0.543%) and Deinococcus-Thermus (0.504%) also ranked among the top 10 phyla in the common kestrel fecal microbiota (Table <ns0:ref type='table'>2</ns0:ref>).</ns0:p><ns0:p>The top 5 families in the gut microbiota were Lactobacillaceae (20.563%), Enterobacteriaceae (18.346%), Moraxellaceae (6.733%), Bifidobacteriaceae (5.624%) and Burkholderiaceae (4.752%).</ns0:p><ns0:p>At the genus level, Lactobacillus (20.563%), Escherichia-Shigella (17.588%) and Acinetobacter (5.956%) were the most dominant genera. These were followed by Bifidobacterium (5.624%) and Enterococcus (4.024%) (Table <ns0:ref type='table'>3</ns0:ref>). In several samples these five genera accounted for only a small proportion of the total gut microbiota, such as E5 (28.755%), E6 (10.905%) and especially E4 (2.861%), whereas the largest proportion, 98.416%, was found in E1.</ns0:p><ns0:p>The stacked histogram of relative abundance for species is also shown in Fig. <ns0:ref type='figure' target='#fig_10'>2</ns0:ref> at the phylum (A) and genus (B) levels, which intuitively represents the basic bacterial composition and relative abundance. The community structures of E1 and E9 were more similar than those of the other feces samples at both levels.</ns0:p><ns0:p>The hierarchical clustering trees showed the similarity of community structure among different samples; they were generated by UPGMA (unweighted pair-group method with arithmetic mean) with the unweighted UniFrac (Fig. <ns0:ref type='figure' target='#fig_11'>3A</ns0:ref>) and weighted UniFrac (Fig. <ns0:ref type='figure' target='#fig_11'>3B</ns0:ref>) distance matrices. Although the fecal samples were collected from the common kestrel in chronological order (E1-E9) of therapy treatments, no distinct or obvious clustering relationships are discernable in Fig. <ns0:ref type='figure' target='#fig_11'>3</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discrepancy of community composition</ns0:head><ns0:p>To further demonstrate the differences in community composition among the nine samples, principal coordinates analysis (PCoA) was applied (Fig. <ns0:ref type='figure'>4</ns0:ref>). For PCoA, we chose the same two distance matrices (unweighted UniFrac in Fig. <ns0:ref type='figure'>4A</ns0:ref> and weighted UniFrac in Fig. <ns0:ref type='figure'>4B</ns0:ref>) as above to analyze the discrepancies. The results in Fig. <ns0:ref type='figure'>4</ns0:ref> were similar to those in Fig. <ns0:ref type='figure' target='#fig_11'>3</ns0:ref>, in which all samples scattered dispersedly, suggesting that variation in the composition of the gut microbiota of the common kestrel was not obvious over time.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Knowledge and comprehension concerning the gut microbiota have continued to develop progressively with the relevant techniques over the past decade <ns0:ref type='bibr' target='#b33'>(Guarner 2014;</ns0:ref><ns0:ref type='bibr' target='#b49'>Li et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b67'>Qin et al. 2010</ns0:ref>). The analysis of intestinal microecology also continues to be a research focus in the field of wildlife rescue.</ns0:p><ns0:p>The common kestrel (Falco tinnunculus) is listed as a second-class protected animal species in China. Although research concerning avian species, including the common kestrel, has been increasing gradually, almost no data on the gut microbiota of the common kestrel were available.</ns0:p><ns0:p>In this study, we characterized the basic composition and structure of the gut microbiota of a wounded common kestrel that was rescued by the Beijing Raptor Rescue Center (BRRC).</ns0:p><ns0:p>In general, the overall community structure of the gut microbiota in this common kestrel was in accordance with previous relevant characterizations in birds, such as Cooper's hawks <ns0:ref type='bibr' target='#b74'>(Taylor et al. 2019)</ns0:ref>, bar-headed geese <ns0:ref type='bibr' target='#b82'>(Wang et al. 2017)</ns0:ref>, hooded cranes <ns0:ref type='bibr' target='#b89'>(Zhao et al. 2017)</ns0:ref> and swan geese <ns0:ref type='bibr' target='#b81'>(Wang et al. 2016)</ns0:ref>, which included Proteobacteria, Firmicutes, Actinobacteria and Bacteroidetes.</ns0:p><ns0:p>The most predominant phylum in the fecal gut microbiota of the common kestrel was Proteobacteria (41.078%), which ranked after Firmicutes in other birds, such as cockatiels (Nymphicus hollandicus) <ns0:ref type='bibr' target='#b3'>(Alcaraz et al. 2016</ns0:ref>) and black-legged kittiwakes <ns0:ref type='bibr' target='#b78'>(van Dongen et al. 2013)</ns0:ref>. This crucial phylum plays many valuable roles. For instance, Proteobacteria is beneficial for the giant panda, as it can degrade lignin in its major food resource <ns0:ref type='bibr' target='#b24'>(Fang et al. 2012</ns0:ref>).</ns0:p><ns0:p>Additionally, it has been reported that Proteobacteria is also the most dominant phylum in obese dogs <ns0:ref type='bibr' target='#b63'>(Park et al. 2015)</ns0:ref>. The specific function of this phylum could be distinct in birds due to their unique physiological traits, as well as their developmental strategies <ns0:ref type='bibr' target='#b44'>(Kohl 2012</ns0:ref>). However, the high relative abundance of Proteobacteria in the total bacterial community was observed mainly in several samples that were collected during surgeries or drug treatments, such as E1 and E4. Sample E1 was collected on 23rd June, the day after the kestrel was rescued from the wild. On 22nd June, the kestrel was bandaged with silver sulfadiazine cream (SSD) and was given lactated Ringer's solution (LRS), 10 ml subcutaneously and 4 ml orally. An increased level of Proteobacteria has been associated with cardiovascular events, inflammation and inflammatory bowel disease <ns0:ref type='bibr' target='#b4'>(Amar et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b13'>Carvalho et al. 2012)</ns0:ref>. Although the kestrel's weight had increased by 34 grams when E4 was collected, it ate only a mouse's head.
Combined with the status when the kestrel was rescued, we speculated that the increased proportion of Proteobacteria may reflect its food consumption or gastrointestinal status to some extent. Environmental influential factors, as well as dietary changes, should also be considered an important index that could result in variations in the relative abundance of species in the gut microbiota <ns0:ref type='bibr' target='#b22'>(De Filippo et al. 2010;</ns0:ref><ns0:ref type='bibr' target='#b70'>Scott et al. 2013)</ns0:ref>.</ns0:p><ns0:p>Furthermore, the dominant genera within Proteobacteria in our study were Escherichia-Shigella (17.588%), Acinetobacter (5.956%), Paracoccus (2.904%) and Burkholderia-Caballeronia-Paraburkholderia (2.408%). Escherichia-Shigella is a common pathogenic bacterium that can Manuscript to be reviewed cause diarrhea in humans <ns0:ref type='bibr' target='#b36'>(Hermes et al. 2009</ns0:ref>). The main cause for the high relative abundance of Escherichia-Shigella was the E1 (88.610%) sample, which suggested indirectly that the physical condition of the common kestrel was not normal when it was rescued by staff from the BRRC. This result was also consistent with the actual state of this wounded common kestrel that we observed (Table <ns0:ref type='table'>S3</ns0:ref>).</ns0:p><ns0:p>Although Firmicutes (40.923%) ranked after Proteobacteria, its actual relative abundance was only slightly lower than that in the common kestrel. As a common phylum of the gut microbiota, Firmicutes exists widely in both mammals and birds, and this ancient symbiosis may be linked to the common ancestor of amniotes <ns0:ref type='bibr' target='#b19'>(Costello et al. 2010;</ns0:ref><ns0:ref type='bibr' target='#b44'>Kohl 2012)</ns0:ref>. Firmicutes can provide certain energy for the host through catabolizing complex carbohydrates, sugar, and even by digesting fiber in some species <ns0:ref type='bibr' target='#b16'>(Costa et al. 2012;</ns0:ref><ns0:ref type='bibr' target='#b26'>Flint et al. 2008;</ns0:ref><ns0:ref type='bibr' target='#b32'>Guan et al. 2017)</ns0:ref>.</ns0:p><ns0:p>The dominant genera in Firmicutes were Lactobacillus (20.563%), Enterococcus (4.024%) and Clostridium_sensu_stricto_1 (3.586%). The relative abundance of Enterococcus in E5 (15.026%) contributed to the highest ranking of this genus. Enterococcus is not regarded as a pathogenic bacterium due to its harmlessness and can even be used as a normal food additive in related industries <ns0:ref type='bibr' target='#b25'>(Fisher & Phillips 2009;</ns0:ref><ns0:ref type='bibr' target='#b55'>Moreno et al. 2006)</ns0:ref>. Enterococcus species are also considered common nosocomial pathogens that can cause a high death rate <ns0:ref type='bibr' target='#b51'>(Lopes et al. 2005)</ns0:ref>.</ns0:p><ns0:p>Meanwhile, these species are also associated with certain infections, including neonatal infections, intraabdominal and pelvic infections, as well as the nosocomial infections and superinfections <ns0:ref type='bibr' target='#b56'>(Murray 1990</ns0:ref>). Coincidentally, prior to the collection of sample E5, the kestrel was anesthetized for the treatment of the right tarsometatarsus injury. The right digit tendon of the kestrel was exposed before managing the wound, without any function. Although ensuring the sterile conditions, we inferred that the kestrel was infected by certain bacteria during the surgery. 
The BRRC could be regarded as a specific hospital for raptor, which could explain the high proportion of Enterococcus in the fecal samples of this common kestrel. However, this genus should be given sufficient attention in subsequent studies with additional samples from Manuscript to be reviewed different individuals. The abundance of Clostridium increases as more protein is digested <ns0:ref type='bibr' target='#b52'>(Lubbs et al. 2009)</ns0:ref>. Clostridium difficile has been reported to be associated with certain diseases, such as diarrhea and severely life-threatening pseudomembranous colitis <ns0:ref type='bibr' target='#b46'>(Kuijper et al. 2006;</ns0:ref><ns0:ref type='bibr' target='#b66'>Pepin et al. 2004</ns0:ref>). The high relative abundance of this genus also resulted primarily from certain samples (E8, 28.177%), similar to the Enterococcus mentioned above. And it's remarkable that the collection of sample E8 was in the same situation as E5. On 13th July, the kestrel also underwent surgery under anesthesia. While E5 was collected, the kestrel's status was still normal according to relevant records. These results indicated that the high relative abundance of certain pathogens may not show any symptoms of illness for the kestrel. In general, the abnormal situation of E5 and E8 still need to be paid enough attention. Moreover, to minimize the influences due to the individual differences, more samples from different individuals should be collected for further study.</ns0:p><ns0:p>The third dominant phylum in the gut microbiota in our study was Actinobacteria (11.191%), which was also detected in other species, such as turkeys (Meleagris gallopavo) <ns0:ref type='bibr' target='#b83'>(Wilkinson et al. 2017</ns0:ref>) and Leach's storm petrel (Oceanodroma leucorhoa) <ns0:ref type='bibr' target='#b64'>(Pearce et al. 2017)</ns0:ref>. The relative abundance of Actinobacteria varied in different species, such as house cats (7.30%) and dogs (1.8%) <ns0:ref type='bibr' target='#b35'>(Handl et al. 2011</ns0:ref>), but only accounted for 0.53% in wolves <ns0:ref type='bibr' target='#b87'>(Wu et al. 2017)</ns0:ref>. Within this phylum, Bifidobacterium (5.624%) and Glutamicibacter (1.840%) were the primary genera. The presence of Bifidobacterium is closely related to the utilization of glycans produced by the host, as well as oligosaccharides in human milk <ns0:ref type='bibr' target='#b71'>(Sela et al. 2008;</ns0:ref><ns0:ref type='bibr' target='#b77'>Turroni et al. 2010)</ns0:ref>. Noticeably, Bifidobacterium thermophilum was reported to be used through oral administration for chickens to resist E. coli infection <ns0:ref type='bibr' target='#b41'>(Kobayashi et al. 2002)</ns0:ref>. The detection and application of Bifidobacterium, especially for the rescue of many rare avian species, would be worth considering for curing various diseases in the future.</ns0:p><ns0:p>Additionally, the relative abundance of Bacteroidetes was 3.821% in this study, which consisted mainly of Sphingobacterium. Bacteroidetes is another important component of the gut Manuscript to be reviewed microbiota that can degrade relevant carbohydrates from secretions of the gut, as well as high molecular weight substances <ns0:ref type='bibr' target='#b75'>(Thoetkiattikul et al. 2013)</ns0:ref>. 
The proportion of Bacteroidetes, which was stable in most samples we collected except E5 (18.166%), would increase correspondingly with weight loss for mice or changes in fiber content in rural children's daily diet <ns0:ref type='bibr' target='#b22'>(De Filippo et al. 2010;</ns0:ref><ns0:ref type='bibr' target='#b48'>Ley et al. 2006;</ns0:ref><ns0:ref type='bibr' target='#b76'>Turnbaugh et al. 2008)</ns0:ref>. However, the weight of the kestrel was increasing during the collection of E5 and E8. Additionally, although the kestrel underwent surgery on 4th July, the reason for the high proportion of Bacteroidetes in its fecal sample E5 were unclear. To characterize the basic composition and structure of the gut microbiota for the common kestrel more accurately, additional fresh fecal samples from healthy individuals should be collected in follow-up studies.</ns0:p><ns0:p>Furthermore, additional attention should be paid to the high ranking of Patescibacteria (0.543%) and Deinococcus-Thermus (0.504%) at the phylum level. Patescibacteria might be related to basic biosynthesis of amino acids, nucleotides and so on <ns0:ref type='bibr' target='#b47'>(Lemos et al. 2019)</ns0:ref>. Members of Deinococcus-Thermus are known mainly for their capability to resist extreme radiation, including ultraviolet radiation, as well as oxidizing agents <ns0:ref type='bibr' target='#b20'>(Cox & Battista 2005;</ns0:ref><ns0:ref type='bibr' target='#b30'>Griffiths & Gupta 2007)</ns0:ref>. The specific function of certain species in these phyla for the common kestrel should be studied by controlled experiments, detailed observations or more advanced approaches, as molecular biological techniques are developed.</ns0:p><ns0:p>In addition to the quantity of samples, living environment, age, sex and individual differentiation should also be considered as influencing factors, which would cause a degree of discrepancies at all levels in the gut microbiota. In addition, A comparison of wounded and healthy samples for the bacterial composition in the intestinal microbiota is another essential research direction that may provide additional information for wild animal rescue, such as important biomarkers that indirectly indicate potential diseases. Manuscript to be reviewed </ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion</ns0:head></ns0:div>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>In summary, using high-throughput sequencing technology in this study, we first characterized the elementary bacterial composition and structure of the gut microbiota for a wounded common kestrel in the BRRC, which could provide valuable basic data for future studies. Further research on Enterococcus, Patescibacteria and Deinococcus-Thermus should be conducted in the future with additional samples. The integration of other auxiliary techniques or disciplines, such as metagenomics and transcriptomics, could offer a deeper understanding of the function and mechanism of the gut microbiota, as well as the wildlife rescue.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 3 The</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,255.37,525.00,200.25' type='bitmap' /></ns0:figure>
</ns0:body>
" | "Dear Editor and Reviewers,
We are very grateful for your rigorous attitude and careful review. We have also learned a great deal through these revisions. Thank you again for your review.
We have carefully revised our manuscript according to your comments to meet the publication requirements.
We look forward to your final decision.
Sincerely yours,
Yu Guan
L42, 52, 120 & 240: I think the phrase 'animal protection' is far too vague. I think you mean that optimising the host gut microbiota could *possibly* be used as a conservation tool, but that's not clear here. More importantly your paper does not contain evidence that can be used to support this statement, so it certainly should appear in the abstract as if you have developed a 'novel strategy'.
Reply: We have changed the ‘animal protection’ into ‘wildlife rescue’ and also rephrased the last sentence in the abstract.
L48: The opening sentence about advancement of sequencing technologies is not useful, in my opinion. Perhaps just open with the second sentence. edited to 'Recent research on host-associated gut microbial communities have revealed their...'
Reply: We have rephrased this sentence.
L51: 'processes'
Reply: We have corrected it.
L51: 'diseases' are neither critical nor basic 'processes'
Reply: We have deleted ‘diseases’.
L54: see above - remove vague statement about being a 'critical tool for animal protection'.
Reply: We have removed and rephrased it.
L69: 'Although studies of the microbiome of these species have increased, there has been no corresponding trend in the study of each species.' This sentence makes no sense.
Reply: We have removed this sentence.
L75' citations needed for these physiological generalisations
Reply: We have added several citations.
L119: As requested by a reviewer, please make this clear that this study is of a single kestrel sampled multiple times at a wildlife rescue centre. 'Wounded kestrel' isn't quite sufficient detail
Reply: We have rephrased this section.
L175: the citation you provide for R is for a book, not the software
Reply: We have corrected this citation.
L240: I think by 'protection' you mean 'conservation'
Reply: We have rephrased it.
L389: 'the' kestrel
Reply: We have added ‘the’.
" | Here is a paper. Please give your review comments after reading it. |
640 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. Although asthma is one of the most common chronic, noncommunicable diseases worldwide, the pathogenesis of childhood asthma is not yet clear. Genetic factors and environmental factors may lead to airway immune-inflammation responses and an imbalance of airway nerve regulation. The aim of the present study was to determine which serum proteins are differentially expressed between children with or without asthma and to ascertain the potential roles that these differentially expressed proteins (DEPs) may play in the pathogenesis of childhood asthma.</ns0:p><ns0:p>Methods. Serum samples derived from four children with asthma and four children without asthma were collected. The DEPs were identified by using isobaric tags for relative and absolute quantitation (iTRAQ) combined with liquid chromatography tandem mass spectrometry (LC-MS/MS) analyses. Using bioinformatics analyses based on the Gene Ontology (GO), Kyoto Encyclopedia of Genes and Genomes (KEGG), and Cluster of Orthologous Groups of Proteins (COG) databases, we determined the biological processes associated with these DEPs. The key protein glucose-6-phosphate dehydrogenase (G6PD) was verified by enzyme-linked immunosorbent assay (ELISA).</ns0:p><ns0:p>Results. We found 46 DEPs in serum samples of children with asthma vs. children without asthma. Among these DEPs, 12 proteins were significantly (>1.5-fold change) upregulated and 34 proteins were downregulated. The results of GO analyses showed that the DEPs were mainly involved in binding, the immune system, or responding to stimuli or were part of a cellular anatomical entity. In the KEGG signaling pathway analysis, most of the downregulated DEPs were associated with cardiomyopathy, phagosomes, viral infections, and regulation of the actin cytoskeleton. The results of a COG analysis showed that the DEPs were primarily involved in signal transduction mechanisms and posttranslational modifications. These DEPs were associated with and may play important roles in the immune response, the inflammatory response, extracellular matrix degradation, and the nervous system. The downregulation of G6PD in the asthma group was confirmed by ELISA.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion.</ns0:head><ns0:p>After bioinformatics analyses, we found numerous DEPs that may play important roles in the pathogenesis of childhood asthma. Those proteins may be novel biomarkers of childhood asthma and may provide new clues for the early clinical diagnosis and treatment of childhood asthma.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Asthma is one of the most common chronic noncommunicable diseases. According to reports from different countries, the current global prevalence rate of asthma is 1%-18% <ns0:ref type='bibr' target='#b32'>(Neelamegan et al. 2016)</ns0:ref>, threatening approximately 334 million people worldwide. Asthma is characterized by airway hyperresponsiveness, reversible airway obstruction, and chronic airway inflammation.</ns0:p><ns0:p>Symptoms are often recurrent and worsen with time <ns0:ref type='bibr' target='#b37'>(Papi et al. 2018</ns0:ref>). Asthma presents its highest incidence in childhood and affects children's quality of life. Serious cases may lead to death, which brings great economic burdens to families and to health systems <ns0:ref type='bibr' target='#b39'>(Pincheira et al. 2020)</ns0:ref>. Therefore, identifying pathological mechanisms and finding new therapeutic targets for asthma are urgent.</ns0:p><ns0:p>Despite a worldwide presence and economic burden, the pathogenesis of childhood asthma is still unclear. It is currently thought that under the influence of genetic and environmental factors, the mechanisms underlying asthma include inflammatory cells, cytokines, and inflammatory mediators acting on the airway to cause airway inflammation and remodeling, and the imbalance of airway nerve regulation and abnormal structure and function of airway smooth muscle lead to airway hyperresponsiveness and induce asthma <ns0:ref type='bibr' target='#b4'>(Demenais et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b29'>Morales & Duffy 2019;</ns0:ref><ns0:ref type='bibr' target='#b37'>Papi et al. 2018)</ns0:ref>. There are two types of asthma. The first is called eosinophilic asthma and is characterized by an imbalance in the helper T (Th) cell Th1/Th2 ratio. The second type is called non-eosinophilic asthma and is mainly neutrophilic asthma that is controlled by Th17 <ns0:ref type='bibr' target='#b19'>(Lambrecht & Hammad 2015)</ns0:ref>. Children with asthma typically present with eosinophilic asthma and allergy, which easily leads to airway remodeling <ns0:ref type='bibr' target='#b7'>(Hamsten et al. 2016)</ns0:ref>.</ns0:p><ns0:p>PeerJ reviewing PDF | (2020:03:47037:1:1:NEW 30 Jul 2020)</ns0:p></ns0:div>
<ns0:div><ns0:head>Manuscript to be reviewed</ns0:head><ns0:p>In asthma, some specific proteins produced by tissue cells may be secreted into the circulation.</ns0:p><ns0:p>Thus, proteomics may be a useful approach to detect and quantitate such serum proteins and to determine whether they are differentially expressed in asthma and may be potential therapeutic targets. For example, by combining affinity proteomics with the human protein atlas, Hamsten et al. <ns0:ref type='bibr' target='#b7'>(Hamsten et al. 2016</ns0:ref>) discovered that selective chemokine ligand 5 (CCL5), hematopoietic prostaglandin D synthase (HPGDS), and neuropeptide S receptor 1 (NPSR1) were involved in inflammatory reactions and affected the onset of childhood asthma. In addition, Suojalehto et al. <ns0:ref type='bibr' target='#b46'>(Suojalehto et al. 2015)</ns0:ref> identified differentially regulated proteins by using two-dimensional differential gel electrophoresis and mass spectrometry. They determined that fatty acid binding protein 5 (FABP5) was increased in the sputum of patients with allergic asthma and showed the relationship of this protein with airway remodeling and inflammation <ns0:ref type='bibr' target='#b46'>(Suojalehto et al. 2015)</ns0:ref>.</ns0:p><ns0:p>However, among the currently available proteomics methods, isobaric tags for relative and absolute quantitation (iTRAQ) is considered to be most effective <ns0:ref type='bibr' target='#b31'>(Moulder et al. 2018)</ns0:ref>. Therefore, to gain mechanistic insights into the pathogenesis of childhood asthma, the present study used iTRAQ technology combined with liquid chromatography tandem mass spectrometry (LC-MS/MS) to analyze the protein composition and expression levels in serum samples obtain from children with or without asthma. Using bioinformatics analyses, we aimed to determine key proteins that may be (1) used as biological markers or (2) part of critical signaling pathways involved in the development of asthma or (3) useful in determining the prognosis of children with asthma.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Experimental design</ns0:head><ns0:p>The study proceeded according to the flowchart shown in Fig. <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Clinical information and serum sample collection</ns0:head><ns0:p>From September to October 2019, serum samples were collected from eight children who had received no treatment but who had been admitted to the Maternal and Child Health Hospital of Anhui Medical University Affiliated Hospital. No child included in the study had received a diagnosis of an immune disease, chronic kidney disease, or other disease affecting serum proteins.</ns0:p><ns0:p>The included samples were collected from four children with asthma (experimental group; all samples came from children with eosinophilic asthma and were obtained immediately after diagnosis but before drug treatment) and four children without asthma (control group). All clinical diagnoses followed the 2019 Global Initiative for Asthma guidelines. The experiment was approved by the Medical Ethics Committee of Anhui Medical University (approval number 20200284), and the parents or guardians of all participants signed informed consent forms.</ns0:p><ns0:p>On the morning of the second day after the children were admitted to the hospital without drug treatment, blood samples (4 mL) were collected, placed at room temperature (22-25 °C) in the dark for 1 h, and centrifuged at 3000 rpm at 4 °C for 15 min. The supernatant was transferred to a new tube with a pipette and stored at −80 °C until it was used in an experiment.</ns0:p></ns0:div>
<ns0:div><ns0:head>Protein extraction and quality control</ns0:head><ns0:p>A ProteoExtract albumin/IgG removal kit (Merck & Co.) was used to extract the serum samples.</ns0:p><ns0:p>The total amount of protein extracted from each serum sample was more than 400 μg. The protein bands were clear, complete and uniform. The protein was not degraded, and the amount of protein in each sample was sufficient for two or more experiments. The protein solution was prepared, and the Bradford Protein Assay working solution was added. Sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and Coomassie blue staining were performed to evaluate the sample quality after detecting the total amount of protein.</ns0:p></ns0:div>
<ns0:div><ns0:head>Labeling after enzymatic hydrolysis of proteins</ns0:head><ns0:p>We used iTRAQ techniques to label the peptide segments after enzymolysis <ns0:ref type='bibr' target='#b42'>(Sandberg et al. 2012</ns0:ref>).</ns0:p><ns0:p>After protein quantification, 60 μg of the protein solution was placed in a centrifuge tube and 5 μL dithiothreitol solution was added. The mixture remained at 37 °C for 1 h. Next, 20 μL of iodoacetamide solution was added, and the mixture was placed at room temperature in the dark for 1 h. The samples were centrifuged, and the collected supernatant was discarded. The precipitated sediment was twice treated with 100 μL of uric acid buffer (8 M urea, 100 mM Tris-HCl; pH 8.0).</ns0:p><ns0:p>The samples were washed with NH 4 HCO 3 (50 mM, 100 μL) three times, and trypsin (the ratio of protein to enzyme 50:1) was added to the sample in an ultrafiltration tube. Enzymolysis was performed on the samples at 37 °C for 12-16 h. Finally, the samples were labeled and desalted using a C18 cartridge.</ns0:p></ns0:div>
<ns0:div><ns0:head>LC-MS/MS analysis</ns0:head><ns0:p>We identified differentially expressed proteins (DEPs) between the two groups by using an LC-MS/MS Spectrum system <ns0:ref type='bibr' target='#b42'>(Sandberg et al. 2012)</ns0:ref>. After re-dissolving the labeled samples in 40 μL of 0.1% formic acid aqueous solution, we analyzed them using nano-LC-MS/MS. The mobile phases were phase A (2% acetonitrile/0.1% formic acid/98% water) and phase B (80% acetonitrile/0.08% formic acid/20% water). The column was equilibrated with 95% phase A liquid.</ns0:p><ns0:p>The gradient from phase B was adjusted as follows: 0-80 min, linear increase from 0% to 40%; 80-80.1 min, increased to 95%; 80.1-85 min, maintained at 95%; 85-88 min, decreased to 6%.</ns0:p><ns0:p>The separated samples were analyzed using mass spectrometry with a Q Exactive HF-X Mass Spectrometer (Thermo Fisher Scientific). Proteome Discoverer software, version 2.2.0.388, was used to search the UniProt human database in FASTA format. Protein fold changes (FC) of at least 1.5 were obtained, and P < 0.05 was considered a statistically significant difference. A FC >1.5 represented upregulated proteins and a FC <0.667 represented downregulated proteins; a FC between 0.667 and 1.5 indicated no obvious change in protein expression between the two groups <ns0:ref type='bibr' target='#b30'>(Morrissey et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b34'>Oliveira et al. 2020;</ns0:ref><ns0:ref type='bibr' target='#b41'>Rawat et al. 2016)</ns0:ref>.</ns0:p></ns0:div>
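To make the cutoff rule above concrete, here is a minimal sketch (not the authors' Proteome Discoverer workflow) of how a protein could be flagged as up- or downregulated from a fold change and a P value; the protein names and numbers below are invented for illustration.

def classify_dep(fold_change, p_value, fc_up=1.5, fc_down=0.667, alpha=0.05):
    # Apply the thresholds stated in the text: FC > 1.5 and P < 0.05 means upregulated,
    # FC < 0.667 and P < 0.05 means downregulated, otherwise no obvious change.
    if p_value < alpha and fold_change > fc_up:
        return "upregulated"
    if p_value < alpha and fold_change < fc_down:
        return "downregulated"
    return "unchanged"

# Hypothetical example values, not taken from the study's output tables.
for name, fc, p in [("G6PD", 0.45, 0.01), ("IGKV2-40", 2.10, 0.02), ("ALB", 1.10, 0.40)]:
    print(name, classify_dep(fc, p))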
<ns0:div><ns0:head>Gene Ontology (GO), Kyoto Encyclopedia of Genes and Genomes (KEGG), and Cluster of</ns0:head></ns0:div>
<ns0:div><ns0:head>Orthologous Groups of Proteins (COG) Signaling Pathway Analyses</ns0:head><ns0:p>GO is a database that can be applied to various species to define and describe the functions of genes and proteins <ns0:ref type='bibr' target='#b5'>(Fang et al. 2019)</ns0:ref>. The GO database is often used to clarify the roles of eukaryotic genes and proteins in cells. GO is useful for comprehensively describing the attributes of genes and gene products in organisms. GO consists of three domains <ns0:ref type='bibr' target='#b24'>(Liu et al. 2020;</ns0:ref><ns0:ref type='bibr' target='#b50'>Wang et al. 2019a;</ns0:ref><ns0:ref type='bibr' target='#b55'>Xing et al. 2020</ns0:ref>): (1) the cellular component domain contains the descriptions of proteins related to cell composition, which may be a subcellular structure (e.g., endoplasmic reticulum or nucleus) or a protein production component (e.g., ribosome, or proteasome); (2) the PeerJ reviewing <ns0:ref type='table'>PDF | (2020:03:47037:1:1:NEW 30 Jul 2020)</ns0:ref> Manuscript to be reviewed molecular function domain contains descriptions of all proteins related to molecular functions, such as biological activities and operations performed by specific gene products (i.e., molecules or complexes); (3) the biological process domain contains descriptions of all proteins related to biological processes, that is, a series of events in which molecular functions cooperate with one another. The GO function annotation result refers to the statistical number of DEPs detected between the two groups of serum samples in the three GO domains. The GO functional significance enrichment analysis provides the GO functional terms significantly enriched with the DEPs compared with all identified proteins, thus determining which biological functions the DEPs are significantly related to. The GO terms can explain the role of eukaryotic genes and proteins in cells, thus comprehensively describing the attributes of genes and gene products in organisms <ns0:ref type='bibr' target='#b3'>(Cai et al. 2015)</ns0:ref>.</ns0:p><ns0:p>KEGG is a group of databases used to connect a series of genes in the genome with a molecular interaction network in cells to identify biological functions at the genomic and molecular levels.</ns0:p><ns0:p>KEGG contains the signaling pathways of multiple cell processes, such as metabolism, membrane conversion, signal transduction, and the cell cycle <ns0:ref type='bibr' target='#b14'>(Kanehisa & Goto 2000)</ns0:ref>. The results of a KEGG analysis provides insights into the higher biological functions of cells. KEGG analysis can provide the most important biochemical metabolic pathways and signal transduction pathways involved in protein <ns0:ref type='bibr' target='#b57'>(Yang et al. 2018)</ns0:ref>. COG annotates the functions of homologous proteins and includes both the COG database (clusters of homologous proteins from prokaryotes) and the KOG database (clusters of homologous proteins in eukaryotes) <ns0:ref type='bibr' target='#b48'>(Tatusov et al. 1997)</ns0:ref>. COG database can provide the function of differential proteins <ns0:ref type='bibr' target='#b54'>(Wu et al. 2019)</ns0:ref>.</ns0:p><ns0:p>PeerJ reviewing <ns0:ref type='table'>PDF | (2020:03:47037:1:1:NEW 30 Jul 2020)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Enzyme-linked immunosorbent assay (ELISA)</ns0:head><ns0:p>Glucose-6-phosphate dehydrogenase (G6PD), the downregulated protein in the asthma group, was verified using an ELISA kit. The experiment was performed according to the kit instructions. Blood samples were collected from children with or without asthma.</ns0:p></ns0:div>
<ns0:div><ns0:head>Statistical Analysis</ns0:head><ns0:p>A two-tailed Mann-Whitney test was performed with SigmaPlot software. Fisher's exact test was used to compare the categorical variables (gender data only). Values are expressed as means ± SEM. A value of P < 0.05 was considered statistically significant.</ns0:p></ns0:div>
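The same two tests can be reproduced outside SigmaPlot; the sketch below uses SciPy and is only illustrative. The gender counts are those reported in the Participants section, while the age values are placeholders rather than the actual study data.

from scipy.stats import mannwhitneyu, fisher_exact

# Hypothetical ages (years) for the two groups of four children each.
asthma_age = [3.5, 4.0, 4.5, 5.0]
control_age = [3.0, 3.0, 3.5, 4.0]
u_stat, p_age = mannwhitneyu(asthma_age, control_age, alternative="two-sided")

# Gender as a 2x2 table: rows = (asthma, control), columns = (male, female).
odds_ratio, p_gender = fisher_exact([[3, 1], [2, 2]])

print(f"Mann-Whitney P = {p_age:.3f}; Fisher's exact P = {p_gender:.3f}")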
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Participants</ns0:head><ns0:p>Among the participants in the LC-MS/MS experiments, no significant difference in age (asthma : control, means ± SD, 4.0 ± 1.0 vs. 3.3 ± 0.6 years; n = 8, P = 0.5), gender (asthma : control, 3 males and 1 female vs. 2 males and 2 females, P = 0.5) or body mass index (asthma : control, means ± SD, 19.1 ± 0.6 vs. 17.9 ± 0.9 kg/m²; n = 8, P = 0.3) was detected between the experimental group (children with asthma) and the control group (children with trauma but no infection or asthma).</ns0:p></ns0:div>
<ns0:div><ns0:head>SDS-PAGE</ns0:head><ns0:p>Serum samples obtained from children with or without asthma were separated using SDS-PAGE.</ns0:p><ns0:p>Total protein from eight samples was effectively separated without protein degradation within the molecular weight range of 15-220 kDa. The protein levels were sufficient to be used in subsequent experiments (Fig. <ns0:ref type='figure' target='#fig_4'>2</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>LC-MS/MS analysis and identification of DEPs</ns0:head><ns0:p>LC-MS/MS is a powerful tool that enables identification of proteins in a mixed sample. In total, 103 proteins were identified in the serum of children with or without asthma, of which 46 DEPs were detected. As shown in Table <ns0:ref type='table'>1</ns0:ref>, 12 DEPs were upregulated and 34 DEPs were downregulated.</ns0:p><ns0:p>We plotted the magnitude of FC (log 2 FC) on the x-axis and the statistical significance of that change (−log 10 of the P value) on the y-axis to obtain a volcano plot (Fig. <ns0:ref type='figure' target='#fig_6'>3A</ns0:ref>). The cluster analysis for the expression of the DEPs clearly indicated that the expression patterns between children with or without asthma differed and that the protein expression in each group was clustered together (Fig. <ns0:ref type='figure' target='#fig_5'>3B</ns0:ref>). These results suggested that there was a significant difference in the levels of proteins expressed in the serum of children with or without asthma.</ns0:p></ns0:div>
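As a rough illustration of how such a volcano plot is built (this is not the code used to generate Fig. 3, and all values below are invented), the magnitude of change and the significance can be plotted as follows:

import numpy as np
import matplotlib.pyplot as plt

fc = np.array([0.45, 2.10, 1.10, 0.50, 1.80])     # hypothetical fold changes
pvals = np.array([0.01, 0.02, 0.40, 0.03, 0.04])  # hypothetical P values

x = np.log2(fc)          # x-axis: log2 fold change
y = -np.log10(pvals)     # y-axis: -log10 of the P value

# Mark proteins that pass both the fold-change and the P value thresholds.
sig = (pvals < 0.05) & ((fc > 1.5) | (fc < 0.667))
plt.scatter(x[~sig], y[~sig], c="black", label="no significant change")
plt.scatter(x[sig], y[sig], c="orange", label="DEP")
plt.xlabel("log2 FC")
plt.ylabel("-log10 P value")
plt.legend()
plt.show()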
<ns0:div><ns0:head>GO functional annotation and enrichment analysis</ns0:head><ns0:p>Significant enrichment analysis of the GO function refers to a rough understanding of the biological processes in which DEPs are enriched based on the simple annotation of genes, which increases the reliability of research focused on determining pathogenesis. By analyzing the GO functional significance enrichment map of the DEPs between children with or without asthma (Fig. <ns0:ref type='figure' target='#fig_8'>5</ns0:ref>), it was found that the detected DEPs play important roles as cellular anatomical entities and in binding, cellular processes, and responding to stimuli.</ns0:p></ns0:div>
<ns0:div><ns0:head>KEGG metabolic pathway analysis</ns0:head><ns0:p>In organisms, different proteins perform their biological functions in coordination with one another. Analyses based on metabolic pathways are helpful to further understand the biological functions of the DEPs. KEGG is the main public database used to analyze such pathways, and analyses using KEGG can determine the most important biochemical metabolic pathways and signal transduction pathways in which the DEPs participate. Here, KEGG functional annotation analysis was carried out on the DEPs identified in the serum samples from children with or without asthma. The results showed that KEGG pathways annotated with the DEPs included dilated cardiomyopathy (DCM), hypertrophic cardiomyopathy (HCM), phagosomes, regulation of the actin cytoskeleton, focal adhesions, and metabolic pathways (Fig. <ns0:ref type='figure' target='#fig_9'>6</ns0:ref>). The downregulated DEPs associated with DCM/HCM were ITGB1, ITGA2, ACTB, and CACNA2D1. The downregulated DEPs associated with phagosomes were ITGB1, ITGA2, ACTB, and MPO. The downregulated DEPs associated with regulation of the actin cytoskeleton were ITGB1, ITGA2, ACTB, and PFN1.</ns0:p><ns0:p>The downregulated DEPs associated with focal adhesions were ITGB1, ITGA2, TNXB, and ACTB. The downregulated DEPs associated with metabolic pathways were AOC3, ALPL, PKM2, and G6PD. The pathway enrichment analysis is analogous to the GO functional enrichment analysis: it uses the KEGG pathway as a unit and applies hypergeometric testing to determine the pathways significantly enriched with DEPs compared with all identified proteins. The most important biochemical metabolic pathways and signal transduction pathways of the DEPs can be determined by pathway enrichment analysis (Fig. <ns0:ref type='figure' target='#fig_10'>7</ns0:ref>). The present analysis showed that most of the detected downregulated proteins were concentrated in pathways such as focal adhesion, the MAPK signaling pathway, platelet activation and the Rap1 signaling pathway.</ns0:p></ns0:div>
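The hypergeometric test mentioned above can be written down directly. The sketch below asks how surprising it is that k of the 46 DEPs fall in one pathway, given the pathway's size within the 103 identified proteins; the pathway size (8) and overlap (4) are assumed numbers for illustration, not values taken from the KEGG output.

from scipy.stats import hypergeom

N_all = 103   # all identified proteins (background set)
K_path = 8    # identified proteins annotated to the pathway (assumed)
n_dep = 46    # differentially expressed proteins
k_hit = 4     # DEPs annotated to the pathway (assumed)

# P(X >= k_hit): chance of seeing at least this much overlap by random draws.
p_enrichment = hypergeom.sf(k_hit - 1, N_all, K_path, n_dep)
print(f"enrichment P = {p_enrichment:.4f}")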
<ns0:div><ns0:head>COG protein functional analysis</ns0:head><ns0:p>COG is a database for orthologous classification of proteins. It is assumed that the group of proteins constituting each COG are all derived from the same ancestral protein and are divided into orthologs and paralogs. Orthologs refer to proteins evolved from vertical families (speciation) of different species and typically retain the same functions as the original proteins. Paralogs refer to proteins derived from gene replication in certain species that may evolve new functions related to the original functions. The DEPs between children with or without asthma were analyzed using the COG database (Fig. <ns0:ref type='figure'>8</ns0:ref>). We predicted the possible functions of these proteins and generated the functional classification statistics. The results showed that the functions of these DEPs were mainly concentrated in signal transduction mechanisms; posttranslational modification, protein turnover, and chaperones; amino acid transport and metabolism; cytoskeleton; and extracellular structures.</ns0:p></ns0:div>
<ns0:div><ns0:head>G6PD protein concentration in serum</ns0:head></ns0:div>
<ns0:div><ns0:p>To confirm the DEP result from the LC-MS/MS experiment, we performed an ELISA and chose the key protein G6PD, which has been reported by other groups to be decreased in asthma <ns0:ref type='bibr' target='#b9'>(Hirasawa et al. 2003)</ns0:ref>. Our data also showed a significant reduction in the asthma group compared to the control group (Fig. <ns0:ref type='figure'>9</ns0:ref>). Therefore, the result is consistent with the data from the LC-MS/MS experiment.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The pathological mechanisms underlying asthma are still not clear; however, the application of proteomics may be valuable for finding some new clues. Recently, several studies also used proteomics to try to find biomarkers in asthma. Gharib et al. utilized shortgun proteomics method to identify protein expression pattern in adult induced sputum samples <ns0:ref type='bibr' target='#b6'>(Gharib et al. 2011</ns0:ref>); Bhowmik et al. used quantitative label-free liquid chromatography-tandem mass spectrometry method to find that apolipoprotein E (ApoE) is significantly downregulated but interleukin 33 (IL-33) is significantly upregulated in serum samples from adult atopic asthma compared to healthy control <ns0:ref type='bibr' target='#b0'>(Bhowmik et al. 2019)</ns0:ref>; Moreover, helped by method of iTRAQ combined with LC-MS/MS, Liu et al. reported DEPs in serum samples from children patients with controlled, partly controlled, or uncontrolled childhood asthma <ns0:ref type='bibr' target='#b25'>(Liu et al. 2017)</ns0:ref>. However, in our study, we used iTRAQ method to identify the DEPs in the serum of children with asthma vs. those without asthma.</ns0:p><ns0:p>Asthma is an allergic response, that is, an individual's own immune system overreacts. In a previous study, it was found that when the human airway is exposed to invading pathogens, the congenital immune process is rapidly induced <ns0:ref type='bibr' target='#b20'>(Lebold et al. 2016</ns0:ref>). The congenital immune system is the first line of defense in the human immune system <ns0:ref type='bibr' target='#b58'>(Yin et al. 2020)</ns0:ref>. When the body is infected by foreign agents, inflammatory reactions will occur first <ns0:ref type='bibr' target='#b15'>(Kimbrell & Beutler 2001)</ns0:ref>. These reactions can produce a variety of chemical factors, including cytokines <ns0:ref type='bibr' target='#b18'>(Kzhyshkowska & Bugert 2016)</ns0:ref>, to recruit immune cells (e.g., macrophages, neutrophils) to infected or inflamed tissues to kill or inhibit the growth of pathogens through phagocytosis and other actions to prevent the spread of infection. Inflammatory reactions can also promote the healing of injured tissues. Another defensive reaction to invading pathogens is the complement system, which helps or complements the antibody itself to remove or to label antigenic substances to control pathogens <ns0:ref type='bibr' target='#b8'>(Hato & Pierre 2015)</ns0:ref>. Complement component 1s (C1s) is involved in the complement and coagulation cascade in the metabolic pathway of the KEGG database. In our GO analysis, we found that several proteins that are involved in complement activation (namely, IGHV3-74, IGLC7, and IGKV1-16) were differentially expressed between children with or without asthma. These mass spectrometry results are consistent with results from a previous study <ns0:ref type='bibr' target='#b8'>(Hato & Pierre 2015)</ns0:ref>. However, we also detected DEPs that have not been reported in previous studies on childhood asthma, including IGKV2-40, IGHV3-74, IGKV1-27, IGKV1-16, CTSG, HSPA1A, SERPINB1, LTF, IGLC7, CTSG, ADAMTS13, NCAM2, TNXB, ACTB, and CNTN1. These findings may indicate that when infection causes a rapid congenital immune response and airway inflammation, airway remodeling will be induced, leading to the onset or exacerbation of childhood asthma <ns0:ref type='bibr' target='#b17'>(Krusche et al. 
2019;</ns0:ref><ns0:ref type='bibr' target='#b13'>Johnston et al. 1995)</ns0:ref>. In addition, inflammation is divided into local manifestations and systemic reactions. Small trauma may cause local inflammation. When local lesions are serious, especially when pathogenic microorganisms spread in the body, obvious systemic reactions often occur. The children in the control group had only minor local trauma which did not develop systemic inflammation. However, the inflammatory reaction in asthmatic children prefers a systemic inflammation.</ns0:p><ns0:p>PeerJ reviewing PDF | (2020:03:47037:1:1:NEW 30 Jul 2020)</ns0:p><ns0:p>Manuscript to be reviewed By analyzing DEPs through GO terms or KEGG pathway analysis, we found that the upregulated proteins IGKV2-40, IGHV3-74, IGKV1-27, and IGKV1-16 were involved in the innate immune process. Most of the downregulated proteins were involved in the congenital immune system and neutrophil degranulation, including CTSG, MPO, IFNa2, HSPA1A, SERPINB1, C1S, and LTF.</ns0:p><ns0:p>Among them, IFNa2 is produced after the body recognizes many pathogens and damage-related molecules released by infected or dead cells and is a key component of the innate immune response <ns0:ref type='bibr' target='#b38'>(Paul et al. 2015)</ns0:ref>. HSPA1A can exert immune functions through fusion with the membrane, endocytosis, autophagy, and interaction with ligands <ns0:ref type='bibr' target='#b1'>(Bilog et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b36'>Oliverio et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b43'>Sangaphunchai et al. 2020)</ns0:ref>. SERPINB1 is a neutrophil elastase inhibitor that plays an important role in regulating cell activity, inflammatory response, and cell migration <ns0:ref type='bibr' target='#b49'>(Torriglia et al. 2017)</ns0:ref>.</ns0:p><ns0:p>Therefore, our findings provided several new potential protein targets in the immune system for treatment of children with asthma.</ns0:p><ns0:p>Among the upregulated proteins, the GO terms or KEGG pathways analysis showed that the DEPs Manuscript to be reviewed the degradation of the extracellular matrix decreases, which will aggravate airway remodeling <ns0:ref type='bibr' target='#b23'>(Linden et al. 2005)</ns0:ref>. Other group <ns0:ref type='bibr' target='#b9'>(Hirasawa et al. 2003)</ns0:ref> found that MAPK and activated protein kinases are related to viral replication. Lacking G6PD will increase the products of cellular reactive oxygen species, which strengthens those kinase pathways to facilitate viral replication, thus inducing or aggravating asthma. AOC3 is commonly referred to as vascular adhesion protein-1 (VAP-1) and is expressed in lymphocyte endothelial interactions. As a new marker of myofibroblasts, AOC3 may play a role in pulmonary fibrosis and thus induce asthma <ns0:ref type='bibr' target='#b11'>(Hsia et al. 2016)</ns0:ref>. The imbalance in the expression levels of LRP1 in fibroblasts of healing tissues may lead to unlimited expansion of contractile fibroblasts, thus causing or aggravating pulmonary fibrosis and participating in the pathogenesis of asthma <ns0:ref type='bibr' target='#b45'>(Schnieder et al. 2019)</ns0:ref>. Neural cell adhesion molecule 2 (NCAM2) protein has been shown to reduce inflammation in Alzheimer's disease <ns0:ref type='bibr' target='#b40'>(Rasmussen et al. 2018)</ns0:ref>. 
CNTN1, which is also a cell adhesion protein, can promote the invasion of prostate cancer cells, enhance Akt activation, and reduce the expression of epithelial cadherin in cancer cells <ns0:ref type='bibr' target='#b56'>(Yan et al. 2016)</ns0:ref>. Some studies have found that DPP4 plays an important role in angiogenesis and growth, immune response, cell proliferation, fibrin degradation, cytokine production, signal transduction, and other physiological and pathological processes of the body <ns0:ref type='bibr' target='#b26'>(Liu et al. 2015;</ns0:ref><ns0:ref type='bibr' target='#b27'>Mark 2005;</ns0:ref><ns0:ref type='bibr' target='#b33'>Ohnuma et al. 2015;</ns0:ref><ns0:ref type='bibr' target='#b47'>Ta et al. 2010</ns0:ref>). In addition, DPP4 is widely distributed in lung tissue and participates in pulmonary inflammation and the formation of pulmonary surfactant <ns0:ref type='bibr' target='#b28'>(Mentlein 1999;</ns0:ref><ns0:ref type='bibr' target='#b44'>Schmiedl et al. 2014)</ns0:ref>. We speculate that when a pathogenic infection exists in the lungs of children, the downregulation of DPP4 will lead to an immune imbalance and inflammatory cascade reaction to affect the formation of lung substances, thus inducing asthma. However, our findings and speculation should be confirmed in future studies.</ns0:p><ns0:p>PeerJ reviewing PDF | (2020:03:47037:1:1:NEW 30 Jul 2020)</ns0:p></ns0:div>
<ns0:div><ns0:head>Manuscript to be reviewed</ns0:head><ns0:p>Other DEPs detected in our KEGG analysis also play important roles in cellular physiological and pathological activities and may be part of mechanisms underlying childhood asthma. Among the upregulated proteins, APP participates in the following pathways: neuron growth, adhesion, axon generation, cell migration, regulation of protein movement, cell apoptosis, and combination with extracellular matrix components. Some studies have also found that APP plays an important role in the pathogenesis of Alzheimer's disease and is overexpressed in cancer cells, such as in nasopharyngeal carcinoma <ns0:ref type='bibr' target='#b21'>(Li et al. 2019a;</ns0:ref><ns0:ref type='bibr' target='#b22'>Li et al. 2019b</ns0:ref>). Another upregulated protein, PIP, a pathway annotated in KEGG, has a regulatory effect on natural immune cells. Some studies have found that PIP can specifically degrade fibronectin to help hosts resist infection and play a role in the migration, adhesion, and invasion of cancer cells <ns0:ref type='bibr' target='#b12'>(Ihedioha et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b16'>Kitano et al. 2006;</ns0:ref><ns0:ref type='bibr' target='#b52'>Wang et al. 2019b</ns0:ref>).</ns0:p><ns0:p>Among the downregulated proteins detected through our GO term or KEGG pathway analysis, CACNA2D1 is involved in the macrophage CCR5 pathway; ITGB1 is involved in the adaptive immune process, cell adhesion, blood-brain barrier, immune cell migration, and cytoskeleton signaling process; and LTF is involved in amyloid fiber formation. Thus, through KEGG analyses, we have provided insights into the potential mechanisms involved in the development of childhood asthma.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>We used iTRAQ combined with LC-MS/MS technologies to identify DEPs in serum samples between children with or without asthma. The present study provided additional evidence consistent with the role of the inflammation-immune mechanism in the pathogenesis of childhood asthma, and offered new potential biomarkers for childhood asthma that may be helpful for its early diagnosis. The analysis of signaling pathways provided numerous key proteins that may be involved in the development of childhood asthma and that may be new targets in the study of asthma in the future.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>GO functional annotation results of the DEPs between children with or without asthma are shown in Fig. 4. For the cellular component domain, 37 DEPs were mainly concentrated in the cellular anatomical term, among which the top 3 upregulated proteins were IGKV2-40, IGHV3-74, and IGKV1-27. For the molecular function domain, the role of 38 DEPs was mainly in binding, among which the top 3 upregulated proteins were IGKV2-40, IGHV3-74, and IGKV1-27. For the biological process domain, the DEPS were primarily involved in cellular processes (31 DEPs) and responding to stimuli (30 DEPs), with the top 3 upregulated proteins being IGKV2-40, IGHV3-74, and V1-19. PeerJ reviewing PDF | (2020:03:47037:1:1:NEW 30 Jul 2020)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>participate in immunoglobulin production (IGKV2-40, IGKV1-27, and IGKV1-16), receptormediated endocytosis (IGKV2-40, IGLC7, and IGKV1-16), phagocytosis, recognition and engulfment (IGHV3-74, IGLC7), and complement activation (IGHV3-74, IGLC7, and IGKV1-16). The downregulated proteins are involved in extracellular matrix degradation (CTSG, TNXB, ELANE, and COL10A1), metabolism of various substances (CTSG, ADAMTS13, DPP4, G6PD, APOB, ACAN, AOC3, and LRP1), and cell adhesion and focal adhesion (NCAM2, TNXB, ACTB, CNTN1, ITGB1, and ELANE). During airway remodeling, epithelial cell exfoliation, goblet cell proliferation, vascular proliferation, extracellular matrix deposition, and hypertrophy of smooth muscle cells are involved. When the expression levels of ELANE are downregulated, PeerJ reviewing PDF | (2020:03:47037:1:1:NEW 30 Jul 2020)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1 Flowchart</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>( A )</ns0:head><ns0:label>A</ns0:label><ns0:figDesc>In this volcano plot, yellow dots represent proteins with a significant fold change (FC) >1.5; blue-green dots, proteins with a significant FC <0.667; black dots, proteins with no significant change. (B) In a heatmap, the upregulation and downregulation of different proteins are observed by cluster analysis. Each line in the figure represents a protein, each column is a sample (A1-4: the asthmatic group, B1-4: the control group), and the colors represent different expression levels (the log2 value of the quantitative value is obtained and a median correction is carried out during drawing of the heatmap).</ns0:figDesc></ns0:figure>
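A small sketch of the preprocessing described in this legend, assuming a matrix of quantitative values with one row per protein and one column per sample (the numbers are invented): take the log2 of the values and subtract each row's median ("median correction") before drawing the heatmap.

import numpy as np

# Rows: proteins; columns: samples A1, A2, B1, B2 (hypothetical quantitative values).
quant = np.array([[120.0, 130.0, 60.0, 55.0],
                  [ 30.0,  28.0, 80.0, 90.0]])

log_vals = np.log2(quant)
# Median correction: center each protein on its own median across samples.
centered = log_vals - np.median(log_vals, axis=1, keepdims=True)
print(np.round(centered, 2))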
<ns0:figure xml:id='fig_7'><ns0:head>Figure 4 Gene</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 5 Gene</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 6 Kyoto</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 7 Kyoto</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='37,42.52,255.37,525.00,349.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='38,42.52,280.87,525.00,435.00' type='bitmap' /></ns0:figure>
</ns0:body>
" | "Point-to-point Response
General response: We kindly thank reviewers for their constructive suggestions. In response to reviewers' comments, we have followed their suggestions to perform additional experiments to address their concerns and revise the manuscript carefully. We believe that this version of the manuscript is largely improved.
Referee's comments:
Reviewer #1
Basic reporting:
1. Figure 3B. The author should indicate which group “B1-4” and “A1-4” belong to?
Answer: Thank you very much for careful check. A1-4 is the asthmatic group, B1-4 is the control group (without asthma). We added the information into the text. Please take a look at line 598-602.
2. Figure 4. The text in the picture is too small. Please increase the font size.
Answer: Thanks for your suggestion. We modified the figures size in this version to ensure that the font was clearly seen.
3. All Figure. The vector image exported by R language can be modified with drawing software (Canvas or AI) again. Please choose the appropriate font size for the convenience of readers. In addition, author should keep the horizontal and vertical ratio in the picture zooms in and out.
Answer: Thank you very much for your suggestions. We modified the pictures in this version.
Experimental design:
1. “Experimental design & Validity of the findings” In general, the research idea of this manuscript is novelty. However, the biggest defect is the lack of sample quantity, which makes it difficult to judge whether the conclusion is universal or not. If the authors increase each group of samples to 10, where possible, the results should be more applicable. However, considering the high cost of iTRAQ technology, I recommend that the author select 1 or 2 from the 37 DEPs for ELISA verification, and expand the samples to about 50+, which may significantly improve the impact of this study.
Answer: Thank you very much for your suggestions. This is indeed an important issue in our study. We agree that more data would be more reliable. However, because, as you said, the iTRAQ experiment is very expensive and clinical samples are very limited, we are unable to use more serum samples for mass spectrometry. Compared with other proteomics methods, iTRAQ technology has the advantages of high sensitivity, accuracy and repeatability (Li et al. 2020; Ma et al. 2020; Xu et al. 2020; Zhu et al. 2020). To respond to your concerns, we collected more samples and further used an ELISA to confirm that the key protein glucose-6-phosphate dehydrogenase (G6PD) was significantly decreased in the asthma group, which is consistent with the iTRAQ result. We added these new data to this version to support our conclusion. We recollected 16 serum samples. To verify the low level of G6PD protein in serum from children with asthma, we used ELISA to measure it again. The data showed a significant difference between the asthma group and the normal group (asthma : control, means ± SD, 26.6 ± 23.1 vs. 103.1 ± 49.6 ng/L, n = 16, P < 0.05). The result is consistent with the result of mass spectrometry. We added the data into the text in this version. Please take a look at lines 283-288 and Fig. 9.
2. “Materials & Methods – Clinical information and serum sample collection” Line 103-112. “The included samples were collected from four children with asthma (experimental group) and four children without asthma (control group).” When grouping, did the authors take into account factors such as age, gender, and body mass index (BMI) of patients in the balance between the two groups? Because the physiological factors mentioned above also have a certain influence on serum protein, considering the small sample size of this study, these factors must be matched to exclude the relevant influence, please explain in the article. P.S. I saw that the author showed no difference in age and gender between the two groups in the results section. Please analyze BMI also.
Answer: Thank you for your suggestion. We agree with your viewpoint and have added more information to the text. No significant difference in age (asthma : control, means ± SD, 4.0 ± 1.0 vs. 3.3 ± 0.6 years; n = 8, P = 0.5), gender (asthma : control, 3 males and 1 female vs. 2 males and 2 females, P = 0.5) or BMI (asthma : control, means ± SD, 19.1 ± 0.6 vs. 17.9 ± 0.9 kg/m²; n = 8, P = 0.3) was detected between the control and asthma groups. Please take a look at the Results section, lines 205-210.
3. “Materials & Methods – LC-MS/MS analysis” Line 147-150. “Protein fold changes (FC) of at least 1.5 were obtained, and P < 0.05 was considered a statistically significant difference. A” Generally, in high-throughput analysis, the threshold value of FC is defined at 2. Why the author chooses 1.5, please clarify the reasons and references.
Answer: Thank you for your question. We used a fold-change (FC) cutoff of 1.5 because it yields more differential proteins. An FC cutoff of 1.5 has also been used in many studies (Morrissey et al. 2013; Oliveira et al. 2020; Rawat et al. 2016). Therefore, we followed these studies and used an FC of 1.5 to provide timely new information to the readership. We added the references to the Methods section. Please take a look at line 160.
4. “Table 1” Page 45. The data showed that G6PD was one of the most significantly down-regulated proteins. This is a very interesting finding. As an important gene related to G6PD deficiency disease, the detection of G6PD activity should be widely used in children's hospitals and general hospitals in mainland China. Can the author make a retrospective study, collect and analyze relevant data, and observe whether the activity of G6PD is down-regulated in the blood of children with asthma? The research value of this paper can be further enhanced by appropriately expanding the sample size.
Answer: Many thanks for your constructive suggestions. We followed your advice and used ELISA to confirm the G6PD concentration again, and indeed found that the G6PD concentration was largely reduced in the asthma group compared to the control group. The data were added to the text. However, due to time limitations, we are unable to provide data for a retrospective study in the short term. It is really an important issue, and we will investigate it further in the future. Please take a look at the Results section, lines 283-288, and Fig. 9.
5. Through PubMed searching, the keywords are 'asthma' and 'iTRAQ', and I find that there are very few relevant studies. There is a lack of research on the detection of serum in children with asthma, so I think the authors should try to add as many samples as possible.
Answer: Thank you very much for your suggestion. This is indeed an important issue in our study. We agree that more data would be more reliable. However, because the iTRAQ experiment is very expensive and clinical samples are very limited, we are unable to use more serum samples for mass spectrometry. Considering the reliability of the experimental results, we collected more samples and used ELISA to confirm the G6PD concentration, and indeed found that the G6PD concentration was largely reduced in the asthma group compared to the control group. Therefore, we believe that our iTRAQ result is reliable.
Reviewer #2
Comments for the Author:
1. Line 84-86 “……selective chemokine ligand 5, hematopoietic prostaglandin D synthase, and neuropeptide S receptor 1 were involved……” line 88 “……fatty acid binding protein 5……” and line 272 “Complement component 1s……” Do all these descriptions refer to specific genes? Please mark their official symbol accurately.
Answer: Thank you very much for your valuable suggestion. We added this information into the text. The corresponding official symbols are listed below.
Protein name / official symbol:
selective chemokine ligand 5: CCL5
hematopoietic prostaglandin D synthase: HPGDS
neuropeptide S receptor 1: NPSR1
fatty acid binding protein 5: FABP5
Complement component 1s: C1s
2. Line 269 mentions “Inflammatory reactions can also promote the healing of injured tissues.” In the results of the manuscript, lines 184-185, the description of the control group is “children with trauma ……” Will the trauma here affect the experimental results? Maybe some discussion needs to be added.
Answer: Thanks for your question. Inflammation is divided into local manifestations and systemic reactions. Small trauma may cause local inflammation. When local lesions are serious, especially when pathogenic microorganisms spread in the body, obvious systemic reactions often occur. The children in the control group had only minor local trauma that did not develop into systemic inflammation. In contrast, the inflammatory reaction in asthmatic children tends to be systemic. We added this explanation to the Discussion section. Please take a look at lines 324-328.
3. Figure 3A, please accurately describe or set the color of the volcano plot. The description in figure legends “red dots” and “green dots” should be “yellow dots” and “blue-green dots” respectively.
Answer: Sorry for our mistake. We modified this description in this version. Please take a look at lines 596-597.
4. Figure 8 need to adjust because some have exceeded the maximum value of the Y-axis.
Answer: Sorry for our mistake. We modified Figure 8 in this version.
5. Although “……Children with asthma typically present with eosinophilic asthma and allergy……” (line 77), the pathology type of children with asthma should be provided so that readers can clearly understand the possible impact on the experimental results.
Answer: Thank you very much for your comments. This is indeed an important issue in our study. Many children with asthma have eosinophilic asthma. Therefore, we chose eosinophilic asthma as the focus of our study. We added this information into the text. Please check lines 113-115.
6. Line 149 “……>1.5 represented upregulated proteins and a FC <0.667 represented downregulated proteins……” however, the description in Figure 3 is “FC> 1.5” and “FC <1.5” as a significant change. And the volcano uses log10FC, and the heatmap uses log2FC, please explain clearly.
Answer: Sorry for our mistake. In this volcano plot (Figure 3), yellow dots represent proteins with a significant fold change (FC) >1.5; blue-green dots, proteins with a significant FC <0.667; black dots, proteins with no significant change. There is no substantive difference between using log10FC or log2FC for plotting. In the modified version, log2FC is used for both the volcano plot and the heatmap. Please take a look at line 222.
Reviewer #3
Basic reporting:
1. Several studies that tried to identify differentially expressed proteins of asthma have been published (as listed below) and one also focused on childhood asthma. Could the authors explain the difference and significance regarding the design and results of their study compared to those studies?
1) Pilot-Scale Study Of Human Plasma Proteomics Identifies ApoE And IL33 As Markers In Atopic Asthma. J Asthma Allergy. doi: 10.2147/JAA.S211569.
2) Screening Serum Differential Proteins for Childhood Asthma at Different Control Levels by Isobaric Tags for Relative and Absolute Quantification-based Proteomic Technology. Zhongguo Yi Xue Ke Xue Yuan Xue Bao. DOI: 10.3881/j.issn.1000-503X.2017.06.014
3) Induced Sputum Proteome in Healthy Subjects and Asthmatic Patients. J Allergy Clin Immunol. doi: 10.1016/j.jaci.2011.07.053.
Answer: Thank you very much for your questions. Recently, several studies have also used proteomics to try to find biomarkers in asthma. Gharib et al. used a shotgun proteomics method to identify protein expression patterns in induced sputum samples from adults (Gharib et al. 2011); Bhowmik et al. used a quantitative label-free liquid chromatography-tandem mass spectrometry method to find that ApoE is significantly downregulated and IL33 is significantly upregulated in serum samples from adults with atopic asthma compared to healthy controls (Bhowmik et al. 2019); moreover, using iTRAQ combined with LC-MS/MS, Liu et al. reported DEPs in serum samples from pediatric patients with controlled, partly controlled, or uncontrolled childhood asthma (Liu et al. 2017). In our study, we used the iTRAQ method to identify the DEPs in the serum of children with asthma vs. those without asthma. Therefore, our study is completely different from these studies and may provide new information for asthma research. We also provide comparison tables below for the reviewer's reference. We added this explanation to the text. Please take a look at the Discussion section, lines 292-302.
Comparison 1) Our study vs. study 1) "Pilot-Scale Study Of Human Plasma Proteomics Identifies ApoE And IL33 As Markers In Atopic Asthma" (Bhowmik et al. 2019):
Sample collection: Our study: serum samples (4 ml) from four children with asthma and four children without asthma. Study 1): venous blood samples (10 mL) were collected from healthy subjects (n=5), atopic asthma patients (n=5), and COPD patients (n=5).
Validation of LC-MS/MS results: Our study: ELISA was performed to validate the outcome of LC-MS/MS (added in this version). Study 1): Western blotting was performed to validate the outcome of LC-MS/MS.
Results: Our study: ApoB (down) and so on. Study 1): ApoE (down); IL33 (up).

Comparison 2) Our study vs. study 2) "Screening Serum Differential Proteins for Childhood Asthma at Different Control Levels by Isobaric Tags for Relative and Absolute Quantification-based Proteomic Technology" (Liu et al. 2017):
Sample collection: Our study: serum samples (4 ml) from four children with asthma and four children without asthma. Study 2): serum samples from pediatric patients with controlled (n=15), partly controlled (n=15), or uncontrolled (n=15) childhood asthma.
Fold change: Our study: a FC > 1.5 represented upregulated proteins and a FC < 0.667 represented downregulated proteins. Study 2): 57 differentially expressed proteins were found among the different control levels of childhood asthma (fold < 0.8 or fold > 1.2).
Bioinformatics analysis: Our study: GO; KEGG; COG. Study 2): GO; KEGG.
Statistical analysis: Our study: paired samples T test; Fisher's exact test. Study 2): Tukey post-hoc test (GraphPad Prism 5).

Comparison 3) Our study vs. study 3) "Induced Sputum Proteome in Healthy Subjects and Asthmatic Patients" (Gharib et al. 2011):
Sample composition: Our study: serum samples. Study 3): sputum samples.
Sample collection: Our study: four children with asthma and four children without asthma. Study 3): 5 normal individuals and 10 asthmatics, including 5 with exercise-induced bronchoconstriction.
Bioinformatics analysis: Our study: GO; KEGG; COG. Study 3): GO; shotgun proteomics analysis.
Validation of LC-MS/MS results: Our study: ELISA was performed to validate the outcome of LC-MS/MS (added in this version). Study 3): Western blots were conducted to measure differences in the levels of SERPINA1, SCGB1A1, SMR3B, C3a, and HPX in induced sputum supernatant.
Result: the differential proteins identified in the two studies are different.
2. Some of the references are old, and the authors should also refer to studies of top journals in the field.
Answer: Thank you very much for your constructive suggestion. We added some up-to-date references into our manuscript.
Added references:
Bhowmik M, Majumdar S, Dasgupta A, Bhattacharya S, and Saha S. 2019. Pilot-Scale Study Of Human Plasma Proteomics Identifies ApoE And IL33 As Markers In Atopic Asthma. Journal of Asthma and Allergy:273–283. 10.2147/JAA.S211569
Cai A, Qi S, Su Z, Shen H, Yang Y, He L, and Dai Y. 2015. Quantitative Proteomic Analysis of Peripheral Blood Mononuclear Cells in Ankylosing Spondylitis by iTRAQ. Clinical & Translational Science 8:579–583. 10.1111/cts.12265
Gharib S, Nguyen E, Lai Y, Plampin J, Goodlett D, and Hallstrand T. 2011. Induced Sputum Proteome in Health and Asthma. J Allergy Clin Immunol 128(6):1176–1184. 10.1016/j.jaci.2011.07.053
Liu J, Wang Y, Wang Y, Zhu J, and Dao F. 2017. Screening Serum Differential Proteins for Childhood Asthma at Different Control Levels by Isobaric Tags for Relative and Absolute Quantification-based Proteomic Technology. Acta Academiae Medicinae Sinicae 39:817-826. 10.3881/j.issn.1000-503X.2017.06.014
Wu W, Lin X, Wang C, Ke J, Wang L, and Liu H. 2019. Transcriptome of white shrimp Litopenaeus vannamei induced with rapamycin reveals the role of autophagy in shrimp immunity. Fish Shellfish Immunol 86:1009-1018. 10.1016/j.fsi.2018.12.039
Yang Y, Fu X, Qu W, Xiao Y, and Shen H. 2018. MiRGOFS: a GO-based functional similarity measurement for miRNAs, with applications to the prediction of miRNA subcellular localization and miRNA–disease association. Bioinformatics. 10.1093/bioinformatics/bty343
Experimental design
1. The authors should describe the amount of protein they extracted from the blood sample as well as the amount they applied in each tests respectively.
Answer: Thanks for your suggestion. The total amount of protein extracted from each serum sample was more than 400 μg. The protein bands were clear, complete and uniform. The protein was not degraded, and the amount of protein in each sample was sufficient for two or more experiments. Please take a look at lines 126-128.
2. The statistical analysis applied in this study should be mentioned in method.
Answer: Thank you for your valuable suggestion. We added statistical analysis in this version. Two-tailed Mann-Whitney test was performed with SigmaPlot software. The chi-square test by Fisher’s exact test was used to compare the categorical variables (only gender data). Values are expressed as means ± SEM. A value of P < 0.05 was considered statistically significant. Please take a look at line 199-202.
3. In method, further explanations in terms of how the analyses of GO, KEGG, COG were carried out together with their meanings in the study should be described, instead of only focusing on their definitions.
Answer: Thank you very much for your constructive suggestions. The GO terms can explain the role of eukaryotic genes and proteins in cells, thus comprehensively describing the attributes of genes and gene products in organisms (Cai et al. 2015). KEGG analysis can identify the most important biochemical metabolic pathways and signal transduction pathways in which proteins are involved (Yang et al. 2018). The COG database can provide the function of differential proteins (Wu et al. 2019). We added this information to the Methods section. Please take a look at lines 179-181, 187-189, and 192.
Comments for the Author
1. Serum samples derived from four children with asthma and four children without asthma were collected in this study, which is quite a small sample size compared with other similar studies. Could the author give their reason or evidence to support that this sample size is sufficient enough?
Answer: Thank you very much for raising this point. We agree that more data would be more reliable. However, because the iTRAQ experiment is very expensive and clinical samples are very limited, we are unable to use more serum samples for mass spectrometry. To address this issue, we further used an ELISA to demonstrate that G6PD, one of the DEPs, was decreased in the asthma group, which is consistent with our iTRAQ result. We added these new data to this version to support our finding. Please take a look at lines 283-288 and Fig. 9.
2. For those four children with asthma, were they hospitalized due to asthma exacerbation, or just with a history of chronic asthma? Furthermore, since their blood was collected the other day of their hospitalization, they were supposed to be treated with antiasthmatic drugs. Did the authors notice the impact of antiasthmatic drugs on DEPs, and the difference of DEPs during asthma exacerbation, which the authors should try to explain?
Answer: Thank you very much for your questions. Sample homogeneity and the effect of antiasthmatic drugs are indeed important. Therefore, to avoid these problems, we obtained the serum samples immediately after diagnosis but before drug treatment. We added this information at lines 114-115.
2. Should the authors explain the reasons and evidence why they chose to analyze the blood of the patients instead of sputum or samples derived from bronchoalveolar lavage?
Answer: Thank you very much for your questions. It is true that sputum or bronchoalveolar lavage fluid (BALF) samples may be more direct for finding some biomarkers. However, it is difficult to obtain sputum or BALF samples from very young children. Therefore, we chose blood samples for our study.
References
Cai A, Qi S, Su Z, Shen H, Yang Y, He L, and Dai Y. 2015. Quantitative Proteomic Analysis of Peripheral Blood Mononuclear Cells in Ankylosing Spondylitis by iTRAQ. Clinical & Translational Science 8:579–583. 10.1111/cts.12265
Li F, Xu D, Wang J, Jing J, Li Z, and Jin X. 2020. Comparative proteomics analysis of patients with quick development and slow development Chronic Obstructive Pulmonary Disease (COPD). Life Sciences. 10.1016/j.lfs.2020.117829
Ma S, Zheng J, Xu Y, Yang Z, Zhu Y, Su X, and Mo X. 2020. Identified plasma proteins related to vascular structure are associated with coarctation of the aorta in children. Ital J Pediatr 46:63. 10.1186/s13052-020-00830-7
Morrissey B, O' Shea C, Armstrong J, Rooney C, Staunton L, Sheehan M, Shannon A, and Pennington S. 2013. Development of a label-free LC-MS/MS strategy to approach the identification of candidate protein biomarkers of disease recurrence in prostate cancer patients in a clinical trial of combined hormone and radiation therapy. 10.1002/prca.201300004
Oliveira T, Lacerda J, Leite G, Dias M, Mendes M, Kassab P, Silva C, Juliano M, and Forones N. 2020. Label-free peptide quantification coupled with in silico mapping of proteases for identification of potential serum biomarkers in gastric adenocarcinoma patients. Clinical Biochemistry. 10.1016/j.clinbiochem.2020.02.010
Rawat P, Bathla S, Baithalu R, Yadav M, Kumar S, Ali S, Tiwari A, Lotfan M, Naru J, Jena M, Behere P, Balhara A, Vashisth R, Singh I, Dang A, Kaushik J, Mohanty T, and Mohanty A. 2016. Identification of potential protein biomarkers for early detection of pregnancy in cow urine using 2D DIGE and label free quantitation. Clinical Proteomics 13:15. 10.1186/s12014-016-9116-y
Wu W, Lin X, Wang C, Ke J, Wang L, and Liu H. 2019. Transcriptome of white shrimp Litopenaeus vannamei induced with rapamycin reveals the role of autophagy in shrimp immunity. Fish Shellfish Immunol 86:1009-1018. 10.1016/j.fsi.2018.12.039
Xu C, Su X, Chen Y, Xu Y, Wang Z, and Mo X. 2020. Proteomics analysis of plasma protein changes in patent ductus arteriosus patients. Ital J Pediatr 46:64. 10.1186/s13052-020-00831-6
Yang Y, Fu X, Qu W, Xiao Y, and Shen H. 2018. MiRGOFS: a GO-based functional similarity measurement for miRNAs, with applications to the prediction of miRNA subcellular localization and miRNA–disease association. Bioinformatics. 10.1093/bioinformatics/bty343
Zhu Y, Yun Y, Jin M, Li G, Li H, Miao P, Ding X, Feng X, Xu L, and Sun B. 2020. Identification of novel biomarkers for neonatal hypoxic-ischemic encephalopathy using iTRAQ. Ital J Pediatr 46:67. 10.1186/s13052-020-00822-7
" | Here is a paper. Please give your review comments after reading it. |
642 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. Clostridioides difficile infection (CDI) is the most common cause of hospital-acquired diarrhea. There is little available data regarding risk factors of CDI for patients who undergo cardiac surgery. The study evaluated the course of CDI in patients after cardiac surgery. Methods. Of 6,198 patients studied, 70 (1.1%) developed CDI. The control group consisted of 73 patients in whom CDI was excluded. Perioperative data and clinical outcomes were analyzed.</ns0:p><ns0:p>Results. Patients with CDI were significantly older in comparison to the control group (median age 73.0 vs 67.0, P = 0.005) and more frequently received proton pump inhibitors, statins, β-blockers and acetylsalicylic acid before surgery (P = 0.008, P = 0.012, P = 0.004, and P = 0.001, respectively). In addition, the presence of atherosclerosis, coronary disease and history of malignant neoplasms correlated positively with the development of CDI (P = 0.012, P = 0.036 and P = 0.05, respectively). There were no differences in the type or timing of surgery, aortic cross-clamp and cardiopulmonary bypass time, volume of postoperative drainage and administration of blood products between the studied groups. Relapse was more common among overweight patients with high postoperative plasma glucose or patients with higher C-reactive protein during the first episode of CDI, as well as those with a history of coronary disease or diabetes mellitus (P = 0.005, P = 0.030, P = 0.009, P = 0.049, and P = 0.025, respectively).</ns0:p><ns0:p>Fifteen patients died (21.4%) from the CDI group and 7 (9.6%) from the control group (P = 0.050). Emergent procedures, prolonged stay in the intensive care unit, longer mechanical ventilation and high white blood cell count during the diarrhea were associated with higher mortality among patients with CDI (P = 0.05, P = 0.041, P = 0.004 and P = 0.007, respectively).</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Clostridioides difficile (CD) is widely spread in the human environment and present in about 7 to 18% of the adult population <ns0:ref type='bibr' target='#b12'>(Donskey, Kundrapu & Deshpande, 2015)</ns0:ref>. CD infection (CDI) is the most common cause of hospital-acquired diarrhea and may follow a severe course with many complications, which can include fatal colitis <ns0:ref type='bibr' target='#b7'>(Cunha, 1998;</ns0:ref><ns0:ref type='bibr' target='#b36'>Ricciardi et al., 2007)</ns0:ref>. Despite increased efforts to prevent this infection, the incidence and severity of nosocomial CDI has continued to grow worldwide <ns0:ref type='bibr' target='#b36'>(Ricciardi et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b3'>Clements et al., 2010)</ns0:ref>.</ns0:p><ns0:p>Known risk factors for CDI include advanced age, female gender, use of broad-spectrum antibiotics, use of proton pump inhibitors (PPI), chronic comorbidities, immunocompromised states and prolonged, multiple hospital stays <ns0:ref type='bibr' target='#b34'>(McFarland, Surawicz & Stamm, 1990;</ns0:ref><ns0:ref type='bibr' target='#b13'>Eze et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b8'>De Roo & Regenbogen, 2020;</ns0:ref><ns0:ref type='bibr'>Furuya-Kanamori et al., 2015a)</ns0:ref>. Patients who undergo surgery present additional risks for CDI associated with catheter-related infections, prolonged mechanical ventilation, extensive blood product usage, indwelling catheter drainage and open cavities <ns0:ref type='bibr' target='#b21'>(Gelijns et al., 2014)</ns0:ref>.</ns0:p><ns0:p>There is little available data regarding risk factors for CDI among patients who undergo cardiac surgery. There have only been a few reports investigating the risk of CDI in patients after heart procedures <ns0:ref type='bibr' target='#b21'>(Gelijns et al., 2014;</ns0:ref><ns0:ref type='bibr'>Vondran et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b15'>Flagg et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b28'>Kirkwood et al., 2018)</ns0:ref>. This prompted us to evaluate the prevalence of hospital-acquired CDI after cardiac surgery, identify patient characteristics and detect risk factors for CDI.
Moreover, we assessed the course of the disease and the final outcomes for this group of patients.</ns0:p></ns0:div>
<ns0:div><ns0:head>MATERIALS & METHODS</ns0:head></ns0:div>
<ns0:div><ns0:head>Patients</ns0:head><ns0:p>Between January 2014 and December 2016, a total of 6,198 adult patients underwent cardiac surgery in our hospital. Seventy of the patients were diagnosed with CDI. The control group was comprised of 73 patients for whom CDI had been excluded and this group was matched to the group of CDI patients by the date of surgery.</ns0:p><ns0:p>Demographics, comorbidities, type and timing of cardiac surgery, operative characteristics, perioperative antibiotic use, exposure to known risk factors for CDI and in-hospital mortality were collected retrospectively. Additionally, length of hospitalization until the onset of diarrhea, severity and recurrence of the disease, methods of treatment and seasonal distribution of CDI were obtained. CDI was suspected in each patient who experienced diarrhea (defined as the passage of three or more unformed stools per day). CDI was defined as a combination of symptoms and signs of the disease and confirmed by microbiological evidence of toxin-producing CD in the patients' stools <ns0:ref type='bibr' target='#b9'>(Debast, Bauer & Kuijper, 2014)</ns0:ref>. Stool samples were analyzed using the rapid enzyme immunoassays test, C. Diff Quik Chek Complete test (Techlab, Orlando, USA).</ns0:p><ns0:p>Non-severe CDI was defined by a white blood cell (WBC) count of ≤ 15,000 cells/mL and a serum creatinine level < 1.5 mg/dL. Severe CDI was specified by a WBC count of ≥ 15,000 cells/mL or a serum creatinine level >1.5 mg/dL. Criteria for fulminant CDI included occurrence of hypotension or shock, ileus or megacolon <ns0:ref type='bibr'>(McDonald et al., 2018)</ns0:ref>.</ns0:p><ns0:p>The treatment of CDI was consistent with the 2010 recommendations <ns0:ref type='bibr'>(Cohen et al., 2010)</ns0:ref>.</ns0:p><ns0:p>Metronidazole was the drug of choice for an initial episode of non-severe CDI and vancomycin was the drug of choice for an initial episode of severe CDI. Combination therapy with oral or rectal vancomycin and intravenously administered metronidazole was the regimen of choice for the treatment of severe, complicated or fulminant CDI.</ns0:p><ns0:p>Recurrence of the disease was identified as a relapse within 8 weeks after the onset of a previous episode <ns0:ref type='bibr' target='#b9'>(Debast, Bauer & Kuijper, 2014)</ns0:ref>. Stress hyperglycemia was defined as one or more blood sugar measurements > 180 mg/dL during the first postoperative 24 hours <ns0:ref type='bibr' target='#b21'>(Gelijns et al., 2014)</ns0:ref>. In-hospital mortality was specified as death occurring during the same hospitalization stay as the cardiac surgery.</ns0:p><ns0:p>Each patient received periprocedural antimicrobial prophylaxis (most often the first generation of cephalosporin). Routine laboratory variables were determined using standard laboratory techniques. Study protocol was approved by the local Research Ethics Committee (Andrzej Frycz Modrzewski Krakow University, Krakow, Poland 10/2019). Verbal consent of patients was acquired.</ns0:p></ns0:div>
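As a purely illustrative aid (not part of the study's analysis pipeline), the case definitions above can be encoded as a small Python helper; the function and variable names are our own hypothetical choices, and the units follow the text (WBC in cells/mL, creatinine in mg/dL, glucose in mg/dL).

```python
def classify_cdi_severity(wbc, creatinine, hypotension_or_shock=False, ileus_or_megacolon=False):
    """Classify an episode per the definitions used in this study (illustrative sketch only)."""
    if hypotension_or_shock or ileus_or_megacolon:
        return "fulminant"            # hypotension/shock, ileus or megacolon
    if wbc >= 15000 or creatinine > 1.5:
        return "severe"               # WBC >= 15,000 cells/mL or creatinine > 1.5 mg/dL
    return "non-severe"               # WBC <= 15,000 cells/mL and creatinine < 1.5 mg/dL

def stress_hyperglycemia(glucose_first_24h):
    """True if one or more measurements during the first postoperative 24 h exceed 180 mg/dL."""
    return any(g > 180 for g in glucose_first_24h)

print(classify_cdi_severity(wbc=16_200, creatinine=1.1))   # -> severe
print(stress_hyperglycemia([142, 195, 160]))               # -> True
```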
<ns0:div><ns0:head>Statistical analysis</ns0:head><ns0:p>Categorical variables were described as numbers and percentages.</ns0:p><ns0:p>Continuous variables were presented as mean (± standard deviation) or median and quartiles, as appropriate. Normality was assessed using the Shapiro-Wilk test. Equality of variances was assessed using Levene's test. Differences between groups were compared using the Student's or Welch's t-test, depending on the equality of variances, for normally distributed variables. The Mann-Whitney U test was used for non-normally distributed continuous variables. Nominal variables were compared by Pearson's chi-square test, or by Fisher's exact test if at least 20% of cells had an expected count of less than 5. Significance was accepted at P ≤ 0.05. Statistical analyses were performed with JMP®, Version 14.2.0 (SAS Institute Inc., Cary, NC, USA) and with R, Version 3.4.1 (R Core Team. R: A language and environment for statistical computing. R Foundation for Statistical Computing. Vienna, Austria, 2017. https://www.r-project.org/).</ns0:p></ns0:div>
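Purely as an illustration of the decision logic described above (and not the authors' actual JMP/R code), the same workflow could be sketched in Python with SciPy; the alpha level, function names and the handling of the expected-count rule are assumptions made for demonstration.

```python
from scipy import stats

ALPHA = 0.05  # significance threshold stated in the text (P <= 0.05)

def compare_continuous(x, y):
    """Shapiro-Wilk for normality, Levene for equal variances, then Student's/Welch's t or Mann-Whitney U."""
    normal = stats.shapiro(x).pvalue > ALPHA and stats.shapiro(y).pvalue > ALPHA
    if normal:
        equal_var = stats.levene(x, y).pvalue > ALPHA
        return stats.ttest_ind(x, y, equal_var=equal_var)      # Student's (equal variances) or Welch's t-test
    return stats.mannwhitneyu(x, y, alternative="two-sided")    # non-normally distributed variables

def compare_categorical(table):
    """Pearson chi-square, falling back to Fisher's exact test (2x2 tables) when expected counts are small."""
    chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)
    small_cells = (expected < 5).mean() >= 0.20
    if small_cells and len(table) == 2 and len(table[0]) == 2:
        return stats.fisher_exact(table)
    return chi2, p
```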
<ns0:div><ns0:head>RESULTS</ns0:head></ns0:div>
<ns0:div><ns0:head>Baseline characteristics</ns0:head><ns0:p>Of the 6,198 patients, 70 (1.1%) developed CDI. Patients with CDI were significantly older in comparison to the control group (median age 73.0 vs 67.0, P = 0.005). There was no correlation between gender and incidence of CDI (P = 0.595). The European System for Cardiac Operative Risk Evaluation (EuroSCORE) values were higher in patients with CDI (P < 0.001). Patients with CDI more often received PPI, statins, β-blockers and acetylsalicylic acid before surgery (P = 0.008, P = 0.012, P = 0.004, and P = 0.001, respectively). In addition, the presence of atherosclerosis, coronary disease, and history of malignant neoplasms correlated positively with the development of CDI (P = 0.012, P = 0.036 and P = 0.05, respectively). Patients in the CDI group were hospitalized more often during the six months prior to the surgery (P = 0.001).</ns0:p><ns0:p>Mean preoperative hospitalization time in the cardiac surgery ward was 1.5 ± 0.2 days. Other baseline variables were comparable among groups (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Perioperative characteristics</ns0:head><ns0:p>The most common surgical procedures performed before CDI were valvular heart surgery and coronary artery bypass grafting (41.4% and 35.7%, respectively). There were no differences between the studied groups as far as the type or timing of surgery, aortic cross-clamp and cardiopulmonary bypass time, volume of postoperative drainage, administration of blood products, value of postoperative ejection fraction and frequency of reoperations. Patients with CDI more frequently received additional antibiotics (P = 0.014). During the early postoperative course patients with CDI had a significantly higher glucose level and were exposed more frequently to stress hyperglycemia (P < 0.001 for both comparisons). During the preoperative period, as well as after surgery, patients with CDI had a significantly lower WBC count (P = 0.007 for both comparisons). Other intra- and postoperative variables were similar in both groups (Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Course of Clostridioides difficile infection</ns0:head><ns0:p>The type of antibiotic therapy used before the first episode of CDI, median times of the disease diagnosis and severity of the infection are shown in Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>. Five patients in whom fulminant CDI developed, underwent emergency laparotomy and two patients died due to extensive multiple organ failure.</ns0:p><ns0:p>All patients with CDI were treated with oral metronidazole, oral vancomycin or both (intravenous metronidazole with oral vancomycin). Fidaxomicin was not used in our department during the study period (Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>). Two patients underwent a fecal microbiota transplant during recurrence of the disease.</ns0:p><ns0:p>Although baseline and early postoperative WBC count were significantly lower in patients with CDI, during the course of the disease , WBC count was similar among analyzed groups (P = 0.139). In contrast, C-reactive protein (CRP) was higher in the CDI group during this period (P < 0.001).</ns0:p><ns0:p>There were no differences in the incidence rate of CDI between the analyzed years although a seasonal pattern was observed. Most cases occurred in March (n=14, 20%) and the least number of cases were found in January (n=2, 3%), (P = 0.038 between March and rest of the months) (Figure <ns0:ref type='figure' target='#fig_5'>1</ns0:ref>).</ns0:p><ns0:p>In 9 patients (12.9%) CDI recurred during hospitalization. Mean time of relapse was 27.7 ± 11.8 days after the first episode of the disease. Recurrent CDI was more common in overweight patients having high plasma glucose just after surgery or a higher CRP level during the first episode of the disease as well as for those with a history of coronary disease or diabetes mellitus (P = 0.005, P = 0.030, P = 0.009, P = 0.049, and P = 0.025, respectively), (Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>).</ns0:p><ns0:p>Fifteen patients (21.4%) died from the CDI group and 7 (9.6%) from the control group (P = 0.050). The median number of days between CDI diagnosis and death was 14.0 [4.0;25.0].</ns0:p><ns0:p>Emergent procedures, prolonged stay in the intensive care unit, longer mechanical ventilation and high WBC count during the diarrhea were associated with higher mortality in patients with CDI (P = 0.05, P = 0.041, P = 0.004 and P = 0.007, respectively), (Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>).</ns0:p></ns0:div>
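As an aside for readers interested in reproducing the seasonal comparison (March accounted for 14 of the 70 cases), one plausible implementation of the month-versus-rest chi-square with the Benjamini-Hochberg correction mentioned in the authors' response to the reviewers is sketched below; the uniform monthly expectation and all function names are assumptions, not the exact analysis code.

```python
from scipy import stats

def month_vs_rest(cases_in_month, total_cases, n_months=12):
    """Goodness-of-fit chi-square: one month versus the remaining months pooled, assuming a uniform monthly expectation."""
    observed = [cases_in_month, total_cases - cases_in_month]
    expected = [total_cases / n_months, total_cases * (n_months - 1) / n_months]
    return stats.chisquare(observed, f_exp=expected).pvalue

def benjamini_hochberg(pvals):
    """Benjamini-Hochberg adjusted p-values (step-up procedure) across the twelve monthly comparisons."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted, running = [0.0] * m, 1.0
    for rank in range(m - 1, -1, -1):            # walk from the largest p-value down
        i = order[rank]
        running = min(running, pvals[i] * m / (rank + 1))
        adjusted[i] = running
    return adjusted

march_p = month_vs_rest(cases_in_month=14, total_cases=70)   # counts taken from the text
```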
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>The study showed that cardiac surgery related factors such as a type and timing of surgery, aortic cross-clamp and cardiopulmonary bypass time, volume of postoperative drainage, administration of blood products and value of postoperative ejection fraction were not correlated with the risk of CDI.</ns0:p><ns0:p>Interestingly, in our study, patients with CDI had a lower level of WBC count both before and after surgery in comparison to the control group. It is known that WBC, especially neutrophils, play an important role in the immune response against CD toxin A ( <ns0:ref type='bibr'>Kelly et al., 1994)</ns0:ref>, therefore, patients with a low WBC count have a higher risk of acquiring CDI <ns0:ref type='bibr' target='#b22'>(Gorschlüter et al., 2001)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>Unfortunately, in our study, neutrophil level was not determined, only overall WBC count was analyzed.</ns0:p><ns0:p>In contrast to other studies, our results did not show that diabetes mellitus was a risk factor for CDI <ns0:ref type='bibr'>(Furuya-Kanamori et al., 2015a)</ns0:ref>. However, we did demonstrate that patients with high glucose levels and stress hyperglycemia during the early postoperative period were at greater risk for development of CDI. This finding is consistent with the results of a study by <ns0:ref type='bibr'>Gelijns et al.,</ns0:ref> who demonstrated the association of hyperglycemia with CDI <ns0:ref type='bibr' target='#b21'>(Gelijns et al., 2014)</ns0:ref>. Similarly, <ns0:ref type='bibr'>Kirkwood et al.</ns0:ref> showed that postoperative hyperglycemia was associated with an increased risk of CDI <ns0:ref type='bibr' target='#b28'>(Kirkwood et al., 2018)</ns0:ref>. Therefore, prevention and treatment of hyperglycemia after cardiac surgery should be taken into consideration.</ns0:p><ns0:p>In our study, other risk factors for CDI were similar to those from non-cardiac surgery reports <ns0:ref type='bibr' target='#b1'>(Belton, Litofsky & Humphries, 2019)</ns0:ref>. Our findings confirm the results of many studies that older age is an independent risk factor for CDI <ns0:ref type='bibr' target='#b34'>(McFarland, Surawicz & Stamm, 1990;</ns0:ref><ns0:ref type='bibr' target='#b8'>De Roo & Regenbogen, 2020;</ns0:ref><ns0:ref type='bibr' target='#b15'>Flagg et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b1'>Belton, Litofsky & Humphries, 2019)</ns0:ref>. Unlike other reports, we did not find a correlation between female gender and development of CDI <ns0:ref type='bibr' target='#b15'>(Flagg et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b19'>Ge et al., 2018)</ns0:ref>. Some studies suggest an association between PPI and CDI risk whereas others do not confirm this correlation <ns0:ref type='bibr' target='#b34'>(McFarland, Surawicz & Stamm, 1990;</ns0:ref><ns0:ref type='bibr' target='#b13'>Eze et al., 2017;</ns0:ref><ns0:ref type='bibr'>Furuya-Kanamori et al., 2015a;</ns0:ref><ns0:ref type='bibr' target='#b14'>Faleck et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b29'>Maes, Fixen & Linnebur, 2017)</ns0:ref>. The exact mechanism of proliferation of CD in patients using PPI remains unclear <ns0:ref type='bibr' target='#b29'>(Maes, Fixen & Linnebur, 2017)</ns0:ref>. In our study, patients with CDI significantly more often used PPI. The necessity of PPI use should be carefully evaluated for hospital patients, especially those already receiving antibiotics.</ns0:p><ns0:p>Patients with CDI more frequently have chronic illnesses <ns0:ref type='bibr' target='#b13'>(Eze et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b8'>De Roo & Regenbogen, 2020;</ns0:ref><ns0:ref type='bibr'>Furuya-Kanamori et al., 2015a;</ns0:ref><ns0:ref type='bibr' target='#b28'>Kirkwood et al., 2018)</ns0:ref>. In our study atherosclerosis, ischemic heart disease and history of malignant neoplasms were correlated with CDI. Furthermore, patients with CDI significantly more often received statins, β-blockers and acetylsalicylic acid. This correlation could be explained by the fact that these drugs are used to treat the comorbidities that are associated with development of CDI. The correlation between EuroSCORE and CDI could be explained similarly. 
Time of hospital stay is an important risk factor for the development of CDI <ns0:ref type='bibr' target='#b15'>(Flagg et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b2'>Chalmers et al., 2016)</ns0:ref>. Spores can remain in the hospital environment for several months and are difficult to remove with traditional disinfectants <ns0:ref type='bibr' target='#b0'>(Barbut, 2015)</ns0:ref>. We also showed the significant role of hospitalization time in the risk of CDI.</ns0:p><ns0:p>Besides periprocedural antimicrobial prophylaxis, some patients received an additional antibiotic due to accompanying infections, and these patients had greater chance of contracting CDI. In our study cefazolin, ceftriaxone and fluoroquinolone were the most frequently used antibiotics (Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>). The relationship between antibiotic treatment and the risk of CDI has also been demonstrated in other studies <ns0:ref type='bibr' target='#b34'>(McFarland, Surawicz & Stamm, 1990;</ns0:ref><ns0:ref type='bibr' target='#b13'>Eze et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b8'>De Roo & Regenbogen, 2020;</ns0:ref><ns0:ref type='bibr'>Furuya-Kanamori et al., 2015a;</ns0:ref><ns0:ref type='bibr' target='#b28'>Kirkwood et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b19'>Ge et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b25'>Kazakova et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b23'>Jachowicz et al., 2020)</ns0:ref>. It should be remembered that any antibiotic may be the cause of CDI, even those used to treat CDI <ns0:ref type='bibr'>(McDonald et al., 2018)</ns0:ref>. Appropriate antibiotic management can help to reduce the risk of postoperative CDI.</ns0:p><ns0:p>In this study, a large number of patients received metronidazole for CDI treatment (Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>). Our analysis was performed between 2014 and 2016 and treatment of CDI was consistent with the 2010 recommendations <ns0:ref type='bibr'>(Cohen et al., 2010)</ns0:ref>. Current guidelines confirm that metronidazole has a lower efficacy compared with vancomycin and they support the use of vancomycin over metronidazole in <ns0:ref type='bibr'>CDI (McDonald et al., 2018)</ns0:ref>.</ns0:p><ns0:p>In our study the most cases of CDI occurred in March. This result is comparable with other studies that have shown that CDI has a similar seasonal pattern characterized by a peak in spring and lower frequencies in summer and autumn months <ns0:ref type='bibr'>(Furuya-Kanamori et al., 2015b)</ns0:ref>. <ns0:ref type='bibr'>Jachowicz et al.</ns0:ref> showed an additional peak of healthcare associated CDI occurring from October to December <ns0:ref type='bibr' target='#b23'>(Jachowicz et al., 2020)</ns0:ref>. The mechanisms responsible for the seasonality of CDI remain poorly understood, although it has been proposed that the observed seasonality is associated with a higher incidence of respiratory infections, which leads to an intensified use of antibiotics during winter and spring months.</ns0:p><ns0:p>One of the most challenging aspects for patients with CDI is the recurrence of the disease after successful initial therapy is completed, which has been observed in between 15 to 35% of patients <ns0:ref type='bibr' target='#b8'>(De Roo & Regenbogen, 2020;</ns0:ref><ns0:ref type='bibr' target='#b11'>Dharbhamulla et al., 2019)</ns0:ref>. We observed a relapse in about 13% of patients with CDI. 
It is possible that this discrepancy was the result of different times of observation. In our study, we assessed relapses that occurred only during the same hospitalization stay as the surgery. The causes of recurrent CDI are also unclear <ns0:ref type='bibr' target='#b11'>(Dharbhamulla et al., 2019)</ns0:ref>. In our study, the risk of recurrent CDI significantly increased for patients with diabetes mellitus or ischemic heart disease and for those with higher BMI or higher glucose level on the day of surgery. Similar to Predrag et al., we also revealed an association of relapses with high CRP during the first episode of CDI <ns0:ref type='bibr' target='#b35'>(Predrag et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Data on mortality for patients who acquired CDI after surgery vary from 2.5 to 27.7% and are much higher than the mean in-hospital mortality after cardiac surgery (1-4%) <ns0:ref type='bibr'>(Vondran et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b15'>Flagg et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b30'>Mazzeffi et al., 2014)</ns0:ref>. Our result of a 21% mortality rate validates these findings. We showed that patients who had emergent procedures, prolonged stay in the intensive care unit, longer mechanical ventilation or a high WBC count during an episode of diarrhea have a higher risk of death. These findings are consistent with other results <ns0:ref type='bibr' target='#b36'>(Ricciardi et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b16'>Furuya-Kanamori et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b21'>Gelijns et al., 2014;</ns0:ref><ns0:ref type='bibr'>Vondran et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b15'>Flagg et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b9'>Debast, Bauer & Kuijper, 2014)</ns0:ref>. In our study, prolonged hospitalization time before surgery was a risk factor for CDI but correlated with lower mortality. Longer hospital stay could help to ensure that patients enter elective surgery in the best condition possible in contrast to emergent procedures.</ns0:p></ns0:div>
<ns0:div><ns0:head>Limitations</ns0:head><ns0:p>This study has several limitations. It was a retrospective study based on available data obtained during patients' cardiac procedure hospital stay. Therefore, the true incidence of CDI could be higher due to a lack of information regarding potential post-discharge diagnosis of the disease.</ns0:p><ns0:p>The size of the study group was limited and patients were very heterogeneous. It is probable that analysis of patients with one type of cardiac surgery may provide different results.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>In conclusion, this study did not reveal any specific cardiac surgery-related risk factors for development of CDI. Also, frequency of relapse and mortality rate were similar to non-cardiac surgery. Studies in larger cohorts are needed to confirm these findings. Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>acquired diarrhea. There is little available data regarding risk factors of CDI for patients who undergo cardiac surgery. The study evaluated the course of CDI in patients after cardiac surgery. Methods. Of 6,198 patients studied, 70 (1.1%) developed CDI. The control group consisted of 73 patients in whom CDI was excluded. Perioperative data and clinical outcomes were analyzed.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>ACE: angiotensin-converting enzyme; BMI: body mass index; COPD: chronic obstructive pulmonary disease; LVEF: left ventricular ejection fraction; PPI: proton pump inhibitors; RBC: red blood cells; WBC: white blood cells.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>of the CDI diagnosis (days) Between hospital admission and CDI diagnosis 12.0 [6.2-30.5] Between the surgery and CDI diagnosis 9.0 [5.0-27.2] Length of ICU stay before the CDI diagnosis 4.0 [2.0-7.0] Assisted ventilation before the CDI diagnosis 1.0 [1.0-2.0] Time of hospitalization after CDI diagnosis 11 presented as median (interquartile range). Categorical variables are presented as number (percentage).* Some patients used more than one antibiotic, therefore the percentage sum does not equal 100%.CDI: Clostridioides difficile infection; ICU: intensive care unit.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Continuous variables are presented as median (interquartile range). Categorical variables are presented as number (percentage). (C): Pearson's chi-square test; (M): Mann-Whitney U test. ACE: angiotensin-converting enzyme; BMI: body mass index; CABG: coronary artery bypass grafting; CDI: Clostridioides difficile infection; COPD: chronic obstructive pulmonary disease; CPB: cardiopulmonary bypass; CRP: C-reactive protein; IABP: intra-aortic balloon pump; LVEF: left ventricular ejection fraction; PPI: proton pump inhibitors; WBC: white blood cells; ICU: intensive care unit; VHS: valvular heart surgery; VAC: Vacuum-assisted closure.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 1 ANEKS</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Demographics and preoperative data. Continuous variables are presented as median (interquartile range). Categorical variables are presented as</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Variable</ns0:cell></ns0:row></ns0:table><ns0:note>1</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 (on next page)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Intraoperative and postoperative data.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Continuous variables are presented as median (interquartile range). Categorical variables are</ns0:cell></ns0:row><ns0:row><ns0:cell>presented as number (percentage). CABG: coronary artery bypass grafting; CPB:</ns0:cell></ns0:row><ns0:row><ns0:cell>cardiopulmonary bypass; IABP: intra-aortic balloon pump; LVEF: left ventricular ejection</ns0:cell></ns0:row><ns0:row><ns0:cell>fraction; RBC: red blood cells; VHS: valvular heart surgery; VAC: Vacuum-assisted closure;</ns0:cell></ns0:row><ns0:row><ns0:cell>WBC: white blood cells.</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ reviewing PDF | (2020:05:49555:1:1:NEW 24 Aug 2020)Manuscript to be reviewed 1</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Intraoperative and postoperative data. Continuous variables are presented as median (interquartile range). Categorical variables are presented as 3 number (percentage). (C): Pearson's chi-square test; (M): Mann-Whitney U test.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Variable</ns0:cell><ns0:cell>Patients with CDI</ns0:cell><ns0:cell>Patients without CDI</ns0:cell><ns0:cell>P-value</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(n=70)</ns0:cell><ns0:cell>(n=73)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Type of surgery, n (%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>0.239</ns0:cell></ns0:row><ns0:row><ns0:cell>CABG</ns0:cell><ns0:cell>25 (35.7)</ns0:cell><ns0:cell>21 (28.8)</ns0:cell><ns0:cell>0.374</ns0:cell></ns0:row><ns0:row><ns0:cell>VHS</ns0:cell><ns0:cell>29 (41.4)</ns0:cell><ns0:cell>34 (46.6)</ns0:cell><ns0:cell>0.535</ns0:cell></ns0:row><ns0:row><ns0:cell>CABG+VHS</ns0:cell><ns0:cell>7 (10.0)</ns0:cell><ns0:cell>6 (8.2)</ns0:cell><ns0:cell>0.711</ns0:cell></ns0:row><ns0:row><ns0:cell>Aortic surgery</ns0:cell><ns0:cell>6 (8.6)</ns0:cell><ns0:cell>12 (16.4)</ns0:cell><ns0:cell>0.156</ns0:cell></ns0:row><ns0:row><ns0:cell>CABG+aortic surgery</ns0:cell><ns0:cell>3 (4.3)</ns0:cell><ns0:cell>0 (0.0)</ns0:cell><ns0:cell>0.115</ns0:cell></ns0:row><ns0:row><ns0:cell>Reoperation, n (%)</ns0:cell><ns0:cell>5 (7.1)</ns0:cell><ns0:cell>7 (9.6)</ns0:cell><ns0:cell>0.821</ns0:cell></ns0:row><ns0:row><ns0:cell>Timing of surgery, n (%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>0.217</ns0:cell></ns0:row><ns0:row><ns0:cell>Elective</ns0:cell><ns0:cell>41 (58.6)</ns0:cell><ns0:cell>51 (69.9)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Emergent</ns0:cell><ns0:cell>29 (41.4)</ns0:cell><ns0:cell>22 (30.1)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Additional antibiotic, n (%)</ns0:cell><ns0:cell>46 (65.7)</ns0:cell><ns0:cell>32 (43.8)</ns0:cell><ns0:cell>0.014 (C)</ns0:cell></ns0:row><ns0:row><ns0:cell>LVEF after surgery (%)</ns0:cell><ns0:cell>45.0 [40.0-50.0]</ns0:cell><ns0:cell>45.0 [35.0-55.0]</ns0:cell><ns0:cell>0.839</ns0:cell></ns0:row><ns0:row><ns0:cell>LVEF < 30%, n (%)</ns0:cell><ns0:cell>5 (7.1)</ns0:cell><ns0:cell>10 (13.7)</ns0:cell><ns0:cell>0.201</ns0:cell></ns0:row><ns0:row><ns0:cell>Aortic cross-clamp time (min)</ns0:cell><ns0:cell>65.0 [36.2-89.5]</ns0:cell><ns0:cell>69.0 [49.0-92.0]</ns0:cell><ns0:cell>0.517</ns0:cell></ns0:row><ns0:row><ns0:cell>CPB time (min)</ns0:cell><ns0:cell>104.5 [74.0-154.5]</ns0:cell><ns0:cell>125.0 [85.0-165.0]</ns0:cell><ns0:cell>0.156</ns0:cell></ns0:row><ns0:row><ns0:cell>Postoperative drainage (ml/first 24 h)</ns0:cell><ns0:cell>530.0 [332.5-940.0]</ns0:cell><ns0:cell>520.0 [380.0-810.0]</ns0:cell><ns0:cell>0.981</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Postoperative drainage > 1000 ml/first 24 h, n (%) 17 (24.3)</ns0:cell><ns0:cell>12 (16.4)</ns0:cell><ns0:cell>0.243</ns0:cell></ns0:row><ns0:row><ns0:cell>Inotropic agents, n (%)</ns0:cell><ns0:cell>48 (68.6)</ns0:cell><ns0:cell>53 (72.6)</ns0:cell><ns0:cell>0.730</ns0:cell></ns0:row><ns0:row><ns0:cell>IABP after surgery, n (%)</ns0:cell><ns0:cell>2 (2.9)</ns0:cell><ns0:cell>2 (2.7)</ns0:cell><ns0:cell>1.000</ns0:cell></ns0:row><ns0:row><ns0:cell>Accompanying infections, n (%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Wound infection</ns0:cell><ns0:cell>13 (18.6)</ns0:cell><ns0:cell>13 
(17.8)</ns0:cell><ns0:cell>1.000</ns0:cell></ns0:row><ns0:row><ns0:cell>VAC therapy</ns0:cell><ns0:cell>11 (15.7)</ns0:cell><ns0:cell>5 (6.8)</ns0:cell><ns0:cell>0.157</ns0:cell></ns0:row><ns0:row><ns0:cell>Positive blood cultures</ns0:cell><ns0:cell>10 (14.3)</ns0:cell><ns0:cell>15 (20.5)</ns0:cell><ns0:cell>0.444</ns0:cell></ns0:row><ns0:row><ns0:cell>Transfusion, n (%) Red blood cells ≥2 units Plasma ≥2 units Platelets ≥1 unit</ns0:cell><ns0:cell>43 (61.4) 23 (32.9) 21 (30.0)</ns0:cell><ns0:cell>41 (56.2) 23 (31.5) 24 (32.9)</ns0:cell><ns0:cell>0.639 1.000 0.849</ns0:cell></ns0:row><ns0:row><ns0:cell>Laboratory parameters</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>WBC (x10 3 /µL)</ns0:cell><ns0:cell>9.7 [7.0-13.6]</ns0:cell><ns0:cell>12.9 [9.3-15.6]</ns0:cell><ns0:cell>0.007 (M)</ns0:cell></ns0:row><ns0:row><ns0:cell>WBC > 13,000 µL, n (%)</ns0:cell><ns0:cell>21 (30.0)</ns0:cell><ns0:cell>36 (49.3)</ns0:cell><ns0:cell>0.018 (C)</ns0:cell></ns0:row><ns0:row><ns0:cell>Platelets (x10 3 /µL)</ns0:cell><ns0:cell>122.5 [86.5-154.0]</ns0:cell><ns0:cell>130.0 [93.0-168.0]</ns0:cell><ns0:cell>0.261</ns0:cell></ns0:row><ns0:row><ns0:cell>Platelets < 100,000 µL, n (%)</ns0:cell><ns0:cell>23 (32.9)</ns0:cell><ns0:cell>22 (30.1)</ns0:cell><ns0:cell>0.726</ns0:cell></ns0:row><ns0:row><ns0:cell>Hematocrit (%)</ns0:cell><ns0:cell>28.6 [26.8-30.9]</ns0:cell><ns0:cell>29.4 [27.0-32.4]</ns0:cell><ns0:cell>0.104</ns0:cell></ns0:row><ns0:row><ns0:cell>Hemoglobin (g/dL)</ns0:cell><ns0:cell>9.6 [8.8-10.5]</ns0:cell><ns0:cell>9.8 [9.0-10.7]</ns0:cell><ns0:cell>0.229</ns0:cell></ns0:row><ns0:row><ns0:cell>Hemoglobin < 8.0 g/dL, n(%)</ns0:cell><ns0:cell>9 (12.9)</ns0:cell><ns0:cell>6 (8.2)</ns0:cell><ns0:cell>0.366</ns0:cell></ns0:row><ns0:row><ns0:cell>RBC (x10 6 /µL)</ns0:cell><ns0:cell>3.2 [3.0-3.4]</ns0:cell><ns0:cell>3.3 [3.0-3.5]</ns0:cell><ns0:cell>0.118</ns0:cell></ns0:row><ns0:row><ns0:cell>Plasma glucose (mmol/L)</ns0:cell><ns0:cell>11.5 [9.7-12.8]</ns0:cell><ns0:cell>9.2 [8.1-10.7]</ns0:cell><ns0:cell><0.001 (M)</ns0:cell></ns0:row><ns0:row><ns0:cell>Stress hyperglycemia, n (%)</ns0:cell><ns0:cell>48 (68.6)</ns0:cell><ns0:cell>24 (32.9)</ns0:cell><ns0:cell><0.001 (C)</ns0:cell></ns0:row><ns0:row><ns0:cell>In-hospital death, n (%)</ns0:cell><ns0:cell>15 (21.4)</ns0:cell><ns0:cell>7 (9.6)</ns0:cell><ns0:cell>0.050 (C)</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Course of Clostridioides difficile infection general data.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 (on next page)</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Univariate analysis of the Clostridioides difficile infection group stratified by relapse and in-hospital death.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Continuous variables are presented as median (interquartile range). Categorical variables are</ns0:cell></ns0:row><ns0:row><ns0:cell>presented as number (percentage). ACE: angiotensin-converting enzyme; BMI: body mass</ns0:cell></ns0:row><ns0:row><ns0:cell>index; CABG: coronary artery bypass grafting; CDI: Clostridioides difficile infection; COPD:</ns0:cell></ns0:row><ns0:row><ns0:cell>chronic obstructive pulmonary disease; CPB: cardiopulmonary bypass; CRP: C-reactive</ns0:cell></ns0:row><ns0:row><ns0:cell>protein; IABP: intra-aortic balloon pump; LVEF: left ventricular ejection fraction; PPI: proton</ns0:cell></ns0:row><ns0:row><ns0:cell>pump inhibitors; WBC: white blood cells; ICU: intensive care unit; VHS: valvular heart surgery;</ns0:cell></ns0:row><ns0:row><ns0:cell>VAC: Vacuum-assisted closure.</ns0:cell></ns0:row></ns0:table><ns0:note>1</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Univariate analysis of the Clostridioides difficile infection group stratified by relapse and in-hospital death.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Variable</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ reviewing PDF | (2020:05:49555:1:1:NEW 24 Aug 2020)Manuscript to be reviewed</ns0:note></ns0:figure>
<ns0:note place='foot' n='3'>number (percentage). (C): Pearson's chi-square test; (M): Mann-Whitney U test.</ns0:note>
</ns0:body>
" | "Dear Reviewers of The PeerJ,
Thank you very much for your patience and all the valuable comments and suggestions. I tried my best to address all your concerns. Please find my answers below.
Sincerely yours,
Dariusz Plicner on behalf of the authors.
Dear Prof. Paola Di Carlo
1. The link between seasonality and antibiotic use in CDI has been reported by other authors, and there is evidence from the geographical area of Poland (Jachowicz, E.; Wałaszek, M.; Sulimka, G.; Maciejczak, A.; Zieńczuk, W.; Kołodziej, D.; Karaś, J.; Pobiega, M.; Wójkowska-Mach, J. Long-Term Antibiotic Prophylaxis in Urology and High Incidence of Clostridioides difficile Infections in Surgical Adult Patients. Microorganisms 2020, 8, 810.). Please comment on these findings, taking into account the use of third-generation cephalosporins and fluoroquinolones reported in Table 3.
Ad. 1. We added the part of the discussion regarding seasonal and antibiotic use according to your suggestions, (p. 10-11, lines 217-218, 221-224 and 230-237). We also added the above reference to the manuscript (p. 10, lines 221 and 234).
2. Globally, guidelines and papers confirm that metronidazole has a lower efficacy compared with vancomycin and support the use of vancomycin over metronidazole in C difficile infection. To optimize vancomycin treatment evaluated in recurrent C difficile infection, it might be possible to use a pulsed or tapered regimen of vancomycin.( Guery Benoit, Galperine Tatiana, Barbut Frédéric. Clostridioides difficile: diagnosis and treatments BMJ 2019; 366 :l4609)
Please comment on why treatment with metronidazole was favored over vancomycin in the paper.
Ad. 2. Our analysis was performed between 2014 and 2016, and treatment of C. difficile infection was consistent with the 2010 recommendations. In the current (2017) guidelines, metronidazole has lower efficacy compared with vancomycin, and we now use vancomycin over metronidazole for C. difficile infection. In accordance with your suggestions, we added this information to the Methods section (p. 4-5, lines 90-94) and Discussion (p. 10, lines 225-229).
Dear Dr Nicola Serra
1. To facilitate the reading of the Tables with statistical tests, the authors should write in bold the significant p-values and in brackets the type of test used (for example (C) in the case of chi-square test).
Ad. 1. We changed this according to your suggestion (Table 1, 2 and 4).
2. Lines 112-113: “There was no correlation between gender and incidence of CDI (P = 0.595)”. The authors describe only some non-significant results, but they do not describe all non-significant results. Please check this.
Ad. 2. We reported this non-significant result because many reports have described a correlation between gender and the incidence of CDI. We made a note about this discrepancy in the discussion: “Unlike other reports, we did not find a correlation between female gender and development of CDI (Flagg et al., 2014; Ge et al., 2018).” (p. 9, lines 195-196).
3. Lines 118: the authors claim to consider all tests with p <0.05 as significant, but the history of malignant neoplasms has a p=0.05. Please, check it.
Ad. 3. In our statistical analysis, significance was accepted at P ≤ 0.05. We changed the text according to your suggestion (p. 5, line 113).
4. I suggest that authors use additional tests for parameters where a normal range is defined (Table 1, 2, and 4). In particular they should compare the percentages of patients with abnormal values (for example for BMI, Platelets, WBC, etc) . These tests could provide important information that may not emerge from the comparison between medians or means.
Ad. 4. We performed additional tests according to your recommendations. All additional results are listed in the tables (Tables 1, 2 and 4).
5. In Table 2 the authors should also compare the subcategories of type of surgery.
Ad. 5. We compared the subcategories of the type of surgery, according to your suggestion (Table 2).
6. Line 128: “of cases were found in January (n=2, 3%), (P = 0.038 between March and rest of the months). Which test is used by the authors? If a multiple-comparison chi-square test was used, which post hoc test was used to find the most frequent proportion?
Ad. 6. Pearson's chi-square test and the Benjamini-Hochberg FDR procedure were performed.
" | Here is a paper. Please give your review comments after reading it. |
643 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Glacier-fed streams (GFS) are harsh ecosystems dominated by microbial life organized in benthic biofilms, yet the biodiversity and ecosystem functions provided by these communities remain under-appreciated. To better understand the microbial processes and communities contributing to GFS ecosystems, it is necessary to leverage high throughput sequencing. Low biomass and high inorganic particle load in GFS sediment samples may affect nucleic acid extraction efficiency using extraction methods tailored to other extreme environments such as deep-sea sediments. Here, we benchmarked the utility and efficacy of four extraction protocols, including an up-scaled phenol-chloroform protocol. We found that established protocols for comparable sample types consistently failed to yield sufficient high-quality DNA, delineating the extreme character of GFS. The methods differed in the success of downstream applications such as library preparation and sequencing. An adapted phenol-chloroform-based extraction method resulted in higher yields and better recovered the expected taxonomic profile and abundance of reconstructed genomes when compared to commercially-available methods. Affordable and straight-forward, this method consistently recapitulated the abundance and genomes of a 'mock' community, including eukaryotes. Moreover, by increasing the amount of input sediment, the protocol is readily adjustable to the microbial load of the processed samples without compromising protocol efficiency. Our study provides a first systematic and extensive analysis of the different options for extraction of nucleic acids from glacierfed streams for high-throughput sequencing applications, which may be applied to other extreme environments.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>The advent of high-throughput sequencing technologies has brought hitherto inconceivable capacities to characterize the microbial ecology of both well-studied <ns0:ref type='bibr' target='#b23'>(Jansson and Hofmockel 2018;</ns0:ref><ns0:ref type='bibr' target='#b40'>Nielsen and Ji 2015)</ns0:ref> and under-explored environments <ns0:ref type='bibr' target='#b21'>(Hotaling, Hood, and Hamilton 2017;</ns0:ref><ns0:ref type='bibr' target='#b37'>Milner et al. 2017</ns0:ref>). Among the latter include high-mountain and particularly glacier-fed streams <ns0:ref type='bibr' target='#b37'>(Milner et al. 2017</ns0:ref>) and the microbial biofilms that colonize their beds <ns0:ref type='bibr'>(Battin et al. 2016)</ns0:ref>.</ns0:p><ns0:p>Today, these streams are changing at an unprecedented pace owing to climate change and the thereby shrinking glaciers, and yet little is known of their microbial diversity <ns0:ref type='bibr'>(Wilhelm et al. 2013</ns0:ref><ns0:ref type='bibr' target='#b37'>, Milner et al. 2017)</ns0:ref>. Glacier-fed stream (GFS) sediments are extreme habitats characterized by low microbial cell abundance and activities but very high loads of fine mineral particles <ns0:ref type='bibr'>(Wilhelm et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b15'>Godone 2017;</ns0:ref><ns0:ref type='bibr' target='#b46'>Peter and Sommaruga 2017;</ns0:ref><ns0:ref type='bibr' target='#b8'>Chanudet and Filella 2008;</ns0:ref><ns0:ref type='bibr' target='#b3'>Bogen 1988)</ns0:ref>. In order to understand the diversity and composition of these microbial communities, including both eukaryotes and prokaryotes, and the role that they play, it is essential to extract nucleic acids in sufficient quantity and quality from often complex environmental matrices. After extracting the nucleic acids, downstream applications including molecular biology methods such as PCR and next-generation sequencing of amplicons or metagenomes allow for the compositional, functional and phylogenetic characterization of microbial populations and the communities that they form <ns0:ref type='bibr' target='#b54'>(Roume et al. 2013)</ns0:ref>.</ns0:p><ns0:p>While there is no lack of protocols and literature pertaining to the extraction of nucleic acids from a wide variety of environments <ns0:ref type='bibr' target='#b54'>(Roume et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b36'>Miller et al. 1999;</ns0:ref><ns0:ref type='bibr'>Xin and Chen 2012;</ns0:ref><ns0:ref type='bibr' target='#b47'>Porebski, Bailey, and Baum 1997;</ns0:ref><ns0:ref type='bibr'>Zhou, Bruns, and Tiedje 1996)</ns0:ref>, few reports dwell on the utility of these methods for biomolecular extractions from sedimentary samples with very low cell abundance as typical for GFS <ns0:ref type='bibr'>(Wilhelm et al. 2013;</ns0:ref><ns0:ref type='bibr'>Ren, Gao, and Elser 2017)</ns0:ref>. In 2015, <ns0:ref type='bibr'>Lever et al.</ns0:ref> elaborately described diverse factors and components that need to be considered for efficient nucleic acid extractions <ns0:ref type='bibr' target='#b29'>(Lever et al. 2015)</ns0:ref>. These include, but are not limited to key steps like cell lysis, removal of impurities and inhibitors and of critical additives like carrier DNA molecules to enhance aggregation and thus precipitation of DNA in case of very low concentrations. 
Since the first extraction of DNA by Swiss medical doctor Friedrich Miescher in 1869 <ns0:ref type='bibr' target='#b10'>(Dahm 2008)</ns0:ref>, biomolecule extractions have shifted from those performed with solutions prepared primarily in the laboratory <ns0:ref type='bibr' target='#b55'>(Sambrook and Russell 2006;</ns0:ref><ns0:ref type='bibr' target='#b36'>Miller et al. 1999</ns0:ref>) to using commercially-available kits. These ready-made options are designed to avoid the use of volatile and toxic chemicals such as phenol and chloroform, and are tailored to various environments including blood, faecal material, plant and soils <ns0:ref type='bibr' target='#b9'>(Claassen et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b48'>Psifidi et al. 2015;</ns0:ref><ns0:ref type='bibr' target='#b57'>Smith, Diggle, and Clarke 2003;</ns0:ref><ns0:ref type='bibr'>Vishnivetskaya et al. 2014)</ns0:ref>. While studies have concentrated on nucleic acid extraction from glacial ice cores <ns0:ref type='bibr' target='#b13'>(Dancer, Shears, and Platt 1997)</ns0:ref> or surface snow <ns0:ref type='bibr' target='#b44'>(Pei-Ying et al. 2012)</ns0:ref>, none has demonstrated their utility for GFS sediments. Together with low cell abundance <ns0:ref type='bibr'>(Wilhelm et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b29'>Lever et al. 2015)</ns0:ref>, the complex mineral matrices in GFS (Peter and Sommaruga 2017)-a consequence of the erosion activity of glaciers <ns0:ref type='bibr' target='#b3'>(Bogen 1988</ns0:ref>)-may affect nucleic acid extraction efficiency <ns0:ref type='bibr' target='#b29'>(Lever et al. 2015)</ns0:ref>. As we attempt to better understand how nature works at its limits through the study of extreme environments, non-commercial approaches <ns0:ref type='bibr' target='#b38'>(Mukhopadhyay and Roth 1993)</ns0:ref>and methodologies <ns0:ref type='bibr' target='#b29'>(Lever et al. 2015)</ns0:ref>, need to be revisited and optimized.</ns0:p><ns0:p>In recent years, several research groups <ns0:ref type='bibr' target='#b2'>(Besemer et al. 2012;</ns0:ref><ns0:ref type='bibr'>Ren et al. 2017;</ns0:ref><ns0:ref type='bibr'>Ren, Gao, and Elser 2017;</ns0:ref><ns0:ref type='bibr' target='#b13'>Dancer, Shears, and Platt 1997)</ns0:ref> have successfully used kit-based methods for DNA extraction and subsequent 16S ribosomal RNA gene amplicon sequencing on GFS samples. However, the requirements for whole genome shotgun sequencing currently include at least 50 ng of input DNA to minimize bias due to PCR reactions during library preparation <ns0:ref type='bibr' target='#b26'>(Kebschull and Zador 2015;</ns0:ref><ns0:ref type='bibr' target='#b4'>Bowers et al. 2015;</ns0:ref><ns0:ref type='bibr'>Thomas, Gilbert, and Meyer 2012;</ns0:ref><ns0:ref type='bibr' target='#b7'>Chafee, Maignien, and Simmons 2015;</ns0:ref><ns0:ref type='bibr' target='#b45'>Peng et al. 2020</ns0:ref>). Here, we address the utility and efficiency of the 'gold' standard phenol-chloroform extraction <ns0:ref type='bibr' target='#b11'>(Dairawan and Shetty 2020)</ns0:ref>, and three alternative methodologies to identify the process(es) that yield not only the highest quantity but also quality of DNA, from GFS sediments. Our goal was to address whether the phenol-chloroform method yielded the expected diversity and taxonomic profiles when extracting GFS sediments, while also enabling reconstruction of metagenome-assembled genomes. 
Simultaneously, we wanted to validate the utility of this method for the extraction of nucleic acids from both pro-and eu-karyotic sources.</ns0:p><ns0:p>Overall, our findings provide a framework for the extraction of nucleic acids such as DNA for whole genome shotgun sequencing from GFS sediments, whilst highlighting the potential variability introduced due to the isolation method employed.</ns0:p></ns0:div>
<ns0:div><ns0:head>Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Sample origin & collection</ns0:head><ns0:p>DNA extraction protocols were benchmarked using three different GFS sediments from the Swiss Alps: Corbassière (CBS, collection date: 13.11.2018), 2444 meters above sea level (m a.s.l) and Val Ferret (FE), 1995 m a.s.l at the glacier snout (up site, FEU, collection date: 23.10.2018) and one kilometer further downstream (down site, FED, collection date: 24.01.2019). Sampling was always performed later in the morning, before noon. Sediments (0.25 to 3.15 mm) were collected using two flame-sterilized metal sieves with mesh sizes of 3.15 mm and 0.250 mm, respectively. CBS differs from FEU and FED in terms of bedrock geology, with clastic sedimentary limestone dominating the catchment of CBS and breccia of gneiss dominating in FEU and FED. Sediments generally contain more organic material further downstream from the glacier, which may inhibit DNA extraction. Wet sediments were transferred into 10 ml sterile, DNA/DNase-free tubes and immediately flash-frozen in liquid nitrogen in the field. Samples were transferred to the laboratory and kept at -80 °C until analysis. All necessary measures were taken to ensure contamination-free sampling.</ns0:p></ns0:div>
<ns0:div><ns0:head>DNA extraction methods</ns0:head><ns0:p>Four different DNA extraction methods were applied to the samples. The key characteristics of the different methods are summarized in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. Method-1, -2 and -4 were manual protocols differing primarily in the lysis step (bead-beating and lysis buffer composition; <ns0:ref type='bibr' target='#b54'>(Roume et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b29'>Lever et al. 2015;</ns0:ref><ns0:ref type='bibr' target='#b55'>Sambrook and Russell 2006)</ns0:ref> while method-3 was a modified protocol of the DNeasy PowerMax Soil <ns0:ref type='bibr'>Kit (Cat.No. 12988-10)</ns0:ref> provided by Qiagen (based on communication exchanged with the manufacturer). Due to the very low microbial abundance, additional precautions were taken to establish contamination-free conditions, including daily decontamination of equipment/areas with bleach, using DNA/DNase-free glassware and plasticware, reagents and chemicals. Additionally, 'germ-free' sediment is not a viable option, as it is hard to remove any and all microorganisms from sediments. Therefore, extraction blanks, i.e. tubes without any sample, were used as controls, which underwent the same extraction protocols along with the other samples.</ns0:p><ns0:p>After DNA recovery, we assessed whether any of the eluted extraction blanks contained DNA via both NanoDrop and Qubit, and found them to be below detectable levels. Additionally, the PCR reactions during library preparation of the blanks did not yield any product, thus serving as a further contamination check. The input weight of sediment ranged from 0.5 to 5 g, as described in the respective protocols.</ns0:p><ns0:p>Method-1 was based on a previously established method <ns0:ref type='bibr' target='#b17'>(Griffiths et al. 2000)</ns0:ref>. Introduced modifications concerned primarily the mechanical lysis step and the DNA precipitation, which was rendered more stringent to improve the recovery of small amounts of DNA. Sample cell lysis was achieved by adding 0.5 g of sediment into a lysing matrix E tube with beads of variable diameter provided by the manufacturer (MP Biomedicals, SKU 116914050), 500 μl CTAB buffer (5% CTAB, 120 mM KPO 4 , pH 8.0) and 500 μl of phenol/chloroform/isoamyl alcohol (ratio 25:24:1).</ns0:p><ns0:p>Samples were loaded on a Precellys beater for 45 s at 5.500 r/s. DNA was extracted once more with chloroform/isoamyl alcohol (24:1) and precipitated with 2 vol PEG-6000, 15 μg/ml linear polyacrylamide (LPA) and 2 h incubation on ice (Supplementary material).</ns0:p><ns0:p>Method-2 was an adaptation to alpine stream sediments of the modular method for DNA extraction previously published <ns0:ref type='bibr' target='#b29'>(Lever et al. 2015)</ns0:ref>. The appropriate modules of the method, based on the nature of our samples, were put together in our protocol without further modification. Samples were prepared by mixing 5 g of sediment, 10-20% of 0.1mm zirconium beads and 1 ml of 100 mM dNTP solution. Cell lysis was achieved with 5 ml lysis buffer (30 mM Tris-HCl, 30 mM EDTA, 1% Triton X-100, 800 mM guanidium hydrochloride, pH 10.0) and incubation at 50 °C for 1h with gentle agitation (Hybridization oven, Labnet Problot L6). The supernatant was extracted once with chloroform/isoamyl alcohol (24:1) and DNA was precipitated with 10 μg/ml LPA, 0.2 vol 0.5 M NaCl, 2.5 vol ethanol and 2h incubation at RT in the dark (Supplementary material).
The input weight of 5 g of sediment was a modification of the previously established protocol, and the subsequent reagent volumes were adjusted accordingly.</ns0:p><ns0:p>Method-3 has been previously applied successfully on sand and clay soils <ns0:ref type='bibr' target='#b19'>(Hale and Crowley 2015)</ns0:ref>. In this protocol, the standard lysis capacity of the DNeasy PowerMax Soil Kit (Qiagen, Cat. No. 12988-10) was modified and enhanced by the addition of phenol:chloroform:isoamyl alcohol along with PowerBeads (provided with the kit) and C1 solution to 5 g of sediment. The manufacturer-suggested sequence of treatments and rinses with the standard buffers of the kit was then followed to elute the extracted DNA from the silica columns with 6 ml of elution buffer. Further concentration of the extracted DNA was carried out with the addition of 240 μl 5 M NaCl, 2.5 vol ethanol and 10 μg/ml LPA (Supplementary material). LPA was an additional modification to the original protocol for improved DNA recovery.</ns0:p><ns0:p>Method-4 involved chemical and enzymatic treatment of samples according to <ns0:ref type='bibr' target='#b16'>(Green and Sambrook 2017</ns0:ref>) with minor modifications. Five grams of sample were mixed with 10 ml of lysis buffer (0.1 M Tris-HCl pH 7.5, 0.05 M EDTA pH 8, 1.25% SDS; the SDS is incorporated in this buffer) and 10 μl RNase A (100 mg/ml). The sediment was then vortexed for 15 s and incubated at 37 °C for 1 h in a hybridization oven. 100 μl Proteinase K (20 mg/ml) was added in a subsequent step and the mixture was incubated for 10 min at 70 °C. Samples were extracted once with phenol/chloroform/isoamyl alcohol (ratio 25:24:1) and supernatants were extracted subsequently with chloroform/isoamyl alcohol (24:1). More stringent DNA precipitation conditions were applied with the addition of 10 μg/ml LPA and overnight incubation at -20 °C (Supplementary material).</ns0:p><ns0:p>All DNA extracts were suspended in 100 μl of DNA/DNase-free water (ThermoFisher Scientific).</ns0:p><ns0:p>Due to the inadequacy of DNA obtained from Method-1 given the 0.5 g input sediment weight, we scaled the extraction to a 5 g starting weight prior to sequencing. Extracted DNA was thereafter stored at -20 °C until further use. Due to the low DNA yields, it was necessary to use the Qubit dsDNA HS kit (Invitrogen), a fluorescence-based DNA quantification method with high sensitivity. Quality assessment, with Nanodrop and DNA visualization on a 0.8% agarose gel containing GelRed nucleic acid stain, was possible only for DNA extracted with method-4 and for DNA concentrations higher than 0.5 ng/μl. All samples yielded sufficient DNA, i.e. 50 ng (total input), for metagenomic sequencing and subsequent analyses. Additionally, a commercially available microbial mock community (ZymoBIOMICS, Cat.No. D6300) was extracted using Method-4 and used for subsequent sequencing.</ns0:p></ns0:div>
<ns0:div><ns0:head>DNA sequencing</ns0:head><ns0:p>50 ng of DNA from each sample was subjected to random shotgun sequencing. The sequencing libraries were prepared with the NEBNext Ultra II FS DNA Library Prep kit for Illumina (Cat.No. E7805), following the protocol provided with the kit. The libraries were prepared with an average insert size of 350 base pairs (bp). Qubit (Invitrogen) was used to quantify the prepared libraries, while their quality was assessed on a Bioanalyzer (Agilent). Sequencing was performed on a NextSeq500 (Illumina) instrument using a 2x150 bp read length at the Luxembourg Centre for Systems Biomedicine Sequencing Platform.</ns0:p></ns0:div>
<ns0:div><ns0:head>Genome reconstruction and metagenomic data processing</ns0:head><ns0:p>Paired sequences (i.e., forward and reverse) were processed using the Integrated Meta-omic Pipeline (IMP) <ns0:ref type='bibr'>(Narayanasamy et al. 2016)</ns0:ref>. The metagenomic workflow encompasses preprocessing (read quality filtering and trimming), assembly, and genome reconstruction in a reproducible manner. Adapter sequences were trimmed in the pre-processing step, which also included the removal of human reads. Thereafter, de novo assembly was performed using the MEGAHIT (version 2.0) assembler (D. <ns0:ref type='bibr' target='#b30'>Li et al. 2015)</ns0:ref>. Default IMP parameters were retained for all samples. Subsequently, we used MetaBAT2 <ns0:ref type='bibr' target='#b25'>(Kang et al. 2019</ns0:ref>) and MaxBin2 (Wu, Simmons, and Singer 2016) for binning, in addition to an in-house binning methodology previously described <ns0:ref type='bibr' target='#b20'>(Heintz-Buschart et al. 2017)</ns0:ref>. The latter method initially ignores the ribosomal RNA sequences in the k-mer profiles underlying the VizBin embedding clusters <ns0:ref type='bibr' target='#b28'>(Laczny et al. 2015)</ns0:ref>. In this context, VizBin utilises density-based non-hierarchical clustering algorithms and depth of coverage for genome reconstructions. Subsequently, we obtained a non-redundant set of metagenome-assembled genomes (MAGs) using DASTool <ns0:ref type='bibr' target='#b56'>(Sieber et al. 2018</ns0:ref>) with a score threshold of 0.7 for downstream analyses. The abundance of MAGs in each sample was determined by mapping the reads to the reconstructed genomes using BWA-MEM (H. Li 2013) and taking the average coverage across all contigs. Diversity measures from metagenomic sequencing were assessed by determining the abundance-weighted average coverage of all the reads to identify the number of non-redundant read sets <ns0:ref type='bibr' target='#b52'>(Rodriguez-R and Konstantinidis 2014)</ns0:ref>.</ns0:p></ns0:div>
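To make the MAG abundance estimate described above concrete, a minimal R sketch of the coverage-averaging step is shown below. The file names and column names are hypothetical placeholders rather than actual IMP outputs, and the per-contig depth table would typically be derived beforehand from the BWA-MEM alignments.

```r
# Minimal sketch (R): MAG abundance as the mean depth of coverage of its contigs.
# 'contig_depth.tsv' (columns: contig, mean_depth) and 'contig_bins.tsv'
# (columns: contig, bin) are hypothetical inputs from read mapping and binning.
depth <- read.delim("contig_depth.tsv", stringsAsFactors = FALSE)
bins  <- read.delim("contig_bins.tsv",  stringsAsFactors = FALSE)

merged <- merge(depth, bins, by = "contig")

# Average coverage across all contigs belonging to each MAG (bin)
mag_abundance <- aggregate(mean_depth ~ bin, data = merged, FUN = mean)
mag_abundance <- mag_abundance[order(-mag_abundance$mean_depth), ]
print(mag_abundance)
```

A length-weighted mean (weighting each contig by its length) is a common alternative when contig sizes vary widely.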
<ns0:div><ns0:head>Taxonomic classification for metagenomic operational taxonomic units</ns0:head><ns0:p>We used the trimmed and pre-processed reads from the IMP workflow to determine the microbial abundance and taxonomic profiles based on the mOTU (v2) tool <ns0:ref type='bibr' target='#b35'>(Milanese et al. 2019)</ns0:ref>. Based on the updated marker genes in the mOTU2 database including those from the TARA Oceans Study <ns0:ref type='bibr' target='#b58'>(Sunagawa et al. 2015)</ns0:ref> and recently generated MAGs (Tully, Graham, and Heidelberg 2018), taxonomic profiling was performed on our sequence datasets. We used a minimum alignment length of 140 bp to determine the relative abundances of the mOTUs, including the normalisation of read counts to the gene length, also accounting for the base coverage of the genes. Additionally, we used CheckM <ns0:ref type='bibr' target='#b43'>(Parks et al. 2015)</ns0:ref> to assess completeness and contamination. Subsequently, taxonomy for MAGs recovered after the redundancy analyses from DASTool was determined using the GTDB (Genome Taxonomy Database) toolkit (gtdb-tk) <ns0:ref type='bibr' target='#b42'>(Parks et al. 2018)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data analysis</ns0:head><ns0:p>Figures for the DNA concentrations, library preparation and assembly metrics, as well as the supplementary figures, were generated using GraphPad Prism (v8.3.0). Taxonomic assessments and diversity measures were generated using version 3.6 of the R statistical software package <ns0:ref type='bibr' target='#b60'>(R Core Team 2013</ns0:ref>).</ns0:p><ns0:p>DESeq2 <ns0:ref type='bibr' target='#b33'>(Love, Huber, and Anders 2014)</ns0:ref> with FDR adjustment for multiple testing was used to assess significant differences in the MAG abundances. The genomic cluster figure for the mock community was obtained as an output from the IMP metagenomic pipeline.</ns0:p></ns0:div>
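A minimal R sketch of this comparison is shown below; the toy count matrix and the two-group 'method' design are illustrative assumptions standing in for the study's actual MAG abundance table, not the authors' code. results() reports Benjamini-Hochberg FDR-adjusted p-values in the padj column by default.

```r
# Minimal sketch (R): differential MAG abundance between extraction methods with DESeq2.
library(DESeq2)

set.seed(1)
mag_counts <- matrix(rnbinom(60, mu = 50, size = 5), nrow = 10,
                     dimnames = list(paste0("MAG", 1:10), paste0("S", 1:6)))
coldata <- data.frame(method = factor(rep(c("ethanol", "phenol_chloroform"), each = 3)))

dds <- DESeqDataSetFromMatrix(countData = mag_counts, colData = coldata, design = ~ method)
dds <- DESeq(dds)

# padj holds FDR-adjusted p-values (Benjamini-Hochberg by default)
res <- results(dds, contrast = c("method", "ethanol", "phenol_chloroform"))
subset(as.data.frame(res), padj < 0.001)
```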
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Phenol-chloroform-based extraction method results in higher DNA yields</ns0:head><ns0:p>To ensure native sequencing by minimizing the number of PCR (polymerase chain reaction) cycles within the library preparation protocols, we tested four protocols for biomolecular extraction, with the aim of acquiring large quantities (>50 ng) of DNA from glacier-fed stream benthic sediments. The four methods tested were selected because of their wide applicability to related environmental samples (Method-1 & -2) <ns0:ref type='bibr' target='#b17'>(Griffiths et al. 2000;</ns0:ref><ns0:ref type='bibr' target='#b29'>Lever et al. 2015;</ns0:ref><ns0:ref type='bibr' target='#b59'>Tatti et al. 2016)</ns0:ref> and their improved chances of higher yields (Method-3; Qiagen communication). Since method-4 is considered the gold standard of DNA extraction in the biomedical sciences <ns0:ref type='bibr' target='#b11'>(Dairawan and Shetty 2020)</ns0:ref> and for bacterial cultures <ns0:ref type='bibr' target='#b16'>(Green and Sambrook 2017)</ns0:ref>, it was included in our study. The four protocols are largely based on the same principles, viz. sample preparation, cell lysis, purification, precipitation and washing (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>). From preliminary tests, it became apparent that a small-scale approach (Method-1; 0.5 g input sediment) did not yield sufficient amounts of DNA for metagenomics due to, on average, limited microbial biomass in the samples. Thus, all protocols (aside from Qiagen's, which is already designed for maxi-scale extractions) were scaled up to 5 g of input sediment, and a co-precipitant (linear polyacrylamide) was included in all precipitation steps. This was essential for the quantitative recovery of the small amounts of extracted DNA from high solution volumes (6-10 ml).</ns0:p><ns0:p>Overall, we found that extractions using the commercial kit from Qiagen (method-3) yielded increased total DNA as compared to a commonly used protocol (method-1; Fig. <ns0:ref type='figure' target='#fig_2'>1A</ns0:ref>). Furthermore, method-3 was similar in terms of DNA yield to a previously proposed generalized protocol (method-2) <ns0:ref type='bibr' target='#b29'>(Lever et al. 2015</ns0:ref>) (Fig. <ns0:ref type='figure' target='#fig_2'>1B</ns0:ref>). The phenol-chloroform-based extraction protocol (method-4) was then tested against both methods 2 and 3, using sediment samples collected from the three different glacier floodplain streams (CBS, FEU, FED) in Switzerland. Method-1 was omitted from these tests due to insufficient DNA concentrations in the preliminary extractions. We found that for all three GFS, the phenol-chloroform extraction yielded the highest DNA concentrations. In some cases, notably samples with low cell abundance, we obtained an order of magnitude more DNA (Fig. <ns0:ref type='figure' target='#fig_2'>1C</ns0:ref>).</ns0:p><ns0:p>Quality assessment of these DNA extracts with Nanodrop showed OD 260/280 ratios between ~1.4 and ~1.6. Agarose gel electrophoresis revealed a high-molecular-weight band with no apparent shearing, smearing or residual RNA, indicative of high-quality DNA (Fig. <ns0:ref type='figure' target='#fig_3'>2A</ns0:ref>).
A secondary effect observed in certain samples, with no perceived consequences for the quality of the extracted DNA, was the development of a pink-red color of varying intensity upon the addition of phenol:chloroform:isoamyl alcohol (Fig. <ns0:ref type='figure' target='#fig_3'>2B</ns0:ref>). This effect was pH-dependent, since samples were decolorized by the addition of sodium acetate (pH 5.2) in the precipitation step. It could possibly be due to a ferric-chloride-phenol compound formed when the chloride and phenol constituents of the protocol interact reversibly with Fe3+ ions contained in certain samples, depending on local geology <ns0:ref type='bibr' target='#b0'>(Banerjee and Haldar 1950)</ns0:ref>. Similar coloration has been reported previously <ns0:ref type='bibr' target='#b29'>(Lever et al. 2015)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Extraction method affects library preparation efficiency</ns0:head><ns0:p>DNA extracted with method-3 and with the phenol-chloroform method (method-4) was subsequently subjected to library preparation for high-throughput whole genome shotgun sequencing. Despite similar DNA quality across both methods (~1.4-1.6 OD 260/280 ), library preparation using the modified commercial kit did not yield any successful libraries (Fig. <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>). To assess whether any impurities or inhibitors hampered library preparation, we tested two clean-up methods for the DNA extracted with the commercial kit: 1) ethanol precipitation and 2) magnetic-bead-based clean-up. For the magnetic-bead clean-up, SPRIselect beads (Beckman Coulter, 23318) were used according to the manufacturer's protocol. We found that the magnetic-bead method led to a complete loss of sample (i.e., undetectable DNA quantity via Qubit analyses) during the process, especially when starting with a low input DNA concentration. Although six out of twelve samples were lost during the magnetic-bead clean-up, the remaining six samples achieved 100% library preparation efficiency, as indicated by Qubit-quantified concentrations greater than 0.5 ng/μl after library preparation.</ns0:p><ns0:p>On the other hand, ~20% of the samples cleaned via ethanol precipitation failed library preparation.</ns0:p><ns0:p>In contrast to these methods, DNA extracted using the phenol-chloroform-based method (method-4) yielded 100% library preparation efficiency without any additional clean-up (Fig. <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>). Additionally, we found that the distribution of the total yield after library preparation using the phenol-chloroform method was more uniform across samples compared to the other methods (Fig. <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Whole genome shotgun assembly unaffected by extraction methods</ns0:head><ns0:p>Extraction methods for whole genome shotgun sequencing may affect the sequencing itself, including the quality and the downstream assembly of the reads. To assess this, we used the libraries prepared as described above (Fig. <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>) and performed whole genome shotgun sequencing on an Illumina NextSeq500. The average quality across all three methods based on short-read sequencing was Q30 after trimming the leading and trailing sequences (described in Methods). We assessed several assembly metrics, including the contig N50, the largest alignment, the total aligned length and coverage. We did not find any significant differences in any of these measures across the three methods (Fig. <ns0:ref type='figure' target='#fig_5'>4A-C, 4E</ns0:ref>). Using a diversity index metric, however, we found a more uniform distribution across all samples prepared using method-4, albeit with no significant differences relative to the commercial kit-based extraction and library preparation (Fig. <ns0:ref type='figure' target='#fig_5'>4D</ns0:ref>).</ns0:p></ns0:div>
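For clarity, N50 is the length of the shortest contig in the smallest set of longest contigs that together span at least half of the total assembly length; a minimal R sketch of the calculation (with made-up contig lengths) is shown below.

```r
# Minimal sketch (R): N50 from a vector of contig lengths.
n50 <- function(contig_lengths) {
  lens <- sort(contig_lengths, decreasing = TRUE)
  lens[which(cumsum(lens) >= sum(lens) / 2)[1]]
}

n50(c(100, 200, 300, 400, 5000))  # 5000: the single largest contig already covers >50%
```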
<ns0:div><ns0:head>Extraction methods influence metagenomic profiles</ns0:head><ns0:p>It is well established that extraction methods (Wagner Mackenzie, Waite, and Taylor 2015) and library preparation <ns0:ref type='bibr' target='#b4'>(Bowers et al. 2015)</ns0:ref> protocols affect the taxonomic profiles and genomes recovered after high-throughput sequencing. We determined whether the preparation methods affected the overall diversity of taxa recovered and found that the phenol-chloroform and magnetic-bead clean-up methods demonstrated levels of (Shannon) diversity similar to each other, in contrast to samples precipitated using ethanol (Fig. <ns0:ref type='figure' target='#fig_9'>5A</ns0:ref>). Overall, the community profiles of the ethanol precipitation-based method were highly diverse (Fig. <ns0:ref type='figure' target='#fig_9'>5B</ns0:ref>). Interestingly, the genomes recovered and their abundances were also similar between the phenol-chloroform and magnetic-bead methods (Fig. <ns0:ref type='figure' target='#fig_9'>5C</ns0:ref>). However, we observed a significant increase (p<0.001, FDR-adjusted p-value) in the abundance of a Ralstonia genome when samples were prepared with the ethanol precipitation protocol (Supplementary fig. <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>). Additionally, we found that the number of genomes recovered using the phenol-chloroform method was more consistent with previously reported 16S rRNA gene sequencing profiles for GFS from Austria <ns0:ref type='bibr'>(Wilhelm et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b2'>Besemer et al. 2012;</ns0:ref><ns0:ref type='bibr'>Wilhelm et al. 2014)</ns0:ref>. In parallel, we used an approach to identify metagenomic operational taxonomic units (mOTUs) and found that the phenol-chloroform and magnetic-bead methods showed mOTU profiles similar to each other relative to that of the ethanol precipitation (Fig. <ns0:ref type='figure' target='#fig_9'>5D</ns0:ref>).</ns0:p></ns0:div>
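The diversity and dissimilarity measures referenced here (Shannon index, Bray-Curtis) can be illustrated with the vegan R package as in the minimal sketch below; the toy abundance table is a hypothetical stand-in, and this is not necessarily how the manuscript's figures were produced.

```r
# Minimal sketch (R): Shannon diversity and Bray-Curtis ordination with vegan.
library(vegan)

set.seed(2)
otu <- matrix(rpois(5 * 20, lambda = 10), nrow = 5,
              dimnames = list(paste0("sample", 1:5), paste0("taxon", 1:20)))

shannon <- diversity(otu, index = "shannon")  # per-sample Shannon index (Fig. 5A-style)
bray    <- vegdist(otu, method = "bray")      # Bray-Curtis dissimilarity matrix
ordin   <- cmdscale(bray, k = 2)              # principal coordinates akin to Fig. 5B
plot(ordin, xlab = "Axis 1", ylab = "Axis 2")
```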
<ns0:div><ns0:head>Efficiency of phenol-chloroform extraction on a mock community including eukaryotes</ns0:head><ns0:p>To determine whether the phenol-chloroform extraction method is biased against eukaryotes, we used a commercially available mock community (ZymoBIOMICS Microbial Community Standard #D6300) to assess bias and errors. After sequencing, we recovered high-quality (>90% completion, <5% contamination) bacterial genomes (Fig. <ns0:ref type='figure' target='#fig_10'>6A</ns0:ref>). Additionally, the abundance of the microbial genomes, including one of the eukaryotes, Saccharomyces cerevisiae, was similar to the expected levels in the mock community (Fig. <ns0:ref type='figure' target='#fig_10'>6B</ns0:ref>). On the other hand, the protocol enabled only the identification and partial recovery of the Cryptococcus neoformans genome, at lower levels, possibly due to increased melanisation of the cell wall <ns0:ref type='bibr' target='#b18'>(Grossman and Casadevall 2017)</ns0:ref> affecting lysis and subsequent extraction.</ns0:p></ns0:div><ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Improved omic techniques, not limited to metagenomics, are robust methods for analyzing nucleic acids and characterising microbial communities in various environments <ns0:ref type='bibr' target='#b24'>(Jansson et al. 2012)</ns0:ref>. One way of understanding the impacts of global climate change on GFS is to establish a census of their microbial life <ns0:ref type='bibr' target='#b37'>(Milner et al. 2017)</ns0:ref>. However, methods designed for the extraction of biomolecules, including DNA, have not been validated for GFS sediments.</ns0:p><ns0:p>Although previous glacier-fed stream studies successfully used extracted DNA for 16S rRNA amplicon sequencing <ns0:ref type='bibr'>(Ren et al. 2017;</ns0:ref><ns0:ref type='bibr'>Ren, Gao, and Elser 2017;</ns0:ref><ns0:ref type='bibr'>Vardhan Reddy et al. 2009;</ns0:ref><ns0:ref type='bibr'>Wilhelm et al. 2013</ns0:ref>), the input DNA concentration requirements are considerably higher for whole genome shotgun sequencing. Moreover, for a deeper characterisation of the microbial communities within GFS sediments, increased concentrations of input DNA may additionally alleviate PCR biases <ns0:ref type='bibr' target='#b5'>(Brooks et al. 2015;</ns0:ref><ns0:ref type='bibr' target='#b27'>Kim and Bae 2011)</ns0:ref>. Also, as previously highlighted, several methods exist for extractions from a wide variety of environmental samples, but not for GFS sediments. Here, we systematically tested the utility of four extraction protocols to identify a broadly applicable methodology. We found that a phenol-chloroform-based extraction protocol can be used for samples across geographical separations, differences in bedrock geology, and various distances from the glacier.</ns0:p><ns0:p>Glassing et al. demonstrated that inherent DNA contamination may influence microbiota interpretation in low-biomass samples <ns0:ref type='bibr' target='#b14'>(Glassing et al. 2016)</ns0:ref>. Additionally, it is known that certain compounds, such as polysaccharides and humic acids, may affect PCR reactions <ns0:ref type='bibr' target='#b49'>(Rådström et al. 2004</ns0:ref>), requiring additional DNA clean-up.
It has been established that DNA losses occur during purification steps (Roose-Amsaleg, Garnier-Sillam, and Harry 2001), including when using commercial column methods <ns0:ref type='bibr' target='#b22'>(Howeler, Ghiorse, and Walker 2003;</ns0:ref><ns0:ref type='bibr' target='#b32'>Lloyd et al. 2013)</ns0:ref> and phenol-chloroform <ns0:ref type='bibr' target='#b41'>(Ogram, Sayler, and Barkay 1987)</ns0:ref>. Interestingly, we found similar losses when using the magnetic-bead clean-up, whereas the ethanol precipitation method was inefficient compared to the phenol-chloroform protocol. Although kit-based methods are more convenient and safer than phenol-chloroform extractions <ns0:ref type='bibr' target='#b60'>(Tesena et al. 2017)</ns0:ref>, access to reagents and costs may be a considerable factor. On the other hand, isolation of the aqueous phase during phenol-chloroform extraction can be user-dependent, potentially affecting reproducibility, while kits have been shown to be more consistent across samples <ns0:ref type='bibr' target='#b9'>(Claassen et al. 2013)</ns0:ref>. Another key feature of our findings was the potential for the kit-based methods to influence the efficiency of genome reconstruction and the variability of the recovered taxonomic profiles. While this has been reported previously <ns0:ref type='bibr'>(Wagner Mackenzie, Waite, and Taylor 2015;</ns0:ref><ns0:ref type='bibr' target='#b6'>Carrigg et al. 2007)</ns0:ref>, we found considerable variability compared to the phenol-chloroform method. This is plausibly due to the incomplete dissolution of DNA in buffers, especially when using methods involving charged minerals (Vorhies and Gaines 2009; <ns0:ref type='bibr' target='#b1'>Barton et al. 2006;</ns0:ref><ns0:ref type='bibr'>Vishnivetskaya et al. 2014)</ns0:ref>, which may additionally affect DNA stability.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The utility of extraction methods extends beyond the process itself, impacting downstream applications such as whole genome shotgun sequencing. Our study shows that phenol-chloroform extraction may be an under-appreciated yet powerful method for isolating nucleic acids from glacier-fed stream sediments. While additional steps may be required for the extraction of other biomolecules such as RNA, proteins and metabolites, minor modifications may be sufficient <ns0:ref type='bibr'>(Toni et al. 2018)</ns0:ref>.</ns0:p><ns0:p>Moreover, we report for the first time a systematic assessment of biomolecular extraction methods for GFS sediments. Our findings, though fundamental, address a previously unexplored area and may lay the foundation for future in-depth characterisation of GFS microbial communities.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data Availability</ns0:head><ns0:p>The sequencing data generated during the current study are available from NCBI under the BioProject accession number PRJNA624048. A reporting summary for the uploaded data has been included as a metadata file under the listed accession ID. All extraction protocols, including the modified commercial methods, are available in the Supplementary Materials.</ns0:p></ns0:div>
<ns0:div><ns0:head>Ethical declarations</ns0:head></ns0:div>
<ns0:div><ns0:head>Conflicts of interest</ns0:head><ns0:p>The authors do not have any competing interests. Manuscript to be reviewed Manuscript to be reviewed </ns0:p><ns0:note type='other'>Figure legends</ns0:note></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ reviewing PDF | (2020:05:48507:1:2:NEW 12 Aug 2020)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Contributions P.P. performed the biomolecular extractions, including the validation of methods 1-3 alongside quality analyses and quantification. S.B.B. curated and validated the phenol-chloroform extraction method and whole genome shotgun sequencing analyses. T.B. and H.P. collected the glacier-fed stream samples for the experiments. S.F. did the DNA extractions, quantification and qualifications alongside P.P. R.H. handled the library preparation for all samples and the subsequent sequencing. P.W. contributed significantly to the development of method-1 in the manuscript. S.B.B., P.P., S.F., H.P., P.W., and T.J.B. conceived and formulated the experiments. S.B.B. and P.P. developed the manuscript with equal contributions from all authors.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Total DNA concentrations using different extraction protocols Boxplots represent the total amount of DNA (ng) extracted from 5 g of sediment when comparing (A) method-1 versus the modified-commercial kit-based method-3 and (B) method-2 versus method-3. (C) Boxplots of the DNA quantities isolated from three glacial floodplains (CBS -Corbassière, FEU -Val Ferret up site, FED -Val Ferret down site), using method -2, -3 and -4. Method-1: CTAB buffer lysis (Griffiths et al. 2000), Method-2: Modular DNA extraction (Lever et al. 2015), Method-3: Qiagen PowerMax Soil DNA extraction kit, Method-4: Chemical and enzymatic extraction. Significance was tested using a Two-Way ANOVA with Student-Neuman Keul's post-hoc analyses. **p<0.01, ***p<0.001</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Characteristics of DNA extracted with method-4 (A) Agarose gel electrophoresis of DNA extracted with mild vortexing of sediments and incubation in lysis buffer, proteinase K treatment and phenol-chloroform extraction. Lane 1: GeneRuler 1 kb DNA ladder; lanes 2-4: CBS, FED, FEU respectively. (B) Pink-red supernatants developed during phenol:chloroform extraction step.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Library preparation efficiencyThe efficiency or success percentage for prepared libraries based on the individual methods is indicated in the table. Boxplots represent concentrations of the prepared libraries.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Estimate of assembly metrics following extraction Barplots demonstrate the (A) N50 for the sequence assemblies, (B) length of the longest aligned sequence, (C) the total aligned length. Bars indicate standard deviation from the mean. (D) Boxplot showing the nonpareil diversity index across the three groups. (E) Percentage of coverage of the assembled sequences by read-mapping is depicted.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Diversity and taxonomic profiles of the metagenomic sequencing (A) Boxplot showing the Shannon diversity index for the taxonomic profiling for the three groups. Significance was tested using a One-way ANOVA with Student-Neuman Keul's post-hoc analysis. ***p-value<0.001, ****p-value<0.0001. (B) Principal component analyses generated using Bray-Curtis dissimilarity matrix depicts similarities or lack thereof between the three groups. (C) Abundances of the reconstructed genomes are depicted for method-3 + EtOH, method-3 + magnetic bead clean-up and method-4 extraction. (D) Heatmap demonstrating the mOTUs for the three methods is depicted. The hierarchical clustering for the heatmap was generated using Ward's clustering algorithm.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Evaluation of phenol-chloroform extraction using a mock community (A) Scatterplot depicts the clusters of contigs representative of the reconstructed genomes after processing the mock community using the IMP meta-omics pipeline. The taxonomic identity is displayed next to the respective clusters. (B) Barplots indicate the relative abundance of the individual genomes recovered from the mock community sequencing after extraction with the phenol-chloroform method. The upper (black) line represents the expected abundance (12%) of the prokaryotes, while the lower (red) line indicates the expected abundance (2%) of the eukaryotes.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 3 Figure 3 .</ns0:head><ns0:label>33</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>where n is the total number of times each condition was tested on the material; RT: room temperature</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,70.87,525.00,370.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,280.87,525.00,370.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,280.87,525.00,254.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,70.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 (on next page)</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Key characteristics of the four selected methodsThe table lists the key and specific characteristics of the four extraction methods tested,</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "Rebuttal Letter
Optimised biomolecular extraction for metagenomic analysis of microbial biofilms from high-mountain streams
Reviewer 1
Basic reporting
An interesting and important study! This manuscript was well written and organized. I do like the work, but I think this manuscript would benefit from revisions on figures, especially.
Experimental design
The experiments were well designed.
Validity of the findings
Generally good. The figures can be improved a little bit to clarify the results. Referring to the comments below.
We thank the reviewer for valuable and constructive feedback and have made the suggested changes with additional details addressed below.
Comments for the author
1. L22: “GFS ecosystems”
‘GFS ecology’ was modified to ‘GFS ecosystems’ as suggested.
2. L114: add “,” after 'abundance'
Added.
3. L135-150: some of the tenses were wrong, which should be past tense.
Tenses corrected in the suggested lines and modified to past-tense.
4. L218: delete “of”
Deleted.
5. L239: delete “in”
Deleted.
6. L252: change “extracted” to “extractions”
Changed.
7. L278: delete “were observed compared”
Deleted.
8. Figure 1: it is better to add the significant difference results in the plot using different letters, stars, or just P-values, or like Figure 6a. Moreover, it would be good to briefly explain what the methods -1,2,4 are, so the readers can understand the figure without referring to the main context.
Figure has been updated with stars, including description of the methods in the figure legend in lines 418-423.
9. Figure 2: briefly explain method-4.
Figure legend updated with description in lines 426-429
10. Figure 3: is this figure really necessary? Or combine this figure with others?
Combined with figure 2 as fig.2B
11. Figure 4: it is weird to have a table in a figure. It is easy to convert the table as annotations in the figure.
We thank the reviewer for their valuable input. However, the now-updated figure (Figure 3) depicts the concentrations of the libraries prepared for sequencing. Additionally, we chose to emphasize the efficiency of the preparation in the table without crowding the figure itself.
12. Figure 5: (1) there seems no reason to use different plots in figure with boxes and bars together. You can use either one. (2) the variables’ name is better to show on the top of the plot not on the X-axis like Figure S1. The legends (extraction methods) can be added to the X-axis.
Figure updated as suggested. Individual points depicted for (D) and (E) to indicate variability.
Reviewer 2
Basic reporting
The manuscript is well written - clear, professional and articulate English has been used throughout. In microbial ecology, we still face several challenges in the area of finding/devising efficient DNA extraction methods to study microbial community dynamics in understudied locations, and therefore studies such as this merit the scope of enhancing our opportunities to explore remote and less-represented sites such as the ones represented in this study.
I have the following comments about the manuscript, please see below.
We thank the reviewer for their constructive feedback including the language and most importantly, the regarding the need for DNA extraction methods for understudied locations.
INTRODUCTION:
Overall, I see a lack of citations throughout the introduction of the manuscript. While authors make several statements to support the goal of their study, they must also cite enough sources to back up their claims. Please see a few suggestions below:
We acknowledge the reviewer’s comments about the citations, and have made the necessary and appropriate changes as addressed below:
1. Line 42: Please add a few citations for “under-studied environments”
Added.
2. Line 46-48: Are there studies/reviews that more descriptively talk about this? Please include a few citations not just to support this statement, but also to help readers refer back to more detail on this should they wish to.
Multiple references added at new lines 48-49.
3. Line 73-75: Please add citation(s).
Multiple references added at new lines 75-78.
4. Line 75-77: add citations.
Added at new lines 78-80.
5. Lines 82-83: needs a reference, I am aware of this 50 ng cutoff, there must be some literature on this, just cite a few here.
Multiple references added at new lines 85-88.
6. Lines 86-93: There’s no need to include results in introduction. Authors should rephrase this paragraph so that this reads to include the goals of your study based on the background and rationale they’ve have provided earlier in the introduction. Results should go under the separate section.
We are grateful for this valuable feedback from the reviewer and have made the appropriate edits in lines 91-97.
METHODS:
7. Lines 98, 99, 100 – What is CBS, FE, FEU, FED? Please elaborate of abbreviations on first use.
The abbreviations and their elaborations have been modified to make it clearer to the reader in lines 102-104.
8. Line 98: What is a.s.l.? Please elaborate on first use.
What are the dates and times of sampling? Please add this to sample collection methods section.
We have updated the abbreviation on first use on line 102. The dates and times of sampling have also been added from lines 102 to 106. Additionally, we highlight that the sampling was performed later in the morning, before noon in line 105-106.
9. Line 103: What kind of sampling equipment was used?
Methods updated to reflect sampling equipment in new lines 105-107.
10. This is a comment for ALL the four DNA extraction protocols the authors have included under the methods section: it seems like all of these protocols have been adapted from already existing protocols or commercially-available kits. Authors mention that they’ve modified these existing methods. Since this is methods paper, it is very important to specify in detail the steps that have been specifically modified in your own procedure.
We addressed this comment in the Methods sections in lines 125-160 and also in the “Supplementary material” document where the protocols are elaborately laid out. We included statements indicating where the protocols differ from the original or previously-published methods.
11. Line 130: Mention the equipment used for agitation:
Added to new line 149.
12. This is also a comment for ALL the extraction methods: Did the authors start with the same weight of sample for all extractions? If not, a systematic and scientific comparison of the methods is not possible. Authors mention between lines 222-224 and in Figure 1 that 5 g was used for all. In the methods, maybe a range should be used. In line 119, it is mentioned that 0.5 g of sediment was the starting sample, whereas in line 128, it is mentioned that 5 g of sediment was the starting material. While there’s no mention of the amount for method 3, method 4 in line 144 again states that 5 g of sediment was the starting material.
The original protocol for method-1 was 0.5 g while all other methods used 5 g of input sediment weight. Due to the inadequacy of DNA obtained from method-1 given the low sediment weight, we tested the other methods with 5 g and scaled Method-1 to 5 g for the DNA used for sequencing. We have clarified this by including appropriate statements in lines 130-131, 177-178 and 249-251. The weight of the input sediment for method-3 has also been rectified and added to line 158.
13. What kind of normalization was achieved for this comparison? In other words, what amount of DNA was the starting material for the DNA sequencing from each of the methods? Authors should mention this in their methods. You could add a line or two in line 158 to address this issue. As this is a methods paper, simply saying ‘sufficient” DNA was obtained is not enough. What were the concentrations obtained based on the starting material? This could also be provided as a short table to enhance the understanding for the readers.
Changes made to new line at 183 and 189 to indicate 50 ng total input DNA was used for sequencing. Figure-1 indicates the concentrations obtained from the same starting material used for extractions, and so a table may not be necessary. If the reviewer is suggesting that we provide DNA concentrations for various input sediment weight, this is unfortunately, out of the scope of this manuscript.
14. How did the authors address contamination issues? For a methods paper concerning DNA extraction, this must be included in methods for readers who might be following your protocols later.
This has already been addressed in the original manuscript in updated lines 122-131.
15. What were the controls used during the extraction procedures? Please add this to methods.
Since “germ-free” sediment is not a viable option, as it is difficult to remove any and all microorganisms from sediments, we used extraction blanks, i.e. tubes without any sample, as controls. The blanks underwent the same extraction steps as the other samples. Post-DNA recovery, we assessed whether any of the eluted blanks had DNA via both NanoDrop and Qubit, and found them to be below detectable levels. Additionally, the PCR steps during the library preparation did not yield any detectable levels of product. This has been added to the manuscript in lines 122-131.
16. Figures 2 and 3 are images, they could be combined together to form one single figure. This helps to make the manuscript concise and precise.
Figures combined to form 2A and 2B
17. Figure 2 shows the bands from only one of the extraction methods. Could the authors provide a comparative yield image of all four methods that has been started with the same amount of sediment?
Methods 1 to 3 yielded very little DNA, especially from samples with low bacterial abundance, which was not sufficient to run on a gel. Only Method-4 yielded, from all samples, amounts of DNA sufficient not only to be visualized on an agarose gel (>10 ng per well) but also to be used for whole-genome shotgun metagenomics. Figure 2, together with the absence of visible DNA from the first three methods, is a demonstration of this fact. This is also reflected in the updated manuscript in lines 176-182.
Experimental design
please see my entire review under BASIC REPORTING section
Validity of the findings
please see my entire review under BASIC REPORTING section
Reviewer 3
Basic reporting
line 42- sentence starting 'among the latter' is awkward. Perhaps add 'Among the latter include...'
otherwise all is OK.
We thank the reviewer for the valuable and critical feedback allowing us to improve our manuscript. The suggested edit has been addressed in the updated manuscript in the new line 43.
Experimental design
1. line 119 - please clarify - these were the beads that are already in that tube?
The appropriate changes have been made to line 136 in the updated manuscript.
2. line 136 - list manufacturer of power soil kit. (note, this kit has changed components when MoBio sold it so be clear as to which company it came from)
Updated in lines 156-157.
3. line 138: what is meant by 'elaborate'? Are you just following kit protocol?
This has been replaced with “the manufacturer-suggested” in lines 158-159.
4. line 143 - you switch tenses here. All other methods were past tense.
Has been corrected to past tense where appropriate.
5. line 154 - use a method 'like the Qubit' or did you actually use a Qubit
Rectified in new line 178 to indicate the use of Qubit.
6. line 173: when was quality control of the reads performed?
The ‘pre-processing’ step of the IMP workflow includes read quality filtering and trimming. This has been updated in the manuscript in lines 198-200.
7. at the end of methods, you mention a 'mock community' yet the methods to create this mock community are not described.
The mock community was commercially available from ZymoBIOMICS (Cat.No. D6300). This has been updated in the manuscript in lines 183-185.
8. In the results, line 257 you describe a magnetic bead clean up, but this was not clear in the methods.
Additional clean up steps were performed either with ethanol precipitation or with magnetic beads. For magnetic bead clean-up the SPRIselect beads were used (Beckman Coulter, 23318) according to the manufacturer’s protocol. This has been highlighted in the updated manuscript in lines 282-286.
9. line 300 - how exactly did you utilize this mock community? Did you add it to sediment and then extract? This needs a section in the methods.
The mock community was extracted using Method-4 and sequenced similar to other samples. This information has been added in lines 183-185 in the updated manuscript.
10. line 347 - please show the data. Considering it comes this late in the manuscript, you could even leave this sentence out.
The sentence has been removed as proposed, reflected in line 376.
11. Figure 3 can be moved to supplemental. It is not informative to your findings.
As proposed by reviewers #1 and #2, this figure has been merged with figure 2, to create an updated figure 2.
12. Figure 4- I'm confused on what 100% success means. Please clarify your units. Do you mean micrograms in resulted in the micrograms out of library prep?
Since some samples did not yield any libraries, 100% efficiency refers to the quantification of biomolecules recovered after library preparation by Qubit. This has been clarified and elaborated in Lines 288-290.
13. Please provide more detailed information on exactly how many times each condition was tested on the same material.
We added the number of times each condition was tested as “n” to table 1 in line 467.
Validity of the findings
All findings seem valid, details just need to be more clearly explained.
" | Here is a paper. Please give your review comments after reading it. |
644 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Quantitative polymerase chain reaction (qPCR) has been used as a standard molecular detection tool in many scientific fields. Unfortunately, there is no standard method for managing published qPCR data, and those currently used generally focus on only managing raw fluorescence data. However, associated with qPCR experiments are extensive sample and assay metadata, often under-examined and under-reported. Here, we present the Molecular Detection Mapping and Analysis Platform for R (MDMAPR), an open-source and fully scalable informatics tool for researchers to merge raw qPCR fluorescence data with associated metadata into a standard format, while geospatially visualizing the distribution of the data and relative intensity of the qPCR results. The advance of this approach is in the ability to use MDMAPR to store varied qPCR data. This includes pathogen and environmental qPCR species detection studies ideally suited to geographical visualization. However, it also goes beyond these and can be utilized with other qPCR data including gene expression studies, quantification studies used in identifying health dangers associated with food and water bacteria, and the identification of unknown samples. In addition, MDMAPR's novel centralized management and geospatial visualization of qPCR data can further enable cross-discipline large-scale qPCR data standardization and accessibility to support research spanning multiple fields of science and qPCR applications.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Understanding patterns of biodiversity and detecting instances of biological species presence and absence are fundamental steps towards enhancing global biosurveillance and biomonitoring capabilities <ns0:ref type='bibr' target='#b4'>(Buckeridge et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b69'>Tatem et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b16'>Fefferman & Naumova, 2010;</ns0:ref><ns0:ref type='bibr' target='#b38'>Koopmans, 2013)</ns0:ref>. The use of quantitative polymerase chain reaction (qPCR) assays and the PeerJ reviewing PDF | (2020:05:48695:1:1:NEW 25 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed resulting data they generate offer valuable information due to their wide acceptance across multiple biological fields, and their ability to detect and quantify species' DNA quickly and with high sensitivity (Box 1; <ns0:ref type='bibr' target='#b74'>Valasek & Repa, 2005;</ns0:ref><ns0:ref type='bibr' target='#b9'>Deepak et al., 2007)</ns0:ref>. While several international biodiversity projects [e.g., Global Biodiversity Information Facility (GBIF, gbif.org accessed January 13, 2020), Species Link (http://splink.cria.org.br/), Botanical Information Network and Ecology Network (BIEN, http://bien.nceas.ucsb.edu/bien/)] aggregate global biodiversity data and facilitate the analysis of global patterns of species occurrences, the biodiversity community</ns0:p><ns0:p>has not yet integrated qPCR data into current data frameworks.</ns0:p><ns0:p>Centralizing qPCR datasets, similar to the efforts to standardize and centralize biodiversity data, remains challenging due to the overall lack of standardized data reported in published qPCR studies <ns0:ref type='bibr' target='#b26'>(Hardisty & Roberts, 2013;</ns0:ref><ns0:ref type='bibr' target='#b53'>Peterson et al., 2010)</ns0:ref>. Many published qPCR results are presented according to the interpretations of authors, and the raw data necessary to reach these interpretations (such as standard curves, cycler reactions, and primer and probe sequences) are often not included <ns0:ref type='bibr' target='#b49'>(Nicholson et al., 2020)</ns0:ref>. Researchers who have qPCR data from their experiments will often share the data in publications and data repositories such the National Center for Biotechnology Information (NCBI) Gene Expression Omnibus (GEO, https://www.ncbi.nlm.nih.gov/geo/) and Dryad (https://datadryad.org/stash). While these datasets are available to the public, it is still difficult to locate and combine them for comparative analyses due to the lack of data indexing for search engines <ns0:ref type='bibr' target='#b55'>(Pope et al., 2015)</ns0:ref>. So, unless researchers know exactly where qPCR datasets are located and can obtain them, published qPCR data is not often utilized beyond its initial research purpose. The use of standardized data formats such as XML-based Real-Time PCR Data Markup Language (RDML) to promote qPCR data sharing and improve data utility has been proposed <ns0:ref type='bibr' target='#b40'>(Lefever et al., 2009)</ns0:ref>. 
However, the XML-PeerJ reviewing PDF | (2020:05:48695:1:1:NEW 25 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed based RDML is not universally adopted by biological researchers due to the difficulties reading the data format for researchers unfamiliar with XML language <ns0:ref type='bibr' target='#b7'>(Cerami, 2010)</ns0:ref>.</ns0:p><ns0:p>Another obstacle to the centralization of qPCR data is the lack of reporting standards for samplelevel metadata (Box 2; <ns0:ref type='bibr' target='#b55'>Pope et al., 2015)</ns0:ref>, which causes the subsequent failure to establish relationships between habitat data, molecular data, and biological and life history data. Most qPCR metadata standards (e.g. the Minimum Information for Publication of Quantitative Real-Time PCR Experiment (MIQE) Guidelines, NCBI GEO's Metadata Worksheet) only require the disclosure of molecular experiment information. The lack of sample-level metadata creates difficulties in assembling and pooling qPCR data generated across researchers and institutions <ns0:ref type='bibr' target='#b49'>(Nicholson et al., 2020)</ns0:ref>. Current recommended qPCR metadata standards lack sample-related data such as geographic location, date of sample collection, and collector(s). This lack of sample metadata leaves the eco-geographical aspect of qPCR data under-examined and diminishes the value of the qPCR data for biodiversity studies.</ns0:p><ns0:p>The volume of qPCR data is increasing, along with the urgent need for qPCR data integration and centralized documentation. In the past decade, qPCR has been utilized as a tool to support numerous biological fields of inquiry, including natural resource management <ns0:ref type='bibr' target='#b72'>(Thomas et al., 2019;</ns0:ref><ns0:ref type='bibr'>Fritts et al., 2018)</ns0:ref>, food safety <ns0:ref type='bibr' target='#b1'>(Amaral et al., 2016)</ns0:ref>, conservation planning <ns0:ref type='bibr'>(Franklin et al., 2019)</ns0:ref>, and disease vector/infectious disease monitoring <ns0:ref type='bibr' target='#b56'>(Qurollo et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b31'>Ikten et al., 2016)</ns0:ref>. Research using qPCR methodologies have extended beyond the detection and quantification of target gene expression. Environmental samples can be analyzed with qPCR as a method of environmental or disease monitoring, where an organism's DNA can be detected in the sampled environmental <ns0:ref type='bibr' target='#b75'>(Veldhoen et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b65'>Sato et al., 2018)</ns0:ref>. As a consequence, the extended use of qPCR in environmental DNA (eDNA) surveys is producing a large amount of qPCR data (e.g., the qPCR raw fluorescence outputs) and associated metadata. The ability to combine these data sets with well-structured, sample-level metadata will extend their utility for applications to address new research questions in biodiversity science <ns0:ref type='bibr' target='#b53'>(Peterson et al., 2010)</ns0:ref>.</ns0:p><ns0:p>However, current bioinformatics tools largely focus on the quantitative analysis of raw fluorescence data <ns0:ref type='bibr' target='#b33'>(Kandlikar et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b34'>Kemperman & McCall, 2017)</ns0:ref>, with few tools (see examples <ns0:ref type='bibr' target='#b81'>Young et al. 
2019</ns0:ref>; Biomeme Tick Map, https://maps.biomeme.com/) available to develop a conceptual framework to standardize, integrate, display, and document qPCR fluorescence outputs with associated metadata <ns0:ref type='bibr' target='#b50'>(Pabinger et al., 2014)</ns0:ref>. This informatics gap limits collective thinking and scientific discovery.</ns0:p><ns0:p>To address the lack of data standards and sharing options for qPCR data, we have developed the extensible open-source informatics tool MDMAPR under the R Shiny framework v. 1.4.0 <ns0:ref type='bibr' target='#b8'>(Chang et al., 2019;</ns0:ref><ns0:ref type='bibr'>R Core Team, 2019 -v. 3.6.1)</ns0:ref>. This tool helps merge raw fluorescence outputs along with associated metadata into a tabular data format, enhancing data searchability and discoverability. Minimal data standards for metadata are set and include temporal, geographic, and environmental information for each sampling event. These data will then facilitate the MDMAPR geospatial visualization of the qPCR results through an interactive world map. These data and their visualization can be applied to environmental DNA qPCR studies and health related qPCR data alike. In this article, we show the strengths of MDMAPR with a focus on environmental DNA applications but also connect the usability of the platform to other uses and describe how the platform can be extended to include more specific purposes.</ns0:p></ns0:div>
<ns0:div><ns0:head>Methods</ns0:head><ns0:p>The MDMAPR program is an application written in R (R Core Team, 2019 -v. 3.6.1) under the Shiny framework <ns0:ref type='bibr' target='#b8'>(Chang et al., 2019)</ns0:ref>. The Shiny framework is a package built from R Studio <ns0:ref type='bibr'>(RStudio Team, 2015)</ns0:ref>. MDMAPR consists of two elements that can be accessed through common web browsers (e.g., Google Chrome, Internet Explorer, and Safari): a data input element and an interactive mapping element.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data input through the Data File Preparation page</ns0:head><ns0:p>On the 'Data File Preparation' page, raw fluorescence qPCR data and metadata are submitted to the application. MDMAPR accepts raw fluorescence qPCR data and metadata directly from the output of qPCR platforms, with current support for the MIC qPCR Cycler (https://biomolecularsystems.com/mic-qpcr/), Biomeme two3 (https://biomeme.com/) and Biomeme three9 (https://biomeme.com/). MDMAPR is extensible, and additional qPCR platforms can be added to the open-source code; this is addressed in the Discussion section (also see the associated User Guide for details). Raw fluorescence qPCR data are related to the metadata using individual qPCR well names as both the primary key and unique identifier. The minimum data fields required by MDMAPR are: run_location (the alphanumeric lettering used to identify the sample's qPCR well), run_platform (the qPCR platform that generated the raw qPCR output), threshold (a user-supplied threshold, required for every sample submitted to the MDMAPR program, that is used by the program to calculate the threshold cycle (Ct) value), organismScope (the target organism, which can be a discrete organism or a specific kind of organism aggregation (e.g. 'virus', 'multicellular organism')), eventDate (the collection date of the biological sample), decimalLatitude (the GPS latitude of the biological sample collection), decimalLongitude (the GPS longitude of the biological sample collection), taxonID (the unique identifier for the species target of the qPCR assay), and species (the target qPCR assay species name in 'Genus species' format). While most qPCR assays are specific to species, there are some instances where an assay could amplify all taxa below a higher-level taxon (for example, all species in a genus). Currently, to address this in the metadata input, the user would submit the taxonID for the higher-level taxon of interest and, where the Genus species name is required, create a unique identifier in place of a specific species to differentiate the higher taxon-specific assay. While most data and metadata are uploaded by the user, MDMAPR has a built-in algorithm to calculate Ct values using sample threshold values and the function th.cyc() from the R package chipPCR (v. 0.0.8.10, <ns0:ref type='bibr' target='#b62'>Roediger & Burdukiewicz, 2014)</ns0:ref>. Merged data can be downloaded for manual inspection and editing, or directly uploaded into the 'Dynamic Mapping Visualization' portion of MDMAPR. The current version of MDMAPR also allows multiple data sets to be merged for visualization. To accomplish this, users download each single-file data set of interest from the 'Data File Preparation' page, combine these files locally, and then upload the combined file to the 'Dynamic Mapping Visualization' page (see the User Guide for details). Example raw qPCR fluorescence data and associated metadata for the MDMAPR-supported platforms are available in a compressed file named Example Files (.zip), located in the 'New Data Submission' panel of the 'Data File Preparation' page. Darwin Core (DwC) terminology and definitions are used in MDMAPR to standardize ecological and spatio-temporal data <ns0:ref type='bibr' target='#b19'>(GBIF, 2010;</ns0:ref><ns0:ref type='bibr' target='#b19'>Wieczorek et al., 2012)</ns0:ref>.</ns0:p></ns0:div>
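To illustrate the kind of merge-and-Ct step performed on this page, a minimal R sketch is shown below. The file names and column layout are hypothetical placeholders, and the simple linear-interpolation Ct shown here only illustrates the principle; MDMAPR itself derives Ct values with th.cyc() from the chipPCR package.

```r
# Minimal sketch (R): join per-well fluorescence data to metadata by well ID and derive Ct.
# 'raw_fluorescence.csv' (run_location, cycle, fluorescence) and 'sample_metadata.csv'
# (run_location, threshold, eventDate, decimalLatitude, decimalLongitude, species, ...)
# are hypothetical example files, not MDMAPR inputs verbatim.
raw  <- read.csv("raw_fluorescence.csv")
meta <- read.csv("sample_metadata.csv")

# First cycle at which fluorescence crosses the user-supplied threshold (linear interpolation)
ct_linear <- function(cycle, fluo, threshold) {
  i <- which(fluo >= threshold)[1]
  if (is.na(i) || i == 1) return(NA_real_)
  cycle[i - 1] + (threshold - fluo[i - 1]) / (fluo[i] - fluo[i - 1])
}

ct_tab <- do.call(rbind, lapply(split(raw, raw$run_location), function(w) {
  thr <- meta$threshold[match(w$run_location[1], meta$run_location)]
  data.frame(run_location = w$run_location[1],
             Ct = ct_linear(w$cycle, w$fluorescence, thr))
}))

merged <- merge(ct_tab, meta, by = "run_location")  # one row per well, ready for mapping
```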
<ns0:div><ns0:head>Visualization through the Dynamic Mapping Visualization page</ns0:head><ns0:p>The merged MDMAPR data file can be uploaded via the submission portal, located in the data panel on the 'Dynamic Mapping Visualization' page. Uploaded data can be selectively displayed on the map by applying the filters Organism Scope(s), Species, and/or Time Range, located in the 'Dynamic Mapping Visualization' data panel.</ns0:p><ns0:p>The visualized data points are colour-coded based on relative cycle threshold (Ct) values (see <ns0:ref type='bibr' target='#b73'>Tsuji et al. (2019)</ns0:ref> for discussion on interpreting presence/absence using eDNA assays). In MDMAPR's default settings, the cut-off Ct value for visualizing positive detection is set to 40.</ns0:p><ns0:p>A Ct value above 40 is regarded as a negative detection, suggesting the target species DNA is not detected in the sample <ns0:ref type='bibr' target='#b35'>(Klymus et al., 2019)</ns0:ref>. Conversely, Ct values of less than 40 are considered positive detections and suggest species presence. The default maximum Ct value for visualization of positive detection in MDMAPR is adjustable as a parameter in the 'Dynamic Mapping Visualization' data panel, according to researchers' project needs. Previous studies have suggested that reliable qPCR detections depend on a cycle threshold of no more than 40 cycles <ns0:ref type='bibr' target='#b35'>(Klymus et al., 2019)</ns0:ref>. Nevertheless, qPCR runs can have different amplification efficiencies, and it has been reported that duplicate runs of the same qPCR sample can generate varied Ct values that differ by up to 2.3 cycles <ns0:ref type='bibr' target='#b6'>(Caraguel et al., 2011)</ns0:ref>. Therefore, researchers may need to set species-specific or project-specific Ct cut-off values to refine analyses and better represent expected presence.</ns0:p><ns0:p>Assessment of the presence of a target species using qPCR is associated with the quantity of DNA present in a sample <ns0:ref type='bibr' target='#b77'>(Weltz et al., 2017)</ns0:ref>. In the case of eDNA surveys, this correlation can provide a relative abundance of DNA in a given sample <ns0:ref type='bibr' target='#b77'>(Weltz et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b54'>Pilliod et al., 2014)</ns0:ref>.</ns0:p><ns0:p>MDMAPR categorizes Ct values into five intensity levels to better visualize the potential variation in target DNA abundance across sampling locations on the map. These intensity levels are: 'none detected', 'weak', 'moderate', 'strong', and 'very strong'. No detection of target DNA in the sample (when Ct > 40) is represented by a green colour, whereas presence (when Ct < 40) is represented by a palette of colours depending on the Ct value. Geographic data points whose coordinates differ by no more than 0.005 degrees of latitude/longitude are collapsed into a single data point that can be 'spiderfied'. This spiderfy effect takes biological replicate samples from the same geographic point and spreads them apart on the map so that each point can be viewed individually.</ns0:p></ns0:div>
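<ns0:p>The binning of Ct values into the five intensity levels can be expressed compactly in R. The sketch below uses the default boundaries described above; it is a simplified stand-in for the application's internal logic rather than its actual code.</ns0:p>

```r
# Sketch of the default Ct-to-intensity binning (simplified stand-in, not MDMAPR's exact code)
ct_intensity <- function(ct, cutoff = 40) {
  cut(ct,
      breaks = c(-Inf, 10, 20, 30, cutoff, Inf),
      labels = c("very strong", "strong", "moderate", "weak", "none detected"))
}

ct_intensity(c(8, 15, 27, 38, 42))
# -> very strong, strong, moderate, weak, none detected
```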
<ns0:div><ns0:head>Results</ns0:head><ns0:p>The MDMAPR application can be accessed online (https://hannerlab.shinyapps.io/MDMAPR/) or, alternatively, the source code and example files can be downloaded from GitHub (https://github.com/HannerLab/MDMAPR). MDMAPR consists of two pages. The first is the 'Data File Preparation' page (Figure <ns0:ref type='figure' target='#fig_5'>1</ns0:ref>), where raw qPCR fluorescence data are merged with associated metadata (Figure <ns0:ref type='figure' target='#fig_6'>2</ns0:ref>, Supplemental Files 1 and 2). The merged data can then be visualized immediately through MDMAPR's second element, the 'Dynamic Mapping Visualization' page (Figure <ns0:ref type='figure' target='#fig_8'>3</ns0:ref>), or downloaded and stored for future use.</ns0:p><ns0:p>The 'Dynamic Mapping Visualization' page provides the ability to visualize qPCR signal intensity data. The tool's default setting for qPCR signal intensity levels is: 'none detected' (Ct > 40; light green), 'weak' (30 < Ct < 40; light yellow), 'moderate' (20 < Ct < 30; cerulean), 'strong' (10 < Ct < 20; light magenta-pink) and 'very strong' (0 < Ct < 10; tawny). The range of Ct values for each presence intensity level can be customized by users through the selection of a starting value for each intensity level from the drop-down list, located at the bottom of the data panel. Data points with similar or identical geographic coordinates are clustered together (Figure <ns0:ref type='figure' target='#fig_8'>3B</ns0:ref>). When users click on one of the clusters, the interactive map will zoom in to the region where the selected cluster is located, and the corresponding data points with identical or similar coordinates will move apart in a spiderfied shape (Figure <ns0:ref type='figure' target='#fig_8'>3F</ns0:ref>). MDMAPR was built using the R language (R Core Team, 2019, v. 3.6.1) for statistical computing and the R Shiny framework <ns0:ref type='bibr' target='#b8'>(Chang et al., 2019)</ns0:ref>, which enables web-based interactive applications. The strengths of developing MDMAPR using R include cross-platform accessibility and wide adoption in the biological sciences for programming, data manipulation, and statistical analyses <ns0:ref type='bibr' target='#b39'>(Lai et al., 2019)</ns0:ref>. The establishment of an R community of researchers and programmers, together with an international and centralized resource network named The Comprehensive R Archive Network (CRAN; <ns0:ref type='bibr' target='#b30'>Hornik, 2012)</ns0:ref>, provides a large resource for the implementation and extension of the MDMAPR program. The open-source nature of MDMAPR is significant, especially in the life sciences, where many biological laboratories tend to use accessible and customizable informatics tools to implement their research methodologies.</ns0:p></ns0:div>
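<ns0:p>Readers who want to reproduce a comparable display outside of the application could assemble one with the leaflet R package, as in the hedged sketch below; the use of leaflet, the colour choices, and the example coordinates are assumptions for illustration and do not necessarily reflect MDMAPR's internal implementation.</ns0:p>

```r
# Sketch of an interactive map with clustered, colour-coded detection points
# (leaflet is assumed here for illustration; colours and coordinates are invented)
library(leaflet)

pts <- data.frame(lat = c(43.530, 43.531, 45.420),
                  lng = c(-80.230, -80.231, -75.700),
                  ct  = c(18, 33, 41))

level <- cut(pts$ct, breaks = c(-Inf, 20, 40, Inf),
             labels = c("strong", "weak", "none detected"))
pal <- colorFactor(palette = c("blue", "yellow", "green"),
                   levels  = c("strong", "weak", "none detected"))

leaflet(pts) %>%
  addTiles() %>%
  addCircleMarkers(lng = ~lng, lat = ~lat, color = pal(level),
                   clusterOptions = markerClusterOptions(spiderfyOnMaxZoom = TRUE))
```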
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>More importantly, open-source informatics programs can be rewritten to address new biological questions, which is integral to the biology community, where researchers with different areas of specialization work together <ns0:ref type='bibr' target='#b10'>(Deibel, 2014)</ns0:ref>. While other R-based informatics tools for qPCR data visualization exist <ns0:ref type='bibr' target='#b14'>(Dvinge et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b51'>Pabinger et al. 2009</ns0:ref>), they largely focus on statistical qPCR results rather than establishing the connection between biological information, geographical locations, and other metadata.</ns0:p><ns0:p>Specifically, these tools display qPCR results through individual data sets in plots, histograms, and density distribution graphs. These forms of visualization are useful to analyze single-study qPCR data and aid data interpretation. However, these analyses lack the ability to interpret results with respect to sample metadata, which is quickly becoming a standard in the field of environmental DNA <ns0:ref type='bibr' target='#b49'>(Nicholson et al. 2020)</ns0:ref>. Data fields such as collection location, type of collection, and others described in the MDMAPR program provide the connection of metadata to qPCR data. This connection promotes a greater breadth and depth of data interpretation.</ns0:p><ns0:p>MDMAPR addresses the lack of visualized and accessible qPCR sample metadata in three different ways. First, the data file generated in MDMAPR's 'Data File Preparation' page combines the data for use in MDMAPR's 'Dynamic Mapping Visualization' page, and also makes the columnar data format accessible for easy manipulation and selection of records after visualization. As such, biological researchers can informatically link disparate data types from diverse sources, including genomic, ecological, and environmental data. Moreover, the columnar data format can be easily shared to public data repositories such as Dryad (https://datadryad.org/), which can then also be associated with publications through existing avenues such as the association of a digital object identifier (DOI).</ns0:p><ns0:p>The second aspect where MDMAPR advances the interoperability of accumulated qPCR data is in the ability to adjust the range of each qPCR intensity level. This results in real-time visualization changes to mapped data points. The abundances of different species may vary greatly. For example, endangered species tend to have relatively lower DNA concentrations (or higher Ct values) within a given region <ns0:ref type='bibr' target='#b77'>(Weltz et al., 2017)</ns0:ref>. In such cases, higher Ct values have greater frequencies. To address this phenomenon, users can update the default range setting of the qPCR intensity level in MDMAPR to visualize the variation of Ct values by colour. Future development of MDMAPR will incorporate the option of visualizing sample points with their Ct values displayed for a less subjective interpretation of mapped results.</ns0:p><ns0:p>Thirdly, MDMAPR has the option to filter data to visualize temporal relationships.
This functionality is useful when investigating how species or populations are distributed over time.</ns0:p><ns0:p>For example, filtering submitted data by time helps users understand the invasion pathway of introduced non-native species and can identify possible routes of species introduction. In epidemiology, this functionality helps to evaluate the temporal distribution of disease-causing pathogens <ns0:ref type='bibr' target='#b2'>(Arino, 2017;</ns0:ref><ns0:ref type='bibr' target='#b71'>Thalinger et al., 2019)</ns0:ref>.</ns0:p><ns0:p>The lack of a central location for storing qPCR fluorescence data and metadata limits the current and future applications of biological data <ns0:ref type='bibr' target='#b70'>(Tedersoo et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b49'>Nicholson et al., 2020)</ns0:ref>. A unifying data platform that is both scalable and interactive can preserve existing research efforts and centralize information from diverse projects, while simultaneously providing opportunities for comparative research <ns0:ref type='bibr' target='#b52'>(Penev et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b36'>König et al., 2019)</ns0:ref>. We use the DNA barcoding effort as an example to illustrate the challenges and opportunities facing qPCR data centralization, and the strengths of standardized data storage. The global DNA barcoding effort is an initiative to characterize all metazoan life on Earth using one or a few short segments of DNA <ns0:ref type='bibr' target='#b59'>(Ratnasingham & Hebert, 2007)</ns0:ref>. The International Barcode of Life (iBOL; <ns0:ref type='bibr' target='#b0'>Adamowicz, 2015)</ns0:ref> project has established a central database and data framework (Barcode of Life Data System, BOLD) to store and share barcode data <ns0:ref type='bibr'>(Hebert et al., 2003a;</ns0:ref><ns0:ref type='bibr'>Hebert et al., 2003b)</ns0:ref>.</ns0:p><ns0:p>Research using a DNA barcoding approach has been applied across numerous biological disciplines, including epidemiology <ns0:ref type='bibr' target='#b68'>(Stothard et al., 2009)</ns0:ref>, border surveillance <ns0:ref type='bibr' target='#b44'>(Madden et al., 2019)</ns0:ref>, and environmental DNA studies <ns0:ref type='bibr' target='#b11'>(Dejean et al., 2012)</ns0:ref>. One of the beneficial outcomes of these large barcoding efforts has been the retrospective study of data in the shared data resource, using the aggregate data from many smaller projects <ns0:ref type='bibr' target='#b67'>(Shen et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b44'>Madden et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b45'>Manel et al., 2020)</ns0:ref>. For example, <ns0:ref type='bibr' target='#b45'>Manel et al. (2020)</ns0:ref> used the centralized DNA barcoding data to investigate the genetic diversity of marine species and identified the relationship between species' genetic diversity and environmental factors. These large DNA barcode studies were made possible through the use of a standard data ontology and data sharing frameworks. The need for similar data structure and centralization has been identified for qPCR and its associated metadata <ns0:ref type='bibr' target='#b29'>(Holland et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b36'>König et al., 2019)</ns0:ref>.</ns0:p><ns0:p>While the centralized storage element of BOLD is highly effective, there are drawbacks to the system. One such drawback is the 'one-size-fits-all' nature of the system.
The BOLD system classifies submitted DNA barcoding data by comparing them with pre-existing taxonomic work already stored on BOLD using five built-in algorithms <ns0:ref type='bibr' target='#b60'>(Ratnasingham & Hebert, 2013)</ns0:ref>. This means the sequence classification outcome may vary depending on the available taxonomic data on BOLD that can be used as a reference for sequence comparison, thereby making it challenging to reproduce classification outcomes. Furthermore, the fixed nature of sequence classification algorithms in BOLD prohibits researchers from integrating state-of-the-art sequence analysis methods in their studies. Hence, the ability for bioinformatic tools to be open-source and fully extensible is integral to continuous innovation in the biological sciences. MDMAPR addresses this concern by establishing required data elements while also providing open-source code to allow for, and encourage, the extensibility of the underlying R code. This is significant to the biological sciences, as it allows scientists to expand on the pre-existing MDMAPR code to produce novel and more advanced informatics analyses and applications <ns0:ref type='bibr' target='#b22'>(Gu & Sauro, 2014)</ns0:ref>. In addition, using the 'Data File Preparation' page, datasets can be stored for reanalysis at a later date, allowing for the reproducibility of research results.</ns0:p><ns0:p>To further facilitate the integration and shareability of qPCR data and associated metadata, MDMAPR has used DwC data standards to provide standardization and harmonization with other data repositories <ns0:ref type='bibr' target='#b19'>(Wieczorek et al., 2012)</ns0:ref>. The use of DwC-compatible identifiers provides the ability to connect qPCR data in MDMAPR to other repositories like GBIF (GBIF, 2010). Of chief importance among these standardized data fields is the TaxonID field. This field holds unique numerical identifiers that represent species-specific taxonomic records stored in the NCBI Taxonomy database <ns0:ref type='bibr'>(Federhen, 2011)</ns0:ref>, which link MDMAPR's qPCR data to molecular and taxonomic data resources on other databases. This linkage adds value to the MDMAPR data format, in its ability to be exported and associated with other large biological databases. The use of standard terms in MDMAPR removes the heterogeneity in the meaning of data, easing the process of discovering, combining, and comparing data from different sources. MDMAPR's data format, which adheres to the FAIR principles (Findable, Accessible, Interoperable, Reusable; <ns0:ref type='bibr' target='#b80'>Wilkinson et al., 2016)</ns0:ref>, combined with the use of Darwin Core ensures the future discoverability and shareability of qPCR data.</ns0:p><ns0:p>The collation and integration of metadata allows for comprehensive data exploration and visualization, which is an approach we believe can accelerate biological knowledge synthesis and revolutionize the biological research field <ns0:ref type='bibr' target='#b32'>(Jetz et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b36'>König et al., 2019)</ns0:ref>. In MDMAPR, the integration of associated metadata allows researchers to filter qPCR samples by DwC-compatible species names.
This is an important feature, as a single species can have multiple qPCR assays targeting different genetic markers (see examples in <ns0:ref type='bibr' target='#b47'>Medina et al., 1999;</ns0:ref> <ns0:ref type='bibr' target='#b23'>Guo et al., 2015)</ns0:ref>. Moreover, the sensitivity of species detection is enhanced when multiple DNA markers are used for analysis <ns0:ref type='bibr' target='#b66'>(See et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b41'>Liu et al., 2017)</ns0:ref>. Thus, the preservation of species- and biomarker-specific qPCR results becomes important for maintaining the data robustness required for detecting the presence/absence of species. In MDMAPR, the DwC-compatible species field is what links these data together <ns0:ref type='bibr' target='#b76'>(Walls et al., 2014)</ns0:ref>. Currently, MDMAPR can filter data by species. Ongoing development of the platform will include other filtering options, like filtering by molecular marker.</ns0:p><ns0:p>A core element of the MDMAPR approach is in establishing a platform that can accept data from different qPCR instruments and the data formats of their corresponding software. This diversity of potential instruments becomes a bottleneck in the biological informatics research workflow, as extra efforts are required to integrate raw qPCR fluorescence data from different platforms before these data can be further analyzed in a comparative context. Unfortunately, unlike DNA sequence repositories that store nucleotide data in a common format (FASTA), the qPCR raw fluorescence outputs from different instruments do not share a common data format. Thus, accepting data from different qPCR platforms and integrating these data into a single location is essential for data centralization. MDMAPR accepts raw fluorescence outputs from multiple platforms and integrates these data into a tabular format. This functionality allows the aggregation of many qPCR results, and more importantly, it provides convenience for those researchers who want to compare performances or biases across different qPCR platforms during species detection <ns0:ref type='bibr' target='#b63'>(Ross et al., 2013)</ns0:ref>. Although there are only a few qPCR platforms currently supported by the MDMAPR program, the open-source code makes it easy for users to add additional platforms directly in the programming (see User Guide on GitHub for details). Ongoing development of the MDMAPR platform is focused on making the addition of platforms modular through the creation of reference files for the system to access.</ns0:p><ns0:p>The mapping of centralized qPCR data can reveal useful information on the dynamics of species distribution patterns across space and time. MDMAPR can reveal patterns in what appear to be unrelated instances of species occurrences. For example, centralized data storage and mapping of Salmonellosis cases, which are often categorized as sporadic events, may provide insight into the relationships among different outbreaks <ns0:ref type='bibr' target='#b61'>(Riley, 2019)</ns0:ref>.
The accumulation of qPCR results in a centralized repository, like MDMAPR, can unmask interrelationships and could also help to elucidate dispersal pathways and barriers to distributions through visualizing data through time <ns0:ref type='bibr' target='#b48'>(Nelson & Platnick, 1981)</ns0:ref>.</ns0:p><ns0:p>MDMAPR preserves both qPCR-derived presence and absence data, which is valuable for modelling and tracking biological organisms across space and time. In biodiversity research, inferring species absence from available data can be approached using modelling; however, assertions of absence are often regarded as uncertain <ns0:ref type='bibr' target='#b43'>(Mackenzie & Royle, 2005)</ns0:ref>. Species distribution modelling can have better predictive outcomes when combining as many data records as possible, including both presence and absence data <ns0:ref type='bibr' target='#b3'>(Brotons et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b42'>Lobo et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b58'>Rahman et al., 2019)</ns0:ref>. MDMAPR's approach to integrating qPCR data enables the documentation of both positive (presence) and negative (absence) detections obtained from environmental studies that use qPCR technologies. The choice of R as a coding language for MDMAPR provides further opportunities for the integration of existing modelling analyses such as the R eDNAOccupancy package <ns0:ref type='bibr' target='#b13'>(Dorazio & Erickson 2017)</ns0:ref>.</ns0:p><ns0:p>The future development of MDMAPR should focus on its core strengths of being open-source, extensible, and centralized while using standardized data fields to connect to other data storage efforts <ns0:ref type='bibr' target='#b24'>(Guralnick & Hill, 2009)</ns0:ref>. With the increasing number of qPCR technologies available with platform-specific data formats, such as output data file types (csv vs. xlsx) and different structures and naming within these file types, the inclusion of all data formats in this or future releases of MDMAPR is not feasible. A necessary next step to further the extensibility of MDMAPR is to develop a standardized process to allow the upload of additional qPCR fluorescence data formats. A related future goal would be the establishment of a central storage location for these extensions, such as a supported website or GitHub repository. Finally, future work by the qPCR community at large is needed where a single standard format for reporting qPCR fluorescence is adopted.</ns0:p><ns0:p>The increased amount of qPCR data accepted by future versions of MDMAPR may require more robust data storage capacity (e.g. a relational database), and more diverse data filters (e.g. by geographic coordinates) to be implemented so that users can still find and subset targeted data in an efficient manner. Ongoing development for MDMAPR will incorporate more diverse data structures which will support situations such as multiple qPCR assays in a single reaction and additional metadata, including reporting standards recommended by the MIQE Guidelines <ns0:ref type='bibr' target='#b5'>(Bustin et al., 2009)</ns0:ref>.
The export of data from MDMAPR should not be limited to a single spreadsheet format.</ns0:p><ns0:p>One option is that MDMAPR could include the ability to transform presence/absence data into a shapefile format, so that it could be imported into other mapping platforms such as ArcGIS (https://www.arcgis.com).</ns0:p></ns0:div>
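<ns0:p>As a hedged sketch of the kind of export discussed here, point detections could be written to a shapefile with the sf R package. This is a possible approach under stated assumptions, not functionality that currently exists in MDMAPR; the column names and coordinates are invented for the example.</ns0:p>

```r
# Sketch of exporting presence/absence points to a shapefile (not a current MDMAPR feature)
library(sf)

detections <- data.frame(species = c("Genus species", "Genus species"),
                         present = c(TRUE, FALSE),
                         lon = c(-80.23, -75.70),
                         lat = c(43.53, 45.42))

pts <- st_as_sf(detections, coords = c("lon", "lat"), crs = 4326)  # WGS84 coordinates
st_write(pts, "detections.shp")  # readable by ArcGIS and other GIS platforms
```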
<ns0:div><ns0:p>Currently, MDMAPR addresses data security by having the qPCR data stored on a local computer and then utilizing the web-based R Shiny MDMAPR instance for data combining and visualization. Future work to develop MDMAPR should focus on integrating a more robust underlying data structure to address concerns related to accessibility and security. To accomplish this, the integration of existing R and R Shiny options, such as the use of an SQL database and Shiny Server Pro for enhanced data security features (https://rstudio.com/products/shiny-serverpro/), is ideally suited. The further development of an underlying database and additional filtering options (while maintaining open access to all code) presents many opportunities to consolidate qPCR data in an internationally accessible global qPCR data repository.</ns0:p></ns0:div><ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>MDMAPR is a significant first step toward providing an open-source and scalable framework for qPCR data centralization and geographic visualization. MDMAPR offers researchers an interactive environment for merging raw qPCR fluorescence values with sample metadata, and the ability to visualize qPCR data in a geographic context. These two elements enable researchers to visualize qPCR signal intensities (presence or absence) on an interactive world map, thereby demonstrating the potential of centralized qPCR data generated from multiple projects for use in comparative studies. In addition, MDMAPR not only brings these data together, but also transforms them into a more accessible format. The open-source, customizable, and scalable nature of MDMAPR's code offers researchers flexibility and extensibility options while simultaneously providing standard formats for the centralization and searchability of data. The features of MDMAPR are designed to meet the needs of a variety of research aims including biodetection and surveillance. With the quality and reliability improvements of portable qPCR devices, MDMAPR is addressing a critical need by providing a resource to centralize data and present computational options to accompany technological advances. With the integration and centralization of qPCR and associated metadata through platforms like MDMAPR, the expedited visualization of species presence/absence data is possible, which can contribute to quicker management decisions by researchers, governments, and other involved personnel.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>(A) During qPCR data merge, MDMAPR converts the fluorescence data for each of the samples into a column of data strings, which is then combined with the metadata file. (B) The merged file includes all metadata columns, a column containing the raw fluorescence output for all samples, and a column with calculated Ct values (Supplemental File 1). Extra columns can be added according to researchers' study needs. The structure of the merged data table is the format accepted by the 'Dynamic Mapping Visualization' page. A full list of metadata fields and their descriptions that are currently used in MDMAPR can be found in Supplemental File 2.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure>
</ns0:body>
" | "Dear Dr. Ruslan Kalendar, August 10, 2020
We would like to sincerely thank you and the two reviewers for the thoughtful reading of and comments on our submitted manuscript "Molecular Detection Mapping and Analysis Platform (MDMAP) facilitating the standardization, analysis, visualization, and sharing of qPCR data and metadata." We have carefully reviewed all comments and responded to each inline below in the attached review document.
There were four distinct groups of comments which we have generally commented on in this cover letter.
1. Simple comments, including spelling, grammar, and general sentence structure for greater understanding. Each instance of these was specifically addressed and corrected with a response and a link back to the correction in the main document.
2. Questions on justification of the limitations for the manuscript. While we do recognize there are limitations to this program with respect to its use in qPCR visualization, we also acknowledge that we were not as effective at communicating these limitations and the reasons why further efforts were not taken for this manuscript. We updated the manuscript and provide context for our updates for each reviewer comment which requested further clarification about these issues.
3. Broad scoping comments. While there were few larger comments affecting the entire paper, there was one specific comment we would like to highlight. Reviewer 2 pointed out the existence of another mapping effort using the name MDMAP. While we were aware of this other initiative, we do agree that distinguishing our work, where possible, is always desired. In an attempt to differentiate our effort from the other while maintaining our acronym, we changed the title of the paper and the subsequent program to "Molecular Detection Mapping and Analysis Platform for R (MDMAPR) facilitating the standardization, analysis, visualization, and sharing of qPCR data and metadata." We very much thank the reviewer for this comment, as we feel this title and name change will better separate our work from the current use of MDMAP.
4. Finally, there were several elements from an editorial perspective that were addressed, including:
◦ Author affiliations between manuscript and online accounts were standardized
◦ License information (GPL) was added to the program, and the associated files are now included in the online access to the program on GitHub
◦ Figure permissions were confirmed
◦ Figures were uploaded in the proper format.
◦ The combined Methods and Results sections were separated as per the journal's request.
We would like to again thank all parties who committed their time to this review, and we believe the paper is stronger because of these changes and is now suitable for publication in PeerJ.
Sincerely,
Robert Young (co-first author)
Post-Doctoral Fellow, Hanner Lab University of Guelph
In line reviewer responses
Please find our responses included below directly after the reviewer comments in the orange text colour.
Reviewer 1 (Jason Ferrante)
As an end-user who would appreciate and benefit from this work, I was a bit curious as to why the Platform, tasked at allowing multiple datasets to be co-analyzed, was designed to integrate output data from 3 relatively new, specialized (compact/low throughput and/or field ready) qPCR systems as opposed to systems that are present in a majority of labs around the world. Using BioRad, ThermoFisher/Applied Biosystems, etc. output files would make this Platform immediately applicable to a large number of labs and would presumably allow for immediate integration and analysis of years of previously collected data. While I don’t think this point in any way precludes the work from being publishable, I can assure you that some detail on the thought process would be desired from researchers that will read this article, even if it just happened to be availability of the systems in your lab. I am heartened by your desire to develop R script to help broaden the outputs available to be integrated into the Platform.
Thank you for the comment; we agree that the three qPCR platforms currently supported do not represent a large number of labs. We used these platforms as they were the platforms for which we had data. We have further clarified the points you raise in this comment in our response to your next comment, directly below.
Another major question that arises from reading this manuscript is what will become of the Platform after this (Lines 334 to 343)? You suggest that users will have the ability to adapt the Platform to various qPCR software programs, but a centralized database still needs to be established, and the standards chosen as data fields did not seem to come from the qPCR community. Can you expand on these a bit more?
In this first instance of MDMAPR we have hardcoded the inclusion of qPCR platforms so no database is needed. For future expansion using this version the user would need to use the source code and add in additional qPCR platform formats (easily done for someone with R coding skills and the open-source code). For future development of the MDMAPR we will focus on integrating a modular file system where each file would represent a qPCR instrument and could be placed in the file structure of the program for extensibility. In addition, the number of fields in the metadata table is completely extensible for the MDMAPR presented here. The inclusion of additional fields will not break the current system. The system is extensible in that if a user wishes to add filtering to the program for new fields then the source code is available to do so. We are also working on the inclusion of additional data and database storage options for ongoing development of MDMAPR. To address this comment to the reader we have added the following text…
Although there are only a few qPCR platforms currently supported with the MDMAPR program, the open-source code makes it easy for users to add additional platforms directly in the programing (see the associated user guide for MDMAPR). Ongoing development of the MDMAPR platform is focused on making the addition of platforms modular through the creation of reference files for the system to access.
An excellent feature is the ability to save a dataset from the Data File Preparation for re-analysis and to reproduce results. This aspect of data management is becoming vitally important in fields where new data are added to databases. I commend your effort to ensure FAIR principles and the inclusion of DwC-compatible identifiers.
Best of luck!
Annotated manuscript
The reviewer has also provided an annotated manuscript as part of their review:
Download Annotated Manuscript
Experimental Design
This work is original research and appears to fall within the scope of the journal. They have clearly outlined its relevance to the qPCR field of study. I am particularly appreciative of the fact that the authors have made the various settings adjustable (i.e. Ct cutoff, etc.), recognizing the inherent variability from assay to assay, thus making the Platform more broadly applicable. The data fields chosen for the metadata are useful for secondary analysis of qPCR results. However, I wonder if the authors considered MIQE reporting for qPCR assays for the metadata (Bustin, et al 2009. https://academic.oup.com/clinchem/article/55/4/611/5631762).
As it currently stands, for the metadata associated with sample collections and for geospatially visualizing the qPCR data, we used the more universally accepted DwC categories. Ongoing work for the MDMAPR program is focused on more robust data collection and storage linking to a relational database, where we will incorporate more MIQE data elements. We have also added a sentence in the discussion to clarify this idea to readers …
Ongoing development for MDMAPR will incorporate more diverse data structures which will support situations such as multiple qPCR assays in a single reaction and additional metadata including reporting standards recommended by the MIQE Guidelines (Bustin et al., 2009).
Validity of the Findings
This is a novel tool for compiling and standardizing qPCR data and its associated metadata. I feel tools such as this are very much in need in the field, and I congratulate the authors on producing it. The authors give good examples as to the potential uses of the tool. That being said, the primary functionality is the ability to pull raw fluorescence data from qPCR outputs and calculate a Ct value, then inserting the 2 columns at the end of a metadata table the user needs to fill out. The Platform’s strength is in this first step, which is limited to use in 3 qPCR systems (2 from the same company), none of which are broadly used yet in the field of qPCR research (understanding the irony that the Biomeme systems are in fact more broadly used in the actual “field”, being a portable system!). Therefore, a bit more focus on the planned development of R scripts to allow the upload of additional qPCR fluorescence data formats is warranted and would be encouraging to the reader.
Please see a previous response where we addressed to this reviewer comment (2 responses before this one).
Keywords: Generally, do not need keywords that are in your title (qPCR).
The use of qPCR in the keywords has been removed.
Introduction:
Line 39: I suggest caution using the term “molecular diagnostics” with qPCR as much of the use
for qPCR to date has included gene expression studies, which that term aptly describes, but the
lean towards eDNA you lend in your manuscript, while appropriate as a growing field using
qPCR, does not meet the traditional concept of the term “molecular diagnostics”. Effectively,
your tool crosses over nicely with medical and field research, so using a term generally assigned
to the former overshadows the latter. Remember, “diagnostics” help to diagnose something, as in
a disease or condition, and does not apply to areas such as eDNA necessarily.
This sentence has been updated to reflect the reviewer comment and now reads…
The use of quantitative polymerase chain reaction (qPCR) assays and the resulting data they generate, offer valuable information due to their wide acceptance across multiple biological fields, and their ability to detect and quantify species' DNA quickly and with high sensitivity (Box 1; Valasek & Repa, 2005; Deepak et al., 2007).
Line 41: Context of term “real time” – while successfully described in the caption of Box 1, my
interpretation of your reference to “real time” in this sentence itself was less clear. For example,
in eDNA studies, that DNA may have been present for some time before its presence is detected.
Perhaps use a different term, like “quickly and with high sensitivity”.
The document has been updated to reflect this suggestion. See above comment sentence update.
Line 78: Use of word “diagnostics”, in my opinion, relates more to the example of food safety
(generally) than it does the resource management or conservation planning references. Perhaps
use a more neutral term?
The term “molecular diagnostics” was removed from this sentence and the resulting sentence is…
In the past decade, qPCR has been utilized as a tool to support numerous biological fields of inquiry, including natural resource management (Thomas et al., 2019; Fritts et al., 2018), food safety (Amaral et al., 2016), conservation planning (Franklin et al., 2019), and disease vector/infectious disease monitoring (Qurollo et al., 2017; Ikten et al., 2016).
Methods and Results
Lines 167-170: Signal intensity values: where did these come from? Is this based on a reference?
Assigning a signal intensity carries a lot of weight in the interpretation of the results.
To address this confusion, we have updated the previous sentence to this concern to better explain how the signal intensity values are obtained. The updated sentence reads as follows…
MDMAPR categorizes Ct values into five intensity levels to better visualize the potential variation in target DNA abundance across sampling locations on the map.
Discussion
Line 322: This sentence needs some work – Suggest “MDMAP could help to…” and the part
“either assay or species specific of geographically, over time” is confusing.
This sentence has been updated and combined with the previous sentence to reflect the reviewer’s comment. It now reads…
The accumulation of qPCR results in a centralized repository, like MDMAPR, can unmask interrelationships and could also help to elucidate dispersal pathways and barriers to distributions through visualizing data through time (Nelson & Platnick, 1981).
Paragraph 325-333: Suggest word-smithing the pp. Line 328 has the word conclusions twice, one
word apart, and should be “regarded as a source…”. Last sentence is a repeat of the first.
Thank you, we agree and have addressed your comment and changed the sentence to the following…
In biodiversity research inferring species absence from available data can be approached using modelling, however, assertations of absence are often regarded as uncertain (Mackenzie & Royle, 2005).
Line 336: Perhaps be more specific here? Did you look into other data formats? Which ones?
Readers may be expecting the more common systems are used. Also, all outputs from every
major system are in .csv or .xlsx these days, so clarification as to what aspects are proprietary
would also help here.
We have clarified our writing here. The sentence now reads…
With the increasing number of qPCR technologies available with platform specific data formats, such as output data file types (csv. vs. xlsx) and different structures and naming within these file types, the inclusion of all data formats in this or future releases of MDMAPR is not feasible.
Line 341: You have referenced many articles that look at standards for reporting (i.e. Klymus, et
al 2019), and more are out there (i.e. the Bustin 2009 I reference in the Experimental Design
comments). Can you address why these were not incorporated (such as LOD, threshold values,
etc.)?
Currently MDMAPR is focused on the centralization and visualization of qPCR data. The other elements you note above require more work and are part of the ongoing development of MDMAPR. We have addressed this comment in the second to last paragraph and included a statement about future inclusion of MIQE standard data which includes these elements. It reads as follows…
Ongoing development for MDMAPR will incorporate more diverse data structures which will support situations such as multiple qPCR assays in a single reaction and additional metadata including reporting standards recommended by the MIQE Guidelines (Bustin et al., 2009).
Conclusion
Line 362: Geographical? Not sure if this is correct.
This was changed to geographic.
Line 364: Please check grammar, or comma usage “the availability and quality and reliability
improvements of…” It’s a bit unclear.
This sentence was addressed and reworked and now reads…
With the quality and reliability improvements of portable qPCR devices, MDMAPR is addressing a critical need by providing a resource to centralize data and present computational options to accompany technological advances.
Line 365: “addressing” used twice in the sentence, recommend replacing one with another word
for clarity.
See previous response.
Reference
General: Check formatting. Some titles are first word capitalized, some are all lower case, and
some are first word in sentence capitalized.
Capitalization variation has been addressed in the references section.
Line 397: XML in title should be all caps
This was corrected
Line 452: Remove “( )” from 2015
This was corrected on line 542 where it occurred (I believe the above reviewer comment contained a typo).
Supplemental file 1
verbatimElevation – Description should indicate “meters”
verbatimDepth – Description should indicate units (meters, cm, inches, etc)
decimalLatitude, decimalLongtitude – Description should indicate which datum is to be used
(NAD, WGS, etc).
Thank you for the comment. The verbatimElevation and verbatimDepth have had the unit of meters added to their description in the supplemental file 1.
MDMAPR uses EPSG:3857 as the spatial reference system and this information has been included in the descriptions for the fields decimalLatitude and decimalLongtitude in the supplemental file 1.
________________________________________
Reviewer 2 (Anonymous)
Experimental design
It was difficult to adequately test the MDMAP application. Example files worked without issue but would be more of a benefit to test custom datasets and to compare cycle threshold values with supported platforms/software and those employed within MDMAP (th.cyc in the ChipPCR package). More extensive and clear documentation of uploading data, particularly with multiple PCR runs or projects that span a longer time frame would be an improvement.
Thank you for the comment. We recognize the importance of clear documentation in helping readers become familiar with MDMAPR. Thus, we have prepared an MDMAPR user guide which contains more detail about data upload and processing in MDMAPR. The user guide is now available in the Supplemental Files.
Comments for the Author
The following constitutes a review of the scientific article entitled “Molecular Detection Mapping and Analysis Platform (MDMAP) facilitating the standardization, analysis, visualization, and sharing of qPCR data and metadata” for publication consideration in the journal PeerJ. Overall the writing is clear and concise with appropriate and relevant literature cited. The problem and the justification for the submitted work is clearly identified.
This reviewer took the liberty of testing the ShinyApp interface with the example files provided. Linking the web tool through the CRAN community and potentially having access to developed packages that relate to qPCR data reduction and analysis is advantageous by itself. The interface is simple and fairly easy to use. However, currently there are significant limitations in use of the tool and overall clear documentation for use of the MDMAP application is lacking. Many labs do not use the platforms the ShinyApp currently supports, which restricts the reach of this tool to those labs. It’s also difficult to evaluate the algorithm used for determining cycle threshold values with any data other than those created by MIC or Biomeme instrumentation. Although the authors have clearly identified qPCR data standards are urgently needed, it is suggested since testing of this application is difficult with data from any other machine than the three currently supported that the authors provide additional reasoning why MDMAP will reach the standard that other labs should follow. It would also be very helpful to evaluate or compare calculated cycle threshold values from onboard software (regardless of the instrument) to the th.cyc algorithm used in the MDMAP application. In other words, answering the questions how do the algorithms compare and which one(s) are the best. This is critical since MDMAP is specifically reporting and mapping cycle threshold values.
The concept of linking qPCR raw fluorescence values to a standardized cycle threshold value, and then linking that data to project or sample metadata is a worthwhile effort. I imagine further documentation on use of the MDMAP application will be provided in the future although it would have been helpful for a better evaluation of the tool currently. As such, in its current form the manuscript is recommended for publication but will need revision to address the concerns raised above and the specific comments that follow.
Specific comments
Title page; lines 2 and 3: There is another large database effort with the acronym MDMAP through NOAA. If this effort was to become publicly disseminated and searched through web browsers would it be appropriate to alter the acronym slightly to avoid overlap? In fairness, the two efforts are not at all related in focus and should be no issue with leaving as is but a slight change could help with visibility.
Thank you for the comment. We have updated the name of the program to Molecular Detection Mapping and Analysis Platform for R (MDMAPR). This change addresses your comment and gives our paper and program distinction from the other MDMAP effort, while at the same time keeping the informative nature of the name. This change has been reflected throughout all documentation associated with the program and this manuscript.
Abstract; line 26: I’m ok with leaving this sentence in the abstract, but as written using the word ‘rarely’ implies there are efforts that exist that include associated metadata with qPCR results making MDMAP less novel. Are these efforts referenced appropriately? A couple of examples are eDNAtlas through the US Forest Service and a more recent effort by USGS eNAS. Suggest adding references for these efforts and the main document.
Young, Michael K.; Isaak, Daniel J.; Schwartz, Michael K.; McKelvey, Kevin S.; Nagel, David E.; Franklin, Thomas W.; Greaves, Samuel E.; Dysthe, J. Caleb; Pilgrim, Kristine L.; Chandler, Gwynne L.; Wollrab, Sherry P.; Carim, Kellie J.; Wilcox, Taylor M.; Parkes-Payne, Sharon L.; Horan, Dona L. 2018. Species occurrence data from the aquatic eDNAtlas database. Fort Collins, CO: Forest Service Research Data Archive. Updated 08 November 2019. https://doi.org/10.2737/RDS-2018-0010
This comment has been addressed. The sentence in the abstract has been updated and Young et al. 2019 has been added in the introduction with reference to tools used in qPCR that look primarily at statistical analyses. The sentence in the introduction now reads…
Unfortunately, there is no standard method for managing published qPCR data, and those currently used generally focus on only managing raw fluorescence data.
Abstract; line 33: The effort is not just eDNA focused with regards to species distribution, but includes qPCR results from other applications such as gene expression studies, pathogen detection, etc, is it not? This is perhaps where the novelty of MDMAP resides. Can the authors spell out better exactly what types of studies MDMAP supports in the abstract? What areas of molecular diagnostics can MDMAP facilitate sharing of data?
Thank you for this comment, we completely agree that the abstract did not sufficiently convey the possibilities of the MDMAPR program. The last half of the abstract has been updated and expanded to better reflect the potential of MDMAPR. The updated portion now reads as follows…
The advance of this approach is in the ability to use MDMAPR to store varied qPCR data. This includes pathogen and environmental qPCR species detection studies ideally suited to geographical visualization. However, it also goes beyond these and can be utilized with other qPCR data including gene expression studies, quantification studies used in identifying health dangers associated with food and water bacteria, and the identification of unknown samples. In addition, MDMAPR’s novel centralized management and geospatial visualization of qPCR data can further enable cross-discipline large-scale qPCR data standardization and accessibility to support research spanning multiple fields of science and qPCR applications.
Introduction; line 74: Seems a strange way to identify ‘samples’ as ‘qPCR samples’. Samples can be water samples, soil samples, tissue samples, etc. Considering rewording.
Thank you for the comment. We have reworded the sentence and now the updated sentence reads as below …
This lack of sample metadata leaves the eco-geographical aspect of qPCR data under-examined and diminishes the value of the qPCR data for biodiversity studies.
Introduction; line 84 – 87: Poor sentence structure, reword. Here is a suggestion, “As a consequence, the extended use of qPCR in environmental DNA (eDNA) surveys is producing a large amount of qPCR data (e.g., the qPCR raw fluorescence outputs) and associated metadata.”
Thank you for the nice suggestion! We have implemented this sentence into the article.
Introduction; line 90: ‘bioinformatics tools’…Can the authors provide specific examples?
We have included some references at this location as examples of bioinformatics tools processing qPCR data. Those references are:
Kandlikar GS, Gold ZJ, Cowen MC, Meyer RS, Freise AC, Kraft NJB, Moberg-Parker J, Sprague J, Kushner DJ, Curd EE. 2018. ranacapa: An R package and Shiny web app to explore environmental DNA data with exploratory statistics and interactive visualizations. F1000Research 7:1734. DOI: 10.12688/f1000research.16680.1.
Kemperman L, McCall MN. 2017. miRcomp-Shiny: Interactive assessment of qPCR-based microRNA quantification and quality control algorithms. F1000Research 6:2046. DOI: 10.12688/f1000research.13098.1.
Introduction; line 90 – 91: ‘with few tools’. If there are a few, what are they? eDNAtlas? Other?
We have included some references as examples of bioinformatics tools/platforms that integrate eDNA or qPCR data with metadata. Those references are:
Young, Michael K.; Isaak, Daniel J.; Schwartz, Michael K.; McKelvey, Kevin S.; Nagel, David E.; Franklin, Thomas W.; Greaves, Samuel E.; Dysthe, J. Caleb; Pilgrim, Kristine L.; Chandler, Gwynne L.; Wollrab, Sherry P.; Carim, Kellie J.; Wilcox, Taylor M.; Parkes-Payne, Sharon L.; Horan, Dona L. 2018. Species occurrence data from the aquatic eDNAtlas database. Fort Collins, CO: Forest Service Research Data Archive. Updated 08 November 2019. https://doi.org/10.2737/RDS-2018-0010.
Biomeme Tick Map, https://maps.biomeme.com/
As per the reviewers suggestions the sentence starting on line 90 has been updated to read as follows…
However, current bioinformatics tools largely focus on the quantitative analysis of raw fluorescence data (Kandlikar et al., 2018; Kemperman & McCall, 2017), with few tools (see examples Young et al. 2019; Biomeme Tick Map, https://maps.biomeme.com/) available to develop a conceptual framework to standardize, integrate, display, and document qPCR fluorescence outputs with associated metadata (Pabinger et al., 2014).
Introduction; line 101: What specific qPCR applications will be supported? Only eDNA? Other applications? In addition, it would be beneficial for the authors to expand on how this tool will be deployed…perhaps most appropriately in the discussion section.
In response to several other comments by both reviewers we have added several areas of additional content better describing the possible applications and extensibility of MDMAPR and specific methods to extend the program. This specific comment has been addressed and the section of writing in question at the end of the introduction has been updated to the following…
These data and their visualization can be applied to environmental DNA qPCR studies and health related qPCR data alike. In this article, we show the strengths of MDMAPR with a focus on environmental DNA applications but also connect the usability of the platform to other uses and describe how the platform can be extended to include more specific purposes.
Introduction; line 102: Hyphenate when open-source is used as an adjective.
This was completed.
Methods and Results; line 108: MDMAP acronym may overlap through searching common web browsers with NOAA’s Marine Debris Monitoring and Assessment Project (MDMAP). Author(s) may consider another acronym for visibility and easier accessibility.
Thank you for the comment. We have updated the name of the program to Molecular Detection Mapping and Analysis Platform for R (MDMAPR). The addition of the R makes it clearer that this is an R-implemented program and separates the name from the other MDMAP program, while at the same time maintaining our acronym with information about the objectives of the program. We have updated the name throughout all documentation and the code.
Methods and Results; line 119 – 120: This is a fairly limited list of thermalcyclers and associated software packages. I can see the next sentence references the possibility of extending to other systems, but this is still a very limited list and in the short term limits the utility to only those labs that have these instruments. What time frame or how extensive of an effort will be required to expand support to other systems? If not in the discussion, the authors should comment on this.
Reviewer 1 has also noted that the platforms chosen were not necessarily the ones that other labs would use, thereby limiting the implementation of the MDMAPR program. We have addressed this concern and better communicated it in the manuscript. Currently, MDMAPR has hardcoded qPCR platforms in the program. For future expansion using this version, the user would need to use the source code and add in additional qPCR platform formats (easily done for someone with R coding skills and the open-source code). For future development of MDMAPR, we will focus on integrating a modular file system where each file would represent a qPCR instrument and could be placed in the file structure of the program for extensibility. To address this comment, we have included the sentence…
The extension of MDMAPR is possible, where additional qPCR platforms can be added to the open-source code, and is addressed in the discussion section (see user guide for details).
Methods and Results; line 122: ‘qPCR well names’. This is adequate for individual qPCR runs, but what happens for subsequent runs or repeated runs where well names are redundant? Is there an underlying database schema that retains individual run information, or is this tool useful for only getting the raw qPCR fluorescence data in a format to be mapped, visualized, and shared one qPCR run at a time?
Thank you for the comment. The current version of MDMAPR focuses on one qPCR run at a time due to the program's limited data collection and storage capacity. Future work on the MDMAPR program will focus on building more comprehensive data collection and storage capacity so that a future version of MDMAPR can document repeated qPCR runs. To clarify this to readers, we have added a sentence in the discussion, which reads as below …
Ongoing development for MDMAPR will incorporate more diverse data structures which will support situations such as multiple qPCR assays in a single reaction and additional metadata including reporting standards recommended by the MIQE Guidelines (Bustin et al., 2009).
Methods and Results; line 125: Pre-set implies the user has some control over this. Does that happen in the qPCR software prior to raw data download to MDMAP or is there user control within the ShinyApp for threshold? Threshold varies considerably in the Example Files in the Data File Preparation page of the app. Can the authors expand on how much control users have over the threshold setting and exactly where that may occur? This is critical since the ChipPCR package uses threshold to calculate Ct values via the function th.cyc().
The threshold for Ct value calculation needs to be provided by users in the metadata spreadsheet. Specifically, users are required to provide a threshold value for each qPCR well under the column named threshold. MDMAPR then uses these threshold values to calculate Ct values via the function th.cyc(). We have also provided detailed clarification in the MDMAPR user guide (see Supplemental File 02). We have updated the sentence in question, and the applicable portion of this sentence reads as follows…
…threshold (this is a user-supplied threshold that is required for every sample submitted to the MDMAPR program and is used by the program to calculate the threshold cycle (Ct) value),…
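For illustration, the principle behind deriving a Ct from a user-supplied threshold is to find where the amplification curve crosses that threshold and interpolate between the flanking cycles. MDMAPR itself delegates this to the chipPCR package's th.cyc(), so the base-R sketch below only shows the idea, not that function's exact algorithm; the toy curve and threshold are invented.

    ct_from_threshold <- function(cycle, fluo, threshold) {
      above <- which(fluo >= threshold)
      if (length(above) == 0) return(NA_real_)   # never crossed: no Ct
      i <- above[1]
      if (i == 1) return(cycle[1])
      # interpolate between the last sub-threshold and first supra-threshold cycle
      cycle[i - 1] + (threshold - fluo[i - 1]) / (fluo[i] - fluo[i - 1]) *
        (cycle[i] - cycle[i - 1])
    }

    # Example: a toy sigmoid amplification curve and a threshold of 0.2
    cyc  <- 1:40
    fluo <- 1 / (1 + exp(-(cyc - 28) / 1.5))
    ct_from_threshold(cyc, fluo, threshold = 0.2)   # about 25.9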
Methods and Results; line 139: Commend the authors for thinking ahead here as inevitably there will be other tools such as MDMAP that have been or will be developed. Standardization of records and record labels will make sharing of data across different platforms more streamlined. This is certainly a strength of this effort.
Thank you for your kind words!
Methods and Results; line 151: Not the target species…rather…the target species DNA
This is corrected.
Methods and Results; line 152 – 153: ‘default cycle threshold value in MDMAP is adjustable’. Another critical point for eDNA studies since different qPCR markers will carry different levels of PCR efficiency and corresponding sensitivity. Can the authors explain why this parameter must be adjusted in the R code itself, and was not included as an adjustable parameter in the MDMAP ShinyApp interface?
Thank you for the comment. The “default cycle threshold value” is the maximum Ct value for a positive detection; samples with Ct values greater than this cutoff are considered negative. We agree with your suggestion and have added a field in the “Dynamic Mapping Visualization” data panel to allow user input. This comment is also addressed in the manuscript, and the updated sentence now reads …
The default maximum Ct value for positive detection in MDMAPR is adjustable as a parameter in the “Dynamic Mapping Visualization” data panel, according to researchers' project needs.
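A minimal sketch of how such a user-adjustable cutoff can be exposed in a Shiny panel follows; the input ID, column names and example values are placeholders, not MDMAPR's actual code.

    library(shiny)

    ui <- fluidPage(
      numericInput("max_ct", "Maximum Ct for a positive detection",
                   value = 40, min = 1, max = 60),
      tableOutput("calls")
    )

    server <- function(input, output, session) {
      results <- data.frame(sample = c("S1", "S2", "S3"),
                            ct     = c(31.2, 42.5, NA))
      output$calls <- renderTable({
        # re-classify detections whenever the user changes the cutoff
        results$detection <- ifelse(!is.na(results$ct) & results$ct <= input$max_ct,
                                    "positive", "negative")
        results
      })
    }

    # shinyApp(ui, server)   # run interactively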
Methods and Results; line 162: change ‘in the field’ to ‘in a given sample’.
This is corrected.
Methods and Results; line 163: change ‘divides’ to categorizes’.
This is corrected.
Methods and Results; line 168 – 170: The selected default qPCR intensity settings are extremely wide. Most eDNA Ct values will likely fall at or above Ct 30, rarely below that unless sampling where the target species is at a very high density (such as epidemiological studies where bacterial/viral loads are extremely high). It’s great that the authors provided user control over the intensity settings, but is there any guidance provided (or can it be provided) on the ShinyApp for some considerations on real world examples for the range of values that should be expected for a given qPCR application?
As pointed out by this reviewer, there are different user groups that can make use of the MDMAPR program. As such, we did not provide specific guidance or suggestions on how the user should select or alter the intensities; we feel that this is best left to the interpretation of the data and the specific study questions of the user. We have, however, made it clearer in several places in the manuscript how the visualization of these colours can be altered, so that users have the information they need to decide how to use this element.
Discussion; line 184: ‘presence or absence’. Is it more appropriate to say ‘relative Ct values’ when referring to qPCR signal intensities. Presence or absence only refers to whatever the set Ct cutoff value is, not the range of values.
Thank you for the comment. Changes have been implemented and better reflect the idea of Ct cut off values and their association to the presence of target DNA as your comment suggests. The updated sentence reads as below …
The visualized data points are colour-coded based on relative cycle threshold (Ct) values (see Tsuji et al. (2019) for discussion on interpreting presence/absence using eDNA assays).
Discussion; line 185 – 186: How do multiple projects or studies get merged and mapped together? I’m thinking along the lines of multi-year projects where the same areas are sampled repeatedly. Can this be done and can the authors expand on that topic a bit more?
This comment was addressed with additional text in the manuscript and a detailed process in the User Guide. The manuscript addition is as follows…
The current version of MDMAPR includes the possibility of merging multiple data sets for visualization. To accomplish this, users will download each of the single-file data sets of interest from the “Data File Preparation” page, combine these files locally, and then upload them to the “Dynamic Mapping Visualization” page (see the associated User Guide for more details).
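For illustration, the local combining step can be as simple as the sketch below (file names are placeholders, and identical column layouts are assumed because each file comes from the same "Data File Preparation" export).

    files  <- c("run_2019_site_A.csv", "run_2020_site_B.csv")
    runs   <- lapply(files, read.csv, stringsAsFactors = FALSE)
    merged <- do.call(rbind, runs)          # stack runs with matching columns
    write.csv(merged, "merged_runs.csv", row.names = FALSE)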
Discussion; line 198 – 201: Although the summary of R packages and CRAN is nice, it’s not relevant to this discussion in my view. More of an advertisement for R. I think the previous sentence provides ample information to document the footprint of R and the extensive set of resources and user groups it provides. This sentence can be removed.
This correction has been made.
Discussion; line 209: remove ‘associated’.
This correction has been made.
Discussion; line 231 – 233: The effort to incorporate color-coded signal intensities is a nice addition, but arguably subjective in nature. The example given (detection of endangered species) may give all detections that would be considered ‘weak’ to most eDNA practitioners. It may be more useful in some circumstances to ‘omit’ the color-coded signal intensity mapping and instead map points with associated Ct values instead. Have the authors considered this alternative as a user option in MDMAP?
Thank you for the comment. The incorporation of color-coded signal intensities is meant to demonstrate how qPCR samples can be differentially visualized on a map in MDMAPR. Future development of MDMAPR will add the ability to map sample points with their Ct values. We have provided this information in the manuscript by updating the sentence, which now reads as follows …
Future development of MDMAPR will incorporate the option of visualizing sample points with their Ct values displayed for a less subjective interpretation of mapped results.
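A sketch of what mapping points with visible Ct values could look like follows, assuming a leaflet map layer; the coordinates, column names and Ct values are invented for the example and are not MDMAPR's actual identifiers.

    library(leaflet)

    pts <- data.frame(lat = c(43.65, 44.02),
                      lon = c(-79.38, -79.90),
                      ct  = c(33.4, 37.8))

    leaflet(pts) %>%
      addTiles() %>%
      addCircleMarkers(lng = ~lon, lat = ~lat,
                       label = ~paste0("Ct: ", ct),   # Ct shown on hover
                       radius = 6)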
Discussion; line 234: ‘temporal relationships’. It’s not clear where the merging of multiple raw data sets representing qPCR runs over time occurs. Does this happen prior to upload on MDMAP? Can the authors provide more context on what a user would have to do to format data for this type of upload and mapping?
The current version of MDMAPR can merge one qPCR raw data set and its associated metadata spreadsheet into a single tabular format in a .csv file. In the merged file, there is one column containing the qPCR raw output along with the other metadata columns. Although this merged file only contains the most recently uploaded data, it provides a standardized data format that allows the merging of multiple datasets. To merge multiple datasets over time, users need to download this file from MDMAPR and combine it with other files of the same type manually on their computers. We have created a user guide on how to use MDMAPR (see Supplemental File 02) to help users become familiar with the program more easily. For clarification, we have added one sentence in the Methods and Results, which reads as below …
The current version of MDMAPR includes the possibility of merging multiple data sets for visualization. To accomplish this, users will download each of the single-file data sets of interest from the “Data File Preparation” page, combine these files on their computer, and then upload them to the “Dynamic Mapping Visualization” page (see the associated User Guide included with the data files for more details).
Discussion; line 247: change ‘or few’ to ‘or a few’.
This was corrected
Discussion; line 260: change ‘provides this framework’ to ‘provides a framework’.
This was corrected
Discussion; line 260: Since there are others developed, MDMAP isn’t the only one. However, it would be useful to provide context of how MDMAP compares or contrasts to Holland et al. 2003 and Konig et al. 2019.
This comment has been addressed. There was a sentence structure problem, and we have reworked the writing to better reflect the intended meaning. The two references in question were review or survey studies that commented on the need for, and implementation of, qPCR data standards and centralization. We have updated the manuscript to reflect our intended meaning…
These large DNA barcode studies were made possible through the use of a standard data ontology and data sharing frameworks. The need for similar data structure and centralization has been identified for qPCR and its associated metadata (Holland et al., 2003; König et al., 2019).
Discussion; line 269 and 271: hyphenate ‘open-source’.
This was corrected
Discussion; line 301 – 303: Certainly a nice feature, but can users also filter by qPCR marker? This could be important when it is known one marker is more sensitive than others. Is markerID retained as a field in the columnar data?
Currently, MDMAPR can only filter data by organism scope, species name, and date. These data filters let users visualize targeted subsets of the data. Future development of MDMAPR will add the ability to filter data by molecular marker. We have provided this information in the manuscript by updating the sentence, which reads as follows…
Currently MDMAPR can filter data by species. Ongoing development of the platform will include other filtering options like filtering by molecular marker.
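For illustration, filtering a merged record table by species — and, as a possible future option, by marker — could be done as below in base R; the column names and values are placeholders (the zebra mussel, Dreissena polymorpha, is used only because Dreissenid mussels were raised as an example in this review).

    dat <- read.csv("merged_runs.csv", stringsAsFactors = FALSE)

    # current behaviour: keep records for one species
    target <- dat[dat$species == "Dreissena polymorpha", ]

    # possible future filter: additionally restrict to one molecular marker
    target <- target[target$marker == "COI", ]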
Discussion; in general: The authors have not addressed anywhere in the paper how MDMAP would handle a qPCR marker that is not species-specific. For instance, many markers are genus-specific (e.g. Dreissenid mussels) or may detect multiple species within a genus but not all species within a genus. How are those circumstances handled within MDMAP?
Currently, support for qPCR assays that target genera would need to be handled during the input of the data files. Using the mussel example, one would need to uniquely code the genus and species name submitted to the MDMAPR platform to account for an assay that targets a higher-level taxonomic rank. All other required data (including taxonID) could be used for the higher taxonomic level. Ongoing development of MDMAPR is looking at addressing this issue by including full Linnaean taxonomy with assay submissions; the visualized data could then be filtered by specific taxonomic ranks on the MDMAPR mapping page. To address the reviewer's comment for this manuscript, two sentences were added to the Materials and Methods section (after line 132) describing how to complete the metadata submission if one wanted to view data for an assay that is not specific to a species but instead to a higher-level taxon. This addition reads as follows…
While most qPCR assays are specific to species, there are some instances where an assay could amplify all taxa below a higher-level taxon (for example all species in a genus). Currently, to address this in the metadata input the user would need to submit the taxonID for the higher level taxon of interest, and where the genus species name was required the user would need to create a unique identifier in place of a specific species to further differentiate the higher taxon-specific assay.
Discussion; line 313 – 316: How much effort will it take to incorporate other platforms since many labs do not use Biomeme or MIC qPCR platforms? Please provide some insight into the difficulty or timeframe it may take for this to happen.
Currently, the addition of qPCR platforms needs to be hard-coded into the MDMAPR program. However, efforts are underway to make the addition of other platforms modular through the creation of platform-specific reference files that MDMAPR can read. We have added two sentences to the end of the paragraph to make it clearer to the reader how additional platforms could be added. Please see the additional sentences below…
Although only a few qPCR platforms are currently supported by the MDMAPR program, the open-source code makes it straightforward for users to add additional platforms directly in the programming. Ongoing development of the MDMAPR platform is focused on making the addition of platforms modular through the creation of reference files for the system to access.
Discussion; line 318 – 320: Suggest rewording one of the two sentences that start with ‘For example’. Redundant.
Thank you for the comment. We agree that this entire paragraph could use a little more clarity. We have reworked the paragraph and addressed this comment (as well as the following two comments). Please find the reworked paragraph below.
The mapping of centralized qPCR data can reveal useful information on the dynamics of species distribution patterns across space and time. MDMAPR can reveal patterns in what appear to be unrelated instances of species occurrences. For example, centralized data storage and mapping of Salmonellosis cases, which are often categorized as sporadic events, may provide insight into the relationships among different outbreaks (Riley, 2019). The accumulation of qPCR results in a centralized repository, like MDMAPR, can unmask interrelationships and could also help to elucidate dispersal pathways and barriers to distributions through visualizing data through time (Nelson & Platnick, 1981).
Discussion; line 322: A descriptor is missing here. MDMAP’s “…”?
Please see above comment.
Discussion; line 322 – 324: This entire sentence needs rewording. It is not clear what the authors are trying to say.
Please see above comment.
Discussion; line 328 – 329: Suggest ending sentence with a ‘.’ after conclusions, removing ‘and’, and starting a new sentence with ‘Conclusions based on absence…’. Run on sentence currently.
This change was completed
Discussion; line 331 – 333: Nice discussion point. Authors may consider mentioning occupancy modelling of eDNA data. One potential integration point of MDMAP with an existing R package for analysis of eDNA data would be ‘eDNAOccupancy’. See Dorazio and Erickson 2017.
Thank you for the fine suggestion. An extra sentence with this example and reference was added to the manuscript, and it also provides a nice transition from this paragraph into the next. The sentence was added as follows…
The choice of R as a coding language for MDMAPR provides further opportunities for the integration of existing modelling analyses such as the R eDNAOccupancy package (Dorazio & Erickson 2017).
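Occupancy-style analyses such as those supported by eDNAOccupancy work from detection/non-detection records organised by site, sample and qPCR replicate. The sketch below only shows preparing that kind of input from Ct calls; the package's own model-fitting calls are not shown, and the column names, cutoff and values are illustrative.

    calls <- data.frame(
      site      = rep(c("pond1", "pond2"), each = 3),
      replicate = rep(1:3, times = 2),
      ct        = c(33.1, NA, 36.4, NA, NA, 39.2)
    )
    # a detection is any well that amplified at or below the chosen cutoff
    calls$detected <- as.integer(!is.na(calls$ct) & calls$ct <= 40)

    det_mat <- xtabs(detected ~ site + replicate, data = calls)
    det_mat   # rows = sites, columns = qPCR replicates, entries = 0/1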
Conclusion; line 363 – 366: Sentence needs rewriting. Run on with too many ‘ands’ along with strangely worded in the phrase ‘MDMAP is addressing a critical need in addressing…
This sentence was addressed and reworked and now reads…
With the quality and reliability improvements of portable qPCR devices, MDMAPR is addressing a critical need by providing a resource to centralize data and present computational options to accompany technological advances.
" | Here is a paper. Please give your review comments after reading it. |
645 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. Breast cancer is a heterogeneous disease. Compared with other subtypes of breast cancer, triple-negative breast cancer (TNBC) is easy to metastasize and has a short survival time, less choice of treatment options. Here, we aimed to identify the potential biomarkers to TNBC diagnosis and prognosis. Material/Methods. Three independent data sets (GSE45827, GSE38959, GSE65194) were downloaded from the Gene Expression Omnibus (GEO). The R software packages were used to integrate the gene profiles and identify differentially expressed genes (DEGs). A variety of bioinformatics tools were used to explore the hub genes, including the DAVID database, STRING database and Cytoscape software. Reverse transcription quantitative PCR (RT-qPCR) was used to verify the hub genes in 14 pairs of TNBC paired tissues. Results. In this study, we screened out 161 DEGs between 222 non-TNBC and 126 TNBC samples, of which 105 genes were up-regulated and 56 were down-regulated. These DEGs were enriched for 27 GO terms and 2 pathways. GO analysis enriched mainly in 'cell division', 'chromosome, centromeric region' and 'microtubule motor activity'. KEGG pathway analysis enriched mostly in 'Cell cycle' and 'Oocyte meiosis'. PPI network was constructed and then 10 top hub genes were screened.</ns0:p><ns0:p>According to the analysis results of the Kaplan-M eier survival curve, the expression levels of only NUF2, FAM83D and CENPH were associated with the recurrence-free survival in TNBC samples (P<0.05). RT-qPCR confirmed that the expression levels of NUF2 and FAM83D in TNBC tissues were indeed up-regulated significantly. Conclusions. The comprehensive analysis showed that NUF2 and FAM83D could be used as potential biomarkers for diagnosis and prognosis of TNBC.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>There were approximately 18.1 million new cancer cases worldwide in 2018, including 2.1 million breast cancers <ns0:ref type='bibr' target='#b4'>(Bray et al. 2018)</ns0:ref>. Breast cancer is the highest incidence among new morbidity and mortality in females with cancer <ns0:ref type='bibr' target='#b5'>(Cao et al. 2019)</ns0:ref>. According to variations in the expressions of the estrogen receptor (ER), progesterone receptor (PR) and human epidermal growth factor receptor 2 (HER2), breast cancer were defined as four major intrinsic molecular subtypes: luminal A, luminal B, HER2-positive and triple-negative breast cancer (TNBC) <ns0:ref type='bibr' target='#b42'>(Sorlie et al. 2001)</ns0:ref>. TNBC is characterized by a lack of expression of the ER and PR as well as HER2 <ns0:ref type='bibr' target='#b41'>(Serra et al. 2014)</ns0:ref>. TNBC that occurs mostly in premenopausal young women represents approximately 15-20% of all invasive breast cancers <ns0:ref type='bibr' target='#b12'>(Foulkes et al. 2010)</ns0:ref>. TNBC is a highly heterogeneous disease, not only at the molecular level, but also in terms of its pathology and clinical manifestation. Its prognosis is worse than other types of breast cancer as well as the risk of death is higher <ns0:ref type='bibr' target='#b35'>(Metzger-Filho et al. 2012)</ns0:ref>. Chemotherapy is currently the primary adjuvant treatment, due to the lack of effective molecular targets, it is not only insensitive to endocrine therapy and HER-2 targeted therapy, but also easily causes chemo-resistant <ns0:ref type='bibr' target='#b53'>(Wein & Loi 2017)</ns0:ref>.</ns0:p><ns0:p>TNBC has become an intractable problem for clinical treatment.</ns0:p><ns0:p>Current researchers are focusing on personalized treatment based on the multi-gene assays <ns0:ref type='bibr' target='#b38'>(Pan et al. 2019)</ns0:ref>. With the continuous development of high-throughput sequencing technology, bioinformatics analysis plays a key role in the diagnosis, prognosis and screening of tumors <ns0:ref type='bibr' target='#b13'>(Goldfeder et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b33'>Ma et al. 2020</ns0:ref>). Many genes have been identified as signatures for diagnosis and prognosis of triple negative breast cancer <ns0:ref type='bibr' target='#b10'>(Dai et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b43'>Stovgaard et al. 2020</ns0:ref>).</ns0:p><ns0:p>Recent study found that CHD4-β1 integrin axis may be a prognostic marker for TNBC using nextgeneration sequencing and bioinformatics analysis <ns0:ref type='bibr' target='#b37'>(Ou-Yang et al. 2019</ns0:ref>). The computational analysis of complex biological networks could help research scholars identify potential genes related to TNBC <ns0:ref type='bibr' target='#b26'>(Li et al. 2020)</ns0:ref>.</ns0:p><ns0:p>In this study, we first identified a group of differentially expressed genes (DEGs) associated with TNBC from the Gene Expression Synthesis (GEO) database. Then, based on bioinformatics PeerJ reviewing PDF | (2020:05:49377:1:1:NEW 16 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed analysis, three candidate genes related to TNBC diagnosis and prognosis were successfully identified. Finally, reverse transcription quantitative PCR (RT-qPCR) was used to verify the candidate biomarkers in TNBC tissues and adjacent tissues. 
The current research aimed to identify potential biomarkers with prognostic and diagnostic value in triple-negative breast cancer.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials and methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Data source</ns0:head><ns0:p>Triple-negative breast cancer gene expression data sets in this study were obtained from the publicly available GEO databases (https://www.ncbi.nlm.nih.gov/geo/) <ns0:ref type='bibr' target='#b2'>(Barrett et al. 2013)</ns0:ref>. Three independent data sets from GSE45827 <ns0:ref type='bibr' target='#b14'>(Gruosso et al. 2016)</ns0:ref>, GSE38959 <ns0:ref type='bibr' target='#b22'>(Komatsu et al. 2013)</ns0:ref>, GSE65194 <ns0:ref type='bibr' target='#b34'>(Maire et al. 2013)</ns0:ref> were included. GSE45827 consists of 100 non-triple-negative breast cancer (non-TNBC) samples and 41 TNBC samples, GSE65194 consists of 109 non-TNBC and 55 TNBC samples, both GSE65194 and GSE45827 are based on the platform GPL570 [HG-U133_Plus_2] Affymetrix Human Genome U133 Plus 2.0 Array. GSE38959 consists of 13 non-TNBC and 30 TNBC samples, and the platform is GPL4133 Agilent-014850 Whole Human Genome Microarray 4x44K G4112F. All of the data sets were available online.</ns0:p><ns0:p>A total of 14 TNBC patients were collected in Chongqing Traditional Chinese Medicine Hospital. All patients were diagnosed with triple negative breast cancer (ER-negative, PRnegative, HER-2-negative) by histopathological examination, excluding other malignant tumors and no important organ diseases, such as severe cardiovascular, liver disease as well as renal insufficiency. A total of 28 frozen tissue specimens contained 14 tumor tissues and 14 matched adjacent non-tumor tissues were obtained. All tissues were collected immediately after surgical resection, and snap-frozen in liquid nitrogen until RNA extraction. Clinical information were obtained for all patients by the investigator from medical records. The more detailed clinical information are shown in Supplemental file 1. This study has been approved by the Chongqing Hospital of Traditional Chinese Medicine ethics committee and written informed consent was obtained from all patients.</ns0:p><ns0:p>PeerJ reviewing PDF | (2020:05:49377:1:1:NEW 16 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed Data processing of DEGs R software (v3.6.2; http://www.r-project.org) was used for bioinformatics analysis. First, the gene expression profiles of three data sets were downloaded by using GEOquery package. Subsequently, background adjustment were performed by using the dplyr package. Finally, we utilized log2 transformation to normalize the data using the limma package. The RobustRankAggreg package was used to screen the differentially expressed genes, using adjust P value < 0.01 and |logFC|≥ 2 as cut-off criteria. The VennDiagram package was used to present significant co-expression genes.</ns0:p></ns0:div>
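As an editorial illustration of the processing workflow described above (not the authors' exact script), a condensed R sketch for one series could look like the following. The grouping variable and the log2 step are placeholders taken from the description; the background adjustment with dplyr and the cross-series aggregation with RobustRankAggreg are not shown.

    library(GEOquery)
    library(limma)

    gse  <- getGEO("GSE45827", GSEMatrix = TRUE)[[1]]   # ExpressionSet
    expr <- exprs(gse)
    expr <- log2(expr + 1)                               # log2 normalisation

    group  <- factor(pData(gse)$characteristics_ch1)     # placeholder grouping
    design <- model.matrix(~ group)

    fit  <- eBayes(lmFit(expr, design))
    degs <- topTable(fit, coef = 2, number = Inf)
    degs <- degs[degs$adj.P.Val < 0.01 & abs(degs$logFC) >= 2, ]  # cut-offs used above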
<ns0:div><ns0:head>GO enrichment and KEGG pathway analysis of DEGs</ns0:head><ns0:p>Gene ontology (GO)(The Gene Ontology 2019) is a tool for annotating genes from various ontologies, including biological processes (BP), cellular components (CC), molecular functions (MF). The Kyoto Encyclopedia of Genes and Genomes (KEGG) <ns0:ref type='bibr' target='#b19'>(Kanehisa et al. 2019</ns0:ref>) is famous for 'understanding the advanced functions and utility resource library of biological systems', KEGG pathway mainly presents intermolecular interactions and intermolecular networks. GO enrichment and KEGG pathway analysis for DEGs were performed through the DAVID database (v6.8; http://david.abcc.ncifcrf.gov) <ns0:ref type='bibr' target='#b18'>(Jiao et al. 2012)</ns0:ref> with 'after FDR' (corrected P-Value < 0.01, gene count ≥ 5) set as statistically significant. The ggplot2 package in R was used to visualize the GO functional enrichment results.</ns0:p></ns0:div>
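The section above notes that ggplot2 was used to visualise the enrichment output from DAVID. A minimal sketch of such a plot follows, using a made-up stand-in for the DAVID results table; the column names and values are illustrative only.

    library(ggplot2)

    enrich <- data.frame(
      term       = c("cell division", "mitotic nuclear division", "cell cycle"),
      category   = c("BP", "BP", "KEGG"),
      gene_ratio = c(0.22, 0.18, 0.12),
      fdr        = c(1e-12, 4e-10, 2e-6)
    )

    ggplot(enrich, aes(x = gene_ratio, y = reorder(term, gene_ratio),
                       colour = -log10(fdr), size = gene_ratio)) +
      geom_point() +
      facet_grid(category ~ ., scales = "free_y", space = "free_y") +
      labs(x = "Gene ratio", y = NULL, colour = "-log10(FDR)")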
<ns0:div><ns0:head>Protein-protein Interaction (PPI) networks and hub gene analysis</ns0:head><ns0:p>The online STRING database (v11.0; https://string-db.org/) collects and integrates information on the correlation between known and predicted proteins from multiple species <ns0:ref type='bibr' target='#b45'>(Szklarczyk et al. 2019)</ns0:ref>. PPI network analysis could systematically study the molecular mechanisms of disease and discover new drug targets. The DEGs screened previously were mapped via the STRING database.</ns0:p><ns0:p>Subsequently, visual analysis of the PPI network was matched to Cytoscape (v3.7.2; https://cytoscape.org), and hub genes were analyzed with the Cytoscape plugin CytoHubba <ns0:ref type='bibr' target='#b7'>(Chin et al. 2014</ns0:ref>). The DMNC algorithm was used to identify the top 10 hub genes.</ns0:p></ns0:div>
<ns0:div><ns0:head>Survival analysis</ns0:head><ns0:p>The Kaplan-Meier plotter, an online survival analysis tool, can rapidly assess the effect of 54k genes on survival in 21 cancer types (http://kmplot.com/analysis/), including the effect of 22,277 genes on breast cancer prognosis <ns0:ref type='bibr' target='#b16'>(Gyorffy et al. 2012;</ns0:ref><ns0:ref type='bibr' target='#b17'>Gyorffy et al. 2010)</ns0:ref>. In this study, TNBC patients were screened out based only on the intrinsic subtype (basal: n = 879). Probes of genes were selected as 'only JetSet best probe set' <ns0:ref type='bibr' target='#b25'>(Li et al. 2011)</ns0:ref>. Recurrence-free survival (RFS) was selected for survival analysis of the candidate hub genes, and P < 0.05 was considered statistically significant.</ns0:p></ns0:div>
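The comparison above was run with the online Kaplan-Meier plotter. For readers with their own follow-up data, an analogous log-rank comparison can be run locally with the survival package; the data frame below is a made-up stand-in, not study data.

    library(survival)

    surv_dat <- data.frame(
      rfs_months = c(12, 34, 60, 8, 45, 22),
      relapse    = c(1, 0, 0, 1, 0, 1),          # 1 = event, 0 = censored
      nuf2_group = c("high", "low", "low", "high", "low", "high")
    )

    fit <- survfit(Surv(rfs_months, relapse) ~ nuf2_group, data = surv_dat)
    survdiff(Surv(rfs_months, relapse) ~ nuf2_group, data = surv_dat)  # log-rank test
    plot(fit, col = c("red", "blue"), xlab = "Months", ylab = "RFS probability")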
<ns0:div><ns0:head>Validation of hub genes</ns0:head><ns0:p>RT-qPCR was used to further verify the mRNA expression of the candidate hub genes in TNBC tissues and adjacent tissues. Total RNA from the TNBC patients' tissue samples was isolated with TRIzol reagent (Invitrogen, Carlsbad, CA, USA). Total RNA quantity was evaluated with a NanoDrop ND-1000 spectrophotometer (Thermo Fisher Scientific, Waltham, MA, USA). RNA was reverse transcribed into cDNA according to the instructions of the Takara kit (Takara Bio Inc., Japan). RT-qPCR reactions were performed using the SYBR Green PCR Master Mix System (Tiangen Biotech, Beijing, China). GAPDH was used as a control to compare the relative expression of NUF2, FAM83D and CENPH mRNA in 14 pairs of triple-negative breast cancer paired tissues. Three replicate wells were run for each target gene in the RT-qPCR experiment, and the primer sequences are shown in Table <ns0:ref type='table'>1</ns0:ref>. The primers of the target genes and the internal reference gene were synthesized by Sangon Biotech (Shanghai) Co., Ltd.</ns0:p></ns0:div>
<ns0:div><ns0:head>Statistical analysis</ns0:head><ns0:p>Statistical analyses in this study were performed with R software v3.6.2 and GraphPad Prism 5.0. A two-tailed Student's t-test was used to assess the significance of differences between two groups, and P < 0.05 was considered statistically significant. The RT-qPCR results were calculated and evaluated using the 2^-ΔΔCt method.</ns0:p></ns0:div>
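For illustration, the 2^-ΔΔCt calculation for a single gene in one tumour/adjacent pair is sketched below; the Ct values are invented for the example.

    ct <- data.frame(
      gene   = c("NUF2", "GAPDH", "NUF2", "GAPDH"),
      tissue = c("tumour", "tumour", "normal", "normal"),
      ct     = c(26.1, 18.0, 28.9, 18.2)
    )

    dct_tumour  <- ct$ct[1] - ct$ct[2]   # target minus reference, tumour
    dct_normal  <- ct$ct[3] - ct$ct[4]   # target minus reference, adjacent normal
    ddct        <- dct_tumour - dct_normal
    fold_change <- 2^(-ddct)             # about 6-fold higher in the tumour here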
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>DEGs in non-TNBC and TNBC samples</ns0:head><ns0:p>Three series of matrix files, for a total of 222 non-TNBC samples and 126 TNBC samples, were selected to identify DEGs (P < 0.01, |logFC| ≥ 2). A total of 488 genes were identified after analyzing GSE45827, of which 259 genes were up-regulated and 229 genes were down-regulated.</ns0:p><ns0:p>In gene chip GSE38959, 794 DEGs were identified, 478 genes were up-regulated, and 316 genes were down-regulated. And from GSE65194, 531 DEGs including 282 up-regulated genes and 249 down-regulated genes were identified. The Venn diagrams showed that a total of 161 DEGs overlapped, in which 105 genes were up-regulated and 56 genes were down-regulated (Fig. <ns0:ref type='figure'>1</ns0:ref>).</ns0:p><ns0:p>The more detailed results are shown in Supplemental file 2.</ns0:p></ns0:div>
<ns0:div><ns0:head><Fig. 1 > GO and KEGG pathway analysis of DEGs</ns0:head><ns0:p>Next, we attempted to identify the biological function of the 161 common DEGs. GO enrichment and KEGG pathway analysis were performed through the DAVID database. Terms with matching the filter criteria were collected and grouped into clusters according to their membership similarities. As shown in Figure <ns0:ref type='figure'>2</ns0:ref>, the top 5 functions for biological processes were as follows: cell division, mitotic nuclear division, chromosome segregation, sister chromatid cohesion and cell proliferation. The top 5 functions for cellular components were as follows: chromosome centromeric region, midbody, nucleus, condensed chromosome kinetochore and kinetochore. The molecular functions enriched were associated with microtubule motor activity, microtubule binding, ATP binding and protein binding. The KEGG analysis showed that the main enriched signaling pathways were related to the cell cycle and oocyte meiosis. The more detailed results are shown in Supplemental file 3.</ns0:p></ns0:div>
<ns0:div><ns0:head>PPI network construction and hub genes detection</ns0:head><ns0:p>In order to better understand which of these DEGs were most likely to be central regulatory genes for TNBC, a PPI network was constructed through the online STRING platform and Cytoscape software (Fig. <ns0:ref type='figure'>3A</ns0:ref>). Subsequently, according to the DMNC algorithm, the top 10 hub genes were screened through cytoHubba and are ranked as follows: ANLN, FAM64A, CDCA2, NUF2, FAM83D, CENPH, KIF14, MKLP-1, KIF15, DEPDC1 (Fig. <ns0:ref type='figure'>3B</ns0:ref>). The expression levels of the 10 hub genes were all significantly increased in the PPI network. We initially speculate that these 10 candidate hub genes may be related to tumor occurrence.</ns0:p></ns0:div>
<ns0:div><ns0:head>Survival analysis and validation of hub genes</ns0:head><ns0:p>To examine whether the expression levels of the candidate hub genes were associated with the outcome of TNBC patients, the correlation between these genes and the recurrence-free survival of TNBC patients was analyzed with the Kaplan-Meier plotter. According to the Kaplan-Meier survival curves, TNBC patients with higher expression levels of NUF2, FAM83D and CENPH had significantly decreased recurrence-free survival (P<0.05), whereas ANLN, FAM64A, CDCA2, KIF14, MKLP-1, KIF15 and DEPDC1 did not (P>0.05). More specific information about these survival-related hub genes is shown in Figure <ns0:ref type='figure'>4</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>Finally, we validated the expression levels of NUF2, FAM83D and CENPH in 14 pairs of triple-negative breast cancer paired tissues by using RT-qPCR. Figure <ns0:ref type='figure'>5</ns0:ref> showed that the expression levels of NUF2 and FAM83D were significantly higher in TNBC tissues than in adjacent tissues (P<0.001), but CENPH was not (P=0.68). Combined with the above analysis, we preliminarily concluded that NUF2 and FAM83D may be potential biomarkers for TNBC diagnosis and prognosis. The more detailed results are shown in Supplemental file 4.</ns0:p></ns0:div>
<ns0:div><ns0:head><Fig. 5 > Discussion</ns0:head><ns0:p>TNBC is considered as an aggressive subtype of breast cancer. Compared with other types of breast cancer, TNBC is characterized by high malignancy rate, easier recurrence <ns0:ref type='bibr' target='#b11'>(Dent et al. 2007)</ns0:ref>, and low survival rate <ns0:ref type='bibr' target='#b6'>(Carey et al. 2006)</ns0:ref>. Despite advances in the targeted therapies of TNBC, including the approval of poly-ADP-ribose polymerase (PARP) and immune check-point inhibitors for the treatment of BRCA germ cell mutated breast cancers, there is still a lack of clinical evidence to evaluate their efficacy for TNBC patients <ns0:ref type='bibr' target='#b49'>(Vagia et al. 2020)</ns0:ref>. Therefore, it is necessary to identify effective molecular therapeutic targets for TNBC.</ns0:p><ns0:p>In the present study, we screened out 161 DEGs between 222 non-TNBC and 126 TNBC samples by analyzing three datasets, of which 105 were up-regulated and 56 were down-regulated.</ns0:p><ns0:p>The GO enrichment analysis and KEGG pathways showed that the screened DEGs were enriched for 27 GO terms and 2 pathways. To further investigate the interrelationship of 161 DEGs, PPI network was first constructed and then 10 top hub genes were screened out, including ANLN, FAM64A, CDCA2, NUF2, FAM83D, CENPH, KIF14, MKLP-1, KIF15, DEPDC1. The analysis results of the Kaplan-Meier survival curve showed that the expression levels of NUF2, FAM83D and CENPH were associated with the recurrence-free survival in TNBC samples (P<0.05). Finally, we found that the expression levels of only NUF2 and FAM83D did increase significantly in TNBC tissues by using RT-qPCR.</ns0:p><ns0:p>NUF2 is an essential component of the kinetochore-associated NDC80 complex, which plays a regulatory role in chromosome segregation and spindle checkpoint activity <ns0:ref type='bibr' target='#b28'>(Liu et al. 2007;</ns0:ref><ns0:ref type='bibr' target='#b59'>Zhang et al. 2015)</ns0:ref>. Several studies have shown that NUF2 was associated with the development of multiple cancers. The results showed that the expression of NUF2 was associated with poor prognosis in patients with colorectal cancer <ns0:ref type='bibr' target='#b21'>(Kobayashi et al. 2014</ns0:ref>) and oral cancer <ns0:ref type='bibr' target='#b47'>(Thang et al. 2016</ns0:ref>), which may be related to the regulation of tumor cell apoptosis involved in the NUF2. <ns0:ref type='bibr' target='#b44'>Sugimasa H et al (Sugimasa et al. 2015)</ns0:ref> demonstrated that the NUF2 gene could be directly transactivated by the heterogeneous ribonucleoprotein K (hnRNP K), and that the hnRNP K-NUF2 axis affected the growth of colon cancer cells by participating in processes of mitosis and proliferation.</ns0:p><ns0:p>Recent studies have shown that NUF2 was also closely related to breast cancer. Xu W et al <ns0:ref type='bibr' target='#b57'>(Xu et al. 2019</ns0:ref>) confirmed that NUF2 was indeed up-regulated in breast cancer tissue by bioinformatics analysis and RT-qPCR assay, and that NUF2 may regulate the carcinogenesis and progression of breast cancer via cell cycle-related pathways. However, the expression level changes of NUF2 in triple-negative breast cancer have not yet been studied. In this study, we found that the expression level of NUF2 was higher in triple-negative breast cancer than in non-triple negative breast cancer and TNBC patients with higher NUF2 expression level had significantly reduced the recurrencefree survival. 
GO enrichment analysis shows that NUF2 is mainly involved in cell division, mitotic Manuscript to be reviewed nuclear division, chromosome segregation and sister chromatid cohesion, their dysregulation impact significantly on development of cancer <ns0:ref type='bibr' target='#b1'>(Bakhoum et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b15'>Guo et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b30'>Lopez-Lazaro 2018)</ns0:ref>. Based on the above analysis, we speculate that NUF2 plays an important role in tumor progression, and NUF2 may be serve as a biomarker for diagnosis and prognosis of triplenegative breast cancer. Certainly, the specific molecular mechanism of NUF2 expression level changes in TNBC still need to be further studied.</ns0:p><ns0:p>FAM83D belongs to the FAM83 family, which could regulate cell proliferation, growth, migration and epithelial to mesenchymal transition <ns0:ref type='bibr' target='#b24'>(Li et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b40'>Santamaria et al. 2008</ns0:ref>). The studies have found that FAM83D could not only affect cell proliferation and motility through the tumor suppressor gene FBXW7 <ns0:ref type='bibr' target='#b36'>(Mu et al. 2017)</ns0:ref> or ERK1/ERK2 signaling cascade <ns0:ref type='bibr' target='#b51'>(Wang et al. 2015)</ns0:ref>, but also affect breast cancer cell growth and promote epithelial cell transformation through MAPK signaling <ns0:ref type='bibr' target='#b8'>(Cipriano et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b9'>Cipriano et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b23'>Lee et al. 2012)</ns0:ref>. The expression of FAM83D was significantly increased in primary breast cancer and the high expression level of FAM83D was closely related to the adverse clinical outcomes and distant metastasis in breast cancer patients <ns0:ref type='bibr' target='#b52'>(Wang et al. 2013)</ns0:ref>. In our study, we found that the expression of FAM83D was significantly increased in TNBC patients and TNBC patients with higher FAM83D expression level had significantly reduced the recurrence-free survival. GO enrichment analysis shows that FAM83D is mainly involved in cell division, mitotic nuclear division and cell proliferation, their dysregulation have a major impact on the development of cancer <ns0:ref type='bibr' target='#b1'>(Bakhoum et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b30'>Lopez-Lazaro 2018;</ns0:ref><ns0:ref type='bibr' target='#b56'>Wu et al. 2019)</ns0:ref>. We speculated that FAM83D might play a role in the progression and prognosis of triple-negative breast cancer.</ns0:p><ns0:p>Centromere protein H (CENP-H) is a component of the kinetochore and plays an essential role in mitotic processes <ns0:ref type='bibr' target='#b31'>(Lu et al. 2017)</ns0:ref>, accurate chromosome segregation <ns0:ref type='bibr' target='#b61'>(Zhu et al. 2015)</ns0:ref> as well as appropriate kinetochore assembly <ns0:ref type='bibr' target='#b60'>(Zhao et al. 2012)</ns0:ref>. Many studies have shown that CENPH is closely associated with human cancers, including colorectal cancer <ns0:ref type='bibr' target='#b54'>(Wu et al. 2017)</ns0:ref>, renal cell carcinoma <ns0:ref type='bibr' target='#b55'>(Wu et al. 2015)</ns0:ref>, non-small cell lung cancer <ns0:ref type='bibr' target='#b27'>(Liao et al. 2009</ns0:ref>) as well as breast cancer <ns0:ref type='bibr' target='#b50'>(Walian et al. 2016</ns0:ref>). 
However, there is no current evidence on the correlation PeerJ reviewing PDF | (2020:05:49377:1:1:NEW 16 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed between CENPH and triple negative breast cancer. In this study, we found that there is no significant correlation between the mRNA expression of CENPH and triple negative breast cancer.</ns0:p><ns0:p>It is worth noting that protein-coding genes are not the sole drivers for cancer. Breast cancer is also related to the expressions of non-coding RNAs, include repetitive DNA <ns0:ref type='bibr'>(Yandim & Karakulah 2019)</ns0:ref>, transposable element <ns0:ref type='bibr' target='#b20'>(Karakulah et al. 2019)</ns0:ref>, micro RNA <ns0:ref type='bibr' target='#b0'>(Aslan et al. 2020)</ns0:ref> and Long non-coding RNA <ns0:ref type='bibr' target='#b39'>(Riahi et al. 2020)</ns0:ref>,etc. In this study, we have found that the expressions of NUF2 and FAM83D are associated with triple-negative breast cancer. Next, we will further investigate whether the expression changes of NUF2/FAM83D in triple-negative breast cancer are caused by non-coding RNA.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>In summary, we firstly demonstrated that the mRNA levels of NUF2/ FAM83D have changed significantly in TNBC tissues compared to adjacent tissues. The mRNA expression levels of NUF2/FAM83D are significantly up-regulated in TNBC tissues. NUF2/FAM83D might serve as potential molecular biomarkers for diagnosis and prognostic indicators of TNBC. However, the functional mechanisms of NUF2 and FAM83D in TNBC patients are still to be further studied, including the expression of their protein levels and their relationship with the clinical characteristics of TNBC patients and so on. Therefore, we still need to do more experiments before clinical trials.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 5</ns0:head><ns0:p>The relative expression levels of NUF2, FAM83D and CENPH mRNA in 14 pairs of triplenegative breast cancer (TNBC) paired tissues</ns0:p><ns0:p>The mRNA expression levels of NUF2 and FAM83D were increased significantly in most TNBC lesions compared with para-adjacent tissues, but not CENPH. ***P<0.001.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ reviewing PDF | (2020:05:49377:1:1:NEW 16 Aug 2020)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ reviewing PDF | (2020:05:49377:1:1:NEW 16 Aug 2020)</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,42.52,280.87,525.00,266.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,280.87,525.00,374.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,250.12,525.00,209.25' type='bitmap' /></ns0:figure>
</ns0:body>
" | "Cover Letter
Dear editors:
Thank you very much for giving us the opportunity to revise present manuscript. We are glad to revise and submit the revised manuscript (ID: peerj-49377).
On the basis of the advices raised by editors and reviewers, the manuscript was revised thoroughly after the discussion and approval by all the authors. We have uploaded the manuscript with computer-generated tracked changes to the Revision Response Files section.
We would like to express our great appreciation for your suggestions and comments. It is valuable and helpful for revising and improving our paper, as well as the important guiding significance to our researches.
We believe that the manuscript is now suitable for publication in PeerJ.
On behalf of all co-authors.
Department of Laboratory Medicine,
Chongqing Hospital of Traditional Chinese Medicine
No. 6, Qizhi Road, 400021, Chongqing, P.R China
Tel.: (+86) 023-67063765
Response to Editor:
1. Breast cancer also shows changes in the expressions of noncoding RNAs, (e.g. PMID: 31536958 and PMID: 31824778) and one should also mention that protein-coding genes should not be the sole drivers. The authors include these in the revised version of the discussion.
Response: We thank you for your constructive suggestion. As you suggested, we have added the following to the discussion section: It is worth noting that protein-coding genes are not the sole drivers of cancer. Breast cancer is also related to the expression of non-coding RNAs, including repetitive DNA, transposable elements, micro RNAs and long non-coding RNAs. In this study, we found that the expression of NUF2 and FAM83D is associated with triple-negative breast cancer. Next, we will further investigate whether the expression changes of NUF2/FAM83D in triple-negative breast cancer are caused by non-coding RNAs.
2. Figure 3 is not clear to me. Gene symbols cannot be read properly and the scroll bar should be removed from 3B. This figure should be improved in the revised manuscript.
Response: Thanks for your valuable suggestions; they are very helpful. Figure 3 has been improved in the revised manuscript and is now easier to understand.
3. In Figure 2, 'Go_term' should be replaced with 'GO terms'. It is also not clear to me what is 'EnrichmentScore' and how it was calculated?
Response: Thank you very much for carefully and patiently reviewing our manuscript. This was an oversight on our part. The abscissa label in Figure 2 should be 'Gene Ratio' instead of 'EnrichmentScore'. 'Gene Ratio' is the proportion of the user-provided genes that are found in the given ontology term (only input genes with at least one ontology term annotation are included in the calculation). We have now corrected this in the revised manuscript.
Response to Reviewer #1:
1. There are many vague sentences which should be edited, such as a. “GO enrichment analysis contain 13 biological processes, 10 cellular components and 4 molecular functions”.b. “...deal with various confusing biological issues”.
Response: Thanks for your careful review. (a) The sentence “GO enrichment analysis contain 13 biological processes, 10 cellular components and 4 molecular functions” was imprecise; we have revised it to “These DEGs were enriched for 27 GO terms and 2 pathways.” (b) The phrasing of this sentence was not accurate, and we have deleted it in the revised manuscript.
2. Is there any novelty in the present work, which was not reported earlier?
Response: We thank your constructive suggestion. As you suggested, we added a statement in the 'Conclusion' section of the revised manuscript. We firstly demonstrated that the mRNA levels of NUF2/ FAM83D have changed significantly in TNBC tissues compared to adjacent tissues. The mRNA expression levels of NUF2/FAM83D are significantly up-regulated in TNBC tissues.
3. DEGs and survival analysis were done in different data sets. Will this create any technical error in the results? Authors should justify
Response: Thank you for this comment. The Kaplan-Meier plotter (http://kmplot.com/analysis/) can assess the effect of 54,000 genes on survival in 21 cancer types, and its miRNA subsystem includes 11,000 samples from 20 distinct cancer types. Sources for the databases include GEO, EGA, and TCGA. Therefore, DEG and survival analyses can be performed on different data sets (e.g. PMID: 31387622 and PMID: 32500028).
4. Why was Recurrence-free survival (RFS) presented? Why not overall survival?
Response: Thanks for your careful review. The online KM plotter tool contains 6234 breast cancer patients, of whom 3955 have recurrence-free survival (RFS) data and 1402 have overall survival (OS) data. Under the current screening conditions, 198 TNBC patients have recurrence-free survival data and only 9 TNBC patients have overall survival data. Therefore, we verified the hub genes using recurrence-free survival data (e.g. PMID: 30110125).
5. How are the GO and KEGG pathway analysis results linked to TNBC development?
Response: Thank you for your consideration. GO enrichment analysis shows that NUF2 is mainly involved in cell division, mitotic nuclear division, chromosome segregation and sister chromatid cohesion, while FAM83D is mainly involved in cell division, mitotic nuclear division and cell proliferation. Dysregulation of these processes significantly impacts the development of cancers (e.g. PMID: 29342134, PMID: 31624151, PMID: 24121792 and PMID: 29482784). We have modified the relevant sentences in the revised manuscript.
6. PPI network was constructed from publicly available data, the network should be validated and should be compared with the random network model [Molecular Genetics and Genomics volume 294, pages931–940(2019)].
Response: Thank you for this comment. The PPI network in the cited study (PMID: 30945018) was constructed from the GeneMANIA database, whereas the PPI network in this study was constructed from the STRING database. We locked the width and height of the nodes in Cytoscape, preserving the degree of each node as in the original network. Many articles (PMID: 32449517, PMID: 32357912, PMID: 32045668) have analyzed protein-protein interactions using this database without comparing them with a random network model; we believe this reflects a difference in focus. After all, the main purpose at present is to screen out the core genes related to triple-negative breast cancer. Subsequently, we will also verify the selected genes through PCR experiments.
7. The authors should also calculate the expression correlation between hub genes. If there are higher correlation, then those hub genes are probably involved in similar biological function.
Response: Thank you for your suggestion. Figure 3B in the revised manuscript shows the interactions between the hub genes. In addition, Supplemental file 3 has been added, from which one can see which biological functions the hub genes are involved in.
Response to Reviewer #2:
1. There are some sentences that were grammatically incorrect. Line 62 “Many genes have been found could be used as signatures for TN breast cancer diagnosis and prognosis”; Line150-151“The Venn diagrams showed that a total of 161 genes were co-expression”; Line 228-229 “Certainly, the specific molecular mechanisms by which NUF2 mediates TNBC carcinogenesis still need to be further study.”
Response: Thank you very much for carefully reviewing our manuscript. As you suggested, we have modified these sentences in the revised manuscript.
Line 62: “Many genes have been found could be used as signatures for TN breast cancer diagnosis and prognosis” has been revised to “Many genes have been identified as signatures for diagnosis and prognosis of triple negative breast cancer.”
Line 150-151: “The Venn diagrams showed that a total of 161 genes were co-expression” has been revised to “the Venn diagrams showed that a total of 161 DEGs overlapped, in which 105 genes were up-regulated and 56 genes were down-regulated.”
Line 228-229: “Certainly, the specific molecular mechanisms by which NUF2 mediates TNBC carcinogenesis still need to be further study” has been revised to “Certainly, the specific molecular mechanism of NUF2 expression level changes in TNBC still need to be further studied.”
2. Some other phrases, albeit grammatically correct, were scientifically confusing. Line 67 “deal with various confusing biological issues”; Line 72-73 “ This comprehensive analysis may provide a meaningful contribution to the targeted treatment of triple negative breast cancer”; Line 151” clearly up-regulated”; Line 221-222 “However, the role of NUF2 in triple negative breast cancer has not been conducted”.
Response: Thank you very much for patiently reviewing our manuscript. The wording on line 67 was not accurate, and we have deleted it in the revised manuscript. The wording on lines 72-73, 151 and 221-222 was incorrect, and we have modified it in the revised manuscript.
Line 72-73: “This comprehensive analysis may provide a meaningful contribution to the targeted treatment of triple negative breast cancer” has been revised to “The current research aimed to identify potential biomarkers with prognostic and diagnostic value in triple-negative breast cancer.”
Line 151: The term “clearly up-regulated” was inappropriate in this sentence. We have revised the sentence to read “105 genes were up-regulated and 56 genes were down-regulated”.
Line 221-222: “However, the role of NUF2 in triple negative breast cancer has not been conducted” has been revised to “However, the expression level changes of NUF2 in triple-negative breast cancer have not yet been studied.”
3. Line 155”161common” space should be place between “161” and “common”. So does line 147 ” 259genes”
Response: Thanks for your suggestion. As you suggested, we have modified them in the revised manuscript.
4. Please define “BP” in line 158, and “CC” in line 160 and “MF” in line 161.
Response: Thanks for this comment. As you suggested, we have defined them in the revised manuscript.
5. There are two types of microarray in the three datasets, GSE65194 and GSE45827 are Affymetrix arrays while GSE38959 are Agilent array (line 76-84), please indicate more details about data normalization (Line 96)
Response: Thank you for your consideration. As you suggested, we have provided more details in the revised manuscript. First, the gene expression profiles of the three data sets were downloaded using the GEOquery package. Subsequently, background adjustment was performed using the dplyr package. Finally, we applied a log2 transformation to normalize the data using the limma package.
6. In section “DEGs in non-TNBC and TNBC samples”, Please submit the full list of differential expression genes (at least 161 common DEGs) as supplemental files.
Response: Thanks for your careful review. As you suggested, we have submitted the full list of differentially expressed genes as Supplemental file 2.
7. The authors validated the correlation between NUF2/FAM83D and TNBC only by real-time PCR, however, it still needs more evidence to make a conclusion that NUF2 and FAM83D are TNBC biomarkers. It will be very helpful if you can present histological evidence (you already have clinical samples, line 85) OR in vitro assay, for example, you may confirm the function of NUF2 and FAM83D on TNBC cell lines (if possible).
Response: We thank you for your constructive suggestion. In clinical pathological diagnosis, immunohistochemistry (IHC) is a very important technique, but our team currently does not have the capability to detect the expression and distribution of NUF2/FAM83D in TNBC tissues and adjacent tissues by IHC. Therefore, we will consider further verification once we are proficient in the use of IHC; after all, TNBC specimens are very precious.
Response to Reviewer #3:
1. Are NUF2 and FAM83D really a novel biomarker? Novel means that these genes have not identified as a biomarker yet but in previous studies eg. https://pubmed.ncbi.nlm.nih.gov/31140425/,
https://www.spandidos-publications.com/ijmm/44/2/390 https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4823109/
it has been shown that the prognostic significance of these genes. Suggest removing the word ‘novel’.
Response: Thank you for your consideration. These studies (PMID: 31140425, PMID: 31198978 and PMID: 26678035) found that the NUF2 and FAM83D genes are potential therapeutic targets and prognostic indicators of breast cancer. However, we found that the expression of NUF2 and FAM83D was significantly increased in triple-negative breast cancer (TNBC) patients, and that TNBC patients with higher NUF2/FAM83D expression levels had significantly reduced recurrence-free survival. Although the earlier studies also concerned breast cancer, the subjects of this study are triple-negative breast cancer, a specific subtype of breast cancer. Next, we will do more experiments to show that NUF2 and FAM83D could be used as diagnostic and therapeutic biomarkers for triple-negative breast cancer. Therefore, we replaced 'novel' with 'potential' in the revised manuscript.
2. In the abstract, the author mentioned the need of identifying ‘effective biomarkers’ to diagnose or determine the prognosis of TNBC. However, this study merely reports the potential biomarkers. Further studies are needed to determine if the reported biomarkers are useful for diagnosis/prognosis. Consider revising some of the words/sentences in the manuscript as some might sound like an overstatement.
Response: Thanks for your careful review. As you suggested, we have modified these sentences in the revised manuscript.
3. In line 88: the author mentioned ‘no important organ disease’. Please specify the example of organ diseases as the exclusion criteria.
Response: Thank you for your constructive suggestion. As you suggested, we have provided more details in the revised manuscript. In line 88, “no important organ disease” has been revised to “excluding other malignant tumors and no important organ diseases, such as severe cardiovascular, liver disease as well as renal insufficiency.”
4. Line 105-6: is the sentence a repetition? Shouldn’t it be ‘intermolecular interactions and intermolecular networks’?
Response: Thank you for your detailed suggestion. This was an oversight on our part; we have made the change as suggested.
5. Line 135-6: Why the need for the experiment to be repeated (independently) for more than 3 times? 14 pairs of TNBC samples were used, do you mean experimental replicates were done more than 3 times. If yes, why is this so because the replication level of your experiments will have an impact on the statistical tests.
Response: Thank you for your careful review. The wording in lines 135-136 was not accurate, and we have modified it in the revised manuscript: “The experiment was independently repeated more than 3 times” has been revised to “Three replicate wells were run for each target gene in the RT-qPCR experiment”.
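For illustration only, here is a minimal R sketch of how triplicate wells are typically summarized in an RT-qPCR analysis; the gene names, Ct values, and the 2^-ddCt calculation below are assumptions for the example and are not taken from our manuscript.

    # Hypothetical triplicate Ct values for one target gene and a reference gene
    # in one TNBC sample and its paired adjacent tissue (example numbers only)
    target_tumor  <- c(24.1, 24.3, 24.2)   # e.g. NUF2, three replicate wells
    ref_tumor     <- c(18.0, 18.1, 17.9)   # e.g. a housekeeping gene
    target_normal <- c(26.8, 26.9, 27.0)
    ref_normal    <- c(18.2, 18.1, 18.0)

    # Average the replicate wells, then apply the common 2^-ddCt method
    d_ct_tumor  <- mean(target_tumor)  - mean(ref_tumor)
    d_ct_normal <- mean(target_normal) - mean(ref_normal)
    fold_change <- 2^-(d_ct_tumor - d_ct_normal)
    fold_change   # > 1 indicates higher expression in the tumor sample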
6. Although the results from this study show no correlation between CENPH and TNBC, the author may discuss or elaborate more on the role of CENPH in other types of breast/other cancers.
Response: We appreciate your constructive suggestion. We have added the relevant content to the Discussion section as suggested.
7. The summary in line 244 is quite weak and inconclusive.
Response: Thank you for this comment. The statement in line 244 was not accurate, and we have deleted it in the revised manuscript.
8. The manuscript lacks new information. My suggestion would be to further study the functional role of each potential gene identified in this study to reach a fair conclusion.
Response: Thank you very much for your patient review of our manuscript. We have added a statement to the 'Conclusion' section of the revised manuscript: “We are the first to demonstrate that the mRNA levels of NUF2/FAM83D change significantly in TNBC tissues compared to adjacent tissues. The mRNA expression levels of NUF2/FAM83D are significantly up-regulated in TNBC tissues. NUF2/FAM83D might serve as potential molecular biomarkers for diagnosis and prognostic indicators of TNBC.” We appreciate your constructive suggestion; next, we will further explore the causes of the altered expression of NUF2/FAM83D in triple-negative breast cancer.
" | Here is a paper. Please give your review comments after reading it. |
646 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>We analyzed the radiolarian assemblages of 59 surface sediment samples collected from the Yellow Sea and East China Sea of the northwestern Pacific. In the study region, the Kuroshio Current and its derivative branches exerted a crucial impact on radiolarian composition and distribution. Radiolarians in the Yellow Sea shelf showed a quite low abundance, as no tests were found in 15 of 25 Yellow Sea samples. Radiolarians in the East China Sea shelf could be divided into three regional groups, including the East China Sea north region group, the East China Sea middle region group, and the East China Sea south region group. The results of the redundancy analysis suggested that the Sea Surface Temperature and Sea Surface Salinity were primary environmental variables explaining species-environment relationship. The gradients of temperature, salinity, and species diversity reflect the powerful influence of the Kuroshio Current in the study area.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Polycystine Radiolaria (hereafter Radiolaria), with a high diversity of 1192 Cenozoic fossil to Recent species, are a crucial group of marine planktonic protists <ns0:ref type='bibr' target='#b31'>(Lazarus et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b53'>Suzuki, 2016)</ns0:ref>. Living Radiolaria are widely distributed throughout the shallow-to-open oceans <ns0:ref type='bibr' target='#b37'>(Lombari & Boden, 1985;</ns0:ref><ns0:ref type='bibr' target='#b61'>Wang, 2012)</ns0:ref>, and a proportion of their siliceous skeletons settle on the seafloor after death <ns0:ref type='bibr' target='#b55'>(Takahashi, 1981;</ns0:ref><ns0:ref type='bibr' target='#b70'>Yasudomi et al., 2014)</ns0:ref>. The distribution of Radiolaria in a given region is associated with the pattern of water mass, such as temperature, salinity and nutrients <ns0:ref type='bibr' target='#b0'>(Abelmann & Nimmergut, 2005;</ns0:ref><ns0:ref type='bibr' target='#b2'>Anderson, 1983;</ns0:ref><ns0:ref type='bibr' target='#b21'>Hernández-Almeida et al., 2017)</ns0:ref>. The East China Sea (ECS) and Yellow Sea (YS) are marginal seas of the northwestern Pacific <ns0:ref type='bibr' target='#b66'>(Xu et al., 2011)</ns0:ref>. The two regions are divided by the line connecting the northern tip of the mouth of the Changjiang and the southern tip of the Jeju Island <ns0:ref type='bibr' target='#b30'>(Jun, 2014)</ns0:ref>. Hydrographic conditions of the shelf area of both the ECS and YS, where the depth is generally less than 100 meters, vary remarkably with the season <ns0:ref type='bibr' target='#b47'>(Qi, 2014)</ns0:ref>. Generally, the annual sea surface temperature (SST) and sea surface salinity (SSS) show a decreasing trend from the southeast to northwest in study area (Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>). The Kuroshio Current originates from the Philippine Sea, flows through the ECS, and afterwards forms the Kuroshio Extension <ns0:ref type='bibr' target='#b22'>(Hsueh, 2000;</ns0:ref><ns0:ref type='bibr' target='#b48'>Qiu, 2001)</ns0:ref>. The Kuroshio Current and its derivative branch-the Taiwan Warm Current (TWC), form the main circulation systems in the ECS shelf area, while the Yellow Sea Warm Current, one derivative branch of the Kuroshio Current, dominates in the YS shelf area <ns0:ref type='bibr' target='#b22'>(Hsueh, 2000;</ns0:ref><ns0:ref type='bibr' target='#b59'>Tomczak & Godfrey, 2001</ns0:ref>). In the ECS shelf region's summer (Fig. <ns0:ref type='figure'>2A</ns0:ref>), the Kuroshio subsurface water gradually upwells northwestward from east of Taiwan, and finally reaches 30.5°N off the Changjiang estuary along ~60 m isobaths, forming the Nearshore Kuroshio Branch Current <ns0:ref type='bibr' target='#b67'>(Yang et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b68'>Yang et al., 2011)</ns0:ref>. Meanwhile, the TWC is formed by the mixing of the Taiwan Strait Warm Current and Kuroshio Surface Water <ns0:ref type='bibr' target='#b47'>(Qi, 2014)</ns0:ref>. In winter (Fig. <ns0:ref type='figure'>2B</ns0:ref>), the Kuroshio Surface Water shows relatively intense intrusion as part of the Kuroshio Surface Water northwestward reaches continental shelf area across 100 m isobaths <ns0:ref type='bibr' target='#b73'>(Zhao & Liu, 2015)</ns0:ref>. At this point, the TWC is mainly fed from the Kuroshio Current northeast of Taiwan <ns0:ref type='bibr' target='#b47'>(Qi, 2014)</ns0:ref>.</ns0:p><ns0:p>In the YS shelf region's summer (Fig. 
<ns0:ref type='figure'>2A</ns0:ref>), the Yellow Sea Cold Water Mass, characterized by low temperature, occupies the central low-lying area mostly below the 50 m isobaths while the Yellow Sea Warm Current shows little influence <ns0:ref type='bibr' target='#b18'>(Guan, 1963)</ns0:ref>. In winter (Fig. <ns0:ref type='figure'>2B</ns0:ref>), the impact of the Yellow Sea Warm Current on shelf region is enhanced, while the Yellow Sea Cold Water Mass disappears <ns0:ref type='bibr'>(Weng et al., 1988)</ns0:ref>. The continuous water circulation in the YS is mainly comprised of the Yellow Sea Warm Current and the China Coastal Current (UNEP, 2005). The radiolarian assemblages in surface sediments have been investigated in the ECS whereas there are few reports in the YS. These reports cover the ECS including the Okinawa Trough <ns0:ref type='bibr' target='#b6'>(Chang et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b12'>Cheng & Ju, 1998;</ns0:ref><ns0:ref type='bibr' target='#b62'>Wang & Chen, 1996)</ns0:ref> and continental shelf region extensively <ns0:ref type='bibr' target='#b11'>(Chen & Wang, 1982;</ns0:ref><ns0:ref type='bibr' target='#b56'>Tan & Chen, 1999;</ns0:ref><ns0:ref type='bibr' target='#b57'>Tan & Su, 1982)</ns0:ref>. They summarize the distribution patterns of the dominant species and the environmental conditions that affect the composition of radiolarian fauna in the ECS in their excellent taxonomic works. On the basis of these valuable works, we rigorously investigate the relationships between radiolarians and environmental variables. In addition, to which the ECS and YS are influenced by the Kuroshio Current and its derivative branch are specially focused in this study. The radiolarian data collected from 59 surface sediment samples are associated with environmental variables of the upper water to explore the principal variables explaining radiolarian species composition. The influences of the Kuroshio Current and its derivative branch on radiolarian assemblages in the study area are also considerably discussed.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Sample collection and treatment</ns0:head><ns0:p>The surface sediments were collected at 59 sites (Fig. <ns0:ref type='figure'>3A</ns0:ref>) in the Yellow Sea and East China Sea using a box corer. The sediment samples in the study area were divided into four groups geographically and were labeled the Yellow Sea region (YSR) samples, the ECS north region (ECSNR) samples, the ECS middle region (ECSMR) samples, and the ECS south region (ECSSR) samples. The samples were prepared using the method described by <ns0:ref type='bibr' target='#b9'>Chen et al. (2008)</ns0:ref>. 30% hydrogen peroxide and 10% hydrochloric acid were added to each dry sample to remove organic component and the calcium tests, respectively. Then the treated sample was sieved with a 50 μm sieve and dried in an oven. After flotation in carbon tetrachloride, the cleaned residue was sealed with Canada balsam for radiolarian identification and quantification under a light microscope with a magnification of 200X or 400X. To reduce counting uncertainty, Dictyocoryne profunda Ehrenberg, Dictyocoryne truncatum (Ehrenberg), Dictyocoryne bandaicum (Harting) were combined as Dictyocoryne group. Photographs of some radiolarians encountered in this study are exhibited in Figure <ns0:ref type='figure' target='#fig_2'>4</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Environmental data</ns0:head><ns0:p>Grain size analysis of the surface sediments was conducted with a Laser Diffraction Particle Size Analyzer <ns0:ref type='bibr'>(Cilas 1190, CILAS, Orleans, Loiret, France)</ns0:ref>. The data were used to categorise grain size classes as clay (1-4 μm), silt (4-63 μm) and sand (63-500 μm), and to determine different sediment types according to the Folk classification <ns0:ref type='bibr'>(Folk, Andrews & Lewis, 1970)</ns0:ref>. In addition, the mean grain size was calculated for each site.</ns0:p><ns0:p>The values of annual temperature (SST), salinity (SSS), oxygen, phosphate, nitrate, and silicate of sea surface with a 0.5° resolution for the period of 1930 to 2009 were derived from the CARS2009 dataset <ns0:ref type='bibr' target='#b49'>(Ridgway, Dunn & Wilkin, 2002)</ns0:ref>. The sea surface chlorophyll-a and particulate organic carbon with a 9 km resolution for the period of 1997 to 2010 were obtained from https://oceancolor.gsfc.nasa.gov/l3/. The values of the environmental variables mentioned above for each surface sediment site were estimated by linear interpolation. These values, together with depth, are shown in Supplementary material Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Statistical processing</ns0:head><ns0:p>The minimum number of specimens counted in each sample is customarily 300. However, low radiolarian concentrations are frequent in the shelf type sediments comprised mainly of terrigenous sources <ns0:ref type='bibr' target='#b9'>(Chen et al., 2008)</ns0:ref>. Given small sediment samples, it was difficult to find 300 tests in some sites. According to <ns0:ref type='bibr' target='#b15'>Fatela & Taborda (2002)</ns0:ref>, counting 100 tests allows less than 5% probability of losing those species with a proportion of 3%. Balanced between the insufficient samples and the accuracy of the statistical analysis, the threshold number of radiolarians was adjusted to 100 <ns0:ref type='bibr' target='#b15'>(Fatela & Taborda, 2002;</ns0:ref><ns0:ref type='bibr' target='#b50'>Rogers, 2016)</ns0:ref>. Based on this threshold, 24 samples (Fig. <ns0:ref type='figure'>3B</ns0:ref>) were retained for detailed statistical analysis. Seven of 24 samples had less than 300 tests, containing six ECSNR samples and one ECSSR sample. The proportion of each dominant species in the ECSNR group was higher than 3%, guaranteeing a reliable interpretation of species proportions. We calculated the absolute abundance (tests.(100g) -1 ) and the diversity indices, including the species number (S), Shannon-Wiener's index (H' (log e )). To ensure a creditable estimate of diversity indices, which may be biased by different counting numbers, the specimens of radiolarians in each sample was randomly subsampled and normalized to the equal size of 100 tests by using rrarefy() function in vegan package in R program. For each site, S and H' of sample containing all tests and subsample containing 100 tests were calculated. Relative abundance (%) of each radiolarian taxon was also calculated. Then the hierarchical cluster analysis with group-average linking was applied to analyze the variations of radiolarian assemblage among different regions. The percentage data of the relative abundance was transformed by square root for normalize the dataset. Afterwards, triangular resemblance matrix was constructed based on the Bray-Curtis similarity <ns0:ref type='bibr' target='#b13'>(Clarke & Warwick, 2001)</ns0:ref>. Analysis of similarity (ANOSIM) was employed to determine the differences among different assemblages. Similarity percentage procedure (SIMPER) analysis was used to identify the species that contributed most to the similarities among radiolarian assemblages. Detrended correspondence analysis (DCA) was applied to determine the character of the species data. The gradient length of the first DCA axis was 1.773 < 3, suggesting that redundancy analysis (RDA, linear ordination method) was more suitable than Canonical correspondence analysis (CCA, unimodal ordination method) <ns0:ref type='bibr' target='#b32'>(Lepš & Šmilauer, 2003)</ns0:ref>. RDA was used to evaluate the relationship between environmental variables and radiolarian assemblages identified by SIMPER analysis. The species abundance data was square root transformed before analysis to reduce the effect of extremely high values <ns0:ref type='bibr' target='#b58'>(Ter Braak & Smilauer, 2002)</ns0:ref>. Variance inflation factors (VIF) was calculated to screen the environmental variables with VIF > 5 <ns0:ref type='bibr' target='#b36'>(Lomax & Hahs-Vaughn, 2013)</ns0:ref>. 
Sand percentage, mean grain size, chlorophyll-a, silicate, particulate organic carbon, oxygen, depth, nitrate, and silt percentage were removed from the RDA model step by step, in order to avoid collinearity <ns0:ref type='bibr' target='#b45'>(Naimi et al., 2014)</ns0:ref>. Finally, four variables, SST, SSS, clay percentage, and phosphate, were employed in the RDA. The significant environmental variables were determined by automatic forward selection with Monte Carlo tests (999 permutations). Station DH 8-5 was excluded from the RDA model for lack of environmental data. Correlation analysis was employed to investigate the relationship between the dominant radiolarian taxa and significant environmental variables. The diversity indices calculation, cluster analysis, ANOSIM, and SIMPER were performed by PRIMER 6.0. Correlation analysis was performed by SPSS 20. DCA and RDA were conducted by CANOCO 4.5.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>A total of 137 radiolarian taxa were identified from the surface sediments of study area, including 75 genera, 14 families, and 3 orders The raw radiolarian counting data is shown in Supplementary material Table <ns0:ref type='table'>2</ns0:ref>. Approximately 91.0% of the species belonged to Spumellaria, accounting for the vast majority of the radiolarian fauna. Nassellaria and Collodaria accounted for 8.4% and 0.6%, respectively. Pyloniidae definitely dominated in the species composition as it occupied approximately 61%, followed by Spongodiscidae 18%, and Coccodiscidae 8% (Fig. <ns0:ref type='figure'>5A</ns0:ref>). Radiolarian abundance in surface sediments varied greatly in study area (Fig. <ns0:ref type='figure'>5B</ns0:ref>), showing a tendency of ECSMR (2776 tests.(100g) -1 ) > ECSSR (1776 tests.(100g) -1 ) > ECSNR (500 tests.(100g) -1 ) > YSR (8 tests.(100g) -1 ). The distribution pattern of species number (Fig. <ns0:ref type='figure'>5C</ns0:ref>) was similar to that of the abundance, exhibiting a trend of ECSMR (38 species) > ECSSR (35species) > ECSNR (16 species) > YSR (1 species). The top 9 species taxa, accounting for 79.6% of the total assemblages in the study area, were as follows: Tetrapyle octacantha group Mueller (55.6%), Didymocyrtis tetrathalamus (Haeckel) (7.5%), Dictyocoryne group (3.7%), Spongaster tetras Ehrenberg (2.5%), Stylodictya multispina Haeckel (2.2%), Spongodiscus resurgens Ehrenberg (2.2%), Zygocircus piscicaudatus Popofsky (2.1%), Phorticium pylonium Haeckel (2.0%), and Euchitonia furcata Ehrenberg (1.8%).</ns0:p></ns0:div>
<ns0:div><ns0:head>The radiolarian assemblages in the YS shelf area</ns0:head><ns0:p>In general, radiolarians showed a quite low abundance value in the YS, as no tests were found in 15 samples (Fig. <ns0:ref type='figure'>5</ns0:ref>). For the remaining 10 samples, only 49 tests were originally counted, belonging to 21 species taxa. The radiolarian abundance for 25 samples of the YS ranged from 0 tests.(100g) -1 to 91 tests.(100g) -1 , and species number ranged from 0 to 12. Based on the abundance data, T. octacantha (17.4%), Spongodiscus sp. (10.9%), Didymocyrtis tetrathalamus (9.1%), Acrosphaera spinosa (6.1%), and P. pylonium (6.1%) were the top 5 abundant species taxa in the YS, occupying a proportion of 49.7% of the total assemblages.</ns0:p></ns0:div>
<ns0:div><ns0:head>Selected stations in the ECS shelf area with radiolarian tests ≥ 100</ns0:head><ns0:p>According to Table <ns0:ref type='table'>1</ns0:ref>, among three regions, there exists a significant difference in radiolarian abundance (ANOVA, p = 0.001). Diversity indices, including S and H', displayed an overall ranking of ECSSR > ECSMR > ECSNR both in samples (S, Kruskal-Wallis Test, p = 0.000; H', ANOVA, p = 0.000) and subsamples (S sub , ANOVA, p = 0.000; H' sub , ANOVA, p = 0.000). Cluster analysis based on the relative abundance classified all but one site into three regional groups at the 60% Bray-Curtis similarity level, including the ECSNR group, ECSMR group and ECSSR group (Fig. <ns0:ref type='figure'>6</ns0:ref>). The significant differences among the three groups were examined by ANOSIM (Global R = 0.769, p = 0.001). The dominant species in each regional group were identified by SIMPER analysis with a cut-off of 50% (Table <ns0:ref type='table'>2</ns0:ref>). Tetrapyle octacantha, Didymocyrtis tetrathalamus, and Spongodiscus resurgens dominated in the ECSNR group, with contribution of 41.70%, 9.79%, and 8.89%, respectively. The radiolarian taxa, including T. octacantha, Didymocyrtis tetrathalamus, Dictyocoryne group, Stylodictya multispina, and Spongodiscus resurgens, contributed most to the ECSMR group. The dominant species in the ECSSR group were composed of T. octacantha, Didymocyrtis tetrathalamus, Dictyocoryne group, Spongaster tetras, Z. piscicaudatus, P. pylonium, Stylodictya multispina, and E. furcata. It was indicated by the RDA that the first two axes explained 39.9% (RDA1 30.0%, RDA2 9.9%) of the species variance, and 86.5% of the species-environment relation variance (Table <ns0:ref type='table'>3A</ns0:ref>). Forward selection with Monte Carlo test (999 Permutation) revealed that SST and SSS were the most significant environmental variables associated with radiolarian composition (Table <ns0:ref type='table'>3B</ns0:ref>). The RDA plot showed a clear distribution pattern of regional samples (Fig. <ns0:ref type='figure'>7A</ns0:ref>). The ECSNR samples generally occupied the left part of the ordination, showing a feature of comparatively lower SST and an extensive fitness to SSS. The ECSMR samples were mostly located in the middle part, suggesting an adaption to higher values of SST and SSS than the ECSNR samples. The ECSSR samples distributed mainly at right part, characterized by the higher value of SST and SSS. The dominant species identified by the SIMPER analysis (Table <ns0:ref type='table'>2</ns0:ref>) were displayed in the RDA plot (Fig. <ns0:ref type='figure'>7B</ns0:ref>). Species taxa, including Spongaster tetras, Dictyocoryne group and P. pylonium, were related to higher SST, while showed little relationship with SSS. Zygocircus piscicaudatus, E. furcata, and Stylodictya multispina displayed a preference of higher SST and lower SSS. Didymocyrtis tetrathalamus was positively related to SST and SSS. Tetrapyle octacantha showed a better fitness to higher SSS and lower SST. Additionally, Spongodiscus resurgens was negatively associated with SST and SSS.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Generally, the number of the radiolarian tests in continental shelf sediments of the ECS and YS is several orders of magnitude lower than that of the adjacent Okinawa trough <ns0:ref type='bibr' target='#b6'>(Chang et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b12'>Cheng & Ju, 1998)</ns0:ref>. First, due to the continental runoff input, coastal area water is featured of lower temperature and salinity, resulting in lower number of living radiolarians <ns0:ref type='bibr' target='#b11'>(Chen & Wang, 1982;</ns0:ref><ns0:ref type='bibr' target='#b40'>Matsuzaki, Itaki & Kimoto, 2016;</ns0:ref><ns0:ref type='bibr' target='#b57'>Tan & Su, 1982)</ns0:ref>. Also, deposition rate in study area is considerably high as 0.1-0.8 cm/yr in the YS, and 0.1-3 cm/yr in the ECS <ns0:ref type='bibr' target='#b14'>(Dong, 2011)</ns0:ref>, which greatly masks the concentration of radiolarian skeleton in sediments <ns0:ref type='bibr' target='#b6'>(Chang et al., 2003)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>The radiolarian assemblages in the YS shelf area</ns0:head><ns0:p>Based on our results, though radiolarian assemblages varied greatly between the YS and ECS, there are some common species as all of the 21 radiolarian species in the YS can be found in the ECS, that is, no endemic species were observed in the YS. The top 5 species taxa, except Spongodiscus sp., were reported as typical warm species <ns0:ref type='bibr' target='#b6'>(Chang et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b9'>Chen et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b38'>Matsuzaki & Itaki, 2017)</ns0:ref>, suggesting a warm-water origin of radiolarians in the YS. As a semi-enclosed marginal sea mostly shallower than 80 m, YS is influenced by a continuous circulation, primarily comprised by the Yellow Sea Warm Current and China Coastal Current <ns0:ref type='bibr' target='#b60'>(UNEP, 2005)</ns0:ref>. The mean values of SST and SSS in the YS are 15°C and 32psu, respectively (Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>), making it quite difficult for radiolarians to survive and proliferate. For the surface sediments in the YS in our study, only a small number of radiolarians were detected at the margin of the YS shelf area, whereas no radiolarians were detected in the 15 sites within the range of the central YS (Fig. <ns0:ref type='figure'>5B</ns0:ref>). For the planktonic samples in the southern YS, low radiolarian stocks were also reported previously <ns0:ref type='bibr' target='#b56'>(Tan & Chen, 1999)</ns0:ref>. Sporadic radiolarians were merely documented in winter, with radiolarian stocks less than 200 tests.m -3 <ns0:ref type='bibr' target='#b56'>(Tan & Chen, 1999)</ns0:ref>. We thus infer the radiolarians in the YS (Fig. <ns0:ref type='figure'>5</ns0:ref>) were probably introduced by the Yellow Sea Warm Current, and transported by the China Coastal Current. The question whether the absence of radiolarians in the central YS is controlled by the Yellow Sea Cold Water Mass remains unclear and needs future investigations.</ns0:p></ns0:div>
<ns0:div><ns0:head>Selected stations in the ECS shelf area with radiolarian tests ≥ 100</ns0:head><ns0:p>In the ECS, the gradients of SST and SSS are controlled by the interaction of the Kuroshio branch current, TWC and Changjiang Diluted Water <ns0:ref type='bibr' target='#b67'>(Yang et al., 2012)</ns0:ref>. SST and SSS both show an increase from north to south, corresponding well with the overall distribution of radiolarians (Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>, Fig. <ns0:ref type='figure'>5</ns0:ref>). Revealed by the RDA, SST was the most significant environmental variable related to the radiolarian composition, followed by SSS (Table <ns0:ref type='table'>3B</ns0:ref>). SST is generally regarded as having an extremely important role in controlling the composition and distribution of radiolarians <ns0:ref type='bibr' target='#b4'>(Boltovskoy & Correa, 2017;</ns0:ref><ns0:ref type='bibr' target='#b21'>Hernández-Almeida et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b24'>Ikenoue et al., 2015)</ns0:ref>. According to <ns0:ref type='bibr' target='#b43'>Matsuzaki, Itaki & Tada (2019)</ns0:ref>, the species diversity in the northern ECS was higher during interglacial period than during glacial period. For a long time, the relationship between radiolarian assemblages and SST is used to construct past changes in hydrographic conditions <ns0:ref type='bibr' target='#b38'>(Matsuzaki & Itaki, 2017)</ns0:ref>. In this study, SST showed a significant correlation with abundance, species number, and H' (Table <ns0:ref type='table'>4</ns0:ref>), suggesting that higher SST may often correspond to higher diversity. SSS was also crucial for explaining species-environment correlations in the ECS shelf area. At the offshore Western Australia, salinity is strongly significant in determining radiolarian species distributions <ns0:ref type='bibr' target='#b50'>(Rogers, 2016)</ns0:ref>. <ns0:ref type='bibr' target='#b21'>Hernández-Almeida et al. (2017)</ns0:ref> and <ns0:ref type='bibr' target='#b34'>Liu et al. (2017a)</ns0:ref> stated that the composition and distribution pattern of the radiolarian fauna in the western Pacific responds mainly to SST and SSS. <ns0:ref type='bibr' target='#b20'>Gupta (2002)</ns0:ref> found that the relative abundance of Pyloniidae exhibits a positive correlation with salinity. In this study SSS was positively correlated to abundance and species number (Table <ns0:ref type='table'>4</ns0:ref>), possibly suggesting a positive influence of SSS on radiolarian diversity. The radiolarian assemblages of the ECSSR group were influenced by the Kuroshio Current and TWC, while the TWC predominated. The surface water of the TWC is mainly characterised by high temperature (23-29°C) and salinity (33.3-34.2psu) <ns0:ref type='bibr'>(Weng & Wang, 1988)</ns0:ref>. Some of the TWC waters are supplemented from the South China Sea <ns0:ref type='bibr' target='#b35'>(Liu et al., 2017b)</ns0:ref>, where radiolarians show high diversity <ns0:ref type='bibr' target='#b9'>(Chen et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b34'>Liu et al., 2017a;</ns0:ref><ns0:ref type='bibr' target='#b72'>Zhang et al., 2009)</ns0:ref>. The dominant species in the ECSSR group included T. octacantha, Didymocyrtis tetrathalamus, Dictyocoryne group, Spongaster tetras, Z. piscicaudatus, P. pylonium, Stylodictya multispina, and E. furcata (Table <ns0:ref type='table'>2</ns0:ref>, Fig. <ns0:ref type='figure'>8</ns0:ref>). 
These species taxa are reported as typical indicators of the Kuroshio Current <ns0:ref type='bibr' target='#b6'>(Chang et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b17'>Gallagher et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b34'>Liu et al., 2017a;</ns0:ref><ns0:ref type='bibr' target='#b40'>Matsuzaki et al., 2016)</ns0:ref>. The relatively high abundance of these taxa in the study area reflected the influence of the warm Kuroshio and TWC waters. Moreover, moderate percentage (0.91%) of Pterocorys campanula Haeckel was detected in the ECSSR group, in contrast with the ECSMR group (0.14%) and ECSNR group (0.06%). Members of Pterocorys are shallow-water dwellers, as reported by <ns0:ref type='bibr' target='#b42'>Matsuzaki, Itaki & Sugisaki (2019)</ns0:ref>. Pterocorys campanula frequently occurs and dominates in the South China Sea, whereas there are no reports of the dominance of P. campanula in the sediment samples of the ECS <ns0:ref type='bibr' target='#b8'>(Chen & Tan, 1996;</ns0:ref><ns0:ref type='bibr' target='#b9'>Chen et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b23'>Hu et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b34'>Liu et al., 2017a)</ns0:ref>. The high abundance of this taxon in the ECSSR group further demonstrates our conclusion that radiolarian assemblages of the ECSSR group were brought by the Kuroshio Current and TWC with the TWC playing the main role. The ECSMR group was influenced by the Kuroshio Current, TWC, and Changjiang Diluted Water. The dominant species in the ECSMR included T. octacantha, Didymocyrtis tetrathalamus, Dictyocoryne group, Stylodictya multispina and Spongodiscus resurgens (Table <ns0:ref type='table'>2</ns0:ref>). The dominant species of the ECSMR group showed great overlap with the ECSSR group, which, in some degrees, suggests a similarity between the two groups, as both are influenced by the Kuroshio Current and TWC. On the other hand, the lower percentages of Didymocyrtis tetrathalamus, Dictyocoryne group, and Stylodictya multispina indicated part of the impact by the Changjiang Diluted Water, which is characterized by lower SST (Fig. <ns0:ref type='figure'>8</ns0:ref>). Tetrapyle octacantha, Didymocyrtis tetrathalamus, and Spongodiscus resurgens were dominant species of the ECSNR group, which was primarily impacted by the Changjiang Diluted Water and Kuroshio Current. Compared to the ECSMR and ECSSR group, the ECSNR group occupied higher latitude which means a lower SST, while the large input of Changjiang Diluted Water decreased SSS (Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>). This combination of lower SST and SSS probably hindered the radiolarian diversity of the ECSNR (Table <ns0:ref type='table'>1</ns0:ref>). The radiolarian assemblages in the shallower sea, i.e., the shelf sea area of the ECS, displayed distinctly different patterns from those in the open ocean. Tetrapyle octacantha occurred in the extraordinarily high proportion of 59% in the study area (Fig. <ns0:ref type='figure'>8</ns0:ref>), much higher than ever reported in adjacent areas with deeper waters <ns0:ref type='bibr' target='#b6'>(Chang et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b12'>Cheng & Ju, 1998;</ns0:ref><ns0:ref type='bibr' target='#b34'>Liu et al., 2017a;</ns0:ref><ns0:ref type='bibr' target='#b62'>Wang & Chen, 1996)</ns0:ref>. 
Tetrapyle octacantha, as the most abundant taxon in the subtropical area <ns0:ref type='bibr' target='#b3'>(Boltovskoy, 1989)</ns0:ref>, shows a high tolerance to temperature <ns0:ref type='bibr' target='#b26'>(Ishitani et al., 2008)</ns0:ref>. This taxon has been reported to be associated with water from the ECS shelf area <ns0:ref type='bibr' target='#b6'>(Chang et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b27'>Itaki, Kimoto & Hasegawa, 2010)</ns0:ref>. <ns0:ref type='bibr' target='#b72'>Zhang et al. (2009)</ns0:ref> found that T. octacantha frequency was negatively correlated with SST, and Welling & Pisias (1998) concluded that T. octacantha dominated during the cold tongue period. In our study, T. octacantha was negatively related to SST according to the results of the RDA (Fig. <ns0:ref type='figure'>7B</ns0:ref>), tending to confirm the previous studies. We thus infer that T. octacantha is possibly more resistant to local severe temperature and, so, reaches comparatively high abundance in the shelf area. Therefore, T. octacantha can serve as an indicator that depicts the degree of mixture between the colder shelf water and warm Kuroshio water. The response of T. octacantha to SSS was unclear, though it showed positive relationship with SSS in the RDA plot (Fig. <ns0:ref type='figure'>7B</ns0:ref>). Here a special station with the highest Shannon-Wiener's index (3.2 in both original sample and subsample) was noticed, namely the station 3000-1 (Fig. <ns0:ref type='figure'>3</ns0:ref>), which is located at the Changjiang estuary. In our study, it had the lowest value of salinity (26.6psu) and the lowest percentage of T. octacantha (14.8%). After removed 3000-1, no significant correlation existed between SSS and the relative abundance of T. octacantha (n = 22, r = -0.027, p = 0.906). Spongodiscus resurgens, with an upper sub-surface maximum, was generally considered to be cold water species <ns0:ref type='bibr'>(Suzuki & Not, 2015)</ns0:ref> and related to productive nutrient-rich water <ns0:ref type='bibr' target='#b28'>(Itaki, Minoshima & Kawahata, 2009;</ns0:ref><ns0:ref type='bibr' target='#b38'>Matsuzaki & Itaki, 2017)</ns0:ref>. The ECSNR group was primarily controlled by the colder Changjiang Diluted Water, and thus had the highest percentage of T. octacantha and Spongodiscus resurgens among three regions.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>We analyzed radiolarian assemblages collected from the YS and ECS shelf area, where the Kuroshio Current and its derivative branches, including the TWC and Yellow Sea Warm Current, exerts great effect.</ns0:p><ns0:p>(1) The radiolarian abundance in the YS was quite low, and no radiolarians were detected in 15 of 25 YS sites.</ns0:p><ns0:p>(2) The radiolarian abundance and diversity in the ECS, which is controlled by the Kuroshio warm water, was much higher. Based on the cluster analysis, the radiolarian assemblages in the ECS could be divided into three regional groups, namely the ECSNR group, ECSMR group and ECSSR group. a. The ECSNR group was chiefly impacted by the Changjiang Diluted Water and Kuroshio Current, with dominant species of T. octacantha, Didymocyrtis tetrathalamus, and Spongodiscus resurgens. b. The ECSMR group was controlled by the Kuroshio Current, TWC and Changjiang Diluted Water. Species contributed most to this group included T. octacantha, Didymocyrtis tetrathalamus, Dictyocoryne group, Stylodictya multispina, and Spongodiscus resurgens. c. The ECSSR group was affected by the Kuroshio Current and TWC, in which the TWC occupies major status. The dominant species in this group were composed of T. octacantha, Didymocyrtis tetrathalamus, Dictyocoryne group, Spongaster tetras, Z. piscicaudatus, P. pylonium, Stylodictya multispina, and Euchitonia furcata.</ns0:p><ns0:p>(3) The RDA results showed that SST and SSS were main environmental variables that influenced the radiolarian composition in the ECS shelf. Manuscript to be reviewed Figure 2</ns0:p><ns0:p>The circulation system of the study area in summer (A) and winter (B) (redrawn after <ns0:ref type='bibr' target='#b67'>Yang et al. (2012)</ns0:ref> and <ns0:ref type='bibr' target='#b46'>Pi (2016)</ns0:ref>). </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Abbreviations: KBC -Kuroshio Branch Current, OKBC -Offshore Kuroshio Branch Current, NKBC -Nearshore Kuroshio Branch Current, KSW -Kuroshio Surface Water, TWC -Taiwan Warm Current, CCC -China Coastal Current, CDW -Changjiang Diluted Water, YSCWM -Yellow Sea Cold Water Mass, YSWC -Yellow Sea Warm Current, TC -Tsushima Current.</ns0:figDesc><ns0:graphic coords='17,42.52,301.12,525.00,270.75' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='18,42.52,199.12,525.00,255.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,199.12,525.00,166.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,280.87,525.00,248.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,199.12,525.00,392.25' type='bitmap' /></ns0:figure>
</ns0:body>
" | "Editor's Decision
I have heard back from two reviewers, both of whom offer numerous constructive comments on your work. Although not nearly the expert on radiolarians that the reviewers are, I agree with the large majority of their comments and suggestions. I suggest you look these over carefully along with the attachments, and thoroughly rework your manuscript. I look forward to seeing a revised version of your manuscript.
Comments from the reviewers:
Revised Manuscript file: All sentences added to the previous manuscript are marked in red in the revised manuscript.
Reply to referees file: All comments are marked in green, and the replies are in black.
Reviewer: John Rogers
Basic reporting
English:
Not too bad but difficult to follow in parts. I have made many comments on the attached pdf.
The paper is probably rather too short but more detailed explanation will help.
References: Sufficient
Article Layout: Poor
o Subheadings not easily distinguishable from the body of the text.
Reply: Thank you for your suggestion. The subheadings were already marked in bold in the first submission, and we are not sure why they appeared in a normal font during review. We have emphasized the subheadings in bold again and hope they display correctly in the revised version.
o Lack of new paragraphs.
Reply: Thanks for your suggestion. We have added the detailed analysis of the Yellow Sea samples.
o Some unnecessary repetition.
Reply: Thanks for your suggestion. We have deleted the repetition in the revised version.
o Probabilities quoted as “p < “ do not meet PeerJ standards.
Reply: Thank you for your suggestion. We have replaced “p <” with exact p-values.
Figures:
Inadequate and sometimes illegible labeling in figures 1-3; otherwise ok.
Reply: Thanks for your suggestion. We have revised as follows.
Figure 1 The mean annual sea surface temperature (SST, A) and sea surface salinity (SSS, B) in the shelf area of the ECS and YS. Solid line indicates the boundary between the ECS and YS.
Figure 2 The circulation system of the study area in summer (A) and winter (B) (redrawn after Yang et al. (2012) and Pi (2016)).
Abbreviations: KBC – Kuroshio Branch Current, OKBC – Offshore Kuroshio Branch Current, NKBC – Nearshore Kuroshio Branch Current, KSW – Kuroshio Surface Water, TWC – Taiwan Warm Current, CCC – China Coastal Current, CDW – Changjiang Diluted Water, YSCWM – Yellow Sea Cold Water Mass, YSWC – Yellow Sea Warm Current, TC – Tsushima Current.
Figure 3 The locations of all surface sediment samples in the ECS and YS shelf area (A), and the 24 samples retained under the 100-test threshold (B).
Tables:
Fine except that amalgamating Supplementary Tables 2 and 3 would make it easier to understand the discussion in the text.
Reply: Thanks for your suggestion. Supplementary Tables 2 and 3 have been amalgamated as Supplementary Table 2 in the revised version.
Overall: Self-contained
Other:
Many examples of mixed fonts, mainly (solely?) in species names. This is almost certainly due to copying from the radiolaria.org website.
Reply: Sorry for the mistakes. We have carefully corrected them in the revised version.
o Genera & species names should be in italics.
Reply: Sorry for the mistakes. We have carefully corrected them in the revised version.
o Lipmanella dictyoceras (Haeckel, 1861) occurs twice in both Supplementary Tables 2 & 3. In Table 3 the census results for the two entries for this species differ - should one of them be a different species?
Reply: Sorry for the mistake. We have re-checked this species and corrected it in the revised version.
Experimental design
Journal Fit:
Within PeerJ Aims & Scope
Research Question:
Adequate statement. The research is a sensible addition to the body of oceanographic knowledge but is local, rather than global, in its scope.
Investigation:
Fairly standard. Some degree of “overkill” in the number of diversity indices calculated – one or two would have been plenty.
Reply: Thank you for your suggestion. We have retained species number and the Shannon-Wiener index in the revised version and removed Margalef's index, the Simpson index, and Pielou's evenness. The subsequent analyses have been corrected accordingly.
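As a minimal sketch (not our PRIMER 6.0 workflow), the two retained indices can be computed in R with the vegan package; the count matrix below is a made-up example.

    library(vegan)

    # Hypothetical site-by-taxon count matrix (rows = sites, columns = taxa)
    counts <- matrix(c(40, 10, 5, 0,
                       12, 30, 8, 2),
                     nrow = 2, byrow = TRUE,
                     dimnames = list(c("siteA", "siteB"), paste0("sp", 1:4)))

    S <- specnumber(counts)                    # species number per site
    H <- diversity(counts, index = "shannon")  # Shannon-Wiener index (log e)
    data.frame(S, H)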
Methods:
Description definitely inadequate. I ran the cluster analysis, the detrended CA, & the RDA but failed to reproduce the authors' results (although my answers were in the same ballpark). Consider that, in addition to their raw data the authors should include a Supplementary Table containing exactly the data they tested & should provide all the parameters they used. The authors used commercial software so should be able to provide all the input parameters.
Reply: Thank you for your suggestion. We have submitted Supplemental Article 1, which contains the detailed procedures of the cluster analysis, DCA, and RDA. In Supplemental Article 1, the key values of the DCA and RDA are marked in bold and green.
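Independently of Supplemental Article 1, the following is a minimal sketch of the same clustering steps in R/vegan (square-root transform of relative abundances, Bray-Curtis dissimilarity, group-average linkage); rel_abund is an assumed site-by-taxon matrix of relative abundances (%).

    library(vegan)

    rel_abund_sqrt <- sqrt(rel_abund)                 # square-root transform
    bray  <- vegdist(rel_abund_sqrt, method = "bray") # Bray-Curtis dissimilarity
    clust <- hclust(bray, method = "average")         # group-average (UPGMA) linkage
    plot(clust, hang = -1)

    # Groups at the 60% similarity level correspond to cutting at height 0.4
    groups <- cutree(clust, h = 0.4)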
Validity of the findings
Data:
Not provided in a form which would allow replication.
Reply: Thanks for your suggestion. We have submitted Supplemental Article 1 containing detailed procedures of the cluster analysis, DCA and RDA.
Conclusions:
Many examples of assuming causation from correlation or similar statistical evidence (e.g. line 233 “Silt percentage significantly affected the radiolarian species composition in study area.”; also lines 223 & 231).
Reply: Thank you for your suggestion. We have changed the sentences to “In this study, SST showed a significant correlation with abundance, species number, and H’ (Table 4), suggesting that higher SST may often correspond to higher diversity”, and “In this study SSS was positively correlated to abundance and species number (Table 4), possibly suggesting a positive influence of SSS on radiolarian diversity.” Taking 5 as the VIF threshold, silt percentage has been removed from the Discussion in the revised version.
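As a minimal sketch of the stepwise VIF screening described above (written in base R rather than the usdm implementation of Naimi et al. (2014)); env is an assumed data frame of the candidate environmental variables.

    # Drop the variable with the largest VIF until all VIFs fall below the threshold (5)
    vif_screen <- function(env, threshold = 5) {
      repeat {
        if (ncol(env) < 2) return(env)
        vifs <- sapply(names(env), function(v) {
          r2 <- summary(lm(env[[v]] ~ ., data = env[setdiff(names(env), v)]))$r.squared
          1 / (1 - r2)
        })
        if (max(vifs) < threshold) return(env)
        env <- env[setdiff(names(env), names(which.max(vifs)))]
      }
    }
    # env_kept <- vif_screen(env, threshold = 5)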
Note: The reviewer has attached an annotated manuscript to this review.
1. The two regions are divided by the line connecting the northern tip of the mouth of the Changjiang and the southern tip of the Jeju Island.
Comment [JR1]: Should be marked on Fig. 1 maps.
Reply: Thanks for your suggestion. The line dividing the East China Sea and Yellow Sea has been marked on Figure 1 in revised version.
Figure 1 The mean annual sea surface temperature (SST, A) and sea surface salinity (SSS, B) in the shelf area of the ECS and YS. Solid line indicates the boundary between the ECS and YS.
2. …the depth is generally within 100 meters…
Comment [JR2]: Less than 100 metres?
Reply: Thanks for your suggestion. We have changed the sentence into “…the depth is generally less than 100 meters”.
3. Sample collection and treatment
Comment [JR3]: Subheadings like this need to be emphasized in some ways (bold, italics ,etc)
Reply: Thank you for your suggestion. The subheadings were already marked in bold in the first submission, and we are not sure why they appeared in a normal font during review. We have emphasized the subheadings in bold again and hope they display correctly in the revised version.
4. Carbon tetrachloride solution
Comment [JR4]: Carbon tetrachloride is a liquid & is not used in solution.
Reply: Thanks for your suggestion. The sentence is changed to “After flotation in carbon tetrachloride, the cleaned residue was sealed…”.
5. Finally, 12 variables were adapted in the statistical analysis, i.e. SST, SSS, oxygen, phosphate, nitrate, silicate, chlorophyll-a, particulate organic carbon, clay percentage, silt percentage, sand percentage, and mean grain size.
Comment [JR5]: Not obvious what this is intended to mean.
Reply: Thanks for your suggestion. We have deleted this sentence in the revised version.
6. Therefore, the threshold number of radiolarians was adjusted to 100, which is sufficient for a reliable interpretation of species proportions (Fatela & Taborda, 2002).
Comment [JR6]: This statement needs expansion to justify the decision.
Reply: Thanks for your suggestion. We have revised as “Given small sediment samples, it was difficult to find 300 tests in some sites. According to Fatela & Taborda (2002), counting 100 tests allows less than 5% probability of losing those species with a proportion of 3%. Balanced between the insufficient samples and the accuracy of the statistical analysis, the threshold number of radiolarians was adjusted to 100 (Fatela & Taborda, 2002; Rogers, 2016). Based on this threshold, 24 samples (Fig. 3B) were retained for detailed statistical analysis. Seven of 24 samples had less than 300 tests, containing six ECSNR samples and one ECSSR sample. The proportion of each dominant species in the ECSNR group was higher than 3%, guaranteeing a reliable interpretation of species proportions.”
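The Fatela & Taborda (2002) criterion can be checked directly: if a species has a true proportion p, the probability of not encountering it at all among n counted tests is (1 - p)^n. In base R:

    (1 - 0.03)^100   # about 0.048, i.e. < 5% probability of missing a species
                     # with a 3% proportion when 100 tests are counted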
7. …the absolute abundance (inds.(100g)-1)…
Comment [JR7]: “individuals” presumably – “tests” would be better.
Reply: Thanks for your suggestion. The sentence is changed to “…the absolute abundance (tests.(100g)-1)…”.
8.Then the hierarchical cluster analysis with group-average linking was applied to analyze the variations of radiolarian assemblage among different regions.
Comment [JR8]: Insufficient information – I cannot reproduce this.
Reply: Thanks for your suggestion. We submit Supplemental Article 1 containing detailed procedures of the cluster analysis, DCA and RDA. In Supplemental Article 1, the important values of the DCA and RDA are marked in bold and green.
9. The raw data of the relative abundance was transformed by square root.
Comment [JR9]: Percentages so no longer raw!
Reply: Sorry for the mistake. We have revised it as “The percentage data of the relative abundance was transformed by square root to normalize the dataset”.
10. The raw data of the relative abundance was transformed by square root.
Comment [JR10]: Why?
Reply: Thank you for your suggestion. We have revised it as “The percentage data of the relative abundance was transformed by square root to normalize the dataset”.
11. The gradient length of the first DCA axis was 1.768 < 3.
Comment [JR11]: I cannot reproduce this value.
Reply: Thanks for your suggestion. We submit Supplemental Article 1 containing detailed procedures of the cluster analysis, DCA and RDA. In Supplemental Article 1, the important values of the DCA and RDA are marked in bold and green. The sentence is changed to “The gradient length of the first DCA axis was 1.773 < 3” in the revised version.
12. The gradient length of the first DCA axis was 1.768 < 3, suggesting that redundancy analysis (RDA) was more suitable.
Comment [JR12]: More suitable than what? CCA?
Reply: Thanks for your suggestion. The sentence is changed to “The gradient length of the first DCA axis was 1.773 < 3, suggesting that redundancy analysis (RDA, linear ordination method) was more suitable than Canonical correspondence analysis (CCA, unimodal ordination method)”.
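For reference, a minimal R/vegan sketch of the gradient-length check used to choose between a linear (RDA) and a unimodal (CCA) method; rel_abund is again an assumed site-by-taxon matrix, and the axis length is read from the printed decorana output.

    library(vegan)

    dca <- decorana(sqrt(rel_abund))  # detrended correspondence analysis
    dca  # the printed "Axis lengths" row gives the DCA1 gradient length;
         # a value < 3 SD units supports a linear method (RDA) over CCA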
13. Supplementary material Table 2
Comment [JR13]: This should be deleted & the information included as columns B & C in Table 3 to make the rest of this paragraph easier to follow.
Reply: Thanks for your suggestion. Supplementary Tables 2 and 3 have been amalgamated as Supplementary Table 2 in the revised version.
14. Different lowercase a, b and c indicate significant differences among regional groups.
Abbreviations: N, Abundance (inds.(100g)-1); S, number of species; d, Margalef's index; J', Pielou's index; H’ (loge) , Shannon-Wiener's index; 1-λ', Simpson index. Comment [JR14]: The description of Table 1 - not useful here.
Reply: We are sorry for this mistake. We have deleted the repetitive description of Table 1 in revised version.
15. It was indicated by the RDA that the first two axes explained 37.2% (RDA1 27.6%, RDA2 9.6%) of the species variance, and 70.5% of the species-environment relation variance (Table 3A).
Comment [JR15]: I could not repeat this.
Reply: Thanks for your suggestion. We submit Supplemental Article 1 containing detailed procedures of the cluster analysis, DCA and RDA. In Supplemental Article 1, the important values of the DCA and RDA are marked in bold or green. This sentence is changed to “It was indicated by the RDA that the first two axes explained 39.9% (RDA1 30.0%, RDA2 9.9%) of the species variance, and 86.5% of the species-environment relation variance (Table 3A)”.
Table 3A Results of the redundancy analysis (RDA) for the radiolarian assemblages and environmental variables.
Axes: 1, 2, 3, 4 (total inertia = 1)
Eigenvalues: 0.300, 0.099, 0.039, 0.024
Species-environment correlations: 0.955, 0.971, 0.816, 0.772
Cumulative percentage variance of species data: 30.0, 39.9, 43.8, 46.2
Cumulative percentage variance of species-environment relation: 65.1, 86.5, 94.8, 100
Sum of all eigenvalues: 1
Sum of all canonical eigenvalues: 0.462
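As a cross-check of Table 3A, the cumulative percentages follow from the eigenvalues: the species-data percentages are cumulative canonical eigenvalues divided by the total inertia (1), and the species-environment percentages are the same cumulative sums divided by the sum of all canonical eigenvalues (0.462). A quick check in R (small differences from the table reflect rounding of the displayed eigenvalues):

    eig <- c(0.300, 0.099, 0.039, 0.024)   # canonical eigenvalues from Table 3A
    round(100 * cumsum(eig) / 1,     1)    # 30.0 39.9 43.8 46.2  (species data)
    round(100 * cumsum(eig) / 0.462, 1)    # 64.9 86.4 94.8 100.0 (species-environment relation)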
16. Silt percentage significantly affected the radiolarian species composition in study area.
Comment [JR16]: No. It possibly affected the preservation of tests.
Reply: Thank you for your suggestion. We have reset the maximum acceptable VIF value to 5 and revised the Results and Discussion accordingly. SST and SSS are the primary environmental variables, whereas silt percentage is not significant at the 5% level in the revised version.
17. …suggesting positive relationship between radiolarian diversity and silt percentage.
Comment [JR17]: No. Preservation, not diversity.
Reply: Thank you for your suggestion. We have reset the maximum acceptable VIF value to 5 and revised the Results and Discussion accordingly. SST and SSS are the primary environmental variables, whereas silt percentage is not significant at the 5% level in the revised version.
18. sub-high salinity (33.3-34.2psu)
Comment [JR18]: Meaning?
Reply: Thanks for your suggestion. We have deleted “sub-high” and changed the sentence to “The surface water of the TWC is mainly characterised by high temperature (23-29°C) and salinity (33.3-34.2psu)”.
19. The dominant species in the ECSSR group included T. octacantha, Didymocyrtis tetrathalamus, Spongaster tetras, Dictyocoryne profunda, Z. piscicaudatus, Stylodictya multispina, Phorticium pylonium, and Spongodiscus resurgens.
Comment [JR19]: Mixed fonts
Reply: Thanks for your suggestion. We have carefully corrected throughout the revised manuscript.
20. The radiolarian abundance in the YS was quite low, due to the influence of the Yellow Sea Cold Water Mass.
Comment [JR20]: How do we know?
Reply: Thanks for your suggestion. The Discussion part of the YS assemblages was changed as follows.
“The radiolarian assemblages in the YS shelf area
Based on our results, though radiolarian assemblages varied greatly between the YS and ECS, there are some common species as all of the 21 radiolarian species in the YS can be found in the ECS, that is, no endemic species were observed in the YS. The top 5 species taxa, except Spongodiscus sp., were reported as typical warm species (Chang et al., 2003; Chen et al., 2008; Matsuzaki & Itaki, 2017), suggesting a warm-water origin of radiolarians in the YS.
As a semi-enclosed marginal sea mostly shallower than 80 m, YS is influenced by a continuous circulation, primarily comprised by the Yellow Sea Warm Current and China Coastal Current (UNEP, 2005). The mean values of SST and SSS in the YS are 15°C and 32psu, respectively (Fig. 1), making it quite difficult for radiolarians to survive and proliferate. For the surface sediments in the YS in our study, only a small number of radiolarians were detected at the margin of the YS shelf area, whereas no radiolarians were detected in the 15 sites within the range of the central YS (Fig. 4B). For the planktonic samples in the southern YS, low radiolarian stocks were also reported previously (Tan & Chen, 1999). Sporadic radiolarians were merely documented in winter, with radiolarian stocks less than 200 tests.m-3 (Tan & Chen, 1999). We thus infer that the radiolarians in the YS (Fig. 4) were probably introduced by the Yellow Sea Warm Current, and transported by the China Coastal Current. The question whether the absence of radiolarians in the central YS is controlled by the Yellow Sea Cold Water Mass remains unclear and needs future investigations.”
The sentence in the Conclusion part was changed to “The radiolarian abundance in the YS was quite low, and no radiolarians were detected in 15 of 25 YS sites.”
21. Figure 1
Comment [JR21]: Maps should show locations of Changjiang and Jeju Island.
Reply: Thank you for your suggestion. The locations of the Changjiang and Jeju Island have been marked on Figure 1 in the revised version.
Figure 1 The mean annual sea surface temperature (SST, A) and sea surface salinity (SSS, B) in the shelf area of the ECS and YS. Solid line indicates the boundary between the ECS and YS.
22. Figure 2
Comment [JR22]: Names of currents hard to read.
Reply: Thanks for your suggestion. We have redrawn Figure 2 as follows.
Figure 2 The circulation system of the study area in summer (A) and winter (B) (redrawn after Yang et al. (2012) and Pi (2016)).
Abbreviations: KBC – Kuroshio Branch Current, OKBC – Offshore Kuroshio Branch Current, NKBC – Nearshore Kuroshio Branch Current, KSW – Kuroshio Surface Water, TWC – Taiwan Warm Current, CCC – China Coastal Current, CDW – Changjiang Diluted Water, YSCWM – Yellow Sea Cold Water Mass, YSWC – Yellow Sea Warm Current, TC – Tsushima Current.
23. Figure 6
Comment [JR23]: Should add sample site names.
Reply: Thanks for your suggestion. We have added site names as follows.
Figure 5 Cluster analysis of radiolarian assemblages in the ECSNR, ECSMR and ECSSR. The dotted line represents 60% similarity level.
24. Figure 8
Comment [JR24]: Labels partially illegible.
Reply: Thanks for your suggestion. We have redrawn Figure 8 as follows.
Figure 7 Distribution of the dominant radiolarian species, SST, and SSS in the ECSNR, ECSMR, ECSSR.
25. Table 3
Comment [JR25]: Values should be aligned with headings.
Reply: Thanks for your suggestion. We have aligned the headings as follows.
Table 3A Results of the redundancy analysis (RDA) for the radiolarian assemblages and environmental variables.
Axes: 1, 2, 3, 4 (total inertia = 1)
Eigenvalues: 0.300, 0.099, 0.039, 0.024
Species-environment correlations: 0.955, 0.971, 0.816, 0.772
Cumulative percentage variance of species data: 30.0, 39.9, 43.8, 46.2
Cumulative percentage variance of species-environment relation: 65.1, 86.5, 94.8, 100
Sum of all eigenvalues: 1
Sum of all canonical eigenvalues: 0.462
Table 3B Conditional effects of the total environmental variables in the RDA with the significant variables in bold.
Variable     VIF     LambdaA   % contribution to canonical eigenvalues   p       F
SST          1.86    0.14      30%                                       0.004   3.34
SSS          2.76    0.24      52%                                       0.001   8.01
Clay%        1.02    0.05      11%                                       0.086   1.43
Phosphate    2.69    0.03      6%                                        0.301   1.16
(LambdaA, % contribution to canonical eigenvalues, p, and F are the conditional effects of each variable.)
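A minimal R/vegan sketch of the kind of forward selection with permutation tests summarized in Table 3B (this is not the CANOCO 4.5 procedure we actually used); spe_sqrt and env4 are assumed objects holding the square-root-transformed species matrix and the four retained variables.

    library(vegan)

    rda_null <- rda(spe_sqrt ~ 1, data = env4)  # intercept-only model
    rda_full <- rda(spe_sqrt ~ ., data = env4)  # SST, SSS, clay %, phosphate

    # Forward selection, testing each added variable with 999 permutations
    sel <- ordistep(rda_null, scope = formula(rda_full),
                    direction = "forward", permutations = 999)
    sel$anova   # per-variable F and p values, analogous to Table 3B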
Reviewer: Kenji Matsuzaki
Basic reporting
The authors provide a good report on changes in radiolarian assemblages in the Yellow Sea and the East China Sea, and they try to explain the differences in radiolarian assemblages by the different ecology of the local water masses (salinity, temperature, ...). Thus, it is an interesting report.
Experimental design
The design is good and follows standard methodology, and the authors made a considerable effort with the statistical analysis. This is also good.
Validity of the findings
The findings are good, and so is the method. However, I have seen some discrepancies in the interpretation of Tetrapyle within the manuscript, and there are also some statistical concerns.
I wonder whether the authors checked for multicollinearity between the environmental variables prior to the other statistical analyses. I suggest revising this issue, as it can strongly affect the discussion.
The relationship between Tetrapyle and salinity should be revised, as two opposite interpretations are proposed in the manuscript, which may lead to confusion.
There is one important taxonomic error that may affect the discussion. The S. resurgens shown by the authors is S. biconcavus. This is a concern because S. resurgens inhabits temperate water while S. biconcavus inhabits warm water. In addition, in the present manuscript this species group is a dominant species, so the interpretation may have to change. I suggest carefully checking the Spongodiscus data (re-counting just this group) and revising it.
Therefore, I think major revision is most suitable at this time, because there are some important things to clarify about Tetrapyle ecology, and the authors should clarify whether salinity and temperature are collinear.
Comments for the author
MS number: 44305
Authors: Hanxue Qu et al.,
The authors discuss changes in radiolarian assemblages in the marginal seas of the Northwest Pacific, focusing on the Yellow Sea and East China Sea (southern and northern parts). Their results are interesting and helpful for the community of radiolarian paleontologists. However, there are several concerns about the data interpretation, which must be fixed before publication. Thus I recommend major revisions.
Major concerns:
1/ The authors investigated changes in radiolarian assemblages in the East China Sea and in the Yellow Sea, with many samples collected from the continental shelf. This area has shallow water depths, and thus the preservation of radiolarians is not good. As shown in the supplements, there are many stations where the radiolarian assemblage is composed of fewer than 100 specimens. The authors excluded those samples from their study, which is good. However, I do not think it is meaningful to estimate diversity indices in such an area, because a sample in which 300 specimens were counted will automatically yield a higher diversity than one in which only 100 were counted. Thus the proposed index is highly biased. It would be good to state this assumption and show that you are aware of this issue.
Reply: Thanks for your suggestion. We tried to find a balance between the reliability of the results and the limitation of insufficient radiolarian tests. In order to reduce the bias caused by different counting numbers, 100 tests were randomly subsampled for each site from the data of all individuals at that site using the rrarefy() function in the vegan package in R. The diversity indices (S and H') were calculated twice, on the original data containing all tests and on the subsampled data containing 100 tests, to reduce the inaccuracy of the diversity analysis. The diversity indices in both cases showed the same tendency of ECSSR > ECSMR > ECSNR.
The sentences in the Statistical processing subsection of the Materials & Methods are changed to “To ensure a credible estimate of diversity indices, which may be biased by different counting numbers, the specimens of radiolarians in each sample were randomly subsampled and normalized to an equal size of 100 tests using the rrarefy() function in the vegan package in R. For each site, S and H' were calculated both for the sample containing all tests and for the subsample containing 100 tests.”
The sentences in the Results are changed to “According to Table 1, there is a significant difference in radiolarian abundance among the three regions (ANOVA, p = 0.001). Diversity indices, including S and H', displayed an overall ranking of ECSSR > ECSMR > ECSNR both in samples (S, Kruskal-Wallis test, p = 0.000; H', ANOVA, p = 0.000) and subsamples (Ssub, ANOVA, p = 0.000; H'sub, ANOVA, p = 0.000).”
Table 1 The average values and standard errors (mean ± SE) of abundance and diversity indices in different regions (ECSNR, ECSMR, ECSSR).
Diversity index   ECSNR (n = 9)     ECSMR (n = 7)     ECSSR (n = 7)
N                 811 ± 121a        2776 ± 463b       2729 ± 770c
S                 21 ± 1a           38 ± 1b           48 ± 5b
H'                1.35 ± 0.10a      1.61 ± 0.13b      2.65 ± 0.08c
Ssub              11 ± 1a           16 ± 1b           26 ± 2c
H'sub             1.22 ± 0.11a      1.35 ± 0.13b      2.43 ± 0.10c
Different lowercase letters a, b and c indicate significant differences among regional groups. Abbreviations: N, abundance (tests.(100g)-1); S, species number; H' (loge), Shannon-Wiener's index; Ssub, species number of subsamples; H'sub (loge), Shannon-Wiener's index of subsamples.
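For clarity, a minimal R sketch of the subsampling and diversity workflow described above is given here. It is illustrative only, not the exact script used for the revision; the site-by-taxon count matrix 'counts' and the grouping factor 'region' are assumed names.

# Minimal sketch, assuming 'counts' is a site-by-taxon matrix of raw test counts
# and 'region' is a factor with levels ECSNR, ECSMR and ECSSR.
library(vegan)

S_full <- specnumber(counts)                      # species number on all tests
H_full <- diversity(counts, index = "shannon")    # Shannon-Wiener H' (loge)

set.seed(1)                                       # reproducible subsampling
sub100 <- rrarefy(counts, sample = 100)           # normalize each site to 100 tests
S_sub  <- specnumber(sub100)
H_sub  <- diversity(sub100, index = "shannon")

# Regional comparisons as reported above (ANOVA or Kruskal-Wallis test).
summary(aov(H_sub ~ region))
kruskal.test(S_full ~ region)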
2/ The authors made an effort to carry out statistical analyses, which is good. However, the text was ambiguous, and I did not understand whether the authors checked the multicollinearity between the environmental variables they used (SSS, SST, grain size, chlorophyll, etc.). This stage is crucial, as the authors want to explain changes in radiolarian assemblages based on the six environmental variables they retained. I recommend first checking whether there is any multicollinearity among the retained variables by providing a table showing the VIF for each one. I particularly want to see whether SST and SSS are collinear. If they are collinear, the proposed discussion should be modified to say that radiolarians seem to be controlled by an SSS-SST package that is hard to dissociate. If they are really independent variables, the discussion in its current form is fine.
Reply: Thanks for your suggestion. In the revised version, the maximum acceptable level of VIF was adjusted to “5” as suggested. Sand percentage, mean grain size, chlorophyll-a, silicate, particulate organic carbon (POC), oxygen, depth, nitrate, and silt percentage were removed from the RDA model step by step, leaving SST, SSS, clay percentage, and phosphate for the RDA. The process is as follows (an illustrative R sketch of the same screening is given after the last VIF table below).
Explanatory variables: SST, SSS, oxygen, phosphate, nitrate, silicate, chlorophyll-a, particulate organic carbon, clay percentage, silt percentage, sand percentage, mean grain size and depth.
N name (weighted) mean stand. dev. inflation factor
1 SPEC AX1 0.0000 1.0182
2 SPEC AX2 0.0000 1.0212
3 SPEC AX3 0.0000 1.0545
4 SPEC AX4 0.0000 1.1662
5 ENVI AX1 0.0000 1.0000
6 ENVI AX2 0.0000 1.0000
7 ENVI AX3 0.0000 1.0000
8 ENVI AX4 0.0000 1.0000
1 Depth 74.7565 16.8072 14.5845
2 SST 21.1497 1.3663 58.0160
3 SSS 33.1070 1.5214 123.1374
4 Silicate 7.0615 1.8636 61.2506
5 Nitrate 2.0589 0.8470 30.4940
6 Phosphate 0.2368 0.0477 23.7822
7 Oxygen 5.2959 0.2116 27.0383
8 Chlor_a 1.2082 1.1019 286.7826
9 POC 168.3448 59.7897 130.6158
10 Sand 0.4770 0.2480 4057.4644
11 Silt 0.3901 0.1932 607.5501
12 Clay 0.1329 0.0624 0.0000
13 Mz 4.4115 1.0372 1839.1205
Note: “Sand” was removed from RDA model.
N name (weighted) mean stand. dev. inflation factor
1 SPEC AX1 0.0000 1.0182
2 SPEC AX2 0.0000 1.0212
3 SPEC AX3 0.0000 1.0545
4 SPEC AX4 0.0000 1.1662
5 ENVI AX1 0.0000 1.0000
6 ENVI AX2 0.0000 1.0000
7 ENVI AX3 0.0000 1.0000
8 ENVI AX4 0.0000 1.0000
1 Depth 74.7565 16.8072 14.5845
2 SST 21.1497 1.3663 58.0160
3 SSS 33.1070 1.5214 123.1374
4 Silicate 7.0615 1.8636 61.2506
5 Nitrate 2.0589 0.8470 30.4940
6 Phosphate 0.2368 0.0477 23.7822
7 Oxygen 5.2959 0.2116 27.0383
8 Chlor_a 1.2082 1.1019 286.7826
9 POC 168.3448 59.7897 130.6157
11 Silt 0.3901 0.1932 897.1188
12 Clay 0.1329 0.0624 256.8089
13 Mz 4.4115 1.0372 1839.1206
Note: “Mz” was removed from RDA model.
N name (weighted) mean stand. dev. inflation factor
1 SPEC AX1 0.0000 1.0186
2 SPEC AX2 0.0000 1.0212
3 SPEC AX3 0.0000 1.0547
4 SPEC AX4 0.0000 1.1657
5 ENVI AX1 0.0000 1.0000
6 ENVI AX2 0.0000 1.0000
7 ENVI AX3 0.0000 1.0000
8 ENVI AX4 0.0000 1.0000
1 Depth 74.7565 16.8072 14.5177
2 SST 21.1497 1.3663 57.6240
3 SSS 33.1070 1.5214 110.6693
4 Silicate 7.0615 1.8636 51.8251
5 Nitrate 2.0589 0.8470 10.4252
6 Phosphate 0.2368 0.0477 23.7788
7 Oxygen 5.2959 0.2116 21.9723
8 Chlor_a 1.2082 1.1019 257.8645
9 POC 168.3448 59.7897 94.0426
11 Silt 0.3901 0.1932 20.4520
12 Clay 0.1329 0.0624 14.5212
Note: “Chlor_a” was removed from RDA model.
N name (weighted) mean stand. dev. inflation factor
1 SPEC AX1 0.0000 1.0207
2 SPEC AX2 0.0000 1.0221
3 SPEC AX3 0.0000 1.0542
4 SPEC AX4 0.0000 1.1701
5 ENVI AX1 0.0000 1.0000
6 ENVI AX2 0.0000 1.0000
7 ENVI AX3 0.0000 1.0000
8 ENVI AX4 0.0000 1.0000
1 Depth 74.7565 16.8072 11.7253
2 SST 21.1497 1.3663 16.5240
3 SSS 33.1070 1.5214 12.1269
4 Silicate 7.0615 1.8636 49.9732
5 Nitrate 2.0589 0.8470 8.8455
6 Phosphate 0.2368 0.0477 22.9917
7 Oxygen 5.2959 0.2116 17.0672
9 POC 168.3448 59.7897 35.0952
11 Silt 0.3901 0.1932 14.2892
12 Clay 0.1329 0.0624 11.0195
Note: “Silicate” was removed from RDA model.
N name (weighted) mean stand. dev. inflation factor
1 SPEC AX1 0.0000 1.0210
2 SPEC AX2 0.0000 1.0291
3 SPEC AX3 0.0000 1.0686
4 SPEC AX4 0.0000 1.1787
5 ENVI AX1 0.0000 1.0000
6 ENVI AX2 0.0000 1.0000
7 ENVI AX3 0.0000 1.0000
8 ENVI AX4 0.0000 1.0000
1 Depth 74.7565 16.8072 10.1678
2 SST 21.1497 1.3663 13.3485
3 SSS 33.1070 1.5214 8.6591
5 Nitrate 2.0589 0.8470 8.7642
6 Phosphate 0.2368 0.0477 4.8862
7 Oxygen 5.2959 0.2116 17.0049
9 POC 168.3448 59.7897 20.9039
11 Silt 0.3901 0.1932 12.7067
12 Clay 0.1329 0.0624 10.1297
Note: “POC” was removed from RDA model.
N name (weighted) mean stand. dev. inflation factor
1 SPEC AX1 0.0000 1.0245
2 SPEC AX2 0.0000 1.0286
3 SPEC AX3 0.0000 1.0669
4 SPEC AX4 0.0000 1.1904
5 ENVI AX1 0.0000 1.0000
6 ENVI AX2 0.0000 1.0000
7 ENVI AX3 0.0000 1.0000
8 ENVI AX4 0.0000 1.0000
1 Depth 74.7565 16.8072 7.0435
2 SST 21.1497 1.3663 12.6453
3 SSS 33.1070 1.5214 6.1633
5 Nitrate 2.0589 0.8470 6.7992
6 Phosphate 0.2368 0.0477 4.8774
7 Oxygen 5.2959 0.2116 16.7016
11 Silt 0.3901 0.1932 10.3918
12 Clay 0.1329 0.0624 8.1351
Note: “Oxygen” was removed from RDA model.
N name (weighted) mean stand. dev. inflation factor
1 SPEC AX1 0.0000 1.0255
2 SPEC AX2 0.0000 1.0342
3 SPEC AX3 0.0000 1.1303
4 SPEC AX4 0.0000 1.1837
5 ENVI AX1 0.0000 1.0000
6 ENVI AX2 0.0000 1.0000
7 ENVI AX3 0.0000 1.0000
8 ENVI AX4 0.0000 1.0000
1 Depth 74.7565 16.8072 7.0398
2 SST 21.1497 1.3663 4.2084
3 SSS 33.1070 1.5214 5.3986
5 Nitrate 2.0589 0.8470 5.8309
6 Phosphate 0.2368 0.0477 4.4523
11 Silt 0.3901 0.1932 6.5596
12 Clay 0.1329 0.0624 5.2944
Note: “Depth” was removed from RDA model.
N name (weighted) mean stand. dev. inflation factor
1 SPEC AX1 0.0000 1.0256
2 SPEC AX2 0.0000 1.0341
3 SPEC AX3 0.0000 1.1250
4 SPEC AX4 0.0000 1.2221
5 ENVI AX1 0.0000 1.0000
6 ENVI AX2 0.0000 1.0000
7 ENVI AX3 0.0000 1.0000
8 ENVI AX4 0.0000 1.0000
2 SST 21.1497 1.3663 2.5470
3 SSS 33.1070 1.5214 4.3073
5 Nitrate 2.0589 0.8470 5.5778
6 Phosphate 0.2368 0.0477 4.3549
11 Silt 0.3901 0.1932 5.5411
12 Clay 0.1329 0.0624 4.9064
Note: “Nitrate” was removed from RDA model.
N name (weighted) mean stand. dev. inflation factor
1 SPEC AX1 0.0000 1.0357
2 SPEC AX2 0.0000 1.0314
3 SPEC AX3 0.0000 1.1896
4 SPEC AX4 0.0000 1.1990
5 ENVI AX1 0.0000 1.0000
6 ENVI AX2 0.0000 1.0000
7 ENVI AX3 0.0000 1.0000
8 ENVI AX4 0.0000 1.0000
2 SST 21.1497 1.3663 2.4296
3 SSS 33.1070 1.5214 3.7358
6 Phosphate 0.2368 0.0477 2.6941
11 Silt 0.3901 0.1932 5.3635
12 Clay 0.1329 0.0624 4.7633
Note: “Silt” was removed from RDA model.
N name (weighted) mean stand. dev. inflation factor
1 SPEC AX1 0.0000 1.0473
2 SPEC AX2 0.0000 1.0303
3 SPEC AX3 0.0000 1.2257
4 SPEC AX4 0.0000 1.2951
5 ENVI AX1 0.0000 1.0000
6 ENVI AX2 0.0000 1.0000
7 ENVI AX3 0.0000 1.0000
8 ENVI AX4 0.0000 1.0000
2 SST 21.1497 1.3663 1.8592
3 SSS 33.1070 1.5214 2.7648
6 Phosphate 0.2368 0.0477 2.6933
12 Clay 0.1329 0.0624 1.0194
Note: All of the VIF values are less than 5, thus VIF selection finished.
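The stepwise screening above was produced with CANOCO; the following is only a hedged R sketch of an equivalent VIF-based screening, assuming a data frame 'env' that holds the thirteen candidate variables (one row per station). The retained column names at the end are likewise assumptions for illustration.

# Hedged sketch of VIF screening in R (not the CANOCO run shown above).
library(usdm)

screen <- vifstep(env, th = 5)   # iteratively drops the variable with the largest VIF
screen                           # lists excluded variables and the final VIFs (< 5)

# Keep the retained variables for the subsequent RDA (column names assumed).
env_kept <- env[, c("SST", "SSS", "Clay", "Phosphate")]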
The results of the RDA showed that SST and SSS were the primary environmental variables. The VIF value for each variable is displayed in Table 3B. The VIF values varied from 1.02 to 2.76, indicating little collinearity among the four environmental variables.
Table 3B Conditional effects of the total environmental variables in the RDA with the significant variables in bold.
Conditional Effects
Variable     VIF     LambdaA   % contribution to canonical eigenvalues   p       F
SST          1.86    0.14      30%                                       0.004   3.34
SSS          2.76    0.24      52%                                       0.001   8.01
Clay%        1.02    0.05      11%                                       0.086   1.43
Phosphate    2.69    0.03      6%                                        0.301   1.16
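As a companion to Tables 3A and 3B, a hedged R sketch of the RDA with Monte Carlo forward selection is given below. It assumes 'spp' (a site-by-taxon matrix of relative abundances for the SIMPER-selected species) and 'env_kept' (a data frame of SST, SSS, clay percentage and phosphate per station); the actual analysis was run in CANOCO 4.5, so this is an illustration of the same steps in vegan rather than the original workflow.

# Hedged sketch of the RDA with permutation tests ('spp' and 'env_kept' are assumed objects).
library(vegan)

spp_t <- sqrt(spp)                                # square-root transform the species data

rda_full <- rda(spp_t ~ ., data = env_kept)
summary(rda_full)                                 # eigenvalues and explained variance (cf. Table 3A)

# Forward selection of variables with Monte Carlo permutations.
rda_null <- rda(spp_t ~ 1, data = env_kept)
ordiR2step(rda_null, scope = formula(rda_full), permutations = 999)

# Conditional (term-wise) significance of each variable (cf. Table 3B).
anova(rda_full, by = "terms", permutations = 999)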
3/ Interpretation of Tetrapyle spp.: The authors say in the Results that Tetrapyle is related to high SSS; however, in the Discussion they say that Tetrapyle is dominant in an area where fresh water from the Yangtze River (Changjiang) discharge mixes with the Kuroshio Current. This is strange, as fresh water will diminish the SSS, so if Tetrapyle is a high-SSS marker it should decrease there. I think a careful check of the collinearity between SSS and SST must be done, and a more comprehensive review of the spatio-temporal distribution of Tetrapyle is also needed. Indeed, the authors also claim that Tetrapyle is a marker of the cold water tongue. This is also a strange statement, as all the papers published until now have shown that Tetrapyle % increased during interglacial periods of the Quaternary in the Japan Sea and in the East China Sea. Please check the papers below:
Matsuzaki, K.M., Itaki, T. and Tada, R., 2019., Paleoceanographic changes in the Northern East China Sea during the last 400 kyr as inferred from radiolarian assemblages (IODP Site U1429). Progress in Earth and Planetary Science, v. 6., no.22, pp. 1-21
Itaki, T., Sagawa, T., and Kubota, Y., 2018. Data report: Pleistocene radiolarian biostratigraphy, IODP Expedition 346 Site U1427. In Tada, R., Murray, R.W., Alvarez Zarikian, C.A., and the Expedition 346 Scientists, Proceedings of the Integrated Ocean Drilling Program, 346: College Station, TX (Integrated Ocean Drilling Program). doi:10.2204/iodp.proc.346.202.2018
Thus re-thinking is probably needed.
Reply: Thanks for your suggestion.
1) Tetrapyle octacantha versus SST: Tetrapyle octacantha is a typical warm-water species, as it nearly disappears at high latitudes (Boltovskoy, 1987; Boltovskoy et al., 2010), yet it has also been reported to be negatively associated with SST. For example, Chang et al. (2003) showed that T. octacantha is associated with water from the ECS shelf area, which is characterized by lower temperature; this is in accordance with Itaki, Kimoto & Hasegawa (2010). Zhang et al. (2009) found that the frequency of T. octacantha was negatively correlated with SST and that it can serve as a proxy for upwelling conditions, and Welling & Pisias (1998) reported it as the dominant species during the cold tongue period of the 1992 ENSO. In the RDA plot, Tetrapyle octacantha showed a negative connection with SST. We infer that it is well adapted to relatively lower temperature and thus showed high abundance in the ECSNR samples.
Boltovskoy D. 1987. Sedimentary record of radiolarian biogeography in the equatorial to Antarctic western Pacific Ocean. Micropaleontology 33:267-281. https://doi.org/10.2307/1485643
Boltovskoy D, Kling SA, Takahashi K, Bjørklund K. 2010. World atlas of distribution of recent polycystina (Radiolaria). Palaeontologia Electronica 13.
Chang FM, Zhuang LH, Li TG, Yan J, Cao QY, Cang SX. 2003. Radiolarian fauna in surface sediments of the northeastern East China Sea. Marine Micropaleontology 48:169-204. https://doi.org/10.1016/s0377-8398(03)00016-1
Itaki T, Kimoto K, Hasegawa S. 2010. Polycystine radiolarians in the Tsushima Strait in autumn of 2006. Paleontological Research 14:19-32. https://doi.org/10.2517/1342-8144-14.1.019
Welling LA, Pisias NG. 1998. Radiolarian fluxes, stocks, and population residence times in surface waters of the central equatorial Pacific. Deep-Sea Research Part I: Oceanographic Research Papers 45:639-671.
Zhang L, Chen M, Xiang R, Zhang J, Liu C, Huang L, Lu J. 2009. Distribution of polycystine radiolarians in the northern South China Sea in September 2005. Marine Micropaleontology 70:20-38. https://doi.org/10.1016/j.marmicro.2008.10.002
2) Tetrapyle octacantha versus SSS: Based on the correlation analysis, the relative abundance of T. octacantha had a significant positive correlation with SSS (n = 23, r = 0.476, p = 0.022). However, a special station with the highest Shannon-Wiener's index (3.2 in both the original sample and the subsample) was noticed, namely station 3000-1. This station is marked by a yellow cross in Reply figure 1 and is located at the Changjiang estuary. It had the lowest salinity (26.6 psu) and the lowest percentage of T. octacantha (14.8%). After removing 3000-1, no significant correlation remained between SSS and the relative abundance of T. octacantha (n = 22, r = -0.027, p = 0.906). Therefore, we have revised the text to “The response of T. octacantha to SSS was unclear, though it showed a positive relationship with SSS in the RDA plot (Fig. 6B). A special station with the highest Shannon-Wiener's index (3.2 in both the original sample and the subsample) was noticed, namely station 3000-1 (Fig. 3), which is located at the Changjiang estuary. In our study, it had the lowest salinity (26.6 psu) and the lowest percentage of T. octacantha (14.8%). After removing 3000-1, no significant correlation remained between SSS and the relative abundance of T. octacantha (n = 22, r = -0.027, p = 0.906).”
Reply figure 1 Distribution pattern of SSS (psu, A) and Tetrapyle octacantha (%, B).
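A minimal R sketch of this sensitivity check is shown below; the data frame 'df', its columns 'SSS' and 'Toct', and the station row names are assumed names for illustration only.

# Pearson correlation between SSS and T. octacantha relative abundance,
# with and without the Changjiang estuary station 3000-1 ('df' is an assumed data frame).
cor.test(df$SSS, df$Toct, method = "pearson")                  # all 23 stations

df_no_est <- df[rownames(df) != "3000-1", ]                    # drop station 3000-1
cor.test(df_no_est$SSS, df_no_est$Toct, method = "pearson")    # remaining 22 stations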
4/ There is a major taxonomic error concerning S. resurgens. The photo shown here indicates that what the authors call S. resurgens is S. biconcavus. This is a concern because S. resurgens is more likely a temperate to cold water species, while S. biconcavus inhabits warm areas. I recommend carefully checking the counts of the Spongodiscus species and re-thinking the interpretation.
Reply: Thanks for your valuable suggestion. We have re-checked the slides and re-counted the Spongodiscus group. The interpretation of S. resurgens has been corrected accordingly in the revised version.
Minor concerns:
L.29-31: Should be re-phrased. I do not understand the meaning.
Reply: Thanks for your suggestion. The sentence is changed to “The gradients of temperature, salinity, and species diversity reflect the powerful influence of the Kuroshio Current in the study area.”
L. 35: I suggest checking the paper by Suzuki (2016, The statistic information on radiolarian studies based on PaleoTax for Windows, a synonym database: Fossil, 99, 15-31). There are more than 1000 species in the modern ocean.
Reply: Thanks for your suggestion. The sentence is changed to “Polycystine Radiolaria (hereafter Radiolaria), with a high diversity of 1192 Cenozoic fossil to Recent species, are a crucial group of marine planktonic protists (Lazarus et al., 2015; Suzuki, 2016).”
L. 37: I also suggest having a look at the Lombari and Boden (1985) atlas. Reference: Lombari, G., G. Boden (1985), Modern radiolarian global distributions. Cushman Foundation for Foraminiferal Research, Special Publication 16A, 68-69.
Reply: Thanks for your suggestion. We have added this reference in the revised version. The sentence is changed to “Living Radiolaria are widely distributed throughout the shallow-to-open oceans (Lombari & Boden, 1985; Wang, 2012), and a proportion of their siliceous skeletons settle on the seafloor after death (Takahashi, 1981; Yasudomi et al., 2014).”
L. 38: This is not a representative paper for radiolarian dissolution. I suggest checking:
Takahashi K, (1981) Vertical flux, ecology and dissolution of Radiolaria in tropical oceans: implications for the silica cycle (Doctoral dissertation, Massachusetts Institute of Technology and Woods Hole Oceanographic Institution). DOI:10.1575/1912/2420, https://hdl.handle.net/1912/2420
Reply: Thanks for your suggestion. We have checked this reference and revised as “Living Radiolaria are widely distributed throughout the shallow-to-open oceans (Lombari & Boden, 1985; Wang, 2012), and a proportion of their siliceous skeletons settle on the seafloor after death (Takahashi, 1981; Yasudomi et al., 2014).”
L. 40: There are more papers on this topic showing good schemes; see also:
Abelmann, A., Nimmergut, A. (2005). Radiolarians in the Sea of Okhotsk and their ecological implication for paleoenvironmental reconstructions. Deep Sea Research Part II: Topical Studies in Oceanography, 52(16), 2302-2331.
Reply: Thanks for your suggestion. We have added this reference in the revised version. The sentence is changed to “The distribution of Radiolaria in a given region is associated with the pattern of water mass, such as temperature, salinity and nutrients (Abelmann & Nimmergut, 2005; Anderson, 1983; Hernández‐Almeida et al., 2017).”
L. 49-51: I did not understand this well. The KC does not enter the Yellow Sea, but the coastal warm current that bifurcates from the Taiwan Warm Current does. Perhaps also look at the book below, where the oceanography is described accessibly for a general scientific audience:
Tomczak, M., Godfrey, J.S. (1994). Regional Oceanography: an Introduction. Pergamon Press, Oxford, 1-422.
Reply: Sorry for the ambiguous expression. It is the Yellow Sea Warm Current (one derivative branch of the KC), rather than the KC itself, that dominates in the Yellow Sea. According to Hsueh (2000), north of 28°N the flow separation of the Kuroshio leads to the formation of the Tsushima Current and the Yellow Sea Warm Current. Also, Tomczak & Godfrey (2001) described the advection of warm saline Kuroshio water by the Yellow Sea Warm Current. The sentences have been changed to “The Kuroshio Current and its derivative branch-the Taiwan Warm Current (TWC), form the main circulation systems in the ECS shelf area, while the Yellow Sea Warm Current, one derivative branch of the Kuroshio Current, dominates in the YS shelf area (Hsueh, 2000; Tomczak & Godfrey, 2001).”
L.68-72: What you say is true, but Chen & Wang (1982) and Tan & Su (1982) are both taxonomically excellent, so it may be good to say that they are outstanding taxonomic works.
Reply: Thanks for your suggestion. We have revised it to “They summarize the distribution patterns of the dominant species and the environmental conditions that affect the composition of radiolarian fauna in the ECS in their excellent taxonomic works. On the basis of these valuable works, we rigorously investigate the relationships between radiolarians and environmental variables. In addition, the extent to which the ECS and YS are influenced by the Kuroshio Current and its derivative branches is a particular focus of this study.”
L.73-77: I recommend focusing on clarifying the changes in radiolarian assemblages between the two seas, which can be explained by changes in SST, SSS, etc. It is better to be more focused on your assemblage data; otherwise, we get lost in your objectives.
Reply: Thanks for your suggestion. We have added detailed analysis of the YS assemblages in the Results and Discussion parts as follows.
A. Results:
The radiolarian assemblages in the YS shelf area
In general, radiolarians showed a quite low abundance in the YS, as no tests were found in 15 samples (Fig. 4). For the remaining 10 samples, only 49 tests were counted in total, belonging to 21 species taxa. The radiolarian abundance for the 25 samples of the YS ranged from 0 tests.(100g)-1 to 91 tests.(100g)-1, and the species number ranged from 0 to 12. Based on the abundance data, T. octacantha (17.4%), Spongodiscus sp. (10.9%), Didymocyrtis tetrathalamus (9.1%), Acrosphaera spinosa (6.1%), and P. pylonium (6.1%) were the five most abundant species taxa in the YS, accounting for 49.7% of the total assemblages.
B. Discussion:
The radiolarian assemblages in the YS shelf area
Based on our results, although radiolarian assemblages varied greatly between the YS and ECS, there are some common species, as all 21 radiolarian species found in the YS can also be found in the ECS; that is, no endemic species were observed in the YS. The five most abundant species taxa, except Spongodiscus sp., have been reported as typical warm-water species (Chang et al., 2003; Chen et al., 2008; Matsuzaki & Itaki, 2017), suggesting a warm-water origin of the radiolarians in the YS.
As a semi-enclosed marginal sea mostly shallower than 80 m, the YS is influenced by a continuous circulation, primarily composed of the Yellow Sea Warm Current and the China Coastal Current (UNEP, 2005). The mean values of SST and SSS in the YS are 15°C and 32 psu, respectively (Fig. 1), making it quite difficult for radiolarians to survive and proliferate. For the surface sediments of the YS in our study, only a small number of radiolarians were detected at the margin of the YS shelf area, whereas no radiolarians were detected at the 15 sites within the central YS (Fig. 4B). For the planktonic samples in the southern YS, low radiolarian stocks were also reported previously (Tan & Chen, 1999). Sporadic radiolarians were documented only in winter, with radiolarian stocks of less than 200 tests.m-3 (Tan & Chen, 1999). We thus infer that the radiolarians in the YS (Fig. 4) were probably introduced by the Yellow Sea Warm Current and transported by the China Coastal Current. Whether the absence of radiolarians in the central YS is controlled by the Yellow Sea Cold Water Mass remains unclear and needs future investigation.
L. 88: Canada gum-> Canada balsam
Reply: Thanks for your suggestion. We have corrected as “Canada balsam”.
L. 100-101: Is that a 1° interpolation?
Reply: Thanks for your suggestion. The data from the CARS2009 dataset are at a 0.5° resolution, and the data from the Oceancolor dataset are at a 9 km resolution. The sentences are changed to “The values of annual temperature (SST), salinity (SSS), oxygen, phosphate, nitrate, and silicate of sea surface with a 0.5° resolution for the period of 1930 to 2009 were derived from the CARS2009 dataset (Ridgway & Dunn, 2002). The sea surface chlorophyll-a and particulate organic carbon with a 9 km resolution for the period of 1997 to 2010 were obtained from https://oceancolor.gsfc.nasa.gov/l3/.”
L. 106-114: See my major concern 1 above.
Reply: Thanks for your suggestion. We have revised as suggested.
The sentences in the Statistical processing subsection of the Materials & Methods are changed to “To ensure a credible estimate of diversity indices, which may be biased by different counting numbers, the specimens of radiolarians in each sample were randomly subsampled and normalized to an equal size of 100 tests using the rrarefy() function in the vegan package in R. For each site, S and H' were calculated both for the sample containing all tests and for the subsample containing 100 tests.”
The sentences in the Results are changed to “According to Table 1, there is a significant difference in radiolarian abundance among the three regions (ANOVA, p = 0.001). Diversity indices, including S and H', displayed an overall ranking of ECSSR > ECSMR > ECSNR both in samples (S, Kruskal-Wallis test, p = 0.000; H', ANOVA, p = 0.000) and subsamples (Ssub, ANOVA, p = 0.000; H'sub, ANOVA, p = 0.000).”
Table 1 The average values and standard errors (mean ± SE) of abundance and diversity indices in different regions (ECSNR, ECSMR, ECSSR).
Diversity index   ECSNR (n = 9)     ECSMR (n = 7)     ECSSR (n = 7)
N                 811 ± 121a        2776 ± 463b       2729 ± 770c
S                 21 ± 1a           38 ± 1b           48 ± 5b
H'                1.35 ± 0.10a      1.61 ± 0.13b      2.65 ± 0.08c
Ssub              11 ± 1a           16 ± 1b           26 ± 2c
H'sub             1.22 ± 0.11a      1.35 ± 0.13b      2.43 ± 0.10c
Different lowercase letters a, b and c indicate significant differences among regional groups. Abbreviations: N, abundance (tests.(100g)-1); S, species number; H' (loge), Shannon-Wiener's index; Ssub, species number of subsamples; H'sub (loge), Shannon-Wiener's index of subsamples.
L. 118: it is better to add 'for normalize the dataset' after square root.
Reply: Thanks for your suggestion. We have revised it to “The percentage data of the relative abundance were square-root transformed to normalize the dataset”.
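The clustering itself was performed in PRIMER 6.0; purely as an illustration, the same transformation and group-average clustering can be sketched in R as below, assuming 'rel_abund' is a site-by-taxon matrix of relative abundances (%).

# Illustrative R sketch only (the study used PRIMER 6.0); 'rel_abund' is an assumed matrix.
library(vegan)

rel_sqrt <- sqrt(rel_abund)                      # square-root transform to normalize the dataset
bray     <- vegdist(rel_sqrt, method = "bray")   # Bray-Curtis resemblance matrix

clus <- hclust(bray, method = "average")         # group-average linkage
plot(clus, hang = -1)                            # dendrogram; regional groups read at ~60% similarity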
L. 128-129: I think this is related to my major comment 2. Does the VIF mean that you checked the multicollinearity between the environmental variables? If so, please show all the results in the main text and explain why you took a threshold of VIF > 10; usually it is 5, based on Lomax, R. G., & Hahs-Vaughn, D. L. (2012). An introduction to statistical concepts (3rd ed.). Routledge/Taylor & Francis Group.
Reply: Thanks for your suggestion. We reset the maximum acceptable level of the VIF values to “5” and revised the Results and Discussion parts accordingly. Because the complete results of the RDA, especially the VIF test, are quite lengthy, we submit Supplemental Article 1, which contains the detailed procedures of the RDA and the other statistical methods.
L. 128-129: Please show the data if you want to neglect it.
Reply: Thanks for your suggestion. Please see short results in “Reply to major comments 2”, or see detailed results in Supplemental Article 1.
L. 156: I understand the meaning, but please re-phrase it in words.
Reply: Thanks for your suggestion. We have changed it to “Radiolarian abundance in surface sediments varied greatly in the study area (Fig. 5B), showing a tendency of ECSMR (2776 tests.(100g)-1) > ECSSR (1776 tests.(100g)-1) > ECSNR (500 tests.(100g)-1) > YSR (8 tests.(100g)-1).”
L. 161-163: Again, I am not sure how suitable the diversity index is in this context.
Reply: Thanks for your suggestion. We have revised as suggested.
The sentences in the Statistical processing subsection of the Materials & Methods are changed to “To ensure a credible estimate of diversity indices, which may be biased by different counting numbers, the specimens of radiolarians in each sample were randomly subsampled and normalized to an equal size of 100 tests using the rrarefy() function in the vegan package in R. For each site, S and H' were calculated both for the sample containing all tests and for the subsample containing 100 tests.”
The sentences in the Results are changed to “According to Table 1, there is a significant difference in radiolarian abundance among the three regions (ANOVA, p = 0.001). Diversity indices, including S and H', displayed an overall ranking of ECSSR > ECSMR > ECSNR both in samples (S, Kruskal-Wallis test, p = 0.000; H', ANOVA, p = 0.000) and subsamples (Ssub, ANOVA, p = 0.000; H'sub, ANOVA, p = 0.000).”
Table 1 The average values and standard errors (mean ± SE) of abundance and diversity indices in different regions (ECSNR, ECSMR, ECSSR).
Diversity index   ECSNR (n = 9)     ECSMR (n = 7)     ECSSR (n = 7)
N                 811 ± 121a        2776 ± 463b       2729 ± 770c
S                 21 ± 1a           38 ± 1b           48 ± 5b
H'                1.35 ± 0.10a      1.61 ± 0.13b      2.65 ± 0.08c
Ssub              11 ± 1a           16 ± 1b           26 ± 2c
H'sub             1.22 ± 0.11a      1.35 ± 0.13b      2.43 ± 0.10c
Different lowercase letters a, b and c indicate significant differences among regional groups. Abbreviations: N, abundance (tests.(100g)-1); S, species number; H' (loge), Shannon-Wiener's index; Ssub, species number of subsamples; H'sub (loge), Shannon-Wiener's index of subsamples.
L. 171-172 and elsewhere: When you cite a species for the first time in a manuscript, please provide the full taxonomic name, which includes the genus, the species name and the author(s).
Reply: Thanks for your suggestion. We have carefully checked and corrected throughout the manuscript.
L. 196: see major comments 3.
Reply: Thanks for your suggestion. We re-ran the RDA with the VIF threshold changed to “5”; SST and SSS were the primary environmental variables influencing the composition of the radiolarian assemblages. Tetrapyle octacantha was still related to higher SSS and lower SST. The relationship between T. octacantha and SST/SSS is fully discussed in the Discussion.
L. 202-204: A similar thing has been said by Matsuzaki et al. (2016, Mar. Micro.); please also check it.
Reply: Thanks for your suggestion. We have checked and added it to the revised manuscript.
L. 211: How is the water depth influencing it? It may be good to consider it as well.
Reply: Thanks for your suggestion. No tests were found in 15 of the 25 YS samples (Fig. 4). For the remaining 10 samples, we conducted a correlation analysis; the results showed no significant correlation (n = 10, p > 0.5) between absolute abundance/species number and the environmental variables (SST, SSS, depth, CHL, POC, sand%, silt%, clay%, mean grain size, silicate, nitrate, phosphate, and oxygen). Therefore, we infer that the radiolarians in the YS shelf area were probably introduced by the Yellow Sea Warm Current and transported by the China Coastal Current.
L. 218: Silt percentage mean grain size? if so it may be better to use the term grain size.
Reply: Thanks for your suggestion. Silt percentage is one of the grain-size parameters (silt percentage, clay percentage, sand percentage, and mean grain size). Taking “5” as the VIF threshold, silt percentage was removed, and the Results and Discussion parts were revised accordingly.
L. 222-224: Matsuzaki et al. (2019) also showed that higher diversity was reached during interglacial periods in the East China Sea over the last 400,000 years. This supports your idea.
Reply: Thanks for providing more evidence. We have changed the text to “According to Matsuzaki et al. (2019), the species diversity in the northern ECS was higher during interglacial periods than during glacial periods.”
L. 230-232: See major comment 2.
Reply: Thanks for your suggestion. In the revised version, the maximum acceptable level of VIF was adjusted to “5” as suggested. SST and SSS were the main variables controlling radiolarian composition. In addition, we retained the number of species and Shannon-Wiener's index in the revised version, with Margalef's index, Simpson's index and Pielou's evenness removed. The Results and Discussion parts were revised accordingly.
L. 236-239: What is written here is of interest; however, if grain size affects the diversity as written here, it is likely that dissolution is affecting the assemblages, with coarser grain sizes causing higher dissolution, which may lead to the complete disappearance of radiolarians with weaker SiO2 skeletons. Thus I also suggest considering the dissolution effect.
Reply: Thanks for your suggestion. Taking “5” as the VIF threshold, silt percentage was removed, and the Results and Discussion parts were revised accordingly.
L. 255-256: I suggest checking Matsuzaki et al. (in press) for the Pterocorys %, as they published new living records in the Kuroshio area (Kyushu-Palau Ridge):
Matsuzaki, K. M., Itaki, T. and Sugisaki, S., in press: Polycystine radiolarians vertical distribution in the subtropical Northwest Pacific during Spring 2015 (KS15-4). Paleontological Research. 10.2517/2019PR019
Reply: Thanks for your suggestion. We have changed to “Members of Pterocorys are shallow-water dwellers as reported by Matsuzaki et al. (2019). Pterocorys campanula frequently occurs and dominates in the South China Sea, whereas there are no reports of the dominance of P. campanula in the sediment samples of the ECS (Chen & Tan, 1996; Chen et al., 2008; Hu et al., 2015; Liu et al., 2017a).”
L. 269-273: See major comment 3.
Reply: Thanks for your suggestion. We have added a discussion of the relationship between T. octacantha and SSS as follows: “The response of T. octacantha to SSS was unclear, though it showed a positive relationship with SSS in the RDA plot (Fig. 6B). A special station with the highest Shannon-Wiener's index (3.2 in both the original sample and the subsample) was noticed, namely station 3000-1 (Fig. 3), which is located at the Changjiang estuary. In our study, it had the lowest salinity (26.6 psu) and the lowest percentage of T. octacantha (14.8%). After removing 3000-1, no significant correlation remained between SSS and the relative abundance of T. octacantha (n = 22, r = -0.027, p = 0.906).”
L. 281-287: See major comment 3.
Reply: Thanks for your suggestion. We have revised the text to “The radiolarian assemblages in the shallower sea, i.e., the shelf area of the ECS, displayed distinctly different patterns from those in the open ocean. Tetrapyle octacantha occurred in the extraordinarily high proportion of 59% in the study area (Fig. 7), much higher than ever reported in adjacent areas with deeper waters (Chang et al., 2003; Cheng & Ju, 1998; Liu et al., 2017a; Wang & Chen, 1996). Tetrapyle octacantha, as the most abundant taxon in the subtropical area (Boltovskoy, 1989), shows a high tolerance to temperature (Ishitani et al., 2008). This taxon has been reported to be associated with water from the ECS shelf area (Chang et al., 2003; Itaki, Kimoto & Hasegawa, 2010). Zhang et al. (2009) found that the frequency of T. octacantha was negatively correlated with SST, and Welling & Pisias (1998) concluded that T. octacantha dominated during the cold tongue period. In our study, T. octacantha was negatively related to SST according to the results of the RDA (Fig. 6B), tending to confirm the previous studies. We thus infer that T. octacantha is possibly more resistant to the locally severe temperatures and so reaches a comparatively high abundance in the shelf area. Therefore, T. octacantha can serve as an indicator that depicts the degree of mixing between the colder shelf water and the warm Kuroshio water. The response of T. octacantha to SSS was unclear, though it showed a positive relationship with SSS in the RDA plot (Fig. 6B). A special station with the highest Shannon-Wiener's index (3.2 in both the original sample and the subsample) was noticed, namely station 3000-1 (Fig. 3), which is located at the Changjiang estuary. In our study, it had the lowest salinity (26.6 psu) and the lowest percentage of T. octacantha (14.8%). After removing 3000-1, no significant correlation remained between SSS and the relative abundance of T. octacantha (n = 22, r = -0.027, p = 0.906).”
Taxonomy:
There are some important errors, in particular S. resurgens -> S. biconcavus. The two species clearly live in different habitats, so I think a careful check should be done on the Spongodiscus counts.
Reply: Thanks for your suggestion. We have re-checked the slides and re-counted the Spongodiscus group. The interpretation of S. resurgens has been corrected accordingly in the revised version.
A: This is not a typical D. profunda but rather Dictyocoryne bandaicum (Harting). See Matsuzaki et al. (2015, Marine Micropaleontology).
Reply: Thanks for your suggestion. We have combined the members of Dictyocoryne as the Dictyocoryne group. We have added a sentence in the Materials & Methods as follows: “To reduce counting uncertainty, Dictyocoryne profunda Ehrenberg, Dictyocoryne truncatum (Ehrenberg), and Dictyocoryne bandaicum (Harting) were combined as the Dictyocoryne group.”
E: This is definitely not S. resurgens. This is Spongodiscus biconcavus Haeckel. The very opaque and thick centre, together with the presence of a pylome at the periphery, clearly indicates it. See Matsuzaki et al. (2016).
Reply: Thanks for your valuable suggestion. We have re-checked the slides and re-counted the Spongodiscus group. The interpretation of S. resurgens has been corrected accordingly in the revised version.
G-H: Good, but I would rather say the T. octacantha Müller group, as there is high variability.
Reply: Thanks for your suggestion. We have revised as suggested.
K: This is wrong. This is rather Flustrella polygonia (Popofsky).
See: Matsuzaki, K. M., Suzuki, N., and Nishi, H., 2015, Middle to Upper Pleistocene Polycystine Radiolarians from Hole 902-C9001C, Northwestern Pacific: Paleontological Research, v. 19, no. s1, p. 1-77.
Reply: Thanks for your suggestion. We have checked the reference and revised as suggested.
O: This is wrong. The pores are too big and the shell too rounded for this to be C. huxleyi. I recommend either not naming this species or using sp.
Reply: Thanks for your suggestion. We use Collosphaera sp. in the revised version.
P-Q: This is Pseudocubus obeliscus Haeckel.
Reply: Thanks for your suggestion. We have changed to “Pseudocubus obeliscus Haeckel”.
" | Here is a paper. Please give your review comments after reading it. |
647 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>We analyzed the radiolarian assemblages of 59 surface sediment samples collected from the Yellow Sea and East China Sea of the northwestern Pacific. In the study region, the Kuroshio Current and its derivative branches exerted a crucial impact on radiolarian composition and distribution. Radiolarians in the Yellow Sea shelf showed a quite low abundance as no tests were found in 15 of 25 Yellow Sea samples. Radiolarians in the East China Sea shelf could be divided into three regional groups: the East China Sea north region group, the East China Sea middle region group, and the East China Sea south region group. The results of the redundancy analysis suggested that the Sea Surface Temperature and Sea Surface Salinity were primary environmental variables explaining species-environment relationship. The gradients of temperature, salinity, and species diversity reflect the powerful influence of the Kuroshio Current in the study area.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Polycystine Radiolaria (hereafter Radiolaria), with a high diversity of 1192 Cenozoic fossil to Recent species, are a crucial group of marine planktonic protists <ns0:ref type='bibr' target='#b31'>(Lazarus et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b53'>Suzuki, 2016)</ns0:ref>. Living Radiolaria are widely distributed throughout the shallow-to-open oceans <ns0:ref type='bibr' target='#b37'>(Lombari & Boden, 1985;</ns0:ref><ns0:ref type='bibr' target='#b61'>Wang, 2012)</ns0:ref>, and a proportion of their siliceous skeletons settle on the seafloor after death <ns0:ref type='bibr' target='#b55'>(Takahashi, 1981;</ns0:ref><ns0:ref type='bibr' target='#b70'>Yasudomi et al., 2014)</ns0:ref>. The distribution of Radiolaria in a given region is associated with the pattern of water mass, such as temperature, salinity and nutrients <ns0:ref type='bibr' target='#b0'>(Abelmann & Nimmergut, 2005;</ns0:ref><ns0:ref type='bibr' target='#b2'>Anderson, 1983;</ns0:ref><ns0:ref type='bibr' target='#b21'>Hernández-Almeida et al., 2017)</ns0:ref>. The East China Sea (ECS) and Yellow Sea (YS) are marginal seas of the northwestern Pacific <ns0:ref type='bibr' target='#b66'>(Xu et al., 2011)</ns0:ref>. The two regions are divided by the line connecting the northern tip of the mouth of the Changjiang and the southern tip of the Jeju Island <ns0:ref type='bibr' target='#b30'>(Jun, 2014)</ns0:ref>. Hydrographic conditions of the shelf area of both the ECS and YS, where the depth is generally less than 100 meters, vary remarkably with the season <ns0:ref type='bibr' target='#b47'>(Qi, 2014)</ns0:ref>. Generally, the annual sea surface temperature (SST) and sea surface salinity (SSS) show a decreasing trend from the southeast to northwest in study area (Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>). The Kuroshio Current originates from the Philippine Sea, flows through the ECS, and afterwards forms the Kuroshio Extension <ns0:ref type='bibr' target='#b22'>(Hsueh, 2000;</ns0:ref><ns0:ref type='bibr' target='#b48'>Qiu, 2001)</ns0:ref>. The Kuroshio Current and its derivative branch-the Taiwan Warm Current (TWC), form the main circulation systems in the ECS shelf area, while the Yellow Sea Warm Current, one derivative branch of the Kuroshio Current, dominates in the YS shelf area <ns0:ref type='bibr' target='#b22'>(Hsueh, 2000;</ns0:ref><ns0:ref type='bibr' target='#b59'>Tomczak & Godfrey, 2001</ns0:ref>). In the ECS shelf region's summer (Fig. <ns0:ref type='figure'>2A</ns0:ref>), the Kuroshio subsurface water gradually upwells northwestward from east of Taiwan, and finally reaches 30.5°N off the Changjiang estuary along ~60 m isobaths, forming the Nearshore Kuroshio Branch Current <ns0:ref type='bibr' target='#b67'>(Yang et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b69'>Yang et al., 2011)</ns0:ref>. Meanwhile, the TWC is formed by the mixing of the Taiwan Strait Warm Current and Kuroshio Surface Water <ns0:ref type='bibr' target='#b47'>(Qi, 2014)</ns0:ref>. In winter (Fig. <ns0:ref type='figure'>2B</ns0:ref>), the Kuroshio Surface Water shows relatively intense intrusion as part of the Kuroshio Surface Water northwestward reaches continental shelf area across 100 m isobaths <ns0:ref type='bibr' target='#b73'>(Zhao & Liu, 2015)</ns0:ref>. At this point, the TWC is mainly fed from the Kuroshio Current northeast of Taiwan <ns0:ref type='bibr' target='#b47'>(Qi, 2014)</ns0:ref>.</ns0:p><ns0:p>In the YS shelf region's summer (Fig. 
<ns0:ref type='figure'>2A</ns0:ref>), the Yellow Sea Cold Water Mass, characterized by low temperature, occupies the central low-lying area mostly below the 50 m isobaths while the Yellow Sea Warm Current shows little influence <ns0:ref type='bibr' target='#b18'>(Guan, 1963)</ns0:ref>. In winter (Fig. <ns0:ref type='figure'>2B</ns0:ref>), the impact of the Yellow Sea Warm Current on shelf region is enhanced, while the Yellow Sea Cold Water Mass disappears <ns0:ref type='bibr'>(Weng et al., 1988)</ns0:ref>. The continuous water circulation in the YS is mainly comprised of the Yellow Sea Warm Current and the China Coastal Current (UNEP, 2005). The radiolarian assemblages in surface sediments have been investigated in the ECS whereas there are few reports in the YS. These reports cover the ECS including the Okinawa Trough <ns0:ref type='bibr' target='#b6'>(Chang et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b12'>Cheng & Ju, 1998;</ns0:ref><ns0:ref type='bibr' target='#b62'>Wang & Chen, 1996)</ns0:ref> and continental shelf region extensively <ns0:ref type='bibr' target='#b11'>(Chen & Wang, 1982;</ns0:ref><ns0:ref type='bibr' target='#b56'>Tan & Chen, 1999;</ns0:ref><ns0:ref type='bibr' target='#b57'>Tan & Su, 1982)</ns0:ref>. They summarize the distribution patterns of the dominant species and the environmental conditions that affect the composition of radiolarian fauna in the ECS in their excellent taxonomic works. On the basis of these valuable works, we rigorously investigate the relationships between radiolarians and environmental variables. In addition, to which the ECS and YS are influenced by the Kuroshio Current and its derivative branch are specially focused in this study. The radiolarian data collected from 59 surface sediment samples are associated with environmental variables of the upper water to explore the principal variables explaining radiolarian species composition. The influences of the Kuroshio Current and its derivative branch on radiolarian assemblages in the study area are also considerably discussed.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Sample collection and treatment</ns0:head><ns0:p>The surface sediments were collected at 59 sites (Fig. <ns0:ref type='figure'>3A</ns0:ref>) in the Yellow Sea and East China Sea using a box corer. The sediment samples in the study area were divided into four groups geographically and were labeled the Yellow Sea region (YSR) samples, the ECS north region (ECSNR) samples, the ECS middle region (ECSMR) samples, and the ECS south region (ECSSR) samples. The samples were prepared using the method described by <ns0:ref type='bibr' target='#b9'>Chen et al. (2008)</ns0:ref>. 30% hydrogen peroxide and 10% hydrochloric acid were added to each dry sample to remove organic component and the calcium tests, respectively. Then the treated sample was sieved with a 50 μm sieve and dried in an oven. After flotation in carbon tetrachloride, the cleaned residue was sealed with Canada balsam for radiolarian identification and quantification under a light microscope with a magnification of 200X or 400X. To reduce counting uncertainty, Dictyocoryne profunda Ehrenberg, Dictyocoryne truncatum (Ehrenberg), Dictyocoryne bandaicum (Harting) were combined as Dictyocoryne group. Photographs of some radiolarians encountered in this study are exhibited in Figure <ns0:ref type='figure' target='#fig_2'>4</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Environmental data</ns0:head><ns0:p>Grain size analysis of the surface sediments was conducted with a Laser Diffraction Particle Size Analyzer <ns0:ref type='bibr'>(Cilas 1190, CILAS, Orleans, Loiret, France)</ns0:ref>. The data were used to categorise grain size classes as clay (1-4 μm), silt (4-63 μm) and sand (63-500 μm), and to determine different sediment types according to the Folk classification <ns0:ref type='bibr' target='#b16'>(Folk, Andrews & Lewis, 1970)</ns0:ref>. In addition, the mean grain size was calculated for each site.</ns0:p><ns0:p>The values of annual temperature (SST), salinity (SSS), oxygen, phosphate, nitrate, and silicate of sea surface with a 0.5° resolution for the period of 1930 to 2009 were derived from the CARS2009 dataset <ns0:ref type='bibr' target='#b49'>(Ridgway, Dunn & Wilkin, 2002)</ns0:ref>. The sea surface chlorophyll-a and particulate organic carbon with a 9 km resolution for the period of 1997 to 2010 were obtained from https://oceancolor.gsfc.nasa.gov/l3/. The values of the environmental variables mentioned above for each surface sediment site were estimated by linear interpolation. These values, together with depth, are shown in Supplementary material Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Statistical processing</ns0:head><ns0:p>The minimum number of specimens counted in each sample is customarily 300. However, low radiolarian concentrations are frequent in the shelf type sediments comprised mainly of terrigenous sources <ns0:ref type='bibr' target='#b9'>(Chen et al., 2008)</ns0:ref>. Given small sediment samples, it was difficult to find 300 tests in some sites. According to <ns0:ref type='bibr' target='#b15'>Fatela & Taborda (2002)</ns0:ref>, counting 100 tests allows less than 5% probability of losing those species with a proportion of 3%. Balanced between the insufficient samples and the accuracy of the statistical analysis, the threshold number of radiolarians was adjusted to 100 <ns0:ref type='bibr' target='#b15'>(Fatela & Taborda, 2002;</ns0:ref><ns0:ref type='bibr' target='#b50'>Rogers, 2016)</ns0:ref>. Based on this threshold, 24 samples (Fig. <ns0:ref type='figure'>3B</ns0:ref>) were retained for detailed statistical analysis. Seven of 24 samples had less than 300 tests, containing six ECSNR samples and one ECSSR sample. The proportion of each dominant species in the ECSNR group was higher than 3%, guaranteeing a reliable interpretation of species proportions. We calculated the absolute abundance (tests.(100g) -1 ) and the diversity indices, including the species number (S), Shannon-Wiener's index (H' (log e )). To ensure a creditable estimate of diversity indices, which may be biased by different counting numbers, the specimens of radiolarians in each sample was randomly subsampled and normalized to the equal size of 100 tests by using rrarefy() function in vegan package in R program. For each site, S and H' of sample containing all tests and subsample containing 100 tests were calculated. Relative abundance (%) of each radiolarian taxon was also calculated. Then the hierarchical cluster analysis with group-average linking was applied to analyze the variations of radiolarian assemblage among different regions. The percentage data of the relative abundance was transformed by square root for normalize the dataset. Afterwards, triangular resemblance matrix was constructed based on the Bray-Curtis similarity <ns0:ref type='bibr' target='#b13'>(Clarke & Warwick, 2001)</ns0:ref>. Analysis of similarity (ANOSIM) was employed to determine the differences among different assemblages. Similarity percentage procedure (SIMPER) analysis was used to identify the species that contributed most to the similarities among radiolarian assemblages. Detrended correspondence analysis (DCA) was applied to determine the character of the species data. The gradient length of the first DCA axis was 1.773 < 3, suggesting that redundancy analysis (RDA, linear ordination method) was more suitable than Canonical correspondence analysis (CCA, unimodal ordination method) <ns0:ref type='bibr' target='#b33'>(Lepš & Šmilauer, 2003)</ns0:ref>. RDA was used to evaluate the relationship between environmental variables and radiolarian assemblages identified by SIMPER analysis. The species abundance data was square root transformed before analysis to reduce the effect of extremely high values <ns0:ref type='bibr' target='#b58'>(Ter Braak & Smilauer, 2002)</ns0:ref>. Variance inflation factors (VIF) was calculated to screen the environmental variables with VIF > 5 <ns0:ref type='bibr' target='#b36'>(Lomax & Hahs-Vaughn, 2013)</ns0:ref>. 
Sand percentage, mean grain size, chlorophyll-a, silicate, particulate organic carbon, oxygen, depth, nitrate, and silt percentage were removed from the RDA model step by step, in order to avoid collinearity <ns0:ref type='bibr' target='#b45'>(Naimi et al., 2014)</ns0:ref>. Finally, four variables, SST, SSS, clay percentage, and phosphate, were employed in the RDA. The significant environmental variables were determined by automatic forward selection with Monte Carlo tests (999 permutations). Station DH 8-5 was excluded from the RDA model for lack of environmental data. Correlation analysis was employed to investigate the relationship between the dominant radiolarian taxa and significant environmental variables. The diversity indices calculation, cluster analysis, ANOSIM, and SIMPER were performed by PRIMER 6.0. Correlation analysis was performed by SPSS 20. DCA and RDA were conducted by CANOCO 4.5.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>A total of 137 radiolarian taxa were identified from the surface sediments of study area, including 75 genera, 14 families, and 3 orders The raw radiolarian counting data are in Supplementary material Table <ns0:ref type='table'>2</ns0:ref>. Approximately 91.0% of the species belonged to Spumellaria, accounting for the vast majority of the radiolarian fauna. Nassellaria and Collodaria accounted for 8.4% and 0.6%, respectively. Pyloniidae definitely dominated in the species composition as they occupied approximately 61%; they are followed by Spongodiscidae 18%, and Coccodiscidae 8% (Fig. <ns0:ref type='figure'>5A</ns0:ref>). Radiolarian abundance in surface sediments varied greatly in study area (Fig. <ns0:ref type='figure'>5B</ns0:ref>), showing a tendency of ECSMR (2776 tests.(100g) -1 ) > ECSSR (1776 tests.(100g) -1 ) > ECSNR (500 tests.(100g) -1 ) > YSR (8 tests.(100g) -1 ). The distribution pattern of species number (Fig. <ns0:ref type='figure'>5C</ns0:ref>) was similar to that of the abundance, exhibiting a trend of ECSMR (38 species) > ECSSR (35species) > ECSNR (16 species) > YSR (1 species). The top 9 species taxa, accounting for 79.6% of the total assemblages in the study area, were as follows: Tetrapyle octacantha group Müller (55.6%), Didymocyrtis tetrathalamus (Haeckel) (7.5%), Dictyocoryne group (3.7%), Spongaster tetras Ehrenberg (2.5%), Stylodictya multispina Haeckel (2.2%), Spongodiscus resurgens Ehrenberg (2.2%), Zygocircus piscicaudatus Popofsky (2.1%), Phorticium pylonium Haeckel (2.0%), and Euchitonia furcata Ehrenberg (1.8%).</ns0:p></ns0:div>
<ns0:div><ns0:head>The radiolarian assemblages in the YS shelf area</ns0:head><ns0:p>In general, radiolarians showed a quite low abundance value in the YS, as no tests were found in 15 samples (Fig. <ns0:ref type='figure'>5</ns0:ref>). For the remaining 10 samples, only 49 tests were originally counted, belonging to 21 species taxa. The radiolarian abundance for 25 samples of the YS ranged from 0 tests.(100g) -1 to 91 tests.(100g) -1 , and species number ranged from 0 to 12. Tetrapyle octacantha (17.4%), Spongodiscus sp. (10.9%), Didymocyrtis tetrathalamus (9.1%), Acrosphaera spinosa (6.1%), and P. pylonium (6.1%) were the five most abundant species taxa in the YS, constituting 49.7% of the total assemblages.</ns0:p></ns0:div>
<ns0:div><ns0:head>Selected stations in the ECS shelf area with radiolarian tests ≥ 100</ns0:head><ns0:p>As can be seen in Table <ns0:ref type='table'>1</ns0:ref>, there exists a significant difference in radiolarian abundance between the three regions (ANOVA, p = 0.001). Diversity indices, including S and H', displayed an overall ranking of ECSSR > ECSMR > ECSNR both in samples (S, Kruskal-Wallis Test, p = 0.000; H', ANOVA, p = 0.000) and subsamples (S sub , ANOVA, p = 0.000; H' sub , ANOVA, p = 0.000). Cluster analysis based on the relative abundance classified all but one site into three regional groups at the 60% Bray-Curtis similarity level, including the ECSNR group, ECSMR group and ECSSR group (Fig. <ns0:ref type='figure'>6</ns0:ref>). The significant differences among the three groups were examined by ANOSIM (Global R = 0.769, p = 0.001). The dominant species in each regional group were identified by SIMPER analysis with a cut-off of 50% (Table <ns0:ref type='table'>2</ns0:ref>). Tetrapyle octacantha, Didymocyrtis tetrathalamus, and Spongodiscus resurgens dominated in the ECSNR group, with contribution of 41.70%, 9.79%, and 8.89%, respectively. The radiolarian taxa, including T. octacantha, Didymocyrtis tetrathalamus, Dictyocoryne group, Stylodictya multispina, and Spongodiscus resurgens, contributed most to the ECSMR group. The dominant species in the ECSSR group were composed of T. octacantha, Didymocyrtis tetrathalamus, Dictyocoryne group, Spongaster tetras, Z. piscicaudatus, P. pylonium, Stylodictya multispina, and E. furcata. The first two RDA axes explained 39.9% (RDA1 30.0%, RDA2 9.9%) of the species variance, and 86.5% of the species-environment relation variance (Table <ns0:ref type='table'>3A</ns0:ref>). Forward selection with Monte Carlo test (999 Permutation) revealed that SST and SSS were the most significant environmental variables associated with radiolarian composition (Table <ns0:ref type='table'>3B</ns0:ref>). The RDA plot showed a clear distribution pattern of regional samples (Fig. <ns0:ref type='figure'>7A</ns0:ref>). The ECSNR samples generally occupied the upper left-hand quarter of the ordination, showing a feature of comparatively low SST and low SSS. The ECSMR samples were mostly located in the ordination's centre, suggesting an adaptation to higher values of SST and SSS than the ECSNR samples. The ECSSR samples distributed mainly at lower right-hand quarter, characterized by the highest values of SST and SSS. The dominant species identified by the SIMPER analysis (Table <ns0:ref type='table'>2</ns0:ref>) are displayed in the RDA plot (Fig. <ns0:ref type='figure'>7B</ns0:ref>). Species taxa, including Spongaster tetras, Dictyocoryne group, Z. piscicaudatus, E. furcata, P. pylonium and Stylodictya multispina, were related to high SST, while showed little relationship with SSS. Didymocyrtis tetrathalamus was positively related to SST and SSS. Tetrapyle octacantha showed a preference for high SSS. Additionally, Spongodiscus resurgens was adapted to relatively low SST and SSS.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Generally, the number of radiolarian tests in continental shelf sediments of the ECS and YS is several orders of magnitude lower than that of the adjacent Okinawa Trough <ns0:ref type='bibr' target='#b6'>(Chang et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b12'>Cheng & Ju, 1998)</ns0:ref>. First, owing to continental runoff, coastal waters are characterised by relatively low temperature and salinity, which support few living radiolarians <ns0:ref type='bibr' target='#b11'>(Chen & Wang, 1982;</ns0:ref><ns0:ref type='bibr' target='#b40'>Matsuzaki, Itaki & Kimoto, 2016;</ns0:ref><ns0:ref type='bibr' target='#b57'>Tan & Su, 1982)</ns0:ref>. In addition, the deposition rate in the study area is high, at 0.1-0.8 cm/yr in the YS and 0.1-3 cm/yr in the ECS <ns0:ref type='bibr' target='#b14'>(Dong, 2011)</ns0:ref>, which greatly dilutes the concentration of radiolarian skeletons in the sediments <ns0:ref type='bibr' target='#b6'>(Chang et al., 2003)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>The radiolarian assemblages in the YS shelf area</ns0:head><ns0:p>Although radiolarian assemblages varied greatly between the YS and ECS, the two seas share common species: all 21 radiolarian species found in the YS also occur in the ECS, that is, no species were endemic to the YS. The five most abundant species taxa, except Spongodiscus sp., have been reported as typical warm-water species <ns0:ref type='bibr' target='#b6'>(Chang et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b9'>Chen et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b38'>Matsuzaki & Itaki, 2017)</ns0:ref>, suggesting a warm-water origin of radiolarians in the YS. As a semi-enclosed marginal sea mostly shallower than 80 m, the YS is influenced by a continuous circulation, primarily composed of the Yellow Sea Warm Current and the China Coastal Current <ns0:ref type='bibr' target='#b60'>(UNEP, 2005)</ns0:ref>. The mean values of SST and SSS in the YS are 15°C and 32 psu, respectively (Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>), making it quite difficult for radiolarians to survive and proliferate. In the surface sediments of the YS examined in our study, only a small number of radiolarians were detected at the margin of the YS shelf area, whereas no radiolarians were detected at the 15 sites in the central YS (Fig. <ns0:ref type='figure'>5B</ns0:ref>). Low radiolarian stocks were also reported previously for planktonic samples in the southern YS <ns0:ref type='bibr' target='#b56'>(Tan & Chen, 1999)</ns0:ref>. Sporadic radiolarians were documented only in winter, with standing stocks of less than 200 tests m-3 <ns0:ref type='bibr' target='#b56'>(Tan & Chen, 1999)</ns0:ref>. We thus infer that the radiolarians in the YS (Fig. <ns0:ref type='figure'>5</ns0:ref>) were probably introduced by the Yellow Sea Warm Current and transported by the China Coastal Current. Whether the absence of radiolarians in the central YS is controlled by the Yellow Sea Cold Water Mass remains unclear and needs future investigation.</ns0:p></ns0:div>
<ns0:div><ns0:head>Selected stations in the ECS shelf area with radiolarian tests ≥ 100</ns0:head><ns0:p>In the ECS, the gradients of SST and SSS are controlled by the interaction of the Kuroshio Branch Current, TWC and Changjiang Diluted Water <ns0:ref type='bibr' target='#b67'>(Yang et al., 2012)</ns0:ref>. SST and SSS both increase from north to south, corresponding well with the overall distribution of radiolarians (Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>, Fig. <ns0:ref type='figure'>5</ns0:ref>). As revealed by the RDA, SST was the most significant environmental variable related to radiolarian composition, followed by SSS (Table <ns0:ref type='table'>3B</ns0:ref>). SST is generally regarded as playing an extremely important role in controlling the composition and distribution of radiolarians <ns0:ref type='bibr' target='#b4'>(Boltovskoy & Correa, 2017;</ns0:ref><ns0:ref type='bibr' target='#b21'>Hernández-Almeida et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b24'>Ikenoue et al., 2015)</ns0:ref>. According to <ns0:ref type='bibr' target='#b43'>Matsuzaki, Itaki & Tada (2019)</ns0:ref>, species diversity in the northern ECS was higher during interglacial periods than during glacial periods. The relationship between radiolarian assemblages and SST has long been used to reconstruct past changes in hydrographic conditions <ns0:ref type='bibr' target='#b38'>(Matsuzaki & Itaki, 2017)</ns0:ref>. In this study, SST showed a significant correlation with abundance, species number, and H' (Table <ns0:ref type='table'>4</ns0:ref>), suggesting that higher SST may often correspond to higher diversity. SSS was also crucial for explaining species-environment correlations in the ECS shelf area. Off Western Australia, salinity is a strongly significant determinant of radiolarian species distributions <ns0:ref type='bibr' target='#b50'>(Rogers, 2016)</ns0:ref>. <ns0:ref type='bibr' target='#b21'>Hernández-Almeida et al. (2017)</ns0:ref> and <ns0:ref type='bibr' target='#b34'>Liu et al. (2017a)</ns0:ref> stated that the composition and distribution pattern of the radiolarian fauna in the western Pacific respond mainly to SST and SSS. <ns0:ref type='bibr' target='#b20'>Gupta (2002)</ns0:ref> found that the relative abundance of Pyloniidae exhibits a positive correlation with salinity. In this study, SSS was positively correlated with abundance and species number (Table <ns0:ref type='table'>4</ns0:ref>), possibly suggesting a positive influence of SSS on radiolarian diversity. The radiolarian assemblages of the ECSSR group are influenced by the Kuroshio Current and TWC, with the TWC predominating. The surface water of the TWC is mainly characterised by high temperature (23-29°C) and salinity (33.3-34.2 psu) <ns0:ref type='bibr'>(Weng & Wang, 1988)</ns0:ref>. Some of the TWC waters are supplied from the South China Sea <ns0:ref type='bibr' target='#b35'>(Liu et al., 2017b)</ns0:ref>, where radiolarians show high diversity <ns0:ref type='bibr' target='#b9'>(Chen et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b34'>Liu et al., 2017a;</ns0:ref><ns0:ref type='bibr' target='#b72'>Zhang et al., 2009)</ns0:ref>. The dominant species in the ECSSR group include T. octacantha, Didymocyrtis tetrathalamus, Dictyocoryne group, Spongaster tetras, Z. piscicaudatus, P. pylonium, Stylodictya multispina, and E. furcata (Table <ns0:ref type='table'>2</ns0:ref>, Fig. <ns0:ref type='figure'>8</ns0:ref>).
These species taxa are reported as typical indicators of the Kuroshio Current <ns0:ref type='bibr' target='#b6'>(Chang et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b17'>Gallagher et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b34'>Liu et al., 2017a;</ns0:ref><ns0:ref type='bibr' target='#b40'>Matsuzaki et al., 2016)</ns0:ref>. The relatively high abundance of these taxa in the study area reflects the influence of the warm Kuroshio and TWC waters. Moreover, a moderate percentage (0.91%) of Pterocorys campanula Haeckel was detected in the ECSSR group, in contrast with the ECSMR group (0.14%) and the ECSNR group (0.06%). Members of Pterocorys are shallow-water dwellers, as reported by <ns0:ref type='bibr' target='#b42'>Matsuzaki, Itaki & Sugisaki (2020)</ns0:ref>. Pterocorys campanula frequently occurs and dominates in the South China Sea, whereas there are no reports of its dominance in sediment samples of the ECS <ns0:ref type='bibr' target='#b8'>(Chen & Tan, 1996;</ns0:ref><ns0:ref type='bibr' target='#b9'>Chen et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b23'>Hu et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b34'>Liu et al., 2017a)</ns0:ref>. The comparatively high abundance of this taxon in the ECSSR group further supports our conclusion that the radiolarian assemblages of the ECSSR group are introduced by the Kuroshio Current and TWC, with the TWC playing the main role. The ECSMR group was influenced by the Kuroshio Current, TWC, and Changjiang Diluted Water. The dominant species in the ECSMR group included T. octacantha, Didymocyrtis tetrathalamus, Dictyocoryne group, Stylodictya multispina and Spongodiscus resurgens (Table <ns0:ref type='table'>2</ns0:ref>). The dominant species of the ECSMR group overlap greatly with those of the ECSSR group, which suggests, to some degree, a similarity between the two groups, as both are influenced by the Kuroshio Current and TWC. On the other hand, the lower percentages of Didymocyrtis tetrathalamus, Dictyocoryne group, and Stylodictya multispina indicate the partial influence of the Changjiang Diluted Water, which is characterized by lower SST (Fig. <ns0:ref type='figure'>8</ns0:ref>). Tetrapyle octacantha, Didymocyrtis tetrathalamus, and Spongodiscus resurgens are the dominant species of the ECSNR group, which is primarily impacted by the Changjiang Diluted Water and Kuroshio Current. Compared to the ECSMR and ECSSR groups, the ECSNR group occupied higher latitudes, and hence lower SST, while the large input of Changjiang Diluted Water lowered the SSS (Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>). This combination of lower SST and SSS probably limited the radiolarian diversity of the ECSNR (Table <ns0:ref type='table'>1</ns0:ref>). The radiolarian assemblages in the shallower sea, i.e., the shelf area of the ECS, displayed distinctly different patterns from those in the open ocean. Tetrapyle octacantha occurred at an extraordinarily high proportion (59%) in the study area (Fig. <ns0:ref type='figure'>8</ns0:ref>), much higher than ever reported in adjacent areas with deeper waters <ns0:ref type='bibr' target='#b6'>(Chang et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b12'>Cheng & Ju, 1998;</ns0:ref><ns0:ref type='bibr' target='#b34'>Liu et al., 2017a;</ns0:ref><ns0:ref type='bibr' target='#b62'>Wang & Chen, 1996)</ns0:ref>. The response of T. octacantha to SSS was unclear, although it showed a positive relationship with SSS in the RDA plot (Fig. <ns0:ref type='figure'>7B</ns0:ref>).
One notable station was 3000-1 (Fig. <ns0:ref type='figure'>3</ns0:ref>), located at the Changjiang estuary, which had the highest Shannon-Wiener index (3.2 in both the original sample and the subsample). In our study, it also had the lowest salinity (26.6 psu) and the lowest percentage of T. octacantha (14.8%). After removing 3000-1, no significant correlation existed between SSS and the relative abundance of T. octacantha (n = 22, r = -0.027, p = 0.906). Tetrapyle octacantha, as the most abundant taxon in the subtropical area <ns0:ref type='bibr' target='#b3'>(Boltovskoy, 1989)</ns0:ref>, shows a high resistance to SST variation <ns0:ref type='bibr' target='#b26'>(Ishitani et al., 2008)</ns0:ref>. This taxon has been reported to be associated with water from the ECS shelf area <ns0:ref type='bibr' target='#b6'>(Chang et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b27'>Itaki, Kimoto & Hasegawa, 2010)</ns0:ref>. <ns0:ref type='bibr' target='#b63'>Welling & Pisias (1998)</ns0:ref> concluded that T. octacantha dominated during the cold tongue period in the central equatorial Pacific. In the northwest Pacific, there seems to be a threshold value of ~16 ℃ below which only sporadic tests of T. octacantha are found (< 6 tests, see Data Set S1 and Table <ns0:ref type='table'>1</ns0:ref> in <ns0:ref type='bibr' target='#b38'>Matsuzaki & Itaki, 2017)</ns0:ref>. In our study, very few tests of T. octacantha occurred at temperatures below 16 ℃ (Supplementary material Tables <ns0:ref type='table'>1 and 2</ns0:ref>), tending to confirm this earlier research. We thus infer that T. octacantha is possibly more resistant to locally severe temperatures and so reaches comparatively high abundance in the ECS shelf area. Therefore, T. octacantha could serve as an indicator of the degree of mixing between the colder shelf water and the warm Kuroshio water. Spongodiscus resurgens, with an upper sub-surface maximum, is generally considered a cold-water species <ns0:ref type='bibr'>(Suzuki & Not, 2015)</ns0:ref> associated with productive, nutrient-rich water <ns0:ref type='bibr' target='#b28'>(Itaki, Minoshima & Kawahata, 2009;</ns0:ref><ns0:ref type='bibr' target='#b38'>Matsuzaki & Itaki, 2017)</ns0:ref>. The ECSNR group was primarily controlled by the colder Changjiang Diluted Water, and thus had the highest percentages of T. octacantha and Spongodiscus resurgens among the three regions.</ns0:p></ns0:div>
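The outlier check described above, re-testing the SSS vs. T. octacantha correlation after excluding station 3000-1, can be reproduced with a few lines of Python. The salinity and relative-abundance vectors below are randomly generated placeholders for the selected ECS stations (only the values for 3000-1 are taken from the text), so the printed coefficients will not match the reported r = -0.027, p = 0.906.

```python
import numpy as np
from scipy.stats import pearsonr

# Placeholder vectors for the selected ECS stations (illustrative values only):
# sea-surface salinity (psu) and relative abundance (%) of T. octacantha.
station_ids = np.array(["3000-1"] + [f"st{i:02d}" for i in range(2, 24)])
sss = np.random.default_rng(0).normal(33.5, 0.5, size=23)
toct_pct = np.random.default_rng(1).normal(55.0, 10.0, size=23)
sss[0], toct_pct[0] = 26.6, 14.8   # the low-salinity estuary station 3000-1

# Correlation with all stations included ...
r_all, p_all = pearsonr(sss, toct_pct)

# ... and after removing the outlying station 3000-1.
keep = station_ids != "3000-1"
r_sub, p_sub = pearsonr(sss[keep], toct_pct[keep])

print(f"all stations:   r = {r_all:.3f}, p = {p_all:.3f}")
print(f"without 3000-1: r = {r_sub:.3f}, p = {p_sub:.3f} (n = {keep.sum()})")
```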
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>We analyzed radiolarian assemblages collected from the YS and ECS shelf areas, where the Kuroshio Current and its derivative branches, including the TWC and the Yellow Sea Warm Current, exert a great effect.</ns0:p><ns0:p>(1) The radiolarian abundance in the YS was quite low, and no radiolarians were detected in 15 of the 25 YS sites.</ns0:p><ns0:p>(2) The radiolarian abundance and diversity in the ECS, which is controlled by the warm Kuroshio water, were much higher. Based on the cluster analysis, the radiolarian assemblages in the ECS could be divided into three regional groups, namely the ECSNR group, the ECSMR group and the ECSSR group. a. The ECSNR group was chiefly impacted by the Changjiang Diluted Water and Kuroshio Current, with T. octacantha, Didymocyrtis tetrathalamus, and Spongodiscus resurgens as dominant species. b. The ECSMR group was controlled by the Kuroshio Current, TWC and Changjiang Diluted Water. Species contributing most to this group included T. octacantha, Didymocyrtis tetrathalamus, Dictyocoryne group, Stylodictya multispina, and Spongodiscus resurgens. c. The ECSSR group was affected by the Kuroshio Current and TWC, with the TWC playing the major role. The dominant species in this group were T. octacantha, Didymocyrtis tetrathalamus, Dictyocoryne group, Spongaster tetras, Z. piscicaudatus, P. pylonium, Stylodictya multispina, and Euchitonia furcata.</ns0:p><ns0:p>(3) The RDA results indicated that SST and SSS were the main environmental variables influencing radiolarian composition in the ECS shelf area. Figure 2</ns0:p><ns0:p>The circulation system of the study area in summer (A) and winter (B) (redrawn after <ns0:ref type='bibr' target='#b67'>Yang et al. (2012)</ns0:ref> and <ns0:ref type='bibr' target='#b46'>Pi (2016)</ns0:ref>). </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Abbreviations: KBC -Kuroshio Branch Current, OKBC -Offshore Kuroshio Branch Current, NKBC -Nearshore Kuroshio Branch Current, KSW -Kuroshio Surface Water, TWC -Taiwan Warm Current, CCC -China Coastal Current, CDW -Changjiang Diluted Water, YSCWM -Yellow Sea Cold Water Mass, YSWC -Yellow Sea Warm Current, TC -Tsushima Current.</ns0:figDesc><ns0:graphic coords='17,42.52,301.12,525.00,270.75' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='18,42.52,199.12,525.00,255.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,199.12,525.00,166.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,280.87,525.00,248.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,199.12,525.00,392.25' type='bitmap' /></ns0:figure>
</ns0:body>
" | "Editor comments (James Reimer)
I have heard back again from both reviewers, who find your work greatly improved. Both have added some small edits, mainly to improve the English, and I anticipate the needed revisions will be easy to complete. I look forward to seeing your revised paper.
Thanks for your kind work and consideration of this manuscript. We have revised the manuscript as follows.
Comments from the reviewers:
Revised Manuscript file: All sentences added to the previous manuscript are marked in red in the revised manuscript.
Reply to referees file: All comments are marked in green, and the replies are in black.
Reviewer: John Rogers
Basic reporting
English: Much improved on the original version & nothing really difficult to follow. I have made some comments in the attached annotated PDF. I note the enhanced explanations.
References: Appropriate & sufficient.
Article Layout: Much improved & now suitable for publication.
Tables & Figures: Much improved & now suitable for publication.
Overall: Self-contained.
Experimental design
Journal Fit: Within PeerJ Aims & Scope.
Investigation: Good
Methods: The inclusion of the authors' raw data has enabled me to use open-access software to confirm their RDA results as well as their cluster analysis & detrended CA.
Validity of the findings
Data:
There is now enough raw data to allow replication so I have been able to confirm the authors' results.
Conclusions:
The authors' previous wording which frequently suggested correlation meant causation has been amended.
Comments for the author
This version of the paper is a great improvement on the first. The only criticism I have is that I still feel rather too many statistical tests have been applied. For example, I am not sure that SIMPER provides any more information than could have been deduced by visual examination of the census data.
Reply: Thanks for your valuable suggestion. SIMPER analysis was applied to identify the species that contributed most to the similarities within each cluster clade, and thus we prefer to retain it. We will be careful to apply statistical methods more sparingly, to avoid “overkill”, in future studies.
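To clarify what the SIMPER decomposition adds beyond visual inspection, here is a minimal Python sketch of the underlying computation; the relative-abundance matrix and species names are hypothetical placeholders, not our census data. Each within-group Bray-Curtis similarity is split into per-species terms, and averaging these terms over all station pairs gives the percentage contribution that SIMPER reports for each species.

```python
import numpy as np
from itertools import combinations

# Hypothetical relative-abundance matrix for the stations of one cluster group
# (rows = stations, columns = species); illustrative values only.
group = np.array([
    [60.0, 10.0,  5.0, 25.0],
    [55.0, 12.0,  8.0, 25.0],
    [58.0,  9.0,  6.0, 27.0],
])
species = ["T. octacantha", "D. tetrathalamus", "Dictyocoryne group", "S. resurgens"]

# Per-species contribution to Bray-Curtis similarity, averaged over all pairs
# of stations in the group (the quantity SIMPER reports for within-group similarity).
contrib = np.zeros(group.shape[1])
pairs = list(combinations(range(group.shape[0]), 2))
for j, k in pairs:
    denom = (group[j] + group[k]).sum()
    contrib += 100.0 * 2.0 * np.minimum(group[j], group[k]) / denom
contrib /= len(pairs)

for name, c in sorted(zip(species, contrib), key=lambda x: -x[1]):
    print(f"{name:22s} contributes {c:5.2f}% to within-group similarity")
```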
Note: Line 31: Key words omitted – were lines 32-33 in previous version.
Reply: Thanks for your kind reminder. We have submitted the keywords to the PeerJ online submission system.
Note: The reviewer has attached an annotated manuscript to this review.
Reply: Many thanks for carefully improving the English. We have corrected these sentences accordingly. We are grateful for your valuable suggestions on this manuscript.
Reviewer: Kenji Matsuzaki
Basic reporting
The manuscript is clear and well-written. The cited literature is sufficient, providing a good background on the research context. The hypothesis and the data presented here are consistent. To summarize, the present manuscript is in good shape.
Experimental design
The manuscript presents new original data, and the methods follow international standards and are well explained. The research is also relevant and has a clearly defined purpose. There are no issues in the design of the research.
Validity of the findings
The findings presented here are relevant and allow the micropaleontological community to gain a better understanding of a particular siliceous microfossil group in the East China Sea and the Yellow Sea, which can help to reconstruct the hydrography of the past. The authors used robust statistics, and thus the analysis looks sound.
Comments for the author
The data are of interest and allow us to gain a better understanding of key parameters constraining the distribution and diversity of radiolarians. The authors also stressed well the importance not only of the Kuroshio Current but also of the Yangtze River freshwater discharge for the regional ecosystem. Therefore, the authors did an excellent job, the manuscript is in good shape, and I think it is ready for publication.
I have a few very minor comments/suggestions, which may be considered before sending the proofs. They are listed below:
L. 166: Mueller-> Müller
Reply: Thanks for your suggestion. We have changed “Mueller” to “Müller”.
L. 203: adaption-> adaptation
Reply: Thanks for your suggestion. We have changed “adaption” to “adaptation”.
L. 209: According to figure 7, I am not sure that Z. piscicaudatus, E. furcata… are related to lower SSS. They are related to High SST. There is a bit of over interpretation. I will suggest to delete « lower SSS » .
Reply: Thanks for your suggestion. We have changed the sentences into “Species taxa, including Spongaster tetras, Dictyocoryne group, Z. piscicaudatus, E. furcata, P. pylonium and Stylodictya multispina, were related to high SST, while showing little relationship with SSS.”
L. 211: Same than above. I think that there is a bit of over interpretation in saying that T. octacantha fit with lower SST. However higher SSS yes. So I suggest to delete « lower SST » .
Reply: Thanks for your suggestion. We have revised as “Tetrapyle octacantha showed a preference for high SSS”.
L. 241-243: What you wrote is good, but it seems that there are latitudinal changes in SST and SSS. So how about the possible effect of solar insolation? You may not need to address it in this MS, but it may be good for you to keep it in mind for further studies.
Reply: Thanks for your valuable suggestion. We will carefully consider this issue and focus on it in future studies.
L.276: Matsuzaki, Itaki, Sugisaki (2019)->Matsuzaki, Itaki, Sugisaki (2020)
Reply: Thanks for your suggestion. We have revised as “Matsuzaki, Itaki & Sugisaki (2020)”.
L. 300-301: I agree with what you say; it is true if you just consider the tropical marginal seas. However, if you look at the entire N. Pacific, it seems that there is a threshold value of about 15℃. This means that Tetrapyle spp. is, as you say, highly resistant to SST variation, but cannot survive at SST lower than about 15 ℃ (Matsuzaki and Itaki, 2017).
Reply: Thanks for your suggestion. We have changed the sentences into “Tetrapyle octacantha, as the most abundant taxon in the subtropical area (Boltovskoy, 1989), shows a high resistance to SST variation (Ishitani et al., 2008). This taxon has been reported to be associated with water from the ECS shelf area (Chang et al., 2003; Itaki, Kimoto & Hasegawa, 2010). Welling & Pisias (1998) concluded that T. octacantha dominated during the cold tongue period in the central equatorial Pacific. In the northwest Pacific, there seems to be a threshold value of ~16 ℃ below which only sporadic tests of T. octacantha are found (< 6 tests, see Data Set S1 and Table 1 in Matsuzaki & Itaki, 2017). In our study, very few tests of T. octacantha occurred at temperatures below 16 ℃ (Supplementary material Tables 1 and 2), tending to confirm this earlier research. We thus infer that T. octacantha is possibly more resistant to locally severe temperatures and so reaches comparatively high abundance in the ECS shelf area. Therefore, T. octacantha could serve as an indicator of the degree of mixing between the colder shelf water and the warm Kuroshio water.”
" | Here is a paper. Please give your review comments after reading it. |
648 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>As coral reefs continue to decline globally, coral restoration practitioners have explored various approaches to return coral cover and diversity to decimated reefs. While branching coral species have long been the focus of restoration efforts, the recent development of the microfragmentation coral propagation technique has made it possible to incorporate massive coral species into restoration efforts. Microfragmentation (i.e., the process of cutting large donor colonies into small fragments that grow fast) has yielded promising early results. Still, best practices for outplanting fragmented corals of massive morphologies are continuing to be developed and modified to maximize survivorship.</ns0:p><ns0:p>Here, we compared outplant success among four species of massive corals (Orbicella faveolata, Montastraea cavernosa, Pseudodiploria clivosa, and P. strigosa) in Southeast Florida, US. Within the first week following coral deployment, predation impacts by fish on the small (< 5 cm 2 ) outplanted colonies resulted in both the complete removal of colonies and significant tissue damage, as evidenced by bite marks. In our study, 8-27% of fragments from four species were removed by fish within one week, with removal rates slowing down over time. Of the corals that remained after one week, over 9% showed signs of fish predation. Our findings showed that predation by corallivorous fish taxa like butterflyfishes (Chaetodontidae), parrotfishes (Scaridae), and damselfishes (Pomacentridae) is a major threat to coral outplants, and that susceptibility varied significantly among coral species and outplanting method. Moreover, we identify factors that reduce predation impacts such as: 1) using cement instead of glue to attach corals, 2) elevating fragments off the substrate, and 3) limiting the amount of skeleton exposed at the time of outplanting. These strategies are essential to maximizing the efficiency of outplanting techniques and enhancing the impact of reef restoration.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Coral populations have experienced drastic declines due to a variety of stressors <ns0:ref type='bibr' target='#b18'>(Gardner et al. 2003;</ns0:ref><ns0:ref type='bibr' target='#b38'>McLean et al. 2016)</ns0:ref>. Globally, increases in ocean temperature and ocean acidification are likely the most serious threats and can lead to mass coral mortality and reduced calcification rates <ns0:ref type='bibr' target='#b22'>(Hoey et al. 2016)</ns0:ref>. Rising ocean temperatures have resulted in increased frequency and intensity of bleaching events <ns0:ref type='bibr' target='#b20'>(Heron et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b23'>Hughes et al. 2019)</ns0:ref>. Also, ocean acidification levels have begun to reduce coral calcification rates and cause framework erosion <ns0:ref type='bibr' target='#b41'>(Muehllehner et al. 2016)</ns0:ref>. The magnitude and rate of coral decline require drastic, large scale actions to curb climate change impacts as well as a suite of local conservation and management measures. Active coral reef restoration has developed as one of the tools available to foster coral recovery and restore the ecosystem services that healthy coral reefs provide <ns0:ref type='bibr' target='#b54'>(Rinkevich 2019)</ns0:ref>. The present study focused on South Florida reefs where coral abundance has declined due to the interaction of high and low thermal anomalies <ns0:ref type='bibr' target='#b35'>(Lirman et al. 2010;</ns0:ref><ns0:ref type='bibr' target='#b37'>Manzello 2015;</ns0:ref><ns0:ref type='bibr' target='#b14'>Drury et al. 2017)</ns0:ref>, nutrient inputs and algal overgrowth <ns0:ref type='bibr' target='#b31'>(Lapointe et al. 2019)</ns0:ref>, hurricanes <ns0:ref type='bibr' target='#b32'>(Lirman and Fong 1997)</ns0:ref>, sedimentation <ns0:ref type='bibr' target='#b13'>(Cunning et al. 2019)</ns0:ref>, and coral diseases <ns0:ref type='bibr' target='#b52'>(Richardson et al. 1998;</ns0:ref><ns0:ref type='bibr' target='#b47'>Precht et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b65'>Walton et al. 2018)</ns0:ref>.</ns0:p><ns0:p>A popular method of coral reef restoration is coral gardening, where coral stocks propagated through sequential fragmentation within in-water and ex situ nurseries are outplanted in large numbers onto depleted reefs <ns0:ref type='bibr' target='#b53'>(Rinkevich 2006)</ns0:ref>. Until recently, restoration programs based on the coral gardening methodology have focused primarily on branching taxa like Acropora due to their rapid growth rates, resilience to fragmentation, pruning vigor, and ease of outplanting <ns0:ref type='bibr' target='#b5'>(Bowden-Kerby 2008;</ns0:ref><ns0:ref type='bibr' target='#b63'>Shaish et al. 2010;</ns0:ref><ns0:ref type='bibr' target='#b33'>Lirman et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b62'>Schopmeyer et al. 2017)</ns0:ref>. While acroporids rapidly enhance the structural complexity of reefs, focusing restoration efforts on single taxa ignores the role that diversity plays in ecosystem function <ns0:ref type='bibr' target='#b7'>(Brandl et al. 2019</ns0:ref>) and makes restored communities susceptible to disturbances like diseases or storms that affect branching corals disproportionately. Thus, there is a need to expand our restoration toolbox to include multiple species with different morphologies and life histories <ns0:ref type='bibr' target='#b36'>(Lustic et al. 2020)</ns0:ref>. 
The use of massive corals for restoration was initially hindered by the slow growth rates associated with these taxa. However, recent developments in microfragmentation and reskinning techniques <ns0:ref type='bibr' target='#b16'>(Forsman et al. 2015)</ns0:ref> that accelerate the growth of massive corals have made it possible to use these reefbuilding taxa for restoration.</ns0:p><ns0:p>The microfragmentation process involves fragmenting corals with massive morphologies into small (< 5 cm 2 ) ramets that consist mostly of living tissue and a limited amount of skeleton <ns0:ref type='bibr' target='#b46'>(Page et al. 2018)</ns0:ref>. These microfragments can be mounted onto various types of substrate (e.g., ceramic plugs, plastic cards, cement pucks) using glue or epoxy and allowed to grow by skirting over the attachment platform before being outplanted. Once fragmented, ramets of Pseudodiploria clivosa and Orbicella faveolata grew up to 48 cm 2 and 63 cm 2 per month, respectively <ns0:ref type='bibr' target='#b16'>(Forsman et al. 2015)</ns0:ref> and were thereby capable of achieving colony sizes within a few months that would otherwise take years to develop after natural recruitment. Moreover, a single parent colony can produce hundreds of ramets available for continued propagation and restoration.</ns0:p><ns0:p>The microfragmentation technique overcomes the slow-growth bottleneck, but methods for outplanting fragmented massive corals onto depleted reefs still need to be developed and evaluated to maximize outplant survivorship and success. This is especially relevant in Florida and the Caribbean where the massive coral species used here have been severely impacted by the recent outbreak of stony coral tissue loss disease (SCTLD) <ns0:ref type='bibr' target='#b47'>(Precht et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b1'>Alvarez-Filip et al. 2019)</ns0:ref>. The present study is one of the first to record the survivorship of small fragments of four species of massive corals (O. faveolata, Montastraea cavernosa, P. clivosa, and P. strigosa) outplanted onto reefs in Southeast Florida, US. To measure the success of this technique, we: 1) documented survivorship and removal probability of fragments outplanted using different techniques, 2) monitored the impacts of fish predation on newly planted fragments, and 3) evaluated different outplanting techniques that may reduce the impacts of predation (i.e. coral fragment removal and mortality).</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Coral Fragmentation</ns0:head><ns0:p>Colonies used in both experiments were collected from a seawall at Fisher Island, Miami, Florida (25.76° N, 80.14° W; depth = 1.8 m). Each parent colony was cut into small fragments (average size = 4.2 ± 1.9 cm 2 (mean ± SD) using a diamond band saw, and fragments were attached to ceramic plugs using super glue as described by <ns0:ref type='bibr' target='#b46'>Page et al. (2018)</ns0:ref>. After fragmentation the height of the fragments ranged from 0.5-1.0 cm. The ceramic plugs with corals were placed on PVC frames and then fixed to coral trees <ns0:ref type='bibr' target='#b43'>(Nedimyer et al. 2011</ns0:ref>) at the University of Miami's in-water coral nursery (25.69° N, 80.09° W; depth = 9.4 m) where they were allowed to acclimate for 4-6 weeks before outplanting (Fig. <ns0:ref type='figure' target='#fig_0'>1A</ns0:ref>). After this recovery period, the fragments were strongly cemented (no corals were dislodged during the transport and outplanting steps) but had not fully skirted tissue onto the ceramic plugs.</ns0:p></ns0:div>
<ns0:div><ns0:head>Outplanting Experiment One: Assessing Outplant Survivorship</ns0:head><ns0:p>The first outplanting experiment consisted of four coral species with massive or brain colony morphologies: O. faveolata (listed as threatened under the US Endangered Species Act), P. clivosa, P. strigosa, and M. cavernosa. Fragmented corals first glued onto ceramic plugs using polyurethane waterproof glue (Gorilla Glue) (Fig. <ns0:ref type='figure' target='#fig_0'>1A, B</ns0:ref>). The plugs with the corals were then mounted into cement pucks with 2-part epoxy putty (AllFix). Finally, these pucks were secured onto the reef using cement (1 part Portland cement, 0.1 part Silica fume), raising corals 3 cm above the substrate to limit sediment and algal interactions (Fig. <ns0:ref type='figure' target='#fig_0'>1C</ns0:ref>). Only corals with healthy (no discoloration or lesions) tissue were used. Corals were outplanted onto three reef sites in Miami, Florida, in June-July 2018: Reef 1 (25.70° N, 80.09° W; depth = 6.0 m), Reef 2 (25.68° N, 80.10° W; 7.5 m), and Reef 3 (25.83° N, 80.11° W; 6.4 m). These reefs have low topography and very low cover of stony corals (< 1%). Corals were collected and outplanted under Florida's Fish and Wildlife Commission Permit SAL-19-1794-SCRP. At each reef, the corals were deployed within replicate, square grids (3 x 3 m, 5 x 5 m) based on substrate availability. Four such grids were deployed in Reef 1, 7 in Reef 2, and 6 in Reef 3. The corals were spaced 40-60 cm apart within each grid and grids were separated by at least 2 m. The coral outplants were placed at least 20 cm away from existing stony and soft corals, sponges, and the zoanthid Palythoa. In total, 53 M. cavernosa, 123 O. faveolata, 80 P. clivosa, and 41 P. strigosa were outplanted among the 3 reefs, with all 4 species represented in each plot.</ns0:p><ns0:p>For each outplant in this experiment, we documented presence/absence of the coral fragment (Fig. <ns0:ref type='figure' target='#fig_0'>1D</ns0:ref>) and prevalence (i.e., proportion of corals by species with signs of predation) of tissue mortality caused by predation on remaining fragments (e.g., missing polyps, feeding scars) (Fig. <ns0:ref type='figure' target='#fig_0'>1E</ns0:ref>, F) at one week, one month, and six months after deployment. Due to funding constraints, the parent colonies used in this study were not formally genotyped. Nevertheless, fragments from every parent colony were represented in each reef and, thus the results combine the survivorship of corals from three parent colonies per species. The proportion of corals physically removed by fish predators from their outplanting platforms was compared among coral species, reefs, and time since outplanting using a Generalized Linear Model (GLM) following guidance by <ns0:ref type='bibr' target='#b66'>Warton and Hui (2011)</ns0:ref>. Here, we used a GLM with a binomial distribution and a logit link function to model the probability of outplant removal. The model incorporated species, reef, and monitoring period (time) as fixed variables. Residuals diagnostics plots were used to test model assumptions, and D-squared values, which indicate the amount of deviance accounted for by the models (i.e., analogous to R 2 ), were used to evaluate the goodness of fit of the selected models. Tukey post hoc tests were used to evaluate pairwise differences among the levels of the categorical variables in the models (species, reefs, time). 
All statistical analyses were performed in R v3.5.3 (R Core Team 2017).</ns0:p></ns0:div>
<ns0:div><ns0:head>Outplanting Experiment Two: Reducing Predation Impacts</ns0:head><ns0:p>Based on the high level of predation observed during Experiment One, a second study was designed to determine if predation impacts could be minimized through modifications to the outplanting method. This experiment tested the role of the skeletal profile (i.e., the height of the coral fragment) and attachment medium (glue vs cement) on coral removal and predation rates. This experiment used coral fragments (average size = 2.8 ± 6.5 cm 2 (mean ± SD)) from five P. clivosa colonies. Due to funding constraints, the parent colonies used in this study were not formally genotyped. Nevertheless, fragments from every parent colony were represented in each treatment and, thus the results combine the survivorship of corals from the five parent colonies. A high level of predation and abundance of fish predators were recorded at Reef 1 in the first experiment, so this location was chosen as the study site for the second experiment.</ns0:p><ns0:p>Compared to fragments with healed, skirting edges, corals with exposed skeletal walls may provide easier access to the coral tissue and encourage growth of endolithic or turf algae that could attract grazing by fish. Hence, we hypothesized that the exposed skeletal profile (height) and the presence/absence of bare skeleton on the sides of a fragment would influence predation patterns (Fig. <ns0:ref type='figure' target='#fig_1'>2A-B</ns0:ref>). We further hypothesized that rate of the physical removal of outplanted fragments would be related to the attachment method. To test this, we developed a triangular cement platform (cement 'pizza'; Fig. <ns0:ref type='figure' target='#fig_1'>2C-D</ns0:ref>) that used cement (in lieu of glue) to secure corals and allowed the height of the fragments to be adjusted by varying the amount of cement used. Corals were secured to the cement pizzas as four treatments: 1) 'raised exposed', with fragments placed on top of the cement so that vertical walls (devoid of tissue) protruded from the cement treatment (Fig. <ns0:ref type='figure' target='#fig_1'>2E</ns0:ref>);</ns0:p><ns0:p>2) 'raised covered', with fragments placed on top of the cement so that vertical walls (covered with tissue) protruded from the cement treatment (Fig. <ns0:ref type='figure' target='#fig_1'>2F</ns0:ref>);</ns0:p><ns0:p>3) 'flushed', with fragments embedded into the cement so that the fragment was level with the cement platform and only the surface of the coral was visible (Fig. <ns0:ref type='figure' target='#fig_1'>2G</ns0:ref>); 4) 'embedded', with fragments embedded into the cement so that the coral was positioned 1 cm below the surface of the cement to prevent access by fish (Fig. <ns0:ref type='figure' target='#fig_1'>2H</ns0:ref>).</ns0:p><ns0:p>Individual coral fragments were attached in groups of three (triads) onto each cement pizza. Corals were deployed in clusters of 3 fragments to foster coral fusion and faster colony development, as described in <ns0:ref type='bibr'>Paget et al. (2018)</ns0:ref>. The pizzas were cemented individually onto the reef pavement within plots (n = 120 corals placed onto 40 pizzas). Each plot consisted of 10 pizzas (n = 3-4 pizzas per treatment) deployed haphazardly within the plot and spaced 30-50 cm apart. Plots were separated by 1 m. 
In addition to using the cement pizzas, coral fragments were mounted onto ceramic plugs using glue and outplanted directly onto the reef (as used in <ns0:ref type='bibr' target='#b46'>Page et al. 2018)</ns0:ref> to serve as controls (Fig. <ns0:ref type='figure' target='#fig_0'>1B, D</ns0:ref>). All fragments used as controls had healed skeletal walls covered in tissue. Control corals mounted on plugs were grouped as triads with spacing between corals similar to the pizzas, and deployed directly onto the substrate within the same plots as the experimental corals using cement. Each plot received 5 control triads (n = 20 triads, 60 corals). All corals were outplanted in August 2019 and coral condition surveys were conducted immediately after deployment and again after one and three weeks to document the presence/absence of coral fragments and evidence of tissue mortality caused by predation. The average percent tissue removal was estimated visually for each coral using 10%-classification bins. Lastly, the proportion of the tissue area covered by sediments for each coral outplant was visually evaluated at one and three weeks using the methods just described. Values for these two metrics were averaged within pizzas/triads and compared among treatments using ANOVA.</ns0:p></ns0:div>
<ns0:div><ns0:head>Coral Cover, Fish Abundance, and Predation Surveys</ns0:head><ns0:p>The percent cover of stony corals at the three reefs selected was calculated using the point-count method as described by <ns0:ref type='bibr' target='#b34'>Lirman et al. (2007)</ns0:ref>. At each reef, three plots (10 m in diameter) were haphazardly selected in the vicinity of where the coral fragments were deployed. Within each plot, 25 images were haphazardly collected at a distance of 50 cm from the bottom. The cover of stony coral was calculated using random points overlaid onto each image using the CpCe software <ns0:ref type='bibr' target='#b30'>(Kohler and Gill 2006)</ns0:ref>. The proportion of random points placed over stony corals was divided by the total number of points (n = 25 per image) to calculate the proportional cover of corals. Mean percent coral cover was calculated for each plot and averaged for each reef (n = 3 plots). Fish surveys to compare fish abundance at coral outplant sites were conducted as part of experiment one at each reef site using the Reef Visual Census (RVC) method <ns0:ref type='bibr' target='#b4'>(Bohnsack and Bannerot 1986)</ns0:ref>. Using this method, the surveyor recorded the abundance of fish taxa from a stationary point at the center of the study site within a cylindrical 15-m diameter survey area, extending from the substrate to the surface of the water column. Each survey was completed in 15 min and all fishes observed were identified to species level. Between May 2018 and February 2019, we completed 13 RVC surveys at Reef 1, 10 surveys at Reef 2, and 14 surveys at Reef 3. During monitoring, all three reefs were surveyed within one month. All surveys were completed by a single, expert observer. The mean abundance of all corallivorous or predatory fish <ns0:ref type='bibr' target='#b58'>(Robertson 1970;</ns0:ref><ns0:ref type='bibr' target='#b49'>Randall 1974</ns0:ref>) was compared among reefs using ANOVA. In addition to the visual fish surveys, we deployed a video camera immediately after each coral deployment focused on the newly outplanted corals to document the fish species observed interacting with the corals after the divers had left the plot. Each video was viewed and a list of species interacting with the outplants was compiled. The duration of these deployments was variable (1-7 hrs) based on the time spent by the divers at the site during deployment and was only intended to compile a list of fish species approaching and/or biting the corals.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Outplanting Experiment One: Assessing Predation Impacts</ns0:head><ns0:p>Two types of fish-predation impacts were documented: 1) physical removal of outplanted fragments, and 2) tissue removal from corals that remained attached to outplanting platforms. The probability of outplant removal was explained by coral species, reef, and time as fixed effects (GLM χ 2 -test, p < 0.05) and the model explained 67% of the deviance (Fig. <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>, Table <ns0:ref type='table'>S1, S2</ns0:ref>). The probability of removal was species-dependent. One week after deployment, 8% of M. cavernosa (n = 53 fragments outplanted), 12% of O. faveolata (n = 123), 23% of P. clivosa (n = 80), and 27% of P. strigosa (n = 41) fragments were physically removed from the outplant platforms by fish (all sites combined) (Fig. <ns0:ref type='figure' target='#fig_3'>4A</ns0:ref>). The ranking of the probability of removal for the four species was consistent across reefs and time (Fig. <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>). There was a minor, but significant increase in the probability of removal over time (Fig. <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>, Table <ns0:ref type='table'>S2</ns0:ref>). The majority of removal occurred during the first week, but corals continued to be removed over time, with an additional 1.9% of M. cavernosa, 6.6% of O. faveolata, 7.1% of P. clivosa, and 7.0% of P. strigosa removed between one and six months after deployment (Fig. <ns0:ref type='figure' target='#fig_3'>4A</ns0:ref>).</ns0:p><ns0:p>The species with the highest prevalence of fish bites one week after deployment were the two Pseudodiplora species, followed by O. faveolata. M. cavernosa was the only species that did not show any signs of predation on remaining corals after one week (Fig. <ns0:ref type='figure' target='#fig_3'>4B</ns0:ref>). Similar to the rate of removal, predation prevalence slowed over time, as only an average of 0.3% of surviving corals of all four species combined showed fish bites at the one-month survey compared to 9.2% after the first week. After six months, no signs of predation were observed for surviving M. cavernosa and P. strigosa, and < 1% of colonies of the remaining two species showed evidence of fish bites (Fig. <ns0:ref type='figure' target='#fig_3'>4B</ns0:ref>).</ns0:p><ns0:p>Cover of stony corals recorded at the three outplant sites was very low. Mean coral cover was 0.85 (± 1.0) for Reef 1, 0.8 (± 0.4) for Reef 2, and 0.04 (± 0.07) for Reef 3. The prevalence of fish predation, including complete fragment removal and fish bites, was highest at Reefs 1 and 2, which coincided with the significantly greater abundance of corallivorous fish taxa recorded at these two sites compared to Reef 3 (ANOVA; Tukey-Kramer HSD test; p = < 0.05) (Fig. <ns0:ref type='figure' target='#fig_3'>4C, D</ns0:ref>). The average number of corallivorous fish (i.e., parrotfishes, damselfishes, butterflyfishes, surgeonfishes, triggerfishes) was 2.7 individuals survey -1  6.0 (mean  SD) at Reef 1, 2.0  4.7 at Reef 2, and only 0.8  2.3 at Reef 3 (Fig. <ns0:ref type='figure' target='#fig_3'>4B</ns0:ref>). Complete coral removal was 17% at Reef 1, 26% at Reef 2, and only 7% at Reef 3 after one week (Fig. <ns0:ref type='figure' target='#fig_3'>4C</ns0:ref>). 
Similarly, signs of fish predation were higher among the remaining corals at Reefs 1 (13.7% corals with evidence of predation) and Reef 2 (13.1%), while no evidence of fish bites was observed at Reef 3 after one week (Fig. <ns0:ref type='figure' target='#fig_3'>4C</ns0:ref>). The fish taxa observed biting coral fragments included butterflyfishes, parrotfishes, and damselfishes (Table <ns0:ref type='table'>S3</ns0:ref>). Wrasses and surgeonfishes were also observed approaching the coral outplants but not necessarily biting the coral tissue. While no direct evidence of predation by triggerfishes (a known coral predator) was captured, this taxon was seen in the vicinity of outplants in the video collected in experiment two. Grunts, surgeonfish, and wrasses were the most consistently sighted fish across all 3 sites and were recorded during all 37 surveys. Parrotfishes and damselfishes were also regularly observed at all three outplant locations, having been recorded as being present within 34 and 35, respectively, out of the 37 surveys conducted. Chaetodontidae were recorded within 27 of the 37 surveys, and were present during all 13 surveys at Reef 1, 8 out of 10 surveys conducted at Reef 2, but only 6 out of 14 surveys completed at Reef 3. It is important to note that no evidence of fish removing the corals from their outplanting platforms was captured in our video surveys. Nevertheless, the removal by fish was considered the only driver of the missing corals as corals did not detach during transport nor during their time at the nursery where no fish predators are observed (pers. obs.)</ns0:p><ns0:p>In addition to fragment removal and partial tissue mortality caused by predation, complete coral mortality was observed. After one week, 4% of P. clivosa, 5% of O. faveolata, 9% of M. cavernosa, and 17% of P. strigosa fragments that remained attached to the outplant platforms showed 100% tissue mortality (all sites combined). After six months, the cumulative prevalence of complete mortality was 4% for P. clivosa, 16% for M. cavernosa, 27% for P. strigosa, 41% for O. faveolata fragments. When removal and complete tissue mortality were combined for all corals and sites combined, 26% of corals died after one week, 30% of corals died after one month, and 51% of corals died within six months of outplanting. Overall, M. cavernosa suffered 26% losses (removal + 100% tissue mortality), followed by P. clivosa (40%), O. faveolata (62%), and, finally, by P. strigosa (73%). While it was not possible to ascertain the cause of mortality (besides that visibly caused by predation) among the coral outplants, no evidence of active stony coral tissue loss disease (SCTLD), which affected the reefs of South Florida in recent years, was observed on outplanted or wild corals at any of the sites during either experiment.</ns0:p></ns0:div>
<ns0:div><ns0:head>Outplanting Experiment Two: Reducing Predation Impacts</ns0:head><ns0:p>The mode of attachment of outplanted corals influenced removal patterns. After one week, 14% of the corals fixed to ceramic plugs using glue were removed, while none of the corals outplanted using cement within the pizzas were missing. After three weeks, still none of the corals deployed on pizzas were removed, whereas 54% of the corals outplanted using plugs were missing. While none of the corals in any of the four cement pizza treatments were removed, fish predation impacts were significantly affected by coral treatment within the cement bases (ANOVA, p < 0.05) (Fig. <ns0:ref type='figure' target='#fig_4'>5</ns0:ref>). After three weeks, the average percentage of tissue removed by predation was significantly lowest for corals within the 'embedded' treatment and highest for corals placed within the 'raised exposed' treatment and corals outplanted using plugs (Tukey-Kramer HSD test; p < 0.05) (Fig. <ns0:ref type='figure' target='#fig_4'>5</ns0:ref>). No significant differences were found between corals in the 'raised covered' and 'flushed' treatments (Fig. <ns0:ref type='figure' target='#fig_4'>5</ns0:ref>). Predation impacts were lowest among embedded corals, but only corals within this treatment experienced sediment accumulation on the surface of the colony. For corals within the embedded treatment, the average of the total surface area of the coral outplants covered by sediments was 3.1% ± 2.4 (mean ± SD) after one week and 3.8% ± 3.5 (mean ± SD) after three weeks. Neither controls outplanted using plugs nor corals within the other three pizza treatments accumulated sediments on the coral surfaces.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The use of fragmented massive corals expands the number of species of corals available for reef restoration beyond the initial, decade-long focus on branching corals. Massive corals are key reef-building taxa that have experienced accelerated losses in the past few years due to the stony coral tissue loss disease (SCTLD) epidemic that was first detected in Florida in 2014 <ns0:ref type='bibr' target='#b47'>(Precht et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b65'>Walton et al. 2018)</ns0:ref> and has now been documented in several locations in the Caribbean <ns0:ref type='bibr' target='#b1'>(Alvarez-Filip et al. 2019)</ns0:ref>. The impacts of SCTLD, added to the historical declines in these taxa, has created a need to move from single-taxa restoration to a community-based approach that includes corals with different life histories and disturbance responses <ns0:ref type='bibr' target='#b36'>(Lustic et al. 2020)</ns0:ref>. While massive corals can be successfully propagated both in situ and ex situ <ns0:ref type='bibr' target='#b2'>(Becker and Mueller 2001;</ns0:ref><ns0:ref type='bibr' target='#b16'>Forsman et al. 2015;</ns0:ref><ns0:ref type='bibr' target='#b46'>Page et al. 2018)</ns0:ref>, our study identified a significant bottleneck in restoration success caused by fish predation on newly outplanted fragments. In our study, 8-27% of fragments from four species (O. faveolata, M. cavernosa, P. clivosa, P. strigosa) outplanted onto three reefs in Miami, Florida, US, were removed by fish within one week. A prior study from Florida also documented large predation impacts on M. cavernosa and O. faveolata, with 45% and 22% of fragments affected by predation respectively within the first week <ns0:ref type='bibr' target='#b46'>(Page et al. 2018)</ns0:ref>. With coral cover being so low presently on Florida reefs (< 1% coral cover on the reefs used in this study), it is likely that fish predation is being concentrated on outplanted corals, posing concerns for restoration in depleted systems until a critical abundance threshold is reached <ns0:ref type='bibr' target='#b61'>(Schopmeyer and Lirman 2015)</ns0:ref>. Supporting this concept, <ns0:ref type='bibr' target='#b25'>Jayewardene et al. (2009)</ns0:ref> found lower prevalence of fish bites on coral nubbins in plots with higher coral cover.</ns0:p><ns0:p>The concentration of predation on surviving corals after major declines in abundance due to a hurricane was previously documented by <ns0:ref type='bibr' target='#b29'>Knowlton et al. (1990)</ns0:ref>.</ns0:p><ns0:p>Previously, research efforts have focused mainly on the impacts of reef fishes on the abundance and distribution of macroalgae, so our understanding of their direct effects on stony corals is comparatively more limited. Only 10 families of fishes have been reported to consume coral polyps and even fewer taxa classified as obligate corallivores <ns0:ref type='bibr' target='#b58'>(Robertson 1970;</ns0:ref><ns0:ref type='bibr' target='#b49'>Randall 1974)</ns0:ref>. Species within the Chaetodontidae (Butterflyfishes), Balistidae (triggerfishes), and Tetraodontidae (pufferfishes) families are among the most common corallivorous fishes <ns0:ref type='bibr' target='#b21'>(Hixon 1997)</ns0:ref>. In this study, butterflyfish, wrasses, parrotfish, surgeonfish, and damselfish consumed or interacted with newly outplanted corals. 
Except for butterflyfishes that were observed biting coral tissue, it remained unclear from our visual and video observations whether fragments were physically removed by fish grazing on algae growing on exposed coral skeletons or direct consumption of coral tissue.</ns0:p><ns0:p>The high impacts recorded here on coral outplants may be the result of consumptive or territorial activity (or a combination of both). Both butterflyfishes <ns0:ref type='bibr' target='#b51'>(Reese 1989;</ns0:ref><ns0:ref type='bibr'>Roberts and Ormand 1992)</ns0:ref> and the adult terminal phase male stoplight parrotfish Sparisoma viride <ns0:ref type='bibr' target='#b10'>(Bruggemann et al. 1994;</ns0:ref><ns0:ref type='bibr' target='#b9'>Bruckner et al. 2000)</ns0:ref> have been observed to bite corals within their territories, which supports that certain fish species may selectively target new coral outplants as soon as they appear within their territories. Predation impacts on coral outplants were highest within the first week and tapered off with time, declining to <1% of remaining corals removed after six months, suggesting outplant habituation of the fish fauna to new coral 'recruits' may play a role. Similar patterns of temporal predation on coral outplants were reported in Guam, where predation impacts from butterflyfishes and triggerfishes were high within one week of deployment <ns0:ref type='bibr' target='#b44'>(Neudecker 1979)</ns0:ref>. Whether the decline in coral removal by fish was a result of corals reaching a size refuge as they grew or due to habituation of the fish to the presence of these corals could not be ascertained in this study. The territories of potential fish predators like parrotfishes were not assessed in this study and we were thus unable to differentiate between the impacts of both these factors nor their interaction. Similarly, the high level of predation may have been caused by the relatively close spacing of outplanted corals (30-60 cm) so that once a prey item was detected, detection of additional corals within a grid was autocorrelated. Thus, the potential role of fish territoriality and spacing of corals on impacts on newly outplanted corals needs further investigation, especially considering the high impacts recorded here that represent a drain in restoration resources.</ns0:p><ns0:p>In our study, impacts of predation varied by species, with P. clivosa and P. strigosa experiencing the highest levels of mortality. While the potential reasons for the differences in species susceptibility to predation were not measured here, factors such as palatability, nutritional content, or skeletal characteristics may play a role and need to be investigated further. Nevertheless, prey selection based on coral species has been previously documented for Chaetodon unimaculatus that showed a preference for feeding on Montipora verrucosa in Hawaii <ns0:ref type='bibr' target='#b11'>(Cox 1986)</ns0:ref>, by Balistapus undulatus that targeted Pocillopora damicornis over Seriatopora hystrix <ns0:ref type='bibr' target='#b19'>(Gibbs and Hay 2015)</ns0:ref>, and by butterflyfish that target Acropora over other coral taxa <ns0:ref type='bibr' target='#b3'>(Berumen 2005)</ns0:ref>. Similarly, wild and outplanted A. cervicornis and O. annularis were targeted by the territorial three-spot damselfish <ns0:ref type='bibr' target='#b27'>(Kaufman 1977;</ns0:ref><ns0:ref type='bibr' target='#b29'>Knowlton et al. 
1990;</ns0:ref><ns0:ref type='bibr' target='#b61'>Schopmeyer and Lirman 2015)</ns0:ref>.</ns0:p><ns0:p>Fish predation impacts varied by reef and were associated with the abundance of fish taxa known to consume coral tissue. Differences in predation impacts between sites were also documented by <ns0:ref type='bibr'>Page et al. (2018)</ns0:ref> in the Florida Keys. Similar to our findings, <ns0:ref type='bibr' target='#b48'>Quimpo et al. (2019)</ns0:ref> suggested that coral outplants were more likely to be detached when outplanted onto reefs with higher biomass of herbivorous and corallivorous fishes in the Philippines. Additionally, <ns0:ref type='bibr' target='#b48'>Quimpo et al. (2019)</ns0:ref> reported that incidental grazing by herbivorous fish, particularly the parrotfish Chlorurus spilurus, was the main source of coral detachment, but that direct predation by corallivorous fishes only minimally affected coral outplants. Incidental predation by herbivorous fish grazing algae on nursery ropes in the Seychelles was also observed by Frias-Torres and van de Geer <ns0:ref type='bibr'>(2015)</ns0:ref>. A simple response to these patterns would be to avoid outplanting on reefs with high abundance of these taxa, but it is important to note that parrotfishes and surgeonfishes (observed here to target outplanted corals) are also key grazers that are essential to maintain a low abundance of macroalgae on reefs <ns0:ref type='bibr' target='#b42'>(Mumby et al. 2006</ns0:ref>) and coral nursery settings <ns0:ref type='bibr' target='#b28'>(Knoester et al. 2019)</ns0:ref>. Best practices for the selection of outplanting sites developed for Acropora suggest that low abundance of macroalgae is a key attribute of an ideal restoration site <ns0:ref type='bibr' target='#b26'>(Johnson et al. 2011</ns0:ref>). In addition to reducing algal overgrowth, damselfishes, triggerfishes, pufferfishes, and other corallivorous fish have been documented to limit impacts of corallivorous invertebrates such as the crown-of-thorns starfish (Acanthaster planci) and Coralliophila snails <ns0:ref type='bibr' target='#b45'>(Ormond et al. 1973;</ns0:ref><ns0:ref type='bibr' target='#b61'>Schopmeyer and Lirman 2015)</ns0:ref>. Hence, avoiding reefs with a high abundance of grazers that also target corals is not a viable option, as it may lead to algal overgrowth and higher impacts by non-fish corallivores. There is, thus, a clear need to develop efficient outplanting methods to minimize the impacts of fish predation on reefs with high abundances of fish herbivores.</ns0:p><ns0:p>While fish impacts were the predominant source of physical removal of fragments in the present study, the remaining corals experienced tissue losses due to fish predation and other unknown factors, resulting in the mortality of > 30% of these corals after six months. Fish predation has also been shown to reduce growth rates <ns0:ref type='bibr' target='#b39'>(Meesters et al. 1994)</ns0:ref>, decrease fecundity <ns0:ref type='bibr' target='#b64'>(Szmant-Froelich 1985;</ns0:ref><ns0:ref type='bibr' target='#b55'>Rinkevich and Loya 1989)</ns0:ref>, and increase susceptibility to diseases <ns0:ref type='bibr' target='#b67'>(Williams and Miller 2005;</ns0:ref><ns0:ref type='bibr' target='#b0'>Aeby and Santavy 2006)</ns0:ref>. Mortality of our outplanted corals was much higher than the average mortality (14.8%) reported for A. cervicornis one year after outplanting <ns0:ref type='bibr' target='#b62'>(Schopmeyer et al. 
2017)</ns0:ref>, highlighting a bottleneck that needs to be addressed to optimize the long-term success of using fragmented massive corals for restoration. Lower fragment removal rates and reduced prevalence of fish predation were related to the attachment method (glue vs. cement). None of the fragments attached with cement were removed by fish predators, showing that cement provides a stronger hold for the outplanted corals than the commonly used glue. Higher detachment of coral fragments attached with glue was also documented by <ns0:ref type='bibr' target='#b15'>Dizon et al. (2008)</ns0:ref>. Moreover, corals allowed to recover tissue over their exposed skeletal walls prior to outplanting ('raised healed' treatment) experienced less predation than corals with exposed skeletal walls ('raised exposed' treatment). Colony edges of exposed skeleton can be preferentially targeted by parrotfish feeding on turf or endolithic algae, resulting in the higher prevalence of fish bites recorded <ns0:ref type='bibr' target='#b8'>(Bruckner and Bruckner 1998)</ns0:ref>. Thus, allowing fragmented corals to skirt over the exposed skeleton and grow onto the attachment platform would be the desired step before outplanting. This approach is used in the microfragmentation method described by <ns0:ref type='bibr' target='#b46'>Page et al. (2018)</ns0:ref>, where small microfragments composed mainly of tissue with limited skeleton are grown ex situ until the coral tissue reaches the edges of the ceramic plug, resulting in larger coral outplants without exposed skeletal walls and low height, thereby reducing predation risk. This would increase the time a fragment remains within nurseries but would also limit predation impacts. Finally, embedding corals into the cement platform ('embedded' treatment) mimics this process by lowering coral height and reducing the amount of exposed skeletal wall, and it reduced predation prevalence to < 1% of corals. While placing corals embedded into cement may be an option for limiting removal and predation mortality, embedded corals had > 3% of the coral tissue covered by sediments, highlighting a potential tradeoff between reduced predation and sediment impacts that needs to be further evaluated.</ns0:p><ns0:p>Lastly, one factor that may have contributed to the high levels of removal and predation recorded here is the size of the coral fragments used in this study. Prior research has shown a relationship between the size of the fragments or colonies used for restoration and their survivorship and susceptibility to fish predation. For example, a size refuge for coral nubbins was documented by <ns0:ref type='bibr' target='#b12'>Christiansen et al. (2009)</ns0:ref>, <ns0:ref type='bibr' target='#b25'>Jayewardene et al. (2009)</ns0:ref>, <ns0:ref type='bibr' target='#b19'>Gibbs and Hay (2015)</ns0:ref>, and <ns0:ref type='bibr' target='#b48'>Quimpo et al. (2019)</ns0:ref> in field or laboratory experiments. Moreover, in a recent study by <ns0:ref type='bibr' target='#b36'>Lustic et al. (2020)</ns0:ref>, medium-sized (40-130 cm 2 ) colonies of Orbicella faveolata and Montastraea cavernosa outplanted onto a reef in the Florida Keys showed no significant impacts of fish predation, highlighting a potential size threshold for predation impacts. Thus, the impacts from fish predation may be mitigated in Florida by outplanting larger fragments or, as described by <ns0:ref type='bibr' target='#b46'>Page et al. 
(2018)</ns0:ref>, by deploying smaller corals together as tight clusters to foster fusion and function as a larger skeletal unit. There is a trade-off between the number of corals derived from a single parent and the size of the fragments produced, so controlled experiments on the role of size in predation susceptibility are needed before the optimum size of massive coral outplants can be established, especially in habitats with high levels of fish predation.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>As coral declines continue worldwide, active reef restoration has emerged as a powerful management alternative to slow down and eventually help reverse these declines (Natl. Acad. Sci. 2019). As the number of techniques and species used in restoration increases beyond the established success of branching corals, practitioners and scientists are collaborating to develop expanded guidelines and best practices. These are critically needed to broaden the footprint of restoration while keeping restoration costs down. Our study, based on the restoration of small fragments (< 5 cm 2 ) of four species of massive corals, identified predation by fish as a major bottleneck in restoration success, as the activities of a subset of fish taxa (butterflyfishes, wrasses, parrotfishes, surgeonfishes, damselfishes) caused both high rates of fragment removal and tissue mortality on remaining fragments. Thus, methods to reduce these predation impacts on massive-coral fragments need to be developed for restoration to be an effective tool in Florida. Here, we identified fragment attachment method (cement performed better than glue) and coral placement (fragments performed better with tissue covering the skeletal walls and deployed either flush with or embedded within the outplanting platforms) as factors that can be used to reduce impacts. We also identified the need to conduct additional experiments to discern the interactive role of fish abundance and territoriality in fragment performance and to explore the role of fragment size and species palatability in survivorship. We believe that the recent development and adoption of microfragmentation as a technique for massive coral propagation will, in the near future, provide the corals needed to develop more efficient outplanting methods and circumvent the fish predation bottleneck identified here, allowing for the successful restoration of these keystone species. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5</ns0:figDesc></ns0:figure>
</ns0:body>
" | "Dear Editor,
We thank you for the opportunity to revise our manuscript based on the very constructive suggestions made by all reviewers. The number of questions the reviewers had is evidence of the interest there is among restoration scientists on the topic of massive coral propagation and restoration as the next frontier in active restoration. While our paper, only the second after Page et al. (2018) to address this coral outplanting approach, will not answer all of the questions posed, we believe it adds another important step towards developing best practices to expand our restoration toolbox. We have completed a major revision based on the reviewers’ suggestions.
Our revisions and responses to the reviewers appear below in bold type.
Reviewer 1
Comments for the author
Overall the study was well written, well organized, and was sound in the analyses. I commend the authors on the clear and organized manuscript.
Thank you
My only concerns about the study are regarding some methodological details that were missing and that require clarification. Please see the specific points below. Additional minor edits and comments can be found in the attached annotated PDF.
All suggested edits included in the annotated pdf were completed as follows:
Please provide more detail to describe where corals were outplanted. Onto cement hardbottom with low coral cover? amongst reef with high cover? near conspecifics? How many replicate fragments per species per site were deployed? and were there multiple clusters of corals at each site? More details are needed about the experimental design, spatial arrangement and sample size.
The description of the deployment scheme was expanded as follows for exp 1:
“At each reef, the corals were deployed within replicate, square grids (3 x 3 m, 5 x 5 m) based on substrate availability. Four such grids were deployed in Reef 1, 7 in Reef 2, and 6 in Reef 3. The corals were spaced 40-60 cm apart within each grid and grids were separated by at least 2 m. The coral outplants were placed at least 20 cm away from existing stony and soft corals, sponges, and the zoanthid Palythoa. In total, 57 M. cavernosa, 125 O. faveolata, 88 P. clivosa, and 45 P. strigosa were outplanted among the 3 reefs, with all 4 species represented in each plot.
For exp 2:
“Individual coral microfragments were attached in groups of three (triads) onto each cement pizza. Corals were deployed in clusters of 3 fragments to foster coral fusion and faster colony development, as described in Page et al. (2018). The pizzas were cemented individually onto the reef pavement within plots (n = 120 corals placed onto 40 pizzas). Each plot consisted of 10 pizzas (n = 3-4 pizzas per treatment) deployed haphazardly within the plot and spaced 30-50 cm apart. Plots were separated by 1 m. In addition to using the cement pizzas, coral microfragments were mounted onto ceramic plugs using glue and outplanted directly onto the reef (as used in Page et al., 2018) to serve as controls (Fig. 1B, D). All fragments used as controls had healed skeletal walls covered in tissue. Control corals mounted on plugs were also grouped as triads with spacing between corals similar to the pizzas, and deployed directly onto the substrate within the same plots as the experimental corals using cement. Each plot received 5 control triads (n = 20 triads, 60 corals).”
More details are required for how the average percent tissue removal was calculated. Did you do this using some sort of image analysis? Estimates by eye into classification bins?
Text now reads:
“The average percent tissue removal was estimated visually for each coral using 10%-classification bins. Lastly, the proportion of the tissue area covered by sediments for each coral outplant was visually evaluated at one and three weeks using the methods just described. Values for these two metrics were averaged within pizzas/triads and compared among treatments using ANOVA.“
Which species were identified as corallivorous or predatory?
The text was modified to include the taxa considered fish predators as follows: “The average number of corallivorous fish (i.e., parrotfishes, damselfishes, butterflyfishes, surgeonfishes, triggerfishes) was 2.7 ± 6.0 individuals survey-1 (mean ± SD) at Reef 1, 2.0 ± 4.7 at Reef 2, and 0.8 ± 2.3 at Reef 3 (Fig. 4B).”
It was unclear until the final sentence that the fish abundance and predation surveys were undertaken at all reefs, to provide context for both experiments. Since this section followed the second experiment, I assumed it was only relevant to that one. Consider re-organizing.
We did re-organize the section. The first sentence now reads, making it clear all 3 reefs were surveyed:
“Fish surveys were conducted at each reef site using the Reef Visual Census (RVC) method (Bohnsack and Bannerot 1986). “
We also modified the text relevant to the video surveys as this was brought up by the other reviewers. We now make clear that the video surveys were only used to identify fish interacting with the corals and no quantitative data (besides a species list) were used.
“In addition to the visual fish surveys, we deployed a video camera immediately after each coral deployment focused on the newly outplanted corals to document the fish species observed interacting with the corals after the divers had left the plot. The duration of these deployments was variable (1-7 hrs) based on the time spent by the divers at the site during deployment and was only intended to compile a list of fish species approaching and/or biting the corals.”
Line 225 lists the %s by species in a different order as line 228. Line 228 appears to be in numerical order. Consider ordering them the same way for ease of comparison.
Done. The species were re-ordered in increasing order based on mortality
Lines 299-304. This paragraph felt a bit unfinished and I was unclear about the point being made with the examples provided. Are you suggesting that the Pseudodiploria species were preferentially targeted because of the specific corallivores present? If so, which species would that be? And was there a preference by the 3-spot damsel for O.annularis observed in this study as well?
We modified the text to state that the factors that may influence prey selection were not tested here but warrant further investigation:
“In our study, impacts of predation varied by species, with P. clivosa and P. strigosa experiencing the highest levels of mortality. While the potential reasons for the differences in species susceptibility to predation were not measured here, factors such as palatability, nutritional content, or skeletal characteristics may play a role and need to be investigated further. Nevertheless, prey selection based on coral species has been previously documented for the teardrop butterflyfish (Chaetodon unimaculatus) that showed a preference for feeding on Montipora verrucosa in Hawaii (Cox 1986). Similarly, wild and outplanted A. cervicornis and O. annularis were selectively targeted by the territorial three-spot damselfish (Kaufman 1977; Knowlton et al. 1990; Schopmeyer and Lirman 2015).”
Could there be a confounding factor here that 'raised healed' corals had a larger starting surface area of live tissue because of the tissue re-growth down the walls? They may also be fully healed reducing the potential for disease transmission - were the other treatments fully healed?
As stated in the manuscript, we did not observe any disease in any of the treatments. The driver of mortality here was predation.
• Line 99. More details are needed about the experimental design, spatial arrangement of fragments and sample size, particularly for experiment 1.
The description of the deployment scheme was expanded as follows for exp 1:
“At each reef, the corals were deployed within replicate, square grids (3 x 3 m, 5 x 5 m) based on substrate availability. Four such grids were deployed in Reef 1, 7 in Reef 2, and 6 in Reef 3. The corals were spaced 40-60 cm apart within each grid and grids were separated by at least 2 m. The coral outplants were placed at least 20 cm away from existing stony and soft corals, sponges, and the zoanthid Palythoa. In total, 57 M. cavernosa, 125 O. faveolata, 88 P. clivosa, and 45 P. strigosa were outplanted among the 3 reefs, with all 4 species represented in each plot.
For exp 2:
“Individual coral microfragments were attached in groups of three (triads) onto each cement pizza. Corals were deployed in clusters of 3 fragments to foster coral fusion and faster colony development, as described in Page et al. (2018). The pizzas were cemented individually onto the reef pavement within plots (n = 120 corals placed onto 40 pizzas). Each plot consisted of 10 pizzas (n = 3-4 pizzas per treatment) deployed haphazardly within the plot and spaced 30-50 cm apart. Plots were separated by 1 m. In addition to using the cement pizzas, coral microfragments were mounted onto ceramic plugs using glue and outplanted directly onto the reef (as used in Page et al., 2018) to serve as controls (Fig. 1B, D). All fragments used as controls had healed skeletal walls covered in tissue. Control corals mounted on plugs were also grouped as triads with spacing between corals similar to the pizzas, and deployed directly onto the substrate within the same plots as the experimental corals using cement. Each plot received 5 control triads (n = 20 triads, 60 corals).”
o If corals were 30-50 cm apart, I think it is likely that the impacts of predation on one fragment could influence the others in close proximity, which would mean that the individual fragments are not independent and are spatially autocorrelated.
This is a really interesting potential confounding factor that was not measured here or captured in the design. We acknowledge this in the discussion as follows:
“Similarly, the high level of predation may have been influenced by the relatively close spacing of outplanted corals (30-60 cm) so that once a prey item was detected, detection of additional corals within a grid was autocorrelated. Thus, the potential role of fish territoriality and spacing of corals on impacts on newly outplanted corals needs further investigation, especially considering the high impacts recorded here that represent a drain in restoration resources.”
o How many replicate fragments per species per site were deployed? and were there multiple clusters of corals at each site? Were the corals deployed in species-specific clusters, or all mixed together?
The description of the deployment scheme was expanded as follows:
“At each reef, the corals were deployed within replicate, square grids (3 x 3 m, 5 x 5 m) based on substrate availability. Four such grids were deployed in Reef 1, 7 in Reef 2, and 6 in Reef 3. The corals were spaced 40-60 cm apart within each grid and grids were separated by at least 2 m. The coral outplants were placed at least 20 cm away from existing stony and soft corals, sponges, and the zoanthid Palythoa. In total, 57 M. cavernosa, 125 O. faveolata, 88 P. clivosa, and 45 P. strigosa were outplanted among the 3 reefs, with all 4 species represented in each plot.
“Individual coral microfragments were attached in groups of three (triads) onto each cement pizza. Corals were deployed in clusters of 3 fragments to foster coral fusion and faster colony development, as described in Page et al. (2018). The pizzas were cemented individually onto the reef pavement within plots (n = 120 corals placed onto 40 pizzas). Each plot consisted of 10 pizzas (n = 3-4 pizzas per treatment) deployed haphazardly within the plot and spaced 30-50 cm apart. Plots were separated by 1 m. In addition to using the cement pizzas, coral microfragments were mounted onto ceramic plugs using glue and outplanted directly onto the reef (as used in Page et al., 2018) to serve as controls (Fig. 1B, D). All fragments used as controls had healed skeletal walls covered in tissue. Control corals mounted on plugs were also grouped as triads with spacing between corals similar to the pizzas, and deployed directly onto the substrate within the same plots as the experimental corals using cement. Each plot received 5 control triads (n = 20 triads, 60 corals).”
• Line 169/Figure 4D. The fish abundance and predation surveys require more methodological details.
The text was modified as follows:
“Fish surveys to compare fish abundance at coral outplant sites were conducted as part of experiment one at each reef site using the Reef Visual Census (RVC) method (Bohnsack and Bannerot 1986). Using this method, the surveyor recorded the abundance of fish taxa from a stationary point at the center of the study site within a cylindrical 15-m diameter survey area, extending from the substrate to the surface of the water column. Each survey was completed in 15 min and all fishes observed were identified to species level. Between May 2018 and February 2019, we completed 13 RVC surveys at Reef 1, 10 surveys at Reef 2, and 14 surveys at Reef 3. During monitoring, all three reefs were surveyed within one month. All surveys were completed by a single, expert observer. The mean abundance of all corallivorous or predatory fish was compared among reefs using ANOVA. In addition to the visual fish surveys, we deployed a video camera immediately after each coral deployment focused on the newly outplanted corals to document the fish species observed interacting with the corals after the divers had left the plot. Each video was viewed and a list of species interacting with the outplants was compiled. The duration of these deployments was variable (1-7 hrs) based on the time spent by the divers at the site during deployment and was only intended to compile a list of fish species approaching and/or biting the corals.”
o Are the data presented in Figure 4D from the visual surveys only? Or both visual surveys and video?
The following text was added to the Fig. 4 caption to clarify this point:
“D) Average (± S.D.) abundance of fish taxa that interacted with outplanted corals at the outplanting reefs based on the RVC fish surveys conducted at all 3 reefs.”
o 1-7 hours of video is a huge range. Was there a comparable amount of video captured from each site? Were they standardized for the time of day (fish behaviour is often diurnal/crepuscular). It is unclear if any data were collected from the video and used in analyses, and if so, how?
We modified the text relevant to the video surveys as this was brought up by the other reviewers. We now make clear that the video surveys were only used to identify fish interacting with the corals and no quantitative data (besides a species list) were used.
“In addition to the visual fish surveys, we deployed a video camera immediately after each coral deployment focused on the newly outplanted corals to document the fish species observed interacting with the corals after the divers had left the plot. The duration of these deployments was variable (1-7 hrs) based on the time spent by the divers at the site during deployment and was only intended to compile a list of fish species approaching and/or biting the corals.”
o Was the RVC survey conducted for a set duration? To what taxonomic level were the fish data collected? Were the data collected by one observer or multiple observers (there can be a strong observer bias in fish visual census data collection)?
I would recommend at the very least including a supplementary table with the dates and times and other details of each survey done across the reefs, in addition to Supp table 3. I would also suggest adding the sample size (number of surveys), average survey time, and standard deviations around the mean estimates to that table S3.
We have added details about the duration of the surveys, the taxonomic resolution, and the number of observers to the methods. The sample size and date range for surveys were already included. We do not believe adding specific dates would add anything meaningful to the information already included, but we have stated that all 3 reefs were surveyed within one month during each monitoring period:
“Each survey was completed in 15 min and all fishes observed were identified to species level. Between May 2018 and February 2019, we completed 13 RVC surveys at Reef 1, 10 surveys at Reef 2, and 14 surveys at Reef 3. During monitoring, all three reefs were surveyed within one month. All surveys were completed by a single, expert observer.”
• Line 102. Please provide more detail to describe where corals were outplanted and add contextual information about the reef. Were the fragments deployed onto cement hardbottom with low coral cover? Amongst or near corals or in areas of high cover? Near conspecifics? Were they near any obvious fish territories? Deliberately placed within or outside of fish territories?
The description of the deployment scheme was expanded as follows:
“At each reef, the corals were deployed within replicate, square grids (3 x 3 m, 5 x 5 m) based on substrate availability. Four such grids were deployed in Reef 1, 7 in Reef 2, and 6 in Reef 3. The corals were spaced 40-60 cm apart within each grid and grids were separated by at least 2 m. The coral outplants were placed at least 20 cm away from existing stony and soft corals, sponges, and the zoanthid Palythoa. In total, 57 M. cavernosa, 125 O. faveolata, 88 P. clivosa, and 45 P. strigosa were outplanted among the 3 reefs, with all 4 species represented in each plot.
“Individual coral microfragments were attached in groups of three (triads) onto each cement pizza. Corals were deployed in clusters of 3 fragments to foster coral fusion and faster colony development, as described in Page et al. (2018). The pizzas were cemented individually onto the reef pavement within plots (n = 120 corals placed onto 40 pizzas). Each plot consisted of 10 pizzas (n = 3-4 pizzas per treatment) deployed haphazardly within the plot and spaced 30-50 cm apart. Plots were separated by 1 m. In addition to using the cement pizzas, coral microfragments were mounted onto ceramic plugs using glue and outplanted directly onto the reef (as used in Page et al., 2018) to serve as controls (Fig. 1B, D). All fragments used as controls had healed skeletal walls covered in tissue. Control corals mounted on plugs were also grouped as triads with spacing between corals similar to the pizzas, and deployed directly onto the substrate within the same plots as the experimental corals using cement. Each plot received 5 control triads (n = 20 triads, 60 corals).”
Additional contextual information would be particularly helpful for the discussion – are removal rates so high because obligate corallivores have no other food sources due to low coral cover at these sites?
The following text was added to methods and discussion to address this comment:
“Corals were outplanted onto three reef sites in Miami, Florida, in June-July 2018: Reef 1 (25.70° N, 80.09° W; depth = 6.0 m), Reef 2 (25.68° N, 80.10° W; 7.5 m), and Reef 3 (25.83° N, 80.11° W; 6.4 m). These reefs have low topography and very low cover of stony corals (< 1%, unpublished data).”
“With coral cover presently so low on Florida reefs (< 1% coral cover on the reefs used in this study), it is likely that fish predation is being concentrated on outplanted corals, posing concerns for restoration in depleted systems until a critical abundance threshold is reached (Schopmeyer and Lirman 2015). The concentration of predation on surviving corals after major declines in abundance due to a hurricane was previously documented by Knowlton et al. (1990).”
• Line 164, Fig 5. The authors report the % of tissue removed by predation in Figure 5. However, it is unclear how the average percent tissue removed was actually estimated (Line 164). Was the % estimated in situ by eye? Was the estimate made through image analysis of the fragments post-censusing? Was this an estimate on each fragment averaged across pizzas/triads? Please clarify.
Text was modified as follows:
“The average percent tissue removal was estimated visually for each coral using 10%-classification bins. Lastly, the proportion of the tissue area covered by sediments for each coral outplant was visually evaluated at one and three weeks using the methods just described. Values for these two metrics were averaged within pizzas/triads and compared among treatments using ANOVA.”
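To make this procedure fully explicit, a minimal sketch in R of how such per-coral scores could be averaged within pizzas/triads and then compared among treatments with a one-way ANOVA is shown below. This is only an illustrative outline; the data frame "scars" and its column names are hypothetical placeholders and not the exact objects in the R code provided with the submission.

    # assumed columns: unit (pizza/triad ID), treatment, pct_removed (10%-bin score per coral)
    unit_means <- aggregate(pct_removed ~ unit + treatment, data = scars, FUN = mean)
    fit <- aov(pct_removed ~ treatment, data = unit_means)
    summary(fit)      # one-way ANOVA among treatments
    TukeyHSD(fit)     # pairwise treatment comparisons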
• Line 90. Please define the ‘height of skeletal wall’ in more detail. Does this mean the depth of the skeletal structure below the penetration of live tissue? I ask this because the 4 species used are quite different in terms of polyp depth and the depth of live tissue penetrating within the skeleton. This can have follow-on consequences for time required for tissue regeneration on the sides of the fragment.
Text was modified as follows:
“Each parent colony was cut into small fragments (average size = 4.2 ± 1.9 cm2, mean ± SD) using a diamond band saw, and fragments were attached to ceramic plugs using super glue as described by Page et al. (2018). After fragmentation, the height of the fragments ranged from 0.5-1.0 cm.”
We believe the methods describe the fragmentation process adequately, and we state that the fragments used had skeletal heights between 0.5-1.0 cm.
While it is true that skeletal height may influence regeneration, we did not track this in this paper and we only examined removal based on attachment method and whether exposed skeleton was present along the sides of the fragment (exp 2).
• Lines 102-103. Please provide details for the epoxy and cement types used.
Done. Text now reads: “Microfragmented corals were first glued onto ceramic plugs using polyurethane waterproof glue (Gorilla Glue) (Fig. 1A, B). The plugs with the corals were then mounted into cement pucks with 2-part epoxy putty (AllFix). Finally, these pucks were secured onto the reef using cement (1 part Portland cement, 0.1 part silica fume),”
Reviewer 2
Basic reporting
I found that some of the references are old, especially those that pertain to the impact of fishes on coral nubbins. I have listed quite a few papers that could bring more emphasis on the role that fishes play in the detachment or partial mortality of corals (please see GENERAL COMMENTS – Introduction).
I suggest that this be mentioned in a paragraph or two in the introduction as it was a major objective of the study.
Experimental design
I found that some of the section placements were a bit odd (please see GENERAL COMMENTS – Methodology).
However, I have a few clarifications on the number of replicates, the scoring of coral tissue mortality and the variable video duration of the fish surveys (please see GENERAL COMMENTS – Methodology).
Validity of the findings
Data are provided by the authors, together with the R code that was used to run the statistical analyses. I have a few clarifications regarding the statistical analyses, particularly the response variables used in the GLM (i.e. is “proportion removed” (L115) the presence/absence of the microfragment, the tissue mortality or a combination of both?).
The text makes clear it is the proportion removed that is tested with the GLM:
“The proportion of corals physically removed by fish predators from their outplanting platforms was compared among coral species, reefs, and time since outplanting using a Generalized Linear Model (GLM) following guidance by Warton and Hui (2011). Here, we used a GLM with a binomial distribution and a logit link function to model the probability of outplant removal.”
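To make the model structure fully explicit, below is a minimal sketch in R of the kind of call described above. This is only an illustrative outline; the data frame "dat" and its column names are hypothetical placeholders and not the exact objects in the R code provided with the submission.

    # assumed columns per plot: removed (fragments removed), total (fragments outplanted),
    # species, reef, time (monitoring period)
    mod <- glm(cbind(removed, total - removed) ~ species + reef + time,
               family = binomial(link = "logit"), data = dat)
    summary(mod)                          # coefficient estimates (as in Table S1)
    anova(mod, test = "Chisq")            # chi-square tests of the fixed effects
    1 - mod$deviance / mod$null.deviance  # D-squared, proportion of deviance explained
    # Tukey-style pairwise contrasts among factor levels (as in Table S2) can be obtained
    # with multcomp::glht(mod, linfct = multcomp::mcp(species = "Tukey"))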
However, I think that the discussion can be further improved by incorporating more details on fish detachment/predation on coral nubbins, and how the authors’ results (i.e. rates of detachment/mortality) are similar/dissimilar to these other studies (please see GENERAL COMMENTS – Discussion).
We have completely rewritten the discussion to address these issues as well as other raised by the additional reviewers.
There are some speculations in the discussion regarding unexplained outcomes of the experiment (e.g. L324-326 mortality through time with unidentified causes), but the authors have highlighted these sections and mentioned that these will be interesting research avenues in the future.
Yes, we fully acknowledge that due to the scope of this study we can’t address all outcomes and that is why we proposed future research avenues in several places within the Discussion.
Introduction
L42-45: These two stressors are likely the two most pressing issues that coral reefs face due to their large spatial coverage (Hoey et al. 2016 – Diversity), and it might be good to highlight that. “Globally, rising ocean temperature and ocean acidification are perhaps the ….. and can lead to mass coral mortality and reduced calcification rates”.
We have added the suggested text and reference:
“Coral populations have experienced drastic declines due to a variety of stressors (Gardner et al. 2003; McLean et al. 2016). Globally, increases in ocean temperature and ocean acidification are likely the most serious threats and can lead to mass coral mortality and reduced calcification rates (Hoey et al. 2016).”
L49-51: Coral restoration is only one of the suites of tools used to conserve coral reefs. It might be good to indicate that addressing these issues (e.g. climate change) will need proper global and local management and governance, but active restoration programs can aid in the faster recovery and rehabilitation of reefs.
Agreed. The text was modified as follows to reflect this:
“The magnitude and rate of coral decline require drastic, large scale actions to curb climate change impacts as well as a suite of local conservation and management measures. Active coral reef restoration has developed as one of the tools available to foster coral recovery and restore the ecosystem services that healthy coral reefs provide (Rinkevich 2019).”
Methodology
L70-71: Commonly attached using glue, epoxy, etc.? Kindly indicate
Text was modified as follows:
“These microfragments can be mounted onto various types of substrate (e.g., ceramic plugs, plastic cards, cement pucks) using glue or epoxy and allowed to grow by skirting over the attachment platform before being outplanted.”
L76-78: I think this section can be expanded by specifying why is there a need to develop and evaluate ways to maximize outplant survivorship and success. For example, outplanted branching corals are vulnerable to detachment by hydrodynamic forces (Shafir et al. 2006 – Mar Biol; Shaish et al. 2008 – J Exp Mar Biol Ecol) or through the predation (of fishes or invertebrates) or incidental grazing of fishes (Miller & Hay 1998 - Oecologia; Christiansen et al. 2009 – Coral Reefs; Jayewardene et al. 2009 – Coral Reefs; Frias-Torres & van de Geer 2015 - PeerJ; Gibbs & Hay 2015 – PeerJ; Gallagher & Doroupolos 2017 – Coral Reefs; Knoester et al. 2019 – Mar Ecol Prog Ser; Quimpo et al. 2019 – Rest Ecol). The latter (i.e. influence of fishes), I think needs to be particularly mentioned since it is a key question in the study.
While the references suggested by the reviewer appear in detail in the Discussion, we did add a sentence to the end of the introduction highlighting why this type of studies with massive coral taxa are especially important at this time:
“…This is especially relevant in Florida and the Caribbean where the massive coral species used here have been severely impacted by the recent outbreak of stony coral tissue loss disease (SCTLD) (Precht et al. 2016; Alvarez-Filip et al. 2019).”
“Moreover, corals allowed to recover tissue over their exposed skeletal walls prior to outplanting (“raised healed” treatment) had less predation than corals with exposed skeletal walls (“raised exposed” treatment). Colony edges of exposed skeleton can be preferentially targeted by parrotfish feeding on turf or endolithic algae, resulting in the higher prevalence of fish bites recorded (Bruckner and Bruckner 1998). Thus, allowing fragmented corals to skirt over the exposed skeleton and grow onto the attachment platform would be the desired step before outplanting.”
L95-96: Can you kindly clarify why this is important to highlight?
This issue (exposed skeletal walls void of tissue) is addressed as a factor influencing predation impacts in the results and discussion:
“After three weeks, the average percentage of tissue removed by predation was significantly lowest for corals within the “embedded” treatment and highest for corals placed within the “raised exposed” treatment..”
L98: Can you please mention how many replicates were done per coral species?
Done. See prior edits based on suggestions made by Reviewer 1.
L101-104: Sentence can be improved. I suggest “Microfragmented corals … epoxy and were secured to the reef using cement, with 30-50 cm (from L109) separating each coral outplant”
The deployment description has been expanded based on suggestions made by Reviewer 1.
L109: Please remove “The corals were spaced …” if authors’ agree with the changes above
The deployment description has been expanded based on suggestions made by Reviewer 1.
L110: Is tissue mortality here scored as % of tissue loss? Kindly specify.
The text was modified to state how prevalence of mortality was assessed:
“For each outplant in this experiment, we documented presence/absence of the coral fragment (Fig. 1D) and prevalence (i.e., proportion of corals by species with signs of predation) of tissue mortality caused by predation on remaining fragments (e.g., missing polyps, feeding scars) (Fig. 1E, F) at one week, one month, and six months after deployment.”
L115: Is proportion of corals removed referring to detachment/dislodgement, fish predation or cumulatively an effect of both? Kindly specify.
The text was modified as follows:
“The proportion of corals physically removed by fish predators from their outplanting platforms was compared among coral species,..”
L124-167: It is quite unorthodox to include research questions and hypothesis in the methodology. I suggest some of the concepts, particularly how coral height and method of attachment be discussed in the introduction. These can be incorporated into the section (i.e. L76-78) that discusses how fish influence coral nubbins, since previous studies have shown that coral nubbin size (Christiansen et al. 2009 – Coral Reefs; Jayewardene et al. 2009 – Coral Reefs; Quimpo et al. 2019 – Res Ecol) influences that rate of detachment/mortality in field and laboratory settings. Moreover, method of attachment has also been shown to affect detachment, but the reasons for these were not really identified (e.g. Dizon et al. 2008 – Aquatic Conser; Levy et al. 2010 – Ecol Eng).
We decided to retain the structure as submitted. These are key methodological questions so we disagree they do not belong in the methods. None of the other reviewers had issues with this structure so we would like to keep it.
L171: Can you kindly expound on why deployment times for the video cameras were variable (i.e. 1 to 7 hours)?
Done. The following text was added:
“In addition to the visual fish surveys, we deployed a video camera immediately after each coral deployment focused on the newly outplanted corals to document the fish species observed interacting with the corals after the divers had left the plot. Each video was viewed and a list of species interacting with the outplants was compiled. The duration of these deployments was variable (1-7 hrs) based on the time spent by the divers at the site during deployment and was only intended to compile a list of fish species approaching and/or biting the corals.”
L171-173: Sentence can be improved to explain why the fish surveys were done. “Additionally, …. to examine differences in fish species assemblages at each site …”
Agreed. Text was modified as follows:
“Fish surveys to compare fish abundance at coral outplant sites were conducted as part of experiment one at each reef site using the Reef Visual Census (RVC) method (Bohnsack and Bannerot 1986).”
L177: Reference used to classify trophic group of fishes (e.g. fishbase)?
We added the 2 references used:
“The mean abundance of all corallivorous or predatory fish (Robertson 1970; Randall 1974) was compared among reefs using ANOVA.”
Results
L187-190: It might be better to move this sentence down (i.e. after comments on L 192-193).
L191: It might be better to move this up to L187 after “species-dependent”. For example, “The probability …. with clear ranks in coral susceptibility to removal that was consistent among experimental periods and reef locations”.
L192-193: It might be better to move this sentence after the sentence above (i.e. comments on L191). “Moreover, there was a minor, but significant … in removal between … “.
We modified the text based on these comments as well as comments made by another review:
“The probability of removal was species-dependent. One week after deployment, 8% of M. cavernosa (n = 53 fragments outplanted), 12% of O. faveolata (n = 123), 23% of P. clivosa (n = 80), and 27% of P. strigosa (n = 41) fragments were physically removed from the outplant platforms by fish (all sites combined) (Fig. 4A). The ranking of the probability of removal for the four species was consistent across space (reefs) and time (Fig. 3). Moreover, there was a minor, but significant increase in the probability of removal over time (Fig. 3, Table S2). The majority of removal occurred during the first week, but corals continued to be removed over time, with an additional 1.9% of M. cavernosa, 6.6% of O. faveolata, 7.1% of P. clivosa, and 7.0% of P. strigosa removed between one and six months after deployment (Fig. 4A).”
Discussion
L270: It might also be good to highlight that removal declined with time to emphasize that such high rates of removal (8-27%) are only within the first few days after outplanting
Agreed: The following text appears in the Discussion:
“Predation impacts on coral outplants were highest within the first week and tapered off with time, declining to <1% of remaining corals removed after six months, suggesting outplant habituation of the fish fauna to new coral “recruits” may play a role.”
L271-272: Maybe highlight results from the authors’ own work here and relate with the previous study by Page et al. (2018). For instance, “Rates of predation were slightly lower at ~ 5-15% (estimated from Fig. 4B), which were x-fold to y-fold lower than predation rates by Page et al. (2018).”
Agreed, but we already do this by comparing our 1-week outcomes to the results of Page et al., the only other study to have looked at fish predation on massive corals in Florida to date:
“In our study, 8-27% of fragments from four species (O. faveolata, M. cavernosa, P. clivosa, P. strigosa) outplanted onto three reefs in Miami, Florida, US, were removed by fish within one week. A prior study from Florida also documented large predation impacts on M. cavernosa and O. faveolata, with 45% and 22% of fragments affected by predation respectively within the first week (Page et al. 2018).”
L280-L286: This section can be further improved by incorporating recent research on family-specific consumption/detachment of corals by reef fishes (e.g. Triggerfish: Gibbs & Hay 2015 – PeerJ, Frias-Torres & van de Geer 2015; Butterflyfish: Gallagher & Doropoulos 2015 – Coral Reefs; Quimpo et al. 2019 – Res Ecol; Blennies: Christiansen et al. 2009; Parrotfish and Rabbitfish: Quimpo et al. 2019 – Res Ecol; Boxfish: Jayewardene et al. 2009).
We reviewed the missing references suggested by the reviewer and added several of these when appropriate as follows:
“Supporting this concept, Jayewardene et al. (2009) found lower prevalence of fish bites on coral nubbins in plots with higher coral cover.”
“…by Balistapus undulatus that targeted Pocillopora damicornis over Seriatopora hystrix (Gibbs and Hay 2015),..”
“Incidental fish predation was also observed by herbivorous fish removing algal tissue from nursery ropes in the Seychelles by Frias-Torres and van de Geer (2015).”
L292: Size refuge for coral nubbin has been demonstrated by Christiansen et al. (2009) Jayewardene et al. (2009), Gibbs & Hay (2015) and Quimpo et al. (2019) in field or laboratory experiments.
The following text was drafted to address the influence of size based on the suggestions of this reviewer:
“Lastly, one of the factors that may have resulted in the high levels of removal and predation recorded here may have been the size of the coral fragments used in this study. Prior research has shown a relationship between the size of the fragments or colonies used for restoration and their survivorship and susceptibility to fish predation. For example, a size refuge for coral nubbins was documented by Christiansen et al. (2009), Jayewardene et al. (2009), Gibbs and Hay (2015), and Quimpo et al. (2019) in field or laboratory experiments. Moreover, in a recent study by Lustic et al. (2020), medium-sized (40-130 cm2) colonies of Orbicella faveolata and Montastraea cavernosa outplanted onto a reef in the Florida Keys showed no significant impacts of fish predation, highlighting a potential size threshold for predation impacts. Thus, the impacts from fish predation may be mitigated in Florida by outplanting larger fragments or, as described by Page et al. (2018), by deploying smaller corals together as tight clusters to foster fusion and function as a larger skeletal unit. There is a trade-off between the number of corals derived from a single parent and the size of the fragments produced, so controlled experiments on the role of size in predation susceptibility are needed before the optimum size of massive coral outplants can be established, especially in habitats with high levels of fish predation.”
L299-302: As butterflyfishes were the only species that were recorded to consume coral tissue in this study, it may be good to expound on the diet of these fishes. Specifically, do they consume any of the four species regularly, and if no studies have been done to understand whether they do, maybe a more general statement about the proportion of massive corals in their diet would suffice. Berumen et al. (2005 – Mar Ecol Prog Ser) and the book by Morgan Pratchett (Biology of Butterflyfishes) are potentially good references.
Butterflyfish (Berumen 2005) were added to the list of taxa that feed selectively on specific coral prey as suggested (see next comment).
L299-304: Feeding selectivity has also been demonstrated for Balistapus undulates, wherein they preyed more on nubbins of Pocillopora damicornis over Seriatopora hystrix (Gibbs & Hay 2015).
L308-311: This sentence could be moved to the paragraph before this one (i.e. L299-304) as this paragraph talks about the species responsible for removing coral nubbins.
Thank you for this reference. We added it to the discussion on prey preference.
The whole paragraph was rewritten as:
“In our study, impacts of predation varied by species, with P. clivosa and P. strigosa experiencing the highest levels of mortality. While the potential reasons for the differences in species susceptibility to predation were not measured here, factors such as palatability, nutritional content, or skeletal characteristics may play a role and need to be investigated further. Nevertheless, prey selection based on coral species has been previously documented for Chaetodon unimaculatus that showed a preference for feeding on Montipora verrucosa in Hawaii (Cox 1986), by Balistapus undulatus that targeted Pocillopora damicornis over Seriatopora hystrix (Gibbs and Hay 2015), and by butterflyfish that target Acropora over other coral taxa (Berumen 2005). Similarly, wild and outplanted A. cervicornis and O. annularis were selectively targeted by the territorial three-spot damselfish (Kaufman 1977; Knowlton et al. 1990; Schopmeyer and Lirman 2015). “
L311-314: Indeed, grazing by fishes is important to control algae at reefs, recent studies have also shown that in experimental outplants, ~ 25-80% of turf algae are removed by herbivores (i.e. Ctenochaetus striatus, Chlorurus spilurus and Siganus fuscescens) (Knoester et al. 2019 – Mar Ecol Prog Ser; Quimpo et al. 2019 – Res Ecol).
We added the missing reference (Knoester et al. 2019) as follows:
“While a simple response to these patterns would be to avoid outplanting on reefs with high abundance of these taxa, it is important to note that parrotfishes and surgeonfishes (observed here to target outplanted corals) are also key grazers that are essential to maintain a low abundance of macroalgae on reefs (Mumby et al. 2006) and coral nursery settings (Knoester et al. 2019).”
Figures
Fig. 1B – Maybe mention what species of Pseudodiploria since 2 species were used
Done
Fig. 1E, F – It may be better to encircle or point at the tissue lesion as unfamiliar readers may not readily distinguish these feeding scars.
Instead of modifying the figure by adding circles, we modified the legend as follows:
“E-F) evidence of fish predation on outplanted microfragments as shown by the white skeletal lesions without living tissue.”
Fig. 5 – Maybe specify that these were P. clivosa fragments as this was the only species used in the second experiment
Done
Tables
STable 1-3: No captions?
Not sure why the reviewer was unable to see the captions for the Supplemental tables, as they are uploaded in the system.
Nevertheless, here are the captions as uploaded:
Supplemental 1
Title: Summary tables of the Binomial Generalized Linear Model used to test the proportion/probability of coral removal.
Legend: Shown are the coefficient estimates in relation to a reference point for each factor, standard errors of the estimates, t statistics, and p values for the null hypothesis of no difference with respect to the reference point. Significant coefficients are bolded. Null deviance, deviance, and D-squared indicate the quality of fit of the model.
Supplemental 2
Title: Tukey pairwise comparisons among categorical variable levels of the Binomial Generalized Linear Model used to test the proportion/probability of fragment removal by fish.
Legend: Shown are estimates, standard errors (SE), Z statistics, and P values. Significant comparisons are bolded. Mcav = Montastraea cavernosa, Ofav = Orbicella faveolata, Pcliv = Pseudodiploria clivosa, Pstri = Pseudodiploria strigosa.
Supplemental 3
Title: Abundance of fish taxa commonly observed interacting with coral outplants recorded during visual RVC surveys at the three outplanting reefs.
Legend: Includes scientific and common names as well as average fish abundance for all three outplant sites.
Reviewer 3: Erinn Muller
Introduction
Line 49: consider adding in a reference discussing white band or white plague within S. Florida since diseases were decimating the area long before SCTLD.
Done. We added the Richardson et al. 1998 reference
Line 54: perhaps change to “Until recently, restoration programs…” since microfragmenting massive corals has occurred within the FRT since 2012/2013 and tens of thousands of outplants have already been placed on reefs at this point
Done. To date was replaced by Until recently as suggested
Methods:
Line 112: this says that genotypes were not tracked, which suggests that you did not identify which outplant came from which parent colony. Is this the case? Or did you mean to say that genotype was not assessed molecularly, but you tracked the colonies from particular parent colonies? Please clarify.
Text was modified and expanded as follows to address this concern:
“Due to funding constraints, the parent colonies used in this study were not formally genotyped. Nevertheless, fragments from every parent colony were represented at each reef and, thus, the results combine the survivorship of corals from three parent colonies per species.”
“Due to funding constraints, the parent colonies used in this study were not formally genotyped. Nevertheless, fragments from every parent colony were represented in each treatment and, thus, the results combine the survivorship of corals from the five parent colonies.”
Line 116: spell out GLM and then abbreviate after; what factors were your predictor variables and random factors?
Done. GLM is spelled out at first mention and abbreviated thereafter.
The following text was also added:
“The model incorporated species, reef, and monitoring period (time) as fixed variables.”
Because of the use of proportions as our metric, we could not use individual corals as a random factor but we do not believe this statement needs to be included as it is implicit in the selection of proportions as predicted variable.
Line 155: there are substantial differences between your ‘controls’ and the Page et al method including: 1. size of corals, all outplants in Page et al. are grown out on land for 4-12 months after microfragmentation prior to outplanting until they reach the edge of the plugs. See figures in Page et al
2. There is no raised skeleton on the microfragments, as they consist of an incredibly thin layer of tissue with very little skeleton when they are initially fragged.
3. the ceramic plugs are attached with epoxy to be a gradual slope from the plug to the substrate.
4. no cement is ever used for attachment
5. microfragments are usual created as ~ 1 cm 2 frags, so I would actually not call your frags technically 'microfrags'. Just something again to acknowledge.
This needs to be acknowledged because as written it suggests the Page et al method is your control, but you have not followed the Page et al. methods in two key ways.
We acknowledge the difference and updated the Discussion text as follows when describing our “embedded” treatment, which most closely resembles the approach used by Page et al. (2018), as well as the fact that the Page et al. study deploys multiple plugs together to mimic a larger colony:
“…This approach is used in the microfragmentation method described by Page et al. (2018), where small microfragments composed mainly of tissue with limited skeleton are grown ex situ until the coral tissue reaches the edges of the ceramic plug, resulting in larger coral outplants without exposed skeletal walls and low height, thereby reducing predation risk. This would increase the time a fragment remains within nurseries but would also limit predation impacts. Finally, embedding corals into the cement platform (“embedded” treatment) mimics this process by lowering coral height and reducing the amount of exposed skeletal wall, and it reduced predation prevalence to < 1% of corals.”
And
“Thus, the impacts from fish predation may be mitigated in Florida by outplanting larger fragments or, as described by Page et al. (2018), by deploying smaller corals together as tight clusters to foster fusion and function as a larger skeletal unit.”
We have kept the term microfragmentation when referring to the fragmentation process but now call our corals “fragments” as they were indeed larger (2.8 vs. ~1 cm²) than the ones initially produced by Page et al.
Line 165: what’s the difference between ‘pizza’ and ‘plug triad’? I think they are the same so just use one here or explain the difference between the two. Maybe plug triads are for the control group. This just needs maybe one sentence to clarify.
Done. Text was modified as follows to make the distinction (pics of these are also shown in Fig. 1):
“In addition to using the cement pizzas, coral fragments were mounted onto ceramic plugs using glue and outplanted directly onto the reef (as used in Page et al., 2018) to serve as controls (Fig. 1B, D). All fragments used as controls had healed skeletal walls covered in tissue. Control corals mounted on plugs were grouped as triads with spacing between corals similar to the pizzas, and deployed directly onto the substrate within the same plots as the experimental corals using cement. Each plot received 5 control triads (n = 20 triads, 60 corals).”
Line 166: how was this visually assessed? Estimate percent covered? Ranked?
Text now reads:
“The average percent tissue removal was estimated visually for each coral using 10%-classification bins. Lastly, the proportion of the tissue area covered by sediments for each coral outplant was visually evaluated at one and three weeks using the methods just described. Values for these two metrics were averaged within pizzas/triads and compared among treatments using ANOVA.“
Line 170: 1 – 7 hours of a range seems like a lot. Why so variable and how did you account for this in the analyses? How did you tackle the video surveys vs in situ surveys? How did you collect data from the videos?
The following text was added:
“In addition to the visual fish surveys, we deployed a video camera immediately after each coral deployment focused on the newly outplanted corals to document the fish species observed interacting with the corals after the divers had left the plot. Each video was viewed and a list of species interacting with the outplants was compiled. The duration of these deployments was variable (1-7 hrs) based on the time spent by the divers at the site during deployment and was only intended to compile a list of fish species approaching and/or biting the corals.”
Any attempt at capturing growth rates over those 6 months to assess not just survival but whether the corals are growing too?
Unfortunately, not as part of this study.
Validity of the findings
Results
Line 184: this needs to be explained in the methods (variables used etc).
The following text was added:
“The model incorporated species, reef, and monitoring period (time) as fixed variables.”
Line 185: what does the sequential addition of variables mean? Sounds like you ran the model, then added another factor and re-ran, but then you only provide one p value and R² value. Does this represent all the variables included in the model?
We modified the text to solve this confusion as follows:
“The probability of outplant removal was explained by coral species, reef, and time as fixed effects (GLM χ2-test, p < 0.05) and the model explained 67% of the deviance (Fig. 3, Table S1, S2).”
Line 191: maybe just say ‘consistent through space and time’
Done. Text modified as follows:
“The ranking of the probability of removal for the four species was consistent across space (reefs) and time (Fig. 3).”
Line 195: is this cumulative or between 1 month and 6 months (a 5-month time period)? Shouldn’t these two time periods be standardized by day so that you are comparing rates rather than two time points that cover vastly different periods of time?
We clarified by changing the text as follows:
“removed between one and six months after deployment”
We do not believe calculating a removal rate by day makes sense as it assumes a linear removal rate over 5 months and the numbers would end up being very small fractions.
Line 201: provide an average value here rather than >9%, which could be anything.
Changed to 9.2%
Lines 205 – 223: I am confused why there are results on fish abundance here. The methods suggest fish data was collected for experiment 2, but these are results of experiment 1. Please clarify in the methods. Also are these results from in situ diver or video methods?
Sorry for the confusion. We modified the methods to indicate RVC fish surveys were done as part of experiment 1:
“Fish surveys were conducted as part of experiment one at each reef site using the Reef Visual Census (RVC) method (Bohnsack and Bannerot 1986).”
Line 232: What does “After this study” mean? By the end of the study?
Changed to: “Overall, …”
Line 250: extra parenthesis here
Removed
Comments for the author
Discussion
Line 262: use the reference from Mexico (Alvarez-Filip et al. in PeerJ) to show that SCTLD occurs in other places besides FL
The text was modified and the suggested reference added:
“Massive corals are key reef-building taxa that have experienced accelerated losses in the past few years due to the stony coral tissue loss disease (SCTLD) epidemic that was first detected in Florida in 2014 (Precht et al. 2016; Walton et al. 2018) and has now been documented in several locations in the Caribbean (Alvarez-Filip et al. 2019).”
Line 272: what I think is important to note within the Page et al. paper is that it was site specific. Offshore corals were highly predated while nearshore corals were predated much less. Acknowledging that site specific interactions between outplants and the fish community is another variable that needs to be addressed in the discussion.
Agreed. We added the following text to address this suggestion in the discussion:
“Differences in predation impacts between sites were also documented by Page et al. (2018) in the Florida Keys.”
Line 296: there is another part of this…grow out period. You can microfrag and then have longer grow out to reach the size needed for better outplant success. Microfragments are typically outplanted at full plug size with very little plug showing. Then the epoxy shores up the side of the disk. So it looks like your methods were not necessarily typical for what has been used as methods for current outplanting efforts. I think this needs to be acknowledged too.
Addressed in the Discussion:
“This approach is used in the microfragmentation method described by Page et al. (2018) where small microfragments composed mainly of tissue with limited skeleton are grown ex situ until the coral tissue reaches the edges of the ceramic plug, resulting in larger coral outplants without exposed skeletal walls and low height, thereby reducing predation risk. This would increase the time a fragment remains within nurseries but would also limit predation impacts.”
Line 302: make active voice rather than passive voice
Done: Text was modified as follows:
“Similarly, wild and outplanted A. cervicornis and O. annularis were selectively targeted by the territorial three-spot damselfish, affecting coral growth and survivorship (Kaufman 1977; Knowlton et al. 1990; Schopmeyer and Lirman 2015). “
Line 305: did you run regression analysis on this? If not, then you should use different words than ‘related’.
Changed to “associated with”.
Additional thoughts: why is predation such an issue for massive coral species and not Acropora cervicornis when they are both outplanted within fish territories and on reefs with high abundance? Might be worth discussing here as well.
We have expanded our discussion of territoriality and have added several references on the susceptibility to predation based on coral taxon but do not have any data to show why massive corals are targeted over branching Acropora. We do acknowledge that further experiments need to be conducted to measure palatability, nutritional content, and skeletal factors (beyond the scope of this paper though).
Can you tell from the video if the fish were biting the corals on the cement pizzas but just not removing them? Or were they deterred from biting the substrate in general?
The following text was added to note that coral removal by fish was not observed directly during deployment or in the video collected:
“It is important to note that no evidence of fish removing the corals from their outplanting platforms was captured in our video surveys. Nevertheless, the removal by fish was considered the only driver of the missing corals as corals did not detach during transport or during their time at the nursery where no fish predators are observed (pers. obs.)”
Please acknowledge growth rates and other ramifications for the methods you are suggesting. Although cement may decrease loss from predation, what could be the unintended consequences such as increased time for outplanting coral, potential influence on reduced growth rates by using cement (especially corals embedded in cement…they would take forever to fuse)?
I know you guys use cement all the time in restoration, but we see a significant reduction in growth rates for corals within our ex situ system that grow on cement vs ceramic plugs. The publication is in prep right now. We had to switch back from using cement to ceramic just to keep growth rates up. I think this at least needs to be discussed as a possible issue.
Unfortunately, we do not have growth data to compare coral growth on ceramic plugs and cement pizzas to address these valid issues.
We have had very good success with cement outplanting with Acropora and we are preparing a manuscript with these data but we do not have data yet on the influence of cement on massive coral growth.
Conclusion: seems like there are confounding factors as to whether it is ‘glue’ or the ‘ceramic plug’ that may have influenced removal rate. This needs to be clarified throughout the text. It could be the exposure of the plug since you did not grow out the tissue to the edge.
We were able to separate the effects of the glue and the plug by cementing corals on plugs and also in cement. Only the corals attached with glue were detached, whereas even corals with exposed edges were not removed when placed in cement!
Acknowledgements: suggest identifying role of those you mention, any funding sources needed to be mentioned here?
Done. Text now reads:
“We would like to thank S. Schopmeyer, M. Kaufman, J. Carrick, J. Unsworth, and R. Delp for their help in the field. This project was funded by a grant from NOAA’s Restoration Center (award OAA-NMFS-HCPO-2016-2004840).”
Figures: recommend changing results presented within the figure legend to a visual within the figure. The equal signs do not show all comparisons (e.g. does Reef 1 = Reef 3?)
Identify ‘coral species’ rather than ‘species’ in legends (as you discuss fish and coral in this paper)
Added “coral” when denoting coral species. We would like to keep the figure as it is as it was well-liked by another reviewer who praised the content and ease of understanding.
We modified the legend as follows:
“Reef 1 = Reef 2, Reef 1 ≠ Reef 3, Reef 2 ≠ Reef 3.”
" | Here is a paper. Please give your review comments after reading it. |
649 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>As coral reefs continue to decline globally, coral restoration practitioners have explored various approaches to return coral cover and diversity to decimated reefs. While branching coral species have long been the focus of restoration efforts, the recent development of the microfragmentation coral propagation technique has made it possible to incorporate massive coral species into restoration efforts. Microfragmentation (i.e., the process of cutting large donor colonies into small fragments that grow fast) has yielded promising early results. Still, best practices for outplanting fragmented corals of massive morphologies are continuing to be developed and modified to maximize survivorship.</ns0:p><ns0:p>Here, we compared outplant success among four species of massive corals (Orbicella faveolata, Montastraea cavernosa, Pseudodiploria clivosa, and P. strigosa) in Southeast Florida, US. Within the first week following coral deployment, predation impacts by fish on the small (< 5 cm 2 ) outplanted colonies resulted in both the complete removal of colonies and significant tissue damage, as evidenced by bite marks. In our study, 8-27% of fragments from four species were removed by fish within one week, with removal rates slowing down over time. Of the corals that remained after one week, over 9% showed signs of fish predation. Our findings showed that predation by corallivorous fish taxa like butterflyfishes (Chaetodontidae), parrotfishes (Scaridae), and damselfishes (Pomacentridae) is a major threat to coral outplants, and that susceptibility varied significantly among coral species and outplanting method. Moreover, we identify factors that reduce predation impacts such as: 1) using cement instead of glue to attach corals, 2) elevating fragments off the substrate, and 3) limiting the amount of skeleton exposed at the time of outplanting. These strategies are essential to maximizing the efficiency of outplanting techniques and enhancing the impact of reef restoration.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Coral populations have experienced drastic declines due to a variety of stressors <ns0:ref type='bibr' target='#b18'>(Gardner et al. 2003;</ns0:ref><ns0:ref type='bibr' target='#b38'>McLean et al. 2016)</ns0:ref>. Globally, increases in ocean temperature and ocean acidification are the most serious threats and can lead to mass coral mortality and reduced calcification rates <ns0:ref type='bibr' target='#b22'>(Hoey et al. 2016)</ns0:ref>. Rising ocean temperatures have resulted in increased frequency and intensity of bleaching events <ns0:ref type='bibr' target='#b20'>(Heron et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b24'>Hughes et al. 2019)</ns0:ref>. Also, ocean acidification has begun to reduce coral calcification rates and cause framework erosion <ns0:ref type='bibr' target='#b40'>(Muehllehner et al. 2016)</ns0:ref>. The magnitude and rate of coral decline require drastic, large scale actions to curb climate change impacts as well as a suite of local conservation and management measures. Active coral reef restoration has developed as one of the tools available to foster coral recovery and restore the ecosystem services that healthy coral reefs provide <ns0:ref type='bibr' target='#b53'>(Rinkevich 2019)</ns0:ref>. The present study focused on South Florida reefs where coral abundance has declined due to the interaction of high and low thermal anomalies <ns0:ref type='bibr' target='#b35'>(Lirman et al. 2010;</ns0:ref><ns0:ref type='bibr' target='#b37'>Manzello 2015;</ns0:ref><ns0:ref type='bibr' target='#b14'>Drury et al. 2017)</ns0:ref>, nutrient inputs and algal overgrowth <ns0:ref type='bibr' target='#b31'>(Lapointe et al. 2019)</ns0:ref>, hurricanes <ns0:ref type='bibr' target='#b32'>(Lirman and Fong 1997)</ns0:ref>, sedimentation <ns0:ref type='bibr' target='#b13'>(Cunning et al. 2019)</ns0:ref>, and coral diseases <ns0:ref type='bibr' target='#b51'>(Richardson et al. 1998;</ns0:ref><ns0:ref type='bibr' target='#b46'>Precht et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b64'>Walton et al. 2018)</ns0:ref>.</ns0:p><ns0:p>A popular method of coral reef restoration is coral gardening, where coral stocks propagated through sequential fragmentation within in-water and ex situ nurseries are outplanted in large numbers onto depleted reefs <ns0:ref type='bibr' target='#b52'>(Rinkevich 2006)</ns0:ref>. Until recently, restoration programs based on the coral gardening methodology have focused primarily on branching taxa like Acropora due to their rapid growth rates, resilience to fragmentation, pruning vigor, and ease of outplanting <ns0:ref type='bibr' target='#b6'>(Bowden-Kerby 2008;</ns0:ref><ns0:ref type='bibr' target='#b62'>Shaish et al. 2010;</ns0:ref><ns0:ref type='bibr' target='#b33'>Lirman et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b61'>Schopmeyer et al. 2017)</ns0:ref>. While acroporids rapidly enhance the structural complexity of reefs, focusing restoration efforts on single taxa ignores the role that diversity plays in ecosystem function <ns0:ref type='bibr' target='#b7'>(Brandl et al. 2019</ns0:ref>) and makes restored communities susceptible to disturbances like diseases or storms that affect branching corals disproportionately. Thus, there is a need to expand our restoration toolbox to include multiple species with different morphologies and life histories <ns0:ref type='bibr' target='#b36'>(Lustic et al. 2020)</ns0:ref>. 
The use of massive corals for restoration was initially hindered by the slow growth rates associated with these taxa. However, recent developments in microfragmentation and reskinning techniques <ns0:ref type='bibr' target='#b16'>(Forsman et al. 2015)</ns0:ref> that accelerate the growth of massive corals have made it possible to use these reef-building taxa for restoration.</ns0:p><ns0:p>The microfragmentation process involves fragmenting corals with massive morphologies into small (< 5 cm²) ramets that consist mostly of living tissue and a limited amount of skeleton <ns0:ref type='bibr' target='#b45'>(Page et al. 2018)</ns0:ref>. These microfragments can be mounted onto various types of substrate (e.g., ceramic plugs, plastic cards, cement pucks) using glue or epoxy and allowed to grow by skirting over the attachment platform before being outplanted. Once fragmented, ramets of Pseudodiploria clivosa and Orbicella faveolata grew up to 48 cm² and 63 cm² per month, respectively <ns0:ref type='bibr' target='#b16'>(Forsman et al. 2015)</ns0:ref> and were thereby capable of achieving colony sizes within a few months that would otherwise take years to develop after natural recruitment. Moreover, a single parent colony can produce hundreds of ramets available for continued propagation and restoration.</ns0:p><ns0:p>The microfragmentation technique overcomes the slow-growth bottleneck, but methods for outplanting fragmented massive corals onto degraded reefs need to be developed and evaluated to maximize outplant survivorship and success. This is especially relevant in Florida and the Caribbean where the massive coral species used here have been severely impacted by the recent outbreak of stony coral tissue loss disease (SCTLD) <ns0:ref type='bibr' target='#b46'>(Precht et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b1'>Alvarez-Filip et al. 2019)</ns0:ref>. The present study is one of the first to record the survivorship of small fragments of four species of massive corals (O. faveolata, Montastraea cavernosa, P. clivosa, and P. strigosa) outplanted onto reefs in Southeast Florida, US. To measure the success of this technique, we: 1) documented survivorship and removal probability of fragments outplanted using different techniques, 2) monitored the impacts of fish predation on newly planted fragments, and 3) evaluated different outplanting techniques that may reduce the impacts of predation (i.e., coral fragment removal and mortality).</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Coral Fragmentation</ns0:head><ns0:p>Colonies used in both experiments were collected from a seawall at Fisher Island, Miami, Florida (25.76° N, 80.14° W; depth = 1.8 m). Each parent colony was cut into small fragments (average size = 4.2 ± 1.9 cm², mean ± SD) using a diamond band saw, and fragments were attached to ceramic plugs using super glue as described by <ns0:ref type='bibr' target='#b45'>Page et al. (2018)</ns0:ref>. After fragmentation, the height of the fragments ranged from 0.5-1.0 cm. The ceramic plugs with corals were placed on PVC frames and then fixed to coral trees <ns0:ref type='bibr' target='#b42'>(Nedimyer et al. 2011</ns0:ref>) at the University of Miami's in-water coral nursery (25.69° N, 80.09° W; depth = 9.4 m) where they were allowed to acclimate for 4-6 weeks before outplanting (Fig. <ns0:ref type='figure' target='#fig_1'>1A</ns0:ref>). After this recovery period, the fragments were strongly cemented (no corals were dislodged during the transport and outplanting steps) but had not fully skirted tissue onto the ceramic plugs. Due to funding constraints, the parent colonies used in this study were not formally genotyped. Nevertheless, fragments from every parent colony were represented in each reef and treatment and, thus, the results combine the survivorship of corals from three parent colonies per species in experiment one and five parent colonies in experiment two.</ns0:p></ns0:div>
<ns0:div><ns0:head>Outplanting Experiment One: Assessing Outplant Survivorship</ns0:head><ns0:p>The first outplanting experiment consisted of four coral species with massive or brain colony morphologies: O. faveolata (listed as threatened under the US Endangered Species Act), P. clivosa, P. strigosa, and M. cavernosa. Fragmented corals were glued onto ceramic plugs using polyurethane waterproof glue (Gorilla Glue) (Fig. <ns0:ref type='figure' target='#fig_1'>1A, B</ns0:ref>). The plugs with the corals were then mounted into cement pucks with 2-part epoxy putty (AllFix). Finally, these pucks were secured onto the reef using cement (1 part Portland cement, 0.1 part Silica fume), raising corals 3 cm above the substrate to limit sediment and algal interactions (Fig. <ns0:ref type='figure' target='#fig_1'>1C</ns0:ref>). Only corals with healthy (no discoloration or lesions) tissue were used. Corals were outplanted onto three reef sites in Miami, Florida, in June-July 2018: Reef 1 (25.70° N, 80.09° W; depth = 6.0 m), Reef 2 (25.68° N, 80.10° W; 7.5 m), and Reef 3 (25.83° N, 80.11° W; 6.4 m). These reefs have low topography and very low cover of stony corals (< 1%). Corals were collected and outplanted under Florida's Fish and Wildlife Commission Permit SAL-19-1794-SCRP. Corals were deployed within replicate grids (from 3x3 to 5x5 m) whose areas were determined based on substrate availability. Four such grids were deployed in Reef 1, 7 in Reef 2, and 6 in Reef 3. The corals were spaced 40-60 cm apart within each grid to maintain a consistent density of corals across replicate plots and grids were separated by at least 2 m. The coral outplants were placed at least 20 cm away from existing stony and soft corals, sponges, and the zoanthid Palythoa. In total, 53 M. cavernosa, 123 O. faveolata, 80 P. clivosa, and 41 P. strigosa were outplanted among the 3 reefs, with all 4 species represented in each plot.</ns0:p><ns0:p>For each outplant in this experiment, we documented presence/absence of the coral fragment (Fig. <ns0:ref type='figure' target='#fig_1'>1D</ns0:ref>) and prevalence (i.e., proportion of corals by species with signs of predation) of tissue mortality caused by predation on remaining fragments (e.g., missing polyps, feeding scars) (Fig. <ns0:ref type='figure' target='#fig_1'>1E</ns0:ref>, F) at one week, one month, and six months after deployment. The proportion of corals physically removed by fish predators from their outplanting platforms was compared among coral species, reefs, and time since outplanting using a Generalized Linear Model (GLM) following guidance by <ns0:ref type='bibr' target='#b65'>Warton and Hui (2011)</ns0:ref>. Here, we used a GLM with a binomial distribution and a logit link function to model the probability of outplant removal. The model incorporated species, reef, and monitoring period (time) as fixed variables. Residuals diagnostics plots were used to test model assumptions, and D-squared values, which indicate the amount of deviance accounted for by the models (i.e., analogous to R 2 ), were used to evaluate the goodness of fit of the selected models. Tukey post hoc tests were used to evaluate pairwise differences among the levels of the categorical variables in the models (species, reefs, time). All statistical analyses were performed in R v3.5.3 (R Core Team 2017).</ns0:p></ns0:div>
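<ns0:p>For illustration, the model described above could be specified in R roughly as follows. This is a hedged sketch rather than the authors' actual script: the data frame outplants and its columns (removed, species, reef, time) are hypothetical names, and the emmeans package is only one of several ways to obtain Tukey-adjusted pairwise contrasts for a GLM.

# Binomial GLM (logit link) for the probability of outplant removal,
# with coral species, reef, and monitoring period as fixed effects
fit <- glm(removed ~ species + reef + time,
           family = binomial(link = "logit"), data = outplants)
plot(fit)                                   # residual diagnostic plots
anova(fit, test = "Chisq")                  # chi-square tests for the fixed effects
d2 <- (fit$null.deviance - fit$deviance) / fit$null.deviance   # deviance explained (D-squared)
library(emmeans)
pairs(emmeans(fit, ~ species), adjust = "tukey")   # pairwise differences among species</ns0:p>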
<ns0:div><ns0:head>Outplanting Experiment Two: Reducing Predation Impacts</ns0:head><ns0:p>Based on the high level of predation observed during Experiment One, a second study was designed to determine if predation impacts could be minimized through modifications to the outplanting method. This experiment tested the role of the skeletal profile (i.e., the height of the coral fragment) and attachment medium (glue vs. cement) on coral removal and predation rates. This experiment used coral fragments (average size = 2.8 ± 6.5 cm² (mean ± SD)) from five P. clivosa colonies. A high level of predation and abundance of fish predators were recorded at Reef 1 in the first experiment, so this location was chosen as the study site for the second experiment.</ns0:p><ns0:p>Compared to fragments with healed, skirting edges, corals with exposed skeletal walls may provide easier access to the coral tissue and encourage growth of endolithic or turf algae that could attract grazing by fish. Hence, we hypothesized that the exposed skeletal profile (height) and the presence/absence of bare skeleton on the sides of a fragment would influence predation patterns (Fig. <ns0:ref type='figure' target='#fig_2'>2A-B</ns0:ref>). We further hypothesized that the rate of the physical removal of outplanted fragments would be related to the attachment method. To test this, we developed a triangular cement platform (cement 'pizza'; Fig. <ns0:ref type='figure' target='#fig_2'>2C-D</ns0:ref>) that used cement (in lieu of glue) to secure corals and allowed the height of the fragments to be adjusted by varying the amount of cement used. Corals were secured to the cement pizzas as four treatments: 1) 'raised exposed', with fragments placed on top of the cement so that vertical walls (devoid of tissue) protruded from the cement treatment (Fig. <ns0:ref type='figure' target='#fig_2'>2E</ns0:ref>);</ns0:p><ns0:p>2) 'raised covered', with fragments placed on top of the cement so that vertical walls (covered with tissue) protruded from the cement treatment (Fig. <ns0:ref type='figure' target='#fig_2'>2F</ns0:ref>);</ns0:p><ns0:p>3) 'flushed', with fragments embedded into the cement so that the fragment was level with the cement platform and only the surface of the coral was visible (Fig. <ns0:ref type='figure' target='#fig_2'>2G</ns0:ref>); 4) 'embedded', with fragments embedded into the cement so that the coral was positioned 1 cm below the surface of the cement to prevent access by fish (Fig. <ns0:ref type='figure' target='#fig_2'>2H</ns0:ref>).</ns0:p><ns0:p>Individual coral fragments were attached in groups of three (triads) onto each cement pizza to foster coral fusion and faster colony development as described in <ns0:ref type='bibr'>Page et al. (2018)</ns0:ref>. The pizzas were cemented individually onto the reef pavement within plots (n = 120 corals placed onto 40 pizzas). Each plot consisted of 10 pizzas (n = 3-4 pizzas per treatment), with each pizza spaced 30-50 cm apart. Plots were separated by 1 m. In addition to using the cement pizzas, coral fragments were mounted onto ceramic plugs using glue and outplanted directly onto the reef (as used in <ns0:ref type='bibr' target='#b45'>Page et al. 2018)</ns0:ref> to serve as controls (Fig. <ns0:ref type='figure' target='#fig_1'>1B, D</ns0:ref>). All fragments used as controls had healed skeletal walls covered in tissue.
Control corals mounted on plugs were grouped as triads with spacing between corals similar to the pizzas, and deployed directly onto the substrate within the same plots as the experimental corals using cement. Each plot received 5 control triads (n = 20 triads, 60 corals). All corals were outplanted in August 2019 and coral condition surveys were conducted immediately after deployment and again after one and three weeks to document the presence/absence of coral fragments and evidence of tissue mortality caused by predation. The average percent tissue removal was estimated visually for each coral using 10%-classification bins. Lastly, the proportion of the tissue area covered by sediments for each coral outplant was visually evaluated at one and three weeks using the methods just described. Values for these two metrics were averaged within pizzas/triads and compared among treatments using ANOVA.</ns0:p></ns0:div>
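<ns0:p>As a rough illustration only (object names such as expt2, pizza_id, treatment, and pct_removed are hypothetical placeholders, not taken from the authors' scripts), the within-pizza averaging and treatment comparison could be coded in R as:

# Average percent tissue removal within each pizza/triad, then compare treatments
pizza_means <- aggregate(pct_removed ~ pizza_id + treatment, data = expt2, FUN = mean)
fit_aov <- aov(pct_removed ~ treatment, data = pizza_means)
summary(fit_aov)      # overall treatment effect
TukeyHSD(fit_aov)     # Tukey-Kramer pairwise comparisons among treatments</ns0:p>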
<ns0:div><ns0:head>Coral Cover, Fish Abundance, and Predation Surveys</ns0:head><ns0:p>The percent cover of stony corals at the three reefs selected was calculated using the point-count method as described by <ns0:ref type='bibr' target='#b34'>Lirman et al. (2007)</ns0:ref>. At each reef, three plots (10 m in diameter) were haphazardly selected in the vicinity of where the coral fragments were deployed. Within each plot, 25 images were haphazardly collected at a distance of 50 cm from the bottom. The cover of stony coral was calculated using 25 random points overlaid onto each image using the Coral Point Count with Excel extension (CPCe) software <ns0:ref type='bibr' target='#b30'>(Kohler and Gill 2006)</ns0:ref>. The proportion of random points placed over stony corals was divided by the total number of points (n = 25 per image) to calculate the proportional cover of corals. Mean percent coral cover was calculated for each plot ( n = 3 plots per reef) and averaged for each reef. Fish surveys to compare fish abundance at coral outplant sites were conducted as part of experiment one at each reef site using the Reef Visual Census (RVC) method <ns0:ref type='bibr' target='#b4'>(Bohnsack and Bannerot 1986)</ns0:ref>. Using this method, the surveyor recorded the abundance of fish taxa from a stationary point at the center of the study site within a cylindrical 15-m diameter survey area, extending from the substrate to the surface of the water column. Each survey was completed in 15 min and all fishes observed were identified to species level. Between May 2018 and February 2019, we completed 13 RVC surveys at Reef 1, 10 surveys at Reef 2, and 14 surveys at Reef 3. During the 10-month monitoring period, all three reefs were surveyed within one month, with Reefs 1 and 3 surveyed opportunistically additional times. All surveys were completed by a single, expert observer. The mean abundance of all corallivorous or predatory fish <ns0:ref type='bibr' target='#b57'>(Robertson 1970;</ns0:ref><ns0:ref type='bibr' target='#b48'>Randall 1974</ns0:ref>) was compared among reefs using ANOVA.</ns0:p><ns0:p>In addition to the visual fish surveys conducted during the coral deployment for experiment one, we deployed a video camera immediately after each coral deployment focused on the newly outplanted corals to document the fish species observed interacting with the corals after the divers had left the plot for both experiments. Each video was viewed and a list of species interacting with the outplants was compiled. The duration of these deployments was variable (1-7 hrs) based on the time spent by the divers at the site during deployment and was only intended to compile a list of fish species approaching and/or biting the corals.</ns0:p></ns0:div>
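<ns0:p>For illustration, the cover and fish-abundance comparisons described above could be computed along the following lines in R; the object names (points_on_coral, plot_id, rvc, corallivore_count, reef) are hypothetical placeholders, and CPCe itself performs the point-overlay step on the images.

# Percent coral cover per image: points over stony coral out of 25 random points
cover_per_image <- points_on_coral / 25 * 100
plot_means <- tapply(cover_per_image, plot_id, mean)   # average images within each plot

# Compare corallivore counts per RVC survey among the three reefs
fit_fish <- aov(corallivore_count ~ reef, data = rvc)
summary(fit_fish)
TukeyHSD(fit_fish)    # Tukey-Kramer pairwise reef differences</ns0:p>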
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Outplanting Experiment One: Assessing Predation Impacts</ns0:head><ns0:p>Two types of fish-predation impacts were documented: 1) physical removal of outplanted fragments, and 2) tissue removal from corals that remained attached to outplanting platforms. The probability of outplant removal was explained by coral species, reef, and time as fixed effects (GLM χ²-test, p < 0.05) and the model explained 67% of the deviance (Fig. <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>, Table <ns0:ref type='table'>S1</ns0:ref>, S2). One week after deployment, 8% of M. cavernosa (n = 53 fragments outplanted), 12% of O. faveolata (n = 123), 23% of P. clivosa (n = 80), and 27% of P. strigosa (n = 41) fragments were physically removed from the outplant platforms by fish (all sites combined) (Fig. <ns0:ref type='figure' target='#fig_4'>4A</ns0:ref>). The ranking of the probability of removal for the four species was consistent across reefs and time (Fig. <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>). There was a minor, but significant increase in the probability of removal over time (Fig. <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>, Table <ns0:ref type='table'>S2</ns0:ref>). The majority of removal occurred during the first week, but corals continued to be removed over time, with an additional 1.9% of M. cavernosa, 6.6% of O. faveolata, 7.1% of P. clivosa, and 7.0% of P. strigosa removed between one and six months after deployment (Fig. <ns0:ref type='figure' target='#fig_4'>4A</ns0:ref>).</ns0:p><ns0:p>The species with the highest prevalence of fish bites one week after deployment were the two Pseudodiploria species, followed by O. faveolata. M. cavernosa was the only species that did not show any signs of predation on remaining corals after one week (Fig. <ns0:ref type='figure' target='#fig_4'>4B</ns0:ref>). Similar to the rate of removal, predation prevalence slowed over time, as only an average of 0.3% of surviving corals of all four species combined showed fish bites at the one-month survey compared to 9.2% after the first week. After six months, no signs of predation were observed for surviving M. cavernosa and P. strigosa, and < 1% of colonies of the remaining two species showed evidence of fish bites (Fig. <ns0:ref type='figure' target='#fig_4'>4B</ns0:ref>).</ns0:p><ns0:p>Cover of stony corals recorded at the three outplant sites was very low. Mean percent coral cover was 0.85 (± 1.0) for Reef 1, 0.8 (± 0.4) for Reef 2, and 0.04 (± 0.07) for Reef 3. The prevalence of fish predation, including complete fragment removal and fish bites, was highest at Reefs 1 and 2, which coincided with the significantly greater abundance of corallivorous fish taxa recorded at these two sites compared to Reef 3 (ANOVA; Tukey-Kramer HSD test; p < 0.05) (Fig. <ns0:ref type='figure' target='#fig_4'>4C, D</ns0:ref>). The average number of fish observed interacting with the coral outplants (i.e., parrotfishes, damselfishes, butterflyfishes, surgeonfishes, triggerfishes) was 2.7 ± 6.0 individuals survey⁻¹ (mean ± SD) at Reef 1, 2.0 ± 4.7 at Reef 2, and only 0.8 ± 2.3 at Reef 3 (Fig. <ns0:ref type='figure' target='#fig_4'>4B</ns0:ref>). Complete coral removal was 17% at Reef 1, 26% at Reef 2, and only 7% at Reef 3 after one week (Fig. <ns0:ref type='figure' target='#fig_4'>4C</ns0:ref>).
Similarly, signs of fish predation were higher among the remaining corals at Reef 1 (13.7% corals with evidence of predation) and Reef 2 (13.1%), while no evidence of fish bites was observed at Reef 3 after one week (Fig. <ns0:ref type='figure' target='#fig_4'>4C</ns0:ref>). The fish taxa observed biting coral fragments included butterflyfishes, parrotfishes, and damselfishes (Table <ns0:ref type='table'>S3</ns0:ref>). Wrasses and surgeonfishes were also observed approaching the coral outplants but not necessarily biting the coral tissue. While no direct evidence of predation by triggerfishes (a known coral predator) was captured, this taxon was seen in the vicinity of outplants in the video collected in experiment two. Grunts, surgeonfish, and wrasses were the most consistently sighted fish across all 3 sites and were recorded during all 37 surveys. Parrotfishes and damselfishes were also regularly observed at all three outplant locations, having been recorded as being present within 34 and 35, respectively, out of the 37 surveys conducted. Chaetodontidae were recorded within 27 of the 37 surveys, and were present during all 13 surveys at Reef 1, 8 out of 10 surveys conducted at Reef 2, but only 6 out of 14 surveys completed at Reef 3. It is important to note that no evidence of fish removing the corals from their outplanting platforms was captured in our video surveys. Nevertheless, the removal by fish was considered the only driver of the missing corals as corals did not detach during transport nor during their time at the nursery where no fish predators are observed (pers. obs.)</ns0:p><ns0:p>In addition to fragment removal and partial tissue mortality caused by predation, complete coral mortality was observed. After one week, 4% of P. clivosa, 5% of O. faveolata, 9% of M. cavernosa, and 17% of P. strigosa fragments that remained attached to the outplant platforms showed 100% tissue mortality (all sites combined). After six months, the cumulative prevalence of complete mortality was 4% for P. clivosa, 16% for M. cavernosa, 27% for P. strigosa, and 41% for O. faveolata fragments. When removal and complete tissue mortality were combined for all corals and sites combined, 26% of corals died after one week, 30% of corals died after one month, and 51% of corals died within six months of outplanting. Overall, M. cavernosa suffered 26% losses (removal + 100% tissue mortality), followed by P. clivosa (40%), O. faveolata (62%), and, finally, by P. strigosa (73%). While it was not possible to ascertain the cause of mortality (besides that visibly caused by predation) among the coral outplants, no evidence of active stony coral tissue loss disease (SCTLD), which affected the reefs of South Florida in recent years, was observed on outplanted or wild corals at any of the sites during either experiment.</ns0:p></ns0:div>
<ns0:div><ns0:head>Outplanting Experiment Two: Reducing Predation Impacts</ns0:head><ns0:p>The mode of attachment of outplanted corals influenced removal patterns. After one week, 14% of the corals fixed to ceramic plugs using glue were removed, while none of the corals outplanted using cement within the pizzas were missing. After three weeks, still none of the corals deployed on pizzas were removed, whereas 54% of the corals outplanted using plugs were missing. While none of the corals in any of the four cement pizza treatments were removed, fish predation impacts were significantly affected by coral treatment within the cement bases (ANOVA, p < 0.05) (Fig. <ns0:ref type='figure' target='#fig_5'>5</ns0:ref>). No significant influence of plot was documented and data were thus grouped for all plots. After three weeks, the average percentage of tissue removed by predation was significantly lowest for corals within the 'embedded' treatment and highest for corals placed within the 'raised exposed' treatment and corals outplanted using plugs (Tukey-Kramer HSD test; p < 0.05) (Fig. <ns0:ref type='figure' target='#fig_5'>5</ns0:ref>). No significant differences were found between corals in the 'raised covered' and 'flushed' treatments (Fig. <ns0:ref type='figure' target='#fig_5'>5</ns0:ref>). Predation impacts were lowest among embedded corals, but only corals within this treatment experienced sediment accumulation on the surface of the colony. For corals within the embedded treatment, the average of the total surface area of the coral outplants covered by sediments was 3.1% ± 2.4 (mean ± SD) after one week and 3.8% ± 3.5 (mean ± SD) after three weeks. Neither controls outplanted using plugs nor corals within the other three pizza treatments accumulated sediments on the coral surfaces.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The use of fragmented massive corals expands the number of coral species available for reef restoration beyond the initial, decade-long focus on branching corals. Massive corals are key reef-building taxa that have experienced accelerated losses in the past few years due to the stony coral tissue loss disease (SCTLD) epidemic that was first detected in Florida in 2014 <ns0:ref type='bibr' target='#b46'>(Precht et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b64'>Walton et al. 2018)</ns0:ref> and has now been documented in several locations in the Caribbean <ns0:ref type='bibr' target='#b1'>(Alvarez-Filip et al. 2019)</ns0:ref>. The impacts of SCTLD, added to the historical declines in these taxa, has created a need to move from single-taxa restoration to a community-based approach that includes corals with different life histories and disturbance responses <ns0:ref type='bibr' target='#b36'>(Lustic et al. 2020)</ns0:ref>. While massive corals can be successfully propagated both in situ and ex situ <ns0:ref type='bibr' target='#b2'>(Becker and Mueller 2001;</ns0:ref><ns0:ref type='bibr' target='#b16'>Forsman et al. 2015;</ns0:ref><ns0:ref type='bibr' target='#b45'>Page et al. 2018)</ns0:ref>, our study identified a significant bottleneck in restoration success caused by fish predation on newly outplanted fragments. In our study, 8-27% of fragments from four species (O. faveolata, M. cavernosa, P. clivosa, P. strigosa) outplanted onto three reefs in Miami, Florida, US, were removed by fish within one week. A prior study from Florida also documented large predation impacts on M. cavernosa and O. faveolata, with 45% and 22% of fragments affected by predation respectively within the first week <ns0:ref type='bibr' target='#b45'>(Page et al. 2018)</ns0:ref>. With coral cover being so low presently on Florida reefs (< 1% coral cover on the reefs used in this study), it is likely that fish predation is being concentrated on outplanted corals, posing concerns for restoration in depleted systems until a critical abundance threshold is reached <ns0:ref type='bibr' target='#b60'>(Schopmeyer and Lirman 2015)</ns0:ref>. Supporting this concept, <ns0:ref type='bibr' target='#b25'>Jayewardene et al. (2009)</ns0:ref> found lower prevalence of fish bites on coral nubbins in plots with higher coral cover. The concentration of predation on surviving corals after major declines in abundance due to a hurricane was previously documented by <ns0:ref type='bibr' target='#b29'>Knowlton et al. (1990)</ns0:ref>.</ns0:p><ns0:p>Previously, research efforts have focused mainly on the impacts of reef fishes on the abundance and distribution of macroalgae, so our understanding of their direct effects on stony corals is comparatively more limited. Only 10 families of fishes have been reported to consume coral polyps and even fewer taxa classified as obligate corallivores <ns0:ref type='bibr' target='#b57'>(Robertson 1970;</ns0:ref><ns0:ref type='bibr' target='#b48'>Randall 1974)</ns0:ref>. Species within the Chaetodontidae (butterflyfishes), Balistidae (triggerfishes), and Tetraodontidae (pufferfishes) families are among the most common corallivorous fishes <ns0:ref type='bibr' target='#b21'>(Hixon 1997)</ns0:ref>. In this study, butterflyfish, wrasses, parrotfish, surgeonfish, and damselfish consumed or interacted with newly outplanted corals. 
Except for butterflyfishes that were observed biting coral tissue, it remained unclear from our visual and video observations whether fragments were physically removed by fish grazing on algae growing on exposed coral skeletons or direct consumption of coral tissue.</ns0:p><ns0:p>The high impacts recorded here on coral outplants may be the result of consumptive or territorial activity (or a combination of both). Both butterflyfishes <ns0:ref type='bibr' target='#b50'>(Reese 1989;</ns0:ref><ns0:ref type='bibr'>Roberts and Ormand 1992)</ns0:ref> and the adult terminal phase male stoplight parrotfish Sparisoma viride <ns0:ref type='bibr' target='#b10'>(Bruggemann et al. 1994;</ns0:ref><ns0:ref type='bibr' target='#b9'>Bruckner et al. 2000)</ns0:ref> have been observed to bite corals within their territories, which supports that certain fish species may selectively target new coral outplants as soon as they appear within their territories. Predation impacts on coral outplants were highest within the first week and tapered off with time, declining to <1% of remaining corals removed after six months, suggesting that habituation of the fish fauna to new coral 'recruits' may play a role. Similar patterns of temporal predation on coral outplants were reported in Guam, where predation impacts from butterflyfishes and triggerfishes were high within one week of deployment <ns0:ref type='bibr' target='#b43'>(Neudecker 1979)</ns0:ref>. Whether the decline in coral removal by fish was a result of corals reaching a size refuge as they grew or due to habituation of the fish to the presence of these corals could not be ascertained in this study. The territories of potential fish predators like parrotfishes were not assessed in this study and we were thus unable to differentiate between the impacts of these two factors or their interaction. Similarly, the high level of predation may have been caused by the relatively close spacing of outplanted corals (30-60 cm) so that once a prey item was detected, detection of additional corals within a grid was autocorrelated. Thus, the potential role of fish territoriality and spacing of corals on impacts on newly outplanted corals needs further investigation, especially considering the high impacts recorded here that represent a drain on restoration resources.</ns0:p><ns0:p>In our study, impacts of predation varied by species, with P. clivosa and P. strigosa experiencing the highest levels of mortality. While the potential reasons for the differences in species susceptibility to predation were not measured here, factors such as palatability, nutritional content, or skeletal characteristics may play a role and need to be investigated further. Nevertheless, prey selection based on coral species has been previously documented for Chaetodon unimaculatus that showed a preference for feeding on Montipora verrucosa in Hawaii <ns0:ref type='bibr' target='#b11'>(Cox 1986)</ns0:ref>, by Balistapus undulatus that targeted Pocillopora damicornis over Seriatopora hystrix <ns0:ref type='bibr' target='#b19'>(Gibbs and Hay 2015)</ns0:ref>, and by butterflyfish that target Acropora over other coral taxa <ns0:ref type='bibr' target='#b3'>(Berumen 2005)</ns0:ref>. Similarly, wild and outplanted A. cervicornis and O. annularis were targeted by the territorial three-spot damselfish <ns0:ref type='bibr' target='#b27'>(Kaufman 1977;</ns0:ref><ns0:ref type='bibr' target='#b29'>Knowlton et al.
1990;</ns0:ref><ns0:ref type='bibr' target='#b60'>Schopmeyer and Lirman 2015)</ns0:ref>.</ns0:p><ns0:p>Fish predation impacts varied by reef and were associated with the abundance of fish taxa known to consume coral tissue. Differences in predation impacts on outplanted coral fragments between sites were also documented by <ns0:ref type='bibr'>Page et al. (2018)</ns0:ref> in the Florida Keys. Similar to our findings, <ns0:ref type='bibr' target='#b47'>Quimpo et al. (2019)</ns0:ref> suggested that coral outplants were more likely to be detached when outplanted onto reefs with higher biomass of herbivore and corallivore fishes in the Philippines. Additionally, <ns0:ref type='bibr' target='#b47'>Quimpo et al. (2019)</ns0:ref> reported that incidental grazing by herbivorous fish, particularly the parrotfish Chlorurus spilurus, was the main source of coral detachment, but that direct predation by corallivorous fishes only minimally affected coral outplants. Incidental impacts of herbivorous fish removing algae from nursery ropes were also observed in the Seychelles by Frias-Torres and van de Geer <ns0:ref type='bibr'>(2015)</ns0:ref>. A simple response to these patterns would be to avoid outplanting on reefs with high abundance of these taxa, but it is important to note that parrotfishes and surgeonfishes (observed here to target outplanted corals) are also key grazers that are essential to maintain a low abundance of macroalgae on reefs <ns0:ref type='bibr' target='#b41'>(Mumby et al. 2006</ns0:ref>) and in coral nursery settings <ns0:ref type='bibr' target='#b28'>(Knoester et al. 2019)</ns0:ref>. Best practices for the selection of outplanting sites developed for Acropora suggest that low abundance of macroalgae is a key attribute of an ideal restoration site <ns0:ref type='bibr' target='#b26'>(Johnson et al. 2011</ns0:ref>). In addition to reducing algal overgrowth, damselfish, triggerfishes, puffers, and other corallivorous fish have been documented to limit impacts of corallivorous invertebrates such as the crown-of-thorns starfish (Acanthaster planci) and Coralliophila snails <ns0:ref type='bibr' target='#b44'>(Ormond et al. 1973;</ns0:ref><ns0:ref type='bibr' target='#b60'>Schopmeyer and Lirman 2015)</ns0:ref>. Hence, avoiding reefs with a high abundance of grazers that also target corals is not a viable option as it may lead to algal overgrowth and higher impacts by non-fish corallivores. There is, thus, a clear need to develop efficient outplanting methods to minimize the impacts of fish predation on reefs with high abundances of fish herbivores.</ns0:p><ns0:p>While fish impacts were the predominant source of physical removal of fragments in the present study, remaining corals experienced tissue losses due to fish predation and other unknown factors resulting in the mortality of > 30% of remaining corals after six months. Fish predation has also been shown to reduce growth rates <ns0:ref type='bibr' target='#b39'>(Meesters et al. 1994)</ns0:ref>, decrease fecundity <ns0:ref type='bibr' target='#b63'>(Szmant-Froelich 1985;</ns0:ref><ns0:ref type='bibr' target='#b55'>Rinkevich and Loya 1989)</ns0:ref>, and increase susceptibility to diseases <ns0:ref type='bibr' target='#b66'>(Williams and Miller 2005;</ns0:ref><ns0:ref type='bibr' target='#b0'>Aeby and Santavy 2006)</ns0:ref>. Mortality of our outplanted corals was much higher than the average mortality (14.8%) reported for A.
cervicornis one year after outplanting <ns0:ref type='bibr' target='#b61'>(Schopmeyer et al. 2017)</ns0:ref>, highlighting a bottleneck that needs to be addressed to optimize the long-term success of using fragmented massive corals for restoration. Lower fragment removal rates and reduced prevalence of fish predation were related to the attachment method (glue vs. cement). None of the fragments attached by cement were removed by fish predators, showing that cement provides a stronger hold for the outplanted corals than the commonly used glue. Higher detachment of coral fragments attached with glue was also documented by <ns0:ref type='bibr' target='#b15'>Dizon et al. (2008)</ns0:ref>. Moreover, corals allowed to recover tissue over their exposed skeletal walls prior to outplanting ('raised covered' treatment) had less predation than corals with exposed skeletal walls ('raised exposed' treatment). Colony edges of exposed skeleton can be preferentially targeted by parrotfish feeding on turf or endolithic algae, resulting in the higher prevalence of fish bites recorded <ns0:ref type='bibr' target='#b8'>(Bruckner and Bruckner 1998)</ns0:ref>. Thus, allowing fragmented corals to skirt over the exposed skeleton and grow onto the attachment platform would be the desired step before outplanting. This approach is used in the microfragmentation method described by <ns0:ref type='bibr' target='#b45'>Page et al. (2018)</ns0:ref> where small microfragments composed mainly of tissue with limited skeleton are grown ex situ until the coral tissue reaches the edges of the ceramic plug, resulting in larger coral outplants without exposed skeletal walls and low height, thereby reducing predation risk. This would increase the time a fragment remains within nurseries but would also limit predation impacts. Finally, embedding corals into the cement platform ('embedded' treatment) mimics this process by lowering coral height and the amount of skeletal wall exposed, and reduced predation prevalence to < 1% of corals. While placing corals embedded into cement may be an option for limiting removal and predation mortality, embedded corals had > 3% of the coral tissue covered by sediments, highlighting a potential tradeoff between reduced predation and sediment impacts that needs to be further evaluated.</ns0:p><ns0:p>Lastly, one of the factors that may have resulted in the high levels of removal and predation recorded here is the size of the coral fragments used in this study. Prior research has shown a relationship between the size of the fragments or colonies used for restoration and their survivorship and susceptibility to fish predation. For example, a size refuge for coral nubbins was documented by <ns0:ref type='bibr' target='#b12'>Christiansen et al. (2009)</ns0:ref>, <ns0:ref type='bibr' target='#b25'>Jayewardene et al. (2009)</ns0:ref>, <ns0:ref type='bibr' target='#b19'>Gibbs and Hay (2015)</ns0:ref>, and <ns0:ref type='bibr' target='#b47'>Quimpo et al. (2019)</ns0:ref> in field or laboratory experiments. Moreover, in a recent study by <ns0:ref type='bibr' target='#b36'>Lustic et al. (2020)</ns0:ref>, medium-sized (40-130 cm²) colonies of Orbicella faveolata and Montastraea cavernosa outplanted onto a reef in the Florida Keys showed no significant impacts of fish predation, highlighting a potential size threshold for predation impacts.
Thus, the impacts from fish predation may be mitigated in Florida by outplanting larger fragments or, as described by <ns0:ref type='bibr' target='#b45'>Page et al. (2018)</ns0:ref>, by deploying smaller corals together as tight clusters to foster fusion and function as a larger skeletal unit. There is a trade-off between the number of corals derived from a single parent and the size of the fragments produced, so controlled experiments on the role of size on predation susceptibility are needed before the optimum size of massive coral outplants can be established, especially in habitats with high levels of fish predation.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>As coral declines continue worldwide, active reef restoration has emerged as a powerful management alternative to slow down and eventually help reverse these declines (Natl. Acad. Sci. 2019). As the number of techniques and species used in restoration increases beyond the established success of branching corals, practitioners and scientists are collaborating to develop expanded guidelines and best practices. These are critically needed to broaden the footprint of restoration while keeping restoration costs down. Our study, based on the restoration of small fragments (< 5 cm²) of four species of massive corals, identified predation by fish as a major bottleneck in restoration success as the activities of a subset of fish taxa (butterflyfishes, wrasses, parrotfishes, surgeonfishes, damselfishes) caused both the high rates of removal of fragments and tissue mortality on remaining fragments. Thus, there is a need to develop methods to reduce these predatory impacts for massive-coral fragments for restoration to be an effective tool in Florida. Here, we identified fragment attachment method (cement performed better than glue) and coral placement (fragments performed better with tissue covering the skeletal walls, and deployed either flushed or embedded within outplanting platforms) as factors that can be used to reduce impacts. We also identified the need to conduct additional experiments to discern the interactive role of fish abundance and territoriality on fragment performance and to explore the role of fragment size and species palatability on survivorship. We believe that the recent development and adoption of microfragmentation as a technique for massive coral propagation will provide the corals needed to develop more efficient outplanting methods and circumvent the fish predation bottleneck identified here in the near future, allowing for the successful restoration of these keystone species.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5</ns0:figDesc></ns0:figure>
</ns0:body>
" | "Dear Editor,
We thank you for the opportunity to complete a second revision of our manuscript. Our revisions and responses to the reviewers appear below in bold type.
Reviewer 1:
- Line 119. Thank you for the additional details around the deployment design for experiment 1. However, please clarify why there are two quadrat sizes indicated (3x3 and 5x5) and how they were distributed among reef sites. I recognize that the size was ‘based on substrate availability’ but I am concerned that differences in spatial distribution among outplants within grids could significantly impact the predation patterns observed. A 25 m² grid is nearly 3 times the area of a 9 m² grid and thus the density of corals could be much different among plots. In line 346 of the discussion, you mention that high predation may have been caused by close spacing of the corals. So, was the spacing of corals within plots consistent across all plots and reef sites (I recognize that you say 40-60 cm spacing)? If densities were consistent among plots, an easy solution would be to add a sentence to the effect of: “corals were deployed within replicate square grids (from 3x3 to 5x5 m) whose areas were determined based on substrate availability. The number of coral outplants placed within each grid ranged from X to X to maintain a consistent density of corals across replicate plots”.
The text in the methods was modified as suggested:
“Corals were deployed within replicate grids (from 3x3 to 5x5 m) whose areas were determined based on substrate availability. Four such grids were deployed in Reef 1, 7 in Reef 2, and 6 in Reef 3. The corals were spaced 40-60 cm apart within each grid to maintain a consistent density of corals across replicate plots and grids were separated by at least 2 m. The coral outplants were placed at least 20 cm away from existing stony and soft corals, sponges, and the zoanthid Palythoa. In total, 53 M. cavernosa, 123 O. faveolata, 80 P. clivosa, and 41 P. strigosa were outplanted among the 3 reefs, with all 4 species represented in each plot.
- Line 172: The experimental design is still somewhat unclear for experiment 2. Firstly, if each plot had 10 pizzas total (and 3-4 pizzas per treatment), shouldn't there be at least 12 pizzas per plot given that there are 4 treatments? The numbers don't seem to add up. Secondly, please indicate the size and density (see comment above) of pizzas within each plot, and explicitly state the number of replicate plots (I presume 4 give the sample sizes reported?). Thirdly, I suggest moving lines 177-182 (traids) right after 170 so the 4 experimental treatments can be compared with the control. Then you can discuss how many of each treatment (& control) pizzas/triads were placed in each plot. Lastly, if the pizzas within plots were spaced 30-50 cm apart but plots were only separated by only 1 m, how are the plots independent replicates? Can you provide any reference or justification for the spatial arrangement of the experimental design?
While I appreciate the reviewer’s request for additional information, we believe that the description as it stands accurately describes the layout. We deployed pizzas in sets of 10 per plot, which means we could not do 3 per treatment per plot or we would have needed 48 pizzas and we did not have enough corals. Thus, we state that we deployed 3-4 pizzas per treatment per plot for a total of 40 pizzas.
We do not have a reference to justify the spacing among plots and we have no data on territorial ranges of the fish feeding on the corals. The goal was to look at impacts based on coral treatment and not explore spatial patterns of predation. The lack of significant plot effects allowed us to group all data together, effectively removing plot influences.
The following sentence was added to state this:
“No significant influence of “plot” was documented and data were thus grouped for all plots.
- Line 189: You state that % tissue covered by sediment and % tissue removal were averaged within pizzas and then compared among treatments using ANOVA. But what about the removal data? Were those also averaged within pizzas, or were they combined across all replicate pizzas within a plot? Was replicate pizza or replicate plot used as the experimental unit in the data analysis? More details are needed.
No corals were removed from the pizzas so no within-pizza averaging was required.
Also, please see prior reply regarding plots
- Line 205: Corals were monitored at 1 wk, 1 mo and 6 mo. But there are many more than 3 RVC surveys on each reef and the number of surveys differed among reefs? Please clarify how the RVC surveys relate to the coral monitoring, how many replicate surveys per time per reef, how many replicate temporal surveys, etc. were undertaken. It is difficult to see how the survey times/dates relate to the coral restoration experimental timeline. The replicate # surveys, and when they were undertaken need to be clearly linked with exp 1. Secondly, given that the sites had different numbers of surveys, should only surveys conducted at all three sites within the same month be used in the analyses? Could there be seasonal variability in the fish communities observed?
The fish data are used here to explore differences in fish community among reefs and not to statistically relate to predation patterns. Thus, we state:
“The prevalence of fish predation, including complete fragment removal and fish bites, was highest at Reefs 1 and 2, which coincided with the significantly greater abundance of corallivorous fish taxa recorded at these two sites compared to Reef 3 (ANOVA; Tukey-Kramer HSD test; p = < 0.05) (Fig. 4C, D). “
The fish survey data spanned the period before the outplanting of corals for Exp 1 and extended beyond the completion of this experiment, as stated in the methods. While fish abundance can vary in space and time, we decided to group all data together to give an overall assessment of fish abundance on these reefs. All reefs were surveyed within a month of each other at every survey interval, as stated:
“Between May 2018 and February 2019, we completed 13 RVC surveys at Reef 1, 10 surveys at Reef 2, and 14 surveys at Reef 3. During the 10-month monitoring period, all three reefs were surveyed within one month, with Reefs 1 and 3 surveyed opportunistically additional times. “
We did not intend to relate fish seasonal abundance to coral impacts and most of the impacts happened within the first month after outplanting. Since we do not relate fish and coral data statistically we do not believe any further changes or analyses are warranted.
General comments:
- With respect to organisation/flow, the methods for the “coral cover, fish abundance and predation surveys” subsection still seemed out of place. It wasn’t clear that the RVC data were used only for experiment 1 but that the video surveys were used for both experiments. At the very least, an introductory sentence in this section is needed to provide context to the data and to explain which experiments each data set was supporting (consider moving line 200 up to the start of the paragraph and expanding it). Also, since the fish survey data are reported in the results along with experiment 1 results, perhaps they should be presented that way in the methods, so that the methods and results are presented in parallel?
In lines 214-220, we added text to state that fish survey data were conducted during Exp 1 and that camera deployments were done during both experiments:
“In addition to the visual fish surveys conducted during the coral deployment for experiment one, we deployed a video camera immediately after each coral deployment focused on the newly outplanted corals to document the fish species observed interacting with the corals after the divers had left the plot for both experiments. Each video was viewed and a list of species interacting with the outplants was compiled. The duration of these deployments was variable (1-7 hrs) based on the time spent by the divers at the site during deployment and was only intended to compile a list of fish species approaching and/or biting the corals.“
We do not believe further restructuring would necessarily improve the flow of the paper and none of the other reviewers have suggested restructuring so we would like to stay with the present structure/flow.
Minor comments (also please see attached PDF):
Minor comments included in the pdf were addressed in the revision.
- Typo in TableS2 column title
Fixed
- Table S3, please add abundance unit to column name and consider adding a SD value for each average. Furthermore, multiple temporal surveys were undertaken. When and how many surveys were completed at each reef that went into the average calculations?
Units (individuals m-2) and standard deviation were added
We already report the number of fish surveys and the range of dates as follows in the modified text:
“Between May 2018 and February 2019, we completed 13 RVC surveys at Reef 1, 10 surveys at Reef 2, and 14 surveys at Reef 3. During the 10-month monitoring period, all three reefs were surveyed within one month, with Reefs 1 and 3 surveyed opportunistically additional times.”
Reviewer 2:
L109: Fragment corals “were” first glued
Fixed
L149-151: Sentences closely resemble those of L128-131. Perhaps reword to “Funding constraints precluded genotyping of parent colonies, however, every parent colony was represented in each treatment”.
We added the following statement to the first section of the methods to avoid repeating similar statements in both experiments:
“Due to funding constraints, the parent colonies used in this study were not formally genotyped. Nevertheless, fragments from every parent colony were represented in each reef and treatment and, thus, the results combine the survivorship of corals from three parent colonies per species in experiment one and five parent colonies in experiment two.”
L197: Perhaps spell out Coral Point Count with Excel extension (CPCe) for those unfamiliar.
Done
L223: This could be removed as L221 already mentions the (coral) species-specific removal.
Done
L245-256: I’m hesitant to categorize surgeonfish as “corallivores” since studies have conclusively shown that they are herbivores (e.g. stomach modifications, feeding behavior, etc.; see papers by Howard Choat or David Bellwood). Similarly, damselfish are generally territorial herbivores, planktivores or omnivores.
Agreed. We modified the text to state we are reporting the abundance of fish observed interacting with our outplants:
“The average number of fish observed interacting with the coral outplants…”
L270: “and” 41% for O. faveolata
Fixed
" | Here is a paper. Please give your review comments after reading it. |
650 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. The evolutionary history of southern South American organisms has been strongly influenced by Pleistocene climate oscillations. Amphibians are good models to evaluate hypotheses about the influence of these climate cycles on population structure and diversification of the biota, because they are sensitive to environmental changes and have restricted dispersal capabilities. We test hypotheses regarding putative forest refugia and expansion events associated with past climatic changes in the wood frog Batrachyla leptopus, a species distributed along ~1000 km of southwestern Patagonia that includes both glaciated and non-glaciated areas.</ns0:p></ns0:div>
<ns0:div><ns0:head>Methods.</ns0:head><ns0:p>Using three mitochondrial regions (D-loop, cyt b, and coI) and two nuclear loci (pomc and crybA1), we conducted multilocus phylogeographic analyses and species distribution modelling to gain insights into the evolutionary history of this species. Intraspecific genealogy was explored with maximum likelihood, Bayesian, and phylogenetic network approaches. Diversification time was assessed using molecular clock models in a Bayesian framework, and demographic scenarios were evaluated using approximate Bayesian computation (ABC) and extended Bayesian skyline plots (EBSP). Species distribution models (SDM) were reconstructed using climatic and geographic data.</ns0:p><ns0:p>Results. Population structure and genealogical analyses support the existence of four lineages distributed from north to south, with moderate to high phylogenetic support (bootstrap > 70%; BPP > 0.92). Diversification of B. leptopus populations began at ~0.107 mya. The divergence between lineages A and B occurred in the late Pleistocene, approximately 0.068 mya, and the divergence between lineages C and D approximately 0.065 mya. The ABC simulations indicate that lineages coalesced at two different time periods, suggesting the presence of at least two glacial refugia and a postglacial colonization route that gave rise to the two southern lineages (p = 0.93, type I error: <0.094, type II error: 0.134). EBSP, mismatch distributions and neutrality indices suggest sudden population expansion at ~0.02 mya for all lineages. SDMs infer fragmented distributions of B. leptopus associated with Pleistocene glaciations. Although the present populations of B. leptopus are found in zones affected by the last glacial maximum (~0.023 mya), our analyses recover an older history of interglacial diversification (0.107-0.019 mya). In addition, we hypothesize two glacial refugia and three interglacial colonization routes, one of which gave rise to two expanding lineages in the south.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>The southern South American landscape is characterised by dynamic transformations resulting from tectonic processes and climatic cycles <ns0:ref type='bibr' target='#b26'>(Ortíz-Jaureguizar & Cladera, 2006;</ns0:ref><ns0:ref type='bibr' target='#b14'>Le Roux, 2012)</ns0:ref>. In particular, geological studies <ns0:ref type='bibr' target='#b21'>(Mercer, 1972;</ns0:ref><ns0:ref type='bibr' target='#b35'>Rabassa & Clapperton, 1990;</ns0:ref><ns0:ref type='bibr' target='#b12'>Clark et al., 2009)</ns0:ref> have demonstrated that at least four Pleistocene glaciations occurred in southwestern Patagonia, including the most extensive Andean glaciation (1.1 mya), the coldest Pleistocene glaciation (0.7 mya), the last southern Patagonian glaciation (180 kya), and the Last Glacial Maximum (LGM; 20,500-14,000 years BP). It has been hypothesized that these climatic cycles re-organized ecosystem structure, altered species abundance and changed the distribution patterns of many Patagonian taxa <ns0:ref type='bibr' target='#b48'>(Sérsic et al., 2011;</ns0:ref><ns0:ref type='bibr'>Giarla & Jansa, 2015)</ns0:ref>. It is also recognized that some areas served as climate refugia in a vast inhospitable region, and that those refugia provided habitat in which species persisted and from which they expanded when environmental conditions became suitable <ns0:ref type='bibr'>(Keppel et al., 2012)</ns0:ref>. Phylogeographic studies of vertebrates and plants in this area <ns0:ref type='bibr' target='#b48'>(Sérsic et al., 2011)</ns0:ref> have highlighted the importance of such glacial refugia, where species survived through glacial maxima, and which today harbour high levels of genetic diversity and differentiated genetic clusters <ns0:ref type='bibr' target='#b43'>(Ruzzante et al., 2006;</ns0:ref><ns0:ref type='bibr'>Vidal-Russell, Souto & Premoli, 2011;</ns0:ref><ns0:ref type='bibr'>Zemlak et al., 2011)</ns0:ref>.</ns0:p><ns0:p>Postglacial colonization pathways have also been hypothesized for a range of species <ns0:ref type='bibr'>(Victoriano et al., 2008;</ns0:ref><ns0:ref type='bibr'>González-Ittig et al., 2010;</ns0:ref><ns0:ref type='bibr'>Gallardo et al., 2013;</ns0:ref><ns0:ref type='bibr'>Vidal et al., 2016)</ns0:ref> to explain how extant populations are connected and how genetic diversity is spatially distributed.</ns0:p><ns0:p>Amphibians have attracted considerable attention in tests of Pleistocene refugia hypotheses, largely because their restricted dispersal capabilities tend to promote allopatric differentiation <ns0:ref type='bibr'>(Fitzpatrick et al., 2009;</ns0:ref><ns0:ref type='bibr'>Carnaval et al., 2014)</ns0:ref>. Further, amphibians are highly sensitive to habitat disturbances owing to their complex life histories, permeable skin, and exposed eggs <ns0:ref type='bibr' target='#b3'>(Beebee, 1996;</ns0:ref><ns0:ref type='bibr' target='#b33'>Prohl, Ron & Ryan, 2010)</ns0:ref>.</ns0:p><ns0:p>In southwestern Patagonia, most amphibian species are endemic (70%) and strongly associated with humid Valdivian forest <ns0:ref type='bibr'>(Formas, 1995)</ns0:ref>. These forests contracted into smaller fragments during the more arid phases of the Pleistocene, leading to the isolation and allopatric diversification of forest-associated taxa <ns0:ref type='bibr' target='#b51'>(Suárez-Villota et al., 2018)</ns0:ref>.
One example is the grey wood frog Batrachyla leptopus Bell 1843. This small amphibian (30-35 mm snout-vent length) lays eggs (ova 3-4 mm in diameter) in clusters of 93-146. Clutches are fertilized at the edges of small pools, amidst vegetation or under fallen logs and rocks on the ground, where embryonic development takes place <ns0:ref type='bibr' target='#b8'>(Busse, 1971;</ns0:ref><ns0:ref type='bibr'>Úbeda & Nuñez, 2006)</ns0:ref>. When autumnal rains flood the area (March-June), water stimulates hatching, and larvae metamorphose in 5-7 months <ns0:ref type='bibr'>(Formas, 1976)</ns0:ref>. Batrachyla leptopus has one of the broadest distributions of any Chilean frog <ns0:ref type='bibr'>(Cuevas & Cifuentes, 2010)</ns0:ref>, and is threatened by habitat deterioration in most of its geographic range. Furthermore, most of its current distributional area was intensively glaciated during the LGM, but its genetic structure and the impact of habitat loss are poorly known <ns0:ref type='bibr'>(Heusser & Flint, 1977;</ns0:ref><ns0:ref type='bibr'>Paskoff, 1977)</ns0:ref>. Thus, while the humid ecological requirements of B. leptopus might in part explain its low abundance and patchy distributional pattern, Quaternary glaciations likely have generated a phylogeographic history linked to glacial refugia. In fact, previous studies of B. leptopus <ns0:ref type='bibr'>(Formas & Brieva, 2000;</ns0:ref><ns0:ref type='bibr'>Vidal et al., 2016)</ns0:ref> have revealed high levels of population divergence as a result of past isolation.</ns0:p><ns0:p>Progress in phylogeographic studies has been further extended by the incorporation of species distribution modelling (SDM; <ns0:ref type='bibr' target='#b30'>Phillips et al., 2017)</ns0:ref>. SDM approaches have been widely applied to assess species ranges and to evaluate spatial and temporal hypotheses about current and past species occurrence <ns0:ref type='bibr'>(Gavin et al., 2014)</ns0:ref>. The accessibility and modest data requirements of correlative SDMs, coupled with the improved availability of paleoclimate simulations, have the particular advantage of permitting prediction of distributional potential across scenarios of environmental change. These models are particularly relevant for understanding the effects that ongoing human-caused global climate change will have on biodiversity <ns0:ref type='bibr'>(Wiens et al., 2009)</ns0:ref>, including the study of glacial refugia <ns0:ref type='bibr'>(Gavin et al., 2014)</ns0:ref>.</ns0:p><ns0:p>In this work, we use a multilocus phylogeographic approach and species distribution modelling to test hypotheses of glacial refugia and postglacial expansion in B. leptopus. To this aim, we first estimated the genetic structure among B. leptopus populations and reconstructed its phylogeographic relationships under maximum likelihood and Bayesian inference. Second, we estimated divergence time and temporal changes in population size to determine if these were consistent with late Pleistocene events. Then, we examined the demographic history of this species by simulating alternative Pleistocene glaciation scenarios in an ABC framework. Finally, we combined demographic inferences with species distribution modelling in B. leptopus.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Sample collection</ns0:head><ns0:p>Between 2009 and 2018 we collected samples from 130 individuals (mostly buccal swabs) from 19 localities throughout the distributional range of B. leptopus in southwestern Patagonia (Table <ns0:ref type='table'>1</ns0:ref>; Fig. 1).</ns0:p></ns0:div>
<ns0:div><ns0:head>DNA extraction, amplification, and sequence alignment</ns0:head><ns0:p>Whole genomic DNA was extracted either from liver tissues or buccal swabs according to <ns0:ref type='bibr' target='#b6'>Broquet et al. (2007)</ns0:ref>, using the manufacturer's recommended protocol for the Qiagen DNeasy tissue kit (Cat. No. 69506). We amplified three mitochondrial regions, a segment of the control region (D-loop; Goebel, Donnelly & Atz, 1999), Cytochrome b (cyt b; Degnan & Moritz, 1992), and Cytochrome oxidase subunit I (coI; Folmer et al., 1994), and two nuclear regions, Proopiomelanocortin (pomc; Gamble et al., 2008) and β Crystallin A1 (crybA1; Dolman & Phillips, 2004), via polymerase chain reaction (PCR). Reaction cocktails for PCR were according to <ns0:ref type='bibr'>Suarez-Villota et al. (2018)</ns0:ref>. PCR products were sequenced at Macrogen Inc. (Seoul, Korea) and at the DNA Sequencing Center at Brigham Young University (Provo, USA). To transform sequence data into haplotypes we used PHASE v2.1.1 <ns0:ref type='bibr' target='#b50'>(Stephens & Donnelly, 2003)</ns0:ref> with the default model for recombination rate variation <ns0:ref type='bibr' target='#b16'>(Li & Stephens, 2003)</ns0:ref>. We aligned sequences using the automatic assembly function in Sequencher v4.8 (Gene Codes Corp.), inspected the aligned sequences by eye, and made corrections manually.</ns0:p></ns0:div>
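As a rough illustration of the haplotype step described above, the short Python sketch below collapses a phased, aligned set of sequences into unique haplotypes and counts them. It is only a conceptual sketch, not a substitute for PHASE (which performs statistical phasing of diploid data), and the input file name is hypothetical.

# Minimal sketch: collapse phased, aligned sequences into unique haplotypes.
from collections import Counter

def read_fasta(path):
    """Parse a FASTA file into a dict of {name: sequence}."""
    seqs, name, chunks = {}, None, []
    with open(path) as handle:
        for line in handle:
            line = line.strip()
            if line.startswith(">"):
                if name is not None:
                    seqs[name] = "".join(chunks)
                name, chunks = line[1:], []
            elif line:
                chunks.append(line.upper())
    if name is not None:
        seqs[name] = "".join(chunks)
    return seqs

alignment = read_fasta("dloop_phased_alignment.fasta")  # hypothetical input file
haplotypes = Counter(alignment.values())                # identical sequences = one haplotype
print(f"{len(haplotypes)} unique haplotypes among {len(alignment)} sequences")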
<ns0:div><ns0:head>Molecular diversity and lineage structure</ns0:head><ns0:p>Haplotype and nucleotide diversity indexes <ns0:ref type='bibr' target='#b23'>(Nei, 1987)</ns0:ref> and their standard deviations, were estimated with DNASP v5.0 <ns0:ref type='bibr' target='#b18'>(Librado & Rozas, 2009</ns0:ref>) using all markers. The possibility of saturation in the rate of base substitutions was assessed by the method of <ns0:ref type='bibr'>Xia et al. (2003</ns0:ref><ns0:ref type='bibr'>) using DAMBE v6.0 (Xia & Xie, 2001)</ns0:ref>. Population genetic structure was examined using the package GENELAND v4 implemented in R v3. <ns0:ref type='bibr'>1.2 (Guillot, Mortier & Estoup, 2005)</ns0:ref>, to infer the number of populations by giving a spatial model of cluster membership without prior designations.</ns0:p><ns0:p>GENELAND was run with a model of uncorrelated allele frequencies for the mitochondrial locus (with all gene regions concatenated). We performed eight independent runs of 1.5x10 7 iterations, with thinning set to 500 and a 'burn in' of 20%. The number of possible clusters tested ranged from 1 to 19 (according to sampling locations). The level of population structure among the clusters obtained by GENELAND, was assessed by analysis of molecular variance (AMOVA; Holsinger & Weir, 2009) using ARLEQUIN v3.1 <ns0:ref type='bibr'>(Excoffier, Laval & Schneider, 2005)</ns0:ref> for mtDNA and nDNA separately. Also, using all loci we evaluated whether the sequences evolved under strict neutrality using Tajima's D <ns0:ref type='bibr'>(Tajima, 1989</ns0:ref><ns0:ref type='bibr'>), Fu & Li's D (Fu & Li, 1993)</ns0:ref>, and r 2 (Ramos-Onsins & Rozas, 2002) tests.</ns0:p></ns0:div>
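For readers unfamiliar with the summary statistics named above, the following Python sketch re-implements haplotype diversity, per-site nucleotide diversity, and Tajima's D for a set of equal-length aligned sequences. The analyses in this study were run in DnaSP and GENELAND, so this is only a conceptual illustration; it treats any character mismatch (including gaps or Ns) as a difference.

# Conceptual re-implementation of Hd, pi, and Tajima's D (not the software used here).
from itertools import combinations
from math import sqrt

def diversity_stats(seqs):
    n, L = len(seqs), len(seqs[0])
    # Haplotype diversity: Hd = n/(n-1) * (1 - sum p_i^2)
    freqs = {}
    for s in seqs:
        freqs[s] = freqs.get(s, 0) + 1
    hd = n / (n - 1) * (1 - sum((c / n) ** 2 for c in freqs.values()))
    # Mean number of pairwise differences (k) and per-site nucleotide diversity (pi)
    diffs = [sum(a != b for a, b in zip(x, y)) for x, y in combinations(seqs, 2)]
    k = sum(diffs) / len(diffs)
    pi = k / L
    # Tajima's D from k, the number of segregating sites S, and sample size n
    S = sum(len(set(col)) > 1 for col in zip(*seqs))
    a1 = sum(1 / i for i in range(1, n))
    a2 = sum(1 / i ** 2 for i in range(1, n))
    b1 = (n + 1) / (3 * (n - 1))
    b2 = 2 * (n ** 2 + n + 3) / (9 * n * (n - 1))
    c1 = b1 - 1 / a1
    c2 = b2 - (n + 2) / (a1 * n) + a2 / a1 ** 2
    e1, e2 = c1 / a1, c2 / (a1 ** 2 + a2)
    D = (k - S / a1) / sqrt(e1 * S + e2 * S * (S - 1)) if S > 0 else float("nan")
    return hd, pi, D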
<ns0:div><ns0:head>Phylogenetic trees reconstruction, split networks and divergence time estimates</ns0:head><ns0:p>Prior to the phylogenetic analyses, evolutionary models and partitioning strategies were evaluated using Bayesian information criterion (BIC) scores <ns0:ref type='bibr' target='#b45'>(Schwarz, 1978)</ns0:ref> in PARTITIONFINDER v2.1.1 <ns0:ref type='bibr' target='#b13'>(Lanfear et al., 2017)</ns0:ref> (Table <ns0:ref type='table'>S1</ns0:ref>). The phylogenetic analyses were performed on a concatenated matrix of mitochondrial and nuclear sequences. Partitioned maximum likelihood analyses were conducted using GARLI 2.0 <ns0:ref type='bibr'>(Zwickl, 2006)</ns0:ref> with 200 replicates of nonparametric bootstrap for branch support. Bayesian analyses were performed using MRBAYES v3.2 <ns0:ref type='bibr' target='#b42'>(Ronquist et al., 2012)</ns0:ref>. We performed four independent MCMC runs of 50 million generations, sampling every 2,000 generations. Posterior distributions of parameter estimates and likelihood scores were visualized with the TRACER program v1.6.0 <ns0:ref type='bibr' target='#b36'>(Rambaut et al., 2014)</ns0:ref> to assess convergence. The effective sample sizes (ESS) of each parameter (>200) allowed us to confirm that the analysis was adequately sampled. A maximum clade credibility tree was visualized with the program FIGTREE v1.4.4 (http://tree.bio.ed.ac.uk/software/figtree/).</ns0:p><ns0:p>Posterior probability values >0.95 were taken as high statistical support for a clade being present on the true tree <ns0:ref type='bibr'>(Huelsenbeck & Rannala, 2004)</ns0:ref>. To obtain additional statistical support for the best tree, the topologies of the different trees (ML and Bayesian) were compared using the Shimodaira-Hasegawa (S-H) test <ns0:ref type='bibr' target='#b49'>(Shimodaira & Hasegawa, 1999)</ns0:ref> with resampling-estimated log likelihood (RELL) and bootstrapping of 1,000 replicates, using the program PAUP*.</ns0:p><ns0:p>We are aware that phylogenetic methods may not apply at the within-species level, because multifurcating population genealogies in which descendant alleles coexist with ancestral ones, and recombination events, may produce reticulate relationships <ns0:ref type='bibr' target='#b31'>(Posada & Crandall, 2001)</ns0:ref>. To consider these caveats, we constructed unrooted phylogenetic networks using the method described by <ns0:ref type='bibr'>Huson and Bryant (2006)</ns0:ref>, implemented in SPLITSTREE v4.14.4.</ns0:p><ns0:p>To determine when major clades and lineages diverged relative to Quaternary glaciation history, we estimated the time since the most recent common ancestor (TMRCA) using the species tree reconstructed from the concatenated mitochondrial and nuclear sequences. For this reconstruction, we used the multi-species coalescent module *BEAST implemented in BEAST v1.8.4 <ns0:ref type='bibr'>(Drummond & Rambaut, 2007;</ns0:ref><ns0:ref type='bibr'>Heled & Drummond, 2010)</ns0:ref>, and the same models used for phylogenetic tree reconstruction found by PARTITIONFINDER. Because it is not possible to date any of the nodes within B. leptopus, as there are no fossils or dated biogeographic events, we used as priors Neobatrachian mutation rates of 0.291037% per million years for coI, 0.37917% per million years for each of the other mitochondrial markers (D-loop and cyt b), and 0.3741% per million years for pomc sequences, according to <ns0:ref type='bibr'>Irrisarri et al. (2012)</ns0:ref>. Bayes factor analysis <ns0:ref type='bibr' target='#b17'>(Li & Drummond, 2012)</ns0:ref> indicated that the species tree with a strict-clock model received decisive support compared to uncorrelated exponential or uncorrelated lognormal relaxed-clock models.</ns0:p><ns0:p>Markov chains in BEAST were initialized using the tree obtained by MRBAYES to calculate posterior parameter distributions, including the tree topology and divergence times. We used BEAST to estimate divergence times based on runs of 2x10^7 generations, sampling every 1,000th generation. The first 10% of samples were discarded as 'burn in', and we assessed convergence to the stationary distribution and acceptable mixing (ESS >200) using TRACER v1.6.0.</ns0:p></ns0:div>
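As a back-of-the-envelope check on what these strict-clock priors imply, the sketch below converts the percent-per-million-year rates into an expected TMRCA for an observed pairwise divergence, assuming the rates are per lineage (hence the factor of two). The actual estimates reported here come from the *BEAST multispecies-coalescent analysis; the example divergence value is hypothetical.

# Illustrative only: t = d / (2 * rate), with rates taken from the text above.
RATES_PCT_PER_MYR = {"coI": 0.291037, "dloop": 0.37917, "cytb": 0.37917, "pomc": 0.3741}

def tmrca_myr(pairwise_divergence, marker):
    """TMRCA in Myr for a proportion of differing sites between two lineages,
    under a strict clock and assuming the rate applies per lineage."""
    rate = RATES_PCT_PER_MYR[marker] / 100.0   # substitutions/site/Myr
    return pairwise_divergence / (2.0 * rate)

# e.g. two cyt b haplotypes differing at 0.05% of sites (hypothetical value)
# gives an expected TMRCA of roughly 0.066 Myr, i.e. tens of thousands of years.
print(f"{tmrca_myr(0.0005, 'cytb'):.3f} Myr")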
<ns0:div><ns0:head>Population-size dynamics through time</ns0:head><ns0:p>Historical demographic expansions and the dynamics of the inferred lineages through geological time were assessed with 'mismatch distributions' <ns0:ref type='bibr' target='#b40'>(Rogers & Harpending, 1992)</ns0:ref> and Extended Bayesian Skyline Plots (EBSP; <ns0:ref type='bibr'>Heled & Drummond, 2008)</ns0:ref>, respectively. We used both approaches in a complementary way because small sample sizes can fail to provide enough power for Bayesian skyline plots to detect population expansion <ns0:ref type='bibr'>(Grant, 2015)</ns0:ref>. The smooth, unimodal distributions typical of expanding populations can be readily distinguished from the ragged, multimodal distribution 'signatures' of long-term stationary populations by means of the 'raggedness' of these distributions <ns0:ref type='bibr' target='#b40'>(Rogers & Harpending, 1992)</ns0:ref>. Confidence intervals for these estimates were obtained by simulations using the coalescent algorithm implemented in DNASP v5.0. Genealogies and model parameters for each lineage were sampled every 1,000th iteration over 2x10^7 generations under a strict molecular clock with uniformly distributed priors and a 'burn in' of 2,000. Demographic patterns for each analysis were plotted in EXCEL v14.7.7.</ns0:p></ns0:div>
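To make the mismatch-distribution idea concrete, here is a minimal Python sketch that tabulates the distribution of pairwise differences among sequences and computes one common formulation of the raggedness index; the values reported in this study were obtained with DnaSP, so this is purely illustrative.

# Sketch of a mismatch distribution and a simple raggedness index.
from itertools import combinations

def mismatch_distribution(seqs):
    """Relative frequency of sequence pairs differing by 0, 1, 2, ... sites."""
    diffs = [sum(a != b for a, b in zip(x, y)) for x, y in combinations(seqs, 2)]
    counts = [0] * (max(diffs) + 1)
    for d in diffs:
        counts[d] += 1
    total = len(diffs)
    return [c / total for c in counts]

def raggedness(freqs):
    """One common formulation: sum of squared differences between successive classes.
    Smooth, unimodal (expanding) distributions give low values; ragged ones give high values."""
    padded = freqs + [0.0]
    return sum((padded[i] - padded[i - 1]) ** 2 for i in range(1, len(padded)))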
<ns0:div><ns0:head>Test of phylogeographical hypotheses with ABC</ns0:head><ns0:p>A coalescent method was used to test phylogeographic hypotheses by constraining the genealogies to fit alternative evolutionary models, and assessing each model's fit by comparing the observed genetic pattern with the range of simulated patterns. Competing phylogeographic hypotheses were compared using an approximate Bayesian computation (ABC) approach, as implemented in DIYABC v2.1 <ns0:ref type='bibr'>(Cornuet et al., 2014)</ns0:ref>. We evaluated five demographic scenarios to test alternative divergence times and tree topologies of the four main lineages recovered by GENELAND and the phylogenetic analyses. Because the divergence of the four main lineages took place before the last glacial maximum, the divergence scenarios were explored within that time range. The refugia hypotheses correspond to the points of coalescence; with four strongly supported lineages, between one and three coalescence points are possible, and we tested all of these possibilities except scenarios with postglacial admixture, because no paraphyly between lineages was observed. The prior coalescence times t1 and t2 applied in the ABC correspond to those estimated by BEAST for the origin of Batrachyla leptopus (t2) and the divergence of the four lineages (t1, lower and higher range of the four lineages). Thus, all historically relevant scenarios differed only in the order of population divergence and in the number and timing of demographic expansion events. These alternatives were: Scenario 1 -the null model -all four lineages coalesced at t1 with equal divergence rates. Scenario 2 -also a null model, but all four lineages coalesced at t2 with equal divergence rates. Scenario 3 -the first coalescence of lineages A and B at t1, whose ancestor coalesced at t2 with lineages C and D. Scenario 4 -the first coalescence of lineages C and D at t1, whose ancestor coalesced at t2 with lineages A and B. Scenario 5 -one split event at t1 isolated the north (lineages A and B) from the south (lineages C and D) clades, and then a coalescence of both clades at t2 (see Results). We also tested other scenarios with more recent divergence times, but their probabilities were very low and they were not considered further.</ns0:p><ns0:p>Prior values of Ne were set as 1,000-500,000 individuals with a uniform distribution, based on Ne calculated from MIGRATE-N v3.6 <ns0:ref type='bibr' target='#b4'>(Beerli, 2006)</ns0:ref>. For this analysis, we performed maximum likelihood using 10 short chains of 1,000 steps and two long chains of 10,000 steps, sampling every 100 steps, with a burn-in of 10 per cent. Ne was calculated using the mitochondrial and nuclear rates reported by <ns0:ref type='bibr'>Irisarri et al. (2012)</ns0:ref>.</ns0:p><ns0:p>Prior values for the divergence of the ancestral populations were based on the divergence times calculated here (see Results) and a generation time of 2-3 yr <ns0:ref type='bibr' target='#b19'>(Martin & Palumbi, 1993)</ns0:ref>, using a uniform distribution. Divergence times were set at between 20,000-500,000 generations ago for t2, and 10,000-200,000 generations for t1.</ns0:p></ns0:div>
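For readers unfamiliar with ABC model choice, the toy Python sketch below shows the underlying logic of rejection-based scenario comparison: simulate a summary statistic under each scenario, keep simulations close to the observed data, and use the share of accepted simulations per scenario as an approximate posterior probability. DIYABC implements a much more sophisticated version of this (coalescent simulators, many summary statistics, and logistic regression); the simulators and observed value below are purely hypothetical.

# Toy rejection-ABC model choice; not the DIYABC machinery used in this study.
import random

def abc_model_choice(observed, simulators, n_sim=100_000, tolerance=0.05):
    """Draw simulations from each scenario with equal prior probability, keep those
    whose summary statistic lies within `tolerance` of the observed value, and
    return the proportion of accepted draws coming from each scenario."""
    accepted = {name: 0 for name in simulators}
    for _ in range(n_sim):
        name = random.choice(list(simulators))   # uniform prior over scenarios
        stat = simulators[name]()                # one simulated summary statistic
        if abs(stat - observed) <= tolerance:
            accepted[name] += 1
    total = sum(accepted.values()) or 1
    return {name: k / total for name, k in accepted.items()}

# Hypothetical toy simulators, each returning a single summary statistic
scenarios = {
    "one_refugium": lambda: random.gauss(0.40, 0.10),
    "two_refugia": lambda: random.gauss(0.65, 0.10),
}
print(abc_model_choice(observed=0.62, simulators=scenarios))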
<ns0:div><ns0:head>Paleo-distribution and species distribution modelling</ns0:head><ns0:p>A total of 120 occurrence records were used for the species distribution models (SDMs) and paleo-distribution modelling. Records were obtained from peer-reviewed literature, our sampled sites, and online databases (GBIF: gbif.org, VertNet: vertnet.org, and iDigBio: idigbio.org). We modelled the SDMs using the standard 19 bioclimatic variables downloaded from WorldClim <ns0:ref type='bibr'>(Hijmans et al., 2005)</ns0:ref> for the current conditions, the Mid Holocene (Mid-Hol, ~6,000 yrs BP), the LGM (~22,000 yrs BP), and the Last Inter-glacial (LIG, ~120,000-140,000 yrs BP). The variables were at 30 arc-sec resolution for the current conditions and the LIG, and at 2.5 arc-min for the Mid-Hol and the LGM. The Mid-Hol and LGM variables were based on the Community Climate System Model (CCSM) and the Model for Interdisciplinary Research on Climate (MIROC), while LIG conditions were based on Otto-Bliesner et al. (2016). We restricted the projection of the models by creating a buffer of 2º around the outermost occurrence records and the known distribution of B. leptopus. All SDMs were performed using MAXENT v3.4.0 <ns0:ref type='bibr' target='#b30'>(Phillips et al., 2017)</ns0:ref>. To avoid model overfitting and to account for the correlation between the variables and the presence of outliers encountered during data exploration, we reduced the number of variables to five. This was done by retaining the variables with |rho| < 0.8 that contributed the most to ten cross-validated models, as shown by a Jackknife test. These variables were Bio2 = Mean Diurnal Range, Bio4 = Temperature Seasonality, Bio5 = Max Temperature of Warmest Month, Bio13 = Precipitation of Wettest Month, and Bio17 = Precipitation of Driest Quarter. For all models, the equal training sensitivity and specificity threshold rule was applied and the cloglog output was selected. Extrapolation was not used and clamping was applied when hind-casting the model of the current conditions to the past. The models for the Mid-Hol and for the LGM were overlaid within each time period to identify areas of agreement and disagreement between the models for each time period. All models were transformed to binary maps using the selected threshold rule <ns0:ref type='bibr' target='#b30'>(Phillips et al., 2017)</ns0:ref>.</ns0:p></ns0:div>
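The predictor-screening step described above can be illustrated with a short Python sketch that drops one member of every pair of bioclimatic variables with |Spearman rho| >= 0.8. In practice the columns would be pre-ordered by their jackknife contribution to the cross-validated MaxEnt models; the input CSV of variable values at occurrence points is hypothetical, and MaxEnt itself is run separately.

# Sketch of correlation-based predictor filtering; assumes a numeric-only table.
import pandas as pd

def filter_correlated(df, threshold=0.8):
    """Keep a variable only if its |Spearman rho| with every already-kept
    variable is below the threshold; columns are assumed ordered by preference."""
    corr = df.corr(method="spearman").abs()
    keep = []
    for col in df.columns:
        if all(corr.loc[col, k] < threshold for k in keep):
            keep.append(col)
    return keep

bioclim = pd.read_csv("bioclim_at_occurrences.csv")   # hypothetical: bio1..bio19 columns
print(filter_correlated(bioclim))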
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Genetic structure using Bayesian clustering model</ns0:head><ns0:p>A total of 113 haplotypes were found when mitochondrial and nuclear markers were combined (Table <ns0:ref type='table'>2</ns0:ref>). Saturation tests as a function of the genetic distance estimated under the GTR substitution model showed no significant saturation of the DNA sequence alignments. The mtDNA Bayesian analysis with GENELAND yielded a modal number of four clusters (K=4), recovered from all independent runs (Fig. <ns0:ref type='figure' target='#fig_6'>2A</ns0:ref>); this is based on the highest average posterior probability.</ns0:p><ns0:p>The distribution of these four clusters, from north to south, was named as follows (Fig. <ns0:ref type='figure' target='#fig_6'>2B</ns0:ref>): lineage A (Los Queules), lineage B (Nahuelbuta), lineage C (Bahía Mansa, Cordillera Pelada, Máfil, and Pichirropulli), and lineage D (all remaining localities shown in Table <ns0:ref type='table'>1</ns0:ref>). The highest estimate of haplotype diversity was found in lineage A, whereas the lowest values were found in lineage B (Table <ns0:ref type='table'>2</ns0:ref>). The highest nucleotide diversity was detected in lineage D and the lowest in lineage B (Table <ns0:ref type='table'>2</ns0:ref>). Negative but non-significant values of the Tajima's D and Fu's FS neutrality test indices were found in all lineages except lineage B, where the non-significant positive values likely reflect the smallest sample size among localities. Ramos-Onsins & Rozas's R2 values were positive for all lineages but significant only for lineages A and B, suggesting recent expansion (Table <ns0:ref type='table'>2</ns0:ref>).</ns0:p><ns0:p>The AMOVA results using the four lineages indicate significant genetic structure (groups defined as lineages A, B, C, and D): (1) variation among lineages = 35.64% and variation within lineages = 45.93% for mtDNA; (2) variation among lineages = 0% and variation within lineages = 51.12% for pomc; (3) variation among lineages = 4.49% and variation within lineages = 2.74% for crybA1 (Table <ns0:ref type='table'>3</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Phylogenetic tree reconstruction, split networks, and lineage divergence time</ns0:head><ns0:p>The models selected for the ML and Bayesian analyses are described in Table <ns0:ref type='table'>S1</ns0:ref>. Because the Bayesian analyses recovered a maximum clade credibility tree similar to the best ML tree, and the Shimodaira-Hasegawa test showed that topological disagreements were restricted to 'lowsupport' nodes, we show only the Bayesian tree (Fig. <ns0:ref type='figure' target='#fig_8'>3A</ns0:ref>). The same four lineages recovered by GENELAND were recovered in the phylogenetic reconstruction with moderate to high support values. Lineages A and B were recovered as sister groups (bootstrap = 96%; BPP = 0.99), as were the C and D lineages, but with moderate support (bootstrap 74%; BPP= 0.95).</ns0:p><ns0:p>PeerJ reviewing PDF | (2020:03:46371:1:1:NEW 9 Jul 2020)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Mitochondrial phylogenetic analyses recovered similar results to concatenated datasets, but the nuclear phylogeny was highly polytomized (Fig. <ns0:ref type='figure' target='#fig_0'>S1</ns0:ref>). The split networks (Fig. <ns0:ref type='figure' target='#fig_8'>3B</ns0:ref>) recovered the </ns0:p></ns0:div>
<ns0:div><ns0:head>Demographic patterns of the inferred clusters</ns0:head><ns0:p>Results of mismatch distribution analyses (Fig. <ns0:ref type='figure' target='#fig_9'>4A-D</ns0:ref>) revealed a single primary peak for lineage</ns0:p><ns0:p>A, but with a non-significant raggedness index (r=0.0230, P>0.1). Similarly, unimodal patterns were observed in lineages C (r=0.003, P<0.001) and D (r=0.0008, P<0.001). The small sample size (n = 7) and haplotype numbers (H = 2) precluded this analysis on lineage B. Reconstruction of the demographic histories by means of Extended Bayesian Skyline Plot (Fig. <ns0:ref type='figure' target='#fig_9'>4E-H</ns0:ref> ) suggested population expansions for all lineages except B (Fig. <ns0:ref type='figure' target='#fig_9'>4F</ns0:ref>). EBSP further resolved sequential demographic expansions from the oldest, lineage D (c. 18,000 years bp), then lineage A (c. 11,000 years bp), and most recently, lineage C (c. 5,000 years bp).</ns0:p></ns0:div>
<ns0:div><ns0:head>Hypothesis testing with ABC</ns0:head><ns0:p>PeerJ reviewing PDF | (2020:03:46371:1:1:NEW 9 Jul 2020)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Logistic regression analysis with DIYABC identified Scenario 4 as most strongly supported among the five tested (Fig. <ns0:ref type='figure' target='#fig_10'>5D</ns0:ref>), with a high posterior probability (0.93; Table <ns0:ref type='table' target='#tab_2'>4</ns0:ref>); all other scenarios had much lower support (0.0-0.04). Moreover, the Type I and Type II error rates estimated for Scenario 4 were the lowest in both cases (0.14; 0.014-0.196; Table <ns0:ref type='table' target='#tab_2'>4</ns0:ref>).</ns0:p><ns0:p>Scenario 4 placed the first divergence as the split between lineages A, B, and the ancestor of the southern clade (lineages C and D) at t2, and the second split between lineages C and D at t1 (Fig. <ns0:ref type='figure' target='#fig_10'>5D</ns0:ref>). The effective population size (Ne) and divergence time parameters, in terms of the number of generations (t), estimated for this divergence scenario (Table <ns0:ref type='table'>S2</ns0:ref>), corroborate the population expansions inferred by EBSP and mismatch distribution for lineages C and D.</ns0:p></ns0:div>
<ns0:div><ns0:head>SDMs and paleo-distribution models</ns0:head><ns0:p>The predicted distribution models of B. leptopus under four periods (last inter-glacial to current) are shown in Fig. <ns0:ref type='figure' target='#fig_11'>6</ns0:ref>. The model for the current conditions showed that the distribution of this species is mostly encompassed by the county-based current distribution mapped by the IUCN, with a high AUC value (0.961, SD = 0.013) (Fig. <ns0:ref type='figure' target='#fig_11'>6A</ns0:ref>). The two circulation models for the Mid- <ns0:ref type='table'>3</ns0:ref>), with less differentiation among lineages.</ns0:p><ns0:p>This indicates the presence of high local genetic structure and high interpopulation differentiation. These results might suggest that although B. leptopus has a wide range of distribution, long-range dispersal is highly unlikely, which is in agreement with historical data.</ns0:p><ns0:p>On the other hand, most of the variation in nuclear markers were observed within localities, while the values of both genes were not significant (Table <ns0:ref type='table'>3</ns0:ref>).</ns0:p><ns0:p>The basic topology of the concatenated ML and Bayesian trees were similar, consequently we used the Bayesian tree as our primary hypothesis of relationships among B. leptopus populations (Fig. <ns0:ref type='figure' target='#fig_8'>3A</ns0:ref>). We recovered two main clades (named south and north) and four lineages (A, B, C, and D) strongly supported by boostrap, and posterior probabilities. The nuclear phylogeny was highly polytomized (Fig. <ns0:ref type='figure' target='#fig_0'>S1A, B</ns0:ref>) and it did not inform on lineages relationships. In contrast, the mtDNA phylogeny (Fig. <ns0:ref type='figure' target='#fig_0'>S1C</ns0:ref>) was highly informative, separating the same clades of the concatenated data set. Consequently, mtDNA variation was a leading indicator of population differentiation and phylogenetic relationships relative to nuclear loci in B. leptopus, indicating that mitochondrial markers can be more sensitive in revelate lineage divergence than any single nuclear gene, problably because a lower effective population sizes than nuclear genes (Hung,</ns0:p></ns0:div>
<ns0:div><ns0:head>Biogeographic structure of the lineages</ns0:head><ns0:p>The two northernmost lineages (A and B) are currently separated by two major river systems (Itata and Bío Bío; Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>); these boundaries also coincide with a region with sparsely distributed forest patches. These two lineages are genetically well-differentiated from each other (Fig. <ns0:ref type='figure' target='#fig_6'>2b</ns0:ref>); lineage A is restricted to the Los Queules Reserve, and lineage B are limited to a small area in Nahuelbuta range (Locality 2) near to the Butamalal River. The evolution of distinct lineages or genetic clusters is often attributed to population isolation during glacial advances, which geographical isolation in combination with different selection pressures and/or genetic drift would drive population divergence <ns0:ref type='bibr'>(Hewitt, 2004)</ns0:ref>. Similarly, low levels of current genetic diversity as observed in lineage B (Nahuelbuta; Table <ns0:ref type='table'>2</ns0:ref>), suggest a recolonization history of founder effects, small population size, and genetic bottlenecks <ns0:ref type='bibr'>(Hewitt, 2004)</ns0:ref>.</ns0:p><ns0:p>In contrast to these well-differentiated/low variability lineages, those in the southern distribution (lineages C and D) are geographically more heterogeneous. Lineage C is widely distributed from Máfil in the Los Ríos region (Locality 3 in Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>) to Bahía Mansa, Los Lagos region (Locality 6 in Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>), suggesting that certain landscape features, such as extensive forests, would have allowed dispersal among breeding groups over long timescales. Moreover, the encompassed areas of lineage C also include large mountains (e.g. Bahía Mansa and Cordillera Pelada; Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>), suggesting that such orography could represent barriers to gene flow in B. leptopus, as reported in some co-distributed vertebrates and plants <ns0:ref type='bibr' target='#b48'>(Sérsic et al., 2011)</ns0:ref>. The combined dataset suggests that lineage D is widespread throughout the rest of the species' range, with few phylogeographical subdivisions.</ns0:p></ns0:div>
<ns0:div><ns0:head>Lineage divergence time</ns0:head></ns0:div>
<ns0:div><ns0:p>Divergence time estimates suggest that diversification of B. leptopus lineages may have occurred earlier than reported in other frogs such as the ground frog Eupsophus calcaratus <ns0:ref type='bibr' target='#b25'>(Nuñez et al., 2011)</ns0:ref>, although co-distributed populations (e.g. Bahía Mansa) appear to have diverged later in time (0.025 mya for B. leptopus (Fig. <ns0:ref type='figure' target='#fig_8'>3A</ns0:ref>) versus 0.065 mya for E. calcaratus; <ns0:ref type='bibr' target='#b25'>Nuñez et al., 2011)</ns0:ref>. Our estimates place the onset of diversification within B. leptopus in the late Pleistocene (~0.107 mya; Fig. <ns0:ref type='figure' target='#fig_8'>3</ns0:ref>). Moreover, the overall pattern suggests that B. leptopus has undergone several rounds of fragmentation, followed by successive radiations within each clade.</ns0:p><ns0:p>Further, at least two more recent series of fragmentation events are inferred within each of these clades. Our calibrations place the split between lineages A and B at ~0.068 mya, and between lineages C and D at ~0.065 mya.</ns0:p><ns0:p>The discrepancy in divergence times between our results and those of Vidal et al. (2016) may be due to the use of different mutation rates (0.8%) and a single marker (mitochondrial cyt b). It is well known from population genetics theory that the stochastic nature of the genealogical process implies a significant amount of variance associated with parameter estimation. In fact, <ns0:ref type='bibr' target='#b22'>Nabholz et al. (2009)</ns0:ref> suggest that inferences of divergence dates should use statistical phylogenetic methods that account for substitution rate variation across lineages.</ns0:p><ns0:p>Indeed, analyses of mtDNA sequence data can be enhanced if they are collected in conjunction with nuclear sequences, because the latter provide an independent estimate of phylogenetic relationships, mitigating the random effects inherent to genetic drift and the large variance associated with parameter estimation <ns0:ref type='bibr' target='#b11'>(Carstens & Dewey, 2010)</ns0:ref>. Interpretations derived from the divergence time analyses also need to take into account the largely overlapping confidence intervals of the results for each lineage divergence. For example, if only the median values are considered, the results suggest coalescence of lineages C and D at t1 (around 0.066 mya) and coalescence of lineages A and B at t2 (around 0.107 mya). But if the confidence intervals are considered, the divergences of the North and South clades could be closer in time than hypothesized in this reconstruction.</ns0:p></ns0:div>
<ns0:div><ns0:head>Hypothesized refugia and post-glacial expansion</ns0:head><ns0:p>Studies on past contraction-expansion climate cycles in Patagonian landscapes suggest that a rapid population expansion should occur in the biota affected by these processes, as habitats became more available <ns0:ref type='bibr'>(Fraser et al., 2012)</ns0:ref>. Despite the geomorphological differences in Patagonian landscapes, population genetics theory points out that a population undergoing rapid expansion can be characterized by low genetic diversity, since each new founder population represents only a fraction of the ancestral population <ns0:ref type='bibr' target='#b24'>(Nichols & Hewitt, 1994;</ns0:ref><ns0:ref type='bibr'>Hewitt, 2000;</ns0:ref><ns0:ref type='bibr'>Hewitt, 2004;</ns0:ref><ns0:ref type='bibr'>Waters, Fraser & Hewitt, 2013)</ns0:ref>.</ns0:p><ns0:p>The last two Pleistocene glaciations in southwestern South America (180 kya and 20 kya) covered the Andes with large ice fields reaching the Pacific Ocean south to 39ºS, where the ice sheet decreased in elevation to sea level, and extending further to the southern tip of South America <ns0:ref type='bibr' target='#b34'>(Rabassa, 2011)</ns0:ref>. Consequently, Late-Pleistocene divergence time estimates (0.107 mya) for first diversification of B. leptopus populations separating the North and South clades (Fig. <ns0:ref type='figure' target='#fig_8'>3</ns0:ref>), are consistent with a hypothesis of Pleistocene isolation followed by interglacial dispersal. In fact, the divergence of lineages A and B suggests that this interglacial dispersal from the ancestral population occurred rapidly across the current range of the species.</ns0:p><ns0:p>PeerJ reviewing PDF | (2020:03:46371:1:1:NEW 9 Jul 2020)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>The existence of two suitable areas for the species is supported by the SDM for the LIG (0.120 -0.140 mya) in that Los Queules population (Locality 1, Table <ns0:ref type='table'>1</ns0:ref>) showed the highest genetic diversity (Table <ns0:ref type='table'>2</ns0:ref>). This is typical for refugial populations that have been stable over time <ns0:ref type='bibr'>(Fraser et al., 2012)</ns0:ref>. This evidence and the agreement of the two circulation models for Los</ns0:p><ns0:p>Queules area as a suitable habitat for the species during the LGM (Fig. <ns0:ref type='figure' target='#fig_11'>6C</ns0:ref>), suggest that it is highly probable that now Los Queules location is a remnant of the northern refuge, derived from the last southern Patagonian glaciation (180 kya).</ns0:p><ns0:p>Demographic reconstruction in B. leptopus using an ABC framework also supports the hypothesis of two putative refugia at different time during the Pleistocene (Scenario 4, Fig. <ns0:ref type='figure' target='#fig_10'>5D</ns0:ref>, Table <ns0:ref type='table' target='#tab_2'>4</ns0:ref>). This scenario suggests that the South clade populations (lineages C and D) are likely descended from a divergence event approximately 65 Kya. This scenario is concordant with the predicted patchy distribution of the species during the Mid-Holocene (~6,000 years BP; Fig. <ns0:ref type='figure' target='#fig_11'>6B</ns0:ref>).</ns0:p><ns0:p>On the other hand, the various demographic analyses show that the genetic structure of lineages C and D contains signatures of demographic expansion consistent with Pleistocene glacial retreat. 
In fact, strong support for recent population expansion is represented by significantly negative Fu's Fs values, unimodal mismatch distributions with low raggedness indexes, and EBSP's depicting rapid expansion following the retreat of the Patagonian ice sheet after 15 Kya (Table <ns0:ref type='table'>2</ns0:ref>; Fig. <ns0:ref type='figure' target='#fig_9'>4</ns0:ref>). This hypothesis is also reinforced with SDMs produced by both circulation models, although when viewed separately, the MIROC circulation model is the only one predicting this scenario.</ns0:p></ns0:div>
<ns0:div><ns0:head>Past population dynamics and current conservation significance</ns0:head><ns0:p>PeerJ reviewing PDF | (2020:03:46371:1:1:NEW 9 Jul 2020)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>During the Cenozoic climatic oscilations included multiple glaciations in southern South America <ns0:ref type='bibr' target='#b35'>(Rabassa & Clapperton, 1990;</ns0:ref><ns0:ref type='bibr'>Coronato, Martínez & Rabassa, 2004)</ns0:ref>. These geological events have been hypothesized as causes of the retreat and advance of temperate Nothofagus forests and conifers <ns0:ref type='bibr'>(Villagrán & Hinojosa, 1997;</ns0:ref><ns0:ref type='bibr' target='#b32'>Premoli, Kitzberger & Veblen, 2000;</ns0:ref><ns0:ref type='bibr'>Tremetsberger et al., 2009)</ns0:ref>. Accordingly, several phylogeographic hypotheses suggest that Pleistocene glaciations had profound effects on the population genetic structure and variability of Patagonian fauna. For example, glaciated populations of some fish species display molecular diversity (high haplotype diversity and low nucleotide diversity) significantly correlated with latitude <ns0:ref type='bibr' target='#b44'>(Ruzzante et al., 2008;</ns0:ref><ns0:ref type='bibr'>Cosacov et al., 2010)</ns0:ref>. The same genetic patterns have been observed in reptiles <ns0:ref type='bibr' target='#b5'>(Breitman et al., 2011;</ns0:ref><ns0:ref type='bibr'>Fontanella et al., 2012)</ns0:ref>, amphibians <ns0:ref type='bibr' target='#b25'>(Nuñez et al., 2011)</ns0:ref>, and mammals <ns0:ref type='bibr'>(Himes, Gallardo & Kenagy, 2008;</ns0:ref><ns0:ref type='bibr' target='#b15'>Lessa, D'Elía & Pardiñas, 2010)</ns0:ref>.</ns0:p><ns0:p>In agreement with these previous studies, the location of glacial refugia and postglacial expansion identified here, indicate that the climatic niche of B. leptopus is likely to be related to an increase in the availability of suitable habitat in the southern part of its current distribution.</ns0:p><ns0:p>Reconstruction of the potential distribution area of B. leptopus (Fig. <ns0:ref type='figure' target='#fig_11'>6</ns0:ref>) suggests that suitable habitats underwent expansions and contractions during the glacial periods.</ns0:p><ns0:p>In addition to the inferences of past population dynamics, predictions about the ability of species to respond to future climate change play an important role in alerting potential risks to biodiversity. In fact, many studies have investigated the response of biodiversity to climate change, and most of them indicate that current and future rates of these changes may be too fast for the ecological niche to evolve <ns0:ref type='bibr'>(Fraser et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b41'>Rolland et al., 2018)</ns0:ref>. This is particularly critical in species with low dispersal capacity such as amphibians, which makes them potentially less able to respond to changes induced by climate and, consequently, more vulnerable to Manuscript to be reviewed In the same way, AB and CD lineage ancestors coalesced at t2 in a single refuge.</ns0:p><ns0:p>PeerJ reviewing PDF | (2020:03:46371:1:1:NEW 9 Jul 2020)</ns0:p><ns0:p>Manuscript to be reviewed </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Fig. 1 )</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Fig.1). Each sampling site was geo-referenced with a GPS Garmin GPSmap 76CSx. Eight</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>tissue kit (Cat. No. 69506). We amplified three mitochondrial regions: a segment of the Control region (D-loop; Goebel, Donnelly & Atz, 1999) Cytochrome b (cyt b; Degnan & Moritz, 1992), PeerJ reviewing PDF | (2020:03:46371:1:1:NEW 9 Jul 2020) Manuscript to be reviewed and Cytochrome oxidase subunit I (coI; Folmer et al., 1994), and two nuclear regions: Propiomelanocortin (pomc; Gamble et al., 2008) and β Crystallin A1 (crybA1; Dolman &</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Scenario 4 -the first coalescence of lineages C and D at t1, whose ancestor coalesced at t2 with lineage A and B. Scenario 5 -one split event at t1 isolated the north (lineages A and B) from the south (lineages C and D) clades, and then a coalescence of both clades at t2 (see Results). We PeerJ reviewing PDF | (2020:03:46371:1:1:NEW 9 Jul 2020)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>5 arcmin for the Mid-Hol and LGM. The Mid-Hol and the LGM variables were based on the Community Climate System Model (CCSM) and the Model for Interdisciplinary Research on Climate (MIROC), while LIG conditions were based on Otto-Bliesner et al. (2016). We PeerJ reviewing PDF | (2020:03:46371:1:1:NEW 9 Jul 2020)Manuscript to be reviewed restricted the projection of the models by creating a buffer of 2º around the outermost occurrence records and the known distribution of B. leptopus. All SDMs were performed using MAXENT v3.4.0<ns0:ref type='bibr' target='#b30'>(Phillips et al., 2017)</ns0:ref>. To avoid model overfitting and account for the correlation between the variables and the presence of outliers encountered during data exploration, we reduced the number of variables to five. This was performed by retaining the variables with |rho| < 0.8 that contributed the most to ten cross-validated models, as shown by a Jackknife test. These variables were Bio2 = Mean Diurnal Range, Bio4 = Temperature Seasonality, Bio5 = Max Temperature of Warmest Month, Bio13 = Precipitation of Wettest Month, and Bio17 = Precipitation of Driest Quarter.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>same four lineages obtained by the ML and Bayesian analyses. The fit index was 94.96, meaning that only 5.04% of the distances in the distance matrix are not represented by the network. Most of the internal splits have bootstrap support between 68 and 100%. Divergence dating indicates that the A-B and C-D clades separated during the late Pleistocene, approximately 0.107 mya [95% confidence interval (CI) = 0.020-0.278 mya]. The divergence between A and B lineages would have occurred by the late Pleistocene (approximately 0.068 mya; 95% CI = 0.036-0.147 mya) and divergence between C and D lineages was approximately 0.065 mya (95% CI = 0.056-0.092 mya) (Fig. 3A).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Hol showed different distributions: the MIROC-based model showed a continuous distribution similar to the current conditions model, while the CCSM-based model resolved disjunct distributions for the northern region of the current distribution model (Fig.6B). Both LGM models showed clear disjunct distributions with concordance around Nahuelbuta mountain range (Fig.6C). The MIROC based model indicated a distribution to the north of the current distribution model, and the CCSM resolved a highly fragmented distribution. The model for the LIG showed a disjunct distribution for the species at the northern portion of the current distribution of B. leptopus (Fig.6D).PeerJ reviewing PDF | (2020:03:46371:1:1:NEW 9 Jul 2020) Manuscript to be reviewed Discussion Our hindcasting-based approach supports the existence of four lineages in B. leptopus (A, B, C and D; Figs. 2, 3) distributed discontinuously along the narrow, ~1000 km long Patagonian region of southern Chile. AMOVA confirmed the strongly different patterns of variation among the Batrachyla leptopus populations. In our study, most of the mtDNA genetic differences are present among localities and within lineages (Table</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Fig. 2 )</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Fig. 2). Vidal et al. (2016) point out that some B. leptopus populations from Chiloé Island and</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>extinction (Duan et al., 2016). In this context, the genetic impoverishment at the northern area of the distributional range of B. leptopus is of great concern, given a climate change scenario based on increases in temperature and aridity in central-southern Chile. Conclusions: Our study of genetic diversity throughout the geographic range of B. leptopus supports the existence of four lineages distributed along ~1000 km of southwestern Patagonia, including areas that were glaciated and unglaciated during the LGM. The two northernmost lineages are present in a region with poorly preserved forest patches, whilst the southern lineages are geographically more heterogeneous, suggesting that extensive forests would have allowed dispersion among breeding groups over time scales. Late Pleistocene divergence estimates for the first diversification of B. leptopus, which separated the North and South clades and is also supported by the SDM for the LIG, are consistent with Pleistocene isolation followed by interglacial dispersion. The ABC analyses also supported the hypothesis of two putative refugia at different times during the Pleistocene, concordant with a patchy distribution of the species during the Middle Holocene. In addition, the northern populations of B. leptopus showed the highest degree of isolation and deserve special attention: given the expected increase in temperature and aridity in central-southern Chile, and their lack of connectivity with nearby populations, they could end up disappearing.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='43,42.52,255.37,525.00,387.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Type I and Type II error rates and posterior probabilities for each scenario calculated from DIYABC.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Type 2 error rate</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot'>Drovetski & Zink, 2016).</ns0:note>
</ns0:body>
" | "Valdivia, June 23, 2020
Dr.
Nikolay Poyarkov
Academic Editor, PeerJ
REF: #2020:03:46371:0:1:REVIEW
Dear Dr. Poyarkov,
Please find enclosed the revised version of our manuscript “Phylogeographic analysis and species distribution modelling of the wood frog Batrachyla leptopus (Batrachylidae) reveal interglacial diversification in south western Patagonia” by José J. Nuñez, Elkin Y. Suárez-Villota, Camila A. Quercia, Angel P. Olivares, and Jack W. Sites.
Below you can find the suggestions made by the reviewers and our corresponding answers. The manuscript was revised, and we incorporated most of the reviewers' changes and suggestions, which significantly improved the manuscript.
We hope this revised version will make our manuscript suitable for publication in PeerJ. Thank you in advance for consideration of this manuscript, and please feel free to contact us if there is any question.
Sincerely yours,
Dr. José J. Nuñez
Universidad Austral de Chile
Casilla 567, Valdivia, Chile
Tel. 56 (63) 221483
E-mail: [email protected]
Questions and Answers to the Editor
Responses:
Thank you for submitting your manuscript to PeerJ. I have sent your paper to three expert referees for their consideration. I have now received their comments back and have read through your paper carefully myself, and I really like it. Enclosed please find the reviews of your manuscript.
R:/ Thank you. We incorporated most of the comments and changes suggested by the reviewers, including new paragraphs and more detailed explanations, particularly for Reviewer 2. In addition, we corrected some other mistakes in the original manuscript. Also, following the suggestions of the PeerJ team, the paragraph in lines 226-227 was changed because it overlapped with previous work of our team. Finally, note that I changed the address of one of the coauthors due to his new job.
The reviews are in general favourable and suggest that, subject to minor revisions, your paper could be suitable for publication. Please consider these suggestions, and especially please pay attention to the questions raised by Reviewer 2, who did a very hard work on your manuscript and provided many useful comments. All raised questions must be fully addressed.
Personally, I would appreciate if you would add thumbnails showing life photos of your model species Batrachyla leptopus to Fig. 1 and Fig. 3 - this would make the figures look more attractive for a potential reader.
R:/ A life photo was included in the Fig. 1 and Fig. 3.
Reviewer 1 (Anonymous)
Experimental design
In general, both genetic and SDM methods agree with the best practices in their fields. The only exception perhaps is the mismatch distribution, given that the other used methods (EBSPs and ABC) have been shown to be vastly superior.
R:/ We included the mismatch distribution analysis as a complement to the EBSP because small sample sizes appear to affect the EBSP analysis, as described by Grant (2015). We added a paragraph with this justification (lines 358-361).
L66: there is an extra comma before “have”
R:/ It was corrected.
L92: patchy?
R:/ We agree, it is patchy.
L202: Please provide information on the number of generations, their intervals, and parameter ESSs, as you
indicated for the BEAST analysis on line 229-231.
R:/ We included such information. Lines 278-283.
Reviewer 2 (Anonymous)
Basic reporting
This is an interesting and meaningful study focused at testing biogeographic hypotheses associated with Patagonian paleo-refuges, using as model the wood frog Batrachyla leptopus, with in-depth analyses covering phylogenetics, phylogeography, population genetics, divergence dating and ecological modelling. The tests are well-thought and the steps explained in a clear manner. Nevertheless, the conclusions are strongly based on an important bottleneck, the divergence dating analysis, which results are extrapolated for downstream ABC and niche modelling; because the authors use only a rate prior, extrapolated from a previous study, without checking for a a likely larger mitochondrial evolutionary rate, and without incorporating any fossils/biogeography as node date priors, their conclusions about the importance of past refuges should be taken with a grain of salt. Without further testing the effect of a faster evolutionary rate, no conclusions can be drawn temporaly, although spatial conclusions seem reasonable (population genetics, phylogenetic pattern).
R:/ Thank you so much to the reviewer 2 for these comments. Our answers below:
Experimental design
Phylogenetics, phylogeography, population genetics, and ecological modelling analyses are all interesting and in line with what the authors aim to test.
However, the dating analysis may bear important biases. Because rate extrapolation is used without imposing fossil/biogeographic constraints, rate biases should have been considered, as the posterior times could be strongly affected by them (due to an inverse relationship between times and rates); and because times are an important issue in this ms, I strongly suggest an upwards correction (Nabholz et al., 2009) in BEAST, to check if correlation to paleo-refuges still holds. The Irisarri 2012 study apparently did not employ such a correction either.
Therefore, the divergence dating analysis assumes too much in its current configuration, and because downstream analyses assume the posterior time ranges reported as correct (when there's the possibility that they are not due to rate underestimation), it's very important that the authors re-run Beast with an alternative rate prior embracing (likely) larger rates for mitochondrial genes (as Nabholz et al. 2009 indicate that these are larger than commonly assumed).
R:/ We really appreciate the reviewer's comments. Nabholz et al. (2009) reported an upwards correction of mitochondrial DNA mutation rates, but it was for birds and mammals. Since mitochondrial rates can be influenced by longevity, body mass, metabolic rate, lineage, and other life-history traits, we think it is not possible to apply such a correction from homeothermic (birds or mammals) to ectothermic (amphibian) taxa.
Although there is no evidence of higher mitochondrial divergence rates in amphibians, the great rate variation among mitochondrial genes and codons reported by Nabholz et al. (2009) is interesting. Because it is not possible to date any of the nodes within B. leptopus, as there are no fossils or dated biogeographic events, we decided to re-run the BEAST analyses using priors to estimate the rate of each mitochondrial gene (lines 233-236), instead of a single prior for the whole mitochondrial data set. Thus, we used as priors the Neobatrachian mutation rate of 0.291037% per million years for COI, the general mitochondrial rate of 0.379173% per million years for the other genes (d-loop and cytb markers; Irisarri et al. 2012), and a rate of 0.3741% per million years for the pomc sequences. Bayes factor analysis indicated that the LogNormal relaxed clock model received decisive support compared with the other models available in BEAST. We think that this is in line with Nabholz et al. (2009), who suggest: “Users of mtDNA as a tools for inferring divergence dates should imperatively use statistical phylogenetic methods accounting for substitution rate variation across lineages, the so-called clock-relaxed methods”.
Despite this adjustment in our dating analyses, the Batrachyla leptopus diversification times were similar to our previous estimates, with a wider 95% HPD interval. Thus, we decided to report these new results and extrapolate them to the downstream ABC and niche modelling analyses.
Further questions/suggestions:
- The authors tested for saturation, but do not report if they found it for any marker in the results. In case there's saturation, older times could be underestimated, possibly inflating the tendency to infer recent population growth even if there is none (or if there is but it's an older one). If only a subset of the markers suggest a linear relationship with the best fit model, exclude the markers with more extreme signs of saturation, or else exclude positions with largest entropy (e.g., using Tiger or the likes), and re-run.
R:/ We performed saturation tests for each marker; none of them showed significant saturation.
- Have the authors tested for clusterization by sampling tissue (i.e., swab vs. liver) in the phylogeny (or by genetic distances)? Given that there may be nuclear copies of mtDNA, or even multiple D-loop copies in a mtDNA (depending on the vertebrate group; and I couldn't find a complete Batrachyla nor Batrachylidae mtDNA genome at NCBI to testify for a single-copy D-loop in the species), it'd be important to show that there isn't such a clustered pattern of inferring an inexistent clade due to paralogy. Testing if clades are associated with tissue type shall give a hint at this.
R:/ We tested whether clades were associated with sample type (tissue versus swabs) and found no differences or differential clustering.
- Regarding population structure, please also provide Fst measures by mtDNA concatenated, and by each nuclear locus (could be a SM table). Discuss if the 3 measures disagree and, if so, how such disagreement is in line or not with current conclusions.
R:/ We performed these analyses separately. The results are reported in lines 490-494 and presented in a new Table 3.
- Do mtDNA concatenated vc nuc1 vs. nuc2 trees agree? Showing a SM figure with the 3 topologies would be enough. If they disagree, how does it impact conclusions?
R:/ We report these results as a Supplementary Figure (Fig. S1) and discuss the disagreements. As expected, the nuclear sequences were less informative than the mitochondrial ones; we included these results, as well as the AMOVA results, in the first part of the discussion (lines 342-346 and 593-613).
- Does PhyML support partitioning? If not, how did the authors estimate the tree, assuming at least some partitioning had better fit than none (as often is the case)? If not, please run a partitioned ML using either RaxML, RaxML-ng, or IQTree.
R:/ We corrected this sentence. In fact, we performed the partitioned ML analyses with Garli 2.0.
- Were the 2 different nucDNAs concatenated in a single partition, or not? Please clarify. Also, tests of partitioning do not mind if nuc and mt partitions are mixed, but in fact they shouldn't be. If so, there's the chance that *BEAST run may be incorrect. There should be at least 2 partitions (mt plus 1-2 nuc partitions), and eventually more if the best partitioned model suggests so, e.g. by codon position within nuclear genes (regarded mt+nuc partitions are avoided). One nuc partition is acceptable if there's low nuclear variation.
- Still regarding the different types of loci in Beast: was the correct ploidy number assumed for mtDNA, and for nucDNAs?
R:/ Yes, the ploidy was set according to whether the data were mtDNA or nDNA.
Validity of the findings
Because the authors base their conclusions on the importance of paleo-refuges on dates that can be biased, it is hard to judge the validity of their findings in the current ms version.
R:/ We made the necessary adjustments, following the reviewer's suggestions, to diminish this bias.
Comments for the Author
A meaningful and important ms, using relevant techniques to test paleo-refuges using an anuran as model, but which needs further sensitivity analysis of divergence dating to become more credible. I recommend major revisions, mostly due to dating analyses.
R:/ We think that we have addressed all of the reviewer's valuable comments to strengthen our analyses.
Reviewer 3 (Anonymous)
Basic reporting
The article Phylogeographic analysis and species distribution modeling of the wood frog Batrachyla leptopus (Batrachylidae) reveal interglacial diversification in south western Patagonia; It deals with a topic of particular importance to scientists interested in the evolutionary history of herpetofauna in southern South America. The particular case of the species of the Batrachyla genus, endemic to the Nothofagus forests, in southern Chile and Argentina has been of great interest by many scientists, who have revealed aspects of their natural history, characteristics, chromosomes, behaviors, larvae, etc. In this context, this article contributes significantly to a better understanding of aspects of the evolutionary history of one of its members, Batrachyla leptopus. This is a work very well documented in the extensive bibliography that exists on the matter. Furthermore, it is very well structured and therefore easy to read, even for someone who does not know molecular technologies in depth, but who, by understanding the context well, can realize that the objectives are well achieved and fully explained. For all of the above, this work deserves to be published without major corrections than those that could be derived from the very action of writing a scientific article.
Experimental design
The experimental design is appropriate for a job of this nature. The authors use three mitochondrial regions (D-loop, cyt b, and coI) and two nuclear loci (pomc and crybA1) to test their hypothesis. Indeed, taking into account that one of the questionable aspects of molecular methods is their robust statistics, this paper uses those most reliable methods, and incorporates a coalescence analysis, highly recommended nowadays as there is much confusion, due to the variability of results when a single molecular marker is used. On the other hand, the sample design incorporates specimens from the entire distribution, which makes the work very representative of the reality of B. leptopus. One aspect that could perhaps be lacking in this regard is some additional populations in the Andes Mountains, at 38 ° latitude (Curacautín) (Tohuaca National Park) and from north of Los Queules, but it does not diminish the importance of the work.
R:/ We agree with this comment. In fact, we know of and searched for other intermediate localities of B. leptopus, but we were not successful.
Validity of the findings
The results are valid since they are in line with other investigations carried out on the herpetofauna of southern South America, and have been obtained by applying an appropriate sampling design. Within the southern batrachofauna, the Batrachyla species, including Batrachyla leptopus, are part of a group of amphibians of tropical origin, and therefore their diversification may have occurred as a consequence of orographic processes subsequent to the current constitution of the continents. This is absolutely consistent with those expressed by the authors of this article. Furthermore, the historical antecedents that allow the evolution of South American amphibians to be contextualized do not provide many other additional hypotheses. On the other hand, the authors do not suggest in the text, if these four different lineages that appear in Batrachyla leptopus, present enough differentiation to be considered as possible different species. Perhaps it would be appropriate for the authors to discuss their results considering this hypothesis.
In this same line, It would be interesting to know, if the authors noticed any correspondence between lineages and some morphological distinction that suggests the presence of different taxonomic entities in Batrachyla leptopus. This question arises from the fact that during our extensive campaigns in the forests of southern Chile, we noticed some different morphs in B. leptopus, for example, some smaller specimens, the texture of their skin smoother.
R:/ Although the genetic clustering observed among the lineages of Batrachyla is supported by demographic and phylogenetic analyses, our approaches are not bound to any particular species delimitation method. In any event, the genetic distances among Batrachyla lineages were higher than those reported for other sympatric frogs (e.g., Nuñez et al., 2011; Vidal et al., 2016). Furthermore, although AMOVA analyses provide insights into population substructuring, they do not allow one to relate population structure to speciation events. Consequently, we think that our results suggest intraspecific rather than interspecific variation.
Comments for the Author
I would like to congratulate the authors of this work for their important contribution to the knowledge of the evolutionary history of an important species in the herpetofauna of southern South America. One aspect that I would like to see at work is what consequences these results could have on the number of species in the genus. Could it be suggested that these four lineages represent four different species? If the molecular analyzes have a robust support, and the results obtained are consistent with other historical antecedents, can it be suggested that within species B. leptopus there is more than one species? I would like the authors to discuss this point in some depth.
R:/ This is an interesting hypothesis, but our work was intended to evaluate hypotheses about intraspecific differentiation, and we used methods accordingly. Deriving hypotheses about different species solely from a phylogenetic tree seems too speculative to us, for the reasons indicated in the previous section. There are explicit and objective methods to evaluate species limits, but none of them were applied in our present work.
However, the authors mention that Batrachyla leptopus presents a very wide distribution (discontinuously ~ 1000 km), it is important for the reader that the figures where maps are represented have a reference bar in km.
R:/ We included a bar in Fig 1.
Do the authors have a photograph of a specimen from each of the lineages?
R:/ In fact, the skin of Batrachyla leptopus is very variable in texture, but we think it is risky to assign a particular photo to each lineage. We know that there are specimens with smoother and rougher skin, but we do not rule out that this is an ontogenetic rather than a phylogenetic or population difference. In any case, we thank the reviewer for pointing out this detail in B. leptopus.
Annotated manuscript
The reviewer has also provided an annotated manuscript as part of their review:
R:/ We included most of the comments and corrections of the reviewer. However, we think that the correct term is “disjunct”, not “disjoint” for fragmented populations.
" | Here is a paper. Please give your review comments after reading it. |
651 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Body condition is an important determinant of health, and its evaluation has practical applications for the conservation and management of mammals. We developed a noninvasive method that uses photographs to assess the body condition of free-ranging brown bears (Ursus arctos) in the Shiretoko Peninsula, Hokkaido, Japan. First, we weighed and measured 476 bears captured during 1998-2017 and calculated their body condition index (BCI) based on residuals from the regression of body mass against body length. BCI showed seasonal changes and was lower in spring and summer than in autumn. The torso height:body length ratio was strongly correlated with BCI, which suggests that it can be used as an indicator of body condition. Second, we examined the precision of photograph-based measurements using an identifiable bear in the Rusha area, a special wildlife protection area on the peninsula. A total of 220 lateral photographs of this bear were taken September 24-26, 2017, and classified according to bear posture. The torso height:body/torso length ratio was calculated with four measurement methods and compared among bear postures in the photographs. The results showed torso height:horizontal torso length (TH:HTL) to be the indicator that could be applied to photographs of the most diverse postures, and its coefficient of variation for measurements was <5%. In addition, when analyzing photographs of this bear taken from June to October during 2016-2018, TH:HTL was significantly higher in autumn than in spring/summer, which indicates that this ratio reflects seasonal changes in body condition in wild bears. Third, we calculated BCI from actual measurements of seven females captured in the Rusha area and TH:HTL from photographs of the same individuals. We found a significant positive relationship between TH:HTL and BCI, which suggests that the body condition of brown bears can be estimated with high accuracy based on photographs.</ns0:p><ns0:p>Our simple and accurate method is useful for monitoring bear body condition repeatedly</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Body condition, defined as the energetic state in an individual, especially the relative size of energy reserves such as fat and protein <ns0:ref type='bibr'>(Gosler, 1996;</ns0:ref><ns0:ref type='bibr'>Schulte-Hostedde, Millar & Hickling, 2001;</ns0:ref><ns0:ref type='bibr'>Peig & Green, 2009)</ns0:ref>, is an important determinant of health in both terrestrial and marine mammals. It serves as an indicator of food quality <ns0:ref type='bibr'>(Mahoney, Virgl & Mawhinney, 2001;</ns0:ref><ns0:ref type='bibr'>McLellan, 2011)</ns0:ref>, reproductive success <ns0:ref type='bibr'>(Noyce & Garshelis, 1994;</ns0:ref><ns0:ref type='bibr'>Guinet et al., 1998)</ns0:ref>, and survivorship <ns0:ref type='bibr'>(Young, 1976;</ns0:ref><ns0:ref type='bibr'>Gaillard et al., 2000)</ns0:ref>. Animals in good body condition generally have more energy reserves and are therefore more resilient and more likely to survive than those in poorer condition <ns0:ref type='bibr'>(Cook et al., 2004;</ns0:ref><ns0:ref type='bibr'>Clutton-Brock & Sheldon, 2010)</ns0:ref>. In females, reproductive traits such as litter mass, number of litters, neonatal mass, and breeding life-span increase with body condition <ns0:ref type='bibr'>(Samson & Huot, 1995;</ns0:ref><ns0:ref type='bibr' target='#b1'>Atkinson & Ramsay, 1995)</ns0:ref>. Therefore, evaluating body condition is of general biological interest but also has practical applications for the conservation and management of mammals.</ns0:p><ns0:p>The body condition of living mammals has been assessed with morphometric measurements <ns0:ref type='bibr'>(Guinet et al., 1998;</ns0:ref><ns0:ref type='bibr'>Cattet et al., 2002)</ns0:ref>, blood analyses <ns0:ref type='bibr'>(Hellgren, Rogers & Seal, 1993;</ns0:ref><ns0:ref type='bibr'>Gau & Case, 1999)</ns0:ref>, bioelectrical impedance <ns0:ref type='bibr'>(Farley & Robbins, 1994;</ns0:ref><ns0:ref type='bibr'>Hilderbrand, Farley & Robbins, 1998)</ns0:ref>, and ultrasound measurements of subcutaneous fat <ns0:ref type='bibr'>(Morfeld et al., 2014)</ns0:ref>.</ns0:p><ns0:p>However, these methods are unsuitable as a routine method because they require repeated capture of individuals. Applying these methods to free-ranging, large-bodied mammals is inherently difficult because the capture operation is dangerous for researchers and may affect animal behavior and survival through anesthesia and direct handling. An alternative, noninvasive evaluation method is body condition scoring (BCS). BCS is a subjective assessment of subcutaneous body fat stores based on a visual or tactile evaluation of muscle tone and key skeletal elements <ns0:ref type='bibr'>(Otto et al., 1991;</ns0:ref><ns0:ref type='bibr'>Burkholder, 2000)</ns0:ref>. Various BCS systems have been established for monitoring individual condition in companion animals (e.g., dogs and cats:</ns0:p><ns0:p>Laflamme, 2012), livestock (e.g., cattle, horses, and pigs: <ns0:ref type='bibr'>Wildman et al., 1982;</ns0:ref><ns0:ref type='bibr'>Henneke et al., 1983;</ns0:ref><ns0:ref type='bibr'>Department for Environment Food and Rural Affairs, 2004)</ns0:ref>, and also wildlife (e.g., bears, dolphins, and elephants: <ns0:ref type='bibr'>Stirling, Thiemann & Richardson, 2008;</ns0:ref><ns0:ref type='bibr'>Morfeld et al., 2014;</ns0:ref><ns0:ref type='bibr'>Joblon et al., 2015)</ns0:ref>. In addition, visual assessment criteria based on photographs have been used to evaluate relative body condition in whales. 
Photograph-based measurements of the length and width of gray whales (Eschrichtius robustus) from vertical aerial photogrammetry can reveal changes in body condition associated with fasting during winter migrations <ns0:ref type='bibr'>(Perryman & Lynn, 2002)</ns0:ref>. These studies demonstrate that it is possible to visually detect changes in body condition without capturing animals.</ns0:p><ns0:p>For killed or captured bears (Ursus spp.), a body condition index (BCI) has been established based on residuals from the regression of body mass against straight-line body length (i.e., the observed mass minus the expected mass: <ns0:ref type='bibr'>Cattet et al., 2002)</ns0:ref>. Independently of sex or age, the BCI has a strong positive relationship with true body condition, defined as the combined mass of fat and skeletal muscle relative to body size <ns0:ref type='bibr' target='#b0'>(Atkinson, Nelson & Ramsay, 1996;</ns0:ref><ns0:ref type='bibr'>Cattet et al., 2002)</ns0:ref>. The BCI has higher positive values for bears in better condition and lower negative values for those in poorer condition. In addition, predictive equations have been developed to estimate body mass and condition in bears from measurements of straight-line body length and axillary girth <ns0:ref type='bibr' target='#b2'>(Bartareau, 2017;</ns0:ref><ns0:ref type='bibr'>Moriwaki et al., 2018)</ns0:ref>. However, to clarify seasonal and annual changes in the body condition of bears, it is necessary to develop a method that can be used to monitor body condition repeatedly and continued for several years. For proper conservation and management of bear populations, it is important to develop a noninvasive method of assessing body condition in bears without capture operations.</ns0:p><ns0:p>In this study, we developed a noninvasive method of evaluating the body condition of brown bears (Ursus arctos) based on morphometric measurements obtained from photographs.</ns0:p><ns0:p>Brown bears are large omnivores that can change their diet in response to spatial and seasonal variation in food resources <ns0:ref type='bibr'>(Bojarska & Selva, 2012)</ns0:ref> and have a wide distribution throughout the Northern Hemisphere. In Japan, they occur only on Hokkaido, the northernmost island of the country (Fig. <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>). Our goal was to develop an accurate, photograph-based evaluation method that could be applied to bears in various postures. To achieve this, we took the following three steps.</ns0:p><ns0:p>First, we conducted preliminary analyses using BCIs calculated from actual measurements of killed or captured bears to obtain fundamental information on the body condition of Hokkaido brown bears. We also investigated whether the ratio of torso height to body length could be used as an indicator of body condition by examining its correlation with BCI. Second, we validated the precision of photograph-based measurements using photographs of an identifiable female.</ns0:p><ns0:p>We identified four candidate methods of measurement, including horizontal body length, Euclidean body length, polygonal-line body length, and horizontal torso length. Then, we examined which method had the largest number of applicable photographs with sufficiently small variation in measurement. We also examined the ability of our method to detect seasonal changes in body condition. 
Third, we validated the accuracy of the photograph-based measurement method by examining the correlation between BCIs calculated from actual measurements of captured individuals and photographic evaluation of the same individuals.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Study area</ns0:head><ns0:p>This study was conducted in the Shiretoko Peninsula (43°50´-44°20´ N, 144°45´-145°20´ E), Hokkaido, Japan (Fig. <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>). This peninsula has one of the largest brown bear populations worldwide <ns0:ref type='bibr'>(Hokkaido Government, 2017)</ns0:ref>, and an area from the middle to the tip of the peninsula has been designated as Shiretoko National Park and a UNESCO World Natural Heritage Site. During 1998-2017, we collected body masses and morphometrics from brown bears captured for research purposes, killed for nuisance control, or harvested from the peninsula, including the towns of Shari and Rausu (Fig. <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>). In addition, a focal survey was conducted in the Rusha area (44°12′ N, 145°12′ E; Fig. <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>), a special wildlife protection area.</ns0:p><ns0:p>Public access is not allowed without permission and there is no human residence except for one fishermen's settlement. Because the fishermen have not excluded bears from the settlement area in the last few decades, the bears have become habituated to the existence of humans.</ns0:p><ns0:p>We measured body mass (kg) and body length (cm) of killed or captured bears (Supplemental Data S1). Body mass was measured with calibrated hanging spring scales. Body length was measured with a non-stretchable tape measure as the straight-line distance from the tip of the nose to the end of the last tail vertebra while the bear was aligned laterally. In addition, we measured torso height (cm) as the distance from the lowest point of the abdomen to the spine in females ≥5 years old during 2014-2017. We also collected tissue (e.g., muscle and liver) from killed bears and blood and hair samples from captured bears for DNA extraction, which allowed us to identify individuals and their sex <ns0:ref type='bibr'>(Shimozuru et al., 2017;</ns0:ref><ns0:ref type='bibr'>Shirane et al., 2018)</ns0:ref>. Among 503 killed or captured individuals, 22 individuals were sampled more than once during our study period due to repeated capture or killing after capture; we used only the measurement taken at the greatest age in the following analyses.</ns0:p><ns0:p>We estimated the age in years of most bears captured or killed by counting the cementum annuli of the teeth <ns0:ref type='bibr'>(Yoneda, 1976)</ns0:ref>. For some individuals, we could not determine the exact age because many cementum layers had developed in old individuals or because of the poor quality of tooth samples.</ns0:p><ns0:p>Individuals whose age range could only be estimated were excluded from the growth curve analyses but were included for BCI and subsequent analyses if the growth curve results (detailed below) allowed their classification into an age class. For example, females ≥5 years old were excluded from growth curve analyses but were used as adults for subsequent analyses.</ns0:p></ns0:div>
<ns0:div><ns0:head>Growth curve of body length</ns0:head><ns0:p>To estimate the age at which the growth of body length was completed, growth pattern in body length was examined using a von Bertalanffy curve as previously described in bears <ns0:ref type='bibr'>(Kingsley, Nagy & Reynolds, 1988;</ns0:ref><ns0:ref type='bibr'>Derocher & Stirling, 1998;</ns0:ref><ns0:ref type='bibr'>Derocher & Wiig, 2002;</ns0:ref><ns0:ref type='bibr' target='#b3'>Bartareau, Cluff & Larter, 2011)</ns0:ref>. The von Bertalanffy size-at-age equation was used in the form</ns0:p><ns0:formula xml:id='formula_0'>A_t = A_∞ (1 - e^(-K(t - T)))</ns0:formula><ns0:p>where A_t is body length (in cm) at age t, A_∞ is asymptotic body length (in cm), K is a size growth rate constant (year^-1), and T is a fitting constant (extrapolated age at zero size; in years). We conducted F tests to determine whether the parameters of the von Bertalanffy growth equation differed significantly by sex. We conducted analyses using FSA package version 0.8.30 (Ogle, Wheeler & Dinno, 2020) and nlstools package version 1.0-2 <ns0:ref type='bibr'>(Baty et al., 2015)</ns0:ref> in R (R Core Team, 2019). According to the age reaching 95% of the asymptotic body lengths obtained from this analysis (detailed below), bears were assigned to three age classes for each sex: cubs (0-1 years old), subadults (age 1-4 years and 1-7 years for females and males, respectively), and adults (age ≥5 years and ≥8 years for females and males, respectively).</ns0:p></ns0:div>
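As a rough illustration of this curve-fitting step (the study itself used the FSA and nlstools packages), the von Bertalanffy model can also be fitted with base R's nls(); the data frame name, its columns (sex, age, body_length), and the starting values below are placeholders rather than values from the study.

```r
# Sketch: fit the von Bertalanffy size-at-age curve A_t = A_inf * (1 - exp(-K * (t - T0)))
# 'bears' is a hypothetical data frame: age (years, from cementum annuli),
# body_length (straight-line body length, cm), sex ("F"/"M")
vb <- function(t, Ainf, K, T0) Ainf * (1 - exp(-K * (t - T0)))

fit_f <- nls(body_length ~ vb(age, Ainf, K, T0),
             data = subset(bears, sex == "F"),
             start = list(Ainf = 150, K = 0.4, T0 = -1))  # starting values are rough guesses
summary(fit_f)  # Ainf = asymptotic length, K = growth-rate constant, T0 = fitting constant

# Age at which 95% of the asymptotic body length is reached:
# 0.95 = 1 - exp(-K * (t - T0))  =>  t = T0 - log(0.05) / K
cf <- coef(fit_f)
age95_f <- unname(cf["T0"] - log(0.05) / cf["K"])
```

Sex differences in the fitted parameters can then be assessed by comparing such a sex-specific fit against a pooled model with an F test, as described above.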
<ns0:div><ns0:head>BCI of killed or captured bears</ns0:head><ns0:p>We calculated BCI as previously described in <ns0:ref type='bibr'>Cattet et al. (2002)</ns0:ref>. Specifically, body mass and length values were transformed to natural logarithms and a least-squares linear regression analysis was conducted to describe the relationship between the ln-transformed values. The standardized residuals of this linear regression were used as BCI. In addition, as a preliminary experiment for the evaluation of body condition using photographs, we calculated the ratio of torso height to body length (TH:BL) using actual measurement data.</ns0:p><ns0:p>PeerJ reviewing PDF | (2020:06:49702:1:1:NEW 20 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Statistical methods. -To determine if the BCI was independent of body size, we investigated the correlation between BCI and body length that is an indicator of body size <ns0:ref type='bibr'>(Mahoney, Virgl & Mawhinney, 2001;</ns0:ref><ns0:ref type='bibr'>Cattet et al., 2002)</ns0:ref>. BCI was compared among seasons and age-sex classes using two-way analysis of variance (ANOVA). We used Tukey multiple comparisons <ns0:ref type='bibr'>(Tukey, 1977)</ns0:ref> to evaluate differences between the mean values of each comparison.</ns0:p><ns0:p>Based on major changes in diet <ns0:ref type='bibr'>(Ohdachi & Aoi, 1987)</ns0:ref>, we divided the sampling period into three seasons: spring (April to June; main diet of grass), summer (July and August; main diet of grass and ants), and autumn (September to November; main diet of berries and acorns). In addition, we linearly regressed BCI on the TH:BL of the same individuals and calculated the correlation coefficient. We also used correlation analysis between TH:BL and body length to investigate the effects of body size. We conducted all statistical analyses in R (R Core Team, 2019).</ns0:p></ns0:div>
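For readers who want to reproduce the index, a minimal sketch of the BCI calculation (standardized residuals of ln body mass on ln body length, following Cattet et al. 2002) and of the comparisons described above is given below; the data frame and column names are assumptions for illustration only.

```r
# Sketch: body condition index (BCI) as standardized residuals of ln(mass) ~ ln(length)
# 'bears' is a hypothetical data frame with columns body_mass (kg), body_length (cm),
# torso_height (cm), and season and age_sex_class (coded as factors)
fit_bci   <- lm(log(body_mass) ~ log(body_length), data = bears)
bears$BCI <- rstandard(fit_bci)                 # standardized residuals = BCI

cor.test(bears$BCI, bears$body_length)          # check that BCI is independent of body size

bears$TH_BL <- bears$torso_height / bears$body_length
cor.test(bears$TH_BL, bears$BCI)                # TH:BL vs BCI (females with torso height data)

fit_aov <- aov(BCI ~ season * age_sex_class, data = bears)  # two-way ANOVA with interaction
summary(fit_aov)
TukeyHSD(fit_aov)                               # Tukey multiple comparisons
```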
<ns0:div><ns0:head>Obtaining and filtering of photographs</ns0:head><ns0:p>Periodic surveys (≥1 day/2 weeks) have been conducted since 2011 in the Rusha area, mainly for monitoring the reproductive status of identifiable females <ns0:ref type='bibr'>(Shimozuru et al., 2017)</ns0:ref>.</ns0:p><ns0:p>This area is a narrow estuarine coast stretching south to north for approximately 3 km. Field teams patrolled the area by car and waited for bears to emerge from the vegetation on the mountainside. When bears appeared, we followed individuals, maintaining a distance of about 20-100 m. Individual bears were identified by field staff according to their appearance as described in <ns0:ref type='bibr'>Shimozuru et al. (2017)</ns0:ref>, and close-up photographs were taken from multiple angles with a digital, single-lens reflex camera (Nikon D800, NIKON Co., Tokyo, Japan; or Canon EOS 5D, Canon Inc., Tokyo, Japan).</ns0:p><ns0:p>PeerJ reviewing PDF | (2020:06:49702:1:1:NEW 20 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>For each survey in the Rusha area, lateral photographs of each individual bear were selected and graded based on several attributes: camera focus, camera tilt (vertical), camera angle (horizontal), body/torso height measurability, and body/torso length measurability for photography; and degree of body arch (vertical), straightness of body (horizontal), degree of neck flexing (vertical), and degree of neck bending (horizontal) for bear posture (Table <ns0:ref type='table' target='#tab_3'>S1</ns0:ref>, Fig. <ns0:ref type='figure' target='#fig_3'>S1</ns0:ref>). Each photograph was given a score of 1 (good quality), 2 (medium quality), or 3 (poor quality) for each attribute. Photographs that were given a score of 3 for any attribute were removed from further analyses.</ns0:p></ns0:div>
<ns0:div><ns0:head>Morphometric measurements from photographs</ns0:head><ns0:p>We used ImageJ version 1.52a <ns0:ref type='bibr'>(Schneider, Rasband & Eliceiri, 2012)</ns0:ref> to extract morphometric measurements from lateral photographs of bears. We first adjusted the angle of the photographs according to the ground surface, then measured the torso height in pixels (TH) as the distance perpendicular to the ground from the lowest point of the abdomen to the highest point of the waist (Fig. <ns0:ref type='figure'>2</ns0:ref>). Length measurements (in pixels) included the following four methods: the horizontal straight-line body length (HBL, Fig. <ns0:ref type='figure'>2</ns0:ref>) was the straight-line distance from the tip of the nose to the base of the tail; the Euclidean straight-line body length (EBL, Fig. <ns0:ref type='figure'>2</ns0:ref>) was the Euclidean distance from the base of the tail to the tip of the nose; the polygonal-line body length (PBL, Fig. <ns0:ref type='figure'>2</ns0:ref>) was the sum of the distance from the base of the tail to the highest part of the shoulder parallel to the ground surface, from that point to the base of the ear, and from that point to the tip of the nose; and the horizontal straight-line torso length (HTL, Fig. <ns0:ref type='figure'>2</ns0:ref>) was the straight-line distance from the base of the tail to the highest part of the shoulder parallel to the ground.</ns0:p><ns0:p>For all measurements, any area that could be clearly judged to be only fur was excluded from the measurement range.</ns0:p></ns0:div>
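Although the measurements in this study were taken interactively in ImageJ, the same quantities can be computed from digitized landmark coordinates. The sketch below is illustrative only: the landmark names are invented for this example, pixel x is assumed to increase to the right and y downward, and the photograph is assumed to be already rotated so that the ground is horizontal.

```r
# Sketch: photograph-based lengths (all in pixels) from landmark coordinates c(x, y)
# 'pts' is a hypothetical named list of landmarks digitized on one lateral photograph:
#   nose, ear_base, shoulder_top, tail_base, abdomen_low, waist_top
euclid <- function(a, b) sqrt(sum((a - b)^2))

TH  <- abs(pts$waist_top[2]  - pts$abdomen_low[2])       # torso height (vertical distance)
HBL <- abs(pts$nose[1]       - pts$tail_base[1])         # horizontal body length
EBL <- euclid(pts$nose, pts$tail_base)                   # Euclidean body length
PBL <- abs(pts$tail_base[1]  - pts$shoulder_top[1]) +    # polygonal-line body length
       euclid(pts$shoulder_top, pts$ear_base) +
       euclid(pts$ear_base, pts$nose)
HTL <- abs(pts$tail_base[1]  - pts$shoulder_top[1])      # horizontal torso length

c(TH_HBL = TH / HBL, TH_EBL = TH / EBL, TH_PBL = TH / PBL, TH_HTL = TH / HTL)
```

Because only ratios of two pixel distances within the same photograph are used, no conversion from pixels to centimetres is required.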
<ns0:div><ns0:head>Precision of measurements from photographs</ns0:head><ns0:p>To examine the precision of each photograph-based measurement method and the effects of bear posture, we used photographs of one bear (bear ID: HC) that was monitored routinely in the Rusha area during 2016-2018. We classified photographs according to bear posture (Table <ns0:ref type='table' target='#tab_3'>S1</ns0:ref> and Fig. <ns0:ref type='figure' target='#fig_3'>S1</ns0:ref>): photographs that had a score of 1 for all attributes were assigned to 'Good', those with a score of 2 for body straightness only were assigned to 'BS', those with a score of 2 for neck flexing only were assigned to 'NF', and those with a score of 2 for neck lateral bending only were assigned to 'NB'. Photographs that were not assigned to any category were excluded from these analyses. First, to determine the number of measurements sufficient to reduce measurement error, we assessed measurement precision within photographs by repeatedly measuring (50 times) the body morphometrics from the best photograph taken on September 25, 2017, and assigned to the 'Good' category. From these measurements, the coefficients of variation (CVs) for TH, HBL, EBL, PBL, HTL, and the ratio of TH to body/torso length were calculated. In addition, by considering the standard deviation obtained from the 50 measurements as the population standard deviation, we calculated the measurement error at a given number of measurements. We ultimately adopted the minimum number of measurements for which the measurement error had a value that did not affect the second decimal place (i.e., <0.0025). In the following analyses, TH and body/torso length were measured three times, and the TH:body/torso length ratio was calculated from the respective average values according to our results (detailed below).</ns0:p><ns0:p>PeerJ reviewing PDF | (2020:06:49702:1:1:NEW 20 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Second, we assessed measurement precision between bear postures (differences between repeated measures of the same individual taken from photographs with different postures) by taking measurements from photographs in different posture categories. To eliminate the effects of seasonal changes in body condition, we restricted these analyses to photographs taken September 24-26, 2017. The TH:body/torso length ratio was compared among the posture categories for each measurement method with one-way ANOVA. We used Tukey multiple comparisons <ns0:ref type='bibr'>(Tukey, 1977)</ns0:ref> to evaluate differences between the mean values of different categories. Then we calculated the CV of each method using all of the photographs applicable to the method to evaluate the measurement precision of each method. We compared CVs among the four methods using an asymptotic test <ns0:ref type='bibr'>(Feltz & Miller, 1996)</ns0:ref>. From these results, we adopted the method that could be applied to photographs of the most diverse postures while maintaining a sufficiently high measurement precision between photographs (CV < 5%). In accordance with these results (detailed below), we used TH:HTL as an indicator of body condition in the following analyses. Third, to examine whether TH:HTL reflected seasonal changes in body condition, we used photographs taken between late June and early October during 2016-2018. 
For each half-month, the best two or more photographs were selected and the median TH:HTL obtained from these photographs was considered the evaluation value for that half-month. We compared TH:HTL among half-months using one-way ANOVA and used Tukey multiple comparisons <ns0:ref type='bibr'>(Tukey, 1977)</ns0:ref> to evaluate differences between the mean values of each half-month. We conducted statistical analyses using Microsoft Excel® (Microsoft Corporation, 2016) or R (R Core Team, 2019).</ns0:p></ns0:div>
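The choice of three repeat measurements described above can be illustrated with a short calculation: treating the standard deviation of the 50 repeated ratio measurements as the population standard deviation, the expected error of a mean of n repeats is SD/sqrt(n), and the smallest n with an error below 0.0025 is retained. The SD value below is a placeholder, not the value obtained in the study.

```r
# Sketch: minimum number of repeat measurements so that the SE of the mean ratio < 0.0025
sd_ratio <- 0.004                         # placeholder: SD of the ratio over 50 repeats of one photo
se_at_n  <- function(n) sd_ratio / sqrt(n)

data.frame(n = 1:6, SE = se_at_n(1:6))    # expected error for 1-6 repeat measurements
n_min <- which(se_at_n(1:50) < 0.0025)[1] # smallest n meeting the criterion
n_min
```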
<ns0:div><ns0:head>Accuracy of measurements from photographs</ns0:head><ns0:p>We examined the accuracy of photograph-based measurement methods using actual measurement data for seven females (≥5 years old) captured in the Rusha area (bear IDs: BE, DR, GI, KR, LI, RI, and WK). We collected photographs of these individuals from within 3 days before and after the days the individuals were captured. After filtering the photographs, we measured TH and HTL and calculated the TH:HTL ratio using two or more of the best photographs. We also calculated BCI using the body mass and length measured at the time of capture.</ns0:p><ns0:p>Statistical methods. -We linearly regressed BCI on the TH:HTL ratio and calculated the correlation coefficient. We conducted statistical analyses using Microsoft Excel® (Microsoft Corporation, 2016).</ns0:p></ns0:div>
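A minimal sketch of this validation step (regressing capture-based BCI on the photograph-based TH:HTL for the seven females) is shown below; the data frame 'valid' and its columns are hypothetical.

```r
# Sketch: accuracy check, BCI from capture data vs TH:HTL from photographs
# 'valid' is a hypothetical data frame with one row per female: columns BCI and TH_HTL
cor.test(valid$TH_HTL, valid$BCI)    # Pearson correlation coefficient and p value
fit_val <- lm(BCI ~ TH_HTL, data = valid)
summary(fit_val)                     # slope/intercept of the photograph-to-BCI calibration
```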
<ns0:div><ns0:head>Results</ns0:head><ns0:p>There was no correlation between BCI and body length, which indicates that BCI was independent of body size (Fig. <ns0:ref type='figure'>S2</ns0:ref>).</ns0:p><ns0:p>An ANOVA of BCI showed that BCI varied significantly by season (F 2,459 = 13.26, p < 0.001; Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref>, Fig. <ns0:ref type='figure'>5</ns0:ref>), with bears sampled in spring and summer having lower BCI than bears sampled in autumn (both p < 0.001). Differences among age-sex classes were also significant (F 5,459 = 4.20, p < 0.001): Adult males showed higher BCI than adult females (p = 0.002), subadult females (p < 0.001), and subadult males (p = 0.003), whereas BCI did not differ among other age-sex classes (p = 0.35-0.99). The interaction between season and age-sex class was not significant (F 9,459 = 0.46, p = 0.90). We obtained measurements of torso height from 23 adult females. A positive correlation was found between the TH:BL ratio and BCI (r = 0.81, p < 0.001; Fig. <ns0:ref type='figure'>6</ns0:ref>, Data S1). There was no correlation between body length and TH:BL (r = -0.068, p = 0.73), which indicates that TH:BL was independent of body size (Fig. <ns0:ref type='figure'>S3</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Precision of measurements from photographs</ns0:head><ns0:p>A total of 220 photographs of the same bear (bear ID: HC) were taken September 24-26, 2017. After filtering based on photographic conditions and the body arch of the bear (Table <ns0:ref type='table' target='#tab_3'>S1</ns0:ref> and Fig. <ns0:ref type='figure' target='#fig_3'>S1</ns0:ref>), 101 photographs remained. Of these photographs, 15 were assigned to 'Good,' 9 to 'BS,' 10 to 'NF,' and 9 to 'NB.' Based on 50 repeat measurements of the best photograph in the 'Good' category, the CV in measurement error within photographs was estimated to be 0.29% for torso height and 0.27%, 0.29%, 0.26%, and 0.45% for HBL, EBL, PBL, and HTL, respectively. For all measurement methods, we reduced the measurement error of the ratio of height to length to less than ±0.0025 by measuring height and body/torso length ≥3 times (Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref>). The torso height:body/torso length ratio differed among the posture categories for all measurement methods (p < 0.001 for TH:HBL and TH:EBL, p = 0.005 for TH:PBL, and p = 0.002 for TH:HTL, Table <ns0:ref type='table' target='#tab_7'>4</ns0:ref>, Data S2). TH:HBL and TH:EBL obtained from photographs in the 'BS,' 'NF,' and 'NB' categories differed significantly from the results obtained from photographs in the 'Good' category (Table <ns0:ref type='table' target='#tab_7'>4</ns0:ref>). TH:PBL measured using 'BS' and 'NB' photographs were different from those of 'Good' photographs (Table <ns0:ref type='table' target='#tab_7'>4</ns0:ref>). TH:HTL differed from 'Good' photographs only when we used 'BS' photographs (Table <ns0:ref type='table' target='#tab_7'>4</ns0:ref>). When we used all photographs in each category that did not differ from 'Good' for each method, the CV was <5% for all methods and did not differ among methods (p = 0.067): 2.47% in TH:HBL (photo n = 15), 2.19% in TH:EBL (n = 15), 3.18% in TH:PBL (n = 25), and 3.93% in TH:HTL (n = 34). Given these results, TH:HTL was adopted as the measurement method with both the largest number of applicable photographs and a CV < 5% (i.e., high measurement precision between photographs).</ns0:p><ns0:p>PeerJ reviewing PDF | (2020:06:49702:1:1:NEW 20 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed By calculating TH:HTL using photographs of the same bear (bear ID: HC) taken from late June to early October during 2016-2018, we determined that TH:HTL reached its lowest in late August (0.567 ± 0.012; mean ± SE) and its highest in early October (0.714 ± 0.015, Fig. <ns0:ref type='figure'>7</ns0:ref>).</ns0:p><ns0:p>TH:HTL varied significantly among half-months (F 7,16 = 18.41, p < 0.001) and was lower in early August than in late June (p = 0.013), early July (p = 0.007), late July (p = 0.012), early September (p < 0.001), late September (p < 0.001), or early October (p < 0.001).</ns0:p></ns0:div>
<ns0:div><ns0:head>Accuracy of measurements from photographs</ns0:head><ns0:p>We captured seven adult females in the Rusha area during 2014-2016 and took photographs of each individual within 3 days before and after each capture date (Table <ns0:ref type='table' target='#tab_4'>S2</ns0:ref>).</ns0:p><ns0:p>There was a positive correlation between BCI calculated from actual morphometric measurements and TH:HTL calculated from photographs (r = 0.78, p = 0.041; Fig. <ns0:ref type='figure'>8</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>We have developed a new method for visually assessing the body condition of adult female brown bears using photographs. The evaluation method consists of filtering photographs based on photograph conditions and bear posture and using photograph-based measurements of torso height and horizontal torso length in pixels to calculate the TH:HTL ratio. The significant positive relationship between TH:HTL calculated from photographs and BCI calculated from Manuscript to be reviewed 2000) and percent body fat <ns0:ref type='bibr'>(McLellan, 2011)</ns0:ref>. This study is the first to propose a photographbased method of evaluating bear body condition that is accurate and reliable.</ns0:p><ns0:p>The most versatile photograph-based measurement method that could be applied to bears with various postures was the measurement not of body length but of torso length. In right whales (Eubalaena sp.) and gray whales, body condition has been evaluated with high precision and accuracy with aerial vehicle photogrammetry by selecting photographs under strict conditions based on the whale's posture <ns0:ref type='bibr'>(Perryman & Lynn, 2002;</ns0:ref><ns0:ref type='bibr'>Christiansen et al., 2018)</ns0:ref>.</ns0:p><ns0:p>However, it is not easy to collect a large number of good-quality photographs of brown bears inhabiting forests that are suitable for measurement. In fact, of the 220 photographs taken to confirm the precision of photograph-based measurement methods in this study, only 15 (6.8%)</ns0:p><ns0:p>were classified into the 'Good' category. Therefore, to establish a useful method of assessing body condition, it was necessary to find a method that had high applicability as well as high precision and accuracy. Although the body length of killed or captured brown bears is generally measured as the distance from the tip of the nose to the end of the last tail vertebra <ns0:ref type='bibr'>(Blanchard, 1987)</ns0:ref>, in the present study all methods that included the tip of the nose in the photograph-based measurement range (i.e., HBL, EBL, and PBL) were affected by the degree of neck flexing and neck lateral bending. However, the torso length (i.e., HTL) could be measured without being affected by the condition of the neck as long as the condition of body straightness was satisfied. TH:HTL declined from June to August and increased thereafter until the end of the field survey in early October, which suggests that bears were gaining fat over this period. The period when TH:HTL was lowest (i.e., August) coincides with the time when most cub disappearances occur in the Rusha area <ns0:ref type='bibr'>(Shimozuru et al., 2017)</ns0:ref>, which indicates that poor nutrition in the summer may cause cub mortality. The seasonal changes in TH:HTL were partly consistent with</ns0:p><ns0:p>PeerJ reviewing PDF | (2020:06:49702:1:1:NEW 20 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>BCIs calculated from killed bears, except that TH:HTL increased drastically in September.</ns0:p><ns0:p>Because seasonal changes in TH:HTL were examined in only one individual in this study, it is necessary to examine how TH:HTL changes seasonally in other living bears. One factor leading to the difference between seasonal change patterns in TH:HTL and BCI may be differences in the food environment between the Rusha area and other areas. 
Acorns (Quercus crispula), which contain large quantities of carbohydrates and fats, are a major food source throughout Hokkaido during September-November <ns0:ref type='bibr'>(Ohdachi & Aoi, 1987;</ns0:ref><ns0:ref type='bibr'>Sato, Mano & Takatsuki, 2005)</ns0:ref>. In addition, the Rusha area is considered to be a natural 'ecocenter', defined by Craighead, Sumner & Mitchell (1995) as an area where highly nutritional food is concentrated during a certain part of the year, and many bears are present in this area to obtain these resources, in particular salmonid fish, from late August <ns0:ref type='bibr'>(Yamanaka & Aoi, 1988;</ns0:ref><ns0:ref type='bibr'>Shimozuru et al., 2017)</ns0:ref>. Therefore, bears in the Rusha area can consume higher-energy foods from late summer to autumn, which may cause their TH:HTL to increase more rapidly than the BCI of bears killed in other areas.</ns0:p><ns0:p>Another possible explanation for the difference in seasonal change patterns of body condition is that most of the actual measurements were collected from bears killed for nuisance control.</ns0:p><ns0:p>Throughout the lower part of the peninsula, vast agricultural farms produce mainly dent corn and sugar beets. These farms may act as an attractive sink because of the availability of humanderived foods, which lead to human-caused bear deaths <ns0:ref type='bibr'>(Delibes, Gaona & Ferreras, 2001;</ns0:ref><ns0:ref type='bibr'>Sato et al., 2011)</ns0:ref>. Therefore, there is a possibility that bears killed before September included those that had emerged into farmland or human residential areas to obtain anthropogenic foods to compensate for poor body condition. Our results suggest that including body condition data for living bears will improve estimations of seasonal and long-term trends in body condition and thus provide better estimates of the health of the bear population.</ns0:p><ns0:p>PeerJ reviewing PDF | (2020:06:49702:1:1:NEW 20 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>It is important to determine whether the method established using adult females in this study can be extended to other age-sex classes, other bear populations, and other bear species.</ns0:p><ns0:p>Differences in body condition among age-sex classes should be taken into consideration. Our results showed that BCIs calculated from actual measurements were higher in adult males than in other age-sex classes. Therefore, relative changes in TH:HTL need to be examined by age-sex class. This study also showed no interaction between age-sex classes and seasons for BCI, which indicates that any age-sex class would show similar seasonal changes in body condition.</ns0:p><ns0:p>However, it is necessary to investigate further whether the TH:HTL of other age-sex classes is able to show the seasonal changes that can be detected in adult females. Another consideration is differences in growth patterns between populations. Asymptotic body length (cm) was smaller in the Shiretoko Peninsula, 145.07 ± 1.48 and 179.47 ± 2.39 for females and males, respectively, than in two previously studied brown bear populations in northern <ns0:ref type='bibr'>Canada (171.55 ± 1.15 and</ns0:ref><ns0:ref type='bibr'>197.05 ± 0.69, Bartareau et al. 2011) and</ns0:ref><ns0:ref type='bibr'>Alaska (166.10-194.08 and</ns0:ref><ns0:ref type='bibr'>190.72-206.36, Hilderbrand et al., 2018)</ns0:ref>. 
Therefore, when using our photograph-based method to evaluate body condition in other populations, it is necessary to select target individuals depending on the age of maturity in each population.</ns0:p><ns0:p>Because the equipment needed to weigh large-bodied animals is often inadequate or unavailable in the field, it is more difficult to directly measure the body mass of brown bears than it is to take other morphometric measurements. The TH:BL ratio measured from killed or captured bears in this study was strongly correlated with BCI, which suggests that TH:BL, as well as axillary girth, which allows us to estimate body mass <ns0:ref type='bibr'>(Cattet, 1990;</ns0:ref><ns0:ref type='bibr'>Derocher & Wiig, 2002;</ns0:ref><ns0:ref type='bibr'>Cattet & Obbard, 2005;</ns0:ref><ns0:ref type='bibr' target='#b2'>Bartareau, 2017;</ns0:ref><ns0:ref type='bibr'>Moriwaki et al., 2018)</ns0:ref>, can be considered a useful indicator of body condition in captured bears without direct measurement of body mass.</ns0:p><ns0:p>In mice, pelvic circumference is considered a potential predictor of fat content <ns0:ref type='bibr'>(Labocha, Schutz & Hayes, 2014)</ns0:ref>. In addition, abdominal girth has been widely used in measurements of humans (e.g., as part of calculating body mass index). Although torso height is a nonstandard morphometric measurement in bear studies, such additional data may make it possible to improve predictions of body condition. Furthermore, using our photograph-based method, we can overcome the technical and financial difficulties of repeated capture and can conduct periodic assessments of body condition. A noninvasive evaluation method, BCS has been previously described for polar bears (Ursus maritimus) <ns0:ref type='bibr'>(Stirling, Thiemann & Richardson, 2008)</ns0:ref>. However, BCS is a subjective assessment system and has the disadvantage of potentially missing small changes because it uses a scale from 1 to 5. Using morphometric measurements from photographs, our method makes it possible to conduct objective and quantitative visual assessments of body condition and allows researchers to identify small fluctuations in body condition. In this study, we were able to obtain usable photographs by conducting a survey in the Rusha area, where we could photograph bears easily and safely. If automated trail cameras were installed to collect bear photographs, our noninvasive assessment method of body condition could be used widely in various locations.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>We developed a noninvasive method that uses photographs to assess the body condition of free-ranging brown bears and validated its accuracy against actual measurements of captured bears in the Shiretoko Peninsula, Hokkaido, Japan. Because our method is simple and applicable to photographs of bears in various postures, it can be widely applied and thus is useful for monitoring the body condition of brown bears repeatedly over the years. Using photograph-based evaluation will assist bear researchers in further investigating relationships among body condition, food habit, and reproductive success, which contribute to the conservation and management of brown bears.</ns0:p><ns0:note type='other'>Figure 2: Four candidate methods of measurement to evaluate the body condition of brown bears in the Shiretoko Peninsula, Hokkaido, Japan.</ns0:note><ns0:note type='other'>Figure 3: Body length at age for 174 female (○) and 258 male (•) brown bears in the Shiretoko Peninsula, Hokkaido, Japan. Fitted lines represent the von Bertalanffy growth curve for females (dashed line) and males (solid line).</ns0:note><ns0:note type='other'>Mean (± SD) ratio of torso height to body/torso length obtained from photographs of an adult female brown bear (bear-ID: HC) in the Rusha area of the Shiretoko Peninsula, Hokkaido, Japan. P values are based on comparisons of mean ratios from the 'Good' category versus other categories for each measurement method with Tukey multiple comparisons. Bold characters indicate significant differences. The 'Good' category contained photographs with a score of 1 for all attributes, 'BS' had a score of 2 for body straightness only, 'NF' had a score of 2 for neck flexing only, and 'NB' had a score of 2 for neck lateral bending only.</ns0:note></ns0:div>
In accordance with these results, 476 individuals, including those with known age ranges, were classified into age classes and used in the subsequent analyses: 8 females and 19 males were cubs, 105 females (1-4 years) and 211 males (1-7 years) were subadults, and 92 females ≥5 years old and 41 males ≥8 years old were adults.BCI of killed or captured bearsNatural logarithmic transformation of the body mass and length data resulted in a linear relationship between mass and length as follows: ln body mass = 3.04 • ln body length -10.40(R 2 = 0.94, residual standard deviation = 0.19, Fig.4, Data S1). To facilitate estimation of BCI for brown bears, we developed the following model: BCI = (ln body mass -3.04 ln body length + 10.40)/0.19. There was no correlation between body length and BCI (r = 0.037, p = 0.39),</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>(A) Horizontal body length (HBL). (B) Euclidean body length (EBL). (C) Polygonal-line body length (PBL). (D) Horizontal torso length (HTL). Photo credit: Yuri Shirane.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Mean (± SD) ratio of torso height to body/torso length obtained from photographs of an adult female brown bear (bear-ID: HC) in the Rusha area of the Shiretoko Peninsula, Hokkaido, Japan.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='34,42.52,250.12,525.00,440.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='36,42.52,70.87,525.00,447.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='37,42.52,224.62,525.00,446.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='38,42.52,251.24,525.00,447.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='39,42.52,270.37,525.00,327.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='40,42.52,251.24,525.00,447.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>weighed and measured 503 different individuals: 9 females from the Rusha area during 2014-2016 and 494 individuals (201 females and 293 males) from other parts of the Shiretoko Peninsula during 1998-2017. Among these, we assigned an age (in years) to 432 individuals (174 females and 258 males) and an age range to 56 individuals. Body length growth curves: von Bertalanffy curves were successfully fitted to body length data for the 432 individuals with age (in years) assignments (Fig. 3, Table 1, Data S1). The growth curves differed significantly by sex (F 3, 426 = 76.63, p < 0.001). Females had achieved 95% of their asymptotic body length at 4.6 years of age, whereas males took 7.6 years to reach the same</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1 : Parameter estimates (± SE) for von Bertalanffy size-at-age curves for the body lengths of 432 brown bears in the Shiretoko Peninsula, Hokkaido, Japan.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>A ∞ is the asymptotic body length, K is the size growth constant, and T is the theoretical age at which the animal would have size 0.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Sex</ns0:cell><ns0:cell>A ∞ (cm)</ns0:cell><ns0:cell>K (year -1 )</ns0:cell><ns0:cell>T (years)</ns0:cell><ns0:cell>n</ns0:cell></ns0:row><ns0:row><ns0:cell>Female</ns0:cell><ns0:cell>145.07 ± 1.48</ns0:cell><ns0:cell>0.51 ± 0.04</ns0:cell><ns0:cell>-1.28 ± 0.16</ns0:cell><ns0:cell>174</ns0:cell></ns0:row><ns0:row><ns0:cell>Male</ns0:cell><ns0:cell>179.47 ± 2.39</ns0:cell><ns0:cell>0.32 ± 0.02</ns0:cell><ns0:cell>-1.73 ± 0.14</ns0:cell><ns0:cell>257</ns0:cell></ns0:row></ns0:table></ns0:figure>
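Assuming the standard von Bertalanffy size-at-age form L(t) = A∞ (1 − exp(−K (t − T))), which is consistent with the parameter definitions given in the Table 1 caption, a short Python sketch using the female estimates reproduces the statement in the Results that females reach about 95% of asymptotic length at 4.6 years. The function name is our illustrative choice, not part of the study's analysis code.

import math

def von_bertalanffy_length(age_years, a_inf, k, t0):
    # Standard von Bertalanffy size-at-age curve: L(t) = A_inf * (1 - exp(-K * (t - T)))
    return a_inf * (1.0 - math.exp(-k * (age_years - t0)))

# Female parameters from Table 1: A_inf = 145.07 cm, K = 0.51 per year, T = -1.28 years
print(round(von_bertalanffy_length(4.6, 145.07, 0.51, -1.28), 1))  # ~137.8 cm, i.e. ~95% of 145.07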
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 : Mean body condition index (BCI) and body weight of brown bears in six age-sex classes captured and measured in the Shiretoko Peninsula, Hokkaido, Japan, during 1998-2017.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Spring is April-June, summer is July and August, and autumn is September-November.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Spring</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>Summer</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>Autumn</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell>BCI</ns0:cell><ns0:cell>Weight</ns0:cell><ns0:cell>n</ns0:cell><ns0:cell>BCI</ns0:cell><ns0:cell>Weigh</ns0:cell><ns0:cell>n</ns0:cell><ns0:cell>BCI</ns0:cell><ns0:cell>Weight</ns0:cell><ns0:cell>n</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>(kg)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>t (kg)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>(kg)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Female</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Adult</ns0:cell><ns0:cell>-0.39 ±</ns0:cell><ns0:cell>98.5 ±</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>-0.18 ±</ns0:cell><ns0:cell>101.4</ns0:cell><ns0:cell>35</ns0:cell><ns0:cell>0.20 ±</ns0:cell><ns0:cell>116.2 ±</ns0:cell><ns0:cell>43</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>0.26</ns0:cell><ns0:cell>4.8</ns0:cell><ns0:cell /><ns0:cell>0.12</ns0:cell><ns0:cell>± 3.6</ns0:cell><ns0:cell /><ns0:cell>0.14</ns0:cell><ns0:cell>4.1</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Subadult</ns0:cell><ns0:cell>-0.39 ±</ns0:cell><ns0:cell>61.9 ±</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>-0.20 ±</ns0:cell><ns0:cell>53.3 ±</ns0:cell><ns0:cell>46</ns0:cell><ns0:cell>0.17 ±</ns0:cell><ns0:cell>72.5 ±</ns0:cell><ns0:cell>32</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>0.13</ns0:cell><ns0:cell>5.4</ns0:cell><ns0:cell /><ns0:cell>0.21</ns0:cell><ns0:cell>4.1</ns0:cell><ns0:cell /><ns0:cell>0.16</ns0:cell><ns0:cell>6.2</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Cub</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.36 ±</ns0:cell><ns0:cell>10.8 ±</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>-0.08</ns0:cell><ns0:cell>16.1 ±</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>0.48</ns0:cell><ns0:cell>1.7</ns0:cell><ns0:cell /><ns0:cell>± 0.25</ns0:cell><ns0:cell>0.7</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Male</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Adult</ns0:cell><ns0:cell>0.63 ±</ns0:cell><ns0:cell>230.1 ±</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0.37 ±</ns0:cell><ns0:cell>213.4</ns0:cell><ns0:cell>24</ns0:cell><ns0:cell>1.16 ±</ns0:cell><ns0:cell>309.2 ±</ns0:cell><ns0:cell>13</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>0.35</ns0:cell><ns0:cell>13.1</ns0:cell><ns0:cell /><ns0:cell>0.14</ns0:cell><ns0:cell>± 7.2</ns0:cell><ns0:cell /><ns0:cell>0.19</ns0:cell><ns0:cell>13.2</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Subadult</ns0:cell><ns0:cell>-0.15 ±</ns0:cell><ns0:cell>78.8 ±</ns0:cell><ns0:cell>77</ns0:cell><ns0:cell>-0.01 ±</ns0:cell><ns0:cell>85.2 ±</ns0:cell><ns0:cell>91</ns0:cell><ns0:cell>0.51 ±</ns0:cell><ns0:cell>99.7 ±</ns0:cell><ns0:cell>43</ns0:cell></ns0:row><ns0:row><ns0:cell 
/><ns0:cell>0.12</ns0:cell><ns0:cell>4.0</ns0:cell><ns0:cell /><ns0:cell>0.08</ns0:cell><ns0:cell>5.5</ns0:cell><ns0:cell /><ns0:cell>0.14</ns0:cell><ns0:cell>6.6</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Cub</ns0:cell><ns0:cell>-0.30 ±</ns0:cell><ns0:cell>6.0 ±</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0.12 ±</ns0:cell><ns0:cell>11.5 ±</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>0.16 ±</ns0:cell><ns0:cell>22.4 ±</ns0:cell><ns0:cell>12</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>0.00</ns0:cell><ns0:cell>0.0</ns0:cell><ns0:cell /><ns0:cell>0.45</ns0:cell><ns0:cell>1.6</ns0:cell><ns0:cell /><ns0:cell>0.33</ns0:cell><ns0:cell>4.3</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>All classes</ns0:cell><ns0:cell>-0.20 ±</ns0:cell><ns0:cell>81.7 ±</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>-0.03 ±</ns0:cell><ns0:cell>92.5 ±</ns0:cell><ns0:cell>205</ns0:cell><ns0:cell>0.36 ±</ns0:cell><ns0:cell>107.9 ±</ns0:cell><ns0:cell>148</ns0:cell></ns0:row><ns0:row><ns0:cell>pooled</ns0:cell><ns0:cell>0.00</ns0:cell><ns0:cell>0.1</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>0.00</ns0:cell><ns0:cell>0.1</ns0:cell><ns0:cell /><ns0:cell>0.00</ns0:cell><ns0:cell>0.1</ns0:cell><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ reviewing PDF | (2020:06:49702:1:1:NEW 20 Aug 2020)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 (on next page)</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3 : Measurement precision within photographs of an adult female brown bear (bear-ID: HC) in the Rusha area of the Shiretoko Peninsula, Hokkaido, Japan.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The standard error (SE) in the ratio of torso height to body/torso length at a certain number of measurements was calculated by considering the standard deviation (SD) obtained from 50 times measurements as the population standard deviation. CV means coefficient of variation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>50 measurements</ns0:cell><ns0:cell /><ns0:cell cols='3'>SE (number of measurement)</ns0:cell></ns0:row><ns0:row><ns0:cell>Methods</ns0:cell><ns0:cell>mean ± SD</ns0:cell><ns0:cell>CV</ns0:cell><ns0:cell>(two)</ns0:cell><ns0:cell>(three)</ns0:cell><ns0:cell>(four)</ns0:cell></ns0:row><ns0:row><ns0:cell>TH:HBL</ns0:cell><ns0:cell>0.4316 ± 0.0016</ns0:cell><ns0:cell>0.36%</ns0:cell><ns0:cell>0.0011</ns0:cell><ns0:cell>0.0009</ns0:cell><ns0:cell>0.0008</ns0:cell></ns0:row><ns0:row><ns0:cell>TH:EBL</ns0:cell><ns0:cell>0.4266 ± 0.0015</ns0:cell><ns0:cell>0.35%</ns0:cell><ns0:cell>0.0010</ns0:cell><ns0:cell>0.0009</ns0:cell><ns0:cell>0.0007</ns0:cell></ns0:row><ns0:row><ns0:cell>TH:PBL</ns0:cell><ns0:cell>0.4163 ± 0.0015</ns0:cell><ns0:cell>0.35%</ns0:cell><ns0:cell>0.0010</ns0:cell><ns0:cell>0.0009</ns0:cell><ns0:cell>0.0007</ns0:cell></ns0:row><ns0:row><ns0:cell>TH:HTL</ns0:cell><ns0:cell>0.7504 ± 0.0040</ns0:cell><ns0:cell>0.53%</ns0:cell><ns0:cell>0.0028</ns0:cell><ns0:cell>0.0023</ns0:cell><ns0:cell>0.0020</ns0:cell></ns0:row><ns0:row><ns0:cell>TH: torso height</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>HBL: horizontal body length</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>EBL: Euclidean body length</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>PBL: polygonal line body length</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>HTL: horizontal torso length.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ reviewing PDF | (2020:06:49702:1:1:NEW 20 Aug 2020)</ns0:note></ns0:figure>
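The SE values in Table 3 follow from treating the 50-measurement SD as the population standard deviation, i.e. SE(n) = SD/√n. A minimal Python check using the TH:HBL row reproduces the tabulated figures; the helper name is ours, not the authors'.

import math

def se_of_mean(population_sd, n_measurements):
    # Standard error of the mean when the SD is treated as the population value
    return population_sd / math.sqrt(n_measurements)

sd_th_hbl = 0.0016  # SD of TH:HBL from 50 measurements (Table 3)
for n in (2, 3, 4):
    print(n, round(se_of_mean(sd_th_hbl, n), 4))  # 0.0011, 0.0009, 0.0008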
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 4 (on next page)</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
</ns0:body>
" | "August 20, 2020
Dr. Jane Waterman,
Academic Editor, PeerJ
Dear Dr. Jane Waterman:
Thank you for your kind e-mail letter together with your and the reviewers' comments on our manuscript entitled “Development of a noninvasive photograph-based method for the evaluation of body condition in free-ranging brown bears” by Y. Shirane et al. We have confirmed and accepted all of the changes you suggested. In addition, we revised the text and Supplemental figures as reviewers #1 and #2 suggested. By considering the points raised by you and the reviewers, we have tried to revise our manuscript so that it is more suitable for publication in PeerJ. The changes are detailed below.
Modifications
We have replaced the term 'weight' with the term 'mass' throughout the manuscript according to the comment by the editor.
AUTHOR COVER PAGE
• We have revised the affiliations according to the comment by the editorial staff.
ABSTRACT
• We have added the text (Line 26 – 27) according to the comment by the reviewer #2.
• We have replaced “short interval over long period” with “repeatedly over the years” (Line 44) according to the comment by the reviewer #2.
INTRODUCTION
• We have added the text and references (Line 48–50) according to the comment by the reviewer #1.
• We have revised the text and references (Line 53–57) according to the comment by the reviewer #1.
• We have added the text and a reference (Line 62–63) according to the comment by the reviewer #2.
• We have revised the text (Line 64–67) according to the comments by the reviewer #1 and #2.
• We have revised the text and added references (Line 67–75) according to the comment by the reviewer #2.
• We have added “Photograph-based” (Line 76, 105, and 110) according to the comment by the reviewer #2.
• We have modified the text (Line 81) and moved to the next paragraph (Line 96–99) and MATERIALS AND METHODS (Line 117–120).
• We have added the text and a reference (Line 81–86) according to the comment by the reviewer #2.
• We have modified the text (Line 86–95).
• We have replaced “short interval over long period” with “repeatedly and continued for several years” (Line 91).
• We have replaced “capture surveys” with “capture operations” (Line 93) according to the comment by the reviewer #1.
• We have replaced “carnivores” with “omnivores” (Line 96) according to the comment by the reviewer #1.
• We have added the text (Line 99–100) according to the comment by the reviewer #2.
• We have added the text (Line 106–107) according to the comment by the reviewer #2.
MATERIALS AND METHODS
• We have modified the text in INTRODUCTION (Line 81) and moved to METERIALS AND METHODS (Line 117–120).
• We have revised the text (Line 120–122) according to the comment by the reviewer #1.
• We have moved the text from “study area” section (Line 122 and 129) to “Bear capture and measurements” section (Line 148–160) according to the comment by the reviewer #2.
• We have deleted the text (Line 123) according to the comment by the reviewer #1.
• We have modified the text (Line 123–127) according to the comment by the editorial staff.
• We have modified the text (Line 128–129) according to the comment by the reviewer #2.
• We have revised the text (Line 135) according to the comment by the reviewer #1.
• We have added the text (Line 135–136) according to the comment by the reviewer #2.
• We have added the reference to Supplemental Data S1 (Line 138) according to the comment by the reviewer #2.
• We have added the text (Line 145–146) according to the comment by the reviewer #2.
• We have added the text (Line 149–150) according to the comment by the reviewer #2.
• We have added the text (Line 172–173) according to the comment by the reviewer #1.
• We have revised the text (Line 178–181) according to the comment by the reviewer #2.
• We have modified the text (Line 181–195) according to the comment by the reviewer #2.
• We have replaced “Shooting” with “Obtaining” (Line 197) according to the comment by the reviewer #2.
• We have added “photograph-based” (Line 233 and 276) according to the comment by the reviewer #2.
• We have modified the text (Line 236–239) according to the comment by the editor.
• We have added “(≥5 years old)” (Line 277) and deleted the text (Line 278).
• We have modified the text (Line 282–285) according to the comment by the reviewer #2.
RESULTS
• We have modified the text (Line 310–314).
DISCUSSION
• We have added “photograph-based” (Line 360, 368, 375, and 380) according to the comment by the reviewer #2.
• We have revised the text (Line 366) according to the comment by the editor.
• We have added “killed or captured” (Line 378) according to the comment by the reviewer #2.
• We have replaced “mass” with “fat” (Line 385).
• We have added hyphen (Line 400) according to the comment by the reviewer #1.
• We have added “other bear species” (Line 413) according to the comment by the reviewer #1.
• We have revised the text and replace a reference (Line 424–425) according to the comment by the reviewer #1.
• We have replaced “short interval over long period” with “repeatedly over the years” (Line 455) according to the comment by the reviewer #2.
CONCLUSION
• We have replaced “short interval over long period” with “repeatedly over the years” (Line 457) according to the comment by the reviewer #2.
FIGURES
• We have added alphabetical labels in Figure 2 and revised the legend of Figure 2 according to the comment by the editorial staff.
• We have modified the style of Figure 2 and 7 according to the comment by the editorial staff.
SUPPLEMENTAL FILES
• We have revised the Supplemental figure S1 according to the comment by the reviewer #2.
• We have added photo credit in the legend of the Supplemental figure S1 according to the comment by the editorial staff.
• We have revised the Supplemental file S2 and S3 figures as JPG types according to the comment by the reviewer #1.
REFERENCES
• We have added references (Atkinson, Nelson & Ramsay, 1996; Bojarska & Selva, 2012; Cook et al., 2004; Clutton-Brock & Sheldon, 2010; Department for Environment Food and Rural Affairs, 2004; Gosler, 1996; Henneke et al., 1983; Hilderbrand et al. 2018; Laflamme, 2012; Peig & Green, 2009; Schulte-Hostedde, Millar & Hickling, 2001; Wildman et al., 1982)
• We have deleted references (Bellemain, Swenson & Taberlet, 2006; Dahle & Swenson, 2003; Deutsch, Haley & Le Boeuf, 1990; Kovach & Powell, 2003; Schwartz et al., 2003)
The English in the manuscript has been checked by at least two professional editors, both native speakers of English. For a certificate, please see: http://www.textcheck.com/certificate/z408Hg
Sincerely,
Michito Shimozuru, DVM, Ph.D.
Laboratory of Wildlife Biology and Medicine,
Graduate School of Veterinary Medicine, Hokkaido University,
Kita 18, Nishi 9, Kita-ku, Sapporo 060-0818, Japan
Tel: +81-11-706-7188. Fax: +81-11-706-5569.
E-mail: [email protected]
On behalf of all authors.
Reply to the comments of Editor
Thank you very much for your comments and suggestions on our manuscript. Replies to your comments/suggestions and our modifications were as follows.
Comments for the Author
Both reviewers thought the paper needed minor revisions and I agree. Please review the comments and make the necessary edits. In addition, you should replace the use of the term 'weight' with the term 'mass' throughout the manuscript. As well, in Line 154, please change 'bear' to 'Bear'. In line 216, change 'BS.' to 'BS'., and again in line 217, change 'NF,' to 'NF', and finally in line 218, change 'NB,' to 'NB', and in Line 344, please change to ' This study is the first to propose...'.
Re: Thank you for pointing out the mistakes in our manuscript. We have corrected all the mistakes you have pointed out (Line 236, 237, 238, 239). According to your suggestion, we have replaced the term “weight” with the term “mass” throughout the manuscript and have changed the phrase in Line 366.
Reply to the comments of Reviewer #1
Thank you very much for your comments and suggestions on our manuscript. All of them were very helpful, and we have revised our manuscript accordingly. Replies to your comments/suggestions and our modifications were as follows.
Comments for the Author
I only have minor revisions to suggest- great work with this paper! It provides a novel tool that could have broad application the field. See specific comments below:
Line 52-53: Is it fat reserves that allow males to gain greater access to females, or overall body mass/ size?
Re: As you pointed out, this sentence and reference were confused with the effect of body size on reproductive success and not adequate to explain the effect of fat reserves. To clarify the definition of “body condition” and to clearly describe the relationship between body condition and animal health, we have revised the text and added references (Line 48–50 and 53–57).
Line 60 and 85: rephrase “capture surveys” to something like “capture operations”
Re: According to the comments from you and Reviewer #2, we have revised the text (Line 64–67 and 93).
Line 71: Brown bears are omnivores
Re: We have replaced “carnivores” with “omnivores” (Line 96).
General comment: Please define your use of “body condition” in the introduction. It is unclear if you are referring to % body fat or overall body size/mass. It becomes clearer later in the paper, but it would be beneficial to the reader to define this up front.
Re: I agree with you that we failed to define what we mean by “body condition”. We use the term 'body condition' not as a body size or mass, but as the energetic status of an individual, especially the relative size of energy reserves such as fat and protein. To make it clear, we have added the text and references (Line 48–50).
Lines 102-104: Consider rephrasing to: During 1998–2017, we collected body weights and
morphometrics from brown bears captured for research purposes, killed for nuisance control, or harvested from the peninsula, including the towns of Shari and Rausu (Fig. 1).
Re: According to your suggestion, we have revised the text (Line120–122).
Line111-112: remove “near the tip of the peninsula” since you have a figure showing this
Re: According to your suggestion, we have removed the text (Line 123).
Line 132: rephrase to “were obtained from bears killed for nuisance control or from harvest”
Re: According to your suggestion, we have rephrased to “were obtained from bears killed for nuisance control or harvested” (Line 135).
Lines 154-156: Please include a justification for why you separated subadult and adult males and females into these categories. Why did you define female maturity as 5 and male as 8?
Re: As a result of the body length growth curve analysis, females had achieved 95% of their asymptotic body length at 4.6 years of age, whereas males took 7.6 years to reach the same proportion. According to these results, we classified females ≥5 years old and males ≥8 years old as adults. To make it clear, we have added the text (Line 172–173).
Line 377: hyphenate “higher energy”
Re: According to your suggestion, we have added hyphen (Line 400).
Line 391: or bear species!
Re: According to your comment, we have added “other bear species” (Line 413).
Line 402: The following could be an interesting comparison if you want to include a more recent citation for Alaska:
Hilderbrand, G. V., Gustine, D. D., Mangipane, B. A., Joly, K., Leacock, W., Mangipane, L. S., ... & Cambier, T. (2018). Body size and lean mass of brown bears across and within four diverse ecosystems. Journal of Zoology, 305(1), 53-62.
Re: Thank you for introducing a useful reference. We have revised the text and replaced the reference (Line 424–425).
Reply to the comments of Reviewer #2
Thank you very much for your comments and suggestions on our manuscript. All of them were very helpful, and we have revised our manuscript accordingly. Replies to your comments/suggestions and our modifications were as follows.
Basic reporting
The language was clear and professional throughout. I suggest using additional descriptive terms in addition to 'measures' throughout the paper, as many times I did not understand which type of 'measures' you were referring to. For example,
Re: I agree with you that it was difficult to understand because the word 'measures' is used as the actual measurement value from killed or captured bears in some situations and as the photograph-based measurement value in other situations. To clearly describe which value 'measure' means, we have added additional descriptive terms (Line 76, 105, 110, 233, 276, 360, 368, 375, 378, and 380)
Your abstract needs more detail, specifically in regards to how you defined BCI. It wasn't into very far into the paper that the BCI method was explained.
Re: According to your suggestion, we have added the description of BCI method (Line 26–27).
The statement in line 43 (short intervals over long periods of time) is confusing, and thus defining what short/long mean by example would strengthen this concept.
Re: What we wanted to say is that the assessment of body condition using our method can be repeated monthly and can continue for several years. To clearly describe it, we have replaced “short interval over long period” with “repeatedly over the years” (Line 44, 91 and 457).
I appreciate the thorough review of body condition assessment methods in the Introduction. Adding the use of ultrasound measures of subcutaneous fat, as in Morfeld et al. 2014 for elephants, would strengthen this section. More general background on the wide use of body condition scoring in domestic species would be helpful (dogs, cats, etc.), as well as pigs, cattle, horses. By including these species, this strengthens the rationale of applying similar techniques to wildlife species.
Re: Thank you for introducing a useful reference. According to your suggestion, we have revised text and added references (Line 62–63, Line 67–75).
Line 85 - more information why capture surveys are not suitable for ongoing body condition assessments. The term 'capture surveys' is not previously introduced, which could be confusing to some readers.
Re: Capture surveys of free-ranging, large-bodied mammals are inherently difficult because capture operations are dangerous for researchers and may affect animal behavior and survival through anesthesia and direct handling. According to the comments from you and Reviewer #1, we have revised the text (Line 64–67) and replaced “capture surveys” with “capture operations” (Line 93).
The most significant challenge and confusion was the strong emphasis on correlation to BCI, but the BCI methods were not introduced until line 88, which is critical component to your paper. You should spend time explaining what this method was, citation, etc. It is very unclear as is.
Re: According to your suggestion, we have added the text (Line 81–86). We have also revised this paragraph to be more general about the assessment of body condition in Ursids (Line 86–99).
Line 92 - briefly explain what 'candidate methods' means.
Re: According to your comment, we have revised the text (Line 106–107).
The tables are overall very well done, with raw data submitted. However, I could not open the Supplemental files with the eps file types.
Re: According to your comment, we have revised the Supplemental file S2 and S3 figures as JPG types.
Experimental design
Overall, the design is suitable, but it did take several reads to understand the three small studies within the larger study. It would be best to state the overall goal first, and then indicate the necessary steps to get there.
Re: According to your comment, we have revised the introduction to state the overall goal (Line 99–100).
Line 105: Unclear why you can determine age by teeth of some but not all of your study subjects.
Re: Because many cementum layers develop in old individuals, it was sometimes difficult to determine the exact age. Even for young individuals, we could not always determine the exact age, depending on the quality of the teeth samples. In such cases, we determined a range of ages, such as '>20' and '3-5'. To clearly describe this issue, we have added the text (Line 149–150).
The study area was nicely described. There is information on measurements in this section that should be moved, as it is not related to 'study area' as the section states. (lines 117-121). Keeping the field permits in this section is good, but the animal use permit information could be moved to the animal/subject sections.
Re: According to your suggestion, we have moved the information about age estimation and the animal use permit from “study area” section (Line 122 and 129) to “Bear capture and measurements” section (Line 148–160). We have deleted the information about bear IDs and measurements in the Rusha area (Line 129) as they overlap with the description below.
Line 141. How are the bears captured more than once, if most were killed for nuisance control/hunting as stated in line 132. It would be good to include the sample size of killed versus multiple measures in this section.
Re: While most samples were obtained from bears killed for nuisance control or harvested, some were obtained from bears live captured for research purposes. Several individuals were captured more than once during our study period due to repeated capture or killing after capture. To make it clear, we have added the sample size of multiple measures and revised the text (Line 135–136 and 145–146).
Bear capture and measurements section- include reference to the appropriate table/figure for this data.
Re: According to your comment, we have added the reference to Supplemental Data S1 (Line 138).
BCI of killed or captured bears: The BCI methods should be explained more thoroughly. What did Cattet al al. find? What is the typical range of BCI? Was this developed for males, females? Please provide a review of this method.
Re: Cattet et al. (2002) developed BCI based on residuals from the regression of body mass against straight-line body length for polar bears, black bears, and grizzly bears. Because the range of BCI values varies depending on the population, we cannot present the absolute value of the BCI range. Generally speaking, the BCI has higher positive values for bears in better condition and lower negative values for those in poorer condition. Such a regression-based BCI cannot be compared between individuals from different statistical populations because the regression slopes between populations may differ. However, the BCI can be used to compare individual bears within a population regardless of sex and age because it is independent of body size. To clearly provide this information, we have added the text (Line 81–86 and 178–181).
Statistics. Having a separate statistics section would be useful, or at least define the statistics within each section. It gets confusing and difficult to read with statistical methods throughout the methods and not in a separate section or at least set apart within each section.
Re: We consider it would be hard for readers to follow the statistical analyses in a separate section because the methods of each of the three small studies are complicated. Therefore, we created a new paragraph in each section and described the statistical analysis there (Line 181–195, 282–285).
'Shooting and filtering of photographs'. Replace the word 'shooting' to something more professional (obtaining, etc.).
Re: According to your suggestion, we have replaced “shooting” with “obtaining” (Line 197).
Lines 190-193. The supplemental file has Bear Posture, Body Arch (A) as either a score option of 1 or 3 (option 2 is blank), but the file with the photos/examples have only Score examples of 1 or 2. Is this suppose to be a Score of 3? Or why is the table 'blank' for Score 2?
Re: The Score in the figure S1 (A) was wrong and we wanted to show a score example of 3. Thank you for pointing out our mistake. We have revised the supplemental figure S1.
Results. It would be nice to see a resulting table of bears representing the different BCI categories with photos. It is still confusing to met how the BCI index works, as the table gives negative and positive values. Again, more information on this would be extremely valuable.
Re: If your concern is about our third small study, which compares actual (BCI) and photograph-based measurements (TH:HTL), a table of results is given in Supplemental Table S2. In terms of information on the BCI, we have added the text in the introduction section (Line 81–86).
Discussion is very well written and addresses the significant findings of the study.
Validity of the findings
The study results are very valuable and I commend the researchers for their thorough efforts on this study. Overall, the methods and results get confusing to read, but I think separating out sections more thoroughly/accurately, as well as including a separate 'Statistical Methods' section would be very helpful.
Again, explain the BCI you are utilizing throughout this paper as the gold standard for your validation. Without that level of information, it is difficult to interpret the results with clear context/meaning.
Wonderful subject, creative, and thoroughly executed, but clearer writing would help transform your robust study into more user friend action.
Re: Thank you for your appreciation of our research. According to your valuable comments, we have added a description of the BCI and placed the statistical methods in a separate paragraph within each section. I feel that these revisions make the text easier to understand throughout the manuscript. Again, thank you for your constructive feedback.
" | Here is a paper. Please give your review comments after reading it. |
652 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The process of molecular evolution has many elements that are not yet fully understood.</ns0:p><ns0:p>Evolutionary rates are known to vary among protein coding and noncoding DNAs, and most of the observed changes in amino acid or nucleotide sequences are assumed to be non-adaptive by the neutral theory of molecular evolution. However, it remains unclear whether fixed and standing missense changes in slowly evolving proteins are more or less neutral compared to those in fast evolving genes. Here, based on the evolutionary rates as inferred from identity scores between orthologs in human and Macaca monkey, we found that the fraction of conservative substitutions between species was significantly higher in their slowly evolving proteins. Similar results were obtained by using four different methods of scoring conservative substitutions, including three that remove the impact of substitution probability, where conservative changes require fewer mutations. We also examined the single nucleotide polymorphisms (SNPs) by using the 1000 genomes project data and found that missense SNPs in slowly evolving proteins also had a higher fraction of conservative changes, especially for common SNPs, consistent with more non-conservative substitutions and hence stronger natural selection for SNPs, particularly rare ones, in fast evolving proteins. These results suggest that fixed and standing missense variants in slowly evolving proteins are more likely to be neutral.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Since the early 1960s, protein sequence comparisons have become increasingly important in molecular evolutionary research <ns0:ref type='bibr' target='#b5'>(Doolittle & Blombaeck 1964;</ns0:ref><ns0:ref type='bibr' target='#b6'>Fitch & Margoliash 1967;</ns0:ref><ns0:ref type='bibr'>Margoliash 1963;</ns0:ref><ns0:ref type='bibr'>Zuckerkandl & Pauling 1962)</ns0:ref>. An apparent relationship between protein sequence divergence and time of separation led to the molecular clock hypothesis, which assumes a constant and similar evolutionary rate among species <ns0:ref type='bibr' target='#b16'>(Kumar 2005;</ns0:ref><ns0:ref type='bibr'>Margoliash 1963;</ns0:ref><ns0:ref type='bibr'>Zuckerkandl & Pauling 1962)</ns0:ref>. Thus, sequence divergence between species is thought to be largely a function of time. The molecular clock, in turn, led Kimura to propose the neutral theory to explain nature: sequence differences between species were thought to be largely due to neutral changes rather than adaptive evolution <ns0:ref type='bibr' target='#b14'>(Kimura 1968</ns0:ref>). However, the notion of a molecular clock may be unrealistic since it predicts a constant substitution rate as measured in generations, whereas the observed molecular clock is measured in years <ns0:ref type='bibr' target='#b1'>(Ayala 1999;</ns0:ref><ns0:ref type='bibr'>Pulquerio & Nichols 2007)</ns0:ref>. The neutral theory remains an incomplete explanatory theory <ns0:ref type='bibr' target='#b9'>(Hu et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b13'>Kern & Hahn 2018)</ns0:ref>. Evolutionary rates are known to vary among protein coding and non-coding DNAs. The neutral theory posits that the substitution rate under selective neutrality is expected to be equal to the mutation rate <ns0:ref type='bibr' target='#b15'>(Kimura 1983)</ns0:ref>. If mutations/substitutions are not neutral or are under natural selection, the substitution rate would be affected by the population size and the selection coefficient, which are unlikely to be constant among all lineages. Slowly evolving genes are well known to be under stronger purifying or negative selection as measured by using dN/dS ratio, which means that a new mutation has a lower probability of being fixed <ns0:ref type='bibr' target='#b4'>(Cai & Petrov 2010)</ns0:ref>.</ns0:p><ns0:p>PeerJ reviewing PDF | (2019:12:43889:1:1:NEW 10 Apr 2020)</ns0:p><ns0:p>Manuscript to be reviewed However, negative selection as detected by the dN/dS method is largely concerned with nonobserved mutations and says little about the fixed or observed variations. And most molecular evolutionary approaches such as phylogenetic and demographic inferences are concerned with observed variants. It remains to be determined whether fixed and standing missense substitutions in slowly evolving genes are more or less neutral relative to those in fast evolving genes.</ns0:p><ns0:p>We here found that the proportion of conservative substitutions between species was higher in the slowest evolving set of proteins than in faster evolving proteins. Using datasets from the 1000 genomes (1KG) project phase 3 dataset <ns0:ref type='bibr' target='#b0'>(Auton et al. 2015)</ns0:ref>, we also found that missense single nucleotide polymorphisms (SNPs) from the slowest evolving set of proteins, especially those with high minor allele frequency (MAF), were enriched with conservative amino acid changes, consistent with these changes being under less natural selection.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials and Methods</ns0:head><ns0:p>Classification of proteins as slowly and fast evolving. The identification of slowly evolving proteins and their associated SNPs was done as previously described (Yuan et al. 2017), based on percentage identities between human proteins and their orthologs in Macaca monkey. Proteins that show the highest identity between human and monkey were included in the set of slowly evolving (including 423 genes > 304 amino acid in length with 100% identity and 178 genes > 1102 amino acid in length with 99% identity between monkey and human). The rest are all considered fast evolving proteins. The cutoff criterion was based on the empirical observation of low substitution saturation, and the finding that missense SNPs from the slow set of proteins produced genetic diversity patterns that were distinct from those found in the fast set (Yuan et al. 2017).</ns0:p><ns0:p>SNP selection. We downloaded the 1KG phase 3 data and assigned SNP categories using ANNOVAR (Auton et al. 2015). We then picked out the missense SNPs located in the slow and the fast sets of proteins.</ns0:p><ns0:p>Scoring of conservative substitutions. For fixed substitutions as revealed by BLASTP, conservative changes were scored by using four different matrixes. The BLOSUM62 matrix has a scoring range from -3 to 3 (-3, -2, -1, 0, 1, 2, 3) with higher positive values representing more conservative changes (Pearson 2013). We assigned each amino acid mutation a score and we used score >0 to denote conservative changes in cases where the number of conservative changes is enumerated. As the BLOSUM62 matrix does not take into account the effect of substitution probability (the fact that conservative changes require fewer mutations), we also used three other matrixes to score conservative amino acid replacements that have removed the impact of substitution probability, including the 'EX' matrix (Yampolsky & Stoltzfus 2005), which is based on laboratory mutagenesis, and the two physicochemical matrices in Braun (2018) and Pandey and Braun (2020): delta-V (normalized change in amino acid side chain volume) and delta-P (normalized change in amino acid side chain polarity) (Braun 2018; Pandey & Braun 2020). All three matrixes are available as spreadsheets from GITHUB (https://github.com/ebraun68/clade_specific_prot_models). Specifically, the EX matrix (or, more accurately, a normalized symmetric version of the EX matrix) is in the excel spreadsheet 'EX_matrix_sym.xlsx'; the delta-V and delta-P matrices can be found in one of the sheets (the sheet called 'Exchanges') in the file 'exchange_Pandey_Braun.xlsx'.</ns0:p><ns0:p>All three of the matrixes are normalized to range from zero to one. To be comparable to the BLOSUM62 matrix, we generated integer versions of these three matrixes by multiplying by 10, subtracting 5, and then rounding to the nearest integer. Here the matrix values range from -5 to +5 with higher positive values representing more conservative changes. For the EX matrix, we used score >2 to denote conservative changes. For the delta-V and delta-P matrixes, we used score >3 to denote conservative changes.
In this way of using different cutoff scores to represent conservative changes, we could keep the fraction of conservative changes close to 0.5 for each of the four different matrixes.</ns0:p><ns0:p>Statistics. Chi-squared test was performed using GraphPad Prism 6.</ns0:p></ns0:div>
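As an illustration of the integer rescaling of the normalized matrixes described above, here is a minimal Python sketch (our own, not the authors' code); the 0.9 exchangeability value is a hypothetical example, not an entry from the actual spreadsheets.

def integer_score(normalized_value):
    # Rescale a 0-1 matrix entry to the -5..+5 integer scale: multiply by 10, subtract 5, round
    return round(normalized_value * 10 - 5)

def is_conservative(normalized_value, matrix):
    # Apply the cutoffs stated in the Methods: EX uses score > 2, delta-V/delta-P use score > 3
    score = integer_score(normalized_value)
    if matrix == 'EX':
        return score > 2
    if matrix in ('delta-V', 'delta-P'):
        return score > 3
    raise ValueError('unknown matrix')

# Hypothetical example: an amino acid exchange with a normalized EX value of 0.9
print(integer_score(0.9), is_conservative(0.9, 'EX'))  # 4 True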
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Fixed amino acid substitutions and evolutionary rates of proteins</ns0:head><ns0:p>We determined the evolutionary rates of proteins in the human genome by the percentage of identities between human proteins and their orthologs in Macaca monkey as described previously (Yuan et al. 2017). We then divided the proteins into several groups of different evolutionary rates, and compared the proportion of conservative amino acid substitutions in each group.</ns0:p><ns0:p>The mismatches between two species would have one of the two residues or alleles as ancestral, in the case of slowly evolving proteins yet to reach mutation saturation (no independent mutations occurring at the same site among species and across time), and so a mismatch due to conservative changes would involve a conservative mutation during evolution from the ancestor to extant species. But at mutation saturation for fast evolving proteins, where a site may encounter multiple mutations across taxa and time, while a drastic substitution would necessarily involve a non-conservative mutation, it is possible for a conservative substitution to result from at least two independent non-conservative mutations (if the common ancestor has Arg at some site, a drastic mutation event at this site occurring in each of the two species, Arg to Leu in one and Arg to Ile in the other, may lead to a conservative substitution of Leu and Ile). Thus, a conservative substitution at mutation saturation just means fewer physical and chemical differences between the two species concerned and says little about the actual mutation events. A lower fraction of conservative substitutions at saturation for fast evolving proteins would mean more physical and chemical differences between the two species, which may more easily translate into functional differences for natural selection to act upon.</ns0:p><ns0:p>To verify that the slowest evolving proteins with length >1102 amino acids and percentage identity >99% are distinct from the fast set, we first compared proteins with length >1102 amino acids with no gaps in alignment (Table 1, Figure 1A) or with gaps (Table 1, Figure 1B). We then examined additional groups of faster evolving proteins, including those with 60-80% identity, and found similar but less robust and consistent trends (Table 1 and Figure 1C and D).</ns0:p></ns0:div>
<ns0:div><ns0:head>Standing amino acid variants and evolutionary rates of proteins</ns0:head><ns0:p>We next studied the missense SNPs found in proteins with different evolutionary rates by using the 1KG dataset (Auton et al. 2015). There were 15271 missense SNPs in the slowly evolving set of proteins (>1102 aa with 99% identity and >304 aa with 100% identity) and 546297 missense SNPs in the fast set (all proteins that remain after excluding the slow set). We assigned each amino acid change found in a missense SNP a conservation score as described above. The number of SNPs in each score category was then enumerated. We performed this analysis by using each of the four different scoring matrixes and found largely similar results (Figure 2). Missense SNPs in the slowly evolving set of proteins in general had lower fractions of drastic mutations, and higher fractions of conservative mutations relative to those in the faster evolving set of proteins (Figure 2). The fraction of conservative mutations in the slow evolving set was significantly higher than that of the fast set (P<0.001, Figure 2).</ns0:p><ns0:p>We then compared missense SNPs of different minor allele frequencies, as rare variants are more likely to be under natural selection. The results showed that for missense SNPs in the fast evolving set of proteins, the common SNPs with MAF >0.001 showed a higher fraction of conservative changes than the rare SNPs with MAF<0.001 (P<0.001), indicating a stronger natural selection for the rare SNPs in the fast set (Figure 3). While SNPs in the fast set showed similar fractions of conservative changes across three different MAF groups (>0.001, >0.01, and >0.05), there was a more obvious trend of having a higher proportion of conservative changes as MAF values increase from >0.001 to >0.01 to >0.05 for SNPs in the slow set, consistent with weaker natural selection for common SNPs in the slow set (Figure 3). Each of the three groups in the fast set showed a significantly lower fraction of conservative changes than the respective group in the slow set (P<0.01), indicating stronger natural selection for SNPs in the fast set (Figure 3). The results indicate that common SNPs in slowly evolving proteins had more conservative changes that were under a weaker natural selection.</ns0:p></ns0:div>
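The group comparisons above rest on chi-squared tests of conservative versus non-conservative counts (the Methods state that GraphPad Prism was used). The sketch below shows an equivalent test in Python with purely hypothetical counts, since the per-category counts are not listed in the text.

from scipy.stats import chi2_contingency

# Hypothetical 2x2 table (illustration only, not the study's actual numbers):
# rows = slow set, fast set; columns = conservative, non-conservative missense SNPs
table = [[9000, 6271],
         [300000, 246297]]
chi2, p, dof, expected = chi2_contingency(table)
print(round(chi2, 1), p)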
<ns0:div><ns0:head>Discussion:</ns0:head><ns0:p>Our results here showed that fixed and standing changes in slowly evolving proteins were enriched with conservative amino acid substitutions. Similar results were obtained using four different matrixes to rank the conservative nature of a substitution. Based on substitution probability alone, amino acid substitutions in slowly evolving proteins are expected to be more conserved than those in fast evolving proteins, since fast evolving proteins have a higher probability of the doublet mutations that are necessary for a drastic substitution to occur, but have a very low rate of occurrence (Whelan & Goldman 2004). If evolutionary time is not long enough for mutation saturation to occur, non-conservative substitutions would be expected to be a function of mutation rate and time. This simple explanation appears not to be the reason for the observations here, since the three matrixes that have removed the impact of substitution probability produced similar results as the matrix that does not take into account the impact of substitution probability.</ns0:p><ns0:p>The results here may be best accounted for by mutation saturation in fast evolving proteins, where multiple recurrent mutations at the same site have occurred across taxa and time (Figure 4). At saturation, the range of mutations that have happened at any given site of any given taxon is irrelevant to the particular type of possible alleles the site may carry at present time. If substitutions in fast evolving proteins are at saturation and under natural selection as indicated here, it would follow that genetic distances or degrees of sequence mismatches between taxa in these proteins would be at saturation, or no longer correlated exactly with time. It is easy to tell the difference between optimum/maximum saturation genetic distances and linear distances as described previously (Huang 2010). Briefly, imagine a 100 amino acid protein with only 1 neutral site. In a multispecies alignment involving at least three taxa, if one finds only one of these taxa with a mutation at this neutral site while all other species have the same nonmutated residue, there is no saturation (Figure 4, time point 2). However, if one finds that nearly every taxon has a unique amino acid, one would conclude mutation saturation as there would have been multiple independent substitution events among different species at the same site, and repeated mutations at the same site do not increase distance (Figure 4, time point 3 and 4 for fast evolving proteins). We have termed those sites with repeated mutations 'overlap' sites (Huang 2010). So, a diagnostic criterion for saturated maximum distance between two species is the proportion of overlap sites among mismatched sites. Saturation would typically have 50-60% overlapped sites that are 2-3 fold higher than that expected before saturation (Huang 2010; Luo & Huang 2016).
One would not expect to see nearly 100% overlapped sites, because certain sites may only accommodate two or very few amino acid residues at saturation equilibrium, which would prevent them from presenting as overlapped sites even though they are in fact overlapped and saturated sites. Also, saturation may result in convergent evolution, with independent mutations changing to the same amino acid (Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>, time point 5 for fast evolving proteins). This overlap ratio method is an empirical one free of uncertain assumptions and hence more realistic than other methods of testing for saturation, such as comparing the observed number of mutations to the inferred one based on uncertain phylogenetic trees derived from maximum parsimony or maximum likelihood methods <ns0:ref type='bibr'>(Philippe et al. 1994;</ns0:ref><ns0:ref type='bibr'>Steel et al. 1993;</ns0:ref><ns0:ref type='bibr'>Xia et al. 2003)</ns0:ref>. By using the overlap ratio method, we have verified that the vast majority of proteins show maximum distances between any two deeply diverged taxa, and only a small proportion, the slowest evolving, are still in the linear phase of change <ns0:ref type='bibr' target='#b11'>(Huang 2010;</ns0:ref><ns0:ref type='bibr'>Luo & Huang 2016;</ns0:ref><ns0:ref type='bibr' target='#b8'>Yuan et al. 2017</ns0:ref>). Variations at most genomic sites within human populations are also at optimum equilibrium, as evidenced by the observation that a slight increase above the present genetic diversity level in normal subjects is associated with patient populations suffering from complex diseases <ns0:ref type='bibr' target='#b7'>(Gui et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b8'>He et al. 2017;</ns0:ref><ns0:ref type='bibr'>Lei & Huang 2017;</ns0:ref><ns0:ref type='bibr' target='#b18'>Lei et al. 2018;</ns0:ref><ns0:ref type='bibr'>Yuan et al. 2012;</ns0:ref><ns0:ref type='bibr'>Yuan et al. 2014;</ns0:ref><ns0:ref type='bibr'>Zhu et al. 2015)</ns0:ref>, as well as the observation that the sharing of SNPs among different human groups is an evolutionary rate-dependent phenomenon, with more sharing in fast evolving sequences <ns0:ref type='bibr' target='#b8'>(Yuan et al. 2017)</ns0:ref>. It is important to note that a protein in a complex species plays more roles than its orthologous protein in a species of less organismal complexity, as explained by the maximum genetic diversity hypothesis <ns0:ref type='bibr' target='#b9'>(Hu et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b10'>Huang 2008;</ns0:ref><ns0:ref type='bibr' target='#b12'>Huang 2016)</ns0:ref>. A protein has more functions to play in complex organisms due in part to its involvement in more cell types, and hence it becomes more susceptible to mutational inactivation. While the divergence time among higher taxa such as between human and Macaca monkey is relatively short, mutation saturation could still happen for fast evolving proteins since the number of positions that can accept fixed substitutions is comparatively lower. It is also important to note that the type of saturation we describe here is slightly different from that seen in 'long branch attraction (LBA)' in phylogenetic trees <ns0:ref type='bibr' target='#b2'>(Bergsten 2005)</ns0:ref>.
In LBA, saturation means convergent mutations leading to the same amino acid residue or nucleotide among (across) multiple taxa (Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>, time point 5 for fast evolving proteins P1 and P2). Although they were derived independently, these shared alleles can be misinterpreted in phylogenetic analyses as being shared due to common ancestry. However, for the type of saturation we have discussed here, independent mutations at the same site among different taxa would generally lead to different taxa having different amino acids rather than the same (Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>, time points 3 and 4 for fast evolving proteins), since the probability of an independent mutation changing to the same amino acid is about 20 times lower than that of mutating to a different amino acid (assuming that each of the 20 amino acids is equally likely to be mutated to). Thus, the type of saturation we have described here is expected to be more commonplace in nature compared to that in the case of LBA. Since a single mutation is sufficient for a mismatch between any two taxa, multiple independent mutations at the same site leading to different amino acids would not increase the number of mismatches and would remain unnoticeable if one only aligns the sequences from two different taxa (Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>, the number of mismatches between P2 and P3 is 1 at time point 2 before saturation and remains 1 at time point 4 after saturation). It only becomes apparent when one aligns the sequences from three different taxa (Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>, time points 3 and 4 for fast evolving proteins), as we described above and in previous publications <ns0:ref type='bibr' target='#b11'>(Huang 2010;</ns0:ref><ns0:ref type='bibr'>Luo & Huang 2016)</ns0:ref>. However, even though the type of saturation we describe here does not increase the number of mismatches, it could result in a reduced number of mismatches in rare cases when independent mutations in two different taxa happen to lead to the same residue (Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>, time point 5 for P1 and P2). Thus, it does not preclude the type of saturation observed in the case of LBA. These two types of saturation are essentially just two different aspects of the same saturation phenomenon, one more commonplace and manifesting as a higher overlap ratio while the other less common and manifesting as LBA.</ns0:p></ns0:div>
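A toy simulation can illustrate the point made above and in Figure 4, namely that repeated independent mutations at an already-mismatched site do not increase the pairwise mismatch count (and can occasionally reduce it through convergence). The residues, mutation scheme and seed below are arbitrary assumptions, not data from the study:

```python
import random

AA = "ACDEFGHIKLMNPQRSTVWY"

def mismatches(s1, s2):
    return sum(a != b for a, b in zip(s1, s2))

random.seed(1)
ancestor = list("MKTAYIAKQR")
p2, p3 = ancestor[:], ancestor[:]
site = 3                                   # one fast evolving site
for step in range(1, 6):
    # Each lineage independently replaces the residue at the same site.
    p2[site] = random.choice(AA.replace(p2[site], ""))
    p3[site] = random.choice(AA.replace(p3[site], ""))
    print(step, "mismatches P2 vs P3:", mismatches(p2, p3))
```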
<ns0:div><ns0:head>Conclusion:</ns0:head><ns0:p>Our study here addressed whether observed amino acid variants in slowly evolving proteins are more or less neutral than those in fast evolving proteins. The results suggest that fixed and standing missense variations in slowly evolving proteins are more likely to be neutral, and they have implications for phylogenetic inferences.</ns0:p></ns0:div>
<ns0:div><ns0:head>Declarations:</ns0:head><ns0:p>Tables:</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref>. Relationship between evolutionary rates and the conservative nature of fixed amino acid substitutions. Evolutionary rates of proteins in the human genome are represented by the percentage of identities between human proteins and their orthologs in Macaca monkey.</ns0:p><ns0:p>The proteins are divided into groups of different evolutionary rates, and the proportion of conservative amino acid mismatches in each group is shown for the four different ranking matrixes.</ns0:p><ns0:note type='other'>Figure Legends:</ns0:note><ns0:note type='other'>Figure 1</ns0:note></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>2017). Briefly, we collected the whole genome protein data of Homo sapiens (version 36.3) and Macaca mulatta (version 1) from the NCBI FTP site, and then compared the human protein to the monkey protein using local BLASTP program at a cutoff of 1E-10. We only retained one human protein with multiple isoforms, and chose the monkey protein with the most significant E-value as the orthologous counterpart of each human protein. The aligned proteins were ranked by</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>1B) divided into 4 groups of different percentage identity between human and monkey, >99%, 98-99%, 96-98%, and 87-97%. We used four different scoring matrixes to give each amino acid change a rank score in terms of how conservative the change is, BLOSUM62 (Pearson 2013), EX (Yampolsky & Stoltzfus 2005), delta-V, and delta-P <ns0:ref type='bibr' target='#b3'>(Braun 2018;</ns0:ref> Pandey & Braun 2020). The results were largely similar. There was a general correlation between slower evolutionary rates and higher fractions of conservative changes, with a significant drop in the fraction of conservative changes between the slowest evolving, which was included in the slow set that has monkey-human identity > 99% and protein length >1102 amino acids, and the next slowest set (Figure 1A and B). Proteins with alignment gaps showed similar or slightly lower fractions of conservative changes than those without gaps. We further studied the remaining proteins with shorter protein length (200-1102 amino acids) divided into 4 groups (95-99%, 90-95%, 80-90%,</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Fraction of conservative substitutions in fixed changes in proteins of different evolutionary rates.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Fraction of conservative substitutions in standing missense substitutions in proteins of different evolutionary rates.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Fraction of conservative substitutions in missense SNPs with different MAFs in proteins of different evolutionary rates.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Illustration of non-conservative substitutions and mutation saturation in fast evolving proteins.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Fraction of conservative substitutions in fixed changes in proteins of different evolutionary rates.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 2 Figure 2 .</ns0:head><ns0:label>22</ns0:label><ns0:figDesc>Figure 2</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 3 Figure 3 .</ns0:head><ns0:label>33</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Fraction of conservative substitutions in missense SNPs with different MAFs in proteins of different evolutionary rates. SNPs from either fast or slowly evolving proteins were classified based on MAF values and the fractions of conservative changes in each class are shown. Statistical significance score in difference between slow and fast or between different MAF cutoffs are shown. **, P<0.01. Chi squared test.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Illustration of non-conservative substitutions and mutation saturation in fast evolving proteins.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>Natural selection is expected to play an important role in determining that. And natural selection of course would be most efficient if the mutated allele is functionally very different from the non-mutated allele. Fixed and standing conservative variants in slowly evolving proteins may be under a weaker natural selection for several reasons. First, substitutions in slowly evolving proteins are more likely to be conservative, and conservative changes may not alter protein structure and function as dramatically as the drastic changes, which may make it harder for natural selection to occur. Second, as fixed variants cannot be fixed because of negative selection on the variants per se, they are either neutral or under positive selection. Indeed, fast evolving proteins are known to be under more positive selection <ns0:ref type='bibr' target='#b4'>(Cai & Petrov 2010;</ns0:ref><ns0:ref type='bibr' target='#b8'>Yuan et al. 2017)</ns0:ref>, which implies that fixed variants in slowly evolving proteins can only become more neutral. Even if slightly deleterious mutations are fixed, it would not be because of selection but rather because of random drift. It makes sense for slowly evolving proteins to be spared by positive selection, because a mutation that takes a long time to arrive would be useless for quick adaptive needs. Finally, SNPs in the slow set may be under negative selection if they produce drastic changes, or under no selection if they produce conservative changes (assuming no positive selection as explained above). While one would expect fewer conservative changes in the rare SNPs compared to the common SNPs, since negative selection may account in part for the low MAF value, the difference in the fraction of conservative changes between the rare SNPs and the common ones in the slow set should be greater than that in the fast set, since the SNPs in the fast set may be under natural selection regardless of MAF values (low MAF SNPs under more negative selection while high MAF SNPs under both positive and negative selection). Our results are consistent with such expectations.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1 (on next page)</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Relationship between evolutionary rates and the conservative nature of fixed amino acid substitutions. Evolutionary rates of proteins in the human genome are represented by the percentage of identities between human proteins and their orthologs in Macaca monkey. The proteins are divided into groups of different evolutionary rates, and the proportion of conservative amino acid mismatches in each group is shown for the four different ranking matrixes.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 1 . Relationship between evolutionary rates and the conservative nature of fixed amino acid substitutions.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Evolutionary rates of proteins in the human genome are represented by the percentage of identities between human proteins and their orthologs in Macaca monkey. The proteins are divided into groups of different evolutionary rates, and the proportion of conservative amino acid mismatches in each group are shown for the four different ranking matrixes.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='5'>Protein length >1102 amino acid with no gaps in alignment</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Identity %</ns0:cell><ns0:cell>BLOSUM62</ns0:cell><ns0:cell>EX</ns0:cell><ns0:cell>delta-V</ns0:cell><ns0:cell>delta-P</ns0:cell><ns0:cell># proteins</ns0:cell><ns0:cell>Length ave.</ns0:cell></ns0:row><ns0:row><ns0:cell>>99</ns0:cell><ns0:cell>0.49</ns0:cell><ns0:cell>0.30</ns0:cell><ns0:cell>0.34</ns0:cell><ns0:cell>0.67</ns0:cell><ns0:cell>136</ns0:cell><ns0:cell>1532.7</ns0:cell></ns0:row><ns0:row><ns0:cell>98-99</ns0:cell><ns0:cell>0.44</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.63</ns0:cell><ns0:cell>137</ns0:cell><ns0:cell>1464.0</ns0:cell></ns0:row><ns0:row><ns0:cell>96-98</ns0:cell><ns0:cell>0.42</ns0:cell><ns0:cell>0.24</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.58</ns0:cell><ns0:cell>125</ns0:cell><ns0:cell>1539.4</ns0:cell></ns0:row><ns0:row><ns0:cell>87-96</ns0:cell><ns0:cell>0.38</ns0:cell><ns0:cell>0.20</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.55</ns0:cell><ns0:cell>57</ns0:cell><ns0:cell>1414.4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='5'>Protein length >1102 amino acid with gaps in alignment</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>>99</ns0:cell><ns0:cell>0.47</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.42</ns0:cell><ns0:cell>0.70</ns0:cell><ns0:cell>61</ns0:cell><ns0:cell>1659.3</ns0:cell></ns0:row><ns0:row><ns0:cell>98-99</ns0:cell><ns0:cell>0.41</ns0:cell><ns0:cell>0.24</ns0:cell><ns0:cell>0.34</ns0:cell><ns0:cell>0.63</ns0:cell><ns0:cell>119</ns0:cell><ns0:cell>1855.8</ns0:cell></ns0:row><ns0:row><ns0:cell>96-98</ns0:cell><ns0:cell>0.36</ns0:cell><ns0:cell>0.21</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.56</ns0:cell><ns0:cell>320</ns0:cell><ns0:cell>1792.7</ns0:cell></ns0:row><ns0:row><ns0:cell>87-96</ns0:cell><ns0:cell>0.33</ns0:cell><ns0:cell>0.19</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.50</ns0:cell><ns0:cell>437</ns0:cell><ns0:cell>1727.3</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='5'>Protein length 200-1102 amino acid with no gaps in alignment</ns0:cell><ns0:cell 
/></ns0:row><ns0:row><ns0:cell>95-99</ns0:cell><ns0:cell>0.43</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.63</ns0:cell><ns0:cell>6984</ns0:cell><ns0:cell>478.3</ns0:cell></ns0:row><ns0:row><ns0:cell>90-95</ns0:cell><ns0:cell>0.39</ns0:cell><ns0:cell>0.20</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.55</ns0:cell><ns0:cell>1229</ns0:cell><ns0:cell>407.9</ns0:cell></ns0:row><ns0:row><ns0:cell>80-90</ns0:cell><ns0:cell>0.38</ns0:cell><ns0:cell>0.20</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.52</ns0:cell><ns0:cell>276</ns0:cell><ns0:cell>350.9</ns0:cell></ns0:row><ns0:row><ns0:cell>60-80</ns0:cell><ns0:cell>0.41</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.37</ns0:cell><ns0:cell>0.57</ns0:cell><ns0:cell>91</ns0:cell><ns0:cell>372.8</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Protein length 200-1102 amino acid with gaps in alignment</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>95-99</ns0:cell><ns0:cell>0.38</ns0:cell><ns0:cell>0.22</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.59</ns0:cell><ns0:cell>2001</ns0:cell><ns0:cell>601.1</ns0:cell></ns0:row><ns0:row><ns0:cell>90-95</ns0:cell><ns0:cell>0.33</ns0:cell><ns0:cell>0.19</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.51</ns0:cell><ns0:cell>1529</ns0:cell><ns0:cell>566.4</ns0:cell></ns0:row><ns0:row><ns0:cell>80-90</ns0:cell><ns0:cell>0.30</ns0:cell><ns0:cell>0.17</ns0:cell><ns0:cell>0.30</ns0:cell><ns0:cell>0.47</ns0:cell><ns0:cell>1050</ns0:cell><ns0:cell>489.1</ns0:cell></ns0:row><ns0:row><ns0:cell>60-80</ns0:cell><ns0:cell>0.33</ns0:cell><ns0:cell>0.20</ns0:cell><ns0:cell>0.33</ns0:cell><ns0:cell>0.48</ns0:cell><ns0:cell>467</ns0:cell><ns0:cell>447.1</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Dear Editor Braun,
Thank you for editing our manuscript! Your explicit reviews are extremely helpful. We have performed the analyses that you suggested and presented the new results in the revised manuscript. We have revised the manuscript based on your comments and the other two reviewers’ comments. A detailed rebuttal is in the following (in bold). We believe the manuscript is now suitable for publication in Peer J.
Sincerely,
Shi Huang
Professor
Editor comments (Edward Braun)
MAJOR REVISIONS
I would like to start by apologizing for the long time to decision. As you will see, the reviews are split with one very negative review. However, I see the potential in the manuscript so I wanted to give the authors the opportunity to revise the manuscript. Therefore, I wrote my own review to provide helpful guidance. Obviously, the authors should pay close attention to reviewer 1 and also address those comments.
Here is my editorial review:
I’ve read through the manuscript “Enrichment in conservative amino acid changes among fixed and standing missense variations in slow evolving proteins” (note that “slow evolving proteins” should read “slowly-evolving proteins”) and the two submitted reviews and I’d like to give some very explicit guidance regarding a potential revision. As you can see, the two reviewers are split, and you received one very negative review. Although I agree with the first reviewer that there are some major issues with the presentation, I also feel that the manuscript contains the core of a very good idea. For publication, I would like to see two things from the authors that constitute major revisions:
1) A reframing of the problem they seek to study.
2) Quantitative analyses showing that they are seeing a genuine enrichment of conservative changes in missense variants in slowly evolving proteins rather than a simple effect of the genetic code.
The reframing of the problem is, in a sense, relatively straightforward. The authors seem to have some major misconceptions regarding phylogenetics. Normally, I would have rejected a paper with such a large number of misconceptions regarding the current state of the art in phylogenetics, but I don’t think the problem they are studying is actually a phylogenetic problem per se. This is what convinced me to give the authors a chance to conduct a major revision.
The big misconception the authors appear to have is related to the state of the art regarding distance methods. It is simply untrue that "…the distance matrix methods are sound provided that one uses neutral variants that accumulate to increase genetic distances in a nearly linear fashion common to the species concerned." (their lines 68-70). The same applies to their statement (lines 259-260) that "Distance matrix methods do not rely on such models but requires the molecular clock and hence the neutral variants." This statement regarding the molecular clock is true for some distance methods, such as UPGMA. However, as Sanderson and Kim (2000) stated "...performance evaluations [of phylogenetic methods] based on computer simulations (reviewed in Hillis et al., 1994; Li, 1997) and studies of 'known phylogenies' (Russo et al., 1996; Naylor and Brown, 1998; Leitner and Fitch, 1999). However, little consensus has emerged, except that a few methods that are not widely used anyway, such as UPGMA, perform poorly" (see the original for the citations embedded in the quote). In other words, the distance methods that require a molecular clock have been known to perform poorly for at least two decades and are therefore seldom, if ever, used.
The reality is that most commonly used distance methods, such as neighbor-joining (Saitou and Nei 1987), BioNJ (Gascuel 1997), and minimum evolution (Rzhetsky and Nei 1993; Desper and Gascuel 2002), do not assume a molecular clock. Indeed, least squares methods, like the Fitch-Margoliash method that the authors cite in their manuscript, can be used for taxa without a clock unless one imposes the additional requirement that the least squares trees be ultrametric. In other words, Fitch-Margoliash can be used with data that are not clock-like unless one imposes the assumption of a clock (this is the difference between the fitch and kitch methods in Phylip – the former does not assume a clock whereas the latter does).
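As a purely illustrative aside (not part of the original review), a neighbor-joining tree can be built from a distance matrix with no clock assumption at all; a minimal Biopython sketch with invented taxa and distances:

```python
from Bio import Phylo
from Bio.Phylo.TreeConstruction import DistanceMatrix, DistanceTreeConstructor

# Lower-triangular distances (diagonal included) for four made-up taxa.
dm = DistanceMatrix(names=["A", "B", "C", "D"],
                    matrix=[[0], [0.3, 0], [0.5, 0.6, 0], [0.9, 1.0, 0.7, 0]])
tree = DistanceTreeConstructor().nj(dm)  # neighbor joining; no molecular clock assumed
Phylo.draw_ascii(tree)
```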
Likewise, the criticism of maximum likelihood and Bayesian methods implicit in their statement that among “…the existing methods of phylogeny inferences, most, such as the maximum likelihood methods and Bayesian methods, require the assumption of certain evolutionary models of amino acid or nucleotide changes, which may be unrealistic (Felsenstein 1981; Rannala, Yang 1996)” (lines 256-258) is unfounded. It is true that proofs of consistency for ML phylogenetic estimation require the generating model to be correct. However, there are models of evolution that incorporate positive selection (e.g., Yang and Nielsen 2000; although I would acknowledge that those models are seldom used for tree estimation; they are primarily used to study the process of protein evolution given “known” trees). But that issue is largely beside the point; the GTR model and its sub-models essentially assume neutrality and they appear to be fairly robust under most circumstance; the problem cases are limited to very short branches. Indeed, this is the whole basis for the field of phylogenetics – if ML and Bayesian methods were so dependent on the minutia of model fit that they fundamentally don’t work unless the model is perfect (a criticism implicit in their lines 256-258) the members of the phylogenetic community would have noticed by now!
In other words, I feel the business of justifying their work with reference to the molecular clock (or phylogenetics in general) is somewhat misguided. The interaction between clock-like evolution and the molecular clock is complex. Clock-like evolution is certainly helpful and can allow simple models of evolution (whether they are used in a maximum likelihood, Bayesian, or even a distance framework) to perform better than might be naively expected (Bruno and Halpern 1999). But clock-like evolution is not a prerequisite for any commonly used method of phylogenetic estimation.
I’ve spent a lot of time criticizing the authors’ framing of the problem with their reference to the field of phylogenetics, but their study is really a molecular evolution study and not a phylogenetic study. This, along with the second reviewer’s relatively positive evaluation of the manuscript, is what encouraged me to offer “major revisions” rather than “reject.”
There is one relatively large (and potentially problematic) issue with the author’s analysis. On average, amino acid substitutions that require a single substitution tend to be more conservative than those requiring more than one substitution. Thus, their results might reflect something intrinsic to the structure of the genetic code. I think the authors could deal with this by using another way to assess conservative vs. non-conservative amino acid changes. The author’s use of the log-odds scores from the BLOSUM62 matrix to establish conservative vs. non-conservative substitutions suffers from the fundamental problem that the effects of the genetic code are buried in the BLOSUM62 matrix. In other words, the log-odds of amino acid i being aligned with amino acid j reflects two things: the probability of a missense mutation leading to an i to j polymorphism and the probability that j will be fixed. You are (de facto) using the BLOSUM62 matrix as a proxy for the fixation probability, which you assume to reflect the “conservativeness” of the amino acid exchange. However, the strong evidence that the “instantaneous” doublet nucleotide substitution rate is very low (Whelan and Goldman 2004) means that the first issue the probability of a missense mutation leading to an i to j polymorphism should have a big impact on the values in the BLOSUM62 matrix (or any empirical evolutionary matrix, including the original Dayhoff et al. [1978] PAM matrices or the Gonnett et al. [1992] matrix).
There is a simple solution to this problem. There are matrices of conservativeness that remove the impact of substitution probability. Specifically, there is the "EX" matrix of Yampolsky and Stoltzfus (2005), which is based on laboratory mutagenesis. There are also physicochemical matrices in Braun (2018); I would recommend using EX, delta-V (normalized change in amino acid side chain volume), and delta-P (normalized change in amino acid side chain polarity). All three of these changes can be obtained from matrices in spreadsheets that are available from github (https://github.com/ebraun68/clade_specific_prot_models; this is the github site for Pandey and Braun 2020). Specifically, the EX matrix (or, more accurately, a normalized symmetric version of the EX matrix) is in the excel spreadsheet EX_matrix_sym.xlsx; the delta-V and delta-P matrices can be found in one of the sheets (the sheet called "Exchanges") in the file exchange_Pandey_Braun.xlsx.
I think that reframing the manuscript as simply an exploration of molecular evolution and using some matrices that are unaffected by the structure of the genetic code (i.e., the EX, delta-V, and delta-P matrices) will greatly improve the manuscript. On lines 113-115 the authors state that the "…degree of physical/chemical change in an amino acid missense mutation was ranked by a scoring series, -3, -2, -1, 0, 1, 2, 3, in the BLOSUM62 matrix with more positive values representing more conservative changes." It seems to me that they could do something very similar with the three matrices I recommend. The only major differences are the range of values (all three of the matrices I recommend are normalized to range from zero to one) and the fact that the matrices include non-integer values. However, the same idea applies – values closest to one are more conservative and values closer to zero are less. If the authors wanted to work with integers (perhaps this would be easier with their code) they could generate integer versions of the matrices by multiplying by a constant, subtracting half of that constant, and rounding to the nearest integer (e.g., if they multiply by 10, subtract 5, and round, they would get a matrix of values that range from -5 to +5, with higher positive values reflecting more conservative changes). I feel this approach would overcome the fundamental flaw of using BLOSUM62 for this analysis.
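A minimal sketch of the suggested rescaling (the 3x3 values are invented for illustration and are not taken from the real EX, delta-V or delta-P matrices):

```python
# A matrix normalized to [0, 1] is mapped to integer scores in a BLOSUM-like
# -5..+5 range by multiplying by 10, subtracting 5 and rounding.
def to_integer_scores(norm_matrix, scale=10):
    return [[round(v * scale - scale / 2) for v in row] for row in norm_matrix]

example = [[1.00, 0.22, 0.75],
           [0.22, 1.00, 0.40],
           [0.75, 0.40, 1.00]]
for row in to_integer_scores(example):
    print(row)
```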
It would certainly be fine to keep the BLOSUM62 analyses you have already conducted. Likewise, it would be fine to acknowledge that your results might imply that distance analyses of slowly evolving loci are better. There is actually a growing literature that distance methods may have desirable properties not necessarily related to the clock but rather to the multispecies coalescent (Dasarathy et al. 2015; Rusinko and McPartlon 2017; Allman et al. 2019). Thus, empirical hints for working with distance methods might be valuable. Of course, there are other reasons to avoid rapidly evolving loci (i.e., avoiding saturation). My big point is that this manuscript should not primarily focus on phylogeny and instead focus on molecular evolution.
I would like to say three additional things. First, I think the authors should endeavor to improve the English language presentation of their work. I am always a bit worried about saying this because the need for good English writing puts an additional burden on researchers who are not native English speakers. But (perhaps unfortunately) it is simply a fact that the international language of science at this point is English. Second, I thought long and hard about whether the manuscript would merit publication if using the EX, delta-V, delta-P matrices showed that their core finding (that slowly-evolving proteins are enriched for conservative amino acid changes relative to rapidly evolving proteins) is incorrect (or more accurately, that the conclusion only emerges when the BLOSUM62 matrix is used to score conservative vs. non-conservative changes). I think it would be interesting regardless, as long as the results are sound. Finally, I recognize that I am asking the authors to use information from two of my papers (Braun 2018 and the Pandey and Braun 2020 bioRxiv preprint). I don't like to push citations of my own work too strongly when I am editing papers, but in this case it is only two papers and I feel that specific work is directly relevant and – in the case of the EX, delta-V, and delta-P matrices – directly addresses what I believe to be a fundamental limitation of using the BLOSUM62 matrix to score conservative vs non-conservative changes.
I hope this guidance is helpful. I do think this paper has potential, but I also believe these improvements are essential to make the manuscript publishable. Note that I do plan send the manuscript out for re-review if it is resubmitted in a revised version.
---
References for this editorial review:
Allman, E. S., Long, C., & Rhodes, J. A. (2019). Species tree inference from genomic sequences using the log-det distance. SIAM Journal on Applied Algebra and Geometry, 3(1), 107-127.
Braun, E.L. (2018) An evolutionary model motivated by physicochemical properties of amino acids reveals variation among proteins. Bioinformatics, 34, i350-i356.
Bruno, W. J., & Halpern, A. L. (1999). Topological bias and inconsistency of maximum likelihood using wrong models. Molecular Biology and Evolution, 16(4), 564-566.
Dayhoff, M.O. et al. (1978) A model of evolutionary change in proteins. In Dayhoff, M.O. (ed.), Atlas of Protein Sequence and Structure, National Biomedical Research Foundation, Silver Springs, MD, Vol. 5, pp. 345-352.
Desper R, and Gascuel O. Fast and accurate phylogeny reconstruction algorithms based on the minimum-evolution principle J Comput Biol., 2002, vol. 9 (pg. 687-705)
Dasarathy G., Nowak R., & Roch S. Data requirement for phylogenetic inference from multiple loci: a new distance method. IEEE/ACM Trans. Comput. Biol. Bioinforma, 12 (2) (2015), pp. 422-432
Gascuel, O. (1997). BIONJ: an improved version of the NJ algorithm based on a simple model of sequence data. Molecular biology and evolution, 14(7), 685-695.
Gonnet, G. H., Cohen, M. A., & Benner, S. A. (1992). Exhaustive matching of the entire protein sequence database. Science, 256(5062), 1443-1445.
Pandey, A. & Braun, E.L. Protein evolution is structure dependent and non-homogeneous across the tree of life. bioRxiv 2020.01.28.923458; doi: https://doi.org/10.1101/2020.01.28.923458
Rzhetsky, A., & Nei, M. (1993). Theoretical foundation of the minimum-evolution method of phylogenetic inference. Molecular biology and evolution, 10(5), 1073-1095.
Rusinko, J., & McPartlon, M. (2017). Species tree estimation using Neighbor Joining. Journal of theoretical biology, 414, 5-7.
Saitou, N., & Nei, M. (1987). The neighbor-joining method: a new method for reconstructing phylogenetic trees. Molecular biology and evolution, 4(4), 406-425.
Sanderson, M. J., & Kim, J. (2000). Parametric phylogenetics? Systematic Biology, 49(4), 817-829.
Whelan, S., & Goldman, N. (2004). Estimating the frequency of events that cause multiple-nucleotide changes. Genetics, 167(4), 2027-2043.
Yampolsky, L.Y. and Stoltzfus, A. (2005) The exchangeability of amino acids in proteins. Genetics, 170, 1459-1472.
Yang, Z., & Nielsen, R. (2000). Estimating synonymous and nonsynonymous substitution rates under realistic evolutionary models. Molecular biology and evolution, 17(1), 32-43.
[# PeerJ Staff Note: The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title) #]
[# PeerJ Staff Note: It is PeerJ policy that additional references suggested during the peer-review process should only be included if the authors are in agreement that they are relevant and useful #]
[# PeerJ Staff Note: Please ensure that all review comments are addressed in a rebuttal letter and any edits or clarifications mentioned in the letter are also inserted into the revised manuscript where appropriate. It is a common mistake to address reviewer questions in the rebuttal letter but not in the revised manuscript. If a reviewer raised a question then your readers will probably have the same question so you should ensure that the manuscript can stand alone without the rebuttal letter. Directions on how to prepare a rebuttal letter can be found at: https://peerj.com/benefits/academic-rebuttal-letters/ #]
Responses: We thank the editor for these very helpful comments. We have revised the introduction and reframed the question addressed as requested by the editor. We have also performed the analyses using the three matrixes as suggested and presented the results in the revised manuscript. The new analyses strongly strengthened the conclusions. We have also had the manuscript edited by two native English speakers.
Reviewer 1 (Anonymous)
Basic reporting
no comment
Experimental design
no comment
Validity of the findings
The primary finding that amino-acid substitutions and polymorphisms in slowly evolving (conserved) genes are more conservative than those in non-slow genes is sensible and is documented reasonably by the authors.
However, I have a concern about the authors use of the term 'saturation' and their interpretation of the results.
Based on the Discussion, the authors seem to take a somewhat idiosyncratic view of what 'saturation' means. For a sequence to be saturated with substitutions I think generally implies that so many repeated substitutions occurred along a particularly long branch(es) that the stationary distribution has more or less been approached in subsets of the data, which can cause artificial attraction among the longest branches when inferring phylogenetic trees ('long branch attraction'). It is hard to imagine such problems occurring in human-macaque comparisons, since these are two very closely related species by any standard. The authors instead seem to be discussing a problem where individual sites are saturated with substitutions. However, their definition seems to be that any site with two or more apparent substitutions is 'saturated'. If this is really what they are arguing, it does not seem correct. Even parsimony based methods can deal with such cases. Perhaps they are interested in some pathology of distance-based methods, but I can't quite figure out what they are actually claiming. For the most part, I think this problem affects only the few places throughout the manuscript where 'saturation' is mentioned. However, the second half of the discussion seems heavily based on the idea, which I feel is insufficiently justified and incompletely explained. Perhaps the authors can clarify why their definition of saturation is so different from the standard view from statistical phylogenetics. I recognize this aspect of the study seems to be connected with other work by these authors, however, I feel that the discussion of these matters is somewhat disconnected from the study, which is otherwise rather unobjectionable. I therefore suggest that for publication either a careful exposition of these ideas be presented that tries to reconcile the standard definition of saturation with theirs, or that this part of the discussion be removed.
Responses: We thank the reviewer for recognizing the value of our study. With regard to the discussion on saturation, the reviewer is correct to point out that we are using the term saturation to refer to something that is different from that commonly used in phylogenetic studies as in the case of long branch attraction. We have removed the discussion on phylogenetic methods, which may help to avoid mislead readers to interpret our saturation concept in connection with phylogenetic trees. We are glad to see that the reviewer has got it right that saturation in our case here means encountering multiple mutations across taxa and time for a given site. We have now more explicitly stated this when the term saturation was first introduced in the text. We have thought it hard whether to delete the discussion on saturation as suggested by the reviewer. We believe that including it may be more helpful to make sense of the results. Otherwise, the results may be hard to interpret, at least to us. This is especially the case now in light of the new results from the three other matrixes that have removed the impact of substitution probability. The new results showed that substitution probability cannot be the reason for the results observed. However, if the fast evolving proteins are not in mutation saturation (as defined in our way), it would be hard to explain the results. If both slowly and fast evolving proteins are not yet at mutation saturation, one would expect fast evolving proteins to have more drastic changes simply because they have a higher probability for drastic changes to happen. For higher mutation rate and hence higher probability for drastic changes to be a non-factor in our results, mutation saturation may be the best explanation for our results, in our opinion. At saturation, the range of mutations that have happened at any given site is irrelevant to the particular type of possible alleles the site may carry at present time. Natural selection is expected to play an important role in determining that. And natural selection of course would be most efficient if the mutated allele is functionally very different from the non-mutated allele.
We thank the reviewer for the suggestion that we need to reconcile the standard definition of saturation with ours. We have added a discussion of this in the text and a new figure 4 to illustrate our main point.
It is also important to note that the type of saturation we describe here is slightly different from that seen in “long branch attraction (LBA)” in phylogenetic trees (Bergsten, 2005). In LBA, saturation means convergent mutations leading to the same amino acid residue or nucleotide among (across) multiple taxa (Figure 4, time point 5 for fast evolving protein P1 and P2). Although they were derived independently, these shared alleles can be misinterpreted in phylogenetic analyses as being shared due to common ancestry. However, for the type of saturation we have discussed here, independent mutations at the same site among different taxa would generally lead to different taxa having different amino acids rather than the same (Figure 4, time point 3 and 4 for fast evolving proteins), since the probability of an independent mutation changing to the same amino acid is about 20 times lower than that of mutating to a different amino acids (assuming no difference in the probability of being mutated to among the 20 amino acids). Thus, the type of saturation we have described here is expected to be more commonplace in nature compared to that in the case of LBA. Since a single mutation is sufficient for a mismatch between any two taxa, multiple independent mutations at the same site leading to different amino acids would not increase the number of mismatches and would remain unnoticeable if one only aligns the sequences from two different taxa (Figure 4, the number of mismatch between P2 and P3 is 1 at time point 2 before saturation and remains as 1 at time point 4 after saturation). It only becomes apparent when one aligns the sequences from three different taxa (Figure 4, time point 3 and 4 for fast evolving proteins), as we described above and in previous publications (Huang, 2010; Luo and Huang, 2016). However, even though the type of saturation we describe here does not increase the number of mismatches, it could result in a reduced number of mismatches in rare cases when independent mutations in two different taxa happen to lead to the same residue (Figure 4, time point 5 for P1 and P2). Thus, it does not preclude the type of saturation observed in the case of LBA. These two types of saturation are essentially just two different aspects of the same saturation phenomenon, one more commonplace and manifesting as higher overlap ratio while the other less common and manifesting as LBA.
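A quick numerical check of the "about 20 times lower" figure, under the simplifying assumption stated in the parenthesis that all 19 alternative residues are equally likely to be mutated to:

```python
# Two lineages each pick a new residue independently and uniformly from the
# 19 alternatives; they converge on the same residue with probability 1/19.
p_same = 1 / 19
p_diff = 18 / 19
print(f"P(same) = {p_same:.3f}, P(different) = {p_diff:.3f}, ratio = {p_diff / p_same:.0f}")
```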
Regarding the reviewer’s comment on human-macaque protein distances being unlikely to be at saturation as we use the concept here, it is important to note that a protein in a complex species plays more roles than its orthologous protein in a species of less organismal complexity, as explained by the maximum genetic diversity hypothesis (Hu et al., 2013; Huang, 2008, 2016). A protein has more functions to play in complex organisms due in part to its involvement in more cell types, and hence it becomes more susceptible to mutational inactivation. While the divergence time among higher taxa such as between human and Macaca monkey is relatively short, mutation saturation could still happen for fast evolving proteins since the number of positions that can accept fixed substitutions is comparatively lower.
One other spot where the results seem over interpreted is where the authors state 'our findings here are consistent with the view that saturation is maintained by positive selection as fast evolving proteins have lower fraction of conservative changes, and inconsistent with the presently popular view that variant sites at saturation are fully neutral.' First, fast evolving proteins include both neutrally evolving and positively selected genes. Second, this hardly seems like any kind of evidence that 'saturation is maintained by positive selection.' Third, I cannot see how any one could think that 'variant sites at saturation are fully neutral'. These ideas need to be more clearly expressed and justified. However, more importantly, they do not seem to me to follow from the results of the paper.
Responses: We thank the reviewer for catching this. We have deleted this sentence.
Reviewer 2 (Anonymous)
Basic reporting
The English used throughout the manuscript is poor and this single factor has a great impact on the value of the whole study. For example, the first sentence of the whole manuscript 'Proteins were first used in the early 1960s to discover the molecular clock' is inaccurate. It should be 'Protein sequences were first used...'. There are many places throughout the manuscript that contain this type of inaccuracy and irregular use of common English. For instance, Line 68 'methods are sound provided that...' ?? and Line 74 'would cease...'
Responses: We thank the reviewer for catching these language mistakes. We have now improved the language in the revised manuscript by having had two native English speakers to edit the writings.
Experimental design
Authors should provide a toy example of sequences to illustrate their main point. The whole bioinformatics experiment design and procedure look puzzling.
Responses: We thank the reviewer for this suggestion. We have revised the writings for the methods section. The main point is, in our opinion, quite straightforward but our poor writings may have prevented the reviewer from getting it. We hope the revised version has expressed it better. We have now provided an illustration of our main point as in Figure 4.
Validity of the findings
From the nonstandard use of terminology, it seems the authors who wrote the manuscript are not familiar with the fields of molecular evolution and population genetics. For example, lines 52-53 'gene non-identity between species' should be 'sequence divergence between species'. Line 76 'negative selection as defined by dN/dS ratio' should be 'negative selection as measured using dN/dS ratio'. In lines 78-80 it is difficult to understand what the authors are talking about.
Responses: We thank the reviewer for catching these mistakes. We have in the revised manuscript corrected these and other language use mistakes.
Comments for the Author
Please define 'conservative mismatches'.
Responses: We thank the reviewer for this suggestion. By conservative mismatches, we mean a mismatch that is caused by a conservative replacement (also called a conservative mutation or a conservative substitution). It is apparently not a commonly used term and we have decided not to use it, and we have used instead conservative substitution.
" | Here is a paper. Please give your review comments after reading it. |
653 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The process of molecular evolution has many elements that are not yet fully understood.</ns0:p><ns0:p>Evolutionary rates are known to vary among protein coding and noncoding DNAs, and most of the observed changes in amino acid or nucleotide sequences are assumed to be non-adaptive by the neutral theory of molecular evolution. However, it remains unclear whether fixed and standing missense changes in slowly evolving proteins are more or less neutral compared to those in fast evolving genes. Here, based on the evolutionary rates as inferred from identity scores between orthologs in human and Rhesus Macaques (Macaca mulatta), we found that the fraction of conservative substitutions between species was significantly higher in their slowly evolving proteins. Similar results were obtained by using four different methods of scoring conservative substitutions, including three that remove the impact of substitution probability, where conservative changes require fewer mutations. We also examined the single nucleotide polymorphisms (SNPs) by using the 1000 genomes project data and found that missense SNPs in slowly evolving proteins also had a higher fraction of conservative changes, especially for common SNPs, consistent with more non-conservative substitutions and hence stronger natural selection for SNPs, particularly rare ones, in fast evolving proteins. These results suggest that fixed and standing missense variants in slowly evolving proteins are more likely to be neutral.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Since the early 1960s, protein sequence comparisons have become increasingly important in molecular evolutionary research <ns0:ref type='bibr' target='#b5'>(Doolittle & Blombaeck 1964;</ns0:ref><ns0:ref type='bibr' target='#b6'>Fitch & Margoliash 1967;</ns0:ref><ns0:ref type='bibr' target='#b22'>Margoliash 1963;</ns0:ref><ns0:ref type='bibr' target='#b38'>Zuckerkandl & Pauling 1962)</ns0:ref>. An apparent relationship between protein sequence divergence and time of separation led to the molecular clock hypothesis, which assumes a constant and similar evolutionary rate among species <ns0:ref type='bibr' target='#b18'>(Kumar 2005;</ns0:ref><ns0:ref type='bibr' target='#b22'>Margoliash 1963;</ns0:ref><ns0:ref type='bibr' target='#b38'>Zuckerkandl & Pauling 1962)</ns0:ref>. Thus, sequence divergence between species is thought to be largely a function of time. The molecular clock, in turn, led Kimura to propose the neutral theory to explain nature: sequence differences between species were thought to be largely due to neutral changes rather than adaptive evolution <ns0:ref type='bibr' target='#b16'>(Kimura 1968</ns0:ref>). However, the notion of a molecular clock may be unrealistic since it predicts a constant substitution rate as measured in generations, whereas the observed molecular clock is measured in years <ns0:ref type='bibr' target='#b1'>(Ayala 1999;</ns0:ref><ns0:ref type='bibr' target='#b28'>Pulquerio & Nichols 2007)</ns0:ref>. The neutral theory remains an incomplete explanatory theory <ns0:ref type='bibr' target='#b10'>(Hu et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b15'>Kern & Hahn 2018)</ns0:ref>. Evolutionary rates are known to vary among protein coding and non-coding DNAs. The neutral theory posits that the substitution rate under selective neutrality is expected to be equal to the mutation rate <ns0:ref type='bibr' target='#b17'>(Kimura 1983)</ns0:ref>. If mutations/substitutions are not neutral or are under natural selection, the substitution rate would be affected by the population size and the selection coefficient, which are unlikely to be constant among all lineages. Slowly evolving genes are well known to be under stronger purifying or negative selection as measured by using dN/dS ratio, which means that a new mutation has a lower probability of being fixed <ns0:ref type='bibr' target='#b4'>(Cai & Petrov 2010)</ns0:ref>.</ns0:p><ns0:p>However, negative selection as detected by the dN/dS method is largely concerned with nonobserved mutations and says little about the fixed or observed variations. And most molecular evolutionary approaches such as phylogenetic and demographic inferences are concerned with observed variants. It remains to be determined whether fixed and standing missense substitutions in slowly evolving genes are more or less neutral relative to those in fast evolving genes.</ns0:p><ns0:p>We here examined the fraction of conservative substitutions (amino acid replacement in a protein that changes a given amino acid to a different amino acid with similar biochemical properties) in proteins of different evolutionary rates. We compared the protein orthologs of two relatively closely related species, Homo sapiens and Macaca mulatta, to obtain values of percentage identity to represent evolutionary rates. We found that the proportion of conservative substitutions between species was higher in the slowest evolving set of proteins than in faster evolving proteins. 
Using the 1000 Genomes Project (1KG) phase 3 dataset <ns0:ref type='bibr' target='#b0'>(Auton et al. 2015)</ns0:ref>, we also found that missense single nucleotide polymorphisms (SNPs) from the slowest evolving set of proteins, especially those with high minor allele frequency (MAF), were enriched with conservative amino acid changes, consistent with these changes being under weaker natural selection.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials and Methods</ns0:head><ns0:p>Classification of proteins as slowly and fast evolving. The identification of slowly evolving proteins and their associated SNPs was done as previously described <ns0:ref type='bibr' target='#b34'>(Yuan et al. 2017)</ns0:ref>. Briefly, we collected the whole genome protein data of Homo sapiens (version 36.3) and Macaca mulatta (version 1) from the NCBI FTP site, and then compared the human protein to the monkey protein using the local BLASTP program at a cutoff of 1E-10. For human proteins with multiple isoforms, we retained only one isoform, and we chose the monkey protein with the most significant E-value as the orthologous counterpart of each human protein. The aligned proteins were ranked by percentage identities. Proteins that show the highest identity between human and monkey were included in the slowly evolving set (including 423 genes > 304 amino acids in length with 100% identity and 178 genes > 1102 amino acids in length with 99% identity between monkey and human). The rest are all considered fast evolving proteins. The cutoff criterion was based on the empirical observation of low substitution saturation, and the finding that missense SNPs from the slow set of proteins produced genetic diversity patterns that were distinct from those found in the fast set <ns0:ref type='bibr' target='#b34'>(Yuan et al. 2017)</ns0:ref>. The BLASTP alignment program is not expected to produce very different results from other programs, especially for highly conserved proteins. We have limited our analysis to high identity orthologs with length >200 amino acids and percent identity >60% between monkey and human. So, variation in alignment is not expected to affect comparisons between our analysis and others. SNP selection. We downloaded the 1KG phase 3 data and assigned SNP categories using ANNOVAR <ns0:ref type='bibr' target='#b0'>(Auton et al. 2015)</ns0:ref>. We then picked out the missense SNPs located in the slowly evolving set of genes from the downloaded VCF files <ns0:ref type='bibr' target='#b34'>(Yuan et al. 2017)</ns0:ref>. MAF was derived from AF (alternative allele frequency) values from the VCF files. Missense SNPs in fast evolving genes included all those from 1KG that are not from the slowly evolving set. For fixed substitutions as revealed by BLASTP, conservative changes were scored by using four different matrixes. The BLOSUM62 matrix has a scoring range from -3 to 3 (-3, -2, -1, 0, 1, 2, 3), with higher positive values representing more conservative changes <ns0:ref type='bibr' target='#b25'>(Pearson 2013)</ns0:ref>. We assigned each amino acid mutation a score, and we used a score >0 to denote conservative changes in cases where the number of conservative changes is enumerated.
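As an illustration of this scoring step (a minimal sketch, not the original pipeline), Biopython's copy of BLOSUM62 can be used to score aligned mismatches and count positive-scoring changes as conservative; note that the standard matrix has a wider score range than the -3 to 3 ranking described here, and the aligned sequence pair below is an invented example:

```python
from Bio.Align import substitution_matrices

blosum62 = substitution_matrices.load("BLOSUM62")

def conservative_fraction(human_seq, monkey_seq):
    """Fraction of mismatched aligned positions with a positive BLOSUM62 score."""
    scores = [blosum62[a, b] for a, b in zip(human_seq, monkey_seq)
              if a != b and '-' not in (a, b)]
    if not scores:
        return 0.0
    return sum(1 for s in scores if s > 0) / len(scores)

print(conservative_fraction("MKTAYIAKQR", "MKSAYLAKQR"))
```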
As the BLOSUM62 matrix does not take into account the effect of substitution probability (the fact that conservative changes require fewer mutations), we also used three other matrixes to score conservative amino acid replacements that have removed the impact of substitution probability, including the'EX' matrix <ns0:ref type='bibr' target='#b33'>(Yampolsky & Stoltzfus 2005)</ns0:ref>, which is based on laboratory mutagenesis, and the two physicochemical matrices in <ns0:ref type='bibr' target='#b3'>Braun (2018)</ns0:ref> and <ns0:ref type='bibr' target='#b24'>Pandey and Braun (2020)</ns0:ref>: delta-V (normalized change in amino acid side chain volume), and delta-P (normalized change in amino acid side chain polarity) <ns0:ref type='bibr' target='#b3'>(Braun 2018;</ns0:ref><ns0:ref type='bibr' target='#b24'>Pandey & Braun 2020)</ns0:ref>. All three matrixes in spreadsheets are available from GITHUB (https://github.com/ebraun68/clade_specific_prot_models). Specifically, the EX matrix (or, more accurately, a normalized symmetric version of the EX matrix) is in the excel spreadsheet 'EX_matrix_sym.xlsx'; the delta-V and delta-P matrices can be found in one of the sheets (the sheet called 'Exchanges') in the file 'exchange_Pandey_Braun.xlsx'. All three of the matrixes are normalized to range from zero to one. To be comparable to the BLOSUM62 matrix, we generated integer versions of these three matrixes by multiplying by 10, subtracting 5, and then rounding to the nearest integer. Here the matrix values range from -5 to +5 with higher positive values representing more conservative changes. For EX matrix, we used score >2 to denote conservative changes. For delta-V and delta-P matrixes, we used score >3 to denote conservative changes. In this way of using different cutoff scores to represent conservative changes, we could keep the fraction of conservative changes close to 0.5 for each of the four different matrixes.</ns0:p><ns0:p>Statistics. Chi-squared test was performed using GraphPad Prism 6.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results:</ns0:head><ns0:p>Fixed amino acid substitutions and evolutionary rates of proteins. We determined the evolutionary rates of proteins in the human genome by the percentage of identities between human proteins and their orthologs in Macaca mulatta, as described previously <ns0:ref type='bibr' target='#b34'>(Yuan et al. 2017)</ns0:ref>. We then divided the proteins into several groups of different evolutionary rates and compared the proportion of conservative amino acid substitutions in each group.</ns0:p><ns0:p>For slowly evolving proteins that have yet to reach mutation saturation (no independent mutations occurring at the same site among species and across time), one of the two residues or alleles at a mismatched site would be ancestral, and so a mismatch due to a conservative change would involve a conservative mutation during evolution from the ancestor to the extant species. At mutation saturation for fast evolving proteins, however, where a site has encountered multiple mutations across taxa and time, a drastic substitution would still necessarily involve a non-conservative mutation, but a conservative substitution can result from at least two independent non-conservative mutations (if the common ancestor has Arg at some site, a drastic mutation event at this site in each of the two species, Arg to Leu in one and Arg to Ile in the other, may lead to a conservative substitution of Leu versus Ile). Thus, a conservative substitution at mutation saturation merely means fewer physical and chemical differences between the two species concerned and says little about the actual mutation events. A lower fraction of conservative substitutions at saturation for fast evolving proteins would mean more physical and chemical differences between the two species, which may more easily translate into functional differences for natural selection to act upon.</ns0:p><ns0:p>To verify that the slowest evolving proteins with length >1102 amino acids and percentage identity >99% are distinct from the fast set, we first compared proteins with length >1102 amino acids with no gaps in alignment (Table <ns0:ref type='table' target='#tab_4'>1</ns0:ref>, Figure <ns0:ref type='figure' target='#fig_7'>1A</ns0:ref>) or with gaps (Table <ns0:ref type='table' target='#tab_4'>1</ns0:ref>, Figure <ns0:ref type='figure' target='#fig_7'>1B</ns0:ref>), divided into 4 groups of different percentage identity between human and monkey: >99%, 98-99%, 96-98%, and 87-96%. We used four different scoring matrixes to give each amino acid change a rank score for how conservative the change is: BLOSUM62 (Pearson 2013), EX <ns0:ref type='bibr' target='#b33'>(Yampolsky & Stoltzfus 2005)</ns0:ref>, delta-V, and delta-P <ns0:ref type='bibr' target='#b3'>(Braun 2018;</ns0:ref><ns0:ref type='bibr' target='#b24'>Pandey & Braun 2020</ns0:ref>). The results were largely similar. There was a general correlation between slower evolutionary rates and higher fractions of conservative changes, with a significant drop in the fraction of conservative changes between the slowest evolving group (monkey-human identity >99% and protein length >1102 amino acids, which was included in the slow set) and the next slowest group (Figure <ns0:ref type='figure' target='#fig_7'>1A and B</ns0:ref>). Proteins with alignment gaps showed similar or slightly lower fractions of conservative changes than those without gaps.
We further studied the remaining, shorter proteins (200-1102 amino acids in length), divided into 4 groups (95-99%, 90-95%, 80-90%, and 60-80% identity), and found similar but less robust and consistent trends (Table <ns0:ref type='table' target='#tab_6'>1 and Figure 1C and D)</ns0:ref>.</ns0:p></ns0:div>
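One way the identity binning behind Table 1 could be reproduced is sketched below; the input layout (percent identity plus pre-scored mismatches per ortholog pair) is an assumption for illustration, not the format of the actual analysis files.

```python
# Sketch of a Table 1-style summary: group ortholog pairs into identity bins and report the
# pooled fraction of conservative mismatches per bin.
from collections import defaultdict

IDENTITY_BINS = [(99, 100), (98, 99), (96, 98), (87, 96)]   # long-protein bins from the text

def bin_of(identity: float):
    for low, high in IDENTITY_BINS:
        if low <= identity <= high:          # simplified; the real bins are half-open
            return f"{low}-{high}%"
    return None

def summarize(pairs):
    """pairs: iterable of (percent_identity, [is_conservative flags]) for proteins >1102 aa."""
    counts = defaultdict(lambda: [0, 0])     # bin label -> [conservative, total]
    for identity, flags in pairs:
        label = bin_of(identity)
        if label is None:
            continue
        counts[label][0] += sum(flags)
        counts[label][1] += len(flags)
    return {label: cons / total for label, (cons, total) in counts.items() if total}

toy = [(99.4, [True, True, False]), (97.1, [True, False, False, False])]
print(summarize(toy))   # {'99-100%': 0.666..., '96-98%': 0.25}
```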
<ns0:div><ns0:head>Standing amino acid variants and evolutionary rates of proteins</ns0:head><ns0:p>We next studied the missense SNPs found in proteins with different evolutionary rates by using the 1KG dataset <ns0:ref type='bibr' target='#b0'>(Auton et al. 2015)</ns0:ref>. There were 15271 missense SNPs in the slowly evolving set of proteins (>1102 aa with 99% identity and >304 aa with 100% identity) and 546297 missense SNPs in the fast set (all proteins that remain after excluding the slow set). We assigned each amino acid change found in a missense SNP a conservation score as described above. The number of SNPs in each score category was then enumerated. We performed this analysis by using each of the four different scoring matrixes and found largely similar results (Figure <ns0:ref type='figure' target='#fig_4'>2</ns0:ref>). Missense SNPs in the slowly evolving set of proteins in general had lower fractions of drastic mutations and higher fractions of conservative mutations relative to those in the faster evolving set of proteins (Figure <ns0:ref type='figure' target='#fig_4'>2</ns0:ref>). The fraction of conservative mutations in the slowly evolving set was significantly higher than that of the fast set (P<0.001, Figure <ns0:ref type='figure' target='#fig_4'>2</ns0:ref>).</ns0:p><ns0:p>To test for natural selection regarding conservative changes, we next divided the slowly evolving set of missense SNPs into three groups of different minor allele frequency (MAF) as measured in Africans (similar results were found for other population groups). For fast evolving proteins at mutation saturation, low MAF values of a missense SNP would mean stronger negative selection, and so SNPs with low MAF are expected to have lower proportions of conservative amino acid changes, since conservative changes may mean too little functional alteration to be under natural selection. The results showed that for missense SNPs in the fast evolving set of proteins, the common SNPs with MAF >0.001 showed a higher fraction of conservative changes than the rare SNPs with MAF <0.001 (P<0.001), indicating stronger natural selection for the rare SNPs in the fast set (Figure 3). While SNPs in the fast set showed similar fractions of conservative changes across three different MAF groups (>0.001, >0.01, and >0.05), there was a more obvious trend of a higher proportion of conservative changes as MAF values increased from >0.001 to >0.01 to >0.05 for SNPs in the slow set, consistent with weaker natural selection for common SNPs in the slow set (Figure 3). Each of the three groups in the fast set showed a significantly lower fraction of conservative changes than the respective group in the slow set (P<0.01), indicating stronger natural selection for SNPs in the fast set (Figure 3). The results indicate that common SNPs in slowly evolving proteins had more conservative changes, which were under weaker natural selection.</ns0:p></ns0:div>
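The group comparisons above were tested with chi-squared tests in GraphPad Prism; an equivalent test can be sketched in SciPy, with the counts shown being toy placeholders rather than the reported data.

```python
# Hedged sketch: compare conservative vs non-conservative missense SNP counts between the
# slow and fast protein sets with a chi-squared test (toy counts, not the reported numbers).
from scipy.stats import chi2_contingency

def maf_class(maf: float) -> str:
    """MAF classes used in the text: rare (<0.001) vs common (>=0.001)."""
    return "rare" if maf < 0.001 else "common"

# 2x2 contingency table: rows = slow/fast protein set, columns = conservative/non-conservative.
counts = [[620, 380],       # slow set (hypothetical)
          [4200, 5800]]     # fast set (hypothetical)

print(maf_class(0.0004))                    # 'rare'
chi2, p, dof, _ = chi2_contingency(counts)
print(f"chi2={chi2:.1f}, p={p:.2e}")
```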
<ns0:div><ns0:head>Discussion:</ns0:head><ns0:p>Our results here showed that fixed and standing changes in slowly evolving proteins were enriched with conservative amino acid substitutions. Similar results were obtained using four different matrixes to rank the conservative nature of a substitution. Based on substitution probability alone, amino acid substitutions in slowly evolving proteins are expected to be more conserved than those in fast evolving proteins, since fast evolving proteins have a higher probability of accumulating the doublet mutations that are necessary for a drastic substitution to occur but that have a very low rate of occurrence <ns0:ref type='bibr' target='#b31'>(Whelan & Goldman 2004)</ns0:ref>. If evolutionary time is not long enough for mutation saturation to occur, non-conservative substitutions would be expected to be a function of mutation rate and time. This simple explanation appears not to be the reason for the observations here, since the three matrixes that have removed the impact of substitution probability produced results similar to those of the matrix that does not take this impact into account. Although all four matrixes were developed using sequence alignments involving relatively more diverged species, they are still expected to apply to alignment data from relatively closely related species such as monkey and human, since proteins identified as fast evolving by comparing closely related species would also in general be identified as such by comparing more distantly related species. This is suggested by the molecular clock phenomenon, or the constant evolutionary rates across time and species. It appears that the delta-V matrix produced less significant results compared to the other three matrixes. Overestimation of the number of conservative substitutions in fast evolving proteins may account for this: substitutions involving differently charged residues with similar side chain volumes (e.g., Glu to Leu) would be scored as non-conservative by the delta-P matrix but conservative by the delta-V matrix. Our analysis does not take into account the co-evolution and co-variation of substitutions due to the physicochemical constraints on protein structure and folding <ns0:ref type='bibr' target='#b27'>(Pollock et al. 2012)</ns0:ref>.</ns0:p><ns0:p>Site-specific variations in substitution constraints, however, may be similarly present in proteins of different evolutionary rates, so that they may not affect the overall results here. Also, <ns0:ref type='bibr'>Pollock et al.</ns0:ref> show that site-specific preferences shift over time due to substitutions at other sites that are epistatic to the site of interest <ns0:ref type='bibr' target='#b27'>(Pollock et al. 2012</ns0:ref>). Thus, it could be very complex to define site-specific preferences in a meaningful way.</ns0:p><ns0:p>It has recently been shown that coding region mutation rates, as measured prior to the effect of natural selection, are significantly lower in genes where mutations are more likely to be deleterious <ns0:ref type='bibr' target='#b23'>(Monroe et al. 2020)</ns0:ref>. Mutations are more likely to be deleterious and less likely to be fixed in highly conserved proteins, which are by definition more slowly evolving proteins.
Thus slowly evolving genes in fact do have inherently slower mutation rates, which would make them less likely to reach mutation saturation.</ns0:p><ns0:p>The results here may be best accounted for by mutation saturation in fast evolving proteins, where multiple recurrent mutations at the same site have occurred across taxa and time (Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>). At saturation, the range of mutations that have happened at any given site of any given taxon is irrelevant to the particular type of possible alleles the site may carry at the present time. Natural selection is expected to play an important role in determining that, and natural selection would of course be most efficient if the mutated allele is functionally very different from the non-mutated allele. If two taxa are different in traits, it would follow that some of the differences in protein sequence between them would be non-neutral or non-conservative changes. Fast evolving genes play more adaptive roles and hence are more involved in accounting for the different traits, and so are expected to be enriched with non-conservative substitutions compared to slowly evolving genes. A fast evolving and adaptive site is more likely to be mutated more than once or to encounter mutation saturation. Fixed and standing conservative variants in slowly evolving proteins may be under weaker natural selection for several reasons. First, substitutions in slowly evolving proteins are more likely to be conservative, and conservative changes may not alter protein structure and function as dramatically as drastic changes, which may make it harder for natural selection to occur. Second, as fixed variants cannot be fixed because of negative selection on the variants per se, they are either neutral or under positive selection. Indeed, fast evolving proteins are known to be under more positive selection <ns0:ref type='bibr'>(Cai & Petrov 2010)</ns0:ref>.</ns0:p><ns0:p>If substitutions in fast evolving proteins are at saturation and under natural selection as indicated here, it would follow that genetic distances or degrees of sequence mismatches between taxa in these proteins would be at saturation, or no longer correlated exactly with time.</ns0:p><ns0:p>It is easy to tell the difference between optimum/maximum saturation genetic distances and linear distances as described previously <ns0:ref type='bibr' target='#b12'>(Huang 2010)</ns0:ref>. Briefly, imagine a 100 amino acid protein with only 1 neutral site. In a multispecies alignment involving at least three taxa, if one finds only one of these taxa with a mutation at this neutral site while all other species have the same non-mutated residue, there is no saturation (Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>, time point 2). However, if one finds that nearly every taxon has a unique amino acid, one would conclude mutation saturation, as there would have been multiple independent substitution events among different species at the same site, and repeated mutations at the same site do not increase distance (Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>, time points 3 and 4 for fast evolving proteins). We have termed those sites with repeated mutations 'overlap' sites <ns0:ref type='bibr' target='#b12'>(Huang 2010)</ns0:ref>.
So, a diagnostic criterion for saturated maximum distance between two species is the proportion of overlap sites among mismatched sites. Saturation would typically show 50-60% overlapped sites, a proportion 2-3 fold higher than that expected before saturation <ns0:ref type='bibr' target='#b12'>(Huang 2010;</ns0:ref><ns0:ref type='bibr' target='#b21'>Luo & Huang 2016)</ns0:ref>. Near 100% overlapped sites are not expected, because certain sites may accommodate only 2 or very few amino acid residues at saturation equilibrium, which would prevent them from presenting as overlapped sites even though they are in fact overlapped and saturated sites. Also, saturation may result in convergent evolution, with independent mutations changing to the same amino acid (Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>, time point 5 for fast evolving proteins). This overlap ratio method is an empirical one free of uncertain assumptions and hence more realistic than other methods of testing for saturation, such as comparing the observed number of mutations to the number inferred from uncertain phylogenetic trees derived by maximum parsimony or maximum likelihood methods <ns0:ref type='bibr' target='#b26'>(Philippe et al. 1994;</ns0:ref><ns0:ref type='bibr' target='#b29'>Steel et al. 1993;</ns0:ref><ns0:ref type='bibr' target='#b32'>Xia et al. 2003)</ns0:ref>. By using the overlap ratio method, we have verified that the vast majority of proteins show maximum distances between any two deeply diverged taxa, and that only a small proportion, the slowest evolving, are still at the linear phase of changes <ns0:ref type='bibr' target='#b12'>(Huang 2010;</ns0:ref><ns0:ref type='bibr' target='#b21'>Luo & Huang 2016;</ns0:ref><ns0:ref type='bibr' target='#b34'>Yuan et al. 2017</ns0:ref>). Variations at most genomic sites within human populations are also at optimum equilibrium, as evidenced by the observation that a slight increase above the present genetic diversity level in normal subjects is associated with patient populations suffering from complex diseases <ns0:ref type='bibr' target='#b7'>(Gui et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b9'>He et al. 2017;</ns0:ref><ns0:ref type='bibr'>Lei & Huang 2017;</ns0:ref><ns0:ref type='bibr' target='#b20'>Lei et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b35'>Yuan et al. 2012;</ns0:ref><ns0:ref type='bibr' target='#b36'>Yuan et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b37'>Zhu et al. 2015)</ns0:ref>, as well as by the observation that the sharing of SNPs among different human groups is an evolutionary rate-dependent phenomenon, with more sharing in fast evolving sequences <ns0:ref type='bibr' target='#b34'>(Yuan et al. 2017)</ns0:ref>. It is important to note that a protein in a complex species plays more roles than its orthologous protein in a species of less organismal complexity, as explained by the maximum genetic diversity hypothesis <ns0:ref type='bibr' target='#b10'>(Hu et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b11'>Huang 2008;</ns0:ref><ns0:ref type='bibr' target='#b14'>Huang 2016)</ns0:ref>. A protein has more functions to play in complex organisms, due in part to its involvement in more cell types, and hence it becomes more susceptible to mutational inactivation.
While the divergence time between higher taxa such as human and the Macaca monkey is relatively short, mutation saturation could still happen for fast evolving proteins, since the number of positions that can accept fixed substitutions is comparatively lower.</ns0:p><ns0:p>It is also important to note that the type of saturation we describe here is slightly different from that seen in 'long branch attraction' (LBA) in phylogenetic trees <ns0:ref type='bibr' target='#b2'>(Bergsten 2005)</ns0:ref>. In LBA, saturation means convergent mutations leading to the same amino acid residue or nucleotide among (across) multiple taxa (Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>, time point 5 for fast evolving proteins P1 and P2). Although they were derived independently, these shared alleles can be misinterpreted in phylogenetic analyses as being shared due to common ancestry. However, for the type of saturation we have discussed here, independent mutations at the same site among different taxa would generally lead to different taxa having different amino acids rather than the same one (Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>, time points 3 and 4 for fast evolving proteins), since the probability of an independent mutation changing to the same amino acid is about 20 times lower than that of mutating to a different amino acid (assuming no difference in the probability of being mutated to among the 20 amino acids). Thus, the type of saturation we have described here is expected to be more commonplace in nature than that in the case of LBA. Since a single mutation is sufficient for a mismatch between any two taxa, multiple independent mutations at the same site leading to different amino acids would not increase the number of mismatches and would remain unnoticeable if one only aligns the sequences from two different taxa (Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>, the number of mismatches between P2 and P3 is 1 at time point 2 before saturation and remains 1 at time point 4 after saturation). It only becomes apparent when one aligns the sequences from three different taxa (Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>, time points 3 and 4 for fast evolving proteins), as we described above and in previous publications <ns0:ref type='bibr' target='#b12'>(Huang 2010;</ns0:ref><ns0:ref type='bibr' target='#b21'>Luo & Huang 2016)</ns0:ref>. However, even though the type of saturation we describe here does not increase the number of mismatches, it could result in a reduced number of mismatches in rare cases, when independent mutations in two different taxa happen to lead to the same residue (Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>, time point 5 for P1 and P2). Thus, it does not preclude the type of saturation observed in the case of LBA. These two types of saturation are essentially two different aspects of the same saturation phenomenon, one more commonplace and manifesting as a higher overlap ratio, the other less common and manifesting as LBA.</ns0:p><ns0:p>It is well known that fast evolving proteins that have reached mutation saturation are not suitable for phylogenetic inferences.
We have previously shown that mutation saturation as measured by the overlap ratio method has been largely overlooked <ns0:ref type='bibr' target='#b13'>(Huang 2012;</ns0:ref><ns0:ref type='bibr' target='#b34'>Yuan et al. 2017)</ns0:ref>, in contrast to the long-noted LBA. As mentioned above, it appears that the inherent mutation rates are different between fast and slowly evolving proteins, as determined by studying the rate of fixed substitutions <ns0:ref type='bibr' target='#b23'>(Monroe et al. 2020)</ns0:ref>. We can thus infer that, if the rate difference is large enough, slowly evolving genes should be used in phylogenetic inferences because they would be less likely to reach mutation saturation. The findings here that fast evolving proteins are enriched with non-neutral substitutions relative to slowly evolving proteins are consistent with such an idea. There are two points to note regarding fast and slowly evolving proteins. First, the definition of slowly evolving proteins here (99% identity) is only meant for the specific comparison between human and monkey. For relatively more distantly related species such as human and mouse, the set of slowly evolving proteins is expected to be similar, but the percentage identity cutoff for the slow set would be lower than 99%. This is because proteins are known to evolve at constant rates across time and species according to the molecular clock and the neutral theory.</ns0:p><ns0:p>Second, the classification of fast evolving proteins is not absolute and is evolutionary time-dependent. Proteins that are found to be fast evolving, or to have reached mutation saturation, after a relatively long period of evolution are expected to look slowly evolving, or to show no mutation saturation, when the evolutionary time considered is relatively short. This is supported by our results here of a nearly linear relationship between evolutionary rates and the fraction of non-conservative changes.</ns0:p><ns0:p>Our finding supports the possibility that, from early on after first diverging from a common ancestor, two sister species are expected to accumulate mostly neutral mismatches, which would later be replaced by non-conservative mismatches when time is long enough for mutation saturation to have taken place. This is to be expected, as sister species should become more differentiated in phenotypes with time, and hence more different in sequences with time, in terms of both the number of mismatches and the chemical nature (conservative or not) of the mismatches.</ns0:p></ns0:div>
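The overlap-ratio diagnostic discussed above can be illustrated with a short sketch; treating one aligned sequence as a stand-in outgroup and flagging sites where both ingroup taxa differ from it is a simplification of the procedure in Huang (2010).

```python
# Minimal sketch: among sites where two ingroup taxa mismatch, count candidate "overlap"
# sites, i.e. sites where each ingroup taxon also differs from the outgroup residue, the
# signature of independent hits at the same position.
def overlap_ratio(seq_a: str, seq_b: str, outgroup: str) -> float:
    mismatched = overlapped = 0
    for a, b, o in zip(seq_a, seq_b, outgroup):
        if "-" in (a, b, o):
            continue                      # ignore alignment gaps
        if a != b:
            mismatched += 1
            if a != o and b != o:
                overlapped += 1
    return overlapped / mismatched if mismatched else float("nan")

# Toy alignment: one of the four mismatched sites is a candidate overlap site -> 0.25.
print(overlap_ratio("MKLVWTA", "MRIVFSA", "MKIVCSA"))
```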
<ns0:div><ns0:head>Conclusion:</ns0:head><ns0:p>Our study here addressed whether observed amino acid variants in slowly evolving proteins are more or less neutral than those in fast evolving proteins. The results suggest that fixed and standing missense variations in slowly evolving proteins are more likely to be neutral, a finding that has implications for phylogenetic inferences.</ns0:p></ns0:div>
<ns0:div><ns0:head>Declarations:</ns0:head><ns0:p>Tables:</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_4'>1</ns0:ref>. Relationship between evolutionary rates and the conservative nature of fixed amino acid substitutions. Evolutionary rates of proteins in the human genome are represented by the percentage of identities between human proteins and their orthologs in the Macaca monkey. The proteins are divided into groups of different evolutionary rates, and the proportion of conservative amino acid mismatches in each group is shown for the four different ranking matrixes. Not all proteins encoded by the macaque and human genomes are considered because some proteins do not have easily identifiable orthologs. Also, we limited our analysis to proteins that have length >200 amino acids and show >60% identity between macaque and human in order to reduce the chance of misidentifying orthologs.</ns0:p><ns0:note type='other'>Figure Legends:</ns0:note><ns0:note type='other'>Figure 1</ns0:note><ns0:note type='other'>Figure 2</ns0:note></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>changes, or under no selection if they produce conservative changes (assuming no positive selection as explained above). While one would expect less conservative changes in the rare SNPs compared to the common SNPs, since negative selection may account in part for the low MAF value, the difference in the fraction of conservative changes between the rare SNPs and the common ones in the slow set should be greater than that in the fast set, since the SNPs in the fast set may be under natural selection regardless of MAF values (low MAF SNPs under more negative selection while high MAF SNPs under both positive and negative selection). Our results are consistent with such expectations.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Fraction of conservative substitutions in fixed changes in proteins of different</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Fraction of conservative substitutions in standing missense substitutions in</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Fraction of conservative substitutions in missense SNPs with different MAFs in</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Illustration of non-conservative substitutions and mutation saturation in fast</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Fraction of conservative substitutions in fixed changes in proteins of different evolutionary rates.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Fraction of conservative substitutions in standing missense substitutions in proteins of different evolutionary rates.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 3 Figure 3 .</ns0:head><ns0:label>33</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Fraction of conservative substitutions in missense SNPs with different MAFs in proteins of different evolutionary rates. SNPs from either fast or slowly evolving proteins were classified based on MAF values and the fractions of conservative changes in each class are shown. Statistical significance score in difference between slow and fast or between different MAF cutoffs are shown. **, P<0.01. Chi squared test.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Illustration of non-conservative substitutions and mutation saturation in fast evolving proteins.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 1 (on next page)</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Relationship between evolutionary rates and the conservative nature of fixed amino acid substitutions Evolutionary rates of proteins in the human genome are represented by the percentage of identities between human proteins and their orthologs in Macaca monkey. The proteins are divided into groups of different evolutionary rates, and the proportion of conservative amino acid mismatches in each group are shown for the four different ranking matrixes. Not all proteins encoded by the macaque and human genomes are considered because some proteins do not have easily identifiable orthologs. Also, we limited our analysis to proteins that have length >200 amino acids and show >60% identity between macaque and human in order to reduce the chance of misidentifying orthologs.</ns0:figDesc><ns0:table /><ns0:note>PeerJ reviewing PDF | (2019:12:43889:2:0:NEW 29 Jul 2020)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 1 . Relationship between evolutionary rates and the conservative nature of fixed amino acid substitutions.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Evolutionary rates of proteins in the human genome are represented by the percentage of identities between human proteins and their orthologs in Macaca monkey. The proteins are divided into groups of different evolutionary rates, and the proportion of conservative amino acid mismatches in each group are shown for the four different ranking matrixes. Not all proteins encoded by the macaque and human genomes are considered because some proteins do not have easily identifiable orthologs. Also, we limited our analysis to proteins that have length >200 amino acids and show >60% identity between macaque and human in order to reduce the chance of misidentifying orthologs.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='5'>Protein length >1102 amino acid with no gaps in alignment</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Identity %</ns0:cell><ns0:cell>BLOSUM62</ns0:cell><ns0:cell>EX</ns0:cell><ns0:cell>delta-V</ns0:cell><ns0:cell>delta-P</ns0:cell><ns0:cell># proteins</ns0:cell><ns0:cell>Length ave.</ns0:cell></ns0:row><ns0:row><ns0:cell>>99</ns0:cell><ns0:cell>0.49</ns0:cell><ns0:cell>0.30</ns0:cell><ns0:cell>0.34</ns0:cell><ns0:cell>0.67</ns0:cell><ns0:cell>136</ns0:cell><ns0:cell>1532.7</ns0:cell></ns0:row><ns0:row><ns0:cell>98-99</ns0:cell><ns0:cell>0.44</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.63</ns0:cell><ns0:cell>137</ns0:cell><ns0:cell>1464.0</ns0:cell></ns0:row><ns0:row><ns0:cell>96-98</ns0:cell><ns0:cell>0.42</ns0:cell><ns0:cell>0.24</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.58</ns0:cell><ns0:cell>125</ns0:cell><ns0:cell>1539.4</ns0:cell></ns0:row><ns0:row><ns0:cell>87-96</ns0:cell><ns0:cell>0.38</ns0:cell><ns0:cell>0.20</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.55</ns0:cell><ns0:cell>57</ns0:cell><ns0:cell>1414.4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='5'>Protein length >1102 amino acid with gaps in alignment</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>>99</ns0:cell><ns0:cell>0.47</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.42</ns0:cell><ns0:cell>0.70</ns0:cell><ns0:cell>61</ns0:cell><ns0:cell>1659.3</ns0:cell></ns0:row><ns0:row><ns0:cell>98-99</ns0:cell><ns0:cell>0.41</ns0:cell><ns0:cell>0.24</ns0:cell><ns0:cell>0.34</ns0:cell><ns0:cell>0.63</ns0:cell><ns0:cell>119</ns0:cell><ns0:cell>1855.8</ns0:cell></ns0:row><ns0:row><ns0:cell>96-98</ns0:cell><ns0:cell>0.36</ns0:cell><ns0:cell>0.21</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.56</ns0:cell><ns0:cell>320</ns0:cell><ns0:cell>1792.7</ns0:cell></ns0:row><ns0:row><ns0:cell>87-96</ns0:cell><ns0:cell>0.33</ns0:cell><ns0:cell>0.19</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.50</ns0:cell><ns0:cell>437</ns0:cell><ns0:cell>1727.3</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Protein length 200-1102 amino acid with no gaps in alignment</ns0:head><ns0:label /><ns0:figDesc>PeerJ</ns0:figDesc><ns0:table><ns0:row><ns0:cell>>95</ns0:cell><ns0:cell>0.43</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.63</ns0:cell><ns0:cell>6984</ns0:cell><ns0:cell>478.3</ns0:cell></ns0:row><ns0:row><ns0:cell>90-95</ns0:cell><ns0:cell>0.39</ns0:cell><ns0:cell>0.20</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.55</ns0:cell><ns0:cell>1229</ns0:cell><ns0:cell>407.9</ns0:cell></ns0:row><ns0:row><ns0:cell>80-90</ns0:cell><ns0:cell>0.38</ns0:cell><ns0:cell>0.20</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.52</ns0:cell><ns0:cell>276</ns0:cell><ns0:cell>350.9</ns0:cell></ns0:row><ns0:row><ns0:cell>60-80</ns0:cell><ns0:cell>0.41</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.37</ns0:cell><ns0:cell>0.57</ns0:cell><ns0:cell>91</ns0:cell><ns0:cell>372.8</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='5'>Protein length 200-1102 amino acid with gaps in alignment</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>>95</ns0:cell><ns0:cell>0.38</ns0:cell><ns0:cell>0.22</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.59</ns0:cell><ns0:cell>2001</ns0:cell><ns0:cell>601.1</ns0:cell></ns0:row><ns0:row><ns0:cell>90-95</ns0:cell><ns0:cell>0.33</ns0:cell><ns0:cell>0.19</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.51</ns0:cell><ns0:cell>1529</ns0:cell><ns0:cell>566.4</ns0:cell></ns0:row><ns0:row><ns0:cell>80-90</ns0:cell><ns0:cell>0.30</ns0:cell><ns0:cell>0.17</ns0:cell><ns0:cell>0.30</ns0:cell><ns0:cell>0.47</ns0:cell><ns0:cell>1050</ns0:cell><ns0:cell>489.1</ns0:cell></ns0:row><ns0:row><ns0:cell>60-80</ns0:cell><ns0:cell>0.33</ns0:cell><ns0:cell>0.20</ns0:cell><ns0:cell>0.33</ns0:cell><ns0:cell>0.48</ns0:cell><ns0:cell>467</ns0:cell><ns0:cell>447.1</ns0:cell></ns0:row></ns0:table><ns0:note>reviewing PDF | (2019:12:43889:2:0:NEW 29 Jul 2020)</ns0:note></ns0:figure>
</ns0:body>
" | "Dear Dr. Braun,
Thank you for the helpful comments. We have revised the manuscript accordingly. A detailed response is in the following.
Sincerely,
Shi Huang
I would like to apologize for the delay in this response. I had difficulty securing additional reviews. Fortunately, I was able to secure a review that I consider very thorough so I have decided to move forward rather than waiting any longer on the second review.
I provided a fairly long editorial review for the initial submission and I feel that you addressed it quite well. I had some concerns in terms of the framing of the question, which you addressed quite well. My other concern, which was much more fundamental, was that your use of the BLOSUM matrix introduces circularity. You addressed that concern quite well.
I think you should read the new reviewer's comments thoroughly and do your best to address them. I would like to offer the following guidance for a minor revision:
First, the reviewer makes a good point that all of the matrices (BLOSUM, delta V, delta P, and EX) are crude measures in the sense that they do not consider any site-specific variation. I think the reviewer's concerns are extremely valid, but I think you can address those concerns with a verbal argument rather than conducting any new analyses.
I've spent a fair amount of time thinking about whether there is any easy way to capture among-sites variation in selection on sites (i.e., differentiating sites that tolerate only a specific subset of amino acids from those sites that tolerate a broader set of amino acids). I think it would be extremely difficult. One approach would be to use proteins structure, which would fall outside of the scope of the manuscript. You could use some sort of PSSM approach, as the reviewer suggests, but that would require mapping information from the PSSM onto the sequences. Although alignments that could be converted into PSSMs in this manner do exist (i.e., Pfam) but using them seems like a fundamentally different study.
The Pollock et al. paper (2012; PMID: 22547823) that the reviewer cites actually makes a point that could be useful for a verbal argument. Specifically, Pollock et al. show that site-specific preferences shift over time due to substitutions at other sites that are epistatic to the site of interest. The point I would make about this is simply that it could be very complex to define site-specific preferences in a meaningful way. The fact that you see a significant difference between “slow” and “fast” proteins when you use the crude matrices suggests to me that you are onto something important. It is possible that undertaking an effort to define site-specific preferences using PSSMs or structure would refine things further (e.g., maybe delta P has a strong impact in some sites and delta V has an impact in others) but I think this is a different study. What you have is interesting as is. You likely have other thoughts on the topic as well; my big suggestion is to provide a brief discussion of the fact that different sites in proteins tolerate a specific subset of amino acids, that the processes underlying this are complex, and that this fact is unlikely to invalidate your fundamental conclusion.
Second, the reviewer also asked for a little more speculation about the implications of your findings for phylogenetic analyses. I would recommend keeping any speculation relatively brief. It does seem to me that your results imply evolutionary rate should have an impact on the best-fitting model. This is not surprising and could be stated in a simple manner. Obviously, the best-fitting model has implications for maximum likelihood, Bayesian, and even for some distance analyses (e.g., distance analyses that use ML estimates of distances) in phylogenetics.
Finally, the reviewer’s question about why ~25% of proteins encoded by the macaque and human genomes are not considered is a good one. I assumed in the previous submission that you used all orthologous pairs you could, omitting cases where orthology was unclear or one of the sequences was problematic in some way. It would be nice for you to address this issue in a direct manner.
Overall, I would recommend a fairly focused revision, trying to address comments relatively narrowly. Of course, a very thorough examination of the manuscript for typos (the reviewer points out two minor errors) is very appropriate and I hope you will take the time to do this.
I hope this was helpful and I look forward to seeing a revision.
Thank you for the very helpful and detailed suggestions. We have addressed the three points you mentioned in the revised manuscript. First, site specific variations in substitution constraints may be similarly present in different proteins of different evolutionary rates so that they may not affect the overall results here. Also as you pointed out, Pollock et al. show that site-specific preferences shift over time due to substitutions at other sites that are epistatic to the site of interest. Thus, it could be very complex to define site-specific preferences in a meaningful way. Second, it is well known in the field of phylogenetics that fast evolving proteins that have reached mutation saturation are not suitable for phylogenetic inferences. We have previously shown that mutation saturation as shown by the overlap ratio method has been largely overlooked (Huang, 2012, Yuan et al 2017). The finding here further strengthens the notion that fast evolving proteins are not suitable for phylogenetic inferences as they are enriched with non-neutral substitutions. Finally, not all proteins encoded by the macaque and human genomes are considered because some proteins do not have easily identifiable orthologs. Also, we limited our analysis to proteins that have length >200 amino acids and show >60% identity between macaque and human in order to reduce the chance of misidentifying orthologs.
Reviewer 3
Basic reporting
This revised manuscript is certainly an improved version as far as the focus and description of the research is concerned. The manuscript describes interesting findings and is clear and understandable. But it needs more elaboration and context of the analysis presented; in the introduction, and in other sections as well. For instance, Line 72 “We here found that the proportion of conservative substitutions between species….”- it would help if the introduction would include a description of what type of ‘conservative’ substitutions are being referred to. The description does not appear until the results section. In addition, it would help to specify that the analysis is limited to genome sequences of two [very] closely related species.
Likewise, the scientific names of the species being studied are not mentioned until later in the manuscript.
We thank the reviewer for these suggestions. We have specified the term conservative substitutions when it first appeared in the introduction. We also specified that the evolutionary rates were determined by using two relatively closely related species. We have also used the scientific names of the macaque species when it first appeared.
In the results/discussion there should be a description, even if brief, as to, (a) why results based on the delta_V matrix are different from other substitution matrices, (b) the context in which these substitution matrices were derived, compared to the BLOSUM matrices, i.e. estimates of substitution rates from sequences in relatively closely related clades. Similarly, the tertiary structural context of the estimated substitutions, which is relevant to the discussion of the strength of selection on the measured mutations would be appropriate.
It appears that the delta_V matrix produced less significant results compared to the other three matrixes. Overestimation of conservative substitutions in fast evolving proteins may account for this, e.g., substitutions involving differently charged residues with similar side chain volumes would be scored as non-conservative by the delta_P matrix but conservative by the delta_V matrix, e.g., Glu to Leu. Although all of the four matrixes are developed by using sequence alignments involving relatively more diverged species, they are still expected to apply to alignment data from relatively closely related species such as monkey and human, since proteins identified as fast evolving by comparing closely related species would also in general be identified as such by comparing more distantly related species. This is suggested by the molecular clock phenomenon or the constant evolutionary rates across time and species.
Experimental design
Methods used to identify and quantify conservative mutations are adequate, even if simplistic, for the patterns that the authors seek to characterize. But the description can be more explicit. For example, since all the analyses are based on pairwise sequence alignments (one sequence per species) generated by the BLASTP algorithm, it will be useful to know if the alignments were generated using the BLOSUM-62 matrix (the default for the BLASTP algorithm), even when the other substitution matrices (EX, delta_V and delta_P) were used to identify and quantify the conservative substitutions in the sequence alignments. It is relevant to know if there are any differences in the alignments generated using the BLOSUM-62 matrix compared to alignments generated using the other matrices, and if the variation in alignment would affect the quantification of conservative substitutions. This could be interesting, if not a cause for concern, for fast-evolving proteins (as defined by the authors) that have lower sequence identities, especially for gapped alignments.
Performance of BLASTP alignments can often be weak when comparing sequences that show medium to high divergence (but not necessarily sequences derived from highly divergent species). See also comment regarding Table 1.
The BLASTP alignment program is not expected to produce very different results from other programs, especially for highly conserved proteins. We have limited our analysis to high identity orthologs with length >200 aa and percent identity >60%. So, variation in alignment is not expected to affect our results.
Validity of the findings
The main findings that conservative substitutions are enriched in slowly evolving proteins, and that it is correlated with minor allele frequency is interesting. The conclusions are reasonable in the context of the measurements reported. However, there are some caveats given, (a) the very strict definition/classification of the slowly evolving proteins (99% identity), (b) that the analyses are limited to pair-wise sequence alignments, and (c) restricted to very closely related species. By contrast, how different are the results likely to be if we were to extend (or repeat) the analyses using multiple species/alignments instead of pairwise-alignments? The authors’ definition of slowly evolving proteins (99% identity) may not be broadly applicable. It is quite likely that the results will be different even if proteins from multiple species of the genus Macaca were to be analyzed, let alone comparing many species from different genera, or larger clades such as Old World and New World monkeys, or primates as a whole. The point is, the limitations of the pairwise comparisons should be discussed, rather than the broad implications that the authors seem to indicate. Presumably the fraction of proteins that can be classified as ‘slowly evolving’ might drop substantially. Ideally, running the analysis on multiple alignments of a few orthologous families, if not for all the proteins pairs, would be useful (see also a related note regarding Table-1 under “Comments for the author”).
This is not to question the validity of the findings/conclusions, but to consider the limitations of the experimental design and the context of the measurements, as well as the extent to which the findings can be extrapolated. The article (and readers) would benefit in general if issues of more direct relevance to the core results are discussed, in addition to the explanation of what saturation means. For example, why the trends are different in quantifications based on the delta_V matrix compared to other substitution matrices in Figure 1.
It would be valuable to also discuss why/where pairwise comparisons would be relevant and useful, and where they may not be. After all, the toy example in figure 4 is more relevant to multiple alignments for comparing multiple taxa. Saturation in faster evolving proteins is not inconceivable given the estimated divergence times of several to many millions of years between closely related species.
The definition of slow evolving set (99% identity) is only meant for the specific comparison between human and monkey. For relatively more distantly related species such as human and mouse, the set of slow evolving proteins is expected to be similar but the percentage identity cutoff for the slow set would be lower than 99%. This is because proteins are known to evolve at constant rates across time and species (molecular clock). We do not expect pairwise sequence comparisons to give different results from multispecies comparisons as evolutionary rate are not known to vary depending on the number of species used in an alignment. We also do not expect results from closely related species to lack broad implications as evolutionary rates are known to be constant across species (molecular clock).
In addition, although the EX, delta_P and delta_V matrices remove the probabilistic element compared to the BLOSUM matrices, they are coarse-grained estimates. These matrices do not consider site-specific substitution patterns, as in PSI-BLAST and the PSSM used therein, which is relevant to multiple sequence alignments and multi-species comparisons. This may be relevant to conclusions about the strength of selection. For instance, the relatively simplistic linear sequence-based pairwise measurements used by the authors do not take into account the co-evolution and co-variation of substitutions due to the physico-chemical constraints on protein structure and folding (see Pollock et al, 2012, PMID: 22547823). In this context, the suitability and efficacy of the dN/dS ratios to measure selection has been criticized (see Drummond and Wilke, 2008, PMID: 18662548). The EX, delta_P and delta_V matrices (Braun, PMID: 29950007; Braun and Pandey, 2020, doi: 10.1101/2020.01.28.923458) that explicitly consider the effects/context of protein structure, in addition to the population parameters. The delta_P and delta_V matrices were derived from soluble proteins; transmembrane proteins were excluded from the estimations (see also next paragraph). Therefore, a discussion of the coevolution of sequence in the context of protein structure, and the efficacy of dN/dS for measuring the extent of selection are directly relevant to the analyses and results presented.
Our analysis does not take into account the co-evolution and co-variation of substitutions due to the physico-chemical constraints on protein structure and folding (see Pollock et al, 2012, PMID: 22547823). Site specific variations in substitution constraints may be similarly present in different proteins of different evolutionary rates so that they may not affect the overall results here. Also, Pollock et al. show that site-specific preferences shift over time due to substitutions at other sites that are epistatic to the site of interest. Thus, it could be very complex to define site-specific preferences in a meaningful way.
Going back to the performance of BLASTP alignments that was mentioned in the Experimental Design section, the data presented in Table 1 indicates a potential issue. This could also indicate a limitation of the current experimental design. Assuming that the numbers of proteins in the different categories (column 6) are non-overlapping sets of proteins, I counted a total of 15,019 proteins (or protein pairs?). Given that both human and Macaca genomes are estimated to have about 20,000 protein coding genes, I am curious why roughly a fourth of the proteins are not included in the analyses. Do they produce poor alignments?
Not all proteins encoded by the macaque and human genomes are considered because some proteins do not have easily identifiable orthologs. Also, we limited our analysis to proteins that have length >200 amino acids and show >60% identity between macaque and human in order to reduce the chance of misidentifying orthologs.
Regarding implications for phylogenetic inference, a discussion, even if brief, would be valuable instead of simply stating “…. and have implications for phylogenetic inferences” (line 319). This brief prelude is more appropriate in the abstract.
Since phylogenetic analysis is always based on multiple alignments, how would the findings here be applicable. What are the implications? What type of phylogenetic inference: Of very closely related species of the Macaca genus? Or of primates in general? Or more general?
We have added a brief discussion. It is well known in the field of phylogenetics that fast evolving proteins that have reached mutation saturation are not suitable for phylogenetic inferences. We have previously shown that mutation saturation as shown by the overlap ratio method has been largely overlooked (Huang, 2012, Yuan et al 2017), in contrast to the long noted LBA. The finding here further strengthens the notion that fast evolving proteins are not suitable for phylogenetic inferences as they are enriched with non-neutral substitutions. It suggests that any phylogenetic inferences, regardless of whether they are based on pairwise or multiple alignments and whether they involve closely or distantly related species, must use slowly evolving proteins.
Comments for the author
Minor edits/revisions:
Table 1: It will be useful to give an overview of the fraction of proteins per species analyzed: either a pie chart or a Venn diagram or both.
Line 77: '... less natural selection' should be “weaker natural selection”?
Line 225: Fixed and 'stranding' should be 'standing'?
Please check for similar issues.
We have corrected these and related mistakes.
" | Here is a paper. Please give your review comments after reading it. |
654 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The process of molecular evolution has many elements that are not yet fully understood.</ns0:p><ns0:p>Evolutionary rates are known to vary among protein coding and noncoding DNAs, and most of the observed changes in amino acid or nucleotide sequences are assumed to be non-adaptive by the neutral theory of molecular evolution. However, it remains unclear whether fixed and standing missense changes in slowly evolving proteins are more or less neutral compared to those in fast evolving genes. Here, based on the evolutionary rates as inferred from identity scores between orthologs in human and Rhesus Macaques (Macaca mulatta), we found that the fraction of conservative substitutions between species was significantly higher in their slowly evolving proteins. Similar results were obtained by using four different methods of scoring conservative substitutions, including three that remove the impact of substitution probability, where conservative changes require fewer mutations. We also examined the single nucleotide polymorphisms (SNPs) by using the 1000 genomes project data and found that missense SNPs in slowly evolving proteins also had a higher fraction of conservative changes, especially for common SNPs, consistent with more non-conservative substitutions and hence stronger natural selection for SNPs, particularly rare ones, in fast evolving proteins. These results suggest that fixed and standing missense variants in slowly evolving proteins are more likely to be neutral.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Since the early 1960s, protein sequence comparisons have become increasingly important in molecular evolutionary research <ns0:ref type='bibr' target='#b6'>(Doolittle & Blombaeck 1964;</ns0:ref><ns0:ref type='bibr' target='#b7'>Fitch & Margoliash 1967;</ns0:ref><ns0:ref type='bibr' target='#b22'>Margoliash 1963;</ns0:ref><ns0:ref type='bibr' target='#b38'>Zuckerkandl & Pauling 1962)</ns0:ref>. An apparent relationship between protein sequence divergence and time of separation led to the molecular clock hypothesis, which assumes a constant and similar evolutionary rate among species <ns0:ref type='bibr' target='#b18'>(Kumar 2005;</ns0:ref><ns0:ref type='bibr' target='#b22'>Margoliash 1963;</ns0:ref><ns0:ref type='bibr' target='#b38'>Zuckerkandl & Pauling 1962)</ns0:ref>. Thus, sequence divergence between species is thought to be largely a function of time. The molecular clock, in turn, led Kimura to propose the neutral theory to explain nature: sequence differences between species were thought to be largely due to neutral changes rather than adaptive evolution <ns0:ref type='bibr' target='#b16'>(Kimura 1968</ns0:ref>). However, the notion of a molecular clock may be unrealistic, since it predicts a constant substitution rate as measured in generations, whereas the observed molecular clock is measured in years <ns0:ref type='bibr' target='#b1'>(Ayala 1999;</ns0:ref><ns0:ref type='bibr' target='#b29'>Pulquerio & Nichols 2007)</ns0:ref>. The neutral theory remains an incomplete explanatory theory <ns0:ref type='bibr' target='#b10'>(Hu et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b15'>Kern & Hahn 2018)</ns0:ref>. Evolutionary rates are known to vary among protein coding and non-coding DNAs. The neutral theory posits that the substitution rate under selective neutrality is expected to be equal to the mutation rate <ns0:ref type='bibr' target='#b17'>(Kimura 1983)</ns0:ref>. If mutations/substitutions are not neutral or are under natural selection, the substitution rate would be affected by the population size and the selection coefficient, which are unlikely to be constant among all lineages. Slowly evolving genes are well known to be under stronger purifying or negative selection, as measured by using the dN/dS ratio, which means that a new mutation has a lower probability of being fixed <ns0:ref type='bibr' target='#b4'>(Cai & Petrov 2010)</ns0:ref>. However, negative selection as detected by the dN/dS method is largely concerned with non-observed mutations and says little about the fixed or observed variations, and most molecular evolutionary approaches, such as phylogenetic and demographic inferences, are concerned with observed variants. It remains to be determined whether fixed and standing missense substitutions in slowly evolving genes are more or less neutral relative to those in fast evolving genes.</ns0:p><ns0:p>We here examined the fraction of conservative substitutions (amino acid replacements in a protein that change a given amino acid to a different amino acid with similar biochemical properties) in proteins of different evolutionary rates. We compared the protein orthologs of two relatively closely related species, Homo sapiens and Macaca mulatta, to obtain values of percentage identity to represent evolutionary rates.
We found that the proportion of conservative substitutions between species was higher in the slowest evolving set of proteins than in faster evolving proteins. Using datasets from the 1000 genomes (1KG) project phase 3 dataset <ns0:ref type='bibr' target='#b0'>(Auton et al. 2015)</ns0:ref>, we also found that missense single nucleotide polymorphisms (SNPs) from the slowest evolving set of proteins, especially those with high minor allele frequency (MAF), were enriched with conservative amino acid changes, consistent with these changes being under weaker natural selection.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials and Methods</ns0:head><ns0:p>Classification of proteins as slowly and fast evolving. The identification of slowly evolving proteins and their associated SNPs was done as previously described (Yuan et al. 2017). Briefly, we collected the whole genome protein data of Homo sapiens (version 36.3) and Macaca mulatta (version 1) from the NCBI FTP site, and then compared the human proteins to the monkey proteins using the local BLASTP program at a cutoff of 1E-10. For human proteins with multiple isoforms, we retained only one, and we chose the monkey protein with the most significant E-value as the orthologous counterpart of each human protein. The aligned proteins were ranked by percentage identity. Proteins showing the highest identity between human and monkey were included in the slowly evolving set (423 genes > 304 amino acids in length with 100% identity and 178 genes > 1102 amino acids in length with 99% identity between monkey and human). All remaining proteins were considered fast evolving. The cutoff criterion was based on the empirical observation of low substitution saturation, and on the finding that missense SNPs from the slow set of proteins produced genetic diversity patterns distinct from those found in the fast set <ns0:ref type='bibr' target='#b34'>(Yuan et al. 2017)</ns0:ref>. The BLASTP alignment program is not expected to produce very different results from other programs, especially for highly conserved proteins. We limited our analysis to high identity orthologs with length >200 amino acids and percent identity >60% between monkey and human, so variation in alignment is not expected to affect comparisons between our analysis and others.</ns0:p><ns0:p>SNP selection. We downloaded the 1KG phase 3 data and assigned SNP categories using ANNOVAR <ns0:ref type='bibr' target='#b0'>(Auton et al. 2015)</ns0:ref>. We then picked out the missense SNPs located in the slowly evolving set of genes from the downloaded VCF files <ns0:ref type='bibr' target='#b34'>(Yuan et al. 2017)</ns0:ref>. MAF was derived from the AF (alternative allele frequency) values in the VCF files. Missense SNPs in fast evolving genes included all those from 1KG that are not from the slowly evolving set.</ns0:p><ns0:p>For fixed substitutions as revealed by BLASTP, conservative changes were scored by using four different matrixes. The BLOSUM62 matrix has a scoring range from -3 to 3 (-3, -2, -1, 0, 1, 2, 3), with higher positive values representing more conservative changes <ns0:ref type='bibr' target='#b25'>(Pearson 2013)</ns0:ref>. We assigned each amino acid mutation a score and used score >0 to denote conservative changes in cases where the number of conservative changes is enumerated.</ns0:p>
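To make the classification step above concrete, here is a minimal sketch (in Python) of how the best BLASTP hits could be filtered and split into the slow and fast sets using the cutoffs just described. This is an illustration rather than the authors' pipeline: the tab-separated BLASTP output (standard -outfmt 6 columns), the file name human_vs_macaque.tsv, and the use of alignment length as a stand-in for protein length are all assumptions.

```python
# Illustrative sketch only (not the authors' original scripts): split human-macaque
# ortholog pairs into "slow" and "fast" evolving sets from tabular BLASTP output.
# Assumes standard -outfmt 6 columns: qseqid sseqid pident length mismatch gapopen
# qstart qend sstart send evalue bitscore; the file name is hypothetical.

def load_best_hits(path, evalue_cutoff=1e-10):
    """Keep the most significant macaque hit (lowest E-value) for each human protein."""
    best = {}
    with open(path) as fh:
        for line in fh:
            fields = line.rstrip("\n").split("\t")
            query, subject = fields[0], fields[1]
            pident, length = float(fields[2]), int(fields[3])
            evalue = float(fields[10])
            if evalue > evalue_cutoff:
                continue
            if query not in best or evalue < best[query][3]:
                best[query] = (subject, pident, length, evalue)
    return best

def classify(best_hits):
    """Apply the cutoffs given in the text: the slow set holds proteins with 100%
    identity (>304 aa) or >=99% identity (>1102 aa); everything else is fast."""
    slow, fast = [], []
    for human_id, (_macaque_id, pident, length, _evalue) in best_hits.items():
        if length <= 200 or pident <= 60:   # orthologs outside the stated analysis limits
            continue
        if (pident == 100.0 and length > 304) or (pident >= 99.0 and length > 1102):
            slow.append(human_id)
        else:
            fast.append(human_id)
    return slow, fast

if __name__ == "__main__":
    slow_set, fast_set = classify(load_best_hits("human_vs_macaque.tsv"))
    print(len(slow_set), "slow;", len(fast_set), "fast")
```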
<ns0:p>As the BLOSUM62 matrix does not take into account the effect of substitution probability (the fact that conservative changes require fewer mutations), we also used three other matrixes to score conservative amino acid replacements that have removed the impact of substitution probability, including the 'EX' matrix <ns0:ref type='bibr' target='#b33'>(Yampolsky & Stoltzfus 2005)</ns0:ref>, which is based on laboratory mutagenesis, and the two physicochemical matrices in <ns0:ref type='bibr' target='#b3'>Braun (2018)</ns0:ref> and Pandey and Braun (2020): delta_V (normalized change in amino acid side chain volume) and delta_P (normalized change in amino acid side chain polarity) <ns0:ref type='bibr' target='#b3'>(Braun 2018;</ns0:ref><ns0:ref type='bibr' target='#b24'>Pandey & Braun 2020)</ns0:ref>. Spreadsheets for all three matrixes are available from GitHub (https://github.com/ebraun68/clade_specific_prot_models). Specifically, the EX matrix (or, more accurately, a normalized symmetric version of the EX matrix) is in the excel spreadsheet 'EX_matrix_sym.xlsx'; the delta_V and delta_P matrices can be found in one of the sheets (the sheet called 'Exchanges') in the file 'exchange_Pandey_Braun.xlsx'. All three of the matrixes are normalized to range from zero to one. To be comparable to the BLOSUM62 matrix, we generated integer versions of these three matrixes by multiplying by 10, subtracting 5, and then rounding to the nearest integer. Here the matrix values range from -5 to +5, with higher positive values representing more conservative changes. For the EX matrix, we used score >2 to denote conservative changes. For the delta_V and delta_P matrixes, we used score >3 to denote conservative changes. By using different cutoff scores in this way, we kept the fraction of conservative changes close to 0.5 for each of the four matrixes.</ns0:p><ns0:p>Statistics. The chi-squared test was performed using GraphPad Prism 6.</ns0:p></ns0:div>
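The scoring scheme can also be summarized in a short sketch. Again this is only an illustration under stated assumptions: each matrix is presumed to have been parsed from the spreadsheets into a dictionary keyed by amino acid pairs, the integer rescaling follows the multiply-by-10, subtract-5, round recipe above, and the per-matrix cutoffs (>0 for BLOSUM62, >2 for EX, >3 for delta_V and delta_P) are taken from the text.

```python
# Illustrative sketch of the scoring described above (not the authors' code).
# A matrix is a dict mapping an (aa1, aa2) pair to a score; the EX, delta_V and
# delta_P matrixes are assumed to be pre-parsed from the GitHub spreadsheets.

def to_integer_matrix(normalized):
    """Rescale a 0-1 normalized matrix to the -5..+5 integer range used in the paper."""
    return {pair: round(value * 10 - 5) for pair, value in normalized.items()}

# Cutoffs above which a substitution is counted as conservative, per the text.
CUTOFFS = {"BLOSUM62": 0, "EX": 2, "delta_V": 3, "delta_P": 3}

def conservative_fraction(substitutions, matrix, cutoff):
    """substitutions: iterable of (residue_in_human, residue_in_macaque) pairs
    taken from mismatched alignment positions."""
    scores = [matrix[pair] for pair in substitutions if pair in matrix]
    if not scores:
        return float("nan")
    return sum(1 for s in scores if s > cutoff) / len(scores)

# Hypothetical usage, assuming ex_normalized was read from 'EX_matrix_sym.xlsx':
# ex_int = to_integer_matrix(ex_normalized)
# frac = conservative_fraction(mismatches, ex_int, CUTOFFS["EX"])
```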
<ns0:div><ns0:head>Results:</ns0:head><ns0:p>Fixed amino acid substitutions and evolutionary rates of proteins We determined the evolutionary rates of proteins in the human genome by the percentage of identities between human proteins and their orthologs in Macaca mulatta as described previously <ns0:ref type='bibr' target='#b34'>(Yuan et al. 2017)</ns0:ref>. We then divided the proteins into several groups of different evolutionary rates, and compared the proportion of conservative amino acid substitutions in each group.</ns0:p><ns0:p>The mismatches between two species would have one of the two residues or alleles as ancestral, in the case of slowly evolving proteins yet to reach mutation saturation (no independent mutations occurring at the same site among species and across time), and so a mismatch due to conservative changes would involve a conservative mutation during evolution from the ancestor to extant species. But at mutation saturation for fast evolving proteins, where a site had encountered multiple mutations across taxa and time, while a drastic substitution would necessarily involve a non-conservative mutation, it is possible for a conservative substitution to result from at least two independent non-conservative mutations (if the common ancestor has Arg at some site, a drastic mutation event at this site occurring in each of the two species, Arg to Leu in one and Arg to Ile in the other, may lead to a conservative substitution of Leu and Ile). Thus, a conservative substitution at mutation saturation just means less physical and chemical differences between the two species concerned and says little about the actual mutation events. A lower fraction of conservative substitutions at saturation for fast evolving proteins would mean more physical and chemical differences between the two species, which may more easily translate into functional differences for natural selection to act upon.</ns0:p><ns0:p>To verify that the slowest evolving proteins with length >1102 amino acids and percentage identity >99% are distinct from the fast set, we first compared proteins with length >1102 amino acids with no gaps in alignment (Table <ns0:ref type='table' target='#tab_4'>1</ns0:ref>, Figure <ns0:ref type='figure' target='#fig_9'>1A</ns0:ref>) or with gaps (Table <ns0:ref type='table' target='#tab_4'>1</ns0:ref>, Figure <ns0:ref type='figure' target='#fig_9'>1B</ns0:ref>) divided into 4 groups of different percentage identity between human and monkey, >99%, 98-99%, 96-98%, and 87-97%. We used four different scoring matrixes to give each amino acid change a rank score in terms of how conservative the change is, BLOSUM62 (Pearson 2013), EX <ns0:ref type='bibr' target='#b33'>(Yampolsky & Stoltzfus 2005)</ns0:ref>, delta_V, and delta_P <ns0:ref type='bibr' target='#b3'>(Braun 2018;</ns0:ref><ns0:ref type='bibr' target='#b24'>Pandey & Braun 2020</ns0:ref>). The results were largely similar. There was a general correlation between slower evolutionary rates and higher fractions of conservative changes, with a significant drop in the fraction of conservative changes between the slowest evolving, which was included in the slow set that has monkey-human identity > 99% and protein length >1102 amino acids, and the next slowest set (Figure <ns0:ref type='figure' target='#fig_9'>1A and B</ns0:ref>). Proteins with alignment gaps showed similar or slightly lower fractions of conservative changes than those without gaps.
We further studied the remaining proteins with shorter protein length (200-1102 amino acids) divided into 4 groups (95-99%, 90-95%, 80-90%, and 60-80% identity), and found similar but less robust and consistent trends (Table <ns0:ref type='table' target='#tab_6'>1 and Figure 1C and D)</ns0:ref>.</ns0:p></ns0:div>
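The grouping behind Table 1 and Figure 1 can be sketched in the same style. The record layout (per-protein percent identity, alignment length, and mismatched residue pairs) is an assumption for illustration, and the bin edges follow Table 1.

```python
# Illustrative sketch of the binning behind Table 1 / Figure 1 (not the original script).
# Each protein record is assumed to be a dict with keys 'pident', 'length', 'mismatches'.

def bin_label(pident, length):
    """Return the identity bin used for this protein, or None if it falls outside."""
    if length > 1102:
        edges = [(99, ">99"), (98, "98-99"), (96, "96-98"), (87, "87-96")]
    else:  # proteins of 200-1102 amino acids
        edges = [(95, ">95"), (90, "90-95"), (80, "80-90"), (60, "60-80")]
    for lower, label in edges:
        if pident > lower:
            return label
    return None

def fraction_by_bin(proteins, matrix, cutoff):
    """Fraction of conservative mismatches per identity bin, for one scoring matrix."""
    tallies = {}
    for protein in proteins:
        label = bin_label(protein["pident"], protein["length"])
        if label is None:
            continue
        conservative, total = tallies.get(label, (0, 0))
        for pair in protein["mismatches"]:
            if pair in matrix:
                total += 1
                conservative += matrix[pair] > cutoff
        tallies[label] = (conservative, total)
    return {label: c / t for label, (c, t) in tallies.items() if t}
```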
<ns0:div><ns0:head>Standing amino acid variants and evolutionary rates of proteins</ns0:head><ns0:p>We next studied the missense SNPs found in proteins with different evolutionary rates by using the 1KG dataset <ns0:ref type='bibr' target='#b0'>(Auton et al. 2015)</ns0:ref>. There were 15271 missense SNPs in the slowly evolving set of proteins (>1102 aa with 99% identity and >304 aa with 100% identity) and 546297 missense SNPs in the fast set (all proteins that remain after excluding the slow set). We assigned each amino acid change found in a missense SNP a conservation score as described above. The number of SNPs in each score category was then enumerated. We performed this analysis by using each of the four different scoring matrixes and found largely similar results (Figure <ns0:ref type='figure' target='#fig_6'>2</ns0:ref>). Missense SNPs in the slowly evolving set of proteins in general had lower fractions of drastic mutations, and higher fractions of conservative mutations relative to those in the faster evolving set of proteins (Figure <ns0:ref type='figure' target='#fig_6'>2</ns0:ref>). The fraction of conservative mutations in the slow evolving set was significantly higher than that of the fast set (P<0.001, Figure <ns0:ref type='figure' target='#fig_6'>2</ns0:ref>).</ns0:p><ns0:p>To test for natural selection regarding conservative changes, we next divided the slowly evolving set of missense SNPs into three groups of different minor allele frequency (MAF) as measured in Africans (similar results were found for other racial groups). For fast evolving proteins at mutation saturation, low MAF values of a missense SNP would mean stronger negative selection, and so SNPs with low MAF are expected to have lower proportions of conservative amino acid changes, since these changes may mean too little functional alteration to be under natural selection. The results showed that for missense SNPs in the fast evolving set of proteins, the common SNPs with MAF >0.001 showed a higher fraction of conservative changes than the rare SNPs with MAF<0.001 (P<0.001), indicating a stronger natural selection for the rare SNPs in the fast set (Figure 3). While SNPs in the fast set showed similar fractions of conservative changes across three different MAF groups (>0.001, >0.01, and >0.05), there was a more obvious trend of having a higher proportion of conservative changes as MAF values increase from >0.001 to >0.01 to >0.05 for SNPs in the slow set, consistent with weaker natural selection for common SNPs in the slow set (Figure 3). Each of the three groups in the fast set showed a significantly lower fraction of conservative changes than the respective group in the slow set (P<0.01), indicating stronger natural selection for SNPs in the fast set (Figure 3). The results indicate that common SNPs in slowly evolving proteins had more conservative changes that were under a weaker natural selection.</ns0:p></ns0:div>
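A hedged sketch of the MAF grouping and the slow-versus-fast comparison is given below. The paper reports chi-squared tests from GraphPad Prism; scipy.stats.chi2_contingency is used here as an equivalent 2x2 test, and the SNP record layout (set membership, MAF, and a precomputed conservative/non-conservative flag) is an assumption.

```python
# Illustrative sketch only: compare the fraction of conservative missense SNPs
# between the slow and fast sets, overall or within a MAF class.
from scipy.stats import chi2_contingency

def tally(snps, which_set, maf_min=None, maf_max=None):
    """Return (conservative, non_conservative) counts for one set and MAF window."""
    selected = [s for s in snps if s["set"] == which_set
                and (maf_min is None or s["maf"] > maf_min)
                and (maf_max is None or s["maf"] <= maf_max)]
    conservative = sum(1 for s in selected if s["conservative"])
    return conservative, len(selected) - conservative

def slow_vs_fast_test(snps, maf_min=None, maf_max=None):
    """2x2 chi-squared test of conservative vs non-conservative, slow vs fast."""
    table = [tally(snps, "slow", maf_min, maf_max),
             tally(snps, "fast", maf_min, maf_max)]
    chi2, p, _dof, _expected = chi2_contingency(table)
    return chi2, p

# Hypothetical usage: rare SNPs (MAF < 0.001) versus common SNPs (MAF > 0.001).
# chi2_rare, p_rare = slow_vs_fast_test(snps, maf_max=0.001)
# chi2_common, p_common = slow_vs_fast_test(snps, maf_min=0.001)
```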
<ns0:div><ns0:head>Discussion:</ns0:head><ns0:p>Our results here showed that fixed and standing changes in slowly evolving proteins were enriched with conservative amino acid substitutions. Similar results were obtained using four different matrixes to rank the conservative nature of a substitution. Based on substitution probability alone, amino acid substitutions in slowly evolving proteins are expected to be more conserved than those in fast evolving proteins, since fast evolving proteins have a higher probability of the doublet mutations that are necessary for a drastic substitution to occur, but have a very low rate of occurrence <ns0:ref type='bibr' target='#b31'>(Whelan & Goldman 2004)</ns0:ref>. If evolutionary time is not long enough for mutation saturation to occur, non-conservative substitutions would be expected to be a function of mutation rate and time. This simple explanation appears not to be the reason for the observations here, since the three matrixes that have removed the impact of substitution probability produced similar results to the matrix that does not take into account the impact of substitution probability.</ns0:p><ns0:p>The four matrixes we used were developed in different ways: the BLOSUM62 matrix by using sequence alignments involving relatively divergent species, the delta_V and delta_P matrixes directly from the physicochemical properties of amino acids, and the EX matrix from experimental mutagenesis. However, the matrixes are still expected to provide information that applies to alignment data from relatively closely related species such as monkey and human, since proteins identified as fast evolving by comparing closely related species would also in general be identified as such by comparing more distantly related species. This is suggested by the molecular clock phenomenon or the constant evolutionary rates across time and species. It appears that the delta_V matrix produced less significant results compared to the other three matrixes. Overestimation of the number of conservative substitutions in fast evolving proteins may account for this. For example, substitutions involving differently charged residues with similar side chain volumes would be scored as non-conservative by the delta_P matrix but conservative by the delta_V matrix (e.g., Glu to Leu). Our analysis does not take into account the co-evolution and co-variation of substitutions due to the physicochemical constraints on protein structure and folding <ns0:ref type='bibr' target='#b27'>(Pollock et al. 2012)</ns0:ref>.</ns0:p><ns0:p>Site-specific variations in substitution constraints, however, may be similarly present in different proteins of different evolutionary rates, so that they may not affect the overall results here. Also, <ns0:ref type='bibr'>Pollock et al.</ns0:ref> show that site-specific preferences shift over time due to substitutions at other sites that are epistatic to the site of interest <ns0:ref type='bibr' target='#b27'>(Pollock et al. 2012</ns0:ref>). Thus, it could be very complex to define site-specific preferences in a meaningful way. It has recently been shown that coding region mutation rates as measured prior to the effect of natural selection are significantly lower in genes where mutations are more likely to be deleterious <ns0:ref type='bibr' target='#b23'>(Monroe et al. 2020)</ns0:ref>.
Mutations are more likely to be deleterious and less likely to be fixed in highly conserved proteins, which are by definition more common in slowly evolving proteins. Thus, slowly evolving genes in fact do have inherently slower mutation rates, which would make them less likely to reach mutation saturation.</ns0:p><ns0:p>The results here may be best accounted for by mutation saturation in fast evolving proteins, where multiple recurrent mutations at the same site have occurred across taxa and time (Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref>). At saturation, the range of mutations that have happened at any given site of any given taxon is irrelevant to the particular type of possible alleles the site may carry at the present time.</ns0:p><ns0:p>Natural selection is expected to play an important role in determining that. And natural selection of course would be most efficient if the mutated allele is functionally very different from the non-mutated allele. If two taxa are different in traits, it would follow that some of the differences in protein sequence between them would be non-neutral or non-conservative changes. Fast evolving genes play more adaptive roles and hence are more involved in accounting for the different traits, and so are expected to be enriched with non-conservative substitutions compared to slowly evolving genes. A fast evolving and adaptive site is more likely to be mutated more than once or encounter mutation saturation. Fixed and standing conservative variants in slowly evolving proteins may be under weaker natural selection for several reasons. First, substitutions in slowly evolving proteins are more likely to be conservative, and conservative changes may not alter protein structure and function as dramatically as the drastic changes, which may make it harder for natural selection to occur. Second, as fixed variants cannot be fixed because of negative selection on the variants per se, they are either neutral or under positive selection. Indeed, fast evolving proteins are known to be under more positive selection <ns0:ref type='bibr' target='#b4'>(Cai & Petrov 2010;</ns0:ref><ns0:ref type='bibr' target='#b34'>Yuan et al. 2017)</ns0:ref> because a mutation that takes a long time to arrive would be useless for quick adaptive needs. Finally, SNPs in the slow set may be under negative selection if they produce drastic changes, or under no selection if they produce conservative changes (assuming no positive selection as explained above). While one would expect fewer conservative changes in the rare SNPs compared to the common SNPs, since negative selection may account in part for the low MAF value, the difference in the fraction of conservative changes between the rare SNPs and the common ones in the slow set should be greater than that in the fast set, since the SNPs in the fast set may be under natural selection regardless of MAF values (low MAF SNPs under more negative selection while high MAF SNPs under both positive and negative selection).
Our results are consistent with such expectations.</ns0:p><ns0:p>If substitutions in fast evolving proteins are at saturation and under natural selection as indicated here, it would follow that genetic distances or degrees of sequence mismatches between taxa in these proteins would be at saturation, or no longer correlated exactly with time.</ns0:p><ns0:p>It is easy to tell the difference between optimum/maximum saturation genetic distances and linear distances as described previously <ns0:ref type='bibr' target='#b12'>(Huang 2010)</ns0:ref>. Briefly, imagine a 100 amino acid protein with only 1 neutral site. In a multispecies alignment involving at least three taxa, if one finds only one of these taxa with a mutation at this neutral site while all other species have the same non-mutated residue, there is no saturation (Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref>, time point 2). However, if one finds that nearly every taxon has a unique amino acid, one would conclude mutation saturation as there would have been multiple independent substitution events among different species at the same site, and repeated mutations at the same site do not increase distance (Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref>, time Manuscript to be reviewed point 3 and 4 for fast evolving proteins). We have termed those sites with repeated mutations 'overlap' sites <ns0:ref type='bibr' target='#b12'>(Huang 2010)</ns0:ref>. So, a diagnostic criterion for saturated maximum distance between two species is the proportion of overlap sites among mismatched sites. Saturation would typically have 50-60% overlapped sites that are 2-3 fold higher than that expected before saturation <ns0:ref type='bibr' target='#b12'>(Huang 2010;</ns0:ref><ns0:ref type='bibr' target='#b21'>Luo & Huang 2016)</ns0:ref>. It is not expected to have near 100% overlapped sites, because certain sites may only accommodate 2 or very few amino acid residues at saturation equilibrium, which would prevent them from presenting as overlapped sites even though they are in fact overlapped and saturated sites. Also, saturation may result in convergent evolution with independent mutations changing to the same amino acid (Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref>, time point 5 for fast evolving proteins). This overlap ratio method is an empirical one free of uncertain assumptions and hence more realistic than other methods of testing for saturation, such as comparing the observed number of mutations to the inferred one based on uncertain phylogenetic trees derived from maximum parsimony or maximum likelihood methods <ns0:ref type='bibr' target='#b26'>(Philippe et al. 1994;</ns0:ref><ns0:ref type='bibr' target='#b30'>Steel et al. 1993;</ns0:ref><ns0:ref type='bibr' target='#b32'>Xia et al. 2003)</ns0:ref>. By using the overlap ratio method, we have verified that the vast majority of proteins show maximum distances between any two deeply diverged taxa, and only a small proportion, the slowest evolving, are still at the linear phase of changes <ns0:ref type='bibr' target='#b12'>(Huang 2010;</ns0:ref><ns0:ref type='bibr' target='#b21'>Luo & Huang 2016;</ns0:ref><ns0:ref type='bibr' target='#b34'>Yuan et al. 2017</ns0:ref>). 
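One simplified reading of this overlap-ratio diagnostic can be sketched as follows. It assumes an alignment of the two focal taxa plus one additional taxon, and it counts a mismatched position as an 'overlap' site when the third taxon differs from both focal residues (implying at least two independent changes at that site). This is only an illustration of the idea under those assumptions, not the published implementation (Huang 2010).

```python
# Simplified, illustrative reading of the overlap-ratio diagnostic discussed above.
# seq_a and seq_b are the two focal taxa; seq_ref is a third aligned taxon.
def overlap_ratio(seq_a, seq_b, seq_ref):
    mismatched = overlap = 0
    for a, b, r in zip(seq_a, seq_b, seq_ref):
        if "-" in (a, b, r):           # skip alignment gaps
            continue
        if a != b:                     # mismatch between the two focal taxa
            mismatched += 1
            if r != a and r != b:      # third taxon differs from both: repeated mutation
                overlap += 1
    return overlap / mismatched if mismatched else float("nan")

# In this framework, ratios around 0.5-0.6 would suggest saturated (maximum) distance,
# whereas much lower ratios are consistent with the linear phase of divergence.
```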
Variations at most genomic sites within human populations are also at optimum equilibrium, as evidenced by the observation that a slight increase above the present genetic diversity level in normal subjects is associated with patient populations suffering from complex diseases <ns0:ref type='bibr' target='#b8'>(Gui et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b9'>He et al. 2017;</ns0:ref><ns0:ref type='bibr'>Lei & Huang 2017;</ns0:ref><ns0:ref type='bibr' target='#b20'>Lei et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b35'>Yuan et al. 2012</ns0:ref>;</ns0:p><ns0:p>PeerJ reviewing PDF | (2019:12:43889:3:0:NEW 12 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed <ns0:ref type='bibr' target='#b36'>Yuan et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b37'>Zhu et al. 2015)</ns0:ref>, as well as the observation that the sharing of SNPs among different human groups is an evolutionary rate-dependent phenomenon, with more sharing in fast evolving sequences <ns0:ref type='bibr' target='#b34'>(Yuan et al. 2017)</ns0:ref>. It is important to note that a protein in a complex species plays more roles than its orthologous protein in a species of less organismal complexity, as explained by the maximum genetic diversity hypothesis <ns0:ref type='bibr' target='#b10'>(Hu et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b11'>Huang 2008;</ns0:ref><ns0:ref type='bibr' target='#b14'>Huang 2016)</ns0:ref>. A protein has more functions to play in complex organisms due in part to its involvement in more cell types, and hence it becomes more susceptible to mutational inactivation. While the divergence time among higher taxa such as between human and Macaca monkey is relatively short, mutation saturation could still happen for fast evolving proteins since the number of positions that can accept fixed substitutions is comparatively lower.</ns0:p><ns0:p>It is also important to note that the type of saturation we describe here is slightly different from that seen in 'long branch attraction (LBA)' in phylogenetic trees <ns0:ref type='bibr' target='#b2'>(Bergsten 2005)</ns0:ref>. In LBA, saturation means convergent mutations leading to the same amino acid residue or nucleotide among (across) multiple taxa (Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref>, time point 5 for fast evolving protein P1 and P2).</ns0:p><ns0:p>Although they were derived independently, these shared alleles can be misinterpreted in phylogenetic analyses as being shared due to common ancestry. However, for the type of saturation we have discussed here, independent mutations at the same site among different taxa would generally lead to different taxa having different amino acids rather than the same (Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref>, time point 3 and 4 for fast evolving proteins), since the probability of an independent mutation changing to the same amino acid is about 20 times lower than that of mutating to a different amino acid (assuming no difference in the probability of being mutated to among the 20</ns0:p><ns0:p>PeerJ reviewing PDF | (2019:12:43889:3:0:NEW 12 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed amino acids). Thus, the type of saturation we have described here is expected to be more commonplace in nature compared to that in the case of LBA. 
Since a single mutation is sufficient for a mismatch between any two taxa, multiple independent mutations at the same site leading to different amino acids would not increase the number of mismatches and would remain unnoticeable if one only aligns the sequences from two different taxa (Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref>, the number of mismatch between P2 and P3 is 1 at time point 2 before saturation and remains as 1 at time point 4 after saturation). It only becomes apparent when one aligns the sequences from three different taxa (Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref>, time point 3 and 4 for fast evolving proteins), as we described above and in previous publications <ns0:ref type='bibr' target='#b12'>(Huang 2010;</ns0:ref><ns0:ref type='bibr' target='#b21'>Luo & Huang 2016)</ns0:ref>. However, even though the type of saturation we describe here does not increase the number of mismatches, it could result in a reduced number of mismatches in rare cases when independent mutations in two different taxa happen to lead to the same residue (Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref>, time point 5 for P1 and P2). Thus, it does not preclude the type of saturation observed in the case of LBA. These two types of saturation are essentially just two different aspects of the same saturation phenomenon, one more commonplace and manifesting as a higher overlap ratio while the other less common and manifesting as LBA.</ns0:p><ns0:p>It is well known that fast evolving proteins that have reached mutation saturation are not suitable for phylogenetic inferences. We have previously shown that mutation saturation as measured by the overlap ratio method has been largely overlooked <ns0:ref type='bibr' target='#b13'>(Huang 2012;</ns0:ref><ns0:ref type='bibr' target='#b34'>Yuan et al. 2017)</ns0:ref>, in contrast to the long noted LBA. As mentioned above, it appears that the inherent mutation rates are different between fast and slowly evolving proteins as determined by</ns0:p><ns0:p>PeerJ reviewing PDF | (2019:12:43889:3:0:NEW 12 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed studying the rate of fixed substitutions <ns0:ref type='bibr' target='#b23'>(Monroe et al. 2020)</ns0:ref>. We can thus infer that if the rate difference is large enough, slowly evolving genes should be used in phylogenetic inferences because they would be less likely to reach mutation saturation. The findings here that fast evolving proteins are enriched with non-neutral substitutions relative to slowly evolving proteins are consistent with such an idea. There are two points to note regarding fast and slowly evolving proteins. First, the definition of slowly evolving proteins here (99% identity) is only meant for the specific comparison between human and monkey. For relatively more distantly related species such as human and mouse, the set of slowly evolving proteins is expected to be similar but the percentage identity cutoff for the slow set would be lower than 99%. This is because proteins are known to evolve at constant rates across time and species according to the molecular clock and the neutral theory.</ns0:p><ns0:p>Second, the classification of fast evolving proteins is not absolute and is evolutionary timedependent. Proteins that are found as fast evolving or have reached mutation saturation after a certain relatively long time of evolution are expected to look like slowly evolving or not showing mutation saturation if evolutionary time is relatively short. 
This is supported by our results here of a nearly linear relationship between evolutionary rates and the fraction of non-conservative changes.</ns0:p><ns0:p>Our finding supports the possibility that, from early on since first diverging from a common ancestor, two sister species are expected to accumulate mostly neutral mismatches, which would later be replaced by non-conservative mismatches when time is long enough for mutation saturation to have taken place. This is to be expected as sister species should become more differentiated in phenotypes with time, and hence more different in sequences with time in terms of both the number of mismatches as well as the chemical nature (conservative or not) of the mismatches.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion:</ns0:head><ns0:p>Our study here addressed whether observed amino acid variants in slowly evolving proteins are more or less neutral than those in fast evolving proteins. The results suggest that fixed and standing missense variants in slowly evolving proteins are more likely to be neutral, a finding that has implications for phylogenetic inferences.</ns0:p></ns0:div>
<ns0:div><ns0:head>Declarations:</ns0:head><ns0:p>Tables:</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_4'>1</ns0:ref>. Relationship between evolutionary rates and the conservative nature of fixed amino acid substitutions. Evolutionary rates of proteins in the human genome are represented by the percentage of identities between human proteins and their orthologs in Macaca monkey. The proteins are divided into groups of different evolutionary rates, and the proportion of conservative amino acid mismatches in each group are shown for the four different ranking matrixes. Not all proteins encoded by the macaque and human genomes are considered because some proteins do not have easily identifiable orthologs. Also, we limited our analysis to proteins that have length >200 amino acids and show >60% identity between macaque and human in order to reduce the chance of misidentifying orthologs.</ns0:p><ns0:note type='other'>Figure Legends:</ns0:note><ns0:note type='other'>Figure 1</ns0:note><ns0:note type='other'>Figure 2</ns0:note><ns0:note type='other'>Figure 3</ns0:note></ns0:div>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Fraction of conservative substitutions in fixed changes in proteins of different</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Fraction of conservative substitutions in standing missense substitutions in</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Fraction of conservative substitutions in missense SNPs with different MAFs in</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Illustration of non-conservative substitutions and mutation saturation in fast</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Fraction of conservative substitutions in fixed changes in proteins of different evolutionary rates.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Fraction of conservative substitutions in standing missense substitutions in proteins of different evolutionary rates.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Fraction of conservative substitutions in missense SNPs with different MAFs in proteins of different evolutionary rates.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Fraction of conservative substitutions in missense SNPs with different MAFs in proteins of different evolutionary rates. SNPs from either fast or slowly evolving proteins were classified based on MAF values and the fractions of conservative changes in each class are shown. Statistical significance score in difference between slow and fast or between different MAF cutoffs are shown. **, P<0.01. Chi squared test.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Illustration of non-conservative substitutions and mutation saturation in fast evolving proteins.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 1 (on next page)</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 1 . Relationship between evolutionary rates and the conservative nature of fixed amino acid substitutions.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Evolutionary rates of proteins in the human genome are represented by the percentage of identities between human proteins and their orthologs in Macaca monkey. The proteins are divided into groups of different evolutionary rates, and the proportion of conservative amino acid mismatches in each group are shown for the four different ranking matrixes. Not all proteins encoded by the macaque and human genomes are considered because some proteins do not have easily identifiable orthologs. Also, we limited our analysis to proteins that have length >200 amino acids and show >60% identity between macaque and human in order to reduce the chance of misidentifying orthologs.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='5'>Protein length >1102 amino acid with no gaps in alignment</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Identity %</ns0:cell><ns0:cell>BLOSUM62</ns0:cell><ns0:cell>EX</ns0:cell><ns0:cell>delta-V</ns0:cell><ns0:cell>delta-P</ns0:cell><ns0:cell># proteins</ns0:cell><ns0:cell>Length ave.</ns0:cell></ns0:row><ns0:row><ns0:cell>>99</ns0:cell><ns0:cell>0.49</ns0:cell><ns0:cell>0.30</ns0:cell><ns0:cell>0.34</ns0:cell><ns0:cell>0.67</ns0:cell><ns0:cell>136</ns0:cell><ns0:cell>1532.7</ns0:cell></ns0:row><ns0:row><ns0:cell>98-99</ns0:cell><ns0:cell>0.44</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.63</ns0:cell><ns0:cell>137</ns0:cell><ns0:cell>1464.0</ns0:cell></ns0:row><ns0:row><ns0:cell>96-98</ns0:cell><ns0:cell>0.42</ns0:cell><ns0:cell>0.24</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.58</ns0:cell><ns0:cell>125</ns0:cell><ns0:cell>1539.4</ns0:cell></ns0:row><ns0:row><ns0:cell>87-96</ns0:cell><ns0:cell>0.38</ns0:cell><ns0:cell>0.20</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.55</ns0:cell><ns0:cell>57</ns0:cell><ns0:cell>1414.4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='5'>Protein length >1102 amino acid with gaps in alignment</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>>99</ns0:cell><ns0:cell>0.47</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.42</ns0:cell><ns0:cell>0.70</ns0:cell><ns0:cell>61</ns0:cell><ns0:cell>1659.3</ns0:cell></ns0:row><ns0:row><ns0:cell>98-99</ns0:cell><ns0:cell>0.41</ns0:cell><ns0:cell>0.24</ns0:cell><ns0:cell>0.34</ns0:cell><ns0:cell>0.63</ns0:cell><ns0:cell>119</ns0:cell><ns0:cell>1855.8</ns0:cell></ns0:row><ns0:row><ns0:cell>96-98</ns0:cell><ns0:cell>0.36</ns0:cell><ns0:cell>0.21</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.56</ns0:cell><ns0:cell>320</ns0:cell><ns0:cell>1792.7</ns0:cell></ns0:row><ns0:row><ns0:cell>87-96</ns0:cell><ns0:cell>0.33</ns0:cell><ns0:cell>0.19</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.50</ns0:cell><ns0:cell>437</ns0:cell><ns0:cell>1727.3</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Protein length 200-1102 amino acid with no gaps in alignment</ns0:head><ns0:label /><ns0:figDesc>PeerJ</ns0:figDesc><ns0:table><ns0:row><ns0:cell>>95</ns0:cell><ns0:cell>0.43</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.63</ns0:cell><ns0:cell>6984</ns0:cell><ns0:cell>478.3</ns0:cell></ns0:row><ns0:row><ns0:cell>90-95</ns0:cell><ns0:cell>0.39</ns0:cell><ns0:cell>0.20</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.55</ns0:cell><ns0:cell>1229</ns0:cell><ns0:cell>407.9</ns0:cell></ns0:row><ns0:row><ns0:cell>80-90</ns0:cell><ns0:cell>0.38</ns0:cell><ns0:cell>0.20</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.52</ns0:cell><ns0:cell>276</ns0:cell><ns0:cell>350.9</ns0:cell></ns0:row><ns0:row><ns0:cell>60-80</ns0:cell><ns0:cell>0.41</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.37</ns0:cell><ns0:cell>0.57</ns0:cell><ns0:cell>91</ns0:cell><ns0:cell>372.8</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='5'>Protein length 200-1102 amino acid with gaps in alignment</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>>95</ns0:cell><ns0:cell>0.38</ns0:cell><ns0:cell>0.22</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.59</ns0:cell><ns0:cell>2001</ns0:cell><ns0:cell>601.1</ns0:cell></ns0:row><ns0:row><ns0:cell>90-95</ns0:cell><ns0:cell>0.33</ns0:cell><ns0:cell>0.19</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.51</ns0:cell><ns0:cell>1529</ns0:cell><ns0:cell>566.4</ns0:cell></ns0:row><ns0:row><ns0:cell>80-90</ns0:cell><ns0:cell>0.30</ns0:cell><ns0:cell>0.17</ns0:cell><ns0:cell>0.30</ns0:cell><ns0:cell>0.47</ns0:cell><ns0:cell>1050</ns0:cell><ns0:cell>489.1</ns0:cell></ns0:row><ns0:row><ns0:cell>60-80</ns0:cell><ns0:cell>0.33</ns0:cell><ns0:cell>0.20</ns0:cell><ns0:cell>0.33</ns0:cell><ns0:cell>0.48</ns0:cell><ns0:cell>467</ns0:cell><ns0:cell>447.1</ns0:cell></ns0:row></ns0:table><ns0:note>reviewing PDF | (2019:12:43889:3:0:NEW 12 Aug 2020)</ns0:note></ns0:figure>
</ns0:body>
" | "Dear Editor Braun,
Thank you very much for the suggestions. We have revised the manuscript accordingly and hope that you now find it suitable for publication in Peer J.
Sincerely,
Shi Huang
Responses:
I am really excited by this manuscript and am happy to accept it, but there are three very minor corrections I'd like you to make before the final acceptance. One is a factual matter, one is a minor grammatical issue, and the last is a matter of consistency. I will provide the exact locations and some suggested fixes:
The first is the paragraph on lines 220-230 (note that I am using line numbers from the pdf; line numbers in word documents with tracked changes will differ). I suggest you use:
The four matrixes we used were developed in different ways: the BLOSUM matrix by using sequence alignments involving relatively divergent species, the delta_V and delta_P matrixes directly from the physicochemical properties of amino acids, and the EX matrix from experimental mutagenesis. However, the matrixes are still expected to provide information that applies to alignment data from relatively closely related species such as monkey and human, since proteins identified as fast evolving by comparing closely related species would also in general be identified as such by comparing more distantly related species. This is suggested by the molecular clock phenomenon or the constant evolutionary rates across time and species. It appears that the delta_V matrix produced less significant results compared to the other three matrixes. Overestimation of the number of conservative substitutions in fast evolving proteins may account for this. For example, substitutions involving differently charged residues with similar side chain volumes would be scored as non-conservative by the delta_P matrix but conservative by the delta_V matrix (e.g., Glu to Leu).
The changed material is this:
'The four matrixes we used were developed in different ways: the BLOSUM matrix by using sequence alignments involving relatively divergent species, the delta_V and delta_P matrixes directly from the physicochemical properties of amino acids, and the EX matrix from experimental mutagenesis. However, the matrixes are still expected to provide information that applies to alignment data ...' (this is the beginning of the paragraph)
This originally stated that all four matrixes were based on alignments of divergent taxa. This is not correct (although I think I see where in the reviews you may have taken this comment from). Two matrixes were based solely on chemistry and a third was based on laboratory mutagenesis studies.
Thank you for correcting our mistake here. We have made the revisions as you suggested.
The second issue is the paragraph on lines 238-243, which should be changed as follows:
It has recently been shown that coding region mutation rates as measured prior to the effect of natural selection are significantly lower in genes where mutations are more likely to be deleterious (Monroe et al. 2020). Mutations are more likely to be deleterious and less likely to be fixed in highly conserved proteins, which are by definition more common in slowly evolving proteins. Thus, slowly evolving genes in fact do have inherently slower mutation rates, which would make them less likely to reach mutation saturation.
The changes are minor - something was missing in the next to last sentence so I added 'more common in'. I also added a comma after the 'Thus' that begins the last sentence.
We have changed the text accordingly.
The final thing I'd like you to do is check the names you use for the delta matrixes. You sometimes use 'delta P' and 'delta V', other times you use 'delta-P' and 'delta-V' and other times you use 'delta_P' and 'delta_V'. I don't have a strong feeling regarding the best way to write this out - I would be happy with either. However, you should pick one of the ways of writing the matrix names and stick with it throughout the paper. I note that you use 'delta P' and 'delta V' in the figures, so perhaps it is best to use that (although I will leave the final decision in your hands).
Thank you for catching this mistake. We have now used delta_V and delta_P in both the text and the figures, since this would only mean to correct one figure (the alternative would require changes to two figures).
I apologize for feeling that I have to ask for these final changes rather than a simple accept since these are minor issues. But I'd really like to see this manuscript get out the door and into the hands of readers, who will hopefully be as interested in your results as I am. I hope it will be easy for you to get this done quickly so I can also be quick in the final acceptance. Best wishes!
We believe that our manuscript has really become much stronger thanks to your editing. We appreciate it very much. Thank you!
" | Here is a paper. Please give your review comments after reading it. |
655 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Cryptosporidium spp. and Giardia duodenalis are two waterborne protozoan parasites that can cause diarrhea. Human and animal feces in surface water are a major source of these pathogens. This paper presents a GloWPa-Crypto model that estimates Cryptosporidium and G. duodenalis emissions from human and animal feces in the Three Gorges Reservoir (TGR), and uses scenario analysis to predict the effects of sanitation, urbanization, and population growth on oocyst and cyst emissions for 2050. Our model estimated annual emissions of 1.6 × 10^15 oocysts and 2.1 × 10^15 cysts from human and animal feces, respectively. Humans were the largest contributors of oocysts and cysts, followed by pigs and poultry. Cities were hot-spots for human emissions, while districts with high livestock populations accounted for the highest animal emissions. Our model was the most sensitive to oocyst excretion rates. The results indicated that 74% and 87% of total emissions came from urban areas and humans, respectively, and 86% of total human emissions were produced by the urban population. The scenario analysis showed a potential decrease in oocyst and cyst emissions with improvements in urbanization, sanitation, wastewater treatment, and manure management, regardless of population increase. Our model can further contribute to the understanding of environmental pathways, the risk assessment of Cryptosporidium and Giardia pollution, and effective prevention and control strategies that can reduce the outbreak of waterborne diseases in the TGR and other similar watersheds.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Cryptosporidium spp. and Giardia duodenalis are two ubiquitous parasites that can cause gastrointestinal disease in humans and many animals worldwide <ns0:ref type='bibr' target='#b59'>(Šlapeta, 2013;</ns0:ref><ns0:ref type='bibr' target='#b54'>Ryan, Fayer, & Xiao, 2014;</ns0:ref><ns0:ref type='bibr' target='#b72'>Wu et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b56'>Sahraoui et al., 2019)</ns0:ref>. They can cause cryptosporidiosis and giardiasis, which are typically self-limiting infections in immunocompetent individuals, but are life-threatening illnesses in immunocompromised people, such as AIDS patients <ns0:ref type='bibr' target='#b75'>(Xiao & Fayer, 2008;</ns0:ref><ns0:ref type='bibr' target='#b45'>Liu et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b23'>Ghafari et al., 2018 )</ns0:ref>. In developing countries, diarrhea has been identified as the third leading cause of death <ns0:ref type='bibr'>(WHO, 2008)</ns0:ref>, and global deaths from diarrhea are around 1.3 million annually <ns0:ref type='bibr'>(GBD, 2015)</ns0:ref>. There are also many waterborne cryptosporidiosis and giardiasis outbreaks regularly reported in developed countries <ns0:ref type='bibr' target='#b31'>(Hoxie et al., 1997;</ns0:ref><ns0:ref type='bibr' target='#b7'>Bartelt, Attias, & Black, 2016)</ns0:ref>.</ns0:p><ns0:p>Humans and many animals are important reservoirs for Cryptosporidium spp. and G. duodenalis, and large amounts of both pathogens and extremely high oocyst and cyst excretions have been traced in their feces <ns0:ref type='bibr' target='#b24'>(Graczyk & Fried, 2007;</ns0:ref><ns0:ref type='bibr' target='#b61'>Tangtrongsup et al., 2019)</ns0:ref>. Moreover, the transmission of these parasites occurs through a variety of mechanisms in the fecal-oral route, including the direct contact with or indirect ingestion of contaminated food or water <ns0:ref type='bibr' target='#b10'>(Castro-Hermida et al., 2009;</ns0:ref><ns0:ref type='bibr'>Dixon & Brent, 2016;</ns0:ref><ns0:ref type='bibr' target='#b55'>Saaed & Ongerth, 2019)</ns0:ref>. These parasites can enter and pollute surface water directly through sewage sludge or indirectly through field runoff <ns0:ref type='bibr' target='#b25'>(Graczyk et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b49'>Mons et al., 2008)</ns0:ref>. Oocysts and cycsts are highly infectious, very stable in environmental water, and largely resistant to many chemical and physical inactivation agents <ns0:ref type='bibr' target='#b9'>(Carmena et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b11'>Castro-Hermida, Gonzalez-Warleta, & Mezo, 2015;</ns0:ref><ns0:ref type='bibr' target='#b1'>Adeyemo et al., 2019)</ns0:ref>, making the presence of waterborne Cryptosporidium spp. and G. duodenalis pathogens in surface water a serious public health threat <ns0:ref type='bibr' target='#b72'>(Wu et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b42'>Li et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Cryptosporidiosis and giardiasis have been reported in at least 300 areas and more than 90 countries worldwide <ns0:ref type='bibr' target='#b78'>(Yang et al., 2017)</ns0:ref>. Previous studies have predicted that Cryptosporidium is in 4% to 31% of the stools of immunocompetent people living in developing countries <ns0:ref type='bibr' target='#b53'>(Quihui-Cota et al., 2017)</ns0:ref> and in 1% of the stools of people with high incomes <ns0:ref type='bibr' target='#b12'>(Checkley et al., 2015)</ns0:ref>. 
Additionally, it has been estimated that more than 200 million people are chronically infected with giardiasis, with 500,000 new cases reported each year, and that waterborne Giardia outbreaks affect approximately 10% of the world's population <ns0:ref type='bibr' target='#b50'>(Norhayati et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b55'>Saaed & Ongerth, 2019)</ns0:ref>. To date, Cryptosporidium spp. and G. duodenalis have been found in more than 27 provincial administrative regions in China <ns0:ref type='bibr' target='#b78'>(Yang et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b44'>Liu et al., 2020)</ns0:ref>. However, there continues to be a critical lack of surveillance systems documenting and tracking protozoan infection and waterborne outbreaks in developing countries <ns0:ref type='bibr' target='#b6'>(Baldursson & Karanis, 2011;</ns0:ref><ns0:ref type='bibr'>Efstratiou, Ongerth, & Karanis, 2017ab)</ns0:ref>. Cryptosporidium and Giardia have recently been added as pathogens in China's Standards for Drinking Water Quality (GB/ <ns0:ref type='bibr'>T5749-2006, 2007)</ns0:ref>, suggesting that greater attention is being paid to waterborne parasite control in a region with no previous monitoring and reporting systems. Nevertheless, the incidence rate and risks of waterborne protozoan illness are still poorly understood in China, making it difficult to combat parasitic protozoa, manage source water, and assess future risks <ns0:ref type='bibr' target='#b2'>(An et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b73'>Xiao et al., 2013a;</ns0:ref><ns0:ref type='bibr' target='#b6'>Baldursson & Karanis, 2011)</ns0:ref>.</ns0:p><ns0:p>The Three Gorges Reservoir (TGR), one of the world's largest comprehensive hydropower projects, is located at the upper reaches of the Yangtze River, the longest river in Asia <ns0:ref type='bibr' target='#b26'>(He et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b57'>Sang et al., 2019)</ns0:ref>. It is an important source of water and plays a crucial role in China's economy <ns0:ref type='bibr' target='#b43'>(Li, Huang, & Qu, 2017)</ns0:ref> by optimizing their water resources. Serious pollution from agricultural activities and domestic sewage discharge have adversely affected the sustainable development of the TGR and the entire Yangtze River Basin, and pose a threat to future resources <ns0:ref type='bibr' target='#b19'>(Fu et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b77'>Yang et al., 2015)</ns0:ref>. There is a lack of observational data on Cryptosporidium and Giardia emissions from people and livestock in China. The existing data does show that monitoring programs are expensive, time-consuming, and often cannot detect or properly measure the ambient concentrations of oocysts and cysts <ns0:ref type='bibr'>(Efstratiou, Ongerth & Karanis, 2017ab;</ns0:ref><ns0:ref type='bibr' target='#b47'>Martins et al., 2019)</ns0:ref>. Understanding the environmental emissions and transmission routes of parasitic protozoa is beneficial when developing strategies to assess and mitigate waterborne diseases. 
In this study, we aimed to: (i) use a spatially explicit model to estimate total annual oocyst and cyst emissions from human and livestock feces in the TGR; (ii) use scenario analysis to explore the impacts of population growth, urbanization, and sanitation changes on human and animal Cryptosporidium and Giardia emissions in surface water; and (iii) contribute to a general understanding of the risk of protozoan parasites and to strategies that will control and reduce the burden of waterborne pathogens in the TGR.</ns0:p></ns0:div>
<ns0:div><ns0:head>Study area and model components</ns0:head><ns0:p>The TGR Area is located at 28°30' to 31°44' N and 105°44' to 111°39' E in the lower section of the upper reaches of the Yangtze River. It has a watershed of 46,118 km², is home to approximately 16.8 million residents, and its river system covers 38 Chongqing districts and counties (Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>). Using the area's population (Fig. <ns0:ref type='figure' target='#fig_0'>1A</ns0:ref>) and livestock density (Fig. <ns0:ref type='figure' target='#fig_0'>1B</ns0:ref>) (Table <ns0:ref type='table'>S1</ns0:ref>), we applied the GloWPa-Crypto model to estimate oocyst and cyst emissions in the Chongqing region of the TGR <ns0:ref type='bibr' target='#b28'>(Hofstra et al., 2013)</ns0:ref>. We defined an emission as the annual total number of oocysts and cysts excreted by people and livestock that end up in surface water. We used emission data from 2013 for our model since the records for human and livestock populations from that year were the most complete. The model (GloWPa-Crypto TGR) consisted of two components: a human emission model and an animal emission model. Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> shows a schematic sketch of the model's components. We identified two types of pollution sources for the total oocysts and cysts found in the TGR. Point sources were human emissions connected to sewage systems that indirectly reached the TGR after treatment, or directly before treatment. Nonpoint sources were emissions from rural residents or livestock resulting from manure being used as fertilizer and entering surface water via runoff. Our model was partly based on <ns0:ref type='bibr' target='#b28'>Hofstra et al. (2013)</ns0:ref> and other reviews suggesting improvements in manure treatment of livestock and human emissions <ns0:ref type='bibr' target='#b28'>(Hofstra et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b65'>Vermeulen et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b64'>Vermeulen et al., 2017)</ns0:ref>. We ran the model at both district and county levels. Our model cannot differentiate Cryptosporidium species, owing to the paucity of data on species-specific prevalence and excretion rates in humans and livestock.</ns0:p></ns0:div>
<ns0:div><ns0:head>Calculating oocyst and cyst excretion in human feces (H)</ns0:head><ns0:p>Using population (P), sanitation availability (F), and excretion (O p ) data from 2013, the model first estimated human oocyst and cyst emissions. We divided the human populations into four emission categories: connected sources, direct sources, diffuse sources, and non-sources. Detailed descriptions of the emission categories are provided in <ns0:ref type='bibr' target='#b39'>Kiulia et al. (2015)</ns0:ref>. The model calculated not only the human emissions connected to sewers in urban and rural areas, but also direct and diffuse emissions. Unlike <ns0:ref type='bibr' target='#b39'>Kiulia et al. (2015)</ns0:ref>, our model did not differentiate across age categories. In rural areas of China, a portion of fecal waste is collected and used for fertilizer and irrigation. Therefore, we assumed that human feces runoff from septic tanks and pit latrines was a diffuse source of oocysts and cysts in surface water. The prevalence rate of the two protozoan parasites was assumed to be 10% in developing countries (Human Development Index (HDI) < 0.785; <ns0:ref type='bibr' target='#b28'>Hofstra et al., 2013)</ns0:ref>, and the average excretion rate (O p ) was assumed to be 1.0 × 10 8 oocysts and 1.58 × 10 8 cysts per person per year, respectively (Ferguson et al., 2007; Hofstra et al., 2013). Our model assumed secondary sewage treatment, according to the Chinese discharge standard of pollutants for municipal wastewater treatment plants (<ns0:ref type='bibr' target='#b20'>GB18918-2002)</ns0:ref> and the Chinese technological policy for the treatment of municipal sewage and pollution control (http://www.mee.gov.cn/). The removal efficiencies were 10%, 50%, and 95% for primary, secondary, and tertiary treatment, respectively <ns0:ref type='bibr' target='#b28'>(Hofstra et al., 2013)</ns0:ref>. The results (H) were calculated using Eq. (<ns0:ref type='formula'>1</ns0:ref>) and (2). The four human emission categories are represented by the set</ns0:p><ns0:formula xml:id='formula_0'>K i ∈ {K 1 , K 2 , K 3 , K 4 }, i ∈ {1, 2, 3, 4}, where K 1 = urban connected emissions, K 2 = rural connected emissions, K 3 = urban direct emissions, and K 4 = rural diffuse emissions.</ns0:formula><ns0:p>Oocyst and cyst excretions (K i ) from each human emission category (i) per district were calculated as follows:</ns0:p><ns0:formula xml:id='formula_1'>K 1 = CE u = P u × F cu × O p × (1 - F rem ) K 2 = CE r = P r × F cr × O p × (1 - F rem ) K 3 = DE u = P u × F du × O p K 4 = DifE r = P r × F difr × O p (1) H = ∑ K i (i = 1, …, 4) (2)</ns0:formula><ns0:p>where H is the total oocyst and cyst excretion from the different human emission categories in a district or county (oocysts and cysts/year); P u and P r are the total urban and rural populations in a district or county, respectively; O p is the average oocyst and cyst excretion rate per person per year (oocysts and cysts/year); F cu and F cr are the fractions of the urban and rural populations connected to a sewer, respectively; F du is the fraction of the urban population not connected to a sewer that is considered a direct source; F difr is the fraction of the rural population not using sanitation that is considered a diffuse source; and F rem is the fraction of oocysts and cysts removed by sewage treatment plants (STP). The values assumed for this study are summarized in Tables <ns0:ref type='table'>S1 and S2</ns0:ref>.</ns0:p></ns0:div>
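To make the calculation concrete, the following is a minimal Python sketch of Eq. (1)-(2) as written above. The function name and all numeric inputs are illustrative placeholders rather than the study's data; in particular, the 10% prevalence mentioned in the text is assumed here to be folded into O p.

```python
# A minimal sketch of Eq. (1)-(2); all inputs are illustrative placeholders.
def human_emissions(P_u, P_r, F_cu, F_cr, F_du, F_difr, O_p, F_rem):
    """Annual (oo)cyst excretion per district from the four human emission categories.

    The 10% prevalence mentioned in the text is assumed here to be folded into O_p.
    """
    CE_u = P_u * F_cu * O_p * (1 - F_rem)    # K1: urban connected, after STP removal
    CE_r = P_r * F_cr * O_p * (1 - F_rem)    # K2: rural connected, after STP removal
    DE_u = P_u * F_du * O_p                  # K3: urban direct, untreated
    DifE_r = P_r * F_difr * O_p              # K4: rural diffuse (stored, later spread on land)
    H = CE_u + CE_r + DE_u + DifE_r          # Eq. (2)
    return {"CE_u": CE_u, "CE_r": CE_r, "DE_u": DE_u, "DifE_r": DifE_r, "H": H}

# Example call for a hypothetical district.
h = human_emissions(P_u=1.0e6, P_r=5.0e5, F_cu=0.8, F_cr=0.1,
                    F_du=0.2, F_difr=0.85, O_p=1.0e8, F_rem=0.5)
```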
<ns0:div><ns0:head>Calculating oocyst and cyst excretion in animal manure (A)</ns0:head><ns0:p>Using the number of livestock, breeding days, manure excretion, oocyst and cyst excretion rates, and prevalence rates, the model then estimated livestock oocyst and cyst emissions in 2013. We considered six livestock species: rabbits, pigs, cattle, poultry, sheep, and goats (sheep and goats were combined into a single emission term). Unlike <ns0:ref type='bibr' target='#b28'>Hofstra et al. (2013)</ns0:ref>, our model used species-specific breeding days because each livestock species has a unique number of breeding days and produces different amounts of manure and excretions each year. We also divided the animal populations into four emission categories: (1) connected emissions from livestock receiving manure treatment, (2) direct emissions from livestock directly discharged to surface water, (3) diffuse emissions resulting from using livestock manure as a fertilizer after storage, and (4) livestock manure that was not used for irrigation after storage or for any other use (e.g., burned for fuel) <ns0:ref type='bibr' target='#b65'>(Vermeulen et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b64'>Vermeulen et al., 2017)</ns0:ref>. We assumed that 10% of emissions would be connected emissions <ns0:ref type='bibr' target='#b79'>(Zhang et al., 2017)</ns0:ref>. Illegal and undocumented direct emissions (e.g., dumped manure) were not included in our model due to a lack of data. The results (A) were calculated using Eq. (<ns0:ref type='formula'>3</ns0:ref>) and (<ns0:ref type='formula'>4</ns0:ref>). The five livestock emission terms are represented by the set</ns0:p><ns0:formula xml:id='formula_2'>X j ∈ {X 1 , X 2 , X 3 , X 4 , X 5 }, j ∈ {1, 2, 3, 4, 5}, where X 1 = rabbit emissions, X 2 = pig emissions, X 3 = cattle emissions, X 4 = sheep and goat emissions, and X 5 = poultry emissions.</ns0:formula><ns0:p>We calculated oocyst and cyst excretions (X j ) from each animal species (j) per district using the following equations:</ns0:p><ns0:formula xml:id='formula_3'>X j = N a j × D a j × M a j × O a j × P a j (3) A = ∑ X j (j = 1, …, 5) (4)</ns0:formula><ns0:p>where X j is the oocyst and cyst excretion from livestock species j in a district or county (oocysts and cysts/year), N a j is the number of animals of species j in a district or county, D a j is the number of breeding days for livestock species j (days), M a j is the mean daily manure production of livestock species j (kg•day -1 ), O a j is the oocyst and cyst excretion rate per infected animal of livestock species j in manure ( 10 log (oo)cysts•kg -1 •d -1 ), and P a j is the prevalence of cryptosporidiosis and giardiasis in livestock species j. The values assumed for this study are summarized in Tables <ns0:ref type='table'>S1, S3</ns0:ref>, S4, and S5.</ns0:p></ns0:div>
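A similar hedged sketch of Eq. (3)-(4) is given below. The species list and the per-species tuples (N, D, M, O, P) are hypothetical placeholders standing in for the district-level inputs of Tables S1 and S3-S5, and excretion rates are given in linear rather than log10 units for simplicity.

```python
# A sketch of Eq. (3)-(4); species parameters (N, D, M, O, P) are hypothetical.
def livestock_emissions(species_params):
    """species_params maps a species name to (N_a, D_a, M_a, O_a, P_a) as in Eq. (3)."""
    X = {}
    for species, (N_a, D_a, M_a, O_a, P_a) in species_params.items():
        # animals x breeding days x kg manure/day x (oo)cysts per kg manure (linear units) x prevalence
        X[species] = N_a * D_a * M_a * O_a * P_a
    A = sum(X.values())      # Eq. (4): total over the emission terms
    return X, A

# Illustrative call for one district (made-up numbers, not the study's data).
X, A = livestock_emissions({
    "pigs":    (2.0e5, 180, 4.0, 1.0e5, 0.2),
    "cattle":  (1.0e4, 365, 20.0, 5.0e4, 0.1),
    "poultry": (1.0e6, 55, 0.1, 1.0e5, 0.2),
})
```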
<ns0:div><ns0:head>Calculating oocyst and cyst emissions after manure storage (S)</ns0:head><ns0:p>In China, manure from human diffuse and livestock sources is collected and used for irrigation and fertilizer <ns0:ref type='bibr' target='#b46'>(Liu et al., 2019)</ns0:ref>. The decay of oocysts and cysts in manure that has been stored before being applied as fertilizer in the TGR watershed is temperature-dependent during the storage period <ns0:ref type='bibr' target='#b60'>(Tang et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b64'>Vermeulen et al., 2017)</ns0:ref>. The number of oocysts and cysts in stored manure that has been loaded on land during irrigation was calculated using Eq. (<ns0:ref type='formula'>5</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_4'>S = DifE r × F s,h × F v + A × F s,a × F v (5)</ns0:formula><ns0:p>where S is the number of oocysts and cysts in manure that has been spread on land after storage in a district or county (oocysts and cysts/year); DifE r and A are the numbers of oocysts and cysts in manure from rural residents (human diffuse sources) and livestock, respectively (oocysts and cysts/year); F s,h and F s,a are the proportions of stored manure applied as a fertilizer from rural residents and livestock, respectively (Table <ns0:ref type='table'>S2</ns0:ref>); and F v is the average proportion of oocysts and cysts surviving in the storage system.</ns0:p><ns0:p>The average oocyst and cyst survival rate ( F v ) in the storage system depended on temperature (T) and storage time ( t s ) <ns0:ref type='bibr' target='#b64'>(Vermeulen et al., 2017)</ns0:ref>. The results were calculated using Eq. (<ns0:ref type='formula'>6</ns0:ref>), (<ns0:ref type='formula'>7</ns0:ref>), and (8):</ns0:p><ns0:formula xml:id='formula_5'>K s = ln 10 / (-2.5586 × T + 119.63) (6) V s (t) = e -K s × t (7) F v = ( ∫ 0 t s V s dt ) / t s (8)</ns0:formula><ns0:p>where T is the average annual air temperature (℃) (Table <ns0:ref type='table'>S2</ns0:ref>), K s is a constant based on air temperature, V s is the survival rate of oocysts and cysts over time, and t s is the manure storage time (days) (Table <ns0:ref type='table'>S2</ns0:ref>).</ns0:p></ns0:div> <ns0:div><ns0:head>Calculating oocyst and cyst runoff (R) to the TGR</ns0:head><ns0:p>Oocysts and cysts in stored manure that have been applied to agricultural land as a fertilizer are transported from land to rivers largely via surface runoff <ns0:ref type='bibr' target='#b63'>(Velthof et al., 2009)</ns0:ref>. Our model estimated oocyst and cyst runoff using the amount of manure applied as fertilizer, the maximal surface runoff, and a set of reduction factors <ns0:ref type='bibr' target='#b63'>(Velthof et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b28'>Hofstra et al., 2013)</ns0:ref>. The results (R) were calculated using Eq. (<ns0:ref type='formula'>9</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_6'>R = S × F run,max × f lu × f p × f rc × f s (9)</ns0:formula><ns0:p>where R is the number of oocysts and cysts in manure applied as a fertilizer that reached the TGR via surface runoff in a district or county (oocysts and cysts/year), S is the number of oocysts and cysts in manure that was spread on land after storage (oocysts and cysts/year), F run,max is the fraction of maximum surface runoff across different slope classes, f lu is the reduction factor for land use, f p is the reduction factor for average annual precipitation, f rc is the reduction factor for rock depth, and f s is the reduction factor for soil type.
The values assumed for this study are summarized in Table <ns0:ref type='table'>S2</ns0:ref>.</ns0:p></ns0:div>
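The storage-decay and runoff steps can be sketched as follows. This assumes Eq. (6) denotes the decay constant K s = ln(10)/(-2.5586 × T + 119.63), i.e., one log10 reduction per (-2.5586 × T + 119.63) days, and uses the analytic value of the integral in Eq. (8); all numeric inputs are placeholders rather than the study's parameter values.

```python
# A sketch of Eq. (5)-(9); inputs and reduction factors are placeholders.
import math

def storage_survival(T, t_s):
    """Average fraction of (oo)cysts surviving a storage period of t_s days (Eq. 6-8)."""
    K_s = math.log(10) / (-2.5586 * T + 119.63)         # Eq. (6), per day
    # F_v = (1/t_s) * integral_0^{t_s} exp(-K_s * t) dt, solved analytically:
    return (1.0 - math.exp(-K_s * t_s)) / (K_s * t_s)    # Eq. (8)

def runoff_to_river(DifE_r, A, F_sh, F_sa, T, t_s, F_run_max, f_lu, f_p, f_rc, f_s):
    F_v = storage_survival(T, t_s)
    S = DifE_r * F_sh * F_v + A * F_sa * F_v             # Eq. (5): (oo)cysts spread on land
    return S * F_run_max * f_lu * f_p * f_rc * f_s       # Eq. (9): fraction reaching the TGR

# Example: 18 degC mean air temperature, 60-day storage, illustrative reduction factors.
R = runoff_to_river(DifE_r=1.0e13, A=5.0e13, F_sh=0.6, F_sa=0.8, T=18.0, t_s=60,
                    F_run_max=0.3, f_lu=0.5, f_p=0.8, f_rc=0.9, f_s=0.7)
```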
<ns0:div><ns0:head>Calculating total oocyst and cyst emissions (E)</ns0:head><ns0:p>Our model defined total oocyst and cyst emissions as the annual number of oocysts and cysts per district in the TGR. The results (E) were calculated using Eq. (<ns0:ref type='formula'>10</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_7'>E = CE u + CE r + DE u + R (10)</ns0:formula><ns0:p>where E is the total oocyst and cyst emissions from humans and animals in a district or county (oocysts and cysts/year), CE u is the oocyst and cyst emissions in the TGR by urban populations connected to STP in a district or county, CE r is the oocyst and cyst emissions in the TGR by rural populations connected to STP in a district or county, DE u is the direct oocyst and cyst emissions in the TGR by urban populations in a district or county, and R is the oocyst and cyst emissions in the TGR from human and livestock manure that has been applied as a fertilizer via runoff in a district or county.</ns0:p></ns0:div>
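Eq. (10) simply assembles the terms computed above. Reusing the functions from the earlier sketches (human_emissions, livestock_emissions, runoff_to_river), an illustrative district-level total might be computed as follows; all inputs remain hypothetical.

```python
# Assembling Eq. (10) from the previously sketched functions; numbers are illustrative.
h = human_emissions(P_u=1.0e6, P_r=5.0e5, F_cu=0.8, F_cr=0.1,
                    F_du=0.2, F_difr=0.85, O_p=1.0e8, F_rem=0.5)
_, A = livestock_emissions({"pigs": (2.0e5, 180, 4.0, 1.0e5, 0.2)})
R = runoff_to_river(DifE_r=h["DifE_r"], A=A, F_sh=0.6, F_sa=0.8, T=18.0, t_s=60,
                    F_run_max=0.3, f_lu=0.5, f_p=0.8, f_rc=0.9, f_s=0.7)
E = h["CE_u"] + h["CE_r"] + h["DE_u"] + R   # Eq. (10): total district emission per year
```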
<ns0:div><ns0:head>Sensitivity analysis</ns0:head><ns0:p>We tested the model's sensitivity to changes in its input parameters using a nominal range sensitivity analysis (NRSA). Each input parameter was varied individually between a reasonable lower and upper value around its base-model value <ns0:ref type='bibr' target='#b65'>(Vermeulen et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b64'>Vermeulen et al., 2017)</ns0:ref>. We selected the NRSA because it provides quantitative insight into the individual impact of each parameter on the model's outcome. Tables <ns0:ref type='table'>S6, S7</ns0:ref>, and S8 present the sensitivity analysis input variables.</ns0:p></ns0:div>
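A nominal range sensitivity analysis can be sketched as below: each parameter is set to its low and high value in turn while the others stay at their base values, and the resulting change in output is expressed relative to the base run. The parameter ranges shown are illustrative rather than those of Tables S6-S8, and the human-emission function from the earlier sketch stands in for the full model.

```python
# A sketch of the NRSA: vary one parameter at a time and report the output
# relative to the base run. Ranges and base values are illustrative only.
def model_output(p):
    # a reduced stand-in for the full model: human emissions only
    return human_emissions(p["P_u"], p["P_r"], p["F_cu"], p["F_cr"],
                           p["F_du"], p["F_difr"], p["O_p"], p["F_rem"])["H"]

base = {"P_u": 1.0e6, "P_r": 5.0e5, "F_cu": 0.8, "F_cr": 0.1,
        "F_du": 0.2, "F_difr": 0.85, "O_p": 1.0e8, "F_rem": 0.5}
ranges = {"O_p": (1.0e7, 1.0e9), "F_rem": (0.1, 0.95), "P_u": (8.0e5, 1.2e6)}

E_base = model_output(base)
for name, (low, high) in ranges.items():
    for value in (low, high):
        factor = model_output({**base, name: value}) / E_base
        print(f"{name} = {value:g}: factor {factor:.2f} relative to base")
```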
<ns0:div><ns0:head>Predicting total oocyst and cyst emissions for 2050</ns0:head><ns0:p>To explore the impact of future population, urbanization, and sanitation changes on human and animal Cryptosporidium and Giardia emissions in the TGR, we divided the emissions into urban resident, rural resident, and livestock categories to predict the total oocyst and cyst emissions for 2050 based on three scenarios. China's projected population, urbanization, and livestock production data for 2050 can be found in the Shared Socioeconomic Pathways (SSPs) database <ns0:ref type='bibr' target='#b80'>(Zhao, 2018;</ns0:ref><ns0:ref type='bibr' target='#b32'>Huang et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b14'>Chen et al., 2020)</ns0:ref> (https://tntcat.iiasa.ac.at/SspDb). We based Scenario 1 on SSP1, which is entitled 'Sustainability -Taking the green road' and emphasizes sustainability, well-being, and equity. In this scenario, there is moderate population change and well-planned urbanization <ns0:ref type='bibr'>(O'Neill, Kriegle, & Ebi, 2015;</ns0:ref><ns0:ref type='bibr' target='#b36'>Jiang & O'Neill, 2017;</ns0:ref><ns0:ref type='bibr' target='#b80'>Zhao, 2018;</ns0:ref><ns0:ref type='bibr' target='#b32'>Huang et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b14'>Chen et al., 2020)</ns0:ref>. Scenario 2 was based on SSP3, which is entitled 'Regional rivalry -A rocky road' and emphasizes regional progress. In this scenario, China's population change is significant and urbanization is unplanned <ns0:ref type='bibr'>(O'Neill, Kriegle, & Ebi, 2015;</ns0:ref><ns0:ref type='bibr' target='#b36'>Jiang & O'Neill, 2017;</ns0:ref><ns0:ref type='bibr' target='#b80'>Zhao, 2018;</ns0:ref><ns0:ref type='bibr' target='#b32'>Huang et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b14'>Chen et al., 2020)</ns0:ref>. To emphasize the importance of wastewater and manure treatment, we created Scenario 3 as a variation of Scenario 1 based on <ns0:ref type='bibr' target='#b29'>Hofstra & Vermeulen (2016)</ns0:ref>. This scenario has the same population, urbanization, and sanitation changes as Scenario 1, but with the insufficient sewage and manure treatments from 2013. Since there were no available data for individual livestock species, we assumed that all livestock species will grow by the same percentage noted in the SSPs database. We also assumed that there will be changes only in population and livestock numbers, not in any other parameters (e.g., oocyst and cyst excretion rates and prevalence) <ns0:ref type='bibr' target='#b34'>(Iqbal, Islam, & Hofstra, 2019)</ns0:ref>. We based our sanitation, wastewater, and manure treatment predictions for these three scenarios on previous literature reviews <ns0:ref type='bibr' target='#b29'>(Hofstra & Vermeulen, 2016;</ns0:ref><ns0:ref type='bibr' target='#b34'>Iqbal, Islam, & Hofstra, 2019)</ns0:ref>. Table <ns0:ref type='table'>S9</ns0:ref> provides an overview of the scenarios.</ns0:p></ns0:div>
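As a rough illustration of the scenario calculations, the 2013 emissions by source can be scaled with SSP-style growth factors. The 2013 totals and growth factors below are placeholders; in the full model, the scenario-specific sanitation, wastewater, and manure treatment assumptions of Table S9 would be applied by re-running Eq. (1)-(9) with the 2050 parameter values.

```python
# A rough sketch of the scenario scaling; all numbers are placeholders, not SSP values.
def scenario_2050(emissions_2013, growth):
    """Scale 2013 emissions by source with SSP-style growth factors.

    Treatment and sanitation changes are not applied here; they would enter by
    re-running Eq. (1)-(9) with the 2050 values of F_cu, F_rem, manure treatment, etc.
    """
    return {source: value * growth.get(source, 1.0)
            for source, value in emissions_2013.items()}

projected = scenario_2050(
    emissions_2013={"urban": 1.1e15, "rural": 1.8e14, "livestock": 3.4e14},
    growth={"urban": 1.2, "rural": 0.6, "livestock": 1.3},
)
total_2050 = sum(projected.values())
```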
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Oocyst and cyst emissions in the TGR in 2013</ns0:head><ns0:p>Cryptosporidium oocyst and Giardia cyst emissions from humans, rabbits, pigs, cattle, sheep, goats, and poultry found in the TGR in 2013 are shown in Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>. Chongqing had a total of 1.6×10 15 oocysts/year and 2.1×10 15 cysts/year of Cryptosporidium and Giardia emissions. Human Cryptosporidium and Giardia emissions contained a total of 1.2×10 15 oocysts/year and 2.0×10 15 cysts/year, and animal emissions had a total of 3.4×10 14 oocysts/year and 1.5×10 14 cysts/year. Humans and animals were responsible for 42% and 10% of total emissions in the TGR, respectively. Humans were responsible for 78% of oocyst emissions, followed by 14% from pigs, and 8% from poultry. Humans were responsible for 93% of cyst emissions, followed by 6% from pigs, and 0.5% from cattle. Ultimately, we found that humans were the dominant source of oocysts and cysts, followed by pigs, poultry, and cattle.</ns0:p></ns0:div>
<ns0:div><ns0:head>Oocyst and cyst emission sanitation types</ns0:head><ns0:p>We observed clear differences in sanitation types (connected emissions, direct emissions, diffuse emissions, and non-source) across the human, livestock, urban, and rural populations (Fig. <ns0:ref type='figure'>4</ns0:ref>). We found that 49% of the human population was connected to a sewer, 36% contributed diffuse sources, 13% contributed direct sources, and 2% was non-source. In livestock populations, only 10% were connected to a sewer (manure treatment), 80% produced diffuse emissions, and 10% were non-source. We divided the human and livestock emissions by region: urban areas (made up of urban residents) and rural areas (made up of rural residents and all livestock). In urban areas, emissions connected to a sewer were dominant (78%), followed by direct sources (22%). In rural areas, diffuse sources produced approximately 85% of total emissions, followed by connected sources (9%) and non-source (6%).</ns0:p></ns0:div>
<ns0:div><ns0:head>Spatial distribution of oocyst and cyst emissions in the TGR in 2013</ns0:head><ns0:p>Our model produced a spatial distribution of Cryptosporidium and Giardia emissions in the TGR for each Chongqing district or county from 2013 (Fig. <ns0:ref type='figure' target='#fig_3'>5</ns0:ref>). The total human Cryptosporidium emissions ranged from 5.4×10 12 to 7.4×10 13 oocysts/district (Fig. <ns0:ref type='figure' target='#fig_3'>5A</ns0:ref>) and the total human Giardia emissions ranged from 8.5×10 12 to 1.2×10 14 cysts/district (Fig. <ns0:ref type='figure' target='#fig_3'>5B</ns0:ref>). Overall, the emission spatial differences depended on population density and urbanization rate. The largest emissions were from the densely-populated Yubei, Wanzhou, and Jiulongpo districts.</ns0:p><ns0:p>The total animal source emissions ranged from 0 to 1.8×10 13 oocysts/district (Fig. <ns0:ref type='figure' target='#fig_3'>5C</ns0:ref>) and 0 to 7.2×10 13 cysts/district (Fig. <ns0:ref type='figure' target='#fig_3'>5D</ns0:ref>) for Cryptosporidium and Giardia, respectively. We based our results on the number of animals, manure production, manure treatment and runoff, and invariant oocyst and cyst emissions from each animal category over one year. The lowest emissions were observed in areas with low animal populations, such as the downtown districts of Yuzhong, Dadukou, and Nanan.</ns0:p><ns0:p>The total Cryptosporidium and Giardia emissions ranged from 1.0×10 13 to 8.1×10 13 oocysts/district and 1.0×10 13 to 1.2×10 14 cysts/district, respectively (Fig. <ns0:ref type='figure' target='#fig_3'>5E and F</ns0:ref>). Total human emissions were approximately six-fold higher than animal emissions and played a decisive role in total emission distribution. We found slightly more cysts than oocysts in total emissions and human emissions. In contrast, there were slightly more oocysts than cysts in animal emissions. The highest total emissions were found in areas with large animal and human populations, such as the main districts of Wanzhou, Yubei, and Hechuan.</ns0:p><ns0:p>Human Cryptosporidium and Giardia emissions from urban and rural areas can be found in Figure <ns0:ref type='figure'>6</ns0:ref>. In urban areas, Cryptosporidium and Giardia emissions ranged from 3.5×10 12 to 7.0×10 13 oocysts/district and 5.5×10 12 to 1.1×10 14 cysts/district, respectively (Fig. <ns0:ref type='figure'>6A and C</ns0:ref>). In rural areas, the emissions ranged from 0 to 9.8×10 12 oocysts/district and 0 to 1.6×10 13 cysts/district (Fig. <ns0:ref type='figure'>6B and D</ns0:ref>). Rural emissions were spread over much larger areas than urban emissions. Human emissions in urban areas were six-fold higher than in rural areas and played a crucial role in total human emission distribution.</ns0:p></ns0:div>
<ns0:div><ns0:head>Sensitivity analysis</ns0:head><ns0:p>Since there were limited observational data on Cryptosporidium and Giardia, we performed a sensitivity analysis to verify the GloWPa-Crypto TGR model's performance. The sensitivity analysis (Tables <ns0:ref type='table'>S6 -S8</ns0:ref>) showed that the model was the most sensitive to changes in excretion rate (shown for 1 log unit change in excretion rates), particularly the excretion rates of humans (factor 8.03), pigs (factor 2.22), and poultry (factor 1.74). The model was more sensitive to prevalence changes in humans, pigs, and poultry (factors 1.39, 1.14, and 1.08, respectively). The results confirmed that humans, pigs, and poultry were the dominant sources of oocyst and cyst emissions. Besides excretion rate and prevalence, the model was most sensitive to changes in the amount of runoff, STP oocyst and cyst removal efficiencies, the amount of connected emissions, human population, and manure storage time (factors 1.30, 1.23, 1.21, 1.16, and 1.11, respectively), as these parameters affected oocyst and cyst survival and emissions. The model was not very sensitive to changes in the amount of rural resident feces applied as fertilizer, rural wastewater treatment, and the excretion rates and prevalence of animal species that did not contribute much to the total oocyst and cyst emissions (e.g., cattle, rabbits, sheep, and goats).</ns0:p></ns0:div>
<ns0:div><ns0:head>Scenario analysis: the effect of population, urbanization, and sanitation changes in 2050</ns0:head><ns0:p>In Scenario 1, moderate population change, planned urbanization, and strong improvements in sanitation, wastewater, and manure treatment would decrease the total emissions in the TGR to 9.5×10 14 oocysts/year and 1.2 ×10 15 cysts/year by 2050 (Fig. <ns0:ref type='figure'>7</ns0:ref>). This would be a decrease of approximately 40% in Cryptosporidium emissions and 44% in Giardia emissions relative to 2013. Emissions from all three sources would decrease, with a notable 61% decrease for rural residents (Table <ns0:ref type='table'>S10</ns0:ref>). Figure <ns0:ref type='figure'>8B and E</ns0:ref> and Figure <ns0:ref type='figure'>9B and E</ns0:ref> show the decrease across all regions in Scenario 1. The largest decline would be found in the Yubei and Jiulongpo districts, where assumed urbanization rates would increase to 100% and 99%, respectively, and 99% of domestic sewage would obtain secondary or tertiary treatment. Scenario 1 also shows changes in the contributions to total emissions. Urban residents would be responsible for 64% and 81% of Cryptosporidium and Giardia emissions, respectively, which would be a 3% decrease and a 1% increase from 2013. Rural Cryptosporidium and Giardia emissions would decrease from 11% to 7% and from 13% to 9%, respectively. Livestock Cryptosporidium and Giardia emissions would increase from 22% to 29% and from 7% to 10%, respectively.</ns0:p><ns0:p>In Scenario 2, Cryptosporidium and Giardia emissions are expected to increase to 1.9 ×10 15 oocysts/year and 2.4 ×10 15 cysts/year by 2050 (Fig. <ns0:ref type='figure'>7</ns0:ref>), which would be 19% and 12% growth, respectively, from 2013. Emissions from urban residents and livestock would increase (Table <ns0:ref type='table'>S10</ns0:ref>) due to strong population growth, unplanned urbanization, limited sanitation, and expanded livestock production practices in which untreated manure used as fertilizer is emitted into surface water. Emissions from rural residents would decrease 8% because the rate of urbanization would increase while the same sanitation practices from 2013 are used by that smaller rural population. Figure <ns0:ref type='figure'>8C</ns0:ref> and F and Figure <ns0:ref type='figure'>9C</ns0:ref> and F show that total emissions would increase in all regions (particularly the Wanzhou and Yubei districts) because of strong population growth and limited environmental regulation. In Scenario 2, urban residents would account for 63% and 79%, rural residents for 9% and 11%, and livestock for 29% and 10% of Cryptosporidium and Giardia emissions, respectively, by 2050. Scenario 3 has the same population, urbanization, and sanitation changes as Scenario 1, but with limited wastewater and manure treatment facilities. Scenario 3 has the highest Cryptosporidium and Giardia emissions of all the scenarios. Total emissions would increase to 2.0 ×10 15 oocysts/year and 2.7 ×10 15 cysts/year, with 29% and 27% growth compared to 2013, respectively (Fig. <ns0:ref type='figure'>7</ns0:ref>). Livestock would see the most growth in emissions (an increase of 42%) (Table <ns0:ref type='table'>S10</ns0:ref>).
Figure <ns0:ref type='figure'>8D</ns0:ref> and G and Figure <ns0:ref type='figure'>9D and G</ns0:ref> show an increase in emissions across all regions, except in regions with assumed urbanization rates of 100% and where 50% of emissions obtain secondary treatment (such as the Yuzhong and Shapingba districts). This result highlights the importance of wastewater and manure treatment. Connecting populations to sewers without appropriate sewage treatment introduces more waterborne pathogens to surface water, affecting water quality.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The increase in Cryptosporidium and Giardia surface water pollution in China is traced primarily to human and animal feces. China has one of the largest amounts of Cryptosporidium emissions from feces (10 16 oocysts/year; <ns0:ref type='bibr' target='#b28'>Hofstra et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b29'>Hofstra & Vermeulen, 2016;</ns0:ref><ns0:ref type='bibr' target='#b66'>Vermeulen et al., 2019)</ns0:ref>, but Cryptosporidium and Giardia emissions from human and animal feces in surface water across different Chinese provinces or regions have not been closely studied. The TGR, developed by the China Yangtze Three Gorges Project as one of the largest freshwater resources in the world, suffers from Cryptosporidium and Giardia pollution <ns0:ref type='bibr'>(Xiao et al., 2013ab, Liu et al., 2019)</ns0:ref>. Using data from 2013, we built a GloWPa-Crypto model to estimate Cryptosporidium spp. and G. duodenalis emissions from human and livestock in the TGR. We also used scenario analyses to predict the effects of sanitation, urbanization, and population changes on oocyst and cyst emissions for 2050. Our study can be used to better understand the risk of water contamination in the TGR and to ensure that the reservoir is adequately protected and treated. This knowledge can also contribute to the implementation of the Water Pollution Control Action Plan (i.e., the Ten-point Water Plan), which was sanctioned by the Chinese government to prevent and control water pollution <ns0:ref type='bibr' target='#b71'>(Wu et al., 2016)</ns0:ref>. Additionally, our results can serve as an example for other studies on important waterborne pathogens from fecal wastes and wastewater, particularly in developing countries.</ns0:p><ns0:p>Using the GloWPa-Crypto model, we estimated that the total Cryptosporidium and Giardia emissions from human and livestock feces in Chongqing in 2013 were 1.6×10 15 oocysts/year and 2.1×10 15 cysts/year, respectively. Using the total emissions from the two protozoa, the TGR's hydrological information (such as water temperature, solar radiation, and river depth; Table <ns0:ref type='table'>S11</ns0:ref>), and a hydrological model <ns0:ref type='bibr' target='#b66'>(Vermeulen et al., 2019)</ns0:ref>, we calculated the mean Cryptosporidium and Giardia concentrations in the TGR in 2013 to be 38 oocysts/10 L and 51 cysts/10 L, respectively. <ns0:ref type='bibr' target='#b73'>Xiao et al. (2013a)</ns0:ref> reported that Cryptosporidium oocysts and Giardia cysts are widely distributed in the TGR, with concentrations ranging from 0 to 28.8 oocysts/10 L for Cryptosporidium and 0 to 32.13 cysts/10 L for Giardia in the Yangtze River's mainstream and the backwater areas of tributaries and cities. <ns0:ref type='bibr' target='#b46'>Liu et al. (2019)</ns0:ref> used a calibrated hydrological and sediment transport model to investigate the population, livestock, agriculture, and wastewater treatment plants in the Daning River watershed, a small tributary of the TGR in Chongqing, and found Cryptosporidium concentrations of 0.7-33.4 oocysts/10 L. The results from our model were similar to the results found in other studies. 
Because of the adsorption, deposition, inactivation, and recovery efficiencies of Cryptosporidium and Giardia in water, the oocyst and cyst concentrations in the surface waters of streams and rivers were significantly reduced <ns0:ref type='bibr' target='#b4'>(Antenucci, Brookes, & Hipsey, 2005;</ns0:ref><ns0:ref type='bibr' target='#b58'>Searcy et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b66'>Vermeulen et al., 2019)</ns0:ref>. Therefore, the validity of our model was confirmed.</ns0:p><ns0:p>Human and animal feces are main sources of Cryptosporidium and Giardia emissions in surface water. In this study, the majority of human emissions were from densely populated urban areas (Fig. <ns0:ref type='figure'>6</ns0:ref>). In those urban areas, a fraction of human emissions were not connected to sewers and sewage was not efficiently treated. We found high concentrations of Cryptosporidium (6.01-16.3 oocysts/10 L) and Giardia (59.52-88.21 cysts/10 L) in the effluent from wastewater treatment plants in the TGR area <ns0:ref type='bibr'>(Xiao et al., 2013ab)</ns0:ref>, and 16.5 ×10 8 tons of sewage were discharged into the TGR, mainly in urban areas <ns0:ref type='bibr' target='#b73'>(Xiao et al., 2013a)</ns0:ref>. In rural areas, only 9% of the population was connected to sewage systems and a large portion of untreated rural sewage was used as potential agricultural irrigation water <ns0:ref type='bibr' target='#b30'>(Hou, Wang, & Zhao, 2012)</ns0:ref>. Therefore, we assumed that large amounts of raw sewage were used as a diffuse source that was dumped as a fertilizer after storage into the farmland, where it could then enter tributaries and the mainstream of the Yangtze River via runoff.</ns0:p><ns0:p>We found a lower amount of animal Cryptosporidium and Giardia emissions than human emissions because only 10% of diffuse emissions reached the TGR through runoff. Unlike the original model created by <ns0:ref type='bibr' target='#b28'>Hofstra et al. (2013)</ns0:ref>, which estimated livestock oocyst and cyst emissions in surface water, we assumed that a portion of the manure received treatment during storage and before it was applied to soil <ns0:ref type='bibr' target='#b3'>(An et al., 2017)</ns0:ref>. Recent studies also reported that oocyst emissions on land were associated with mesophilic or thermophilic anaerobic digestion during manure treatment, and could be reduced by several log units <ns0:ref type='bibr' target='#b33'>(Hutchison et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b64'>Vermeulen et al., 2017)</ns0:ref>. Additionally, animal emissions are still important. The total global Cryptosporidium spp. emissions from livestock manure are up to 3.2 × 10 23 oocysts/year <ns0:ref type='bibr' target='#b64'>(Vermeulen et al., 2017)</ns0:ref>. In 2010, China had a total of 1.9 billion tons of livestock manure, 227 million tons of livestock manure pollution, and 1.84 tons/hectare of arable land of livestock manure pollution <ns0:ref type='bibr' target='#b52'>(Qiu et al., 2013)</ns0:ref>. 
Livestock manure discharged into the environment without appropriate processing is a serious source of pollution in soil and water systems <ns0:ref type='bibr' target='#b62'>(Tian, 2012;</ns0:ref><ns0:ref type='bibr' target='#b3'>An et al., 2017)</ns0:ref>.</ns0:p><ns0:p>Our sensitivity analysis found that the model we used to calculate oocyst and cyst emissions was most sensitive to oocyst and cyst excretion rates, similar to the results from the GloWPa-Crypto L1 model <ns0:ref type='bibr' target='#b64'>(Vermeulen et al., 2017)</ns0:ref>. More detailed information on the burden of cryptosporidiosis and giardiasis and the excretion rates of infected people in Chongqing would improve the model. Additionally, our sensitivity analysis highlighted the significance of runoff. <ns0:ref type='bibr' target='#b46'>Liu et al. (2019)</ns0:ref> found that the combined effect of fertilization and runoff played a very important role in oocyst concentrations in rivers. Future studies should consider the effect of runoff along with the timing of fertilization. The model was also sensitive to wastewater treatment and manure management. Scenario 3 proposed what would happen if population, urbanization, and sanitation changed similarly to Scenario 1, but without advancements in wastewater treatment and manure management. The results of Scenario 3 indicated that improving urbanization and sanitation with the same population could still result in an increase in surface water emissions if the sewage and manure management systems are inadequate. The comparison of Scenarios 1 and 2 showed that oocyst and cyst emissions decrease when there are significant improvements in urbanization, sanitation, wastewater treatment, and manure management, along with appropriate population growth. The effects of population, urbanization, sanitation, manure management, and wastewater treatment on oocysts and cysts should be studied in more detail in order to reduce emissions.</ns0:p><ns0:p>Previous studies have used the GloWPa-Crypto model to estimate human and livestock Cryptosporidium emissions across many countries <ns0:ref type='bibr' target='#b28'>(Hofstra et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b29'>Hofstra & Vermeulen, 2016;</ns0:ref><ns0:ref type='bibr' target='#b64'>Vermeulen et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b66'>Vermeulen et al., 2019)</ns0:ref>, but none of these studies included Giardia. In our study, we used the GloWPa-Crypto model to estimate Cryptosporidium spp. and G. duodenalis emissions from humans and animals in the Chongqing area of the TGR. Unfortunately, the Giardia emissions from rabbits, sheep, and goats were not estimated because there are currently no data on their excretion rates. Earlier studies could not detect Giardia in rabbits, sheep, or goats <ns0:ref type='bibr' target='#b18'>(Ferguson et al., 2007)</ns0:ref>.
Giardia was recently found in sheep and rabbits in northwest and central China, but the cyst excretion rates per kg of manure were indeterminate <ns0:ref type='bibr' target='#b67'>(Wang et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b38'>Jin et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b35'>Jian et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b37'>Jiang et al., 2018)</ns0:ref>; the Giardia emissions from these species may therefore have been underestimated.</ns0:p><ns0:p>To our knowledge, the GloWPa-Crypto model cannot be validated through direct comparison with measured surface water concentrations because this method ignores certain factors, such as infiltration pathways and transport via soils and shallow groundwater to surface water <ns0:ref type='bibr' target='#b8'>(Bogena et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b64'>Vermeulen et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b69'>Watson et al., 2018)</ns0:ref>, the overflow of sewage treatment plants during flood periods <ns0:ref type='bibr' target='#b76'>(Xiao et al., 2017)</ns0:ref>, traditional dispersive small-scale peasant production <ns0:ref type='bibr' target='#b41'>(Li et al., 2016)</ns0:ref>, and the excretion of wildlife <ns0:ref type='bibr' target='#b5'>(Atwill, Phillips, & Rulofson, 2003)</ns0:ref>. Pathogen loading data for these factors are not readily available. Despite these shortcomings, we used the GloWPa-Crypto model to further study environmental pathways, emissions in the TGR, and sources and scenarios for improved management.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This study is the first to explore the spatial emissions of Cryptosporidium spp. and G. duodenalis from human and livestock feces in the TGR, and to identify the main sources of this pollution. Total emissions in Chongqing were large (1.6×10 15 oocysts/year and 2.1×10 15 cysts/year for Cryptosporidium and Giardia, respectively), indicating the need for effective pollution countermeasures. The total point source emissions from wastewater containing human excretion in urban areas were greater than the total nonpoint source emissions from human and livestock production in rural areas by a factor of 2.0 for oocysts and 3.9 for cysts. The emissions from urban areas were mainly from domestic wastewater in densely populated areas, while rural emissions were mainly from livestock feces in concentrated animal production areas. Sewage from cities and livestock feces from rural areas are therefore of particular concern in the TGR area.</ns0:p><ns0:p>The GloWPa-Crypto model was most sensitive to oocyst and cyst excretion rates, followed by prevalence and runoff. If there are significant population, urbanization, and sanitation management changes by 2050, the total Cryptosporidium and Giardia emissions in the TGR would decrease by 42% according to Scenario 1, increase by 15% in Scenario 2, or increase by 28% in Scenario 3. Our scenario analyses show that changes in population, urbanization, sanitation, wastewater management, and manure treatment should be taken into account when trying to improve water quality. The GloWPa-Crypto model can be further refined by including direct rural resident emissions, direct animal emissions, emissions from sub-surface runoff, and a more in-depth calculation of concentrations and human health risks using a hydrological model and scenario analysis. Our model can contribute to further understanding of environmental pathways, the risks of Cryptosporidium and Giardia pollution, and the design of effective prevention and control strategies that can reduce outbreaks of waterborne diseases in the TGR and other similar watersheds.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 Total</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5</ns0:figDesc></ns0:figure>
</ns0:body>
" | "Dear editor and reviewers:
We are very pleased to have this opportunity to revise our manuscript, and we thank the editor and the reviewers for their good advice. We have very carefully prepared and revised our manuscript according to the instructions for authors and the advice received from the editor and reviewers. We hope that the revised manuscript will be accepted. Our list of responses to the comments is below.
Responses to the comments of editor
Please ensure that the supplemental data set files are easy and straightforward to access. Please state the software needed to open these files.
Responses: We used the Matrix Laboratory (MATLAB R2016a) to run supplemental data set files. The software has been stated in the revised supplemental data set files.
Please use the name “GloWPa” in reference to the model that you use.
Responses: The model name has been revised as “GloWPa-Crypto” in the revised manuscript.
There appear to be many typos and grammatical errors throughout. Please work on this carefully and enlist the help of another reader to improve the writing quality.
Responses: The languages have been checked by PeerJ editor Mariko Terasaki. We have very carefully revised our manuscript.
Please provide all of the missing references pointed out by the reviewers.
Responses: The missing references have been provided in the revised manuscript.
There are many comments by both reviewers that ask for more information on specific issues; please address these.
Responses: We thank both reviewers for their good advice. We have very carefully addressed the comments and revised our manuscript according to the advice received from both reviewers.
Responses to the comments of reviewer Lucie Vermeulen
Are calculations really done on a 0.5 degree grid basis? Because the supporting information gives the animal and human population data per district. If the calculations are really done on a grid basis, it should be explained how the results were aggregated to districts (figure 5 and 6).
Responses: The calculations were done per district. Please see lines 139 in the revised manuscript.
Furthermore, a 0.5 degree grid is very coarse for a model for an area of this size (the area would only be about 6 x 10 grid cells, according to figure 5 and 6). The original Hofstra et al. model was a global scale model; then a 0.5 degree grid is justifiable, but if more detailed data for a smaller region is available then it is not justifiable in my view, a smaller grid should be chosen.
Responses: The data on human and livestock populations were derived from the Chongqing Statistic Bureau, China. The data from the statistical yearbooks have been aggregated per district, so the calculations were done per district.
It is unclear from the paper and the supporting information where the data on urban / rural populations come from, and how these are spatially used in the model.
Responses: The data on urban/rural populations were derived from the Chongqing Statistic Bureau, China. Please see Table S1 in the revised manuscript. We have spatially distinguished urban and rural populations. Please see figure 6 in the revised manuscript.
It is unclear from the paper what data were taken from the original Hofstra 2013 study, and what data are local data for China. For some this is in the supporting information, but not all.
Responses: We have reorganized the data in the supporting information. Please see Table S2 in the revised supporting information.
Why was the year 2013 chosen for the study? Should be explained.
Responses: We used the emission data from 2013 for our model since the records for human and livestock populations from that year were the most complete. Please see lines 128 in the revised manuscript.
Line 199: first mention of calculations on a grid basis, this should be introduced earlier.
Responses: We calculated per district. Please see lines 201 in the revised manuscript.
Line 203: What are ‘breeding days’ and why is this variable included in the model? This variable was not in the original Hofstra 2013 model that this research is based on.
Responses: Each livestock species has a unique number of breeding days and produces different amounts of manure and excretions each year. Please see lines 185-187 in the revised manuscript.
Line 208: Is all manure treated in China? Because this is not standard practice in other places around the world, some more information on this would be helpful.
Responses: A portion of the manure is collected and applied as a fertilizer in rural areas of China, and oocysts and cysts undergo temperature-dependent decay during the storage period. Please see lines 210-227 in the revised manuscript.
Line 230-235: the authors have not thought of this approach themselves, but taken this approach from Hofstra et al. 2013, this should be credited.
Responses: We have explained the approach taken from Hofstra et al. 2016. Please see lines 275-277 in the revised manuscript.
Line 399: what does ‘disposed harmlessly’ mean? This assumption should be explained (in the methods section).
Responses: We have explained the meaning of 'disposed of harmlessly'. Please see lines 188 in the revised manuscript.
Line 399-400: What assumption did you make for your calculations about the percentage of manure applied to land? Should be explained in methods section.
Responses: We have explained the literature about the percentage of manure applied to land. Please see lines 215-221 in the revised manuscript and Table S2 in the revised supporting information.
Line 408: did you use this number of 1.84 tons in your calculation? If yes, should be explained in your methods section. If not, why not, and what number did you use?
Responses: No, we used the ‘Maj’ in equation 3 to calculate the livestock emissions. Please see lines 210 in the revised manuscript and Table S4 in the revised supporting information.
Line 429 – 431: But if these four studies provide in China, and in fact you provide these data in Table S6, why were these data not incorporated in your model (0 in Table S4).
Responses: Giardia was recently found in sheep and rabbits in northwestern and central China. However, the cyst excretion rates per kg manure in sheep and rabbits were unclear. Please see lines 481-486 in the revised manuscript and Tables S4 and S5 in the revised supporting information.
Table S1: the livestock data should be given for the individual animal species instead of livestock total. Which are presumably the different animal species added up? Without the separated data, this research is not reproducible.
Responses: We have modified the data in Table S1. Please see Tables S1 in the revised supporting information.
Table S1: Where do these data come from? Reference?
Responses: We have explained where these data come from. Please see Table S1 in the revised supporting information.
Table S1: The human population density and livestock density data in Table S1 are given per district. However, the original model by Hofstra et al. 2013 used gridded human and livestock population density data (on a 0.5 x 0.5 degree grid). It is unclear from the paper whether the calculations were done on a grid basis or district basis. The results are only shown in maps on a district basis, but the text does mention grids here and there (e.g. line 199 mentions explicitly that calculations are done per grid cell). As is, this research is not reproducible.
Responses: The calculations were done per district. Please see lines 201 in the revised manuscript.
What are the units of Table S2 in the supporting information?
Responses: We have added the units. Please see Tables S2 in the revised supporting information.
The results are presented in the text with four significant digits. This suggests an unrealistic amount of certainty that is not justifiable with this modelling approach.
Responses: We revised the results in the text with two significant digits. Please see the revised manuscript.
The population that is classified as non-source means a population that is connected to a sanitation type that produces no emissions to the environment, in the original Hofstra model. However, the text (lines 257 – 269) suggests that emissions are calculated for non-sources, this is very confusing.
Responses: Thanks! Non-sources have been revised according to the original Hofstra model. Please see lines 301-310 in the revised manuscript.
Line 280: are these (oo)cysts per grid cell, per year, per district? No unit given.
Responses: We have added the units. Please see lines 314 in the revised manuscript.
Line 301-302: ‘the data were limited by subjective and objective factors’. Unclear what is meant here.
Responses: We have deleted it. Please see lines 341 in the revised manuscript.
Interpretation of what the outcome of the sensitivity analysis means is lacking.
Responses: We have explained the meaning of the outcome of the sensitivity analysis. Please see lines 341-354 in the revised manuscript.
Line 321-322: why is it that people in these regions obtain this treatment? Explanation and interpretation of the patterns in figure 8 is lacking.
Responses: We have explained why people in these regions obtain this treatment. Please see lines 364-366 and figure 8 in the revised manuscript.
Line 383: This statement that the model generally produces estimates around 1.5-2 log higher than observations is made about the surface water oocyst concentrations model by Vermeulen et al. 2019, NOT the oocyst emission model by Hofstra et al. 2013. It is therefore incorrect and very misleading to apply this statement to this research!
Responses: Thanks! It is an error. We have deleted it. Please see lines 430 in the revised manuscript.
Line 388: give numbers on the concentrations of Crypto and Giardia that are found in the WWTPs in this area, that would be informative for the reader.
Responses: We have added the concentrations in the WWTPs in this area. Please see lines 435-437 in the revised manuscript.
Line 419-421: misleading, because you based your model on these models, they are not completely ‘different models’.
Responses: The model name has been revised as “GloWPa-Crypto” in the revised manuscript. Please see lines 476 in the revised manuscript.
Line 421-424: the work by Liu et al. 2019 sounds like a very relevant work. Can you discuss the assumptions that Liu et al. made regarding the oocyst production and emission in this watershed, and contrast these with your own assumptions?
Responses: The work by Liu et al. (2019) modeled the spatio-temporal pattern of Cryptosporidium concentrations in the Daning River watershed of the Three Gorges Reservoir Region, China. We have discussed the assumptions about oocyst concentrations made by Liu et al. (2019) and contrasted them with our own assumptions for this watershed. Please see lines 423-426 in the revised manuscript.
Line 432-437: No, you cannot validate your model with surface water values because you did not calculate surface water values, only emissions. If you wanted to calculate surface water values, you should use your emissions as input for a hydrological model!
Responses: Thanks! It is an error. We used a hydrological model (Vermeulen et al., 2019) to calculate the mean concentrations of Cryptosporidium and Giardia in the waters of the TGR. Please see lines 416-419 in the revised manuscript.
Line 453: you did not include actual runoff data in your model, only an assumption on a runoff fraction. This should be made clear to the reader and discussed in the paper!
Responses: We added the calculation of runoff. Please see lines 229-241 in the revised manuscript and Table S2 in the revised supporting information.
Line 458: not just a routing model, but any hydrological information! A routing model is only a part of hydrological modelling.
Responses: We have used a hydrological model instead of a routing model. Please see lines 516 in the revised manuscript.
The main conclusions from the scenario analysis should be incorporated in the conclusion section.
Responses: We have added the main conclusions from the scenario analysis. Please see lines 508-514 in the revised manuscript.
Figure 1: reference for these data should be given!
Responses: We have added the reference to these data. Please see figure 1 in the revised manuscript.
Figure 2: this figure is adapted from Hofstra et al. 2013. Credit should be given!
Responses: We have explained the figure adapted from Hofstra et al. 2013. Please see figure 2 in the revised manuscript.
Figure 2: Why is the arrow from Rural to Surface water grey? Because as I understand it, you are calculating this?
Responses: We did not calculate the direct emissions in the rural area. Please see lines 167-170 and figure 2 in the revised manuscript.
Figure 3: Strange y axis scale, bit difficult to interpret. (Especially the strange jumps from 5x10^13 to 6x10^14 to 1x10^15??)
Responses: We have changed to a log-unit scale for the y-axis. Please see figure 3 in the revised manuscript.
Figure 4: Incorrect explanation of non-source, as non-source is not an emission category. This figure actually does not show emission categories, but the fraction of the population having a sanitation type that does or does not produce a certain type of emissions.
Responses: The explanation of non-sources has been revised according to the original Hofstra model. Please see figure 4 in the revised manuscript.
Line 139 showed.
Responses: Please see lines 131 in the revised manuscript.
Line 245 human humans.
Responses: Please see lines 289 in the revised manuscript.
Line 214: presented present.
Responses: Please see lines 260 in the revised manuscript.
Line 91-95.
Responses: Please see lines 87-90 in the revised manuscript.
Line 95-99.
Responses: Please see lines 90-93 in the revised manuscript.
Line 260-262.
Responses: Please see lines 303-304 in the revised manuscript.
Line 395-397.
Responses: Please see lines 444-445 in the revised manuscript.
Line 400-403.
Responses: Please see lines 448-451 in the revised manuscript.
Line 410-412.
Responses: Please see lines 455-457 in the revised manuscript.
Line 416: ‘it is deserved to’.
Responses: We have deleted it.
Line 51“ are the largest animal emissions” “ account for the largest animal emissions”
Responses: Please see lines 50 in the revised manuscript.
Line 111 whereas should be removed.
Responses: Please see lines 106 in the revised manuscript.
Line 115 ‘the’ before China should be removed.
Responses: Please see lines 108 in the revised manuscript.
Line 116 preponderance is strange.
Responses: Please see lines 108 in the revised manuscript.
Line 118 lead leading.
Responses: Please see lines 110 in the revised manuscript.
Line 289: them those.
Responses: Please see lines 328 in the revised manuscript.
Line 316: ‘the’ should be removed.
Responses: Please see lines 358 in the revised manuscript.
Line 328: ‘also’ should be removed.
Responses: Please see lines 370 in the revised manuscript.
Line 350: ‘from the increasing pollution’ ‘the’ should be removed.
Responses: Please see lines 397 in the revised manuscript.
Line 437: ‘in actual’ should be removed.
Responses: Please see lines 493 in the revised manuscript.
Line 54: how can urbanization be ‘ improved’ ?
Responses: Please see lines 54 in the revised manuscript.
Line 65-66: Strange summary. The sentence suggests that children are examples of immunocompromised people, and that AIDS patients and neonatal animals are examples of malnourished individuals.
Responses: Please see lines 65 in the revised manuscript.
Line 69: Hofstra et al. 2013 is not a primary reference for this statement.
Responses: Please see lines 67 in the revised manuscript.
Line 73: not all animals are a reservoir of Crypto and Giardia!
Responses: Please see lines 71 in the revised manuscript.
Line 89: does ‘ host’ here refer to animals, humans or both?
Responses: Please see lines 85 in the revised manuscript.
Line 134-136: Text seems to suggest that the figure shows emissions, but it does not.
Responses: Please see lines 125-127 in the revised manuscript.
Line 140: ‘two pollution sources’ ‘ two types of pollution sources’.
Responses: Please see lines 132 in the revised manuscript.
Line 212-213: ‘ range of the parameter that can take’ ‘ range that the parameter can take’.
Responses: Please see lines 256-257 in the revised manuscript.
Line 230: insert ‘of’ after ‘understanding’.
Responses: Please see lines 276 in the revised manuscript.
Line 321-323: this is an assumption in a scenario and not a fact, should be made clear.
Responses: Please see lines 365 in the revised manuscript.
Line 361: commas around adequately should be removed.
Responses: Please see lines 409 in the revised manuscript.
General: inconsistent spelling of feces / faeces.
Responses: We have used feces throughout the paper.
Responses to the comments of reviewer 2
Several references are out of place. Make sure that you check them again and only really cite the correct literature. E.g. Hofstra et al. 2013 is not the source to cite that diarrhea is the third leading cause of death. There are burden of disease studies or WHO documents reporting this. O’Neill et al 2015 is not the source that discusses the lack of available livestock SSP data (and are they indeed still missing or have they in the mean time become available?).
Responses: The references have been checked in the revised manuscript.
It is important to explain why the loads are relevant. What you say is that this model can contribute to risk assessment (line 460). However, risk assessment is based on the amount a person ingests. How do the loads that you calculated relate to this?
Responses: Based on our results for total emissions, concentrations in the TGR can be calculated using a hydrological model to investigate the human health risk. Please see lines 516 in the revised manuscript.
Line 250-251 For human … of total emission reach the TGR. How do you know? How about decay along the way?
Responses: Inactivation or decay of the oocysts has not been accounted for in the original Hofstra et al. (2013) model. We have calculated the decay during the storage period. Please see lines 210-227 in the revised manuscript. Other factors that influence the reduction of pathogens should be studied further.
Line 279-280 Total annual animal source … emissions. Why are these animal sources spread over larger areas? I see for both human and animal emissions the full map coloured? I don’t understand.
Responses: Thanks! It was an error. Please see line 319 in the revised manuscript.
You need to look at the units throughout the text. E.g. line 285 what are the units? (Oo)cysts per district or grid? Or? The units are in more locations unclear.
Responses: All units have been checked in the revised manuscript.
Line 403-408 What water pollutants did that Census of pollution sources look at? Would you expect the results be different for protozoans? Also, what does this text mean for your results. Should in your case livestock emissions have been larger than human emissions?
Responses: We have deleted it. Please see lines 451 in the revised manuscript.
Line 444 you mention that Chongqing is an emission hotspot. How do you know? You only studied this area. How will it compare to other areas?
Responses: Thanks! We have revised the wording. Please see line 499 in the revised manuscript.
Figure 2 caption what is ‘sewage disposal’?
Responses: ‘sewage disposal’ is the wastewater treatment by sewage treatment plants. Please see figure 2 in the revised manuscript.
Figure 2 figure: now that you added rural and urban to the figure (compare to the figure in Hofstra et al. 2013), the figure is inconsistent. Are animals, humans, rural, urban, land, WWTP and surface water really all model components as you mention in the caption? In case you do want to add in urban and rural, consider splitting the box Humans up in two, or add rural population and urban population and remove the box with humans.
Responses: Compared to the figure in Hofstra et al. 2013, figure 2 splits the Humans box into two populations, rural residents and urban residents. Emission categories are calculated separately for rural and urban residents. Please see figure 2 in the revised manuscript.
Figure 3: why not use a log-unit scale. In the current way, the cattle Crypto emissions can not be quantified.
Responses: Thank you! We have changed the y-axis to a log-unit scale. Please see figure 3 in the revised manuscript.
Figure 4: why did you scale this figure to 100%. Why not immediately show the importance of the individual categories? Also, is the y-axis title correct? Shouldn’t it be loads rather than population?
Responses: Figure 4 shows the percentage of the Chongqing population in each model emission category, based on the sanitation types of the GloWPa model following the Demographic and Health Survey (DHS) Program. Please see figure 4 in the revised manuscript.
Figure 7: you show differences between the scenarios. In the text you will need to put these differences into perspective. What does it mean that the emissions are almost halved for Scenario 1? How does halving compare to increased log reductions during treatment or the logs difference in the sensitivity analysis?
Responses: We have explained the results of the scenario differences into perspective. Please see lines 466-475 in the revised manuscript.
Figure 8: The difference figures do not clearly show positive and negative. Make this more obvious.
Responses: We have modified the figures to show positive and negative clearly. Please see figure 8 in the revised manuscript.
The paper uses emissions and loads interchangeably. However, in line 136 ‘load’ is defined. I would suggest the authors use load throughout the paper and remove the word emissions.
Responses: Thank you! We have used emission throughout the paper and removed the word load in the revised manuscript.
Line 51 have the largest instead of are.
Responses: Please see lines 50 in the revised manuscript.
Line 115 remove the before China.
Responses: Please see lines 108 in the revised manuscript.
Line 122 estimateS
Responses: Please see lines 114 in the revised manuscript.
Line 135 humanS
Responses: Please see lines 125 in the revised manuscript.
Line 137 end up IN THE surface water instead of on
Responses: Please see lines 128 in the revised manuscript.
Line 141 systems THAT reach the TGR
Responses: Please see lines 134 in the revised manuscript.
Line 143 in rural areas that feces enter surface water… rephrase (not sure what you want to say)
Responses: Please see lines 134 in the revised manuscript.
Line 145 extensive literature review that only found 3 papers? Change wording of extensive.
Responses: Please see lines 137 in the revised manuscript.
Line 153 humanS
Responses: Please see lines 152 in the revised manuscript.
Line 155 and diffuse emissions. The emission categories…
Responses: Please see lines 145 in the revised manuscript.
Line 164 were also reference to … rephrase
Responses: Please see lines 160 in the revised manuscript.
Line 191 animalS
Responses: Please see lines 182 in the revised manuscript.
Table S4 should be clear on having infected livestock.
Responses: Table S4 was adapted from Vermeulen et al. (2017).
Table S5 and 6 can be merged. Also spell average correctly
Responses: Please see Table S5 in the revised manuscript.
Line 212 range of the parameter that can take… rephrase, not sure what you want to say
Responses: Please see lines 256 in the revised manuscript.
Line 221 2050 WERE based
Responses: Please see lines 267 in the revised manuscript.
Line 227-228 does SSP3 emphasize regional progress? Also in times of war? Not sure…
Responses: The SSP3 was adapted from literatures. Please see lines 273 in the revised manuscript.
Line 228 strongly instead of highly
Responses: Please see lines 274 in the revised manuscript.
Line 230 understanding OF the
Responses: Please see lines 276 in the revised manuscript.
Line 251 Humans were responsible for ….% of the oocyst emissions. Line 253 same comment for the cyst emissions.
Responses: Please see lines 294 in the revised manuscript.
Line 256 add manure to subtitle
Responses: Line 256 does not contain any text.
Line 324 urban residentS ARE responsible
Responses: Please see lines 367 in the revised manuscript.
Line 344 urban areas instead of residents
Responses: Please see lines 388 in the revised manuscript.
Line 351-352 China is one of the regions with Crypto…
Responses: Please see lines 398 in the revised manuscript.
Line 360 important instead of a significance
Responses: Please see lines 409 in the revised manuscript.
Line 361 remove commas twice
Responses: Please see lines 409 in the revised manuscript.
Line 363 aimed AT
Responses: Please see lines 410 in the revised manuscript.
Line 386 densely populated urban areas instead of high population density urban
Responses: Please see lines 433 in the revised manuscript.
Line 416 consequence, manure treatment, such as ….. on oocyst should be studied in more detail ….
Responses: We have deleted it. Please see lines 455 in the revised manuscript.
Line 453 Therefore is out of place. The sentence does not follow logically from the previous sentence.
Responses: Please see lines 511 in the revised manuscript.
Why is the year 2013 chosen as baseline?
Responses: We used the emission data from 2013 for our model since the records for human and livestock populations from that year were the most complete. Please see lines 128-130 in the revised manuscript.
The authors have chosen to use human and livestock loads and then split these up in connected, direct and diffuse loads. It is not in all cases clear what these categories involve. For example, in line 157 the authors mention that faecal waste has been collected and used for irrigation as a fertilizer. It is unclear to me which faecal waste is meant here. Additionally, in the explanation of the livestock loads there is no mention of dealing with the different categories differently, but figure 4 does split them up. It is unclear how this split up has been developed. Finally, the methodology does not discuss non-sources. The results and discussion section do. What is this non-source category?
Responses: In rural areas of China, part of the fecal waste is collected and used for irrigation as a fertilizer. Therefore, we assumed that feces from people using septic tanks and pit latrines are a diffuse source of oocysts and cysts to surface water via runoff. Please see lines 150-152 in the revised manuscript. Secondly, the methodology now discusses the emission categories for humans and livestock. Please see lines 187-191 in the revised manuscript. Non-sources are not emitted to the environment. Please see lines 190-191 in the revised manuscript.
Further on the faecal waste. Is the faecal waste from pit latrines (only latrines, not from septic tanks, or the waste from sewer pipes?) directly used on the land? How about storage in the pits before emptying? Would this influence reduction of pathogens in the faecal waste? To solve this point and the previous one, the paper would really benefit from a TGR area-specific explanation of the sanitation situation and livestock manure management.
Responses: The rural human faecal waste from the septic tank and pit latrine have been counted. Oocysts and cyst decay during the storage period have been calculated before the manure applied as a fertilizer in the rural area of China. Please see lines 210-227 in the revised manuscript.
Why is the HDI relevant? You are working in one country with only one HDI?
Responses: The GloWPa model used the HDI to distinguish excretion rates (oocysts/person/year) between developed and developing countries. The HDI values for both Chongqing and China were less than 0.785 in 2013. Please see lines 152-154 in the revised manuscript.
From Table S2 it seems like the current sanitation fractions used are the same across the districts. Is that indeed the case? This could make quite the difference to the final results and maps.
Responses: Distinguishing sanitation in different districts could make quite a difference to the final results and maps. However, there is insufficient information in the literature about sanitation in the different districts of Chongqing.
The surface runoff fraction used (0.4) seems to be very high. Ferguson uses 2.5%, Vermeulen et al 2019 use even much lower values (2-8 log stay behind on the land). Motivate this fraction of 0.4.
Responses: We have revised the surface runoff fraction. Please see lines 229-241 in the revised manuscript.
Only from equation 4, I realised that some of the manure is treated before it is discharged into the surface water (is that my correct interpretation?). Is that my correct interpretation? What happens to the manure otherwise? Is there any manure storage? If so, would you need to consider losses during storage? Also, I happen to know that direct discharges from farms to the surface water regularly occur in China. Are these included? This is all not clear. Again, paint a picture of the situation in China and then explain which of these are included in in what way.
Responses: We have calculated the decay during the storage period. Please see lines 210-227 in the revised manuscript. As for illegal and undocumented dumping of waste, there is currently no way to include these emissions in our model due to a lack of data. We have revised the conceptual framework of the model components. Please see figure 2 in the revised manuscript.
It is not clear to me why the breeding days of livestock species are relevant. Why is this included? Motivate.
Responses: Each livestock species has a unique number of breeding days and produces different amounts of manure and excretions each year. Please see lines 184-187 in the revised manuscript.
The sensitivity analysis requires much more explanation. For example, the connected fraction is doubled. How can you get a connected fraction higher than 1? Additionally, the fractions together add up to one. When you change one, the others should also automatically change. How did you deal with this? Etcetera.
Responses: The sensitivity of the model inputs used in the risk calculations was explored through a nominal range sensitivity analysis (NRSA). Input parameter values were increased or decreased from the base model, one input at a time based on sensitivity analysis of the GloWPa model. We opted for an NRSA as it provides a quantitative insight into the individual impact of the different model parameters on the model outcome. Please see lines 255-260 in the revised manuscript.
The scenario analysis requires a lot more motivation. The way it is currently discussed, shows a global scale interpretation of the SSPs. However, you are focusing on China alone. Did anyone already do a higher resolution interpretation of the SSPs for China? I would be surprised if nobody has done that yet. It is important to do a local interpretation of such large-scale scenarios, as they may mean something very different in different parts of the world. So how can you interpret the SSP1 and 3 scenarios for China and the TGR region specifically? Also, motivate better why it is OK to assume that there are no changes in excretion rates and prevalence (wouldn’t a better developed country have lower prevalence?). Finally, you currently assume that the livestock density increases at the same rate as the human population. Is that realistic? Motivate. In this way, all choices will have to be more carefully motivated.
Responses: We have added the study of the SSPs for China. Please see lines 266-268 in the revised manuscript. We do not assume changes to the excretion rates or to the prevalence of Cryptosporidium and Giardia in feces. Hygiene and emissions would influence the number of infections and excretion rates in the future; however, these effects are very uncertain for the 2050s based on the literature, and implementing scenarios on epidemiology or behavior was outside the scope of our study. The livestock density increases have been adapted from the study of the SSPs. Please see line 280 in the revised manuscript.
In line 272, at once it is mentioned that spatially distributed maps are produced. How did that spatial distribution happen? And does a 0.5 x 0.5 lat lon degree resolution make sense in the case of this study, or should the resolution be higher, or can you also plot districts (maybe you have done that already, but that is not clear).
Responses: The calculations were done per district. The data we collected on human and livestock populations were derived from the Chongqing Statistics Bureau, China. The data from the statistical yearbooks have been aggregated per district, so the calculations were also done per district. Please see line 139 in the revised manuscript.
Are there animals in urban areas? Line 265…?
Responses: There are no animals in urban areas in our model. We divided the emissions from humans and animals into two main regions: urban areas, which include urban residents, and rural areas, which include rural residents and all livestock. Please see lines 306-307 in the revised manuscript.
The discussion of the paper is weak. Many holes are picked in the own work, but the results are not put in perspective of earlier results or the own sensitivity analysis. The GloWPa model is seen as a different model (line 419), but at the same time, it is seen as the same model (at least, that is what I think?) in line 382-384 (‘the model’ on line 383). Nevertheless, results of both models for the same areas are not compared. Why not? This is an easy first step. You have more spatial detail in your model inputs, so your model is potentially the better model. Right? There are opportunities for more comparison. E.g. the Xiao et al 2013a paper is mentioned on line 369. Measured concentrations are mentioned. However, how can they be compared to your results? You try to make a calculation and arrive at a load, but it is unclear to me 1. how you get to this load and 2. how I can compare this load to the concentrations of Xiao. That is a missed opportunity.
Responses: Thanks a lot for the good advice. We compared our results with those of the other model for the same areas. Calculating surface water concentrations requires using the emissions as input for a hydrological model. Please see lines 416-419 in the revised manuscript. We have spatially distinguished urban and rural areas and river systems. Please see figures 1, 5 and 6 in the revised manuscript.
The discussion is full of texts like ‘it is reasonable that our estimated emissions [] exceeded actual loads of waters in the TGR within an order of magnitude’ (line 382) how do you know it is within the order of magnitude?!), the TGR-WP ‘comprehensively’ (line 425) simulated emissions (how do you know? What tells you this?), ‘Although there are many shortcomings, our TGR-WP model can contribute to understanding’ (line 438-440, what makes you conclude this? You just mentioned quite a number of things that were not included in this model. You need more ammunition to make it convincing). The discussion really requires better comparison with available data and much better argumentation. You did a sensitivity analysis. Use this analysis to put your results into perspective!
Responses: Thanks a lot for the good advice. We have revised the manuscript using the available data and with better argumentation.
We hope that the revised manuscript will be accepted.
Sincerely,
Guosheng Xiao
29-7-2020
" | Here is a paper. Please give your review comments after reading it. |
656 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Cryptosporidium spp. and Giardia duodenalis are two waterborne protozoan parasites that can cause diarrhea. Human and animal feces in surface water are a major source of these pathogens. This paper presents a GloWPa-TGR-Crypto model that estimates Cryptosporidium and G. duodenalis emissions from human and animal feces in the Three Gorges Reservoir (TGR), and uses scenario analysis to predict the effects of sanitation, urbanization, and population growth on oocyst and cyst emissions for 2050. Our model estimated annual emissions of 1.6×10 15 oocysts and 2.1×10 15 cysts from human and animal feces, respectively. Humans were the largest contributors of oocysts and cysts, followed by pigs and poultry. Cities were hot-spots for human emissions, while districts with high livestock populations accounted for the highest animal emissions. Our model was the most sensitive to oocyst excretion rates. The results indicated that 74% and 87% of total emissions came from urban areas and humans, respectively, and 86% of total human emissions were produced by the urban population. The scenario analysis showed a potential decrease in oocyst and cyst emissions with improvements in urbanization, sanitation, wastewater treatment, and manure management, regardless of population increase. Our model can further contribute to the understanding of environmental pathways, the risk assessment of Cryptosporidium and Giardia pollution, and effective prevention and control strategies that can reduce the outbreak of waterborne diseases in the TGR and other similar watersheds .</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Cryptosporidium spp. and Giardia duodenalis are two ubiquitous parasites that can cause gastrointestinal disease in humans and many animals worldwide <ns0:ref type='bibr' target='#b58'>(Šlapeta, 2013;</ns0:ref><ns0:ref type='bibr' target='#b53'>Ryan, Fayer, & Xiao, 2014;</ns0:ref><ns0:ref type='bibr' target='#b72'>Wu et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b55'>Sahraoui et al., 2019)</ns0:ref>. They can cause cryptosporidiosis and giardiasis, which are typically self-limiting infections in immunocompetent individuals, but are life-threatening illnesses in immunocompromised people, such as AIDS patients <ns0:ref type='bibr' target='#b74'>(Xiao & Fayer, 2008;</ns0:ref><ns0:ref type='bibr' target='#b43'>Liu et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b22'>Ghafari et al., 2018 )</ns0:ref>. In developing countries, diarrhea has been identified as the third leading cause of death <ns0:ref type='bibr'>(WHO, 2008)</ns0:ref>, and global deaths from diarrhea are around 1.3 million annually <ns0:ref type='bibr'>(GBD, 2015)</ns0:ref>. There are also many waterborne cryptosporidiosis and giardiasis outbreaks regularly reported in developed countries <ns0:ref type='bibr' target='#b29'>(Hoxie et al., 1997;</ns0:ref><ns0:ref type='bibr' target='#b6'>Bartelt, Attias, & Black, 2016)</ns0:ref>.</ns0:p><ns0:p>Humans and many animals are important reservoirs for Cryptosporidium spp. and G. duodenalis, and large amounts of both pathogens and extremely high oocyst and cyst excretions have been traced in their feces <ns0:ref type='bibr' target='#b23'>(Graczyk & Fried, 2007;</ns0:ref><ns0:ref type='bibr' target='#b61'>Tangtrongsup et al., 2019)</ns0:ref>. Moreover, the transmission of these parasites occurs through a variety of mechanisms in the fecal-oral route, including the direct contact with or indirect ingestion of contaminated food or water <ns0:ref type='bibr' target='#b10'>(Castro-Hermida et al., 2009;</ns0:ref><ns0:ref type='bibr'>Dixon & Brent, 2016;</ns0:ref><ns0:ref type='bibr' target='#b54'>Saaed & Ongerth, 2019)</ns0:ref>. These parasites can enter and pollute surface water directly through sewage sludge or indirectly through field runoff <ns0:ref type='bibr' target='#b24'>(Graczyk et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b48'>Mons et al., 2008)</ns0:ref>. Oocysts and cycsts are highly infectious, very stable in environmental water, and largely resistant to many chemical and physical inactivation agents <ns0:ref type='bibr' target='#b9'>(Carmena et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b11'>Castro-Hermida, Gonzalez-Warleta, & Mezo, 2015;</ns0:ref><ns0:ref type='bibr' target='#b0'>Adeyemo et al., 2019)</ns0:ref>, making the presence of waterborne Cryptosporidium spp. and G. duodenalis pathogens in surface water a serious public health threat <ns0:ref type='bibr' target='#b72'>(Wu et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b40'>Li et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Cryptosporidiosis and giardiasis have been reported in at least 300 areas and more than 90 countries worldwide <ns0:ref type='bibr' target='#b77'>(Yang et al., 2017)</ns0:ref>. Previous studies have predicted that Cryptosporidium is in 4% to 31% of the stools of immunocompetent people living in developing countries <ns0:ref type='bibr' target='#b52'>(Quihui-Cota et al., 2017)</ns0:ref> and in 1% of the stools of people with high incomes <ns0:ref type='bibr' target='#b12'>(Checkley et al., 2015)</ns0:ref>. 
Additionally, it has been estimated that more than 200 million people are chronically infected with giardiasis, with 500,000 new cases reported each year, and that waterborne Giardia outbreaks affect approximately 10% of the world's population <ns0:ref type='bibr' target='#b49'>(Norhayati et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b54'>Saaed & Ongerth, 2019)</ns0:ref>. To date, Cryptosporidium spp. and G. duodenalis have been found in more than 27 provincial administrative regions in China <ns0:ref type='bibr' target='#b77'>(Yang et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b42'>Liu et al., 2020)</ns0:ref>. However, there continues to be a critical lack of surveillance systems documenting and tracking protozoan infection and waterborne outbreaks in developing countries <ns0:ref type='bibr' target='#b5'>(Baldursson & Karanis, 2011;</ns0:ref><ns0:ref type='bibr'>Efstratiou, Ongerth, & Karanis, 2017ab)</ns0:ref>. Cryptosporidium and Giardia have recently been added as pathogens in China's Standards for Drinking Water Quality (GB/ <ns0:ref type='bibr'>T5749-2006, 2007)</ns0:ref>, suggesting that greater attention is being paid to waterborne parasite control in a region with no previous monitoring and reporting systems. Nevertheless, the incidence rate and risks of waterborne protozoan illness are still poorly understood in China, making it difficult to combat parasitic protozoa, manage source water, and assess future risks <ns0:ref type='bibr' target='#b1'>(An et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b72'>Xiao et al., 2013a;</ns0:ref><ns0:ref type='bibr' target='#b5'>Baldursson & Karanis, 2011)</ns0:ref>.</ns0:p><ns0:p>The Three Gorges Reservoir (TGR), one of the world's largest comprehensive hydropower projects, is located at the upper reaches of the Yangtze River, the longest river in Asia <ns0:ref type='bibr' target='#b25'>(He et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b56'>Sang et al., 2019)</ns0:ref>. It is an important source of water and plays a crucial role in China's economy <ns0:ref type='bibr' target='#b41'>(Li, Huang, & Qu, 2017)</ns0:ref> by optimizing their water resources. Serious pollution from agricultural activities and domestic sewage discharge have adversely affected the sustainable development of the TGR and the entire Yangtze River Basin, and pose a threat to future resources <ns0:ref type='bibr' target='#b18'>(Fu et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b76'>Yang et al., 2015)</ns0:ref>. There is a lack of observational data on Cryptosporidium and Giardia emissions from people and livestock in China. The existing data does show that monitoring programs are expensive, time-consuming, and often cannot detect or properly measure the ambient concentrations of oocysts and cysts <ns0:ref type='bibr'>(Efstratiou, Ongerth & Karanis, 2017ab;</ns0:ref><ns0:ref type='bibr' target='#b45'>Martins et al., 2019)</ns0:ref>. Understanding the environmental emissions and transmission routes of parasitic protozoa is beneficial when developing strategies to assess and mitigate waterborne diseases. 
In this study, we aimed to: (i) use a spatially explicit model to estimate total annual oocyst and cyst emissions from human and livestock feces in the TGR; (ii) use scenario analysis to explore the impacts of population growth, urbanization, and sanitation changes on human and animal Cryptosporidium and Giardia emissions in surface water; and (iii) contribute to a general understanding of the risk of protozoan parasites and to strategies that will control and reduce the burden of waterborne pathogens in the TGR.</ns0:p></ns0:div>
<ns0:div><ns0:head>Study area and model components</ns0:head><ns0:p>The TGR Area is located at 28°30' to 31°44 ' N and 105°44' to 111°39' E in the lower section of the upper reaches of the Yangtze River. It has a watershed of 46,118 km 2 , reaches approximately 16.8 million residents, and its river system covers 38 Chongqing districts and counties (Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>). Using the area's population (Fig. <ns0:ref type='figure' target='#fig_0'>1A</ns0:ref>) and livestock density (Fig. <ns0:ref type='figure' target='#fig_0'>1B</ns0:ref>) (Table <ns0:ref type='table'>S1</ns0:ref>), we applied the GloWPa-TGR-Crypto model to estimate oocyst and cyst emissions in the Chongqing region of the TGR <ns0:ref type='bibr' target='#b26'>(Hofstra et al., 2013)</ns0:ref>. We defined an emission as the annual total number of oocysts and cysts excreted by people and livestock found in surface water. We used the emission data from 2013 for our model since the records for human and livestock populations from that year were the most complete. The model (GloWPa-TGR-Crypto) consisted of two components: a human emission model and an animal emission model. Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> shows a schematic sketch of the model's components. We identified the two types of pollution sources for the total oocysts and cysts found in the TGR. Point sources were human emissions connected to sewage systems that indirectly reached the TGR after treatment, or directly before treatment. Nonpoint sources were emissions from rural residents or livestock resulting from manure being used as fertilizer and entering surface water via runoff. Our model was partly based on <ns0:ref type='bibr' target='#b26'>Hofstra et al. (2013)</ns0:ref> and other reviews suggesting improvements in manure treatment of livestock and human emissions <ns0:ref type='bibr' target='#b26'>(Hofstra et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b65'>Vermeulen et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b64'>Vermeulen et al., 2017)</ns0:ref>. We ran the model at both district and county levels. Our model can't differentiate Cryptosporidium species, as there was the paucity of the prevalence and excretion rates for different species in humans and livestock.</ns0:p></ns0:div>
<ns0:div><ns0:head>Calculating oocyst and cyst excretion in human feces (H)</ns0:head><ns0:p>Using population (P), sanitation availability (F), and excretion (O_p) data from 2013, the model first estimated human oocyst and cyst emissions. We divided the human populations into four emission categories: connected sources, direct sources, diffuse sources, and non-sources. The detailed descriptions of the emission categories are provided in <ns0:ref type='bibr' target='#b38'>Kiulia et al. (2015)</ns0:ref>. The model not only calculated the human emissions connected to sewers in urban and rural areas, but it also calculated direct and diffuse emissions. Unlike <ns0:ref type='bibr' target='#b38'>Kiulia et al. (2015)</ns0:ref>, our model did not differentiate across age categories. In rural areas of China, a portion of fecal waste is collected and used for fertilizer and irrigation. Therefore, we assumed that human feces runoff from septic tanks and pit latrines was a diffuse source of oocysts and cysts in surface water. The two protozoan parasites' prevalence rate was 10% in developing countries (Human Development Index (HDI) < 0.785; <ns0:ref type='bibr' target='#b26'>Hofstra et al., 2013)</ns0:ref>, and the average excretion rate (O_p) was assumed to be 1.0 × 10^8 and 1.58 × 10^8 for oocysts and cysts, respectively (Ferguson et al., 2007; Hofstra et al., 2013). Our model used secondary sewage treatment according to the Chinese discharge standard of pollutants for municipal wastewater treatment plants (<ns0:ref type='bibr' target='#b19'>GB18918-2002)</ns0:ref> and the Chinese technological policy for the treatment of municipal sewage and pollution control (http://www.mee.gov.cn/). The removal efficiencies were 10%, 50%, and 95% for primary, secondary, and tertiary treatments, respectively <ns0:ref type='bibr' target='#b26'>(Hofstra et al., 2013)</ns0:ref>. The results (H) were calculated using Eq. (<ns0:ref type='formula'>1</ns0:ref>) and (2):</ns0:p><ns0:p>K_i ∈ {K_1, K_2, K_3, K_4}, i ∈ {1, 2, 3, 4}, representing the four emission categories:</ns0:p><ns0:formula xml:id='formula_0'>K_1 — urban connected emissions; K_2 — rural connected emissions; K_3 — urban direct emissions; K_4 — rural diffuse emissions</ns0:formula><ns0:p>Oocyst and cyst excretions (K_i) from each human emission category (i) per district were calculated as follows:</ns0:p><ns0:formula xml:id='formula_1'>K_1 = CE_u = P_u × F_cu × O_p × (1 - F_rem); K_2 = CE_r = P_r × F_cr × O_p × (1 - F_rem); K_3 = DE_u = P_u × F_du × O_p; K_4 = DifE_r = P_r × F_difr × O_p (1); H = Σ_{i=1}^{4} K_i (2)</ns0:formula><ns0:p>where H is the total oocyst and cyst excretion from the different human emission categories in a district or county (oocysts and cysts/year); P_u and P_r are the total urban and rural populations in a district or county, respectively; O_p is the average oocyst and cyst excretion rate per person per year (oocysts and cysts/year); F_cu and F_cr are the fractions of the urban and rural populations connected to a sewer, respectively; F_du is the fraction of the urban population not connected to a sewer that is considered a direct source; F_difr is the fraction of the rural population not using sanitation that is considered a diffuse source; and F_rem is the fraction of oocysts and cysts removed by sewage treatment plants (STP). The values assumed for this study are summarized in Tables <ns0:ref type='table'>S1 and S2</ns0:ref>.</ns0:p></ns0:div>
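To make Eq. (1) and (2) concrete, the per-district human emission calculation can be expressed in a few lines of code. The snippet below is a minimal sketch in Python, not the study's actual implementation; the function name and every numeric input are illustrative assumptions rather than the values in Tables S1 and S2.

```python
# Minimal sketch of Eq. (1)-(2): per-district human oocyst/cyst emissions.
# All numeric inputs below are illustrative placeholders, not the Table S1/S2 values.

O_P = 1.0e8  # average excretion rate per person (oocysts/year), illustrative

def human_emissions(P_u, P_r, F_cu, F_cr, F_du, F_difr, F_rem, O_p=O_P):
    """Return the four emission categories K1..K4 and their sum H (Eq. 1-2)."""
    K1 = P_u * F_cu * O_p * (1 - F_rem)   # urban connected emissions (after STP)
    K2 = P_r * F_cr * O_p * (1 - F_rem)   # rural connected emissions (after STP)
    K3 = P_u * F_du * O_p                 # urban direct emissions (no treatment)
    K4 = P_r * F_difr * O_p               # rural diffuse emissions (to storage/land)
    return (K1, K2, K3, K4), K1 + K2 + K3 + K4

# Example district (hypothetical numbers):
(K1, K2, K3, K4), H = human_emissions(
    P_u=600_000, P_r=250_000,             # urban / rural population
    F_cu=0.78, F_cr=0.09,                 # fractions connected to a sewer
    F_du=0.22, F_difr=0.85,               # direct / diffuse fractions
    F_rem=0.50)                           # removal by secondary treatment
print(f"H = {H:.2e} oocysts/year")
```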
<ns0:div><ns0:head>Calculating oocyst and cyst excretion in animal manure (A)</ns0:head><ns0:p>Using the number of livestock, breeding day, manure excretion, oocyst and cyst excretion rate, and prevalence rate, the model then estimated livestock oocyst and cyst emissions in 2013. We established six livestock categories: rabbits, pigs, cattle, poultry, sheep, and goats. Unlike <ns0:ref type='bibr' target='#b26'>Hofstra et al. (2013)</ns0:ref>, our model used different livestock breeding day categories because each livestock species has a unique number of breeding days before slaughter and produces different amounts of manure and excretions each year. We also divided the animal populations into four emission categories: (1) connected emissions from livestock receiving manure treatment, (2) direct emissions from livestock directly discharged to surface water, (3) diffuse emissions resulting from using livestock manure as a fertilizer after storage, and (4) livestock manure that was not used for irrigation after storage or for any other use (e.g., burned for fuel) <ns0:ref type='bibr'>(Vermeulen et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b64'>Vermeulen et al., 2017)</ns0:ref>. We assumed that 10% of emissions would be connected emissions <ns0:ref type='bibr' target='#b78'>(Zhang et al., 2017)</ns0:ref>. Illegal and undocumented direct emissions (e.g., dumped manure) were not included in our model due to a lack of data. The results (A) were calculated using Eq. (<ns0:ref type='formula'>3</ns0:ref>) and (<ns0:ref type='formula'>4</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_2'>X_j ∈ {X_1, X_2, X_3, X_4, X_5}, j ∈ {1, 2, 3, 4, 5}; X_1 — rabbit emissions; X_2 — pig emissions; X_3 — cattle emissions; X_4 — sheep and goat emissions; X_5 — poultry emissions</ns0:formula><ns0:p>We calculated oocyst and cyst excretions (X_j) from each animal species (j) per district using the following equation:</ns0:p><ns0:formula xml:id='formula_3'>X_j = N_a,j × D_a,j × M_a,j × O_a,j × P_a,j (3); A = Σ_{j=1}^{5} X_j (4)</ns0:formula><ns0:p>where X_j is the oocyst and cyst excretions from livestock species j in a district or county (oocysts and cysts/year), N_a,j is the number of animals in a district or county, D_a,j is the breeding days for livestock species j (days), M_a,j is the mean daily manure of livestock species j (kg•day^-1), O_a,j is the oocyst and cyst excretion rate per infected livestock species j in manure (log10 (oo)cysts•kg^-1•d^-1), and P_a,j is the prevalence of cryptosporidiosis and giardiasis in livestock species j. The values assumed for this study are summarized in Tables <ns0:ref type='table'>S1, S3</ns0:ref>, S4, and S5.</ns0:p></ns0:div>
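Eq. (3) and (4) follow the same pattern for livestock: multiply animal numbers, breeding days, daily manure, excretion per kilogram of manure, and prevalence for each species, then sum over species. The sketch below is an illustrative Python example only; the species parameters are hypothetical placeholders, not the Table S3-S5 values, and the excretion rates are used here in linear (not log10) form.

```python
# Minimal sketch of Eq. (3)-(4): per-district livestock oocyst/cyst excretion.
# Parameter values below are illustrative placeholders, not the Table S3-S5 values.

livestock = {
    # species: (N_a animals, D_a breeding days, M_a manure kg/day,
    #           O_a excretion per kg manure, P_a prevalence)
    "pig":     (800_000, 180, 4.0, 1.0e4, 0.20),
    "cattle":  (60_000, 365, 20.0, 5.0e3, 0.15),
    "poultry": (5_000_000, 60, 0.1, 2.0e4, 0.10),
}

def species_excretion(N_a, D_a, M_a, O_a, P_a):
    """Eq. (3): X_j = N_a * D_a * M_a * O_a * P_a."""
    return N_a * D_a * M_a * O_a * P_a

A = sum(species_excretion(*params) for params in livestock.values())  # Eq. (4)
print(f"A = {A:.2e} (oo)cysts/year excreted in manure")
```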
<ns0:div><ns0:head>Calculating oocyst and cyst emissions after manure storage (S)</ns0:head><ns0:p>In China, manure from human diffuse and livestock sources is collected and used for irrigation and fertilizer <ns0:ref type='bibr' target='#b44'>(Liu et al., 2019)</ns0:ref>. The decay of oocysts and cysts in manure that has been stored before being applied as fertilizer in the TGR watershed is temperature-dependent during the storage period <ns0:ref type='bibr' target='#b59'>(Tang et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b64'>Vermeulen et al., 2017)</ns0:ref>. The number of oocysts and cysts in stored manure that has been loaded on land during irrigation was calculated using Eq. (<ns0:ref type='formula'>5</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_4'>S = DifE_r × F_s,h × F_v + A × F_s,a × F_v (5)</ns0:formula><ns0:p>where S is the number of oocysts and cysts in manure that has been spread on land after storage in a district or county (oocysts and cysts/year); DifE_r and A are the numbers of oocysts and cysts in manure from rural residents (human diffuse sources) and livestock (oocysts and cysts/year), respectively; F_s,h and F_s,a are the proportions of stored manure applied as a fertilizer from rural residents and livestock, respectively (Table <ns0:ref type='table'>S2</ns0:ref>); and F_v is the proportion of average oocyst and cyst survival in the storage system.</ns0:p><ns0:p>The average oocyst and cyst survival rate (F_v) in the storage system depended on temperature (T) and storage time (t_s) <ns0:ref type='bibr' target='#b64'>(Vermeulen et al., 2017)</ns0:ref>. The results were calculated using Eq. (<ns0:ref type='formula'>6</ns0:ref>), (<ns0:ref type='formula'>7</ns0:ref>), and (8):</ns0:p><ns0:formula xml:id='formula_5'>K_s = ln 10 / (-2.5586 × T + 119.63) (6); V_s = e^(-K_s × t_s) (7); F_v = (∫_0^{t_s} V_s dt) / t_s (8)</ns0:formula><ns0:p>where T is the average annual air temperature (℃) (Table <ns0:ref type='table'>S2</ns0:ref>), K_s is a constant based on air temperature, V_s is the survival rate of oocysts and cysts over time, and t_s is the manure storage time (days) (Table <ns0:ref type='table'>S2</ns0:ref>).</ns0:p></ns0:div><ns0:div><ns0:head>Calculating oocyst and cyst runoff (R) to the TGR</ns0:head><ns0:p>Oocysts and cysts in stored manure that have been applied to agricultural land as a fertilizer are transported from land to rivers largely via surface runoff <ns0:ref type='bibr' target='#b63'>(Velthof et al., 2009)</ns0:ref>. Our model estimated oocyst and cyst runoff using the amounts of manure applied as fertilizer, maximal surface runoff, and a set of reduction factors <ns0:ref type='bibr' target='#b63'>(Velthof et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b26'>Hofstra et al., 2013)</ns0:ref>. The results (R) were calculated using Eq. (<ns0:ref type='formula'>9</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_6'>R = S × F_run,max × f_lu × f_p × f_rc × f_s (9)</ns0:formula><ns0:p>where R is the number of oocysts and cysts in manure applied as a fertilizer that reached the TGR via surface runoff in a district or county (oocysts and cysts/year), S is the number of oocysts and cysts in manure that was spread on land after storage (oocysts and cysts/year), F_run,max is the fraction of maximum surface runoff across different slope classes, f_lu is the reduction factor for land use, f_p is the reduction factor for average annual precipitation, f_rc is the reduction factor for rock depth, and f_s is the reduction factor for soil type.
The values assumed for this study are summarized in Table <ns0:ref type='table'>S2</ns0:ref>.</ns0:p></ns0:div>
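Eq. (5)-(9) chain together: manure is first discounted by the temperature-dependent survival fraction accumulated over the storage period, and the surviving (oo)cysts spread on land are then reduced by the runoff and landscape factors. The minimal Python sketch below illustrates that chain; it assumes Eq. (6) is the decimal-reduction-time form K_s = ln(10)/(-2.5586·T + 119.63) and that Eq. (8) can be integrated analytically, and every numeric input is an illustrative placeholder rather than a Table S2 value.

```python
import math

# Minimal sketch of Eq. (5)-(9): decay during manure storage, then runoff to the TGR.
# All numbers are illustrative assumptions, not the study's Table S2 values.

def survival_fraction(T, t_s):
    """Average survival F_v over a storage period t_s (days) at temperature T (degC)."""
    K_s = math.log(10) / (-2.5586 * T + 119.63)       # Eq. (6), decay constant per day (assumed form)
    # Eq. (7)-(8): F_v = (1/t_s) * integral_0^t_s exp(-K_s * t) dt, solved analytically
    return (1 - math.exp(-K_s * t_s)) / (K_s * t_s)

def runoff_to_river(S, F_run_max, f_lu, f_p, f_rc, f_s):
    """Eq. (9): (oo)cysts in spread manure that reach the TGR via surface runoff."""
    return S * F_run_max * f_lu * f_p * f_rc * f_s

F_v = survival_fraction(T=18.0, t_s=60)               # e.g. 18 degC, 60-day storage
S = (2.0e14 * 0.6 + 3.0e14 * 0.8) * F_v               # Eq. (5) with illustrative DifE_r, A, F_s
R = runoff_to_river(S, F_run_max=0.2, f_lu=0.5, f_p=0.5, f_rc=1.0, f_s=0.5)
print(f"F_v = {F_v:.2f}, R = {R:.2e} (oo)cysts/year")
```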
<ns0:div><ns0:head>Calculating total emissions (E) and mean concentrations (C) of oocysts and cysts</ns0:head><ns0:p>Our model defined total oocyst and cyst emissions as the annual number of oocysts and cysts per district in the TGR. The results (E) were calculated using Eq. (<ns0:ref type='formula'>10</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_7'>E = CE_u + CE_r + DE_u + R (10)</ns0:formula><ns0:p>where E is the total oocyst and cyst emissions from humans and animals in a district or county (oocysts and cysts/year), CE_u is the oocyst and cyst emissions in the TGR by urban populations connected to STP in a district or county, CE_r is the oocyst and cyst emissions in the TGR by rural populations connected to STP in a district or county, DE_u is the direct oocyst and cyst emissions in the TGR by urban populations in a district or county, and R is the oocyst and cyst emissions in the TGR from human and livestock manure that has been applied as a fertilizer via runoff in a district or county.</ns0:p><ns0:p>According to the total emissions from the two protozoa in Chongqing and the TGR's hydrological information, we preliminarily calculated mean Cryptosporidium and Giardia concentrations in the TGR in 2013 using the GloWPa-Crypto C1 model <ns0:ref type='bibr' target='#b66'>(Vermeulen et al., 2019)</ns0:ref>. Mean concentrations were calculated using Eq. (<ns0:ref type='formula'>11</ns0:ref>). All equations and parameters used in this study are shown in Table <ns0:ref type='table'>S11</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_8'>C = E_t × e^(-(K_T + K_R + K_S) × t) / Q_s (11)</ns0:formula><ns0:p>where C is the mean concentration of oocysts and cysts (oocysts and cysts•10L^-1), E_t is the sum of total oocyst and cyst emissions from humans and animals in all districts or counties in Chongqing (oocysts and cysts/year), K_T, K_R, and K_S represent the loss rate constants for temperature, solar radiation, and sedimentation, respectively (day^-1), t is the residence time of oocysts and cysts in the Chongqing section of the TGR, and Q_s is the sum of the TGR's annual inflow and storage capacity (m^3•year^-1).</ns0:p></ns0:div>
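Eq. (10) and (11) close the calculation: district emissions are summed and the Chongqing-wide total is converted into a mean reservoir concentration by applying first-order losses over the residence time and dividing by the annual water volume. The following Python sketch illustrates that conversion; the loss-rate constants, residence time, and flow volume are assumed placeholder values, not the Table S11 parameters.

```python
import math

# Minimal sketch of Eq. (10)-(11): district totals and mean reservoir concentration.
# The loss-rate constants, residence time and flow volume are illustrative placeholders.

def district_total(CE_u, CE_r, DE_u, R):
    """Eq. (10): total oocyst/cyst emissions of one district."""
    return CE_u + CE_r + DE_u + R

def mean_concentration(E_t, K_T, K_R, K_S, t, Q_s):
    """Eq. (11): mean concentration after first-order losses during residence time t.

    E_t in (oo)cysts/year, K_* in 1/day, t in days, Q_s in m^3/year.
    Returns (oo)cysts per 10 L.
    """
    conc_per_m3 = E_t * math.exp(-(K_T + K_R + K_S) * t) / Q_s
    return conc_per_m3 / 100.0            # 1 m^3 = 100 * 10 L

E_t = 1.6e15                              # e.g. total oocyst emissions (study-scale figure)
C = mean_concentration(E_t, K_T=0.01, K_R=0.005, K_S=0.02, t=30, Q_s=4.5e11)
print(f"C = {C:.1f} oocysts/10 L")
```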
<ns0:div><ns0:head>Sensitivity analysis</ns0:head><ns0:p>We tested our model's sensitivity to change using input parameters in a nominal range sensitivity analysis (NRSA). Input parameter values were based on reasonable lower and upper ranges of a base model, and we tested each variable individually <ns0:ref type='bibr' target='#b65'>(Vermeulen et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b64'>Vermeulen et al., 2017)</ns0:ref>. We selected the NRSA because it provides quantitative insight into the individual impact of different parameters on the model's outcome. Tables <ns0:ref type='table'>S6, S7</ns0:ref>, and S8 present the sensitivity analysis input variables.</ns0:p></ns0:div>
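The nominal range sensitivity analysis can be illustrated with a toy example: each input is moved to its lower and upper bound one at a time while all other inputs stay at their base values, and the ratio of the perturbed output to the base output is recorded. The sketch below uses a deliberately simplified stand-in model and made-up ranges, not the inputs listed in Tables S6-S8.

```python
# Minimal sketch of a nominal range sensitivity analysis (NRSA).
# Toy stand-in model and ranges; not the study's Tables S6-S8 inputs.

def model(params):
    """Simplified stand-in for the GloWPa-TGR-Crypto emission calculation."""
    return params["population"] * params["excretion"] * (1 - params["removal"])

base = {"population": 1.0e6, "excretion": 1.0e8, "removal": 0.5}
ranges = {
    "population": (0.8e6, 1.2e6),
    "excretion":  (1.0e7, 1.0e9),   # one log unit down/up
    "removal":    (0.1, 0.95),
}

base_out = model(base)
for name, (lo, hi) in ranges.items():
    low_out = model(dict(base, **{name: lo}))    # perturb one input at a time
    high_out = model(dict(base, **{name: hi}))
    print(f"{name:>10}: output ratio {low_out / base_out:.2f} (lower bound) "
          f"to {high_out / base_out:.2f} (upper bound)")
```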
<ns0:div><ns0:head>Predicting total oocyst and cyst emissions for 2050</ns0:head><ns0:p>To explore the impact of future population, urbanization, and sanitation changes on human and animal Cryptosporidium and Giardia emissions in the TGR, we divided the emissions into urban resident, rural resident, and livestock categories to predict the total oocyst and cyst emissions for 2050 based on three scenarios. China's projected population, urbanization, and livestock production data for 2050 can be found in the Shared Socioeconomic Pathways (SSPs) database <ns0:ref type='bibr' target='#b79'>(Zhao, 2018;</ns0:ref><ns0:ref type='bibr' target='#b30'>Huang et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b13'>Chen et al., 2020)</ns0:ref> (https://tntcat.iiasa.ac.at/SspDb). We based Scenario 1 on SSP1, which is entitled 'Sustainability -Taking the green road' and emphasizes sustainability, well-being, and equity. In this scenario, there is moderate population change and well-planned urbanization <ns0:ref type='bibr'>(O'Neill, Kriegle, & Ebi, 2015;</ns0:ref><ns0:ref type='bibr' target='#b35'>Jiang & O'Neill, 2017;</ns0:ref><ns0:ref type='bibr' target='#b79'>Zhao, 2018;</ns0:ref><ns0:ref type='bibr' target='#b30'>Huang et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b13'>Chen et al., 2020)</ns0:ref>. Scenario 2 was based on SSP3, which is entitled 'Regional rivalry -A rocky road' and emphasizes regional progress. In this scenario, China's population change is significant and urbanization is unplanned <ns0:ref type='bibr'>(O'Neill, Kriegle, & Ebi, 2015;</ns0:ref><ns0:ref type='bibr' target='#b35'>Jiang & O'Neill, 2017;</ns0:ref><ns0:ref type='bibr' target='#b79'>Zhao, 2018;</ns0:ref><ns0:ref type='bibr' target='#b30'>Huang et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b13'>Chen et al., 2020)</ns0:ref>. To emphasize the importance of wastewater and manure treatment, we created Scenario 3 as a variation of Scenario 1 based on <ns0:ref type='bibr' target='#b27'>Hofstra & Vermeulen (2016)</ns0:ref>. This scenario has the same population, urbanization, and sanitation changes as Scenario 1, but with the insufficient sewage and manure treatments from 2013. Since there were no available data for individual livestock species, we assumed that all livestock species will grow by the same percentage noted in the SSPs database. We also assumed that there will be changes only in population and livestock numbers, not in any other parameters (e.g., oocyst and cyst excretion rates and prevalence) <ns0:ref type='bibr' target='#b32'>(Iqbal, Islam, & Hofstra, 2019)</ns0:ref>. We based our sanitation, wastewater, and manure treatment predictions for these three scenarios on previous literature reviews <ns0:ref type='bibr' target='#b27'>(Hofstra & Vermeulen, 2016;</ns0:ref><ns0:ref type='bibr' target='#b32'>Iqbal, Islam, & Hofstra, 2019)</ns0:ref>. Table <ns0:ref type='table'>S9</ns0:ref> provides an overview of the scenarios.</ns0:p></ns0:div>
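The scenario calculations reuse the 2013 model with adjusted inputs. The sketch below illustrates the mechanics on a deliberately simplified population-only example; the growth factors, urbanization fractions, and removal efficiencies are invented placeholders, not the Table S9 assumptions, and livestock is omitted for brevity.

```python
# Minimal sketch of the 2050 scenario recalculation: rerun the 2013 baseline with
# scenario-specific population growth, urbanization and treatment removal, while
# keeping excretion rates and prevalence at 2013 values. All numbers are illustrative.

baseline_pop = 16.8e6                     # total residents, order of magnitude only
O_P = 1.0e8                               # 2013 average excretion rate, kept constant

scenarios = {
    "SSP1-like":   {"pop_growth": 0.95, "urban_frac": 0.85, "removal": 0.95},
    "SSP3-like":   {"pop_growth": 1.15, "urban_frac": 0.70, "removal": 0.50},
    "SSP1-no-WWT": {"pop_growth": 0.95, "urban_frac": 0.85, "removal": 0.50},
}

for name, s in scenarios.items():
    total_pop = baseline_pop * s["pop_growth"]
    urban_pop = total_pop * s["urban_frac"]
    # simplified: urban emissions pass through treatment, rural emissions do not
    emissions = urban_pop * O_P * (1 - s["removal"]) + (total_pop - urban_pop) * O_P
    print(f"{name:>12}: {emissions:.2e} oocysts/year")
```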
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Emissions and mean concentrations of oocysts and cysts in the TGR in 2013</ns0:head><ns0:p>Cryptosporidium oocyst and Giardia cyst emissions from humans, rabbits, pigs, cattle, sheep, goats, and poultry found in the TGR in 2013 are shown in Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>. Chongqing had a total of 1.6×10 15 oocysts/year and 2.1×10 15 cysts/year of Cryptosporidium and Giardia emissions. Human Cryptosporidium and Giardia emissions contained a total of 1.2×10 15 oocysts/year and 2.0×10 15 cysts/year, and animal emissions had a total of 3.4×10 14 oocysts/year and 1.5×10 14 cysts/year. Humans and animals were responsible for 42% and 10% of total emissions in the TGR, respectively. Humans were responsible for 78% of oocyst emissions, followed by 14% from pigs, and 8% from poultry. Humans were responsible for 93% of cyst emissions, followed by 6% from pigs, and 0.5% from cattle. Ultimately, we found that humans were the dominant source of oocysts and cysts, followed by pigs, poultry, and cattle. The mean Cryptosporidium and Giardia concentrations in the TGR in 2013 were 22 oocysts/10 L and 28 cysts/10 L, respectively.</ns0:p></ns0:div>
<ns0:div><ns0:head>Oocyst and cyst emission sanitation types</ns0:head><ns0:p>We immediately observed the differences in sanitation types (connected emissions, direct emissions, diffuse emissions, and non-source) across the human, livestock, urban, and rural populations (Fig. <ns0:ref type='figure'>4</ns0:ref>). We found that 49% of the populations were connected to a sewer, 36% had diffuse sources, 13% had direct sources, and 2% were non-source. In livestock populations, only 10% were connected to a sewer (manure treatment), 80% produced diffuse emissions, and 10% were non-source. We divided the human and livestock emissions by region: urban areas (made up of urban residents) and rural areas (made up of rural residents and all livestock). In urban areas, the emissions connected to a sewer were prominent (78%), followed by direct sources (22%). In rural areas, diffuse sources produced approximately 85% of total emissions, followed by connected sources (9%), and non-source (6%).</ns0:p></ns0:div>
<ns0:div><ns0:head>Spatial distribution of oocyst and cyst emissions in the TGR in 2013</ns0:head><ns0:p>Our model produced a spatial distribution of Cryptosporidium and Giardia emissions in the TGR for each Chongqing district or county from 2013 (Fig. <ns0:ref type='figure' target='#fig_3'>5</ns0:ref>). The total human Cryptosporidium emissions ranged from 5.4×10 12 to 7.4×10 13 oocysts/district (Fig. <ns0:ref type='figure' target='#fig_3'>5A</ns0:ref>) and the total human Giardia emissions ranged from 8.5×10 12 to 1.2×10 14 cysts/district (Fig. <ns0:ref type='figure' target='#fig_3'>5B</ns0:ref>). Overall, the emission spatial differences depended on population density and urbanization rate. The largest emissions were from the densely-populated Yubei, Wanzhou, and Jiulongpo districts.</ns0:p><ns0:p>The total animal source emissions ranged from 0 to 1.8×10 13 oocysts/district (Fig. <ns0:ref type='figure' target='#fig_3'>5C</ns0:ref>) and 0 to 7.2×10 13 cysts/district (Fig. <ns0:ref type='figure' target='#fig_3'>5D</ns0:ref>) for Cryptosporidium and Giardia, respectively. We based our results on the number of animals, manure production, manure treatment and runoff, and invariant oocyst and cyst emissions from each animal category over one year. The lowest emissions were observed in areas with low animal populations, such as the downtown districts of Yuzhong, Dadukou, and Nanan.</ns0:p><ns0:p>The total Cryptosporidium and Giardia emissions ranged from 1.0×10 13 to 8.1×10 13 oocysts/district and 1.0×10 13 to 1.2×10 14 cysts/district, respectively (Fig. <ns0:ref type='figure' target='#fig_3'>5E and F</ns0:ref>). Total human emissions were approximately six-fold higher than animal emissions and played a decisive role in total emission distribution. We found slightly more cysts than oocysts in total emissions and human emissions. In contrast, there were slightly more oocysts than cysts in animal emissions. The highest total emissions were found in areas with large animal and human populations, such as the main districts of Wanzhou, Yubei, and Hechuan.</ns0:p><ns0:p>Human Cryptosporidium and Giardia emissions from urban and rural areas can be found in Figure <ns0:ref type='figure'>6</ns0:ref>. In urban areas, Cryptosporidium and Giardia emissions ranged from 3.5×10 12 to 7.0×10 13 oocysts/district and 5.5×10 12 to 1.1×10 14 cysts/district, respectively (Fig. <ns0:ref type='figure'>6A and C</ns0:ref>). In rural areas, the emissions ranged from 0 to 9.8×10 12 oocysts/district and 0 to 1.6×10 13 cysts/district (Fig. <ns0:ref type='figure'>6B and D</ns0:ref>). Rural emissions were spread over much larger areas than urban emissions. Human emissions in urban areas were six-fold higher than in rural areas and played a crucial role in total human emission distribution.</ns0:p></ns0:div>
<ns0:div><ns0:head>Sensitivity analysis</ns0:head><ns0:p>Since there were limited observational data on Cryptosporidium and Giardia, we performed a sensitivity analysis to verify the GloWPa-TGR-Crypto model's performance. The sensitivity analysis (Tables <ns0:ref type='table'>S6 -S8</ns0:ref>) showed that the model was the most sensitive to changes in excretion rate (shown for 1 log unit change in excretion rates), particularly the excretion rates of humans (factor 8.03), pigs (factor 2.22), and poultry (factor 1.74). The model was more sensitive to prevalence changes in humans, pigs, and poultry (factors 1.39, 1.14, and 1.08, respectively). The results confirmed that humans, pigs, and poultry were the dominant sources of oocyst and cyst emissions. Besides excretion rate and prevalence, the model was most sensitive to changes in the amount of runoff, STP oocyst and cyst removal efficiencies, the amount of connected emissions, human population, and manure storage time <ns0:ref type='bibr'>(factors 1.30, 1.23, 1.21, 1.16, and 1.11, respectively)</ns0:ref>, as these parameters affected oocyst and cyst survival and emissions. The model was not very sensitive to changes in the amount of rural resident feces applied as fertilizer, rural wastewater treatment, and the excretion rates and prevalence of animal species that did not contribute much to the total oocyst and cyst emissions (e.g., cattle, rabbits, sheep, and goats).</ns0:p></ns0:div>
<ns0:div><ns0:head>Scenario analysis: the effect of population, urbanization, and sanitation changes in 2050</ns0:head><ns0:p>In Scenario 1, moderate population change, planned urbanization, and strong improvements in sanitation, wastewater, and manure treatments will decrease the total emissions in the TGR to 9.5×10 14 oocysts/year and 1.2 ×10 15 cysts/year by 2050 (Fig. <ns0:ref type='figure'>7</ns0:ref>). This would reduce approximately 40% of the Cryptosporidium emissions and 44% of the Giardia emissions measured in 2013. All emissions from all three sources would decrease, with a notable 61% decrease for rural residents (Table <ns0:ref type='table'>S10</ns0:ref>). Figure <ns0:ref type='figure'>8B and E</ns0:ref> and Figure <ns0:ref type='figure'>9B and E</ns0:ref> show the decrease across all regions in Scenario 1. The largest decline would be found in the Yubei and Jiulongpo districts, where assumed urbanization rates would increase to 100% and 99%, respectively, and 99% of domestic sewers would obtain secondary or tertiary treatment. Scenario 1 also shows changes in the contributions to total emissions. Urban residents would be responsible for 64% and 81% of Cryptosporidium and Giardia emissions, respectively, which would be a 3% decrease and a 1% increase from 2013. Rural Cryptosporidium and Giardia emissions would decrease from 11% to 7% and 13% to 9%, respectively. Livestock Cryptosporidium and Giardia emissions would increase from 22% to 29% and 7% to 10%, respectively.</ns0:p><ns0:p>In Scenario 2, Cryptosporidium and Giardia emissions are expected to increase to 1.9 ×10 15 oocysts/year and 2.4 ×10 15 cysts/year by 2050 (Fig. <ns0:ref type='figure'>7</ns0:ref>), would be 19% and 12% growth, respectively, from 2013. Emissions from urban residents and livestock would increase (Table <ns0:ref type='table'>S10</ns0:ref>) due to strong population growth, unplanned urbanization, limited sanitation, and expanded livestock production practices where untreated manure used as fertilizer is emitted into surface water. Emissions from rural residents would decrease 8% because the rate of urbanization would increase while the same sanitation practices from 2013 are used by that smaller rural population. Figure <ns0:ref type='figure'>8C</ns0:ref> and F and Figure <ns0:ref type='figure'>9C</ns0:ref> and F show that total emissions would increase in all regions (particularly the Wanzhou and Yubei districts) because of strong population growth and limited environmental regulation. In Scenario 2, urban residents would account for 63% and 79%, rural residents would account for 9% and 11%, and livestock would account for 29% and 10% of Cryptosporidium and Giardia emissions, respectively, by 2050. Scenario 3 has the same population, urbanization, and sanitation changes as Scenario 1, but with limited wastewater and manure treatment facilities. Scenario 3 has the highest Cryptosporidium and Giardia emissions of all the scenarios. Total emissions would increase to 2.0 ×10 15 oocysts/year and 2.7 ×10 15 cysts/year, with 29% and 27% growth compared to 2013, respectively (Fig. <ns0:ref type='figure'>7</ns0:ref>). Livestock would see the most growth in emissions (an increase of 42%) (Table <ns0:ref type='table'>S10</ns0:ref>). 
Figure <ns0:ref type='figure'>8D</ns0:ref> and G and Figure <ns0:ref type='figure'>9D</ns0:ref> and G show an increase in emissions across all regions, except in regions with assumed urbanization rates of 100% and where 50% of emissions obtain secondary treatment (such as the Yuzhong and Shapingba districts). This result highlights the importance of wastewater and manure treatment. Connecting populations to sewers without appropriate sewage treatment introduces more waterborne pathogens to surface water, affecting water quality.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The increase in Cryptosporidium and Giardia surface water pollution in China is traced primarily to human and animal feces. China has one of the largest amounts of Cryptosporidium emissions from feces (10 16 oocysts/year; <ns0:ref type='bibr' target='#b26'>Hofstra et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b27'>Hofstra & Vermeulen, 2016;</ns0:ref><ns0:ref type='bibr' target='#b66'>Vermeulen et al., 2019)</ns0:ref>, but Cryptosporidium and Giardia emissions from human and animal feces in surface water across different Chinese provinces or regions have not been closely studied. The TGR, developed by the China Yangtze Three Gorges Project as one of the largest freshwater resources in the world, suffers from Cryptosporidium and Giardia pollution <ns0:ref type='bibr'>(Xiao et al., 2013ab, Liu et al., 2019)</ns0:ref>. Using data from 2013, we built a GloWPa-TGR-Crypto model to estimate Cryptosporidium spp. and G. duodenalis emissions from human and livestock in the TGR. We also used scenario analyses to predict the effects of sanitation, urbanization, and population changes on oocyst and cyst emissions for 2050. Our study can be used to better understand the risk of water contamination in the TGR and to ensure that the reservoir is adequately protected and treated. This knowledge can also contribute to the implementation of the Water Pollution Control Action Plan (i.e., the Ten-point Water Plan), which was sanctioned by the Chinese government to prevent and control water pollution <ns0:ref type='bibr' target='#b71'>(Wu et al., 2016)</ns0:ref>. Additionally, our results can serve as an example for other studies on important waterborne pathogens from fecal wastes and wastewater, particularly in developing countries.</ns0:p><ns0:p>Using the GloWPa-TGR-Crypto model, we estimated that the total Cryptosporidium and Giardia emissions from human and livestock feces in Chongqing in 2013 were 1.6×10 15 oocysts/year and 2.1×10 15 cysts/year, respectively. Using the total emissions from the two protozoa, the TGR's hydrological information (such as water temperature, solar radiation, and river depth; Table <ns0:ref type='table'>S11</ns0:ref>), and the GloWPa-Crypto C1 model <ns0:ref type='bibr' target='#b66'>(Vermeulen et al., 2019)</ns0:ref>, we preliminarily calculated the mean Cryptosporidium and Giardia concentrations in the TGR in 2013 to be 22 oocysts/10 L and 28 cysts/10 L, respectively. <ns0:ref type='bibr' target='#b72'>Xiao et al. (2013a)</ns0:ref> reported that Cryptosporidium oocysts and Giardia cysts are widely distributed in the TGR, with concentrations ranging from 0 to 28.8 oocysts/10 L for Cryptosporidium and 0 to 32.13 cysts/10 L for Giardia in the Yangtze River's mainstream and the backwater areas of tributaries and cities. <ns0:ref type='bibr' target='#b44'>Liu et al. (2019)</ns0:ref> used a calibrated hydrological and sediment transport model to investigate the population, livestock, agriculture, and wastewater treatment plants in the Daning River watershed, a small tributary of the TGR in Chongqing, and found Cryptosporidium concentrations of 0.7-33.4 oocysts/10 L. The results from our model were similar to the results found in other studies. 
Because of the adsorption, deposition, inactivation, and recovery efficiencies of Cryptosporidium and Giardia in water, the oocyst and cyst concentrations in the surface waters of streams and rivers were significantly reduced <ns0:ref type='bibr' target='#b3'>(Antenucci, Brookes, & Hipsey, 2005;</ns0:ref><ns0:ref type='bibr' target='#b57'>Searcy et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b66'>Vermeulen et al., 2019)</ns0:ref>. Therefore, the validity of our model was confirmed.</ns0:p><ns0:p>Human and animal feces are the main sources of Cryptosporidium and Giardia emissions in surface water. In this study, the majority of human emissions were from densely populated urban areas (Fig. <ns0:ref type='figure'>6</ns0:ref>). In those urban areas, a fraction of the population was not connected to sewers and sewage was not efficiently treated. We found high concentrations of Cryptosporidium (6.01-16.3 oocysts/10 L) and Giardia (59.52-88.21 cysts/10 L) in the effluent from wastewater treatment plants in the TGR area <ns0:ref type='bibr'>(Xiao et al., 2013a, 2013b)</ns0:ref>, and 16.5×10^8 tons of sewage were discharged into the TGR, mainly in urban areas <ns0:ref type='bibr' target='#b72'>(Xiao et al., 2013a)</ns0:ref>. In rural areas, only 9% of the population was connected to sewage systems and a large portion of untreated rural sewage was used as potential agricultural irrigation water <ns0:ref type='bibr' target='#b28'>(Hou, Wang, & Zhao, 2012)</ns0:ref>. Therefore, we assumed that large amounts of raw sewage acted as a diffuse source, applied to farmland as fertilizer after storage, from which oocysts and cysts could then enter tributaries and the mainstream of the Yangtze River via runoff.</ns0:p><ns0:p>We found a lower amount of animal Cryptosporidium and Giardia emissions than human emissions because only 10% of diffuse emissions reached the TGR through runoff. Unlike the original model created by <ns0:ref type='bibr' target='#b26'>Hofstra et al. (2013)</ns0:ref>, which estimated livestock oocyst and cyst emissions in surface water, we assumed that a portion of the manure received treatment during storage and before it was applied to soil <ns0:ref type='bibr' target='#b2'>(An et al., 2017)</ns0:ref>. Recent studies also reported that oocyst emissions on land were associated with mesophilic or thermophilic anaerobic digestion during manure treatment, and could be reduced by several log units <ns0:ref type='bibr' target='#b31'>(Hutchison et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b64'>Vermeulen et al., 2017)</ns0:ref>. Additionally, animal emissions are still important. The total global Cryptosporidium spp. emissions from livestock manure are up to 3.2×10^23 oocysts/year <ns0:ref type='bibr' target='#b64'>(Vermeulen et al., 2017)</ns0:ref>. In 2010, China had a total of 1.9 billion tons of livestock manure, 227 million tons of livestock manure pollution, and livestock manure pollution of 1.84 tons per hectare of arable land <ns0:ref type='bibr' target='#b51'>(Qiu et al., 2013)</ns0:ref>. 
Livestock manure discharged into the environment without appropriate processing is a serious source of pollution in soil and water systems <ns0:ref type='bibr' target='#b62'>(Tian, 2012;</ns0:ref><ns0:ref type='bibr' target='#b2'>An et al., 2017)</ns0:ref>.</ns0:p><ns0:p>Our sensitivity analysis found that the model we used to calculate oocyst and cyst emissions was most sensitive to oocyst and cyst excretion rates, similar to the results from the GloWPa-Crypto L1 model <ns0:ref type='bibr' target='#b64'>(Vermeulen et al., 2017)</ns0:ref>. More detailed information on the toll of cryptosporidiosis and giardiasis and the excretion rates of infected people in Chongqing would improve the model. Additionally, our sensitivity analysis highlighted the significance of runoff. <ns0:ref type='bibr' target='#b44'>Liu et al. (2019)</ns0:ref> found that the combined effect of fertilization and runoff played a very important role in oocyst concentration in rivers. Future studies should consider the effect of runoff along with the timing of fertilization. The model was also sensitive to wastewater treatment and manure management. Scenario 3 proposed what would happen if population, urbanization, and sanitation changed similarly to Scenario 1, but without advancements in wastewater treatment and manure management. The results of Scenario 3 indicated that improving urbanization and sanitation with the same population could still result in an increase in surface water emissions if the sewage and manure management systems are inadequate. The analyses of Scenarios 1 and 2 showed a decrease in oocyst and cyst emissions when there were significant improvements in urbanization, sanitation, wastewater treatment, and manure management, along with appropriate population growth. The effects of population, urbanization, sanitation, manure management, and wastewater treatment on oocysts and cysts should be studied in more detail in order to reduce emissions.</ns0:p><ns0:p>Previous studies have used the GloWPa-Crypto model to estimate human and livestock Cryptosporidium emissions across many countries <ns0:ref type='bibr' target='#b26'>(Hofstra et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b27'>Hofstra & Vermeulen, 2016;</ns0:ref><ns0:ref type='bibr' target='#b64'>Vermeulen et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b66'>Vermeulen et al., 2019)</ns0:ref>, but none of these studies included Giardia. In our study, we used the GloWPa-TGR-Crypto model to estimate Cryptosporidium spp. and G. duodenalis emissions from humans and animals in the Chongqing area of the TGR. Unfortunately, the Giardia emissions from rabbits, sheep, and goats were not estimated because there is currently no data for their excretion rates. Earlier studies could not detect Giardia in rabbits, sheep, or goats <ns0:ref type='bibr' target='#b17'>(Ferguson et al., 2007)</ns0:ref>. 
Giardia was recently found in sheep and rabbits in northwest and central China, but the cyst excretion rates per kg of manure were indeterminate <ns0:ref type='bibr' target='#b67'>(Wang et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b37'>Jin et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b33'>Jian et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b36'>Jiang et al., 2018)</ns0:ref> and may have been underestimated.</ns0:p><ns0:p>To our knowledge, the GloWPa-TGR-Crypto model cannot be validated through the direct comparison of measured surface water values because this method ignores certain factors, such as the infiltration pathways and transport via soils and shallow groundwater to surface water <ns0:ref type='bibr' target='#b7'>(Bogena et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b64'>Vermeulen et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b69'>Watson et al., 2018)</ns0:ref>, the overflow of sewage treatment plants during the flood period <ns0:ref type='bibr' target='#b75'>(Xiao et al., 2017)</ns0:ref>, traditional dispersive small-scale peasant production <ns0:ref type='bibr' target='#b39'>(Li et al., 2016)</ns0:ref>, and the excretion of wildlife <ns0:ref type='bibr' target='#b4'>(Atwill, Phillips, & Rulofson, 2003)</ns0:ref>. The pathogen loading data of these factors are not readily available. Despite its few shortcomings, we used the GloWPa-TGR-Crypto model to further study environmental pathways, emissions in the TGR, and sources and scenarios for improved management.</ns0:p></ns0:div>
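As an aside on the sensitivity analysis discussed above, the one-at-a-time screening idea can be illustrated with a short sketch. The toy emission model, parameter names and values below are invented purely for illustration; they are not the GloWPa-TGR-Crypto equations or the Table S11 parameters.

# Hedged sketch of a one-at-a-time sensitivity screen: perturb each input by
# +10% and record the relative change in the output. The toy 'model' and all
# values below are illustrative only, not the GloWPa-TGR-Crypto equations.

def emissions(p):
    # Toy model: infected people x excretion rate x fraction surviving treatment.
    surviving_fraction = 1.0 - p["treatment_efficiency"] * p["treated_fraction"]
    return p["population"] * p["prevalence"] * p["excretion_rate"] * surviving_fraction

base = {
    "population": 3.0e7,          # people
    "prevalence": 0.02,           # fraction infected
    "excretion_rate": 1.0e8,      # oocysts per infected person per year
    "treatment_efficiency": 0.9,  # removal in treated sewage
    "treated_fraction": 0.7,      # fraction of sewage treated
}
base_output = emissions(base)

for name in base:
    perturbed = dict(base, **{name: base[name] * 1.10})   # +10% perturbation
    change = (emissions(perturbed) - base_output) / base_output
    print(f"{name:20s} {change:+.1%}")

In this toy example the multiplicative inputs shift the output by the full +10%, while the treatment-related inputs produce a larger (negative) response, which is the kind of ranking a one-at-a-time screen is meant to expose.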
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This study is the first to explore Cryptosporidium spp. and G. duodenalis spatial emissions from human and livestock feces in the TGR, and to identify the main sources of this pollution. Total emissions in Chongqing were large (1.6×10^15 oocysts/year and 2.1×10^15 cysts/year), indicating the need for effective pollution countermeasures. The total point source emissions from wastewater containing human excretion in urban areas were greater than the total nonpoint source emissions from human and livestock production in rural areas by a factor of 2.0 for oocysts and 3.9 for cysts. The emissions from urban areas were mainly from domestic wastewater in densely populated areas, while rural emissions were mainly from livestock feces in concentrated animal production areas. Sewage from cities and livestock feces from rural areas are therefore of particular concern in the TGR area.</ns0:p><ns0:p>The GloWPa-TGR-Crypto model was most sensitive to oocyst and cyst excretion rates, followed by prevalence and runoff. If there are significant population, urbanization, and sanitation management changes by 2050, the total Cryptosporidium and Giardia emissions in the TGR will decrease by 42% according to Scenario 1, increase by 15% in Scenario 2, or increase by 28% in Scenario 3. Our scenario analyses show that changes in population, urbanization, sanitation, wastewater management, and manure treatment should be taken into account when trying to improve water quality. The GloWPa-TGR-Crypto model can be further refined by including direct rural resident emissions, direct animal emissions, emissions from sub-surface runoff, and a more in-depth calculation of concentrations and human health risks using a hydrological model and scenario analysis. Our model can contribute to further understanding of environmental pathways, the risks of Cryptosporidium and Giardia pollution, and the design of effective prevention and control strategies that can reduce outbreaks of waterborne diseases in the TGR and other similar watersheds.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 Total</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5</ns0:figDesc></ns0:figure>
</ns0:body>
" | "Dear editor and reviewers:
We are very pleased to have the opportunity to make these minor revisions, and we thank the editor and the reviewer for their comments. We have carefully revised our manuscript according to the advice received from the editor and the reviewer. We hope that the revised manuscript will be accepted. Our responses to each comment are listed below.
Responses to the comment of editor
Please also seriously consider the reviewer’s comment about your model name.
Responses: The model name has been revised to “GloWPa-TGR-Crypto” in the revised manuscript.
Responses to the comments of reviewer Lucie Vermeulen
I think you should not just refer to your model as ‘the GloWPa-Crypto model’ (as is done in the abstract), as this is the name of the original global scale Cryptosporidium model. You should make clear your model is an application/adaptation of the GloWPa-Crypto model, not suggest it is the same, as you apply it locally and not only for Crypto. Perhaps a derived name, such as ‘GloWPa-TGR-Crypto’. Or your own name with a clear reference, like ‘our TGR-Crypto-Giardia model is based on the GloWPa-Crypto model [ref]’.
Responses: Thank you for the good advice. The model name has been revised to “GloWPa-TGR-Crypto” throughout the revised manuscript.
You now mention in your discussion that you do a calculation using a hydrological model (lines 416-420). All calculations should be detailed in the methods section and reported in the results section, not in the discussion!! It is unclear now how you did this calculation, and what hydrological information you used. Applying a hydrological model is not something you can mention in one line in the discussion, this is a whole section of your paper at the very least if you actually do these calculations.
Responses: We preliminarily calculated mean Cryptosporidium and Giardia concentrations in the TGR in 2013 using the GloWPa-Crypto C1 model (Vermeulen et al., 2019). Following your advice, all calculations of mean concentrations have been detailed in the methods section and reported in the results section. Please see lines 244, 254-264, 300 and 310-312 in the revised manuscript. All equations and parameters used in this study are shown in Table S11 of the revised manuscript.
Vermeulen et al. is not the original reference for a hydrological model.
Responses: All equations and parameters used in this study are shown in Table S11. Please see Table S11.
Furthermore, this hydrological model is on a 0.5 degree grid scale, while you calculate on a district level. Unclear how you would combine this with grid-based hydrological modelling.
Responses: Based on the total emissions from the two protozoa in Chongqing and the TGR’s hydrological information, we used the hydrological model to calculate mean concentrations of Cryptosporidium oocysts and Giardia cysts for the Chongqing section of the TGR as a whole, rather than on a grid-based scale or per district or county. Mean concentrations were preliminarily calculated as C = Et / Qs, where Et is the sum of total oocyst and cyst emissions from humans and animals in all districts or counties in Chongqing, and Qs is the sum of the TGR’s annual inflow and storage capacity in Chongqing. All equations and parameters used in this study are shown in Table S11. Please see lines 257-264 in the revised manuscript and Table S11.
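For illustration only, this calculation amounts to dividing an annual (oo)cyst load by an annual water volume. The short sketch below uses the emission totals reported in the manuscript together with a placeholder value for Qs; the actual Qs and the in-reservoir decay correction come from Table S11 and are not reproduced here.

# Minimal sketch of the mean-concentration step C = Et / Qs.
# Emission totals are taken from the manuscript; the water volume below is a
# placeholder for illustration, NOT the Table S11 value, and decay in the
# reservoir is ignored.

CRYPTO_ET = 1.6e15    # total Cryptosporidium emissions, oocysts per year
GIARDIA_ET = 2.1e15   # total Giardia emissions, cysts per year

QS_LITRES = 5.0e14    # annual inflow + storage capacity, litres (placeholder)

def mean_concentration_per_10L(annual_load, volume_litres):
    # Mean concentration expressed per 10 L of water, as in the manuscript.
    return annual_load / volume_litres * 10.0

print(f"Cryptosporidium: {mean_concentration_per_10L(CRYPTO_ET, QS_LITRES):.0f} oocysts/10 L")
print(f"Giardia:         {mean_concentration_per_10L(GIARDIA_ET, QS_LITRES):.0f} cysts/10 L")

Because the volume is a placeholder and decay is ignored, the printed values will not reproduce the 22 oocysts/10 L and 28 cysts/10 L reported in the manuscript; the sketch only shows the structure of the calculation.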
Furthermore, table S11 suggests you also model decay in the surface water according to Vermeulen et al., if this is a calculation you do, it should also be detailed in the methods and results section.
Responses: Thank you. The surface-water decay calculation has been detailed in the methods section and reported in the results section. Please see lines 244, 254-264, 300 and 310-312 in the revised manuscript and Table S11.
I still do not understand what the addition of breeding days in equation 3 means. If you already have the number of animals, their manure production and the amount of (oo)cysts in the manure, then what does it add? What exactly does the number ‘breeding days’ represent? Is that the number of days between litters of offspring? The gestation time? The number of days young live before slaughter? Should be explained.
Responses: The number of ‘breeding days’ represents the number of days livestock live before slaughter. Each livestock species has a unique number of breeding days before slaughter and therefore produces a different amount of manure and (oo)cyst excretion each year. Please see lines 184-186 in the revised manuscript.
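To make the role of breeding days concrete, a hedged sketch of an annual (oo)cyst budget for a single livestock class is given below. The function, parameter names and numbers are invented for illustration and are not the Equation 3 terms or Table S11 values from the manuscript.

# Hedged sketch: how 'breeding days' (the days an animal lives before
# slaughter) can enter an annual oocyst budget. All numbers and the exact
# formula are illustrative; they are NOT Equation 3 from the manuscript.

def annual_oocyst_emission(animals_per_year, breeding_days,
                           manure_kg_per_day, oocysts_per_kg, prevalence):
    # Each animal excretes manure only for the days it is alive before
    # slaughter, so its lifetime manure output scales with breeding_days.
    manure_per_animal = manure_kg_per_day * breeding_days
    oocysts_per_infected_animal = manure_per_animal * oocysts_per_kg
    return animals_per_year * prevalence * oocysts_per_infected_animal

# Example: a class slaughtered after 100 days versus one kept for 300 days.
print(f"{annual_oocyst_emission(1e6, 100, 0.1, 1e4, 0.2):.2e}")
print(f"{annual_oocyst_emission(1e6, 300, 0.1, 1e4, 0.2):.2e}")

The comparison shows why two livestock classes with the same head count can contribute very different annual loads once their breeding days differ.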
We hope that the revised manuscript will be accepted.
Best,
Guosheng Xiao
20-8-2020
" | Here is a paper. Please give your review comments after reading it. |
657 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The cryptic species that make up the Euwallacea fornicatus species complex can be readily distinguished via their DNA sequences. Until recently, it was believed that the Hawaiian Islands had been invaded by only one of these cryptic species, E. perbrevis (tea shot hole borer; TSHB). However, following the 2016 deposition of a DNA sequence in the public repository GenBank, it became evident that another species, E. fornicatus (polyphagous shot hole borer; PSHB), had been detected in macadamia orchards on Hawaiʻi Island (the Big Island). We surveyed the two most-populous islands of Hawaiʻi, Big Island and Oʻahu, and herein confirm that populations of TSHB and PSHB are established on both. Beetles were collected using a variety of techniques in macadamia orchards and natural areas. Individual specimens were identified to species using a high-resolution melt assay, described herein and validated by subsequent sequencing of specimens. It remains unclear how long each species has been present in the state, and while neither is currently recognized as causing serious economic or ecological damage in Hawaiʻi, the similarity of the newly-confirmed PSHB population to other damaging invasive PSHB populations around the world is discussed. Although the invasive PSHB populations in Hawaiʻi and California likely have different geographic origins within the beetle's native range, they share identical Fusarium and Graphium fungal symbionts, neither of which have been isolated from PSHB in that native range.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Species of the Euwallacea fornicatus complex attracted attention following their invasion and establishment in California and Florida. At the time of these invasions, the complex was thought to be a single species <ns0:ref type='bibr' target='#b34'>(Wood & Bright, 1992)</ns0:ref>, but after their emergence as significant pests in agricultural and natural ecosystems in the respective states, the invasive beetles were shown to be different species, and moreover, E. fornicatus s. l. was unveiled as a complex of at least four cryptic species <ns0:ref type='bibr' target='#b28'>(Stouthamer et al., 2017)</ns0:ref>. The earliest records of this species complex in the 48 contiguous states stem from collections made in 2003 (California) and 2004 (Florida) <ns0:ref type='bibr' target='#b20'>(Rabaglia et al., 2006)</ns0:ref>. However, the island state of Hawaiʻi was invaded much earlier, with collections of Xyleborus (= Euwallacea) fornicatus existing from the early part of the 20th century. The earliest confirmed collections are from avocado on Oʻahu in 1910 <ns0:ref type='bibr' target='#b29'>(Swezey, 1941)</ns0:ref>, but the author also states that it was known from avocado for many years prior to this. The presence of E. fornicatus was subsequently confirmed on the Big Island (Hawaiʻi) <ns0:ref type='bibr'>(1919)</ns0:ref>, <ns0:ref type='bibr'>Maui (1930), and</ns0:ref><ns0:ref type='bibr'>Molokaʻi (1936)</ns0:ref> <ns0:ref type='bibr' target='#b29'>(Swezey, 1941;</ns0:ref><ns0:ref type='bibr' target='#b24'>Schedl, 1941)</ns0:ref>. <ns0:ref type='bibr' target='#b23'>Samuelson (1981)</ns0:ref> later added Kauaʻi to the list but without a date. Thus, it appears that the beetles invaded the state sometime before 1910 and have since spread to all the islands. In addition to these three U.S. states, invasive populations of beetles morphologically identified as E. fornicatus have successfully established in many other places outside of their native range in Asia. They have been reported as invasive in the following locations (CABI, 2020): Australia, Papua New Guinea, Vanuatu, Fiji, Solomon Islands, Micronesia, Samoa, Niue, Hawaiʻi, Comoros, Madagascar, Reunion, South Africa, Israel, Costa Rica, Guatemala, Mexico, Panama, and the continental USA. Exactly how CABI determined the invasive status of these beetles is not clear, and both Australia and Papua New Guinea may in fact prove to be in the native range of this species complex <ns0:ref type='bibr' target='#b28'>(Stouthamer et al., 2017)</ns0:ref>. Like other ambrosia beetles, members of the E. fornicatus species complex are particularly well-equipped to invade new geographical areas. Female beetles excavate individual tunnels (galleries) inside branches and trunks of trees. Inside the gallery, the female inoculates the walls with symbiotic fungi and lays her eggs. The fungi grow, extracting nutrients from the plant, and these fungi provide the sole food source for the mother and her developing brood. In this state, the beetles can survive long distance transport very well. On reaching adulthood, female offspring leave the natal gallery, taking with them spores of the symbiotic fungi stored inside special organs called mycangia. The invasive potential of ambrosia beetles is further boosted by their sex determination mechanism and mating system. 
Like bees and ants, these beetles are haplo-diploid; males develop from unfertilized haploid eggs and females from fertilized diploid eggs <ns0:ref type='bibr' target='#b16'>(Kirkendall, 1993)</ns0:ref>. Their mating system is an example of local mate competition, where mothers produce many daughters and only a few sons. Thus, daughters mate with a brother (sib-mating) inside the natal gallery, and upon leaving, are already inseminated prior to dispersal through the environment. In the E. fornicatus species complex, dispersal can be through flight or simply by creating new galleries on the trees where they were born <ns0:ref type='bibr' target='#b1'>(Calnaido, 1965)</ns0:ref>. Therefore, in contrast to many other species where colonization of a new environment may be constrained by the need to meet members of the opposite sex, the population growth rate of this complex is not limited by lack of mates.</ns0:p><ns0:p>As already mentioned, the species morphologically recognized as E. fornicatus has recently been shown to consist of several cryptic species. Confirmation of these species was based on the discovery of substantial differences in the DNA sequences of multiple genes among a worldwide sample of populations <ns0:ref type='bibr' target='#b28'>(Stouthamer et al., 2017)</ns0:ref>. Four DNA lineages were recognized that were initially given the common names of tea shot hole borer (TSHB) 1A and 1B, polyphagous shot hole borer (PSHB) and Kuroshio shot hole borer (KSHB). These different species could be easily recognized by the DNA sequence of the mitochondrial cytochrome oxidase 1 (COI) gene. In the same study, <ns0:ref type='bibr' target='#b28'>Stouthamer et al. (2017)</ns0:ref> found that the populations they sampled from the Big Island and Maui were genetically identical, belonging to the TSHB-1B lineage. They were also identical to invasive populations in Florida, but differed from those in California (identified as PSHB and KSHB). Thus, TSHB was thought to be the only species of the E. fornicatus species complex to have invaded Hawaiʻi <ns0:ref type='bibr' target='#b28'>(Stouthamer et al., 2017)</ns0:ref>. Recent attempts to associate existing junior synonyms with these species <ns0:ref type='bibr' target='#b15'>(Gomez et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b25'>Smith et al., 2019)</ns0:ref> have resulted in the current association of the scientific name E. perbrevis with this species <ns0:ref type='bibr' target='#b25'>(Smith et al., 2019)</ns0:ref>. Following the publication of the Stouthamer et al. ( <ns0:ref type='formula'>2017</ns0:ref>) study, we discovered that in September 2016, a conflicting COI sequence was belatedly deposited in GenBank, which originated from two beetles collected from macadamia trees on the Big Island. The COI sequence identified these beetles as E. fornicatus <ns0:ref type='bibr' target='#b25'>(Smith et al., 2019)</ns0:ref> (or PSHB as we choose to refer to it), and they were collected in 2007 by Australian scientists studying the pests attacking macadamia trees in Hawaiʻi <ns0:ref type='bibr' target='#b18'>(Mitchell & Maddox, 2010)</ns0:ref>. For simplicity, hereafter we refer to E. perbrevis and E. fornicatus as TSHB and PSHB, respectively. We determine if PSHB is established on the Big Island, and also if it is present on Oʻahu, and identify fungal species associated with these beetles in Hawaiʻi.</ns0:p></ns0:div>
<ns0:div><ns0:head>Material & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Specimen collection</ns0:head><ns0:p>Specimens were collected in natural areas under permissions granted to CG by the United States Department of the Interior National Park Service (permit # HAVO-2019-SCI-0025), and The State of Hawaii Department of Land and Natural Resources (Endorsement No: I1393). Nathan Trump, General Manager, Island Harvest Inc., provided written permission to collect specimens on their property, and collections at the Pahala site were made under the auspices of a longstanding verbal permission historically granted by Randy Cabral and Randy Mochizuki, area managers, Mauna Loa Macadamia Nut Corp.</ns0:p><ns0:p>Three different methods were employed to collect beetles. The first method involved the use of Ricinus communis (castor bean) 'trap' logs. Ricinus communis logs (diameter 7-15 cm) were cut to a length of 30-35 cm, and both cut ends were dipped into paraffin wax to reduce the drying out of the logs. A quercivorol lure (ChemTica International S.A., Costa Rica), a known attractant of the beetles <ns0:ref type='bibr' target='#b2'>(Carrillo et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b6'>Dodge et al., 2017)</ns0:ref>, was attached to a bundle of six logs (to attract beetles) and this bundle was then hung in the field. The logs were left for 8 weeks to allow ample opportunity for foundresses to locate them and initiate their galleries. The logs were then retrieved and placed in laboratory cages. Beetles were collected daily as they emerged from the galleries in the logs. These logs were deployed under Leucaena trees at the Waimānalo Research Station, and in a mature castor bean grove at Maunawili (Table <ns0:ref type='table'>1</ns0:ref>). These sites were about 9.5 km apart on the eastern side of Oʻahu. Trapping took place from the beginning of June 2018 until the end of September 2019. Beetles were also captured using Lindgren funnel traps. In the Koʻolau Mountains (Oʻahu) and the Hilo Forest Reserve (Big Island), the traps were 'baited' with approximately 150 mL of an ethanol-methanol solvent lure containing between 40-50% ethanol and 50-55% methanol (Klean-Strip® Denatured Alcohol, W.M. Barr & Co. Inc., Memphis, TN, USA), and 50 mL of commercial anti-freeze car coolant containing ethylene glycol (Table <ns0:ref type='table'>1</ns0:ref>). At the Waiākea Research Station and around Pahala, in the Kaʻū district of the Big Island, Lindgren funnel traps baited with quercivorol lures were deployed in macadamia orchards (Table <ns0:ref type='table'>1</ns0:ref>). Finally, in several areas, beetles were also extracted directly from infested branches of a variety of different host plants including macadamia, hau (Hibiscus tiliaceus), monkeypod (Samanea saman), and, in the Waiʻanae mountains of Oʻahu, the endemic Planchonella sandwicensis (Table <ns0:ref type='table'>1</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Identification of specimens</ns0:head><ns0:p>Two methods were used to determine the identity of the beetles. Initial identifications were made using a high-resolution melt (HRM) assay similar to that described by Rugman-Jones & Stouthamer (2017). Consistent, species-specific differences have been reported between TSHB and PSHB (and KSHB) in the DNA sequences of the 28S ribosomal subunit <ns0:ref type='bibr' target='#b28'>(Stouthamer et al., 2017)</ns0:ref>. Thus, PCR primers were designed for a short fragment of DNA spanning a particularly variable region of 28S (GenBank accessions MT8822790-792; Figure <ns0:ref type='figure' target='#fig_0'>1 A</ns0:ref>). This resulted in the primer pair, P-K-Tfor (5'-CGATCTCTGGCGACTGTTG-3') and P-K-Trev (5'-GGTCCTGAAAGTACCCAAAGC-3'), which yielded diagnostic melt curves for TSHB, PSHB, and KSHB (Figure <ns0:ref type='figure' target='#fig_0'>1 B</ns0:ref>). DNA was extracted from individual beetles using the simple, nondestructive HotSHOT method <ns0:ref type='bibr' target='#b31'>(Truett et al. 2000)</ns0:ref>, resulting in a final volume of 200 μL. The HRM utilized a Rotor-Gene Q 2-Plex qPCR machine (QIAGEN) and reactions were performed in 20 μL volumes containing 1x HOT FIREPol® EvaGreen® HRM Mix (Mango Biotechnology, Mountain View, CA, USA), 0.2 μM each primer, and 2 μL of DNA template. After an initial denaturing step of 95°C for 15 min (required to activate the HOT FIREPol® DNA Polymerase), amplification was achieved via 40 cycles of 95°C for 20 s, 57°C for 30 s and 72°C for 30 s. Immediately following amplification, a melt analysis was conducted. PCR products were held at 77°C for 90 s and then heated in 0.1°C increments to a final temperature of 92°C. Reactions were held for 2 s at each temperature increment before fluorescence was measured. Duplicate reactions were run for each specimen and positive controls for each of the three species were included in each run, as were 'no-template controls'. Based on the outcome of the HRM assays, the DNA of a subset of twenty specimens was sequenced to confirm its HRM diagnosis, and thereby validate the HRM assay. The COI gene was amplified from the HotSHOT-extracted DNA using the primers LCO1490 and HCO2198 'barcoding' primers <ns0:ref type='bibr' target='#b12'>(Folmer et al., 1994)</ns0:ref> following <ns0:ref type='bibr' target='#b28'>Stouthamer et al. (2017)</ns0:ref>. Purified amplicons were direct-sequenced in both directions at the Institute for Integrative Genome Biology, UCR.</ns0:p><ns0:p>In an attempt to identify the potential native origin of the Hawaiian PSHB population (see Results) its COI sequence (haplotype) was compared with those of native PSHB populations surveyed in previous studies <ns0:ref type='bibr' target='#b28'>(Stouthamer et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b15'>Gomez et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b25'>Smith et al., 2019)</ns0:ref>. The respective sequences were retrieved from GenBank, combined with the Hawaiian sequences and collapsed into haplotypes using DnaSP version 5.10.01 <ns0:ref type='bibr' target='#b17'>(Librado & Rozas, 2009)</ns0:ref>. The H8 haplotype of E. perbrevis <ns0:ref type='bibr' target='#b28'>(Stouthamer et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b25'>Smith et al., 2019)</ns0:ref> was added to root the analysis, and the entire dataset was trimmed to 567bp. 
Genealogical relationships among the haplotypes were investigated by conducting a maximum likelihood (ML) analysis in RAxML version 8.2.10 (Stamatakis, 2014) using the RAXMLGUI v. 2.0.0.-beta6 <ns0:ref type='bibr'>(Edler et al., 2019)</ns0:ref>. The program jModeltest 2.1.4 <ns0:ref type='bibr'>(Darriba et al. 2012</ns0:ref>) was used to identify GTR + Γ + I as the best-fit model of nucleotide substitution. The dataset was partitioned by third codon position and node support was assessed with 1,000 rapid bootstrap replicates. The resulting tree was redrawn using FigTree v.1.4.3 (http://tree.bio.ed.ac.uk/software/figtree/).</ns0:p></ns0:div>
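As an illustration of the HRM diagnosis described at the start of this section, the species call can be thought of, in simplified form, as matching each specimen's melt-peak temperature to the closest positive control run alongside it (real HRM software compares whole normalized curves, not a single temperature). The temperatures and threshold below are invented placeholders, not values from our assay.

# Simplified sketch of calling a species from an HRM melt-peak temperature by
# nearest positive control. All temperatures and the threshold are invented
# placeholders, not measured values from this assay.

REFERENCE_TM = {   # melt-peak temperature (deg C) of each positive control
    "TSHB": 84.2,
    "PSHB": 85.1,
    "KSHB": 83.5,
}

def call_species(sample_tm, references=REFERENCE_TM, max_diff=0.3):
    # Pick the control with the closest Tm; reject the call if none is close.
    species, ref_tm = min(references.items(), key=lambda kv: abs(kv[1] - sample_tm))
    return species if abs(ref_tm - sample_tm) <= max_diff else None

for tm in (85.0, 84.3, 82.0):
    print(tm, "->", call_species(tm))

The last case returns no call, mirroring how a specimen whose curve matches none of the controls would be flagged for sequencing rather than assigned a species.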
<ns0:div><ns0:head>Identification of fungal species isolated from PSHB specimens</ns0:head><ns0:p>Using a method similar to that described by Lynch et al. ( <ns0:ref type='formula'>2016</ns0:ref>), fungal species associated with PSHB were isolated from the heads of female beetles collected alive from funnel traps in macadamia orchards around Pahala, Big Island. Beetles were surface sterilized by submerging in 70% ethanol and vortexing for 20 s. They were then rinsed with sterile de-ionized water and allowed to dry on sterile filter paper. Individual beetles were decapitated under a dissection microscope, and the head (containing the mycangia) was macerated in a 1.5 mL microcentrifuge tube using a sterile plastic pestle. Each macerated head was suspended in 1 mL of sterile water and 25 μL of this suspension was pipetted onto a Petri plate containing potato dextrose agar (PDA; BD Difco, Sparks, MD) amended with 0.01% (w/v) tetracycline hydrochloride (PDA-t) and spread using sterile glass L-shaped rods. Plates were incubated for 48-72 h at 25°C and single spore fungal colonies with unique morphologies were sub-cultured and shipped to the Eskalen lab (UC Davis) for molecular identification. The remaining abdomen/thorax segments were shipped to the Stouthamer lab (UC Riverside) for molecular identification of the beetle. DNA was extracted from the fungal isolates and sequenced following protocols detailed by <ns0:ref type='bibr' target='#b3'>Carrillo et al. (2019)</ns0:ref>, in which the PCR primers ITS4 and ITS5 <ns0:ref type='bibr' target='#b33'>(White et al., 1990)</ns0:ref> were used to amplify the ITS1-5.8S-ITS2 region of the fungal ribosome. Beetles were extracted and sequenced as described above.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>A total of 145 beetles were examined in this study. Of these, the HRM assay identified 38 as TSHB and 107 as PSHB (Table <ns0:ref type='table'>1</ns0:ref>). COI sequences of a subset of twenty of these specimens (12 x TSHB and 8 x PSHB) confirmed their HRM diagnosis, providing validation for the remaining 125 diagnoses. The COI sequence of the TSHB individuals was identical to that of all earlier TSHB specimens from Hawaiʻi, matching the H8 haplotype from <ns0:ref type='bibr' target='#b28'>Stouthamer et al. (2017;</ns0:ref><ns0:ref type='bibr' /> GenBank accession KU726996). Similarly, the PSHB sequences were also all identical to the sequence belatedly deposited in GenBank for specimens collected from macadamia on the Big Island in 2007 (Mitchell & Maddox, 2010; GenBank accession KX818247). Both species were found on the two islands surveyed; the Big Island and Oʻahu (Table <ns0:ref type='table'>1</ns0:ref>). PSHB appeared to be particularly abundant on the Big Island, accounting for 80 of the specimens trapped in Lindgren traps placed in macadamia orchards as opposed to only 15 TSHB. A further 9 TSHB and 1 PSHB were extracted from dead macadamia branches. Only two specimens were trapped in the Hilo Forest Reserve both of which were PSHB. The relative abundance of the two species was slightly more balanced in our Oʻahu surveys with a total of 24 PSHB and 14 TSHB (63% and 37%, respectively). The Hawaiian PSHB haplotype did not match any from the native range but in the ML analysis it grouped with haplotypes from Vietnam, Thailand, and China (Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>).</ns0:p><ns0:p>Fungi were identified from the heads of four individual specimens from the Big Island. Sequences of the COI confirmed these specimens as PSHB and both Fusarium euwallacea and Graphium euwallacea were identified. Both fungi were successfully cultured from two of the specimens and of the remaining specimens, only G. euwallacea was successfully cultured from one, and only F. euwallacea was cultured from the other. The DNA sequences obtained for the ITS1-5.8S-ITS2 region of these fungi were identical (100%) to those obtained for the fungi associated with PSHB in California (GenBank accessions JQ723754 [F. euwallaceae] and KF540225 [G. euwallaceae]). In addition to these fungi, one specimen harbored a further Fusarium sp. previously associated with the decline of Indian coral tree, Erythrina variegata, on the island of Okinawa, Japan (GenBank accession LC198904).</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The islands of the US state of Hawaiʻi are particularly prone to invasion by exotic species due to their geography, climate, history, and economy. Indeed, it is thought that over half of Hawaii's free-living species are non-indigenous (US <ns0:ref type='bibr' target='#b32'>Congress, 1993)</ns0:ref>, and their numbers continue to rise. For example, a 1992 report documented the arrival of an average of 20 exotic invertebrate species each year from 1961 through 1991 (The Nature Conservancy of Hawaiʻi, 1992). Many of these species have had little noticeable effect in their new environment, but unfortunately a substantial proportion of adventive species have significantly impacted the ecology and economy of Hawaiʻi (The Nature Conservancy of <ns0:ref type='bibr'>Hawaiʻi, 1992</ns0:ref><ns0:ref type='bibr' target='#b27'>, State of Hawaiʻi, 2017)</ns0:ref>. The problems associated with detecting and accurately documenting invasive species are further complicated by the phenomenon of cryptic species: instances where genetically discrete species are erroneously classified as a single species because they are morphologically identical. This study provides the first confirmation that the Hawaiian Islands have been invaded not as previously thought by just one member of the Euwallacea fornicatus species complex, E. perbrevis (TSHB), but also by a second, E. fornicatus (PSHB).</ns0:p><ns0:p>Based on genetic characterization (HRM and sequencing) of beetles captured using a variety of different methods, it is clear that TSHB and PSHB occur on the Big Island and on Oʻahu. The co-occurrence of different cryptic species is not uncommon in this species complex <ns0:ref type='bibr' target='#b28'>(Stouthamer et al., 2017</ns0:ref><ns0:ref type='bibr' target='#b15'>, Gomez et al., 2018</ns0:ref><ns0:ref type='bibr' target='#b25'>, Smith et al., 2019)</ns0:ref>. For example, in Taiwan, at least three species occur in complete sympatry <ns0:ref type='bibr' target='#b3'>(Carrillo et al., 2019)</ns0:ref>. <ns0:ref type='bibr' target='#b28'>Stouthamer et al. (2017)</ns0:ref> previously confirmed the presence of TSHB on the Big Island and on Maui but during the process of publishing that study, a sequence was deposited in GenBank by another group of researchers, indicating that PSHB had been detected on the Big Island. Although the sequence was only deposited in September of 2016, it originated from two specimens captured in 2007 as part of a study investigating pests of macadamia <ns0:ref type='bibr' target='#b18'>(Mitchell & Maddox, 2010)</ns0:ref>. Just how long PSHB, or indeed TSHB, have been present in Hawaiʻi remains unknown, but the current study confirms that both are well-established. It would be interesting to examine the entomological collections of the Bishop Museum, Honolulu, to look for evidence of the arrival of both. Recent systematic studies <ns0:ref type='bibr' target='#b15'>(Gomez et al., 2018;</ns0:ref><ns0:ref type='bibr'>Smith et al., 2019show</ns0:ref> that TSHB and PSHB can be separated from each other based on certain morphometric measurements, so these collections may hold a historical record of these invasions. The authoritative public website www.barkbeetles.info identifies specimens collected on Oʻahu from Erythrina sp. in 1919 as E. perbrevis (TSHB), presumably based on morphometric measurements. Since our survey targeted only the two mostpopulous islands, it is unknown whether PSHB is also present on the other islands. 
The species complex has also been recorded on Kauaʻi, Maui, and Molokaʻi <ns0:ref type='bibr' target='#b29'>(Swezey, 1941;</ns0:ref><ns0:ref type='bibr' target='#b24'>Schedl, 1941;</ns0:ref><ns0:ref type='bibr' target='#b23'>Samuelson, 1981)</ns0:ref>. As mentioned, TSHB is known from Maui <ns0:ref type='bibr' target='#b28'>(Stouthamer et al., 2017)</ns0:ref> but it now seems possible that PSHB is also there. As for the other two islands, no beetles have been sequenced from either Kauaʻi or Molokaʻi, so the specific identity of those members of the E. fornicatus species complex remains a mystery.</ns0:p><ns0:p>The PSHB haplotype identified in this study (identical to KX818247) has not been identified from the native area of the species complex <ns0:ref type='bibr' target='#b28'>(Stouthamer et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b15'>Gomez et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b25'>Smith et al., 2019)</ns0:ref>. As such, it provides little information for identifying the area of origin of the Hawaiian invasion. The Hawaiian haplotype was most similar to the H27, H28, and H29 haplotypes of <ns0:ref type='bibr' target='#b28'>Stouthamer et al. (2017)</ns0:ref> which were identified from populations in Vietnam, northern Thailand, and China, respectively (Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>). This more or less encompasses the entire native range of PSHB, as we currently understand it, excepting the islands of Taiwan, Okinawa, and Hong Kong.</ns0:p><ns0:p>DNA sequences of the symbiotic fungi recovered from the Hawaiian beetles also provided little information about potential origin. The F. euwallaceae and G. euwallaceae sequences generated from Hawaiian PSHB have, as yet, never been recovered in the native area of the beetles <ns0:ref type='bibr' target='#b3'>(Carrillo et al., 2019)</ns0:ref>. However, the Hawaiian fungal sequences were identical to those of the fungi associated with the invasive PSHB populations in California <ns0:ref type='bibr' target='#b10'>(Eskalen et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b17'>Lynch et al., 2016)</ns0:ref> and Israel <ns0:ref type='bibr'>(Freeman et al., 2013)</ns0:ref>. This creates an interesting paradox. Invasive PSHB populations in California and Hawaiʻi likely have different origins within the beetle's native range, and yet share identical Fusarium and Graphium fungal symbionts, neither of which have been isolated from PSHB anywhere in its native range. Indeed, among invasive populations of the E. fornicatus species complex, only the Fusarium associated with KSHB in California, F. kuroshium, has been found in the native range in Taiwan, although to add further to the conundrum, in Taiwan it has only been isolated from PSHB, and not KSHB <ns0:ref type='bibr' target='#b3'>(Carrillo et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Whatever its origin, it currently appears that the Hawaiian haplotype of PSHB has only invaded the Hawaiian Islands, where economic or ecological damage have yet to be quantified. However, history suggests that we should perhaps not ignore its presence. PSHB was first detected in California as early as 2003 but was not recognized as a problem until 2012 <ns0:ref type='bibr' target='#b11'>(Eskalen et al., 2013)</ns0:ref>. 
One of the two haplotypes identified in California (H33; <ns0:ref type='bibr' target='#b28'>Stouthamer et al., 2017)</ns0:ref> has also successfully established in Israel and South Africa, where, like in California it is significantly impacting both agriculture and native ecosystems. Exactly why beetles with this particular haplotype are such successful and widespread invaders is unclear. Perhaps it is just evidence of a serial invasion, with global trade aiding the subsequent movement of one established invasive 'bridgehead' population to other areas. But it may also be linked to differences in the virility of different haplotypes and/or the symbiotic fungi they carry. The fungal species carried by the Hawaiian beetles are identical to those carried by H33 which have proven pathogenic to a multitude of tree species <ns0:ref type='bibr' target='#b11'>(Eskalen et al., 2013)</ns0:ref>. The impact on native Hawaiian vegetation is at this point minor, or unrecognized, but several notable endemic species are attacked including Acacia koa, Pipturus albidus, and Planchonella sandwicensis (Gillett pers.obs.). Other host plants known to be used by beetles belonging to the E. fornicatus complex in Hawaiʻi include, Albizia lebbek, Albizia moluccana, Aleurites moluccana, Artocarpus altilis, Citrus, Colvillea, Cucumis, Enterolobium cyclocarpum, Eugenia jambolana, Ficus, Leucaena, Litchi chinensis, Macadamia, Mangifera, Nothopanax guilfoylei, Persea gratissima, Ricinus communis, Samanea, Schinus molle, Spondias, Sterculia foetida, and Tamarindus <ns0:ref type='bibr' target='#b23'>(Samuelson, 1981)</ns0:ref>.</ns0:p><ns0:p>While this study confirms that two members of the E. fornicatus species complex, TSHB and PSHB, have successfully established on the Hawaiian Islands, the full geographic extent of the two species remains unknown, since our survey focused only on the Big Island and Oʻahu. Furthermore, we focused our efforts on particular crops (macadamia) and locales. In our captures, PSHB was more abundant than TSHB but this may not be an accurate reflection of the relative abundance of the two species across different habitats and islands. Detecting invasive species across a large and heterogeneous landscape presents difficult challenges and will require cooperation among many stakeholders. Without a monitoring program aimed specifically at the E. fornicatus species complex, relevant agencies might at least seek to collate any by-catch specimens from other programs, which match the morphological description of E. fornicatus. The diagnostic method developed herein, based on HRM, then provides an important tool allowing the quick, cheap, and accurate identification to species of three cryptic members of the E. fornicatus species complex. KSHB was not detected in the current sample, but may prove a good inclusion for any future survey work. This specific assay was based on a stretch of the 28S nuclear ribosomal gene that is typically well-conserved within a species. Unlike a previous assay that was based on the much more variable COI gene <ns0:ref type='bibr'>(Rugman-Jones & Stouthamer, 2017)</ns0:ref>, and indeed was developed to identify such intra-specific variation, the new assay is unlikely to be affected by 'unknown' intra-specific variation. Thus, it provides a more accurate means of species identification when 'going in blind' (i.e. working in a new habitat without in-depth knowledge of intra-specific variation). 
Following identification via the HRM assay, subsequent sequencing of the COI of 20 of our specimens, confirmed their identity and validated the HRM assay. ML reconstruction performed in RAxML using all haplotypes deposited in GenBank <ns0:ref type='bibr' target='#b28'>(Stouthamer et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b15'>Gomez et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b25'>Smith et al., 2019)</ns0:ref>. Branch support was assessed with 1,000 rapid bootstrap replicates. An asterisk denotes bootstrap support over 70%.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2</ns0:figDesc></ns0:figure>
</ns0:body>
" | "Dear Editor
We thank the reviewers for their generous comments on our work and have edited our manuscript to address their concerns.
In particular, we have added the phylogenetic analysis requested by reviewer 3 and have corrected our usage of E. fornicatus s. l. and E. fornicatus s. s. Our responses to the individual comments of each reviewer are listed below.
We believe that the manuscript is now suitable for publication in peerJ.
Yours faithfully,
Paul Rugman-Jones (on behalf of all the authors)
Response to reviewers:
Reviewer 1 was happy with our basic reporting, experimental design and the validity of our findings. In response to their comments, we have accepted the majority of grammatical and language changes as advised, and revised our text in response to four particular comments [line numbers given for the revised manuscript]:
L192,193,194: is the species name euwallceae rather than euwallcea?
Thank you for spotting this, the fungal species epithet has been changed to euwallaceae throughout.
L227: or indeed how long either species has been present. Is it not possible that the original records prior to 1910 could be PSHB?
“Yes”, it is possible. We have changed the text to reflect this [lines 240-248]
L240: delete “subsequently” and “additional”
Text altered to better reflect our meaning [lines 255-257]
L245: is this better expressed by replacing “With the exception of” by “Along with”?
Text altered to better reflect our meaning [lines 260-262]
Reviewer 2 (Sarah Smith) was also happy with our basic reporting, experimental design and the validity of our findings. In response to her comments we have revised the manuscript as follows [line numbers given for the revised manuscript]:
Line 230. These species are actually relatively easy to separate, especially when they are next to each other in a mixed series. See both Gomez et al. 2018 and Smith et al. 2019’s tables and note that the two species do not overlap in either size or length/width ratio. This is a fact and not a claim as it has been tested and these differences are statistically significant (see Gomez et al.). Rewrite.
We have replaced the words “have claimed” with “show” [line 244]
The authors improperly refer to Euwallacea fornicatus with usage of s.s. and s.l. throughout the manuscript. The four species in the complex are monophyletic lineages and associated with species names. They should be referred to as these names or 'species complex' instead. It is acceptable in the intro but not in the abstract, lines 27, 64, 80, 98-99, 102, 216, Figure 1, or Table 1.
We apologize for the confusion and have changed the text as suggested. We have retained a single usage of the term “E. fornicatus s. l.” in the Introduction [line 46]
Line 305: The word ‘cryptic’ is not really applicable to these two species as they can easily be separated with a micrometer.
We have deleted the word “cryptic” [line 320] although in our opinion, the complex, as a whole, remains a cryptic entity. Indeed, Gomez et al. 2018 are clear in their statement, “It is obvious, however, that morphology-based classification must be considered tentative, and should be followed by DNA-based identification. Because of the overlap of morphological characters, DNA sequence typing provides a more robust and more reliable method for assessing species identity”
Line 51. Correctly: Réunion.
Thank you – corrected [line 61]
With response to the reviewer’s remaining comment, we offer the following rebuttal:
Line 228-229. It is very unfortunate that no attempt was made to locate additional specimens of these species in either the Bishop Museum or in the UH Manoa collection (where one author on this manuscript is based). I understand that the Bishop may not be accessible at this point due to covid, but the UH collection should be easy for at least one author to check. The big Island and Oahu were surveyed as part of the Forest Service’s EDRR program back in 2009 (Rabaglia et al. 2019: https://academic.oup.com/ae/article/65/1/29/5376569; no need to cite this unless you find it necessary). The UH collection contains authoritatively identified specimens of both species from the 2009 which will help elucidate how widespread E. fornicatus was distributed in the Big Island around the time the Maddox specimen was sequenced. I identified these specimens in January 2020
We agree, and indeed, that is why we mention it in our Discussion. In a perfect world this would have been done. However, as the reviewer is no doubt well aware, science requires both time and money. Subject to attaining funding, an appraisal of these collections will be a priority in any future survey work. As it stands, we don’t believe that the omission of such a search of these collections detracts from our study, the specific aim of which, was to confirm the current presence of PSHB in Hawaii.
Reviewer 3 also suggested (in two separate comments) that we should have delved into the collections of the Bishop Museum and UH.
Lines 227 -229 – Why not check the Bishop and the UH collections for specimens of PSHB ? One author is located in Oahu – I don’t think it is unreasonable (even with COVID -19 precautions) to request a loan of specimens from Bishop and the UH collections.
Concerning my comments pertaining to Lines 227-232, I believe conducting morphometric analysis on suspect specimens of PSHB in the Bishop and UH collections would be greatly improved this paper it is an opportunity to discover the earliest record of this species in Hawai’i
We offer the same rebuttal - We agree, and indeed, that is why we mention it in our Discussion. In a perfect world this would have been done. However, as the reviewer is no doubt well aware, science requires both time and money. Subject to attaining funding, an appraisal of these collections will be a priority in any future survey work. As it stands, we don’t believe that the omission of such a search of these collections detracts from our study, the specific aim of which, was to confirm the current presence of PSHB in Hawaii.
Reviewer 3 also requested that we conduct a phylogenetic analysis including all published haplotypes of PSHB (not just those in Stouthamer et al. 2017) to better investigate the Hawaiian haplotypes relationship to haplotypes in the native range.
Lines 230 -242– I understand the authors point that the exact PSHB haplotype (identical to KX818247) has not been found in the native range. However, sequences similar to it may give some evidence to its origin. The authors suggest that KX818247 is close to H27-29 but there are more E. fornicatus sequences reported in Smith et al. 2019. Conducting a phylogenetic analysis of all available E. fornicatus sequences would better illustrate the relationship of KX818247 to the haplotypes discover in the native range of the species.
We had, in fact, already done that, but had not worded the manuscript very well, which admittedly suggests that we had only considered the haplotypes from Stouthamer et al. (2017). Originally, we did not want to burden the manuscript with the phylogenetic analysis and resulting tree, which is why we opted for “(data not shown)”. But, for completeness, we’re happy to include it and have added complete details of the analysis to the manuscript [lines 156-168, lines 203-204, and Figure 2].
In response to reviewer 3’s other comments:
Line 99 – “(according to Smith et al., 2019)” This is an odd phrase for a citation. Do the authors doubt the conclusions of this publication? If so more explanation is needed. If not, delete “according to”.
Deleted “according to” [line 99]
Line 216 – no need to include s.s. after E. fornicatus at this point in the paper. The use sensu stricto is reserved for when species boundaries are obscure. Gomez et al. 2018 and Smith et al. 2019 delimit the species (I read both papers so to be certain).
Corrected
Line 218 – “ both are found..” simplify to occur.
Simplified as suggested [line 232]
Lines 229- 232 – This sentence is awkwardly written and its meaning is unclear. 1. “Recent systematic studies (Gomez et al., 2018; Smith et al., 2019) have claimed that TSHB and PSHB…” What do the authors mean by “claimed”? This implies that the results/conclusions of the above two studies are suspect. If so this point would need some explanation. (Note: Upon reading these papers, I found the authors’ methodologies sound and conclusions reasonable. Their results clearly shows that the body sizes of these two species are distinct.) The authors do not use the morphometric analysis in this paper so the assertion that the conclusions of Gomez et al., 2018 and Smith et al., 2019 is baseless. 2. “… so these collections may have inadvertently captured a historical record of this invasion.” “Inadvertently” suggests that these entomology collections were made without purpose or haphazardly which is dismissive of their importance. I suggest this wording “…so these collections may hold a historical record of this invasion.”
We apologize for any confusion and have changed “have claimed” to “show” [line 244]. With regard to the second part of this comment, we have changed the wording as suggested [lines 245-246]. We have also included a new sentence immediately thereafter stating that E. perbrevis was confirmed (presumably based on morphometrics) on Oahu as early as 1919 [lines 246-248].
Review the literature cited – there are several typographical errors throughout
Checked and updated with new references relating to the phylogenetic analysis.
" | Here is a paper. Please give your review comments after reading it. |
659 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. Health-Related Quality of Life (HRQoL) for refugee women of reproductive age is highly affected by physical, political, psychosocial and environmental conditions in countries of asylum. HRQoL is enormously affected by the satisfaction of this vulnerable group with the physical, psychological, emotional and social care services provided in this critical time. Therefore, this study aimed to assess the HRQoL among Syrian refugee women of reproductive age living outside camps in Jordan. Methods. A cross-sectional correlational study was conducted with a convenience sample of 523 Syrian refugee women in the host communities in Jordan. Health-related quality of life (HRQoL) was measured using the short-form 36 (SF-36) questionnaire. Results. Significant negative correlations were found between individual SF-36 subscale scores and the length of marriage, the number of children, parity and family income. The strongest correlations were between the pain scale and length of marriage (r = -.21), and between Energy/Fatigue and the number of children (r = -.21). Conversely, antenatal care was positively correlated with physical, role emotional, pain, and general health. Physical functioning and general health were significantly predicted by fewer years of marriage, younger age at marriage, less violence and higher family income. Conclusion. This study suggests low HRQoL scores for women of reproductive age across all domains. Several factors such as years of marriage, age at marriage, the number of children, violence, antenatal care and family income affected the women's general health. The provision of appropriate and accessible reproductive and maternal healthcare services in antenatal visits is critical for ensuring the immediate and long-term health and wellbeing of refugee women and their families.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>The conflict in Syria started in 2011 and was declared by the United Nations as one of the world's worst humanitarian crises in the twenty-first century and a public health disaster <ns0:ref type='bibr' target='#b3'>(Baker, 2014)</ns0:ref>. Because of this crisis, an estimated 6.3 million Syrian refugees were forcibly removed from their homes to other countries. About 5.6 million of these refugees fled to neighboring countries, including Lebanon, Turkey, and Jordan (United Nations High Commissioner for Refugees (UNHCR), 2019).</ns0:p><ns0:p>Jordan hosts the second-highest number of refugees (87per 1,000 inhabitants) and is the sixth-highest refugee-hosting country in the world. At the time of this study (2018), 83% of Syrian refugees in Jordan live in urban areas, and only a smaller percentage of refugees were living in refugee camps concentrated mainly in the north of Jordan (UNHCR, 2018b).The largest proportion of Syrian refugees are living in four major cities, Amman: (34.42%), Irbid (27.14%), Mafrak (16.43%) and Zarka, (13.85%) <ns0:ref type='bibr'>(Higher Population Council, 2016)</ns0:ref>. With limited health care resources as a lower-middle income country, promoting and improving refugees' health is taxing to the existing health care system of Jordan. In fact, while Jordan had previously been categorized as an upper-middle income country, in 2017 Jordan was reclassified as a lowermiddle income country, in part because of a growing population including refugees <ns0:ref type='bibr'>(World Bank, 2017)</ns0:ref>.</ns0:p><ns0:p>The UNHCR figures of (2019) reported that women accounted for 49.9% of total Syrian refugee men and women residing out of camps in Jordan. Of this population (49.9%, 23.9% were women between the ages of 18-59 years old. In a world where people are forcibly displaced because of conflict or persecution, women and children are most vulnerable. Refugee women of reproductive age bear a disproportionate share of suffering and hardship due to displacement, war, and conflict situations requiring additional attention to maintain their physical, social and psychological wellbeing. Health issues such as mental health, reproductive health, communicable and non-communicable diseases are a priority for health care services for this population <ns0:ref type='bibr' target='#b9'>(Doocy et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b17'>Hollander et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b19'>Khawaja et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b28'>Nelson-Peterman et al., 2015)</ns0:ref>.</ns0:p><ns0:p>Several health issues arise when women escape conflicts and seek refuge in another country. Refugee women of reproductive age often encounter challenges in receiving family planning and perinatal care <ns0:ref type='bibr' target='#b27'>(Mortazavi et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b29'>Reese Masterson et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b34'>Sadat et al., 2013)</ns0:ref>. Lack of healthcare at this age may expose women to pregnancy and obstetric complications after birth. 
Several studies among Afghans, Somalis, Iraqis and Syrian refugees suggest that women face physical and mental health challenges <ns0:ref type='bibr' target='#b0'>(Alemi et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b4'>Bogic et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b11'>Gerritsen et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b12'>Ghumman et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b15'>Hassan et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b22'>Lillee et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b24'>Maximova & Krahn, 2010;</ns0:ref><ns0:ref type='bibr'>Taylor et al., 2014)</ns0:ref>. A study by <ns0:ref type='bibr' target='#b15'>Hassan et al. (2016)</ns0:ref> suggested that Syrian women experienced a wide range of mental health problems because of the war and immigration. In another study, Syrian refugee women had an increased risk of suffering from post-traumatic stress disorder (PTSD) when exposed to more than one trauma <ns0:ref type='bibr' target='#b1'>(Alpak et al., 2015)</ns0:ref>. <ns0:ref type='bibr' target='#b0'>Alemi et al. (2016)</ns0:ref> demonstrated that migrants suffered depression, anxiety, and symptoms of psychological distress .In addition to mental and psychological health, in a study of Syrian refugees in Lebanon, Reese <ns0:ref type='bibr' target='#b29'>Masterson et al. (2014)</ns0:ref> found that Syrian women's physical health was affected by several reproductive health problems such as menstrual irregularities, pelvic pain and infections.</ns0:p><ns0:p>Health-Related Quality of Life (HRQoL) is a broad concept covering physical, mental, emotional and social constructs that affect the HRQoL. It is based on a person's level of satisfaction with their physical condition, emotional state, and family and social life, and takes PeerJ reviewing PDF | (2020:05:48679:1:1:NEW 8 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed into account functional aspects in a person's life <ns0:ref type='bibr' target='#b13'>(Group, 1993;</ns0:ref><ns0:ref type='bibr' target='#b49'>Wilson & Cleary, 1995)</ns0:ref>. HRQoL is a major issue to be considered in a vulnerable group such as refugee women. Having extra stressors such as pregnancy and childbirth as an immigrant in a different physical, psychosocial and environmental condition puts them at higher risk for physical and mental problems.</ns0:p><ns0:p>Additionally, situations like immigration, aggravated by pregnancy and other reproductive needs, low level of education, separation from spouses and children, lack of health insurance and unemployment create serious challenges to maintain adequate healthcare services that promote health and HRQoL in extraordinary conditions. Accordingly, the purpose of this study was to assess health related quality of life of Syrian refugee women in the reproductive age within six months of giving birth to an infant as they live outside the refugee camps in Jordan. This will assist health care providers in planning interventions that enhance health services and guide policy makers in implementing services to promote the health of this vulnerable group.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods Design</ns0:head><ns0:p>A cross-sectional correlational survey was used in this study.</ns0:p></ns0:div>
<ns0:div><ns0:head>Sample and Setting</ns0:head><ns0:p>The population of this study comprises Syrian refugee women of reproductive age (14-49 years old) who had given birth within the last six months. A proportional quota sampling technique was used to select women who were living outside the refugee camps in Amman (the capital of Jordan), Irbid, Mafrak and Zarka. Selection was based on the percentage of Syrian refugee women in each of these four major cities hosting Syrian refugees <ns0:ref type='bibr'>(Higher Population Council, 2016)</ns0:ref>.</ns0:p><ns0:p>An a priori sample size calculation was conducted, revealing a required sample of at least 393 mothers based on a 0.05 two-tailed level of significance, an effect size = 0.20 (small), and a power = 0.80 using a mean difference test <ns0:ref type='bibr' target='#b7'>(Cohen, 1992)</ns0:ref>.</ns0:p></ns0:div>
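For illustration only, the reported minimum of 393 can be reproduced with a standard normal-approximation formula for an independent-samples mean difference; this is a minimal sketch under the assumption of a two-group comparison with equal allocation, which is not spelled out in the text beyond the parameters above.

from math import ceil
from scipy.stats import norm

alpha, power, d = 0.05, 0.80, 0.20          # two-tailed significance level, power, small effect size
z_alpha = norm.ppf(1 - alpha / 2)           # approximately 1.96
z_beta = norm.ppf(power)                    # approximately 0.84
n = 2 * (z_alpha + z_beta) ** 2 / d ** 2    # normal-approximation sample size for a mean difference
print(ceil(n))                              # 393, matching the reported minimum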
<ns0:div><ns0:head>Instruments</ns0:head><ns0:p>The questionnaire consisted of two parts. The first part included questions pertaining to Syrian refugee women's sociodemographic characteristics such as age, marital status, educational level, religion, number of children, income level, health insurance, and employment status. The second part included the Arabic version of the SF-36 covering the eight HRQoL dimensions. The SF-36 measures quality of life in eight health domains: physical functioning (10 items), role limitations due to physical health problems (4 items), role limitations due to personal or emotional problems (3 items), energy/fatigue (4 items), emotional well-being (5 items), social functioning (2 items), bodily pain (2 items), and general health perceptions (5 items) <ns0:ref type='bibr' target='#b46'>(Ware & Sherbourne, 1992)</ns0:ref>. The SF-36 is a valid and reliable tool across diverse populations <ns0:ref type='bibr' target='#b25'>(McHorney et al., 1994)</ns0:ref>. The SF-36 Health Survey has been used among general populations in health and illness conditions and among refugees such as Karenni, Afghani, Iranian and Somali refugees <ns0:ref type='bibr' target='#b5'>(Cardozo et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b11'>Gerritsen et al., 2006)</ns0:ref>.</ns0:p><ns0:p>The Arabic version was previously used to measure the health status of the Arab population in Saudi Arabia, Tunisia, and Lebanon. The tool showed high internal consistency (α 0.70-0.90) across the eight subscales <ns0:ref type='bibr' target='#b37'>(Sheikh et al., 2015)</ns0:ref>. These results suggest that the SF-36 Arabic version of the questionnaire is a valid and reliable scale to measure the quality of life of the Arab population.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data collection</ns0:head><ns0:p>This report is part of a larger study that examined aspects of reproductive health practices of Syrian refugee women in Jordan. Twenty-seven research assistants (RAs) were recruited and trained by the research team on data collection that took place over a period of six months (January -July 2018). We recruited female RAs to ensure that women felt comfortable as they participated in this study. After training, the instrument and methods were tested on a sample of 30 Syrian refugee women and was found as clear and relevant. Data were collected in women's homes using face-to-face interviewing techniques. Research assistants assisted illiterate participants by reading each question and completing the form together.</ns0:p></ns0:div>
<ns0:div><ns0:head>Ethical considerations</ns0:head><ns0:p>Ethical approval was obtained from the University of Jordan and the Department of Statistics (DOS). The DOS's approval was necessary to facilitate RAs access to the host community and collect data from participants. Participants were informed of the purpose and procedures of the study. They were assured of their rights to confidentiality and their right to voluntary participation and to decline participation without reprimands. They were informed that participation has no or minimal risk. Upon approval of participation, each woman was invited to sign a consent form in Arabic. An additional consent was sought from guardians of women under 18 years old. A copy of this form that included a contact number for the research team was given to participants in case they had additional questions.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data analyses</ns0:head><ns0:p>The statistical analysis was conducted using the Statistical Package for the Social Sciences (SPSS) version 23.0 for Windows (SPSS Inc. Chicago, IL, USA). The SF-36 survey scores were coded and scored using the guidelines provided by the RAND Corporation (i.e., transformed to a positive score between 0 and 100, where the higher the score, the better the health). For example, for pain, a higher score indicates greater freedom from pain. Pearson correlation coefficients were used to examine the relationships between the total and subscale scores of the SF-36 and the demographic variables. Multivariate linear regression analysis was used to identify predictors of the SF-36 subscales.</ns0:p></ns0:div>
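As an illustrative sketch of this pipeline only (the analysis itself was run in SPSS), the 0-100 rescaling, the Pearson correlations and one of the regression models could be reproduced as follows; the file name, column names and item ranges are hypothetical placeholders rather than the study's actual coding scheme, and the official RAND recoding tables should be consulted for items that are not scored by a simple linear rescale.

import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("sf36_refugee_survey.csv")   # hypothetical file: raw SF-36 items plus demographics

# Simplified RAND-style recoding: rescale an item to 0-100 so that higher always means better health.
def rescale(item, low, high, reverse=False):
    score = (item - low) / (high - low) * 100
    return 100 - score if reverse else score

# A subscale score is the mean of its recoded items (two hypothetical pain items shown here).
df["pain"] = (rescale(df["pain_item1"], 1, 6, reverse=True) +
              rescale(df["pain_item2"], 1, 5, reverse=True)) / 2

# Bivariate Pearson correlations between a subscale score and sociodemographic variables.
print(df[["pain", "years_married", "number_of_children", "family_income"]].corr(method="pearson"))

# One multivariate linear regression model: predictors of a single SF-36 subscale.
predictors = ["years_married", "age_at_marriage", "family_income", "antenatal_care", "violence"]
X = sm.add_constant(df[predictors])
print(sm.OLS(df["pain"], X, missing="drop").fit().summary())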
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Characteristics of participants</ns0:head><ns0:p>A total of 523 Syrian refugee postpartum women participated in the study with a response rate of 98%. Age ranged from 16 to 44 years (M ± SD = 26.11 ± 6.11). Mothers' age at marriage ranged from 12 to 37 years (M ± SD = 18.50 ± 3.71), with a duration of marriage ranging from 1 to 25 years (M ± SD = 7.6 ± 5.3). The number of children in the participants' families ranged from 1 to 11 (M ± SD = 3.21 ± 1.84). Family's monthly income in Jordanian dinar (JD; 1 JD = 1.41 U.S. dollars) ranged from 30 to 940 JD (M ± SD = 222.6 ± 105.45). The majority of the participants did not experience violence (93.5%). Women who had experienced abortion one or more times made up 37.3% of the studied participants (Table <ns0:ref type='table'>1</ns0:ref>).</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_0'>2</ns0:ref> shows the mean, standard deviation, and Cronbach's alpha for the eight health dimension scales. Each of the subscales was scored between 0 and 100. The highest mean was for physical functioning (66.18), while the lowest mean was for role limitations due to emotional problems (42.83). The reliability coefficients as demonstrated by Cronbach's alpha for the eight health subscales ranged between .61 and .90.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_1'>3</ns0:ref> shows the bivariate correlations between the eight health scales and the examined sociodemographic variables. Most importantly, the eight scales showed a significant negative association with (1) years of marriage, (2) number of children, and (3) previous births. That is, greater numbers of years of marriage, children, and previous births negatively affected women's overall quality of life. Additionally, two demographic variables, (1) family income and (2) antenatal care, showed a positive association with HRQoL scales. Family income was positively correlated with physical functioning (.12 ** ) and general health (.11 * ). Antenatal care was positively correlated with physical functioning (.09 * ), role emotional (.11 * ), freedom of pain (.12 ** ), and general health (.10 * ), indicating that women who received antenatal care had better quality in the physical, role emotional, freedom of pain, and general health domains. That is, the higher the family income and the greater the number of antenatal visits, the better the quality of life of women of reproductive age. There was a negative correlation between violence and emotional wellbeing (r = -.10). The strongest correlations between HRQoL scales and sociodemographic variables were between the pain scale and years of marriage (r = -.21), and between Energy/Fatigue and number of children (r = -.21). Refugee women's characteristics as predictors of the health conditions are presented in Table <ns0:ref type='table'>4</ns0:ref>.</ns0:p><ns0:p>Before running the analyses, all the statistical assumptions for multiple linear regression were examined. None were violated. All eight multiple regression models were significant in this study.</ns0:p><ns0:p>The selection of the nine predictors was based on their importance to the refugee women and their statistical correlation with the health scales. The highest explained variance was the same (R 2 = .07) for the physical functioning, pain, and general health scales.
Better physical functioning was significantly predicted by fewer years of marriage, younger age at marriage, and higher family income. The variables (number of children, previous abortion, previous birth, and baby loss) showed poor prediction across all the health conditions. General health was significantly predicted by better family income and less violence. Freedom from pain was significantly predicted by fewer years of marriage and antenatal care.</ns0:p></ns0:div>
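The subscale reliabilities reported above (Cronbach's alpha between .61 and .90) follow the standard item-variance formula; below is a minimal sketch of that computation, with randomly generated responses standing in for the real item data purely to demonstrate the call.

import numpy as np

def cronbach_alpha(items):
    # items: 2-D array with rows = respondents and columns = the items of one subscale
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                                 # number of items in the subscale
    item_variances = items.var(axis=0, ddof=1).sum()   # sum of the individual item variances
    total_variance = items.sum(axis=1).var(ddof=1)     # variance of the summed subscale score
    return k / (k - 1) * (1 - item_variances / total_variance)

rng = np.random.default_rng(0)
fake_items = rng.integers(1, 6, size=(523, 5))         # 523 respondents, 5 hypothetical items
print(cronbach_alpha(fake_items))                      # value is meaningless here; real item data is needed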
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>This study explored the HRQoL of Syrian refugee women within six months of giving birth and who were living outside of the refugee camps in Jordan. The scores of HRQoL measurement scales and the subscales were compared with women in this study and women living in nearby Arab countries reported in other studies <ns0:ref type='bibr' target='#b8'>(Daher et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b23'>Matalqah et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b33'>Sabbah et al., 2003)</ns0:ref>, taking into considerations that these studies may have had different methodologies and have been conducted in different socio-political circumstances. The results of the current study showed that Syrian refugee women living outside of refugee camps in Jordan scored lower on all domains compared to other women living in northern Jordan and Iraqi refugee women.</ns0:p><ns0:p>Additionally, Syrian refugee women scored lowest in general health scores compared to Jordanian and Iraqi refugees. This can be explained in light of the psychosocial and physical burden after having a new baby and not being well settled in Jordan, unlike Jordanian, Lebanese and Iraqi women.</ns0:p><ns0:p>The highest mean score of the SF-36 scale in the study was the physical functioning. This finding is similar to another study of the Arab populations including Jordanian and Lebanese people. In their study, <ns0:ref type='bibr' target='#b33'>Sabbah et al., (2003)</ns0:ref> and <ns0:ref type='bibr' target='#b9'>Doocy et al., (2015)</ns0:ref> showed that physical health scores were the highest, whereas, the emotional health scores were the lowest. In another study about the Iraqi refugees, women scored the highest in social functioning. It is probably that Iraqi women were better settled with time elapsing since migration to Jordan and have developed PeerJ reviewing PDF | (2020:05:48679:1:1:NEW 8 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed more social network that enabled them to cope with their needs compared to this study population <ns0:ref type='bibr' target='#b8'>(Daher et al., 2011)</ns0:ref>.</ns0:p><ns0:p>In the current study, there was a decline in physical health with increasing years of marriage, number of children, and older age at marriage; and at the same time, factors that significantly predicted physical functioning were also less years of marriage, younger age at marriage, higher family income, and less violence, all of which are aligned with <ns0:ref type='bibr' target='#b26'>Mirghafourvand, et al., (2016)</ns0:ref> study. Such findings may reflect women's energy depleting with increasing requirements to fulfil their assumed role as a caretaker of the family members, a mother, and a wife with an increased number of children and responsibilities. An increasing age and family demands in unusual living conditions, such as in resettlement, may drain a woman's energy and functioning levels.</ns0:p><ns0:p>The lowest SF36 score in this study was in role emotional. This result could point to the influence of their emotional condition and their inability to perform their roles as mothers and wives. Being a refugee in a different social and environmental context, compounded by having a new child, without social support, financial strains, and war-related stressful situations are aggravating factors that may likely contribute to women's emotional distress. 
This finding is consistent with the findings of a study in Northern Jordan <ns0:ref type='bibr' target='#b23'>(Matalqah et al., 2018)</ns0:ref>. Both populations (Jordanian and Syrian) are facing similar socioeconomic challenges in poverty and are surrounded by war and political turbulence in neighbouring countries. Alternatively, Iraqi refugee had their lowest score in energy and fatigue scales <ns0:ref type='bibr' target='#b8'>(Daher et al., 2011)</ns0:ref>, which could be related to their prolonged refuge situation compared to Syrian refugees.</ns0:p><ns0:p>In this study, women who received antenatal care had better health (physical, role emotional, pain, general health). A significant positive correlation was found between income PeerJ reviewing PDF | (2020:05:48679:1:1:NEW 8 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed and antenatal visits with improved HRQoL subscale scores. Jordan is dedicated to providing reproductive health care services to refugees throughout the widespread mother and child care centres and mobile services <ns0:ref type='bibr' target='#b32'>(Saadallah & Baker, 2016)</ns0:ref>. However, a study of Syrian refugees in camps reported that 23 percent of refugee women were unaware of these services, about 28 percent experienced unplanned pregnancies, and 17 percent did not access antenatal care during their pregnancy <ns0:ref type='bibr' target='#b20'>(Kohler, 2014)</ns0:ref>.</ns0:p><ns0:p>The strongest correlation was between pain scale and length of marriage. In a study of Jordanian women <ns0:ref type='bibr' target='#b23'>Matalqah et al., (2018)</ns0:ref> found that Jordanian women suffered more pain and limitations due to pain compared with this study group, the Syrian refugee women. The other strong correlation was between energy/fatigue and number of children. Women in this study may be overwhelmed with their family's needs, and in the case of refugee women with a large family and large number of children, lack of resources and support, and having a new child. It is no surprise that these responsibilities drain their energy and cause fatigue <ns0:ref type='bibr' target='#b18'>(Iwata et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b35'>Schmied et al., 2017)</ns0:ref>.This study suggested that participants' energy and fatigue, and health in general were negatively affected by the increased number of children in the family; this may, in part, lead to increased financial burden. However, with increasing age and length of marriage, women usually develop better adaptation to the increased demands of the family. Age-related changes in coping and adaptation show a curvilinear pattern over the life course, with a peak in midlife and declines in older age <ns0:ref type='bibr' target='#b31'>(Robinson & Lachman, 2017)</ns0:ref>.</ns0:p><ns0:p>The findings suggest a relationship between low socioeconomic status and poor health.</ns0:p><ns0:p>Studies have shown that physical and mental health problems are highly prevalent in vulnerable populations, such as refugees and asylum seekers <ns0:ref type='bibr' target='#b10'>(Dorling et al., 2007)</ns0:ref>. It was also found that having health insurance has enhanced physical and emotional health. About two-thirds of the PeerJ reviewing PDF | (2020:05:48679:1:1:NEW 8 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed women (68.5%) in the current study did not have health insurance. 
As in 2018 and as a result of the national financial burden and the international funding cuts, the Jordanian Government and International organizations have reduced budget plans in providing health care services to refugees. Therefore, refugees have to pay about 80% of health care services, besides facing the challenges to access basic health care services <ns0:ref type='bibr' target='#b9'>(Doocy et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b20'>Kohler, 2014)</ns0:ref>. As a result, there will be an increasingly challenging situation to meet the needs of over 650,000 refugees in the country (UNHCR, 2018a).</ns0:p><ns0:p>Vulnerable women such as refugees in their postpartum period may undergo physical, emotional and social changes that may expose them to several health risks <ns0:ref type='bibr' target='#b34'>(Sadat et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b41'>Tucker et al., 2010)</ns0:ref>. In this critical period, women may need health care services that improve their HRQoL and wellbeing and prevent comorbidity and complications. The literature has shown that forced displacement of Syrians affected their HRQoL, including their physical <ns0:ref type='bibr' target='#b9'>(Doocy et al., 2015)</ns0:ref>, psychological <ns0:ref type='bibr' target='#b47'>(Weinstein et al. 2016)</ns0:ref>, and social wellbeing <ns0:ref type='bibr' target='#b36'>(Sevinç et al., 2016)</ns0:ref>. This study confirms those findings and highlights specifically the needs of women of reproductive age living outside refugee camps.</ns0:p></ns0:div>
<ns0:div><ns0:head>Strengths and Limitations</ns0:head><ns0:p>This is the first study to address the HRQoL of Syrian refugee women during a critical time, shortly after birth, a sensitive period when they needed more support and better health care services while they were struggling with refugee status.</ns0:p><ns0:p>A self-report survey design was used in this study. With this design, bias may affect the results, as participants may be too embarrassed to reveal private details. Accordingly, we recommend further qualitative and longitudinal studies that can enhance our knowledge in examining the HRQoL of vulnerable women. Furthermore, the use of non-random sampling limited the generalizability of the findings. Further studies including Palestinian refugee women are recommended.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In this study, we found that the general health and HRQoL of Syrian refugee women were low compared to the findings of other studies of Arab women in the literature.</ns0:p><ns0:p>There is a relationship between low socioeconomic status and poor HRQoL. Several factors such as years of marriage, age at marriage, the number of children, violence, antenatal care and family income affected the women's general health. The provision of optimal reproductive and maternal healthcare is critical for ensuring the health and wellbeing of refugee women and their families.</ns0:p><ns0:p>For refugee women, access to maternity care influences their immediate and long-term health; it also impacts upon integration, attitudes to health and health-seeking behaviour, and may have health ramifications upon the next generations. Providing equitable access to quality reproductive health services for Syrian refugees and Jordanians poses enormous challenges for a small country like Jordan.</ns0:p><ns0:p>In conclusion, health services must be responsive to the health needs of refugee women in their reproductive age as they face several health challenges related to physical, mental and socioeconomic status. Nurses can play a major role in promoting the health of refugee women.</ns0:p><ns0:p>Addressing barriers to seeking health care services and attending to health care needs and providing health education, affordable and accessible health care services can improve Syrian refugee women health and HRQoL. Further studies to include Palestinian refugee women is recommended 1 Table <ns0:ref type='table'>1</ns0:ref> Characteristics of women and their living conditions (N=523) Manuscript to be reviewed Manuscript to be reviewed </ns0:p></ns0:div><ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2 : Means and standard deviations for the health scales (N = 523)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Characteristics</ns0:cell><ns0:cell>Mean (SD)</ns0:cell><ns0:cell>n (%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Age</ns0:cell><ns0:cell>26.11 (6.11)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Years of marriage</ns0:cell><ns0:cell>7.59 (5.26)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Age of women on marriage</ns0:cell><ns0:cell>18.5 (3.71)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Family income</ns0:cell><ns0:cell>222.6 (105.54)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Number of children</ns0:cell><ns0:cell>3.2 (1.84)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Antenatal care</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Yes</ns0:cell><ns0:cell /><ns0:cell>470 (89.9)</ns0:cell></ns0:row><ns0:row><ns0:cell>No</ns0:cell><ns0:cell /><ns0:cell>53 (10.1)</ns0:cell></ns0:row><ns0:row><ns0:cell>Violence experience</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Yes</ns0:cell><ns0:cell /><ns0:cell>34 (6.5)</ns0:cell></ns0:row><ns0:row><ns0:cell>No</ns0:cell><ns0:cell /><ns0:cell>489 (93.5)</ns0:cell></ns0:row><ns0:row><ns0:cell>Abortion</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Zero time</ns0:cell><ns0:cell /><ns0:cell>328 (62.7)</ns0:cell></ns0:row><ns0:row><ns0:cell>One time</ns0:cell><ns0:cell /><ns0:cell>113 (21.6)</ns0:cell></ns0:row><ns0:row><ns0:cell>Two times</ns0:cell><ns0:cell /><ns0:cell>51 (9.8)</ns0:cell></ns0:row><ns0:row><ns0:cell>Three times</ns0:cell><ns0:cell /><ns0:cell>22 (4.2)</ns0:cell></ns0:row><ns0:row><ns0:cell>Four and 
more</ns0:cell><ns0:cell /><ns0:cell>9 (1.8)</ns0:cell></ns0:row><ns0:row><ns0:cell>Baby loss</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Yes</ns0:cell><ns0:cell /><ns0:cell>50 (9.6)</ns0:cell></ns0:row><ns0:row><ns0:cell>No</ns0:cell><ns0:cell /><ns0:cell>473 (90.4)</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ reviewing PDF | (2020:05:48679:1:1:NEW 8 Aug 2020)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 3 (on next page)</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Bivariate correlations between health scales and specific conditions for Syrian refugee's women</ns0:figDesc><ns0:table /><ns0:note>PeerJ reviewing PDF | (2020:05:48679:1:1:NEW 8 Aug 2020)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 : Bivariate correlations between health scales and specific conditions for Syrian refugee's women Health scales</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell /><ns0:cell>Violence</ns0:cell><ns0:cell>Years</ns0:cell><ns0:cell>Age</ns0:cell><ns0:cell>Number</ns0:cell><ns0:cell>Family</ns0:cell><ns0:cell>Previous</ns0:cell><ns0:cell>Previous</ns0:cell><ns0:cell>Baby</ns0:cell><ns0:cell>Antenatal</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>experience</ns0:cell><ns0:cell>married</ns0:cell><ns0:cell>married</ns0:cell><ns0:cell>of</ns0:cell><ns0:cell>income</ns0:cell><ns0:cell>birth</ns0:cell><ns0:cell>abortion</ns0:cell><ns0:cell>loss</ns0:cell><ns0:cell>care</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>children</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Physical</ns0:cell><ns0:cell>-.02</ns0:cell><ns0:cell cols='4'>-.18 ** -.13 ** -.15 ** .12 **</ns0:cell><ns0:cell>-.15 **</ns0:cell><ns0:cell>-.05</ns0:cell><ns0:cell>.08</ns0:cell><ns0:cell>.09 *</ns0:cell></ns0:row><ns0:row><ns0:cell>functioning</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Role</ns0:cell><ns0:cell>.04</ns0:cell><ns0:cell>-.15 **</ns0:cell><ns0:cell>-.06</ns0:cell><ns0:cell>-.13 **</ns0:cell><ns0:cell>.05</ns0:cell><ns0:cell>-.12 **</ns0:cell><ns0:cell>-.12 **</ns0:cell><ns0:cell>.06</ns0:cell><ns0:cell>.04</ns0:cell></ns0:row><ns0:row><ns0:cell>physical</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Role</ns0:cell><ns0:cell>.04</ns0:cell><ns0:cell>-.14 **</ns0:cell><ns0:cell>-.08</ns0:cell><ns0:cell>-.11 *</ns0:cell><ns0:cell>.07</ns0:cell><ns0:cell>-.10 *</ns0:cell><ns0:cell>-.10 *</ns0:cell><ns0:cell>.03</ns0:cell><ns0:cell>.11 *</ns0:cell></ns0:row><ns0:row><ns0:cell>emotional</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Energy -</ns0:cell><ns0:cell>.06</ns0:cell><ns0:cell>-.19 **</ns0:cell><ns0:cell>-.06</ns0:cell><ns0:cell>-.21 **</ns0:cell><ns0:cell>-.01</ns0:cell><ns0:cell>-.20 **</ns0:cell><ns0:cell>-.04</ns0:cell><ns0:cell>.04</ns0:cell><ns0:cell>.073</ns0:cell></ns0:row><ns0:row><ns0:cell>fatigue</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Emotional</ns0:cell><ns0:cell>-.10 *</ns0:cell><ns0:cell cols='3'>-.13 ** -.09 * -.14 **</ns0:cell><ns0:cell>.02</ns0:cell><ns0:cell>-.13 **</ns0:cell><ns0:cell>-.09 *</ns0:cell><ns0:cell>-.03</ns0:cell><ns0:cell>.06</ns0:cell></ns0:row><ns0:row><ns0:cell>wellbeing</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Social</ns0:cell><ns0:cell>.01</ns0:cell><ns0:cell>-.16 **</ns0:cell><ns0:cell>-.03</ns0:cell><ns0:cell>-.12 **</ns0:cell><ns0:cell>.04</ns0:cell><ns0:cell>-.12 **</ns0:cell><ns0:cell>-.06</ns0:cell><ns0:cell>.01</ns0:cell><ns0:cell>.08</ns0:cell></ns0:row><ns0:row><ns0:cell>functioning</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Pain</ns0:cell><ns0:cell>.03</ns0:cell><ns0:cell>-.21 
**</ns0:cell><ns0:cell>-.06</ns0:cell><ns0:cell>-.18 **</ns0:cell><ns0:cell>-.01</ns0:cell><ns0:cell>-.17 **</ns0:cell><ns0:cell>-.12 **</ns0:cell><ns0:cell>.07</ns0:cell><ns0:cell>.12 **</ns0:cell></ns0:row><ns0:row><ns0:cell>General</ns0:cell><ns0:cell>-.10 *</ns0:cell><ns0:cell>-.16 **</ns0:cell><ns0:cell>-.08</ns0:cell><ns0:cell>-.15 **</ns0:cell><ns0:cell>.11 *</ns0:cell><ns0:cell>-.16 **</ns0:cell><ns0:cell>-.13 **</ns0:cell><ns0:cell>.03</ns0:cell><ns0:cell>.10 *</ns0:cell></ns0:row><ns0:row><ns0:cell>health</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='3'>PeerJ reviewing PDF | (2020:05:48679:1:1:NEW 8 Aug 2020)</ns0:note>
</ns0:body>
" | "
School of Nursing
The University of Jordan
Queen Rania St,
Amman, 11942 Jordan
Tel: +962776770770
Fax:+ 96265300244
[email protected] July 28th, 2020
Dear Editors
We thank the reviewers for their time, effort and generous comments on the manuscript and have edited the manuscript to address their concerns.
We responded to the reviewers’ comments by modifying the reviewed pdf document.
We believe that the manuscript is now suitable for publication in PeerJ.
Dr. Manar Nabolsi
Associate professor of Nursing
On behalf of all authors
Reviewer 1 (Anonymous)
Basic reporting
Generally very well written.
Experimental design
In line 213, the authors should define what type of regression analysis was being performed (I assume univariate and then multivariate linear regression analysis)
Multivariate linear regression is added in line 143.
It should be clarified in table 3 and table 4 that one of these is univariate linear regression analysis and the other is multivariate linear regression analysis (looking for independent variables).
Table 3 presents bivariate correlations between variables (it is not a regression table); clarification in the text and in the title of the table is added (Line 162).
Changed Table 3 Title
Table 4 has 8 models of multiple linear regression. Clarification is added to the text (line 180).
Most importantly, the variables that are mentioned in Table 1 do not match with some that are in Table 3/4. Tables 3/4 include novel and confronting variables such as violence experiences, abortion, baby loss, which are not previously mentioned in the methods or results.
Matching between the variables in Table 1 and the regression analysis is considered.
The variables (violence experiences, abortion, baby loss) are added to Table 1 with description and added to text in the result section (Line 148)
Changed Table 1; added new Table 1.
Similarly, Tables 3/4 exclude many variables from Table 1 such as social status, education levels, employment, health insurance, years in Jordan. What is the reasoning behind this extremely confusing discrepancy? Can the authors please keep a consistent note of which variables exactly are being studied and statistically analysed, and just omit all the others?
The variables that do not appear in the regression and correlation tables (social status, education levels, employment, health insurance, years in Jordan) were removed from Table 1. Added new Table 1.
Validity of the findings-
My major concern here is that in the conclusions of the abstract and of the main paper, the authors seem to imply that their study compared Syrian women to women in surrounding Arab populations and found poorer health outcomes. However, this is not reflective of this study which only interviewed Syrian women
The authors can compare their findings to those of other studies in the discussion, but should make the important caveat that these studies may have had different methodologies, have been conducted during different socio-political circumstances etc.
They should not claim that this current study has done direct comparison between ethnic groups.
Thank you for your comments. We agree and clarified this issue in the manuscript (Abstract line 19, Discussion paragraph starts line 191, and in the conclusion line 282)
________________________________________
Reviewer 2 (Tania Jahan)
Basic reporting
The article meets all Basic reporting criteria.
So, No Comment.
Experimental design
No Comment.
Validity of the findings
No Comment.
Comments for the Author
The article meets all the standard criteria of the journal.
Thank you for your time and effort
________________________________________
Reviewer 3 (Ishtaiwi Abuzayed)
Basic reporting
No comment
Experimental design
No comment
Validity of the findings
No comment
Comments for the Author
Refugees who fled Syria included Syrian and Palestinian refugees.
It would be more comprehensive if a sample of Palestinian refugees were included, but this may be added to the recommendations for future research.
Thank you for your comment. We completely agree. Recommendation added line 298
" | Here is a paper. Please give your review comments after reading it. |
660 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Volleyball is an exceedingly popular physical activity in the adolescent population, especially with females. The study objective was to assess the effect of volleyball training and natural ontogenetic development on the somatic parameters of adolescent girls. The study was implemented in a group of 130 female volleyball players (aged 12.3±0.5 -18.1±0.6 years) along with 283 females from the general population (aged 12.3±0.5 -18.2±0.5 years). The measured parameters included: body height (cm), body mass (kg), body fat (kg, %), visceral fat (cm 2 ), body water (l), fat free mass (kg) and skeletal muscle mass (kg, %). Starting at the age of 13, the volleyball players had significantly lower body fat ratio and visceral fat values than those in the general population (p<0.001 in body fat % and p<0.01 in visceral fat). In volleyball players, the mean body fat (%) values were 17.7±6.6 in 12-year-old players, 16.7±4.9 in 13-year-old players, 18.5±3.9 in 16-year-old players, and 19.3±3.1 in 18-year-old players. In the general population, the mean body fat (%) values were 19.6±6.3 in 12-year-old girls, 21.7±6.4 in 13-year-old girls, 23.4±6.1 in 16-year-old girls, and 25.8±7.0 in 18-year-old girls. The visceral fat (cm 2 ) mean values were 36.4±19.3 in 12-year-old players, 39.2±16.3 in 13-year-old players, 45.7±14.7 in 16year-old players, and 47.2±12.4 in 18-year-old players. In the general population, the mean visceral fat (cm 2 ) values were 41.4±21.1 in 12-year-old girls, 48.4±21.5 in 13-yearold girls, 58.0±24.7 in 16-year-old girls, and 69.1±43.7 in 18-year-old girls. In volleyball players, lower body fat ratio corresponded with a higher skeletal muscle mass ratio. The differences found in skeletal muscle mass ratio were also significant starting at the age of 13 (p<0.001). The mean skeletal muscle mass (%) values were 44.1±3.4 in 12-year-old volleyball players, 45.4±2.5 in 13-year-old players, 45.0±2.2 in 16-year-old players, and 44.7±1.8 in 18-year-old players. In the general population, the mean skeletal muscle mass (%) values were 42.8±3.2 in 12-year-old girls, 42.±4.1 in 13-year-old girls, 41.9±3.3 in 16-</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Volleyball is currently considered to be a dynamic game, during which low intensity and high intensity movements alternate. The high intensity movements include jumps, shuffles and rapid changes in direction <ns0:ref type='bibr' target='#b5'>(Calleja-Gonzalez et al., 2019)</ns0:ref>. The offensive and defensive skills in volleyball are characterized as double-leg take-off and double-leg or single-leg landings <ns0:ref type='bibr' target='#b46'>(Tillman et al., 2004a;</ns0:ref><ns0:ref type='bibr'>Lobbietti et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b62'>Zahradnik et al., 2017)</ns0:ref>. A study by <ns0:ref type='bibr' target='#b48'>Tillman et al. (2004b)</ns0:ref> showed 1,087 jump-landings in two matches among four NCAA Division IA female volleyball teams. A study by <ns0:ref type='bibr' target='#b27'>Lobietti et al. (2010)</ns0:ref> reported 2,022/2,273 jump-landings for male/female players in six matches in each category of the Italian league. Similarly, <ns0:ref type='bibr' target='#b62'>Zahradnik et al. (2017)</ns0:ref> showed 992/1,375 jump-landings during three matches in elite volleyball teams in the Czech Republic. The total intensity of the movements in volleyball can be measured at 6 METs <ns0:ref type='bibr' target='#b44'>(Scribbans et al., 2015)</ns0:ref>, which is considered to be vigorous <ns0:ref type='bibr' target='#b0'>(Ainsworth et al., 2011)</ns0:ref>. Therefore, a corresponding level of physical conditioning is required to effectively cope with the load in the long term. This physical conditioning is achieved with regular physical activity, which is performed during the training process. The training process in volleyball starts in the period of PeerJ reviewing PDF | (2020:06:49601:1:0:NEW 10 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed the second childhood (Infans II), which is related to the age categories in the system of competitions in the Czech Republic organised by the Czech Volleyball Federation. The competitions organised within the Mini-volleyball in Colours project are for children from the age of nine. The long-term preparation should influence the development of specific physical skills as well as the somatic parameters of the volleyball players. In particular, on their body composition as it is a result of the level of adaptation of the organism to the load within the conditional preparation <ns0:ref type='bibr' target='#b49'>(Tota et al., 2019)</ns0:ref>. This adaptation is manifested not only in the motor performance of the athlete, but also on their physical fitness and health <ns0:ref type='bibr' target='#b28'>(Malá et al., 2015)</ns0:ref>.</ns0:p><ns0:p>Regular and adequate training since childhood should not only lead to the development of fitness and motor performance, but also to the development of a series of personal traits of the individual <ns0:ref type='bibr'>(Kahlin et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b19'>Joyner & Loprinzi, 2018)</ns0:ref>. Many of those personal traits are considered to be the predictor for adherence to physical activity <ns0:ref type='bibr' target='#b2'>(Annesi, 2004)</ns0:ref>. These traits include: the ability to exert effort, determination, diligence, inclination to continue in the activity and to fulfil the task <ns0:ref type='bibr' target='#b2'>(Annesi, 2004)</ns0:ref>. A regular physical activity might thus become a part of the lifestyle of these individuals. 
The inclusion of an adequate physical activity in one's daily routine is very important with respect to the individual's health, which may also be assessed through body composition assessment. The contribution of the physical activity to health and optimisation of body composition parameters has been documented in many studies <ns0:ref type='bibr' target='#b14'>(Haskell et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b25'>Lazaar et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b33'>Nelson et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b43'>Roriz DE Oliveira et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b4'>Bunc, 2018)</ns0:ref>.</ns0:p><ns0:p>The importance of adherence to physical activity since childhood is currently very important as many studies show that the amount of children's physical activity that meets the minimum level of physical activity for health benefits is decreasing <ns0:ref type='bibr' target='#b26'>(Liu et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b45'>Sigmund et al.,2015)</ns0:ref>.</ns0:p><ns0:p>Between 2011 and 2016, a stable insufficient level of physical activity is stated, with the highest PeerJ reviewing PDF | (2020:06:49601:1:0:NEW 10 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed occurrence being reported in the wealthiest countries <ns0:ref type='bibr' target='#b12'>(Guthold et al., 2018)</ns0:ref>. In the Czech Republic, the documented physical routine in children dropped by about 30% in the last 2 decades. The decrease in physical activity of children is also inversely related to their age (as their age increases, their spontaneous physical activity decreases) <ns0:ref type='bibr' target='#b4'>(Bunc, 2018)</ns0:ref>. A question arises as to whether or not using volleyball as the only physical activity can lead to an adjustment of somatic parameters (such as body mass), increase in fitness and thus provide considerable health benefits. The form and popularity of a physical activity are fundamental to its habitualization in both child and adolescent populations <ns0:ref type='bibr' target='#b8'>(Ennis, 2017;</ns0:ref><ns0:ref type='bibr' target='#b24'>Laroche, Girard, & Lemoyne, 2019)</ns0:ref>. Volleyball is one of the frequently used physical activities in those age categories <ns0:ref type='bibr' target='#b26'>(Liu et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b11'>Glinkowska & Glinkowski, 2018)</ns0:ref>. Data from the Czech Volleyball Association indicates it is a popular sport in the Czech Republic where the number of registered children increased by <ns0:ref type='bibr'>43.7 % between 2008 and 2018.</ns0:ref> There are several studies that have researched the body composition of female volleyball players <ns0:ref type='bibr' target='#b36'>(Nikolaidis, 2013;</ns0:ref><ns0:ref type='bibr' target='#b59'>Visnes & Bahr, 2013;</ns0:ref><ns0:ref type='bibr' target='#b7'>Ćopić et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b41'>Paz et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b56'>Valente-Dos-Santos et al., 2018)</ns0:ref>. 
These studies have analysed differences between volleyball and other sports, between female volleyball players and untrained individuals, body composition in relation to the game position of the players, the effect of training on body composition and the occurrence of injuries, or the effect of body composition on the physical ability of players.</ns0:p><ns0:p>However, there is a lack of data on the development of body composition parameters in female volleyball players through their ontogenetic development, which would also answer the question </ns0:p></ns0:div>
<ns0:div><ns0:head>METHODS</ns0:head></ns0:div>
<ns0:div><ns0:head>Participants</ns0:head><ns0:p>The study included a total of 413 participants (130 female volleyball players and 283 girls from the control group -general population). The detailed characteristics of the number and age of the participants is presented in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. Participants had no medical difficulties and were not currently taking any medication or food supplements. Only those who were not menstruating were measured. Participants provided the information prior to the measurement. They participated voluntarily and they were informed of the course of the study in advance. All participants signed an informed consent prior to participation in the study (the consent was signed by legal guardians for participants who were below the age of 18). The study was approved by the Ethical Committee of the Faculty of Education at the University of Ostrava (PdF OU č. 18/2018) and it is in compliance with the Helsinki Declaration.</ns0:p><ns0:p>PeerJ reviewing PDF | (2020:06:49601:1:0:NEW 10 Aug 2020)</ns0:p></ns0:div>
<ns0:div><ns0:head>Manuscript to be reviewed</ns0:head><ns0:p>The age division of volleyball players is based on the competition rules of the Czech Volleyball Association. Group VG1 is the category of Younger Pupils (12 years old and younger), VG2 is the category of Older Pupils (13-14 years old), VG3 is the category of Cadets (15-16 years old) and VG4 is the category of Juniors (17-19 years old). To be included in the study, the player had to be registered in the list of a given age category in a volleyball club. All the players played the highest level of competitions in the given age category in the Czech Republic. They were all players from teams in the Moravian-Silesian Region. The total number of female volleyball players was 810 based on an analysis of registered volleyball teams competing at the highest level in the given age category in the Czech Republic. The number of the monitored players represented 16% of all players in the Czech Republic. The control group included girls based on an intentional selection to avoid significant differences in the age between VG and CG.</ns0:p></ns0:div>
<ns0:div><ns0:head>Physical activity of the participants:</ns0:head><ns0:p>The girls in the general population had compulsory physical education twice a week. In their free time, they did not pursue any other regularly organised physical activity.</ns0:p><ns0:p>The volleyball players also had compulsory physical education twice a week. In their free time, they only pursued volleyball activities. The detailed schedule of volleyball activities during the season of competitions is presented in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>. The frequency and duration of matches is based on the system of the volleyball competition organisation. The data on the volume and frequency of training were obtained from responsible trainers. <ns0:ref type='table' target='#tab_6'>2020:06:49601:1:0:NEW 10 Aug 2020)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>The measurements of VG and CG were done in the morning during the autumn (September -October) in 2018. The volleyball players were measured in the Human Motion Diagnostic Centre laboratory and CG girls were measured at schools. The measurements and data evaluation were performed by the authors. All the measurements were executed with adherence to the principles of measurement using the bioelectric impedance method (BIA) <ns0:ref type='bibr' target='#b22'>(Kyle et al., 2004)</ns0:ref>.</ns0:p><ns0:p>Body height (BH) was measured using Stadiometer InBody BSM 370 (Biospace, South Korea), body mass (BM) and body composition were measured by the InBody 770 analyser (Biospace, South Korea). It is a tetrapolar multi-frequency bioimpedance analyser that uses the frequency of 1000 kHz for measurements and that is also a scale. The body composition parameters measured were body fat (BF) and visceral fat (VFA) expressed as area (cm 2 ), total body water (TBW), fat free mass (FFM) and skeletal muscle mass (SMM).</ns0:p></ns0:div>
<ns0:div><ns0:head>Statistical analysis</ns0:head><ns0:p>The normality of data division was checked by the Shapiro-Wilk test. To assess the statistical significance of the differences in the means between the somatic parameters of the volleyball players and the girls in general population, a parametric independent t-test was used.</ns0:p><ns0:p>The level of statistical significance for all the used tests was set at α = 0.05. Practical significance was assessed using the effect of size (ES) by Cohen <ns0:ref type='bibr'>(Cohen's d)</ns0:ref>. The d value at the level of 0.2 indicates a minor difference, 0.5 an intermediate difference and 0.8 a major difference <ns0:ref type='bibr' target='#b6'>(Cohen, 1988)</ns0:ref>. We considered the value of Cohen's d≥0.5 to be practically significant. We used Cohen's effect of size because it enables the assessment of the size of the difference between groups independent of the sample size. To verify whether or not we could consider our control group of girls to be general population, we compared the values of their basic anthropometric parameters Manuscript to be reviewed of BH, BM and the calculated BMI with the values of the 6 th Nation-wide Anthropological Survey of Children and Adolescents <ns0:ref type='bibr' target='#b21'>(Kobzová et al., 2004)</ns0:ref>. For comparison, we used the normalization index (N i ). To verify the accuracy of the measured body composition parameters, we used the measurement standardisation for the InBody 770 analyser in our laboratory. For that purpose, our laboratory uses the calculation of the typical error of measurement (TEM) and intraclass correlation (ICC) from three repeated consecutive measurements according to <ns0:ref type='bibr' target='#b15'>Hopkins (2000)</ns0:ref>. The measurements were done on 63 participants (24 female, 39 male) in the mean age of 21.96±2.57 years. The TEM and ICC values are presented in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table'>3</ns0:ref> -insert here</ns0:p><ns0:p>The statistical processing of the results was performed using IBM SPSS Statistics (Version 21;</ns0:p><ns0:p>IBM, Armonk, NY, USA).</ns0:p></ns0:div>
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>PeerJ reviewing PDF | (2020:06:49601:1:0:NEW 10 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>The results for the verification of our control group with the general population are shown in Table <ns0:ref type='table'>4</ns0:ref>. The mean values of the monitored somatic parameters and their ontogenetic development for both the volleyball players and the control group are presented in Figure <ns0:ref type='figure' target='#fig_6'>1</ns0:ref>. The analysis of the differences between the means of the monitored parameters amid the individual age groups of the volleyball players and the control group is presented in Table <ns0:ref type='table' target='#tab_3'>5</ns0:ref>. The results in The results indicate that the basic anthropometric attributes of our control group show normal development with respect to the mean values of the Czech general population. The Ni values found ranged from -0.04 to +0.25 SD. The range of ±0.75 SD is considered to be the mean development of an attribute. Therefore, we can consider our control group to be general population.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_6'>1</ns0:ref> Means and standard deviations of somatic parameters -insert here Manuscript to be reviewed</ns0:p><ns0:p>No statistically or practically significant differences between VG1 and CG1 were found in the age category of G1 (12 years old). The value of Cohen's d was lower than 0.5 in all cases. In other age categories (G2-G4), no statistically or practically significant differences between BM (d<0.5) were found between the groups of volleyball players and the control groups in the corresponding age categories. Statistically significant differences were found in all other parameters and their practical significance was also confirmed. The volleyball players had significantly higher FFM values than the control group girls of the same age, even though no significant differences in BM were found. Between VG2 and CG2, we determined an intermediate difference (d≥0.5), and we determined a major difference (d≥0.8) in the older age categories. The higher representation of FFM also corresponds with the higher values of SMM, even when expressed as percentage of their ratio in total BM. The differences found were at the level of a major difference (d≥0.8), an intermediate difference (d≥0.5) was determined only when comparing SMM (kg) between VG2 and CG2. The higher FFM values in the volleyball players corresponded with significantly higher values of their TBW (statistically and practically).</ns0:p><ns0:p>When comparing VG2 with CG2, we determined an intermediate difference (d≥0.5), and we determined a major difference (d≥0.8) in the other age categories. The BF ratio in the volleyball players was significantly lower than in the control group. In the values expressed in kilograms (BF kg), we determined an intermediate difference (d≥0.5), and in the percentage of the BF to BM ratio, we determined a major difference (d≥0.8). Also, the VFA values were significantly lower in the volleyball players. The difference found was at the level of an intermediate difference (d≥0.5). Manuscript to be reviewed</ns0:p><ns0:p>To assess the development and differences in the monitored somatic parameters in relation to the increasing chronological age in the volleyball players and in the control group, we used the mean values of the individual groups (Figure <ns0:ref type='figure' target='#fig_6'>1</ns0:ref>). 
We analysed the differences related to the increasing age in the volleyball players separately from the control group girls. The practical significance was assessed using Cohen's d. Between the age of 12 and 13, there was a more considerable increase in BH and BM in the volleyball players (d>0.8) than in the control group (d=0.5). The increase in BM was manifested by a more considerable increase in FFM, SMM (kg) and TBW (l) (d>0.8). The development of the monitored parameters does not change in the following years, both in the volleyball players and in the control group. Between the age of 13 and 16, only minor differences (d<0.5) were determined in BF (%), VFA and SMM (%) in both the volleyball players and the control group, other parameters showed intermediate differences (d≥0.5).</ns0:p><ns0:p>Between the age of 16 and 18, there were no significant differences in the volleyball players and in the control group, only minor differences (d<0.5) were determined. It appears that the developmental tendencies do not differ, they only move towards better values in the volleyball players. It is also documented by the diagram of the development of somatic parameters in Figure <ns0:ref type='figure' target='#fig_6'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>To assess the physical activity represented by the participation of the volleyball players in training sessions and matches (Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>), we used the recommendations stated in expert studies..</ns0:p><ns0:p>A daily physical activity of vigorous or moderate intensity that lasts 60 minutes is recommended for children and youth, which represents 420 minutes/week <ns0:ref type='bibr' target='#b16'>(Janssen, 2007;</ns0:ref><ns0:ref type='bibr' target='#b50'>Troiano et al., 2008;</ns0:ref><ns0:ref type='bibr' /> PeerJ reviewing PDF | (2020:06:49601:1:0:NEW 10 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed <ns0:ref type='bibr' target='#b17'>Janssen & Leblanc, 2010;</ns0:ref><ns0:ref type='bibr' target='#b42'>Riebe et al., 2015 )</ns0:ref>. This volume is not met in the youngest category (VG1) in the age categories we monitored, which has a physical activity of 360 minutes/week.</ns0:p><ns0:p>This may be one of the possible reasons there was no difference in the monitored parameters between the volleyball players and the control group at the age of 12. Another cause might be the larger volume of spontaneous physical activity in the youngest girls of the general population, by which the girls make up for the absence of organised physical activities. Spontaneous physical activity decreases considerably as the age increases <ns0:ref type='bibr' target='#b4'>(Bunc, 2018)</ns0:ref>. The decrease is then replaced with volleyball training and matches in older age groups. From the age of 13 (VG2), the volleyball players not only meet the recommendation of 420 minutes/week, but they also exceed it. It had an effect on the statistically and practically significant differences in the monitored parameters. The statistically and practically significantly higher BH in the volleyball players is an exception as it cannot be linked to the higher volume of physical activity of the volleyball players. The high increase in BH cannot be caused by different ontogenetic development. In both groups (VG and CG), the girls are in the same chronological age and according to the expert studies are at the end of the peak height velocity (PHV). PHV is stated in the same period for girls both engaged and not engaged in sports. PHV for girls engaged in sports occurs at the age of 11.8-12.3 and for those not engaged in sports it is 11.4-12.2. Later, PHV is only mentioned in gymnasts <ns0:ref type='bibr' target='#b30'>(Malina & Geithner, 2011;</ns0:ref><ns0:ref type='bibr' target='#b31'>Malina et al., 2015)</ns0:ref>. The higher BH of the volleyball players is related to the rules and the essence of volleyball, whilst also being an advantage for serving. Furthermore, previous research in tennis also indicates that a higher BH was advantageous in serving and the tennis players with a higher BH served with a higher velocity <ns0:ref type='bibr' target='#b57'>(Vaverka & Cernosek, 2013)</ns0:ref>. The higher BH of the volleyball players is probably caused by the selection criteria of clubs focusing on girls with a higher BH. This is also confirmed by results</ns0:p><ns0:p>PeerJ reviewing PDF | (2020:06:49601:1:0:NEW 10 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed from studies that focused on the selection of female players for the national junior volleyball team. 
The selected players had a significantly higher BH than the players who were not selected <ns0:ref type='bibr' target='#b38'>(Papadopoulou et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b52'>Tsoukos et al., 2019)</ns0:ref>. The BH results presented in scientific studies correspond with the values determined in our volleyball players from VG2 (aged 13+), unless the studies deal with players selected for representative purposes. The values in those studies range from 167.0±8.0 cm to 169.0±6.0 cm <ns0:ref type='bibr' target='#b35'>(Nikolaidis, 2012;</ns0:ref><ns0:ref type='bibr' target='#b38'>Papadopoulou et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b39'>Papadopoulou et al., 2020)</ns0:ref>. The BH values in young female volleyball players from the representation selections already exceed 170 cm at the age of 13 <ns0:ref type='bibr' target='#b37'>(Nikolaidis et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b38'>Papadopoulou et al., 2019)</ns0:ref>. The volleyball players in our youngest age category VG1 (12 years) have lower BH values than the twelve-year-old players in the study by <ns0:ref type='bibr' target='#b35'>Nikolaidis (2012)</ns0:ref> where the mean BH value stated for such girls is 161.5±8.0 cm. The difference is probably due to the fact that the selection of players for A teams competing at the highest level are often moved to a higher age category with regard to the lower number of players in Czech volleyball clubs.</ns0:p><ns0:p>Considering the fact that no statistically or practically significant differences were found in the values of BM in the volleyball players and the control group, we can not only compare the percentage ratio of the individual tissues in the total BM, but also their absolute values in kilograms. The primarily measured parameter in the BIA method is water, therefore, it is also required to analyse organism hydration as other parameters are calculated additionally on the basis of the primary parameter values. The TBW values closely correspond with the volume of muscle mass, presented by the SMM parameter. SMM is a body tissue that produces work and is developed by regular training <ns0:ref type='bibr' target='#b29'>(Malina, 2007)</ns0:ref>. This was also confirmed in the volleyball players we monitored whose SMM ratio was much higher from the age category of VG3 than the CG girls. The higher FFM and SMM to BM ratio in the volleyball players is also related to their</ns0:p><ns0:p>PeerJ reviewing PDF | (2020:06:49601:1:0:NEW 10 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed considerably lower BF ratio. The differences found were higher not only than TEM used in our laboratory for measurement standardisation (Table <ns0:ref type='table'>3</ns0:ref>), but also higher than inter-daily variability, which ranges from 0.7 to 1.3 kg and from 0.9 to 1.6% in relation to the used BIA analyser and the growing interval between the individual measurements (Vicente-Rodríguez, 2012; <ns0:ref type='bibr' target='#b23'>Kutáč, 2015)</ns0:ref>. The significantly lower BF ratio and higher SMM ratio determined in our volleyball players at the age of 13 and above (VG2) when compared with CG is the result of their regular athletic preparation and it is a condition for adequate athletic performance. 
Body composition, especially the BF and SMM ratios, is an important factor that influences the athletic performance <ns0:ref type='bibr' target='#b37'>(Nikolaidis et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b38'>Papadopoulou et al., et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b39'>Papadopoulou et al., 2020)</ns0:ref>. The effect of BF on athletic performance is described well by the correlation between the skinfold thickness and performance indexes (Abakov jump, hand grip muscle strength, physical working capacity, peak power, sit-and-reach test). The higher the skinfold thickness, the more negative its influence <ns0:ref type='bibr' target='#b39'>(Papadopoulou et al., 2020)</ns0:ref>. The same negative effect of BF was also demonstrated in a study that focused on performance in the above-jump test in 13-year-old female volleyball players <ns0:ref type='bibr' target='#b37'>(Nikolaidis et al., 2017)</ns0:ref>. When there was absence of a difference in BF, there was also absence of a difference in performance indexes <ns0:ref type='bibr' target='#b38'>(Papadopoulou et al., et al., 2019)</ns0:ref>. We can only compare the determined values of the soft tissue ratio (BF and SMM) with the results of a study that used the same BIA analyser as we did. Therefore, we compared our results with the results of a study by <ns0:ref type='bibr'>Ćopić et al. (2014)</ns0:ref>. Even though the study authors analysed elite adult female volleyball players, the differences in the BF results in our players were insignificant. players in the SMM ratio, the mean value of which in the aforesaid study was 46.1 %, which is related to the fact that this age group had the lowest BF ratio. We did not find any other similar data in the available scientific literature, as the values of our oldest players (VG 4), where it is possible to assume that the ontogenetic development has ended and who have been trained for the longest period of time, should show values that are the closest to the published values of elite adult players. To assess the health condition of an individual, however, the ratio of subcutaneous fat and visceral fat, need to be monitored as they are more active metabolically and its increase is considered to be a risk factor not only in obesity, but also in cardiovascular diseases <ns0:ref type='bibr'>(Beaufrére & Morio, 2000;</ns0:ref><ns0:ref type='bibr'>Van Gaal, Mertens & De Block, 2006;</ns0:ref><ns0:ref type='bibr' target='#b13'>Haberka et al., 2018)</ns0:ref>. In this study, visceral fat is expressed as an area (VFA). From the age of 13, the volleyball players had significantly lower values of this fat than the control group girls. When compared with the CG values, the lower BF ratio and the lower VFA values in our volleyball players are very positive for their state of health, especially considering the increasing prevalence of obesity in the child population <ns0:ref type='bibr' target='#b34'>(Ng et al., 2014)</ns0:ref>. Considering that child obesity is carried over into adulthood where it may potentially increase morbidity and thus, impair the quality of life <ns0:ref type='bibr' target='#b51'>(Tsigos et al., 2008)</ns0:ref> Manuscript to be reviewed 2011). A considerable difference in the monitored parameters in the volleyball players between the age of 12 and 13 is an exception. Significant increases that can be described as major differences (d≥0.8) were found in the parameters of BH, BM, SMM (kg) and TBW. 
The increased BH in the volleyball players between the age of 12 and 13 corresponds with the increase in BM. The considerable increase in BM is also accompanied by a considerable increase in the representation of the individual tissues (SMM, FFM, TBW), but it was not accompanied by a significant increase in body fat. The gradual significant increase in BF in our volleyball players probably does not occur thanks to the balanced energy intake and output. In practice, it means there is no increase in body fat due to a disturbed energy balance. The amount of body fat only changes within ontogenesis as a result of maturing of all the monitored girls. The energy output in girls doing sports is higher than in the girls at the same age without regular sports training, which increases the fitness of the girls doing sports <ns0:ref type='bibr' target='#b35'>(Nikolaidis et al., 2012)</ns0:ref>.</ns0:p><ns0:p>The energy intake in the regular adolescent population without regular physical activity ranges from 8.76±2.36 MJ/day to 9.28±2.0 MJ/day <ns0:ref type='bibr' target='#b9'>(Forrestal, 2011;</ns0:ref><ns0:ref type='bibr' target='#b32'>Murakami K & Livingstone, 2016)</ns0:ref>. The mean daily energy output in female volleyball players is 14.55±2.53 MJ <ns0:ref type='bibr' target='#b61'>(Woodruff & Meloche, 2012)</ns0:ref>, which is a value higher by 36.2-39.8 % when compared with the values stated for individuals without regular physical activity. The daily intake of female volleyball players was 14.37±4.90 MJ <ns0:ref type='bibr' target='#b61'>(Woodruff & Meloche, 2012)</ns0:ref>, which confirms the energy balance. This energy balance is required to manage the load during training. It is thus obvious that volleyball training will not lead to reduced BM, but it could prevent BF from increasing.</ns0:p><ns0:p>There is an increase in the energy intake in the regular population due to a lack of sufficient physical activity <ns0:ref type='bibr' target='#b53'>(Vadiveloo, Zhu, & Quatromoni, 2009)</ns0:ref>, which was demonstrated as permanent</ns0:p><ns0:p>PeerJ reviewing PDF | (2020:06:49601:1:0:NEW 10 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed in our study. It is confirmed by the significant differences in BG between VG and CG in all age groups starting at VG2 and the absence of difference in BF values during ontogenesis (Table <ns0:ref type='table' target='#tab_4'>6</ns0:ref>.).</ns0:p><ns0:p>Hypotheses H1 and H2 were confirmed.</ns0:p></ns0:div>
<ns0:div><ns0:head>Limitations</ns0:head><ns0:p>There are several limitations to this study. The first one arises from the method used for the measurement of body composition. We used the bioelectrical impedance method (BIA) which is sensitive to the hydration of the organism. All the subjects (their legal guardians) were informed of the principles that needed to be observed before the measurement, however, it is not possible to ensure and verify the actual observance of all the principles for measurement in practice. The </ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>The study results show that regular volleyball training leads to a lower body fat ratio, lower visceral fat values and a higher skeletal muscle mass ratio compared with the general population.</ns0:p><ns0:note type='other'>Figure 1</ns0:note></ns0:div>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Ni calculation: Ni = (MCG − M) / SD, where MCG is the mean of the control group, M is the mean of the 6th NAS and SD is the standard deviation of the 6th NAS. An Ni value in the range of ±0.75 SD shows an average development of the indicator, a value in the range from ±0.76 to ±1.5 SD a below-average (above-average) development of the indicator, and a value above ±1.5 SD means a highly below-average (above-average) development.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>, we may consider regular volleyball training to be a suitable activity for maintaining reasonable body mass, including the body fat values. No differences were found in the comparison of the development of the monitored parameters in the volleyball players and the control group girls in relation to the increasing chronological age. The values of somatic parameters gradually increase with the increasing chronological age up to the age of 16 in both groups (VG, CG), after that the development slows down and the differences are insignificant. The development of the monitored parameters corresponds with the course of development described in expert studies (Malina & Geithner, PeerJ reviewing PDF | (2020:06:49601:1:0:NEW 10 Aug 2020)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>second limitation concerns monitoring the menstrual cycle phases that may have an effect on the resulting body composition values. With regard to the fact that we were not able to monitor the phases, only the occurrence of menstrual bleeding was an exclusion criterion in the measurement. The third limitation concerns the use of the values of basic anthropometric parameters (BH, BM, BMI) from 6 th NAS, which was implemented in 2001. However, there have not been any updated normative values of the Czech population. The fourth limitation is the absence of checking the diet and energy balance of the monitored subjects. However, the volleyball players are not provided any common diet or a nutritionist within their training. The energy intake and output values are based on the values published in scientific studies.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Therefore, it is possible to recommend regular volleyball training as a physical activity for maintaining adequate body mass.The body fat ratio, the visceral fat and skeletal muscle mass values gradually increase with the increasing age of the volleyball players. However, those changes correspond with the changes in the general population. It was proven that the development of body composition parameters is subject to ontogenetic development having a higher effect than the volleyball training recorded.Body fat is a frequently monitored parameter in training practice. Its gradual increase during ontogenesis is a natural feminine effect, as the results of our study showed. This fact should also be accepted by trainers in their preparation and management of the training process, especially in adolescent female athletes.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 -</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>insert here</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 -</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>insert hereProceduresPeerJ reviewing PDF | (</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>clearly imply that there are no differences in the individual age categories of the volleyball players and the control group girls with respect to age. No statistically or practically</ns0:figDesc><ns0:table /><ns0:note>significant differences were found. Therefore, the differences found in the monitored somatic parameters are not the result of a different age of the girls. Considering the fact that all the monitored parameters had normal distribution, it was possible to use the parametric t-test for the comparison. The values of Cohen's d, characterising the practical significance of the differences between the individual age group's ontogenetic development of the monitored variables are presented in Table5.Table 4 -insert here</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>-insert here PeerJ reviewing PDF | (2020:06:49601:1:0:NEW 10 Aug 2020)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>-insert here PeerJ reviewing PDF | (2020:06:49601:1:0:NEW 10 Aug 2020)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 6 : Differences in the somatic parameter values at increasing chronological age expressed by Cohen's d</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>BH -body height, BM -body mass, BF -body fat, VFA -visceral fat area, SMM -skeletal muscle mass, FFM -fat free mass, TBW -total body water, VG -volleyball group, CG -control group PeerJ reviewing PDF | (2020:06:49601:1:0:NEW 10 Aug 2020)</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameters</ns0:cell><ns0:cell cols='2'>12 vs 13 years</ns0:cell><ns0:cell cols='2'>13 vs 16 years</ns0:cell><ns0:cell cols='2'>16 vs 18 years</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>VG</ns0:cell><ns0:cell>CG</ns0:cell><ns0:cell>VG</ns0:cell><ns0:cell>CG</ns0:cell><ns0:cell>VG</ns0:cell><ns0:cell>CG</ns0:cell></ns0:row><ns0:row><ns0:cell>BH (cm)</ns0:cell><ns0:cell>1.5</ns0:cell><ns0:cell>0.5</ns0:cell><ns0:cell>0.6</ns0:cell><ns0:cell>0.7</ns0:cell><ns0:cell>0.0</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>BM (kg)</ns0:cell><ns0:cell>0.9</ns0:cell><ns0:cell>0.5</ns0:cell><ns0:cell>0.7</ns0:cell><ns0:cell>0.7</ns0:cell><ns0:cell>0.1</ns0:cell><ns0:cell>0.2</ns0:cell></ns0:row><ns0:row><ns0:cell>BF (kg)</ns0:cell><ns0:cell>0.1</ns0:cell><ns0:cell>0.4</ns0:cell><ns0:cell>0.5</ns0:cell><ns0:cell>0.5</ns0:cell><ns0:cell>0.2</ns0:cell><ns0:cell>0.3</ns0:cell></ns0:row><ns0:row><ns0:cell>BF (%)</ns0:cell><ns0:cell>0.2</ns0:cell><ns0:cell>0.3</ns0:cell><ns0:cell>0.4</ns0:cell><ns0:cell>0.3</ns0:cell><ns0:cell>0.2</ns0:cell><ns0:cell>0.4</ns0:cell></ns0:row><ns0:row><ns0:cell>VFA (cm 2 )</ns0:cell><ns0:cell>0.2</ns0:cell><ns0:cell>0.3</ns0:cell><ns0:cell>0.4</ns0:cell><ns0:cell>0.4</ns0:cell><ns0:cell>0.1</ns0:cell><ns0:cell>0.3</ns0:cell></ns0:row><ns0:row><ns0:cell>SMM (kg)</ns0:cell><ns0:cell>1.3</ns0:cell><ns0:cell>0.5</ns0:cell><ns0:cell>0.7</ns0:cell><ns0:cell>0.7</ns0:cell><ns0:cell>0.1</ns0:cell><ns0:cell>0.1</ns0:cell></ns0:row><ns0:row><ns0:cell>SMM p (%)</ns0:cell><ns0:cell>0.4</ns0:cell><ns0:cell>0.1</ns0:cell><ns0:cell>0.2</ns0:cell><ns0:cell>0.1</ns0:cell><ns0:cell>0.1</ns0:cell><ns0:cell>0.4</ns0:cell></ns0:row><ns0:row><ns0:cell>FFM (kg)</ns0:cell><ns0:cell>1.2</ns0:cell><ns0:cell>0.5</ns0:cell><ns0:cell>0.7</ns0:cell><ns0:cell>0.7</ns0:cell><ns0:cell>0.1</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>TBW (l)</ns0:cell><ns0:cell>1.2</ns0:cell><ns0:cell>0.5</ns0:cell><ns0:cell>0.7</ns0:cell><ns0:cell>0.7</ns0:cell><ns0:cell>0.1</ns0:cell><ns0:cell>0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>TBW (%)</ns0:cell><ns0:cell>0.2</ns0:cell><ns0:cell>0.3</ns0:cell><ns0:cell>0.4</ns0:cell><ns0:cell>0.3</ns0:cell><ns0:cell>0.3</ns0:cell><ns0:cell>0.4</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Human Motion Diagnostics Center,
University of Ostrava, Faculty of Education,
Ostrava, Czech Republic
August 10th ,2020
Dear Editors
We would like to thank the reviewers for the valuable comments. We have tried to incorporate everything into our text and modify the text accordingly. We believe that we have improved our text adequately. The answers to the individual points are provided below.
Dr. Petr Kutáč
Associate Professor of University of Ostrava, Faculty of Education
On behalf of all authors
Answers to Reviewer 1
2. Abstract: Reduce the part of aims and methods.
Objectives and methods have been modified.
3. Abstract: Increase the part of results adding more numbers, means, SD, p values, effect sizes.
The values of the monitored parameters have been added.
4. Abstract: Revise the conclusions (l.30-33) to represent the specific findings of this study.
The conclusions have been modified.
5. l.38-41: It is not necessary. Start directly with volleyball.
Edited – the text starts with volleyball.
6. l.63-66: Add references.
References have been added. The reference (Annesi, 2004) is stated twice because the information on lines 85-88 is based on that study.
7. l.87: Add 2-3 sentences with references presenting the rationale of this study, i.e. what is missing in the existed literature and why it is important to study it.
Sentences with links have been added to the text.
8. l.87-89: Revise the aim. The aim is whether age-related differences vary between volleyball and control group.
Objective has been modified.
9. l.89: Add hypotheses and references.
The hypotheses have been supplemented.
10. l.92: When the study was conducted?
Information has been added (to the Method section).
11. l.120: Where it took place?
The information was added to the Methods section.
12. l.223: The discussion should be revised totally. Use a first paragraph to summarize the findings, then start each paragraph with one of your findings, present if it agrees or differs with the literature and explain why
Discussions have been revised.
13. l.295: Shorten the limitations. Revise this section to ‘limitations, strength and practical applications’ and add these aspects with an emphasis on practical applications.
Limitations have been modified.
14. l.316: Revise the conclusions so they are specific to the findings
The conclusions have been reworded.
15. l.332: References
Book and website citations have been replaced. We have only left the citation of the book: Cohen J. 1988. Statistical power analysis for the behavioral sciences. New Jersey: Lawrence Erlbaum Associates. It is a source that was used for the assessment of practical significance (Cohen’s d).
More references have been added.
16. l.332: Major literature is missing (e.g. doi: 10.1519/JSC.0b013e31823f8c06, 10.3390/medicina56040159, 10.3389/fpsyg.2019.02737, 10.23736/S0022-4707.16.06298-8). Literature has been added.
Answers to Reviewer 2
1. Basic reporting
a. The abstract starts with a basic sentence that starts the entire motivation for this study: “Volleyball is often used as an activity for the cultivation of fitness and for influencing body composition (…)”. This statement is provided without any reference to support it.
The introductory sentence has been removed from the abstract - it has been replaced.
c. In lines 50-52, the argument provided by the authors may even detract from their goal, because having a higher percentage of body fat may protect the joints when landing after a jump
The text has been edited.
d. In lines 63-66, there are very strong statements made, but without any references to support them.
References have been added. The reference (Annesi, 2004) is stated twice because the information on lines 85-88 is based on that study.
E. In lines 74-77, there is support for the need of engaging in physical activity…but the argument is made and sustained for children, while this study focused on teenagers.
The study by Sigmund et al., 2015: also includes adolescent population – 10.5-16.5 years old.
The reference to Weiler et al, 2014 has been replaced with a reference to Liu et al., 2013, which includes participants at the age from 12 to 19.
f. In lines 84-85, again there are strong statements without any references to support the claims.
References and information have been added.
2. Experimental design
c. One major flaw is that there was no reporting on weekly caloric intake. This will have a major impact on the findings, as I will describe later.
We have added information on energy intake and outlay in the discussion. We proceeded from values provided in scientific studies, which is also stated in the study limitations.
d. It is not clear at all how the estrogenic phase of the menstrual cycle was determined for each participant. If 28-day averages were used, they are very flawed, as natural interindividual variability in menstrual cycle can range from 23 to 35 days.
The following information has been modified in the Methods: Only girls that did not have menstrual bleeding at the time of the measurement were measured. The girls provided the information immediately prior to the measurement.
e. Another huge problem is that there is not reporting of the Typical Error of Measurement for the devices that were used (e.g., bioelectric impedance), nor is there any report of data reliability. This means that trustworthiness of data cannot be assessed and, therefore, casts a major shadow on data interpretation.
Information on the verification of the accuracy of measurement (TEM and ICC) has been added to the Methods – Statistical Analysis, the specific values are presented in Table 3.
3. Validity of the findings
b. Findings cannot be properly interpreted without consideration of factor a), plus there is not information on TEM or ICCs. Therefore, I cannot assess if the findings are valid, because insufficient information appertaining the quality of data has not been provided.
The TEM and ICC values have been added to all monitored parameters (Table 3). All the differences found are higher than TEM values.
Answers to Reviewer 3
Basic reporting
The introduction is easy to read. However, in my opinion, it is necessary to extend existing knowledge on relationship concerning performance and sport practice-related differences. The introduction should include more update references regarding the association between sport-specific training background and physical performance. Moreover, I suggest you to support the following sentence “Regular and adequate training since childhood should not only lead to the development of fitness and motor performance, but also to the development of a series of personal traits of the individuals” including an update reference.
Reference links have been inserted in the text. The reference list has been supplemented by the references used.
After the purpose statement, please provide a hypothesis for what the authors think the results will yield.
Two hypotheses (H1 and H2) were formulated.
Experimental design:
Some important information appears to be presently omitted from the methods section. Further description of the sampling procedure would be helpful for the reader. The recruitment process is a bit unclear. Please explain better how was selected the sample size and how was the data collected. When were the tests administered? Was the time of year, season, and time of day consistent for all subjects sampled? Further explanation about who collected the data is also necessary here. Some important information also appears to be presently omitted from the methods and results section. Have you tested the reliability of your data? If yes, please include the results.
It is not clear who conducted the assessments or whether they were blinded to group allocation.
All the required information has been added to the Methods: sample size and selection, implementation of measurements, data analysis.
Data reliability was verified using the typical error of measurement (TEM) and intraclass correlation (ICC). The information has been added to the Methods – Statistical Analysis (Table 3).
Validity of the findings
In general, the first paragraph of the discussion should at least state which hypotheses were supported.
The authors did not discuss what is novel about this research or what it offers in terms of health implications. The authors did not discuss how this research may be disseminated into greater practice. Moreover, the limitations and the strengths of this research were not discussed at all.
An evaluation of the validity of the hypotheses has been added to the beginning of the Discussion.
The discussion has been revised and supplemented.
The paper is interesting but there are many examples of poor sentence construction throughout. I accept that this is probably due to English not being the authors’ first language but it would need very careful editing and proof reading prior to publication.
The text was revised by a PhD of kinanthropology from the University of Western Australia, Crawley.
" | Here is a paper. Please give your review comments after reading it. |
661 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background: Scores can assess the severity and course of disease and predict outcome in an objective manner. This information is needed for proper risk assessment and stratification. Furthermore, scoring systems support optimal patient care, resource management and are gaining in importance in terms of artificial intelligence.</ns0:p><ns0:p>Objective: This study evaluated and compared the prognostic ability of various common pediatric scoring systems (PRISM, PRISM III, PRISM IV, PIM, PIM2, PIM3, PELOD, PELOD 2) in order to determine which is the most applicable score for pediatric sepsis patients in terms of timing of disease survey and insensitivity to missing data.</ns0:p></ns0:div>
<ns0:div><ns0:head>Methods:</ns0:head><ns0:p>We retrospectively examined data from 398 patients under 18 years of age who were diagnosed with sepsis. Scores were assessed at ICU admission and re-evaluated on the day of peak C-reactive protein. The scores were compared for their ability to predict mortality in this specific patient population and for their impairment due to missing data.</ns0:p><ns0:p>Results: PIM (AUC 0.76 (0.68-0.76)), PIM2 (AUC 0.78 (0.72-0.78)) and PIM3 (AUC 0.76 (0.68-0.76)) scores together with PRISM III (AUC 0.75 (0.68-0.75)) and PELOD 2 (AUC 0.75 (0.66-0.75)) are the most suitable scores for determining patient prognosis at ICU admission. Once sepsis is pronounced, PELOD 2 (AUC 0.84 (0.77-0.91)) and PRISM IV (AUC 0.8 (0.72-0.88)) become significantly better in their performance and count among the best prognostic scores for use at this time together with PRISM III (AUC 0.81 (0.73-0.89)). PELOD 2 is good for monitoring and, like the PIM scores, is also largely insensitive to missing values.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion:</ns0:head><ns0:p>Overall, the PIM scores show comparatively good performance, are stable as far as the timing of the disease survey is concerned, and they are also relatively stable in terms of missing parameters. PELOD 2 is best suited to monitoring the clinical course.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Early detection of critically ill patients is essential for timely, good care in a suitable facility. Sepsis remains one of the leading causes of childhood death, although our understanding of the pathophysiology of sepsis has changed drastically in the last couple of decades due to the development of new diagnostic projections and strategies in the treatment of this complex illness <ns0:ref type='bibr' target='#b6'>(Dellinger et al. 2013)</ns0:ref>.</ns0:p><ns0:p>To help assess severity of illness for risk stratification in terms of required resources, stratify patients prior to randomization in clinical trials, compare intra-and inter-institutional outcome and survival, improve quality assessment as well as cost-benefit analysis, and to facilitate clinical decision making, prognostic scoring systems were established in the 1980s and have been improved and validated since <ns0:ref type='bibr' target='#b18'>(Lemeshow & Le 1994;</ns0:ref><ns0:ref type='bibr' target='#b25'>Marcin et al. 1998)</ns0:ref>.</ns0:p><ns0:p>The first scoring systems were developed for adults and were less suitable for use in children.</ns0:p><ns0:p>Finally, corresponding scores were presented specially for children and continuously developed and improved <ns0:ref type='bibr' target='#b23'>(Leteurtre et al. 1999;</ns0:ref><ns0:ref type='bibr' target='#b33'>Pollack et al. 1988;</ns0:ref><ns0:ref type='bibr' target='#b43'>Shann et al. 1997)</ns0:ref>. Some of them permit the probability of survival to be estimated as a function of the determined score. Today's scores, which are especially suitable for children, are, for example, the Pediatric Risk of Mortality (PRISM) score, from which its further developments, namely the PRISM III and PRISM IV scores, the Pediatric Index of Mortality (PIM) score, were derived, the PIM2 and PIM3 scores and the PELOD (Pediatric Logistic Organ Dysfunction) score followed by the PELOD 2 score <ns0:ref type='bibr' target='#b20'>(Leteurtre et al. 2006;</ns0:ref><ns0:ref type='bibr' target='#b21'>Leteurtre et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b23'>Leteurtre et al. 1999;</ns0:ref><ns0:ref type='bibr' target='#b30'>Pollack et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b32'>Pollack et al. 1996b;</ns0:ref><ns0:ref type='bibr' target='#b33'>Pollack et al. 1988;</ns0:ref><ns0:ref type='bibr' target='#b43'>Shann et al. 1997;</ns0:ref><ns0:ref type='bibr' target='#b46'>Slater et al. 2003;</ns0:ref><ns0:ref type='bibr' target='#b47'>Straney et al. 2013)</ns0:ref>. Only few studies deal with the question whether the individual scores are equally suitable for all types of conditions PeerJ reviewing PDF | (2020:03:47169:1:1:NEW 18 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed <ns0:ref type='bibr' target='#b7'>(Dewi et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b8'>Gemke & van Vught 2002;</ns0:ref><ns0:ref type='bibr' target='#b22'>Leteurtre et al. 2001;</ns0:ref><ns0:ref type='bibr' target='#b27'>Muisyo et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b38'>Qiu et al. 2017b</ns0:ref>).</ns0:p><ns0:p>For each score, the development identified specific times or timescales for patient enrollment, within which the score provides the most accurate indication of the patient's condition and the likelihood of survival <ns0:ref type='bibr' target='#b20'>(Leteurtre et al. 2006;</ns0:ref><ns0:ref type='bibr' target='#b21'>Leteurtre et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b23'>Leteurtre et al. 
1999;</ns0:ref><ns0:ref type='bibr' target='#b30'>Pollack et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b32'>Pollack et al. 1996b;</ns0:ref><ns0:ref type='bibr' target='#b33'>Pollack et al. 1988;</ns0:ref><ns0:ref type='bibr' target='#b43'>Shann et al. 1997;</ns0:ref><ns0:ref type='bibr' target='#b46'>Slater et al. 2003;</ns0:ref><ns0:ref type='bibr' target='#b47'>Straney et al. 2013</ns0:ref>). However, some patients develop certain life-threatening conditions -such as sepsisonly during their inpatient stay and are actually hospitalized for a completely different reason, for example following a surgical intervention <ns0:ref type='bibr' target='#b44'>(Sidhu et al. 2015)</ns0:ref>. In such a case, it is obvious that the condition of the patient determined at admission can only conditionally predict the course of a complication developed at a later time. Although some scores consider the admission reason in their evaluation <ns0:ref type='bibr' target='#b30'>(Pollack et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b47'>Straney et al. 2013)</ns0:ref>, the further course is still open.</ns0:p><ns0:p>Scores are calculated based on vital signs, laboratory parameters and other patient parameters. In everyday clinical practice and of course as a consequence of the retrospective study design it is often not possible to determine all required data, because they are not recorded, not collected or can no longer be found. However, incomplete data raise the question of score accuracy. Some evidence suggests that knowing the completeness of necessary data is essential for correct score results <ns0:ref type='bibr' target='#b1'>(Agor et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b11'>Gorges et al. 2018)</ns0:ref>. This knowledge of data completeness and its impact on the results is becoming of even more interest with regard to the keyword 'artificial intelligence.' More and more artificial intelligence assessments are based on patient stratification and different kinds of scores <ns0:ref type='bibr' target='#b0'>(Abbasi 2018;</ns0:ref><ns0:ref type='bibr' target='#b14'>Komorowski et al. 2018)</ns0:ref>. For this reason, it is wise to also know more about the influence of 'missing values' on the accuracy of the informative value of the scores, since not all parameters needed for creation of the scores are available or measured.</ns0:p><ns0:p>In this study, the usual pediatric scores are compared in terms of their predictive value in a group of septic children. Also, the optimal timing for determining these scores in the clinical course picture of sepsis was evaluated in addition to the influence of the lack of data on the predictive value of the different scores.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODS</ns0:head><ns0:p>This retrospective analysis included 398 critically ill pediatric patients treated at Innsbruck Medical University Hospital.</ns0:p></ns0:div>
<ns0:div><ns0:head>Inclusion of patients</ns0:head><ns0:p>The medical files of patients younger than 18 years of age with diagnosed sepsis or a proven blood stream infection between 2000 and 2019 were reviewed. Children fulfilling the definitions according to the International Pediatric Consensus Conference <ns0:ref type='bibr' target='#b9'>(Goldstein et al. 2005)</ns0:ref> were included. The current definition of pediatric sepsis is systemic inflammatory response syndrome (SIRS) in the presence of or as a result of suspected or proven infection <ns0:ref type='bibr' target='#b9'>(Goldstein et al. 2005)</ns0:ref>.</ns0:p><ns0:p>SIRS is given when at least two of the four criteria are present, one of which must be abnormal temperature or leukocyte count <ns0:ref type='bibr' target='#b9'>(Goldstein et al. 2005)</ns0:ref>. In this connection, the fulfillment of SIRS criteria is dependent on the age-specific normal values. The study protocol was approved by the institutional review board of the Medical University of Innsbruck (AN2013-0044 and EK Nr: 1109/2019).</ns0:p>
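As a reading aid, the inclusion rule described above can be expressed as a small boolean check. The sketch below is a hypothetical Python illustration: the age-specific thresholds of Goldstein et al. (2005) are not reproduced here, so each criterion flag is assumed to have already been evaluated against the appropriate age-specific normal values.

```python
from dataclasses import dataclass

@dataclass
class SIRSCriteria:
    """Each flag is assumed to be pre-evaluated against the age-specific
    normal ranges of Goldstein et al. (2005); thresholds are not reproduced here."""
    abnormal_temperature: bool
    abnormal_leukocyte_count: bool
    tachycardia_or_bradycardia: bool
    tachypnea_or_mechanical_ventilation: bool

def has_sirs(c: SIRSCriteria) -> bool:
    # At least two of the four criteria, one of which must be
    # abnormal temperature or abnormal leukocyte count.
    n_positive = sum([c.abnormal_temperature, c.abnormal_leukocyte_count,
                      c.tachycardia_or_bradycardia, c.tachypnea_or_mechanical_ventilation])
    mandatory = c.abnormal_temperature or c.abnormal_leukocyte_count
    return n_positive >= 2 and mandatory

def meets_sepsis_inclusion(c: SIRSCriteria, infection_suspected_or_proven: bool) -> bool:
    # Sepsis: SIRS in the presence of, or as a result of, suspected or proven infection.
    return has_sirs(c) and infection_suspected_or_proven

# Example: fever and tachycardia together with a proven bloodstream infection -> included
example = SIRSCriteria(True, False, True, False)
print(meets_sepsis_inclusion(example, infection_suspected_or_proven=True))  # True
```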
<ns0:div><ns0:head>Data collection</ns0:head><ns0:p>We collected the demographic variables such as age, sex and the diagnosed underlying disease.</ns0:p><ns0:p>The underlying disease was assigned to the appropriate organ category: central nervous system, cardiovascular system, respiratory system, hepatic or renal failure. Also recorded was whether the patient suffered from an oncologic disease. Furthermore, we collected routinely measured laboratory parameters on the day of peak C-reactive protein.</ns0:p><ns0:p>C-reactive protein was chosen as parameter for the most severe stage of sepsis since it reflects the inflammatory process and is widely used in clinical routine. Many studies have described an interrelation between an elevated C-reactive protein level and sepsis <ns0:ref type='bibr' target='#b16'>(Koozi et al. 2020;</ns0:ref><ns0:ref type='bibr' target='#b26'>Maury 1989;</ns0:ref><ns0:ref type='bibr' target='#b34'>Povoa et al. 1998;</ns0:ref><ns0:ref type='bibr' target='#b36'>Presterl et al. 1997;</ns0:ref><ns0:ref type='bibr' target='#b42'>Schentag et al. 1984)</ns0:ref> and that C-reactive protein is highest at the most severe point of sepsis <ns0:ref type='bibr' target='#b4'>(Castelli et al. 2004;</ns0:ref><ns0:ref type='bibr' target='#b24'>Lobo et al. 2003;</ns0:ref><ns0:ref type='bibr' target='#b34'>Povoa et al. 1998;</ns0:ref><ns0:ref type='bibr' target='#b35'>Povoa et al. 2005)</ns0:ref>. Furthermore, elevated C-reactive protein is also associated with organ failure <ns0:ref type='bibr' target='#b5'>(de Beaux et al. 1996;</ns0:ref><ns0:ref type='bibr' target='#b13'>Ikei et al. 1998;</ns0:ref><ns0:ref type='bibr' target='#b29'>Pinilla et al. 1998;</ns0:ref><ns0:ref type='bibr' target='#b40'>Rau et al. 2000;</ns0:ref><ns0:ref type='bibr'>Waydhas et al. 1996)</ns0:ref>, which makes C-reactive protein a suitable parameter for the surveillance of sepsis severity. PRISM, PRISM III, PRISMIV, PIM, PIM2, and PIM3 as well as PELOD and PELOD2 scores were retrospectively assessed. Since in the realm of this study the scores were evaluated for their ability to depict the disease process and etiology, we have chosen two time points for score assessment, namely the day of admission as well as the day of peak C-reactive protein (Supplemental File 1). In this way we were able to analyze whether the time of assessment influences the predictive power of the individual scores. In-hospital mortality and multi-organ dysfunction syndrome (MODS) were chosen as outcome parameters. During data acquisition, the percentage of missing values was recorded for each score as a function of the number of their requested parameters.</ns0:p></ns0:div>
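The second assessment time point, the day of peak C-reactive protein, can be located per patient with a simple grouping step. The following pandas sketch is purely illustrative; the table layout and column names are assumptions and do not correspond to the actual hospital database fields.

```python
import pandas as pd

# Hypothetical long-format table of daily laboratory values; column names are
# illustrative only, not the fields of the study database.
labs = pd.DataFrame({
    "patient_id": [1, 1, 1, 2, 2],
    "day":        [0, 1, 2, 0, 1],
    "crp_mg_dl":  [4.2, 18.7, 9.9, 2.1, 6.4],
})

# Row index of the maximum C-reactive protein per patient ...
peak_idx = labs.groupby("patient_id")["crp_mg_dl"].idxmax()
# ... gives the day on which sepsis is assumed to be most severe,
# i.e. the second time point at which the scores were re-assessed.
peak_crp_day = labs.loc[peak_idx, ["patient_id", "day", "crp_mg_dl"]]
print(peak_crp_day)
```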
<ns0:div><ns0:head>Statistical Analysis</ns0:head><ns0:p>A mathematician (TH) not involved in the study procedures or patient assessment was responsible for the statistical analyses conducted using R, version 3.5.3. We present continuous data as median (25th to 75th percentile) and categorical variables as frequencies (%). We show effect size and precision with estimated median differences between survivors and non-survivors for continuous data and odds ratios (OR) for binary variables, with 95% CIs. All statistical assessments were twosided, and a significance level of 5% was used. We applied the Wilcoxon rank sum test and Fisher's exact test to assess differences between survivors and non-survivors.</ns0:p><ns0:p>The precision of the scores as the difference between predicted mortality and actual mortality is presented depending on the percentage of missing parameters. With respect to their diagnostic ability, the scores were compared by means of ROC curves, and DeLong's test was used to assess differences in ROC AUC. Corresponding 95% CIs were provided for the ROC AUC of the scores and the differences in the ROC AUC between scores. For this analysis only complete data were used.</ns0:p></ns0:div>
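The score comparison itself was performed in R with DeLong's test, as stated above. As an illustration only, the following Python sketch shows the same general idea: it computes the ROC AUC of two competing severity scores for the binary outcome in-hospital death and bootstraps the paired AUC difference. It uses simulated data and is not the authors' exact DeLong procedure.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated stand-ins: 0/1 in-hospital mortality and two competing severity scores.
death = rng.binomial(1, 0.14, size=398)
score_a = death * rng.normal(2.0, 1.0, 398) + rng.normal(0, 1.0, 398)   # e.g. a "PIM2-like" score
score_b = death * rng.normal(1.2, 1.0, 398) + rng.normal(0, 1.0, 398)   # e.g. a "PRISM-like" score

print("AUC A:", round(roc_auc_score(death, score_a), 3))
print("AUC B:", round(roc_auc_score(death, score_b), 3))

# Bootstrap the paired difference in AUCs as a simple alternative to DeLong's test.
diffs = []
n = len(death)
for _ in range(2000):
    idx = rng.integers(0, n, n)
    if death[idx].min() == death[idx].max():      # need both classes in the resample
        continue
    diffs.append(roc_auc_score(death[idx], score_a[idx])
                 - roc_auc_score(death[idx], score_b[idx]))
lo, hi = np.percentile(diffs, [2.5, 97.5])
print(f"AUC difference 95% CI: {lo:.3f} to {hi:.3f}")
```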
<ns0:div><ns0:head>RESULTS</ns0:head></ns0:div>
<ns0:div><ns0:head>Patient characteristics:</ns0:head><ns0:p>For this analysis 398 patients met the eligibility criteria for study inclusion. In-hospital mortality in these septic children was 13.6% (n=54). The median age of the children was 29.6 months, whereas 14.6% of the study population consisted of neonates. There was no difference in survival between males and females (see Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>).</ns0:p><ns0:p>The most affected organ in terms of underlying disease was the respiratory system in 26.8% of the children, followed by diseases of the central nervous system (22.3%) and the cardiovascular system (21.6%). Only the rate of kidney failure was significantly higher in the non-survivors, whereas the proportions of affected central nervous systems and digestive tracts also tended to be higher in the non-survivors.</ns0:p></ns0:div>
<ns0:div><ns0:head>Evaluation of scores for the predictive ability for mortality</ns0:head><ns0:p>As seen in Fig. <ns0:ref type='figure'>1</ns0:ref>, the best prediction abilities in our study are seen for PIM (0.76; 0.68 to 0.76), PIM2 (AUC 0.78; 0.72 to 0.78) and PIM3 (AUC 0.76; 0.68 to 0.76), although there is no significant difference between them and the other tested scores except for PRISM. PRISM shows the poorest mortality prediction of all tested scores and is significantly poorer than PRISMIII (p=0.0122), PIM (p=0.0059), PIM2 (p=0.0125) and PELOD2 (p=0.0359). Also, the predictive ability of the scores PRISMIV and PELOD is as poor as that of PRISM, although with a slightly higher AUC. The most recent PRISMIV and PIM3 scores show no improvement in mortality prediction. On the contrary, they even show a deterioration in predictive value in our specific septic population as compared to the predecessor scores.</ns0:p><ns0:p>No difference was seen in thromboembolic complications or bleeding events between survivors and non-survivors (Table <ns0:ref type='table'>2</ns0:ref>). The parameters for organ function with regard to renal or hepatic impairment also show no difference. Furthermore, in this septic population none of the recorded PeerJ reviewing <ns0:ref type='table' target='#tab_2'>PDF | (2020:03:47169:1:1:NEW 18 Aug 2020)</ns0:ref> Manuscript to be reviewed inflammatory parameters, namely C-reactive protein, procalcitonin, and interleukin-6, differentiate between survivors and non-survivors. Only the coagulation parameters show different values depending on the survival of the septic children. Fibrinogen, antithrombin and platelets were significantly higher in the survivors. As seen in the global coagulation tests, prothrombin time (PT; Quick) and activated thromboplastin time (aPTT), the patients who did not survive were in a hypocoagulable state. This is also reflected in the statistically larger number of bleeding complications seen in the non-survivors.</ns0:p></ns0:div>
<ns0:div><ns0:head>Admission versus peak C-reactive protein: Does the time of score evaluation matter?</ns0:head><ns0:p>To address the next question, namely whether the time of scoring makes a difference, two time points were compared: admission and the time when sepsis was most severe according to peak Creactive protein. Except for PELOD, all scores improved towards the peak in C-reactive protein, as seen from their AUCs in Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>. PRISMIV and PELOD2 even improved significantly and became, together with PRISMIII, the scores with the highest predictive ability, as seen in their AUC of 0.8, 0.84 and 0.81, respectively. The worst performance at this time was seen for PRISM followed by PELOD and PIM3.</ns0:p></ns0:div>
<ns0:div><ns0:head>Missing Values</ns0:head><ns0:p>Due to the nature of a retrospective design and the non-availability of all data for scoring, we checked whether there is an influence on the predictive ability of the different scores. For this purpose, we compared the actually observed mortality and the individual mortality predictions as well as the AUCs depending on the accepted extent of missing values.</ns0:p><ns0:p>Comparison of predicted versus actual mortality starts with only those patients having no missing values for scoring, as seen in Fig. <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>. The more missing values are accepted for scoring, the more patients are included, until 100% of the total patient population is included for scoring independently of the extent of their missing values.</ns0:p><ns0:p>As expected, depending on the size of the analyzed population, the line depicting the difference in mortalities settles only at a certain population size. When only those patients are included who have few missing values, the patient number is very small, too small to make a validated statement about the difference between predicted and observed mortality.</ns0:p><ns0:p>With an increasing number of missing values allowed, all scores underestimate the actual mortality, except the PELOD score. The fewer missing parameters are accepted, the more similar the predicted and the actual mortality. Exceptions here are the PELOD and the PRISM scores as well as the PIM3 score, which are highly susceptible to missing parameters, whereby the small number of cases limits the statement. Also, when comparing AUCs the small sample size is limiting, especially for PIM3.</ns0:p><ns0:p>The AUC of the scores changes only minimally as a consequence of the number of missing values. An exception is PIM3, whose AUC with complete parameters is lower than the AUCs with missing values.</ns0:p><ns0:p>The PRISM score had a difference of 0.1 in the AUCs, with the highest AUC of 0.66 at 50% missing values allowed. PRISM III and PRISM IV had a difference of 0.11 in the AUCs and had their highest AUCs of 0.84 at 30% missing values allowed. Also, the PELOD score had a difference of 0.11 in its AUCs, calculated according to the degree of missing values, with the highest AUC of 0.74 at 30%-40% missing values allowed.</ns0:p><ns0:p>PIM and PIM2 had a difference of only 0.05 in their AUCs and had their highest AUC at 30%-40% and 20% missing values allowed, respectively. Also, PELOD 2 had a small difference of 0.07, with its highest AUC at 30% missing values allowed.</ns0:p></ns0:div>
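The sensitivity analysis described in this section can be summarised as a small routine that repeatedly restricts the cohort to patients below a given fraction of missing score parameters and contrasts predicted with observed mortality. The sketch below is a hypothetical illustration; the column names and the demo values are assumptions, not the study data.

```python
import numpy as np
import pandas as pd

def missingness_sensitivity(df, max_missing_fractions=np.arange(0.0, 1.01, 0.1)):
    """df has one row per patient with columns (names illustrative only):
       'frac_missing'   - share of score parameters that could not be retrieved,
       'predicted_mort' - score-based mortality probability (0..1),
       'died'           - observed in-hospital death (0/1)."""
    rows = []
    for fmax in max_missing_fractions:
        sub = df[df["frac_missing"] <= fmax]
        if len(sub) == 0:
            continue
        rows.append({
            "max_missing_allowed": round(float(fmax), 1),
            "n_patients": len(sub),
            "predicted_mortality_%": 100 * sub["predicted_mort"].mean(),
            "observed_mortality_%": 100 * sub["died"].mean(),
        })
    out = pd.DataFrame(rows)
    out["difference_%"] = out["predicted_mortality_%"] - out["observed_mortality_%"]
    return out

# Tiny made-up example cohort
demo = pd.DataFrame({
    "frac_missing":   [0.0, 0.1, 0.3, 0.5, 0.8],
    "predicted_mort": [0.40, 0.20, 0.10, 0.05, 0.02],
    "died":           [1, 0, 0, 0, 0],
})
print(missingness_sensitivity(demo))
```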
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>The aim of this study was to investigate and compare various common mortality risk assessment scoring systems, namely PRISM, PRISM III, PIM, PIM2, PIM3, PELOD and PELOD2 in pediatric sepsis patients. In doing so, we also evaluated different time points for the score assessments, namely PICU admission and the day of C-reactive protein peak. Furthermore, we investigated the influence of missing parameters on the predictive power of the scores.</ns0:p></ns0:div>
<ns0:div><ns0:head>Comparison of the scores at admission</ns0:head><ns0:p>The mortality rate in our study was at 13.6% in the midfield of other PICUs in developed countries <ns0:ref type='bibr' target='#b41'>(Ruth et al. 2014)</ns0:ref>. The difference between the predicted and the actual mortality of the individual scores in our septic patient population is roughly comparable to that of other studies <ns0:ref type='bibr' target='#b2'>(Arias Lopez et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b7'>Dewi et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b12'>Hamshary et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b28'>Patki et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b37'>Qiu et al. 2017a;</ns0:ref><ns0:ref type='bibr' target='#b39'>Qureshi et al. 2007;</ns0:ref><ns0:ref type='bibr'>Taori et al. 2010)</ns0:ref>.</ns0:p><ns0:p>The PIM scores (PIM1, PIM2, PIM3) underestimate overall mortality as compared to actual mortality, as confirmed by other studies <ns0:ref type='bibr' target='#b28'>(Patki et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b37'>Qiu et al. 2017a;</ns0:ref><ns0:ref type='bibr'>Taori et al. 2010)</ns0:ref>. By contrast, the PRISM score, with its mortality prediction, gives quite a good comparison of the actual mortality of the entire population. This has also been confirmed in other studies <ns0:ref type='bibr' target='#b37'>(Qiu et al. 2017a;</ns0:ref><ns0:ref type='bibr'>Taori et al. 2010)</ns0:ref>. While PELOD showed a slight overprediction in mortality in our population, PELOD 2 showed a significant underprediction of the observed mortality. An even greater discrepancy was found in a study by Gonçalves et al. <ns0:ref type='bibr' target='#b10'>(Goncalves et al. 2015)</ns0:ref>.</ns0:p><ns0:p>Nevertheless, when looking at the ability to predict mortality for the individual patient, the PIM 2 score shows the best performance as reflected by highest AUC, closely followed by PIM and PIM3 but also by PRISM III and PELOD 2. Other studies also found a slightly higher AUC in PIM2 than in PIM <ns0:ref type='bibr' target='#b3'>(Brady et al. 2006;</ns0:ref><ns0:ref type='bibr' target='#b45'>Slater & Shann 2004</ns0:ref>). Even the good performance of PRISM III and PELOD 2 for the individual mortality prediction was also shown by Gonçalves et al. in a general critically ill pediatric population <ns0:ref type='bibr' target='#b10'>(Goncalves et al. 2015)</ns0:ref>.</ns0:p><ns0:p>We found that the PRISM score to be the worst performer (AUC 0.63) in our septic population. In contrast to our findings, a prospective study conducted in pediatric patient populations of specialist multidisciplinary ICUs showed the AUC (0.90) of the PRISM score to be clearly higher than our result <ns0:ref type='bibr' target='#b45'>(Slater & Shann 2004)</ns0:ref>. In another study of children with meningococcal sepsis, the PRISM score even outperformed the PIM score <ns0:ref type='bibr' target='#b22'>(Leteurtre et al. 2001)</ns0:ref>.</ns0:p><ns0:p>There are various ongoing discussions as to whether newer versions of the individual scores will improve the predictive value. While a multicenter study in Italy confirmed a significant improvement in PIM 3 compared to PIM 2 <ns0:ref type='bibr'>(Wolfler et al. 2016</ns0:ref><ns0:ref type='bibr'>), Tyagi et al. (Tyagi et al. 2018)</ns0:ref> found no relevant improvement between PIM 2 and PIM 3. 
We were able to determine an increase in the predictive value of the scores across successive versions, but the most recent versions (PIM 3 and PRISM IV) brought no further improvement.</ns0:p></ns0:div>
<ns0:div><ns0:head>Comparison of the scores recorded at the time of the C-reactive protein peaks</ns0:head><ns0:p>The scores are intended for a broad patient population, i.e. all kinds of diseases and conditions. For each score, a specific point in time or timeframe was determined, for which the best performance is to be expected. While the PRISM and PRISM III scores are calculated after 24 hours in-hospital, the PIM scores are computed within the first hour after admission. One drawback of a 24-hour assessment is that the patient may already be dead before the score can give a prognosis. In the case of an assessment made in the first hour, however, there is an inaccuracy factor regarding preclinical care. In seriously ill, well-cared-for and stabilized patients, a score may be deceptively low in value. As of version PIM 2, an attempt was made to compensate for this with an additional parameter ('main reason for ICU admission').</ns0:p><ns0:p>In septic patients there is a similar problem: in some cases sepsis was the reason for admission, while in other cases sepsis developed during the course of the hospital stay, possibly in postoperative patients <ns0:ref type='bibr' target='#b44'>(Sidhu et al. 2015;</ns0:ref><ns0:ref type='bibr'>Wang et al. 2018</ns0:ref>). In such a case, sepsis cannot be predicted at the time of admission and thus at the time of data collection. A score calculated during the most severe septic phase would therefore show better performance. Thus, our patients' scores were reassessed at peak C-reactive protein. This analysis revealed that with disease progression PRISM IV and PELOD 2 became significantly more precise in predicting mortality. We conclude that for PRISM IV and PELOD 2 the time at which the evaluation is performed is important for mortality prediction, while for the other scores the time of evaluation has no significant influence on their predictive ability. Like Leteurtre et al., we feel that the PELOD 2 score in particular serves well to monitor the progression of disease severity and predict outcome when evaluated regularly during the course of the disease <ns0:ref type='bibr' target='#b19'>(Leteurtre et al. 2015)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Comparison of scores for missing parameters</ns0:head><ns0:p>Although it is difficult, even in a prospective setting, to collect all the necessary data for creating the scores <ns0:ref type='bibr'>(Tibby et al. 2002)</ns0:ref>, it is even more difficult with a retrospective study design. Parameters are not recorded or only partially recorded, not available at the specified time, not collected, or lost due to incomplete documentation. However, this reflects the reality of everyday clinical practice. This problem has already been addressed by the developers of the PRISM score, who concluded that the missing values are often normal and therefore will hardly influence the score <ns0:ref type='bibr' target='#b31'>(Pollack et al. 1996a</ns0:ref>). The same assumption that missing values are normal was partially implemented in the scoring validation studies <ns0:ref type='bibr' target='#b11'>(Gorges et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b17'>Leclerc et al. 2017)</ns0:ref>. It was also incorporated into the PIM score, so that it is possible to specify missing data as such without any change in score points, which makes sense to a certain extent. For example, a lactate level that describes tissue hypoxia may not have been ordered by the treating physician because the patient's medical condition was not presumed to be that poor. The situation is similar for other parameters.</ns0:p><ns0:p>Nevertheless, this assumption might be misleading: for example, there is only a small blood volume available for laboratory testing, especially in young pediatric patients <ns0:ref type='bibr' target='#b48'>(Sztefko et al. 2013)</ns0:ref>.</ns0:p><ns0:p>This might be supported by a validation study, where the PELOD 2 and PRISM III scores show decreased performance when it is assumed that the unavailable data are within normal ranges <ns0:ref type='bibr' target='#b11'>(Gorges et al. 2018)</ns0:ref>.</ns0:p><ns0:p>We were able to show that only in a few cases was it possible to retrospectively collect 100% of the data for scoring. Here only a small patient population remained, so that the analyses could no longer be performed validly. However, it was seen that patients with a low percentage of missing values had high mortality.</ns0:p><ns0:p>With increasing data availability, predicted and actual mortality approached each other, similar to what Agor and his team found in their study of the impact of missing values on adult scores (Agor et al. 2019). For our patients, the predicted and the actual mortality were quite close, except for the PRISM, PELOD and PIM3 scores, where the difference between the predicted and the actual mortality fluctuated, especially when only a few missing values were allowed.</ns0:p><ns0:p>The most stable scores in terms of missing values, defined by the maximum deviation between predicted and actual mortality, were PRISM III, PIM and PIM 2. Here, it has to be mentioned that, when a high percentage of missing values is allowed, mortality is underestimated by the scores, while with increasing data availability the scores tend to overestimate. The exceptions are the PELOD score, for which the pattern is reversed, and the PELOD 2 score, which consistently overestimates mortality slightly.</ns0:p><ns0:p>The AUC of the scores, however, changes only minimally with the number of missing values allowed.
The smallest differences in AUC across the allowed numbers of missing values were seen for the PIM and PIM2 scores as well as the PELOD 2 score, which thus proved to be stable with respect to missing values.</ns0:p></ns0:div>
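For illustration only, the comparison of predicted versus observed mortality and of AUC across different allowed amounts of missing data could be organized as in the short Python sketch below; the table layout, the file name and the column names are hypothetical, and this is not the analysis code used in the study.

import pandas as pd
from sklearn.metrics import roc_auc_score

# hypothetical patient table: fraction of retrievable score items, predicted mortality, outcome
df = pd.read_csv("patients.csv")  # columns assumed: available_fraction, predicted_mortality, died

for min_available in (0.5, 0.75, 0.9, 1.0):
    sub = df[df["available_fraction"] >= min_available]
    if sub["died"].nunique() < 2:
        continue  # AUC is undefined without both outcomes present
    auc = roc_auc_score(sub["died"], sub["predicted_mortality"])
    print(min_available, len(sub),
          round(sub["predicted_mortality"].mean(), 3),
          round(sub["died"].mean(), 3),
          round(auc, 3))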
<ns0:div><ns0:head>Limitations</ns0:head><ns0:p>The retrospective study design is limited in terms of score performance because some patients did not have all the data needed to calculate the scores. Since this is a single-center study, the number of patients available for a valid statistical analysis is low overall, especially in the group of patients with 100% availability of the required data. In our study we assessed only the average effect of the missing values and not the weighting of the individual missing parameters necessary for the score.</ns0:p><ns0:p>On the other hand, it was possible for us to depict a realistic scenario of data availability for score assessment in a retrospective study design.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>presented as medians (25 th -75 th percentile) b Estimated median difference c Differences between survivors and non-survivors assessed with the Wilcoxon rank sum test d Number of missing measurements in survivors/non-survivors Table 3(on next page) ROC analysis of scores predicting mortality a Results of the score analyses for mortality prediction at hospital admission and on the day with the highest level of C-reactive protein (CRP). The dark grey fields show the AUC with 95% CI. The numbers in the fields above the dark grey fields give the estimated difference in ROC curves with 95% CI and the fields below show the correlation with the corresponding ROC curves. Red p values indicate a significant difference. Only completed cases with all scores available are included.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,306.37,525.00,257.25' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 (on next page) Characteristics</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>a of patients stratified for survival and non-survival</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 . Characteristics a of patients stratified for survival and non-survival Total (n=398) Non-survivors (n=54) Survivors (n=344) Estimate with 95% CI b p value c</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Binary data are presented as no./total no. (%), continuous data as medians (25 th -75 th percentile), means for predicted mortality b Odds ratio for binary variables and estimated median difference for continuous variables, mean difference for predicted mortality c Differences between survivors and non-survivors assessed with Fisher's exact test for binary variables and the Wilcoxon rank sum test for continuous variables, the Welch two sample T test for predicted mortality d For one survivor the exact age in months is not known</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Female gender</ns0:cell><ns0:cell>176/398 (44.2%)</ns0:cell><ns0:cell>22/54 (40.7%)</ns0:cell><ns0:cell>154/344 (44.8%)</ns0:cell><ns0:cell>1.18 (0.63 to 2.22)</ns0:cell><ns0:cell>0.6591</ns0:cell></ns0:row><ns0:row><ns0:cell>Age d (months)</ns0:cell><ns0:cell>29.6 (3.83-105.64)</ns0:cell><ns0:cell cols='3'>22.41 (1.22-88.99) 30.68 (4.04-107.54) -1.1 (-10.81 to 5.3)</ns0:cell><ns0:cell>0.4817</ns0:cell></ns0:row><ns0:row><ns0:cell>Neonates < 1 month</ns0:cell><ns0:cell>58/396 (14.6%)</ns0:cell><ns0:cell>12/54 (22.2%)</ns0:cell><ns0:cell>46/342 (13.5%)</ns0:cell><ns0:cell>0.54 (0.26 to 1.22)</ns0:cell><ns0:cell>0.0988</ns0:cell></ns0:row><ns0:row><ns0:cell>Infants 1-3 months</ns0:cell><ns0:cell>35/396 (8.8%)</ns0:cell><ns0:cell>4/54 (7.4%)</ns0:cell><ns0:cell>31/342 (9.1%)</ns0:cell><ns0:cell>1.25 (0.41 to 5.06)</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Predicted mortality (%) at ICU admission</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>PRISM</ns0:cell><ns0:cell>10.53 (1.54-7.55)</ns0:cell><ns0:cell>19.79 (2.3-20.75)</ns0:cell><ns0:cell>9.17 (1.5-7)</ns0:cell><ns0:cell>10.62 (1.3 to 19.94)</ns0:cell><ns0:cell>0.0264</ns0:cell></ns0:row><ns0:row><ns0:cell>PRISM III</ns0:cell><ns0:cell>6.12 (0.05-1.73)</ns0:cell><ns0:cell>17.76 (0.39-8.62)</ns0:cell><ns0:cell>4.36 (0.05-1.04)</ns0:cell><ns0:cell>13.4 (2.92 to 23.87)</ns0:cell><ns0:cell>0.0135</ns0:cell></ns0:row><ns0:row><ns0:cell>PRISM IV</ns0:cell><ns0:cell>3.59 (0.05-0.71)</ns0:cell><ns0:cell>12.09 (0.12-2.71)</ns0:cell><ns0:cell>2.31 (0.05-0.53)</ns0:cell><ns0:cell>9.79 (0.82 to 18.76)</ns0:cell><ns0:cell>0.0333</ns0:cell></ns0:row><ns0:row><ns0:cell>PIM</ns0:cell><ns0:cell>6.28 (1-5)</ns0:cell><ns0:cell>16.06 (2.84-23)</ns0:cell><ns0:cell>4.83 (1-4)</ns0:cell><ns0:cell cols='2'>11.23 (4.42 to 18.04) 0.0017</ns0:cell></ns0:row><ns0:row><ns0:cell>PIM 2</ns0:cell><ns0:cell>6.23 (0.75-4.01)</ns0:cell><ns0:cell>16.64 (2.8-17.1)</ns0:cell><ns0:cell>4.69 (0.75-3.7)</ns0:cell><ns0:cell cols='2'>11.95 (4.46 to 19.44) 0.0024</ns0:cell></ns0:row><ns0:row><ns0:cell>PIM 3</ns0:cell><ns0:cell>7.72 (1.11-4.16)</ns0:cell><ns0:cell>18.93 (2.88-15.3)</ns0:cell><ns0:cell>6.06 (1.11-3.53)</ns0:cell><ns0:cell cols='2'>12.86 (4.28 to 21.44) 0.0041</ns0:cell></ns0:row><ns0:row><ns0:cell>PELOD</ns0:cell><ns0:cell>16.24 (0.96-16.25)</ns0:cell><ns0:cell>32.23 (1.2-79.58)</ns0:cell><ns0:cell>13.9 (0.96-16.25)</ns0:cell><ns0:cell>18.33 (6.17 to 30.5)</ns0:cell><ns0:cell>0.004</ns0:cell></ns0:row><ns0:row><ns0:cell>PELOD 2</ns0:cell><ns0:cell>5.39 (0.13-2.21)</ns0:cell><ns0:cell>18.21 (0.87-17.59)</ns0:cell><ns0:cell>3.48 (0.13-1.39)</ns0:cell><ns0:cell cols='2'>14.72 
(5.88 to 23.56) 0.0016</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Diagnosed underlying disease</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Central nervous system</ns0:cell><ns0:cell>81/364 (22.3%)</ns0:cell><ns0:cell>15/44 (34.1%)</ns0:cell><ns0:cell>66/320 (20.6%)</ns0:cell><ns0:cell>0.5 (0.24 to 1.07)</ns0:cell><ns0:cell>0.0532</ns0:cell></ns0:row><ns0:row><ns0:cell>Cardiovascular</ns0:cell><ns0:cell>78/361 (21.6%)</ns0:cell><ns0:cell>14/42 (33.3%)</ns0:cell><ns0:cell>64/319 (20.1%)</ns0:cell><ns0:cell>0.5 (0.24 to 1.1)</ns0:cell><ns0:cell>0.0704</ns0:cell></ns0:row><ns0:row><ns0:cell>Digestive tract</ns0:cell><ns0:cell>67/364 (18.4%)</ns0:cell><ns0:cell>13/44 (29.5%)</ns0:cell><ns0:cell>54/320 (16.9%)</ns0:cell><ns0:cell>0.49 (0.23 to 1.08)</ns0:cell><ns0:cell>0.0595</ns0:cell></ns0:row><ns0:row><ns0:cell>Respiratory system</ns0:cell><ns0:cell>97/362 (26.8%)</ns0:cell><ns0:cell>17/43 (39.5%)</ns0:cell><ns0:cell>80/319 (25.1%)</ns0:cell><ns0:cell>0.51 (0.25 to 1.06)</ns0:cell><ns0:cell>0.0649</ns0:cell></ns0:row><ns0:row><ns0:cell>Oncologic</ns0:cell><ns0:cell>51/365 (14%)</ns0:cell><ns0:cell>9/43 (20.9%)</ns0:cell><ns0:cell>42/322 (13%)</ns0:cell><ns0:cell>0.57 (0.24 to 1.44)</ns0:cell><ns0:cell>0.1636</ns0:cell></ns0:row><ns0:row><ns0:cell>Kidney</ns0:cell><ns0:cell>45/362 (12.4%)</ns0:cell><ns0:cell>10/44 (22.7%)</ns0:cell><ns0:cell>35/318 (11%)</ns0:cell><ns0:cell>0.42 (0.18 to 1.04)</ns0:cell><ns0:cell>0.0467</ns0:cell></ns0:row><ns0:row><ns0:cell>Liver</ns0:cell><ns0:cell>31/363 (8.5%)</ns0:cell><ns0:cell>3/44 (6.8%)</ns0:cell><ns0:cell>28/319 (8.8%)</ns0:cell><ns0:cell>1.31 (0.38 to 7.05)</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell>Skin</ns0:cell><ns0:cell>18/361 (5%)</ns0:cell><ns0:cell>1/43 (2.3%)</ns0:cell><ns0:cell>17/318 (5.3%)</ns0:cell><ns0:cell cols='2'>2.37 (0.35 to 101.43) 0.7078</ns0:cell></ns0:row><ns0:row><ns0:cell>Other diagnoses</ns0:cell><ns0:cell>108/367 (29.4%)</ns0:cell><ns0:cell>16/44 (36.4%)</ns0:cell><ns0:cell>92/323 (28.5%)</ns0:cell><ns0:cell>0.7 (0.35 to 1.45)</ns0:cell><ns0:cell>0.2931</ns0:cell></ns0:row></ns0:table><ns0:note>a</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 . ROC analysis of scores predicting mortality a</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
</ns0:body>
" | "Reviewer 1
Basic reporting
In their study “Comparison of pediatric scoring systems for mortality in septic patients and the impact of missing information on their predictive power” Christian Niederwanger and his colleagues evaluated different scoring systems used to assess and monitor children with sepsis. In their retrospective study, the authors evaluated the medical records of 398 patients under 18 years of age with sepsis and applied the following scoring systems: PRISM, PRISM III, PRISM IV, PIM, PIM2, PIM3, PELOD, PELOD 2. They found that in particular the PIM scores are helpful in assessing the severity and that missing data do not seem to compromise the prognostic power.
The article is structured according to the guideline of the journal. The literature cited is extensive. The abstract summarizes the important results of the study. The introduction is brief and informative and the statistical methods seem appropriate. Limitations are addressed. Figures and tables are informative.
Experimental design
The research questions are well defined. The experimental design (retrospective review of existing medical data of a single institution and application of established scoring systems) is straightforward but, as alluded to in the limitations, flawed by the nature of a retrospective design and in particular by incomplete data sets. It is interesting that the authors tackle this problem head-on from the start and included the problem of incomplete data sets in their statistical analyses.
As far as I can tell, most scoring systems have been evaluated in much larger studies and in a prospective fashion, so the overall novelty of the findings is limited.
Validity of the findings
The authors provided all the necessary data and discussed their findings in the context of the existing literature. Open questions are addressed.
The conclusion is very lengthy and should be shortened
You are right in saying that our conclusion is very lengthy, so we have shortened it; the conclusion now addresses the main findings of the study.
Comments for the author
I have several further remarks:
• The article is overall well written and to the point apart from several wording issue and should be read by a native speaker.
Thank you for your suggestion. A native speaker has checked the final manuscript.
(see Introduction for example, line 81: However, some patients develop certain life-threatening clinical pictures- please change to conditions etc;
Thank you for your advice. We have changed some expressions that are not idiomatic in English, and the manuscript was proofread again by a native speaker.
line 100 Please change as follows: Also, the optimal timing for determining these scores in the clinical course picture of sepsis was evaluated in addition to the influence of the lack of data on the predictive value of the different scores.
Thank you for your help. We have changed the sentences following your suggestion.
• Methods: the definition of children with sepsis should be briefly described.
We have inserted a short description of the sepsis definition for children by Goldstein et al 2005.
• The discussion is lengthy and could be more concise
We agree that the discussion should be more concise. In order to do so we have revised and shortened the discussion and we hope that it is more compact and on point.
• Line 248: the authors compare the findings of their study with a large prospective Australian cohort study (Slater and Shann, 2004), which is not entirely correct and should be rephrased.
We have reworded the citation, so that the comparison is not misleading anymore.
Reviewer 2
Basic reporting
no comment
Experimental design
no comment
Validity of the findings
no comment
Comments for the author
In this study, the authors evaluated and compared the prognostic ability of several common pediatric scoring systems to determine which is the most applicable score for pediatric sepsis patients from timing of disease survey and insensitivity to missing data. The main findings were:
They found that the PIM scores show comparatively good performance, are stable as far as timing of the disease survey is concerned, and are also relatively stable in terms of missing parameters, and that PELOD 2 is best suited for monitoring the clinical course.
The work in general has been well carried out, and the results give a clear hypothesis regarding the pediatric scoring systems.
Reviewer: Libing Zhou
Basic reporting
no comment
Experimental design
Please explain the relationship between peak CRP and disease severity.
Please provide a flow chart in the methods section.
You are right that adding an explanation of the relationship between CRP and disease severity helps the reader understand the study design. We have therefore added a description of the relationship between CRP and disease severity to the methods section, in order to explain why CRP is a good marker of disease severity. We also provided a flow chart of the study design, which is available as a supplemental file.
Validity of the findings
Does the weight of missing data affect the predicted results more than the number of missing data?
You are entirely right in addressing this issue; this would certainly make a difference. In our opinion, this should be addressed in separate studies for every single score or score group. Which parameters typically go unmeasured or uncollected in the real-world situation, and which of these parameters impact the individual scores the most, would be really interesting endpoints.
In order to keep the complexity of our study low, and thus understandable for the average reader, we decided to assess only the average effect of the missing values and not the effect of individual missing values. Otherwise it would not have been possible to compare all the different scores addressed in this study. But we hope in the future to go a step further and evaluate the impact of the type of missing value.
In any case, we have added this limitation to the limitations section of the manuscript.
Comments for the author
Overall the quality of the article is fairly good and provides the effect of missing data on the prediction results, which has clinical practical significance.
" | Here is a paper. Please give your review comments after reading it. |
662 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Host miRNAs are known as important regulators of virus replication and pathogenesis.</ns0:p><ns0:p>They can interact with various viruses through several possible mechanisms including direct binding of viral RNA. Identification of human miRNAs involved in coronavirus-host interplay becomes important due to the ongoing COVID-19 pandemic. In this paper we performed computational prediction of high-confidence direct interactions between miRNAs and seven human coronavirus RNAs. As a result, we identified six miRNAs (miR-21-3p, miR-195-5p, miR-16-5p, miR-3065-5p, miR-424-5p and miR-421) with high binding probability across all analyzed viruses. Further bioinformatic analysis of binding sites revealed high conservation of miRNA binding regions within the RNAs of human coronaviruses and their strains. In order to discover the entire miRNA-virus interplay, we further analyzed the lung miRNome of SARS-CoV-infected mice using publicly available miRNA sequencing data. We found that miR-21-3p has the highest probability of binding the human coronavirus RNAs and is dramatically up-regulated in mouse lungs during infection induced by SARS-CoV.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Coronavirus disease 2019 (COVID-19) caused by severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) acquired pandemic status on March 11, 2020 making a dramatic impact on the health of millions of people <ns0:ref type='bibr' target='#b66'>(Zhou et al., 2020a;</ns0:ref><ns0:ref type='bibr' target='#b47'>Remuzzi and Remuzzi, 2020)</ns0:ref>. Lung failure induced by the acute respiratory distress syndrome (ARDS) is the most common cause of death during viral infection <ns0:ref type='bibr' target='#b61'>(Xu et al., 2020)</ns0:ref>.</ns0:p><ns0:p>MicroRNAs (miRNAs) are short (22 nucleotides in average) non-coding RNAs which appear to regulate at least one-third of all human protein-coding genes <ns0:ref type='bibr' target='#b42'>(Nilsen, 2007)</ns0:ref>. Namely, in association with a set of proteins miRNA forms an RNA-induced silencing complex (RISC) and binds 3'-UTR of a target mRNA. The latter promotes translation repression or even mRNA degradation <ns0:ref type='bibr' target='#b8'>(Carthew and Sontheimer, 2009)</ns0:ref>. Multiple works suggest the critical function of miRNAs in the pathogenesis of various human diseases. Thus, alteration of miRNAs expression is observed during different types of cancer <ns0:ref type='bibr' target='#b13'>(Di Leva et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b52'>Shkurnikov et al., 2019)</ns0:ref>, cardiovascular <ns0:ref type='bibr' target='#b49'>(Schulte et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b43'>Nouraee and Mowla, 2015)</ns0:ref> and neurological diseases <ns0:ref type='bibr' target='#b29'>(Leidinger et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b11'>Christensen and Schratt, 2009)</ns0:ref>. Other studies have suggested that miRNAs can also participate in intercellular communication <ns0:ref type='bibr' target='#b56'>(Turchinovich et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b5'>Baranova et al., 2019)</ns0:ref>.</ns0:p><ns0:p>There are numerous reports consistently demonstrating the role of miRNAs in viral infections. One of the research directions deals with miRNAs which can target viral RNAs. Since RNA of a single-stranded RNA virus (ssRNA virus) is not structurally distinguishable from host mRNA, there are no barriers for miRNA to bind it. In contrast to conventional binding to 3'-UTR of target mRNA, host miRNAs often bind to the coding region or 5'-UTR of viral RNA <ns0:ref type='bibr' target='#b7'>(Bruscella et al., 2017)</ns0:ref>. Besides translational repression, such interactions can also enhance viral replication or purposefully alter the amount of free miRNAs in a cell <ns0:ref type='bibr' target='#b55'>(Trobaugh and Klimstra, 2017)</ns0:ref>. For example, miR-122 can bind to 5'-UTR of the hepatitis C virus (HCV) RNA which increases RNA stability and viral replication since it becomes protected from a host exonuclease activity <ns0:ref type='bibr' target='#b50'>(Shimakami et al., 2012)</ns0:ref>. Another report contains evidence that miR-17 binding PeerJ reviewing PDF | (2020:07:50789:1:0:NEW 18 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed sites on RNA of bovine viral diarrhea virus (BVDV) seek to decrease level of free miR-17 in the cell, therefore mediating expression of miRNA targets <ns0:ref type='bibr' target='#b48'>(Scheel et al., 2016)</ns0:ref>.</ns0:p><ns0:p>Other research groups focused on miRNAs altering their expression in response to the viral infection. 
Specifically, <ns0:ref type='bibr'>Liu et al.</ns0:ref> showed that proteins of avian influenza virus H5N1 cause upregulation of miR-200c-3p in the lungs <ns0:ref type='bibr' target='#b32'>(Liu et al., 2017)</ns0:ref>. This miRNA targets the 3'-UTR of ACE2 mRNA, therefore decreasing its expression. On the other hand, it was shown that a decrease in ACE2 expression is critical in ARDS pathogenesis <ns0:ref type='bibr' target='#b22'>(Imai et al., 2005)</ns0:ref>. Therefore, the H5N1 virus promotes miRNA-mediated ACE2 silencing to induce ARDS. Recent reports suggest several other host miRNAs which can potentially regulate ACE2 and TMPRSS2 expression and which may also be important during SARS-CoV-2 infection due to the crucial role of these enzymes in virus cell entry <ns0:ref type='bibr' target='#b41'>(Nersisyan et al., 2020)</ns0:ref>. Another example was given by <ns0:ref type='bibr'>Choi et al.</ns0:ref>, who studied miRNAs altering their expression during influenza A virus infection <ns0:ref type='bibr' target='#b10'>(Choi et al., 2014)</ns0:ref>. It was shown that several miRNAs which play an important role in cellular processes, including immune response and cell death, exhibited significant expression differences in infected mice. In the same paper, the authors showed that treatment with the respective anti-miRNAs has an effective therapeutic action.</ns0:p><ns0:p>In a recent paper, Fulzele and co-authors found hundreds of miRNAs which can potentially bind to SARS-CoV-2 RNA as well as the RNA of the highly similar SARS-CoV coronavirus <ns0:ref type='bibr' target='#b17'>(Fulzele et al., 2020)</ns0:ref>.</ns0:p><ns0:p>However, this large miRNA list should be narrowed down to high-confidence interactions that can then be experimentally validated. In this work we hypothesized that there can be miRNA-mediated virus-host interplay mechanisms common to several human coronaviruses. For that purpose, we used bioinformatic tools to predict miRNA binding sites within human coronavirus RNAs, including those inducing severe acute respiratory syndrome (SARS-CoV-2, SARS-CoV and MERS-CoV) as well as other human coronaviruses causing the common cold, namely, HCoV-OC43, HCoV-NL63, HCoV-HKU1 and HCoV-229E. To find and explore more complex regulatory mechanisms, we also analyzed the miRNome of mouse lungs during SARS-CoV infection to find miRNAs whose expression was significantly altered upon viral infection.</ns0:p></ns0:div>
<ns0:div><ns0:head>MATERIALS AND METHODS</ns0:head></ns0:div>
<ns0:div><ns0:head>Prediction of miRNA binding sites</ns0:head><ns0:p>To find miRNAs which can bind to viral RNAs we used miRDB v6.0 (Chen and Wang, 2020) and TargetScan v7.2 <ns0:ref type='bibr' target='#b0'>(Agarwal et al., 2015)</ns0:ref>. Target predictions were filtered according to their miRDB target scores, threshold value was set to 75 as in e.g. <ns0:ref type='bibr' target='#b40'>(Nakano et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b69'>Zhuang et al., 2019)</ns0:ref>. Viral genomes and their annotations were downloaded from the NCBI Virus <ns0:ref type='bibr' target='#b19'>(Hatcher et al., 2017)</ns0:ref> under the following accession numbers:</ns0:p><ns0:formula xml:id='formula_0'>• NC 045512.2 (SARS-CoV-2);</ns0:formula><ns0:p>• NC 004718.3 (SARS-CoV);</ns0:p><ns0:p>• NC 019843.3 (MERS-CoV);</ns0:p><ns0:p>• NC 006213.1 (HCoV-OC43);</ns0:p><ns0:p>• NC 005831.2 (HCoV-NL63);</ns0:p><ns0:p>• NC 006577.2 (HCoV-HKU1);</ns0:p><ns0:formula xml:id='formula_1'>• NC 002645.1 (HCoV-229E).</ns0:formula><ns0:p>To analyze miRNA-mRNA interactions, we also used miRTarBase v8 <ns0:ref type='bibr' target='#b20'>(Huang et al., 2020)</ns0:ref>. DIANA-miRPath v3.0 was employed for KEGG pathway analysis <ns0:ref type='bibr' target='#b59'>(Vlachos et al., 2015)</ns0:ref>.</ns0:p></ns0:div>
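As a reading aid, the selection step described above can be sketched in Python (the language the authors state they used for their pipeline); the file names and column labels below are hypothetical stand-ins for exported miRDB prediction tables, not part of the published code.

import pandas as pd

SCORE_CUTOFF = 75  # miRDB target score threshold used in the text

viruses = ["SARS-CoV-2", "SARS-CoV", "MERS-CoV", "HCoV-OC43",
           "HCoV-NL63", "HCoV-HKU1", "HCoV-229E"]

passing_sets = []
for virus in viruses:
    # hypothetical per-virus table with columns: miRNA, target_score
    preds = pd.read_csv(f"miRDB_predictions_{virus}.csv")
    passing_sets.append(set(preds.loc[preds["target_score"] > SCORE_CUTOFF, "miRNA"]))

# miRNAs exceeding the cutoff in every analyzed coronavirus
shared_mirnas = set.intersection(*passing_sets)
print(sorted(shared_mirnas))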
<ns0:div><ns0:head>RNA sequencing data and differential expression analysis</ns0:head><ns0:p>MiRNA sequencing (miRNA-seq) data from The Cancer Genome Atlas Lung Adenocarcinoma (TCGA-LUAD) project <ns0:ref type='bibr' target='#b12'>(Collisson et al., 2014)</ns0:ref> was used to quantify miRNA expression in the human lungs. Specifically, the said data was downloaded from GDC Data Portal (https://portal.gdc.cancer.gov/) and included miRNA expression table with its columns correspond to 46 normal lung tissues and rows associated with miRNAs (note that only small fraction of TCGA-LUAD cancer samples had analyzed matched normal tissues). We used log 2 -transformed Reads Per Million mapped reads (RPM) as a miRNA expression unit.</ns0:p><ns0:p>Two miRNA-seq datasets, GSE36971 <ns0:ref type='bibr' target='#b45'>(Peng et al., 2011)</ns0:ref> and GSE90624 <ns0:ref type='bibr' target='#b38'>(Morales et al., 2017)</ns0:ref> Manuscript to be reviewed whole lung lobes). Raw FASTQ files were downloaded from the Sequence Read Archive <ns0:ref type='bibr' target='#b31'>(Leinonen et al., 2011)</ns0:ref>. Adapters were trimmed via Cutadapt 2.10 (Martin, 2011), miRNA expression was quantified by miRDeep2 <ns0:ref type='bibr' target='#b16'>(Friedländer et al., 2012)</ns0:ref> using GRCm38.p6 mouse genome (release M25) from GENCODE <ns0:ref type='bibr' target='#b15'>(Frankish et al., 2019)</ns0:ref> and miRBase 22.1 <ns0:ref type='bibr' target='#b25'>(Kozomara et al., 2019)</ns0:ref>. Gene expression profile of SARS-CoV infected mouse lungs was downloaded in form of count matrix from the Gene Expression Omnibus (GEO) <ns0:ref type='bibr' target='#b6'>(Barrett et al., 2013)</ns0:ref> under GSE52405 accession number <ns0:ref type='bibr' target='#b24'>(Josset et al., 2014)</ns0:ref>. Differential expression analysis was performed with DESeq2 <ns0:ref type='bibr' target='#b35'>(Love et al., 2014)</ns0:ref>. The results were filtered using 0.05 threshold on adjusted p-value and 1.5 on fold change (linear scale).</ns0:p></ns0:div>
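A minimal sketch of the filtering applied to the DESeq2 output described above; the results file name is a hypothetical export, the column names follow the standard DESeq2 results table, and the thresholds are those stated in the text.

import numpy as np
import pandas as pd

res = pd.read_csv("deseq2_results.csv")  # hypothetical export; columns: id, log2FoldChange, padj

PADJ_CUTOFF = 0.05
FC_CUTOFF = 1.5  # linear-scale fold change threshold from the text

significant = res[
    (res["padj"] < PADJ_CUTOFF)
    & (np.abs(res["log2FoldChange"]) > np.log2(FC_CUTOFF))
]
print(significant.sort_values("padj").head())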
<ns0:div><ns0:head>Sequence alignment</ns0:head><ns0:p>Multiple Sequence Alignment (MSA) of viral genomic sequences was done using Kalign 2.04 <ns0:ref type='bibr' target='#b28'>(Lassmann et al., 2009)</ns0:ref>. Two MSA series were performed. In the first one we aligned seven human coronavirus genomes. In the second one different coronavirus strains were aligned for each of analyzed viruses. All genomes available on the NCBI Virus were used for SARS-CoV, MERS-CoV, HCoV-OC43, HCoV-NL63, <ns0:ref type='bibr'>253,</ns0:ref><ns0:ref type='bibr'>139,</ns0:ref><ns0:ref type='bibr'>58,</ns0:ref><ns0:ref type='bibr'>39 and 28 genomes,</ns0:ref><ns0:ref type='bibr'>respectively)</ns0:ref>. For SARS-CoV-2 thousand genomes were randomly selected to preserve the percentage of samples from each country.</ns0:p><ns0:p>GISAID clade annotation <ns0:ref type='bibr' target='#b14'>(Elbe and Buckland-Merrett, 2017)</ns0:ref> was obtained for 956 SARS-CoV-2 genomes (annotation was missing for other genomes). For each virus we established the mapping between alignment and genomic coordinates. With the use of this mapping, miRNA seed region binding positions within viral RNAs were placed on the alignment.</ns0:p></ns0:div>
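The mapping between genomic and alignment coordinates mentioned above can be illustrated with a small Python sketch; the toy gapped sequence and the interval below are invented for the example and are not taken from the paper's data.

def genome_to_alignment_map(aligned_seq):
    """Map 0-based genomic positions to alignment columns for one gapped sequence."""
    mapping, genome_pos = {}, 0
    for col, char in enumerate(aligned_seq):
        if char != "-":          # gap columns consume no genomic position
            mapping[genome_pos] = col
            genome_pos += 1
    return mapping

aligned = "AC--GTTGA-C"          # toy gapped sequence from an MSA
site_start, site_end = 2, 6      # toy genomic coordinates of a predicted seed match
m = genome_to_alignment_map(aligned)
print(m[site_start], m[site_end])  # alignment columns of the site boundaries (4 and 8 here)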
<ns0:div><ns0:head>Data and code availability</ns0:head><ns0:p>All code was written in Python 3 programming language with extensive use of NumPy <ns0:ref type='bibr' target='#b57'>(Van Der Walt et al., 2011) and</ns0:ref><ns0:ref type='bibr'>Pandas (McKinney, 2010)</ns0:ref> modules. Statistical analysis was performed using the SciPy stats <ns0:ref type='bibr' target='#b58'>(Virtanen et al., 2020)</ns0:ref>, plots were constructed using the Seaborn and Matplotlib <ns0:ref type='bibr' target='#b21'>(Hunter, 2007)</ns0:ref>.</ns0:p><ns0:p>MSA was visualized using Unipro UGENE <ns0:ref type='bibr' target='#b44'>(Okonechnikov et al., 2012)</ns0:ref>. All used data and source codes are available on GitHub (https://github.com/s-a-nersisyan/host-miRNAs-vs-coronaviruses).</ns0:p></ns0:div>
<ns0:div><ns0:head>RESULTS</ns0:head></ns0:div>
<ns0:div><ns0:head>Human coronavirus RNAs have numerous common host miRNA binding sites</ns0:head><ns0:p>To identify human miRNAs that may bind to RNAs of human coronaviruses we used two classical miRNA target prediction tools: miRDB and TargetScan. TargetScan results can be ranked with different seed-region binding types while miRDB results can be ranked with so-called 'target score' associated with the probability of successful binding. Interestingly, for each of viruses TargetScan predicted 2-3 times higher number of miRNAs, while 80-85% miRNAs predicted by miRDB were predicted by TargetScan too (for the summary on the number of miRNAs predicted for each of viral genomes see Table <ns0:ref type='table' target='#tab_0'>S1</ns0:ref>).</ns0:p><ns0:p>We made a list of 19 miRNAs potentially targeting multiple viral RNAs by selecting miRNAs with miRDB target scores greater than 75 in all analyzed viruses (Table <ns0:ref type='table'>S2</ns0:ref>). For further analysis, we selected only high confidence miRNAs according to miRBase, namely, hsa-miR-21-3p, hsa-miR-195-5p, hsa-miR-16-5p, hsa-miR-3065-5p, hsa-miR-424-5p and hsa-miR-421. According to TargetScan, all 'guide' strand miRNAs except hsa-miR-3065-5p were conserved among species including miR-16-5p/195-5p/424-5p family with the shared seed sequence. Despite being a 'passenger' strand, hsa-miR-21-3p was shown to be functionally active and conserved over the mammalian evolution <ns0:ref type='bibr'>(Báez-Vega et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b34'>Lo et al., 2013)</ns0:ref>. For target scores of selected miRNAs as well as corresponding hierarchical clustering of viruses see Figure <ns0:ref type='figure' target='#fig_1'>1A</ns0:ref>. As it can be seen, such clustering grouped together SARS-CoV and SARS-CoV-2 as well as HCoV-229E and HCoV-NL63 which can be also observed when clustering is performed based on viral genomic sequences similarity <ns0:ref type='bibr' target='#b68'>(Zhou et al., 2020b)</ns0:ref>.</ns0:p><ns0:p>Six identified miRNAs showed similar functional patterns. Namely, KEGG pathway analysis of experimentally validated target genes revealed 54 enriched terms including pathways involved in pathogenesis of lung and several other cancers, viral infections as well as signaling pathways such as p53, TGF-β and FoxO (see Table <ns0:ref type='table'>S3</ns0:ref>). In order to assess which of these miRNAs could demonstrate activity in human lungs, we analyzed miRNA-seq data from TCGA-LUAD project. Two of the said miRNAs demonstrated relatively high expression (see Figure <ns0:ref type='figure' target='#fig_1'>1B</ns0:ref>). Specifically, hsa-miR-21-3p and hsa-miR-16-5p corresponded to top-5% of highly expressed miRNAs according to their mean expression level taken across all samples.</ns0:p></ns0:div>
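The virus clustering shown in Figure 1A can be approximated with a few lines of Python; the input file is a hypothetical miRNA-by-virus matrix of miRDB target scores, and the linkage method and metric are our assumptions, since the paper does not specify them.

import pandas as pd
from scipy.cluster.hierarchy import linkage, dendrogram

scores = pd.read_csv("target_scores.csv", index_col=0)  # rows: miRNAs, columns: viruses

# cluster viruses by the similarity of their miRNA target-score profiles
Z = linkage(scores.T.values, method="average", metric="euclidean")
leaf_order = dendrogram(Z, labels=list(scores.columns), no_plot=True)["ivl"]
print(leaf_order)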
<ns0:div><ns0:head>Viral binding sites of miRNAs are conserved across different coronaviruses and their strains</ns0:head><ns0:p>Each of identified miRNAs had dozens of binding regions within analyzed viral RNAs (see Table <ns0:ref type='table'>S4</ns0:ref>).</ns0:p><ns0:p>Interestingly, the peak number of hsa-miR-16-5p/195-5p/424-5p binding positions fell on SARS-CoV and SARS-CoV-2, while for other miRNAs the most enriched virus was HCoV-NL63. To go deeper and analyze mutual arrangement of these sites we performed multiple sequence alignment on seven analyzed genomes and mapped the predicted miRNA binding positions from individual genomes to the obtained alignment. Further, for each binding site mapped to the alignment we calculated a number of viruses sharing that particular miRNA binding site. Positions common for two or more viruses were utilized in the downstream analysis.</ns0:p><ns0:p>In general, different miRNAs demonstrated dissimilar patterns of viral binding regions (for summary information see Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>). In particular, hsa-miR-21-3p and hsa-miR-421 had positions on the alignment specific to six out of seven considered coronaviruses (see Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>). Two most enriched binding positions of hsa-miR-16-5p/195-5p/424-5p family were common for five viruses, while maximum number of viruses sharing binding regions of hsa-miR-3065-5p was equal to three. Interestingly, most binding sites obtained for all considered miRNAs were found within nonstructural proteins located in polyprotein 1ab coding region (89%), about 8% of positions were located within spike protein while the rest was spread over N and M proteins. Detailed information is given in Table <ns0:ref type='table'>S5</ns0:ref>. To group coronaviruses based on the probability of sharing common miRNA binding positions, we calculated the number of matching positions in the alignment for each miRNA and pair of viruses (see Figure <ns0:ref type='figure' target='#fig_1'>S1</ns0:ref>). Then, per each miRNA this data was normalized by the overall number of binding positions shared by two or more viruses, and used as a distance matrix for hierarchical clustering (see Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_3'>2</ns0:formula><ns0:p>Interestingly, for majority of miRNAs such clustering was completely similar to that of viruses based on</ns0:p></ns0:div>
<ns0:div><ns0:p>their genomic sequence similarity <ns0:ref type='bibr' target='#b68'>(Zhou et al., 2020b)</ns0:ref>.</ns0:p></ns0:div>
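A schematic Python reconstruction of the shared-binding-site analysis described above; the toy position sets are invented, and converting the normalized overlap into a distance as one minus the overlap is our reading of the procedure rather than a documented detail.

import itertools
from collections import Counter

import pandas as pd
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import squareform

# toy input: for one miRNA, alignment positions of predicted binding sites per virus
sites = {
    "SARS-CoV-2": {10, 250, 875},
    "SARS-CoV": {10, 250, 990},
    "MERS-CoV": {250, 1300},
}

# normalization constant: positions shared by two or more viruses (as in the text)
counts = Counter(pos for s in sites.values() for pos in s)
norm = sum(1 for c in counts.values() if c >= 2)

viruses = list(sites)
dist = pd.DataFrame(0.0, index=viruses, columns=viruses)
for a, b in itertools.combinations(viruses, 2):
    overlap = len(sites[a] & sites[b]) / norm
    dist.loc[a, b] = dist.loc[b, a] = 1 - overlap  # assumed distance transform

Z = linkage(squareform(dist.values), method="average")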
<ns0:div><ns0:p>In order to assess the conservation of miRNA binding regions across viral strains, we performed multiple sequence alignment of the available viral genomes for each human coronavirus independently. The results revealed high conservation of these regions: 59% to 98% of binding sites within coronavirus RNAs had no mutation, while the mean of average mutation rates (i.e. the number of mutations normalized by region length and number of strains) across all viruses varied from 0.3% to 0.7% (see Table S4). Interestingly, there were no mutations in any virus in the aforementioned hsa-miR-21-3p binding site shared by six coronaviruses, while the HCoV-OC43 mutation rate was equal to 1% in the similar hsa-miR-421 site. Exact values of average mutation rates are given in Table S6. Finally, we assessed mutation rates within the seven SARS-CoV-2 clades introduced by GISAID. Binding sites of hsa-miR-21-3p and hsa-miR-421 had mismatches only within the GH and S clades, sites of hsa-miR-3065-5p were mutated in the GH and GR clades, while binding regions of the hsa-miR-16-5p/195-5p/424-5p family showed mutations in all clades except O (see Table 2).</ns0:p></ns0:div>
<ns0:figure type='table'><ns0:head>Table 2. Mean of average mutation rates in miRNA binding sites across SARS-CoV-2 clades.</ns0:head><ns0:table><ns0:row><ns0:cell>miRNA</ns0:cell><ns0:cell>G</ns0:cell><ns0:cell>GH</ns0:cell><ns0:cell>GR</ns0:cell><ns0:cell>L</ns0:cell><ns0:cell>O</ns0:cell><ns0:cell>S</ns0:cell><ns0:cell>V</ns0:cell></ns0:row><ns0:row><ns0:cell>hsa-miR-21-3p</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.007%</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.03%</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>hsa-miR-16-5p/195-5p/424-5p</ns0:cell><ns0:cell>0.01%</ns0:cell><ns0:cell>0.03%</ns0:cell><ns0:cell>0.01%</ns0:cell><ns0:cell>0.02%</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.05%</ns0:cell><ns0:cell>0.08%</ns0:cell></ns0:row><ns0:row><ns0:cell>hsa-miR-3065-5p</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.02%</ns0:cell><ns0:cell>0.01%</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell></ns0:row><ns0:row><ns0:cell>hsa-miR-421</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.003%</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.01%</ns0:cell><ns0:cell>0</ns0:cell></ns0:row></ns0:table></ns0:figure>
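The mutation-rate definition used above (mismatches normalized by region length and number of strains) translates directly into code; the reference, strain sequences and coordinates below are toy values, not actual coronavirus data.

def region_mutation_rate(strains, start, end, reference):
    """Average per-base mismatch rate of the region [start, end) versus the reference,
    normalized by region length and number of strains."""
    region_len = end - start
    mismatches = sum(
        sum(strain[i] != reference[i] for i in range(start, end))
        for strain in strains
    )
    return mismatches / (region_len * len(strains))

reference = "ACGTGGTGTTTATGAT"
strains = ["ACGTGGTGTTTATGAT", "ACGTGGTGTCTATGAT", "ACGTGGTGTTTATGAT"]
print(region_mutation_rate(strains, 4, 14, reference))  # one mismatch over 10*3 aligned bases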
<ns0:div><ns0:head>MiR-21 and its target genes exhibit significant expression alteration in mouse lungs during SARS-CoV infection</ns0:head><ns0:p>To further explore a potential interplay between host miRNAs and coronaviruses, we hypothesized that some of miRNAs predicted to bind viral RNAs can have altered expression during the infection. In order to verify this hypothesis, we analyzed two publicly available miRNA-seq datasets of mouse lungs during SARS-CoV infection. The first dataset (GSE36971) included data derived from four mouse strains infected by SARS-CoV and four corresponding control mice. The second dataset (GSE90624) comprised three infected and four control mice.</ns0:p><ns0:p>Differential expression analysis revealed 19 miRNAs in the first dataset and 21 in the second dataset where expression change during infection was statistically significant (see Table <ns0:ref type='table'>S7</ns0:ref>). Six miRNAs were differentially expressed in both datasets, five of them had matched fold change signs, namely, were overexpressed in infected mice (see Figure <ns0:ref type='figure'>4</ns0:ref>). This was a statistically significant overlap since an estimate of the probability for 19-and 21-element random miRNA sets having five or more common elements was equal to 4.07 × 10 −7 (hypergeometric test). Surprisingly, miR-21a-3p which we previously identified as a potential regulator of all analyzed coronavirus genomes with one of the highest scores exhibited</ns0:p></ns0:div>
<ns0:div><ns0:head>6/11</ns0:head><ns0:p>PeerJ reviewing PDF | (2020:07:50789:1:0:NEW 18 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed Differential expression analysis was performed using DESeq2. Note that counts were normalized independently per each dataset. Thus, presented values should not be directly compared across (A) and (B). Vertical lines on the bar top indicate 95% confidence intervals.</ns0:p><ns0:p>Expression of mmu-miR-21a-5p (the opposite 'guide' strand of the same hairpin) was also increased in the infected group (2.8-and 3.2-folds, respectively). The latter had a particular importance since mmu-miR-21a-5p was highly expressed in mice during both experiments. Namely, according to its mean expression across all samples it was 4th and 38th out of 2302 in the first and the second datasets, respectively. Thus, significant expression change of this miRNA can dramatically affect expression of its target genes.</ns0:p><ns0:p>In order to capture aberrant expression of miRNA target genes during infection, we analyzed RNA sequencing (RNA-seq) data of eight SARS-CoV infected mice strains published by the same group of authors as in the first miRNA-seq dataset (GSE52405). Two strategies were pursued to generate a list of miRNA targets. Namely, we used target prediction tools described in the previous section as well as literature-curated miRTarBase database.</ns0:p><ns0:p>First, we took genes predicted both by miRDB and TargetScan with miRDB target score greater than 75. Additionally, we thresholded this list using top-10% predictions based on TargetScan's cumulative weighted context++ score. A significant fraction of mmu-miR-21a-5p target genes were down-regulated during the infection. Namely, 6 out of 24 considered genes demonstrated significant decrease in expression (hypergeometric test p = 7.6 × 10 −3 ). For four other miRNAs, there was no statistical significance on the number of down-regulated target genes. The situation was quite different for interactions enlisted in miRTarBase. Thus, 2 out of 2 mmu-miR-21a-3p target genes (Snca and Reck) were down-regulated (hypergeometric test p = 5.7 × 10 −3 ), while only 6 out of 37 mmu-miR-21a-5p target genes exhibited expression decrease (hypergeometric test p = 0.057). As in the previous case, no significant number of down-regulated target genes was observed for other miRNAs.</ns0:p></ns0:div>
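The overlap significance quoted above is a hypergeometric tail probability; a sketch of the computation is given below, where treating the 2302 quantified miRNAs as the background population is our assumption about the test setup, so the resulting p-value is only expected to be of the same order as the one reported.

from scipy.stats import hypergeom

N_background = 2302  # miRNAs quantified in the datasets (taken from the text)
n_set1, n_set2, observed_overlap = 19, 21, 5

# probability of observing an overlap of at least 5 miRNAs by chance
p_value = hypergeom.sf(observed_overlap - 1, N_background, n_set1, n_set2)
print(p_value)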
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>In this paper we identified several cellular miRNAs <ns0:ref type='bibr'>(miR-21-3p, miR-16-5p/195-5p/424-5p, miR-3065-5p</ns0:ref> and miR-421) potentially regulating all human coronaviruses via direct binding to viral RNAs. Moreover, aside from virus specific binding sites we identified genomic positions which can serve as conserved targets for putative miRNAs. As one can expect, viruses with high genomic similarity such as SARS-CoV-2 / SARS-CoV or HCoV-229E / HCoV-NL63 had higher number of shared binding sites. Similar computational approach to discover direct miRNA-virus interactions was already employed by Fulzele</ns0:p></ns0:div>
<ns0:div><ns0:head>7/11</ns0:head><ns0:p>PeerJ reviewing PDF | (2020:07:50789:1:0:NEW 18 Aug 2020)</ns0:p><ns0:p>Manuscript to be reviewed et al <ns0:ref type='bibr' target='#b17'>(Fulzele et al., 2020)</ns0:ref>. Namely, using miRDB researchers predicted miRNAs targeting RNAs of SARS-CoV-2 and SARS-CoV. Despite large intersection of predicted miRNA sets, in the present paper we focused on miRNAs targeting as much human coronavirus RNAs as possible, which resulted in different lists of 'top' miRNAs.</ns0:p><ns0:p>Several hypotheses can be put forward to explain biological motivation of direct host miRNA-virus interactions. At first sight, one can think about host miRNA-mediated immune response to the viral infection. For example, translation of human T cell leukemia virus type I (HTLV-1) is inhibited by miR-28-3p activity <ns0:ref type='bibr' target='#b4'>(Bai and Nicot, 2015)</ns0:ref>. However, our results suggest that binding sites of identified miRNAs are actually conserved across human coronaviruses. Thus, viruses can purposefully accumulate host miRNA binding sites to slow down their own replication rate in order to evade fast detection and elimination by the immune system. Such behaviour was reported e.g. in the case of eastern equine encephalitis virus (EEEV) <ns0:ref type='bibr' target='#b53'>(Trobaugh et al., 2014)</ns0:ref>. Authors reported that haematopoietic-cell-specific miRNA miR-142-3p directly binds viral RNA which limits the replication of virus thereby suppressing innate immunity. The latter was shown to be crucial in the virus infection pathogenesis.</ns0:p><ns0:p>Functional activity of identified miRNAs was already referred to in the context of viral infections.</ns0:p><ns0:p>Thus, it was proved that miR-21-3p regulates the replication of influenza A virus (IAV) <ns0:ref type='bibr' target='#b60'>(Xia et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Namely, hsa-miR-21-3p targeting 3'-UTR of HDAC8 gene was shown to be down-regulated during IAV infection of human alveolar epithelial cell line A549 using both miRNA microarray and quantitative PCR analysis. Consecutive increase in the HDAC8 expression was shown to promote viral replication. Another report highlights the role of miR-16-5p in pathogenesis of Enterovirus 71 (EV71) infection <ns0:ref type='bibr' target='#b64'>(Zheng et al., 2017)</ns0:ref>. In particular, authors validated EV71-induced expression of miR-16-5p and found that this miRNA can inhibit EV71 replication in vitro and in vivo by targeting CCNE1 and CCND1 genes.</ns0:p><ns0:p>Remarkably, we found that expression of miR-21-3p in mice lungs exhibits a 8-fold increase upon SARS-CoV infection. Interestingly, miR-21-5p (the 'guide' strand of the same pre-miRNA hairpin) demonstrated only a 3-fold increase in expression. To explain this phenomena of non-symmetrical expression increase, we hypothesize that binding to the viral genome saves star miRNA miR-21-3p from degradation after unsuccessful attempt of AGO2 loading. A similar mechanism was already mentioned in several papers. Namely, Janas with co-authors demonstrated that Ago-free miRNAs can escape degradation by forming Ago-free miRNA-mRNA duplex <ns0:ref type='bibr' target='#b23'>(Janas et al., 2012)</ns0:ref>. 
Another concept, termed target RNA-directed miRNA degradation (TDMD), holds that a highly complementary target RNA can trigger miRNA degradation by a mechanism involving nucleotide addition and exonucleolytic degradation <ns0:ref type='bibr' target='#b26'>(la Mata et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b63'>Zhang et al., 2019)</ns0:ref>. Thus, the non-proportional upregulation of the miR-21 arms can be indirect evidence that miR-21-3p directly targets the viral RNA or that miR-21-5p is being actively degraded during target mRNA binding. </ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS</ns0:head></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>, were used to analyze miRNome of SARS-CoV infected mouse lungs (RNA was extracted from homogenized 2/11 PeerJ reviewing PDF | (2020:07:50789:1:0:NEW 18 Aug 2020)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. miRNAs with the highest target scores. (A) Hierarchical clustering of coronaviruses based on miRDB target scores. Rows are sorted according to the mean target score. (B) Expression distribution of miRNAs in human lungs.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Shared binding sites of hsa-miR-21-3p and hsa-miR-421 on human coronavirus RNAs. (A) hsa-miR-21-3p. (B) hsa-miR-421.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>BFigure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Hierarchical clustering of human coronaviruses based on the number of shared binding sites. (A) hsa-miR-21-3p. (B) hsa-miR-16-5p/195-5p/424-5p. (C) hsa-miR-3065-5p. (D) hsa-miR-421.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>8. 3 -Figure 4 .</ns0:head><ns0:label>34</ns0:label><ns0:figDesc>Figure 4. Differentially expressed mouse lung miRNAs. (A) GSE36971. (B) GSE90624. Differential expression analysis was performed using DESeq2. Note that counts were normalized independently per each dataset. Thus, presented values should not be directly compared across (A) and (B). Vertical lines on the bar top indicate 95% confidence intervals.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Several miRNAs having potential of direct interactions with human coronaviruses were discovered in this paper. While a majority of them were virus-specific, some miRNAs were shown to target all analyzed viral RNAs. Exploration of publicly available miRNomic data of SARS-CoV infected mice lungs revealed that one of these miRNAs (miR-21-3p) demonstrated a dramatic expression increase upon infection. Taking into account high structural similarity of SARS-CoV and SARS-CoV-2 including common miR-21-3p binding sites as well as the fact that this miRNA is also expressed in human lungs, the obtained results open new opportunities in understanding COVID-19 pathogenesis and consecutive development of therapeutic approaches.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Number of common miRNA binding sites on coronavirus RNAs. Column names refer to the number of viruses sharing a miRNA binding region.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>3 4 5 6 Total</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Dear Editor,
We would like to thank you and the reviewers for evaluating our manuscript; we have revised the
paper accordingly. In particular, we shifted focus of the work from miR-21-3p to all identified
miRNAs.
We hope that the revised version is now acceptable for publication in PeerJ.
Thank you for your time and consideration.
Sincerely,
Stepan Nersisyan,
on behalf of all co-authors
Reviewer 1
Point 1. As authors suggest Virus infection could soak up the miRNA to prevent its function. Is it
possible to show the potential targets of this miRNA in human lungs mRNA, which could be
altered by the miRNA-21-3p.
Response. We thank the Reviewer for the suggestion. We added pathway analysis of miRNA
target genes to address this point (lines 86-87 and 142-145).
Point 2. GISAID website divided the SARSCov2 genome into five clades, S, Gr, GH, G, S, and
V. Is possible to test whether miRNA-21-3p binding sequences remain intact in all of these
clades or its potential binding will be limited to viruses of certain clades only.
Response. We obtained GISAID clade annotation for utilized SARS-CoV-2 genomes and
performed mutation analysis within each clade for each of the considered miRNAs (lines
112-114, 180-184, Table 2).
Reviewer 2
Point 1. I recommend changing title (not focusing on one miRNA).
Response. We thank the Reviewer for the suggestion. The title has been modified accordingly.
Points 2 and 3. Change discussion and mention about other miRNAs like 16-5p, 195-5p. Delete
portion of miR-21-3p and 5p discussion part. It looks great on paper about discussing 5p and
3p…unless in-vivo or in-vitro studies. We perform such studies and outcomes are disappointing.
Response. We reorganized and expanded the Discussion accordingly to focus on all miRNAs.
Paragraphs related to miR-21-3p were substantially shortened.
Point 4. Make this manuscript generalize, don’t focus on one miRNA.
Response. Aside from the aforementioned changes, we added analyses for all miRNAs in the "Viral
binding sites of miRNAs are conserved across different coronaviruses and their strains" section,
and swapped its order with the "The miR-21 and its target genes exhibit significant expression
alteration in mouse lungs during SARS-CoV infection" section.
Point 5. Mice lung data is ok but in reality miRNA targets are different in human and mice.
Response. We clearly understand this point; however, we do not extrapolate our findings on
miRNA-mRNA interactions from mice to humans: they are presented as a functional
consequence of aberrant miRNA expression in mouse lungs.
Reviewer 3
Point 1. Functionality of miR-21-3p in different viral disease conditions are missing in the
introduction.
Response. We thank the Reviewer for pointing this out. Following the recommendations of
Reviewer 2, we shifted the focus of the research from miR-21-3p to all miRNAs discovered by
target prediction (including a change in the manuscript title). Thus, we think that a review of
miR-21-3p activity is redundant in the Introduction of the revised version of the paper (though it
was necessary in the original version).
Point 2. Which part of the lungs was used for this study?
Response. According to the original publications, RNA was extracted from homogenized whole
lung lobes. We added this information on lines 97-98.
Point 3. Is this miRNA (21-3p) cross species conserved?
Response. We thank the reviewer for raising this question. Yes, this miRNA is conserved across
mammals. We extended the "Human coronavirus RNAs have a large number of common
host miRNA binding sites" section accordingly (lines 134-138).
Point 4. Statistical analysis for Fig. 2 is missing.
Response. Figure 2 represents results of differential expression analysis performed with
DESeq2 (described in Materials and Methods). We added a note in the description of the Figure
(Figure 4 in the revised version).
Points 5 and 6. It will be interesting to know about the other targets of miR-21-3p. A table
summarizing the fold change and targets of miR-21-3p in different disease conditions will be
interesting to note.
Response. We thank the Reviewer for the suggestion. We added pathway analysis of miRNA
target genes to address these points (lines 86-87 and 142-145).
Point 7. English language editing required.
Response. We’ve performed language editing of the manuscript text.
Reviewer 4
Point 1. There still have some grammatical errors and typos. The authors should re-check and
revise carefully.
Response. We thank the Reviewer for pointing this out. We’ve fixed language errors.
Point 2. One of the important concerns is that the authors have not compared the performance
results to the previous works on mRNA binding site prediction. Even currently no study focused
on coronavirus specifically but the authors could also compare to the general one.
Response. We thank the reviewer for pointing this out. We added the corresponding paragraph
to the Discussion (lines 228-233).
Point 3. There are many mRNA samples in TCGA-LUAD, why did the authors only select 46
tissues? Any criteria for this?
Response. Indeed, there are hundreds of samples in TCGA-LUAD; however, the majority of
them are primary tumor samples and only 46 samples correspond to normal solid lung tissue.
We added the explanation (lines 93-94).
Point 4. Why did the authors select genes with target score > 75?
Response. The threshold value of 75 corresponds to the middle of the target score range (50 to 100)
and has been used in several other works. We added a clarification sentence with some references
(lines 75-76).
" | Here is a paper. Please give your review comments after reading it. |
663 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Aims/Introduction. To investigate the clinical outcomes of patients with type 2 diabetes mellitus (T2DM) who initiated dapagliflozin in real-world practice in Taiwan.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials and Methods.</ns0:head><ns0:p>In this multicenter retrospective study, adult patients with T2DM who initiated dapagliflozin after May 1st 2016 either as add-on or switch therapy were included. Changes in clinical and laboratory parameters were evaluated at 3 and 6 months. Baseline factors associated with dapagliflozin response in glycated hemoglobin (HbA1c) were analyzed by univariate and multivariate logistic regression.</ns0:p><ns0:p>Results. A total of 1960 patients were eligible. At 6 months, significant changes were observed: HbA1c by -0.73% (95% confidence interval [CI] -0.80, -0.67), body weight by -1.61 kg (95% CI -1.79, -1.42), and systolic/diastolic blood pressure by -3.6/-1.4 mmHg. Add-on dapagliflozin showed significantly greater HbA1c reduction (-0.82%) than switch therapy (-0.66%) (p=0.0023). The proportion of patients achieving the HbA1c <7% target increased from 6% at baseline to 19% at Month 6. Almost 80% of patients experienced improvement in HbA1c, and 65% of patients showed simultaneous reduction in both HbA1c and body weight. Multivariate logistic regression analysis indicated that patients with higher baseline HbA1c and those who initiated dapagliflozin as add-on therapy were associated with a greater reduction in HbA1c.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Conclusions.</ns0:head><ns0:p>In this real-world study with the highest patient number of Chinese population to date, the use of dapagliflozin was associated with significant improvement in glycemic control, body weight, and blood pressure in patients with T2DM. Initiating dapagliflozin as add-on therapy showed better glycemic control than as switch therapy.</ns0:p></ns0:div>
<ns0:div><ns0:head>Use and effectiveness of dapagliflozin in patients with type 2 diabetes mellitus: a multicenter retrospective study in Taiwan</ns0:head><ns0:p>Jung-Fu Chen 1,2 , Yun-Shing Peng 3 , Chung-Sen Chen 4 , Chin-Hsiao Tseng 5 , Pei-Chi Chen 6 , Ting-I Lee 7,8 , Yung-Chuan Lu 9,10 , Yi-Sun Yang 11 , Ching-Ling Lin 12 , Yi-Jen Hung 13 , Szu-Ta Chen 14 , Chieh-Hsiang Lu 15-17 , Chwen-Yi Yang 18 , Ching-Chu Chen 19,20 , Chun-Chuan Lee 21 , Pi-Jung Hsiao 17 , Ju-Ying Jiang 22 , Shih-Te Tu 23</ns0:p></ns0:div>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Type 2 diabetes mellitus (T2DM) is a chronic metabolic disease affecting populations worldwide, and it has become an important public health challenge among Asian countries, especially in ethnic Chinese populations. <ns0:ref type='bibr' target='#b7'>(Cho et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b24'>Lim & Chan 2019;</ns0:ref><ns0:ref type='bibr' target='#b25'>Lim et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b31'>Nanditha et al. 2016)</ns0:ref> According to the 2019 IDF Diabetes Atlas and a nationwide database analysis in Taiwan, the prevalence of diabetes increase from 4.31% in 2000 to 6.6% in 2019 for adults aged 20-79 years, resulting in a more than 70% increase in the total diabetic population (1.23 million in 2019). <ns0:ref type='bibr'>(International Diabetes Federation 2019;</ns0:ref><ns0:ref type='bibr' target='#b22'>Jiang et al. 2012</ns0:ref>) Currently, most guidelines recommend pharmacologic therapy based on evaluating glycated hemoglobin (HbA1c) levels for glycemic control. When the glycemic target is not achieved by lifestyle management and metformin, a second agent may be initiated, considering medication profiles and patient-related factors.(American Diabetes Association 2019; Diabetes Association Of The Republic Of China Taiwan 2019; <ns0:ref type='bibr' target='#b11'>Garber et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b28'>McGuire et al. 2016)</ns0:ref> Sodium-glucose cotransporter-2 (SGLT2) inhibitors are a newer class of oral antidiabetic drugs (OADs) that inhibit glucose reabsorption at the early segments of the proximal convoluted tubule, thereby promoting glucosuria independently of insulin action. <ns0:ref type='bibr' target='#b15'>(Hasan et al. 2014</ns0:ref>) These agents improve glycemic control without increasing the risk of hypoglycemia, and have pleiotropic effects such as weight loss and reduction in blood pressure (BP). Combining SGLT2 inhibitors with metformin has been demonstrated to have additive effect compared to metformin alone in HbA1c and body weight reduction. <ns0:ref type='bibr' target='#b30'>(Molugulu et al. 2017)</ns0:ref> Given that T2DM has been known to have a higher risk of cardiovascular events, significant attention has been paid to the benefit of SGLT2 inhibitors on cardiovascular outcomes in T2DM patients with or without pre-existing cardiovascular disease. <ns0:ref type='bibr' target='#b37'>(Scholtes et al. 2019)</ns0:ref> Dapagliflozin, a member of the SGLT2 inhibitor class, has been shown in randomized controlled trials (RCTs) to improve glycemic control both as monotherapy <ns0:ref type='bibr' target='#b2'>(Bailey et al. 2015)</ns0:ref> and as add-on to other antidiabetic drugs, <ns0:ref type='bibr' target='#b32'>(Nauck et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b39'>Sun et al. 2014)</ns0:ref> along with wellestablished safety profile in a pooled analysis of phase IIb/III trials. <ns0:ref type='bibr' target='#b21'>(Jabbour et al. 2018)</ns0:ref> Complimentary to RCTs, observational studies can provide real-world data reflecting clinical practice patterns and outcomes not collected in RCTs. <ns0:ref type='bibr' target='#b12'>(Garrison et al. 2007)</ns0:ref> The real-world evidence on dapagliflozin has been reported in North America <ns0:ref type='bibr' target='#b4'>(Brown et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b19'>Huang et al. 2018b</ns0:ref>) and Europe, <ns0:ref type='bibr' target='#b10'>(Fadini et al. 
2018;</ns0:ref><ns0:ref type='bibr' target='#b29'>Mirabelli et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b36'>Scheerer et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b38'>Scorsone et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b42'>Wilding et al. 2017</ns0:ref>) whereas data are limited in Asia, especially for ethnic Chinese populations. <ns0:ref type='bibr' target='#b14'>(Han et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b40'>Tobita et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b41'>Viswanathan & Singh 2019)</ns0:ref> Dapagliflozin has been reimbursed by the National Health Insurance (NHI) in Taiwan since May of 2016. The purpose of the LEAD (Learning from RWE: A Multicenter retrospective study of Dapagliflozin in Taiwan) study was to retrospectively investigate the clinical outcomes for patients with T2DM initiating dapagliflozin under real-world setting in Taiwan.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Data source and study population</ns0:head><ns0:p>This was a multicenter retrospective observational study (ClinicalTrials.gov identifier NCT03084965) enrolling patients with T2DM exposed to dapagliflozin after its reimbursement in Taiwan in May 2016. Medical information was extracted manually by chart review and recorded in a standardized case report form after obtaining written informed consent from the patients. Patients were eligible for inclusion if they (1) were diagnosed with T2DM and aged 20 years or older, (2) initiated dapagliflozin after May 1st 2016, either as add-on therapy to existing OAD(s) with or without insulin, or as switch therapy from another OAD, and (3) completed follow-up of at least 6 months regardless of continuation on dapagliflozin therapy. Patients were excluded if they (1) received other SGLT2 inhibitors prior to the initiation of dapagliflozin, (2) had a diagnosis of type 1 diabetes, or (3) were included in other clinical trials concurrently during the retrospective data collection period.</ns0:p></ns0:div>
<ns0:div><ns0:head>Assessment and outcome measures</ns0:head><ns0:p>Patient baseline demographics and clinical characteristics were recorded at the time of dapagliflozin initiation. Reasons for starting or switching to dapagliflozin, as well as the rate of and reasons for dapagliflozin discontinuation, were collected. Measurements of HbA1c, body weight, BP, fasting plasma glucose (FPG), and lipid profile (total cholesterol, high-density lipoprotein cholesterol [HDL-C], low-density lipoprotein cholesterol [LDL-C], and triglycerides) were obtained at baseline, 3 and 6 months to assess for changes. Further subgroup analyses were performed to assess changes in HbA1c by baseline HbA1c, manner of dapagliflozin initiation (as add-on or switch), BMI, and age. Changes in body weight were also analyzed by subgroups of baseline BMI, HbA1c, and age. The proportion of patients within each glycemic level (HbA1c <7%, 7-8%, 8-9%, and >9%) was evaluated at baseline and 6 months.</ns0:p></ns0:div>
<ns0:div><ns0:head>Statistical analysis</ns0:head><ns0:p>Descriptive statistics were provided for all variables. Continuous variables were presented with mean ± standard deviation, and numbers or percentages were used for categorical variables. Changes from baseline of the variables were analyzed using evaluable data at the respective time points. Mean changes of each variable from baseline to Month 3, baseline to Month 6, and between 3 and 6 months were evaluated by paired t-test and Wilcoxon signed-rank test in the overall cohort and subgroups. Differences in HbA1c reduction between dapagliflozin add-on and switch groups at Month 3 and Month 6 were analyzed by Wilcoxon rank-sum test. For changes in HbA1c and body weight among other subgroups at Month 3 and Month 6, the Kruskal-Wallis test was used. Patient distributions of changes in HbA1c and body weight at Month 6 were displayed by scatter plots for total, add-on, and switch groups. Baseline factors associated with dapagliflozin response in HbA1c were examined by univariate and multivariate logistic regression analyses. The cutoff for dapagliflozin responders was determined by using the median of change in HbA1c in patients with evaluable data at Month 6. Statistical significance was set at p value <0.05. All statistical analyses were conducted using SAS version 9.3.</ns0:p></ns0:div>
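The within-patient and between-group comparisons described above can be illustrated with a brief, non-authoritative sketch in Python (pandas/SciPy). The file name and the column names (hba1c_bl, hba1c_m6, init_mode, bmi_cat) are hypothetical placeholders for the chart-review extract, not the study's actual dataset; the published analysis itself was performed in SAS 9.3.

import pandas as pd
from scipy import stats

# Hypothetical per-patient extract of the chart-review data (column names assumed).
df = pd.read_csv("lead_cohort.csv")

# Paired comparison of HbA1c, baseline vs. Month 6, restricted to evaluable (complete) cases.
paired = df[["hba1c_bl", "hba1c_m6"]].dropna()
t_res = stats.ttest_rel(paired["hba1c_m6"], paired["hba1c_bl"])
w_res = stats.wilcoxon(paired["hba1c_m6"], paired["hba1c_bl"])

# Add-on vs. switch groups: Wilcoxon rank-sum (Mann-Whitney U) test on the change scores.
change = paired["hba1c_m6"] - paired["hba1c_bl"]
mode = df.loc[paired.index, "init_mode"]            # "add-on" or "switch"
u_res = stats.mannwhitneyu(change[mode == "add-on"], change[mode == "switch"],
                           alternative="two-sided")

# More than two subgroups (e.g., baseline BMI categories): Kruskal-Wallis test.
bmi_cat = df.loc[paired.index, "bmi_cat"]
k_res = stats.kruskal(*[change[bmi_cat == c] for c in bmi_cat.dropna().unique()])

print(t_res.pvalue, w_res.pvalue, u_res.pvalue, k_res.pvalue)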
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Baseline characteristics</ns0:head><ns0:p>A total of 1960 patients were eligible, with a mean age of 57.8 ± 11.5 years, and 52% were male. The baseline clinical and laboratory characteristics are shown in Table <ns0:ref type='table'>1</ns0:ref>. The mean HbA1c was 8.8 ± 1.4%, and body weight was 75.2 ± 15.8 kg. Baseline systolic blood pressure (SBP) and diastolic blood pressure (DBP) was 136.6 ± 17.6 and 77.8 ± 11.3 mmHg, respectively. Metformin was the most prescribed antihyperglycemic agent (92.4%), followed by sulfonylurea (70.7%) and dipeptidyl peptidase-4 (DPP4) inhibitor (53.5%). Over half of the patients were on either dual (38.2%) or triple (36.2%) therapy prior to initiating dapagliflozin. A large proportion of patients (78.9%) initiated dapagliflozin at the higher dose (10 mg). Regarding the manner of initiation, more than half (53.8%) of the patients were switched to dapagliflozin, while the others (46.2%) started dapagliflozin as add-on therapy. Main reasons for starting dapagliflozin were for its HbA1c lowering efficacy (81.2%), followed by less concern for weight gain (26.8%) and lower risk of hypoglycemia (11.4%).</ns0:p></ns0:div>
<ns0:div><ns0:head>Effectiveness of dapagliflozin on clinical and laboratory parameters</ns0:head><ns0:p>Table <ns0:ref type='table'>2</ns0:ref> shows changes in clinical and laboratory parameters after dapagliflozin initiation at 3 and 6 months. Compared with baseline, statistically significant reductions in HbA1c were observed both at Month 3 (-0.68%; 95% confidence interval [CI] -0.74, -0.62, p <0.0001) and at Month 6 (-0.73%; 95% CI -0.80, -0.67, p <0.0001). Considering patients with evaluable HbA1c data at baseline and Month 6, the proportion of patients achieving the glycemic target (HbA1c <7%) was 6% at baseline, and it increased to 19% by Month 6 after initiating dapagliflozin. Moreover, the proportion of patients who were poorly controlled (HbA1c >9%) decreased from 34.7% at baseline to 15.9% at Month 6 (Fig <ns0:ref type='figure' target='#fig_0'>S1A</ns0:ref>). Improvements in FPG (-28.3 mg/dL), BMI (-0.60), body weight (-1.61 kg), and SBP/DBP (-3.6/-1.4 mmHg) at Month 6 were also significant (all p <0.0001, Table <ns0:ref type='table'>2</ns0:ref>). Aside from BP reduction that plateaued at Month 3, small but significant improvements were observed in HbA1c, FPG, BMI, and body weight from Month 3 to Month 6. Small changes in lipid profiles were also noted (Table <ns0:ref type='table'>2</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Subgroup analyses: HbA1c and body weight</ns0:head><ns0:p>We performed analyses to examine the effects of dapagliflozin treatment on HbA1c and body weight among different subgroups. For HbA1c, a statistically significant trend of greater reduction was observed in patients with higher baseline HbA1c from baseline to 3 and 6 months (Fig. <ns0:ref type='figure' target='#fig_0'>1A</ns0:ref>). In patients with baseline HbA1c between 8-9% and those >10%, a significant greater HbA1c reduction was found at Month 6 compared with Month3. Patients who received dapagliflozin as add-on therapy had a significantly greater reduction in HbA1c (-0.82%) than those who were switched from one antihyperglycemic agent to dapagliflozin at Month 6 (-0.66%, p=0.0023, Fig. <ns0:ref type='figure' target='#fig_0'>1B</ns0:ref>). Considering patients with evaluable HbA1c data at baseline and Month 6, the proportion of patients achieving the glycemic target (HbA1c <7%) were 5% and 6% for dapagliflozin add-on and switch groups at baseline, and they subsequently increased to 23% and 15% by Month 6, respectively (Fig. <ns0:ref type='figure' target='#fig_0'>S1B</ns0:ref>). When stratified by baseline BMI, no significant difference in HbA1c reduction was found across subgroups (Fig. <ns0:ref type='figure'>S2A</ns0:ref>). Similarly, no difference among age subgroups was found for HbA1c reduction (Fig. <ns0:ref type='figure'>S2B</ns0:ref>).</ns0:p><ns0:p>For body weight, treatment with dapagliflozin showed a statistically significant trend of greater weight loss with increasing baseline BMI from baseline to 3 and 6 months (Fig. <ns0:ref type='figure'>2</ns0:ref>). Among patients with evaluable data at both Month 3 and Month 6, further weight reduction was significant at Month 6 compared with Month 3 across all BMI categories, except for those with baseline BMI ≥35. When stratified by baseline HbA1c, significantly greater weight reductions were found in patients with lower baseline HbA1c throughout the study, despite similar baseline body weight among HbA1c categories (Fig. <ns0:ref type='figure'>S3A</ns0:ref>). There was no difference in weight reduction across age subgroups at Month 3, but a significant difference was found at Month 6 (Fig. <ns0:ref type='figure'>S3B</ns0:ref>). Comparing data between 3 and 6 months, significant reductions in weight were found in groups aged 40-65 and 65-75 years.</ns0:p></ns0:div>
<ns0:div><ns0:head>Relationship between changes in HbA1c and body weight</ns0:head><ns0:p>Patient distributions of changes in HbA1c and body weight after initiating dapagliflozin for 6 months were presented in scatter plots (Fig. <ns0:ref type='figure'>3</ns0:ref>). Of the 1094 patients with evaluable data, 79.9% and 77.6% of them experienced reductions in HbA1c and body weight, respectively. Moreover, 64.0% of patients showed simultaneous reductions in both outcomes (Fig. <ns0:ref type='figure'>3A</ns0:ref>). For patients in the add-on (n=529) and switch (n=565) groups, 69.2% and 59.4% had a reduction in both HbA1c and body weight, respectively (Fig. <ns0:ref type='figure'>3B and 3C</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Baseline factors associated with dapagliflozin response in HbA1c</ns0:head><ns0:p>Baseline factors associated with dapagliflozin response in HbA1c at Month 6 are shown in Table <ns0:ref type='table'>3</ns0:ref>. Responders and non-responders were determined by using the median of change in HbA1c (-0.60%; min, max [-6.6, 3.2]) as the cutoff.</ns0:p><ns0:p>Univariate logistic regression analysis indicated that higher baseline HbA1c, higher FPG, add-on dapagliflozin, and insulin use were significantly associated with dapagliflozin response in HbA1c reduction, while being on dual therapy at the time of dapagliflozin initiation was significantly associated with less HbA1c reduction. In multivariate logistic regression analysis, significant associations with dapagliflozin response in HbA1c reduction were found in patients with higher baseline HbA1c (odds ratio [OR] 2.099; 95% CI 1.786-2.468, p <0.0001) and those who received dapagliflozin as add-on therapy (OR 1.596; 95% CI 1.191-2.137, p=0.0017).</ns0:p></ns0:div>
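For readers who want to reproduce this kind of responder analysis on their own data, a minimal sketch in Python (pandas/statsmodels) is given below. The input file and the column names (hba1c_bl, fpg_bl, addon, insulin_use, dual_therapy) are assumptions introduced only for illustration, binary factors are assumed to be coded 0/1, and the study's own analysis was run in SAS 9.3.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("lead_cohort.csv")                        # hypothetical analysis extract
df["hba1c_change"] = df["hba1c_m6"] - df["hba1c_bl"]
evaluable = df.dropna(subset=["hba1c_change"]).copy()

# Responder = HbA1c change at Month 6 at or below the cohort median (reported as -0.60%).
cutoff = evaluable["hba1c_change"].median()
evaluable["responder"] = (evaluable["hba1c_change"] <= cutoff).astype(int)

# Univariate screen: one logistic regression per candidate baseline factor.
candidates = ["hba1c_bl", "fpg_bl", "addon", "insulin_use", "dual_therapy"]
for factor in candidates:
    uni = smf.logit(f"responder ~ {factor}", data=evaluable).fit(disp=0)
    print(factor, round(float(uni.pvalues[factor]), 4))

# Multivariate model with the candidates entered together; exponentiate for odds ratios.
multi = smf.logit("responder ~ " + " + ".join(candidates), data=evaluable).fit(disp=0)
print(np.exp(multi.params))
print(np.exp(multi.conf_int()))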
<ns0:div><ns0:head>Discontinuation rate and reasons for discontinuation</ns0:head><ns0:p>Among eligible patients, the total numbers of discontinuations at Month 3 and Month 6 were 153 (7.8%) and 247 (12.6%), respectively. The main reasons for discontinuation were inadequate HbA1c control (n=83; 4.23%), intolerance (n=46; 2.35%), or poor compliance with the current regimens (n=9; 0.46%). The most commonly reported reasons for intolerance were genital or urinary tract infections (n=15), frequency of urination (n=8), and vaginal itching (n=5).</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>This multicenter retrospective study presented the first nationwide real-world evidence for dapagliflozin in Taiwan. In this cohort of 1960 Taiwanese patients with T2DM, initiating dapagliflozin was associated with significant improvements in glycemic control, body weight, and BP. Overall, the changes from baseline were -0.73% for HbA1c, -28.3 mg/dL for FPG, -1.61 kg (-2.14%) for body weight, and -3.6/-1.4 mmHg for SBP/DBP at 6 months. These clinical benefits of dapagliflozin are comparable to the efficacy data from meta-analyses of RCTs. <ns0:ref type='bibr' target='#b39'>(Sun et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b44'>Zhang et al. 2014)</ns0:ref> In addition, small changes in lipids were noted, including an increase in total cholesterol, LDL-C, HDL-C, and a decrease in triglycerides.</ns0:p><ns0:p>Several observation studies assessing the effectiveness of dapagliflozin have been published recently. <ns0:ref type='bibr' target='#b4'>(Brown et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b10'>Fadini et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b14'>Han et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b19'>Huang et al. 2018b;</ns0:ref><ns0:ref type='bibr' target='#b29'>Mirabelli et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b36'>Scheerer et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b41'>Viswanathan & Singh 2019;</ns0:ref><ns0:ref type='bibr' target='#b42'>Wilding et al. 2017)</ns0:ref> Despite differences in ethnicity, location, and other clinical factors, the baseline HbA1c among studies were similar to our data, ranging 8.5%-9.5%, and many patients were already on two or three OADs, some were even using insulin. The effects of dapagliflozin on glycemic control, body weight, and BP were consistent across these studies: reduction in HbA1c ranged from 0.7%-1.5% over 6 to 12 months of observation, reduction in body weight by percentage ranged 1.75%-3.83%, and reduction for SBP and DBP were of 2.3-3.8 mmHg and 1.1-2.0 mmHg, respectively. In our study, these effects were significant 3 months after initiating dapagliflozin, and aside from the BPlowering effect that plateaued at Month 3, we found small but significant improvements in other key metabolic parameters from Month 3 to Month 6. Also, 65% of patients showed simultaneous reductions in both HbA1c and body weight, comparable to the previous reports. <ns0:ref type='bibr' target='#b10'>(Fadini et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b14'>Han et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b36'>Scheerer et al. 2016</ns0:ref>) These data suggested that dapagliflozin is effective in realworld practice for patients with inadequately controlled T2DM.</ns0:p><ns0:p>The baseline characteristics in our cohort, such as mean age, sex ratio, and HbA1c were similar to the South-East Asia cohort in the recent global DISCOVER study. <ns0:ref type='bibr' target='#b13'>(Gomes et al. 2019)</ns0:ref> Although diabetic patients in South-East Asia (and Western Pacific region) have lower BMI than those in Western countries, the current Taiwanese study had slightly higher BMI compared with the South-East Asia data in the DISCOVER study <ns0:ref type='bibr'>(28.3 vs. 27.3</ns0:ref>). To explore and identify whether some baseline factors may be associated with better treatment response to dapagliflozin, subgroup and logistic regression analyses were performed. 
We observed a significant trend of greater HbA1c reduction in patients with higher baseline HbA1c, and the association was further indicated in multivariate logistic regression analysis. This association is consistent with findings from the literature. In general, studies suggest that patients with higher baseline HbA1c experienced a greater reduction in HbA1c with dapagliflozin treatment, <ns0:ref type='bibr' target='#b4'>(Brown et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b14'>Han et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b17'>Hong et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b36'>Scheerer et al. 2016</ns0:ref>) and one study indicates that a lower baseline HbA1c was associated with the achievement of HbA1c goal (<7%). <ns0:ref type='bibr' target='#b42'>(Wilding et al. 2017)</ns0:ref> Other patient characteristics that may be associated with better dapagliflozin response were male, <ns0:ref type='bibr' target='#b4'>(Brown et al. 2017</ns0:ref>) younger age (<45 years old), <ns0:ref type='bibr' target='#b42'>(Wilding et al. 2017)</ns0:ref> shorter disease duration (<2-4 years), <ns0:ref type='bibr' target='#b4'>(Brown et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b14'>Han et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b42'>Wilding et al. 2017</ns0:ref>) and non-insulin use <ns0:ref type='bibr' target='#b4'>(Brown et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b17'>Hong et al. 2019)</ns0:ref>; however, some of these associations were not identified and other data were not available in the current study. On the other hand, baseline body weight or BMI did not influence the magnitude of reduction in HbA1c, which is similar to previous studies. <ns0:ref type='bibr' target='#b14'>(Han et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b36'>Scheerer et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b42'>Wilding et al. 2017)</ns0:ref> Regarding the effects of dapagliflozin by manners of initiation, we found patients who initiated dapagliflozin as add-on therapy had a significantly greater reduction in HbA1c (-0.82%) than those who were switched (-0.66%) from other OADs. Two recent studies also showed treatment with SGLT2 inhibitors (i.e., dapagliflozin or empagliflozin) as add-on therapy had a greater glucose-lowering effect than as switch therapy. <ns0:ref type='bibr' target='#b14'>(Han et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b17'>Hong et al. 2019</ns0:ref>) Notably, we found DPP4 inhibitors accounted for the majority (69.5%) of the switched agents (data not shown), a finding which is likely due to the fact that Taiwan's NHI only covers either a DPP4 inhibitor or an SGLT2 inhibitor. A single-center retrospective study in Taiwan recently reported that patients who switched from a DPP4 inhibitor to an SGLT2 inhibitor (i.e., empagliflozin) for 6 months had significant reductions in HbA1c and body weight, whereas those who remained on a DPP4 inhibitor did not experience significant changes. <ns0:ref type='bibr' target='#b18'>(Huang et al. 2018a</ns0:ref>) Moreover, data from long-term clinical trials demonstrated sustained efficacy and tolerability of dapagliflozin as addon therapy. <ns0:ref type='bibr' target='#b1'>(Bailey et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b8'>Del Prato et al. 2015;</ns0:ref><ns0:ref type='bibr' target='#b23'>Leiter et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b27'>Matthaei et al. 
2015)</ns0:ref> Taken together, switching to an SGLT2 inhibitor from other OADs such as DPP4 inhibitors may be a suitable option for antidiabetic treatment, and add-on SGLT2 inhibition could provide better clinical benefit than switch therapy in patients already taking other OADs without adequate glycemic control. <ns0:ref type='bibr' target='#b17'>(Hong et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b18'>Huang et al. 2018a)</ns0:ref> In our study, we observed a significant trend of greater weight loss with increasing baseline BMI. This trend remained when data were calculated by the percentage of weight reduction, with about 1% weight reduction in the BMI <24 category and over 2% in those with BMI ≥30 (data not shown). Loss of fat mass has been known to account for approximately 70% of total weight loss observed with dapagliflozin treatment. <ns0:ref type='bibr' target='#b3'>(Bolinder et al. 2012)</ns0:ref> These findings have significant clinical implications in Asians, as visceral adiposity is known to be higher in Asians than Caucasians at a given BMI, contributing to insulin resistance that leads to cardiovascular and renal complication. <ns0:ref type='bibr' target='#b25'>(Lim et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b26'>Ma & Chan 2013)</ns0:ref> Given its effectiveness on HbA1c, FPG, body weight, and BP, dapagliflozin could serve as a promising OAD for Asian population to achieve better glycemic control and reduce future risk of cardiovascular diseases.</ns0:p><ns0:p>Since several cardiovascular outcome trials of SGLT2 inhibitors have been published, their effects on cardiorenal protection have come under the spotlight. <ns0:ref type='bibr' target='#b37'>(Scholtes et al. 2019</ns0:ref>) Both largescale RCT and real-world studies (e.g. CVD-REAL study) indicate that dapagliflozin lowers hospitalization for heart failure and cardiovascular mortality in T2DM patients with existing or risk of cardiovascular disease. <ns0:ref type='bibr' target='#b5'>(Cavender et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b33'>Norhammar et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b34'>Raparelli et al. 2020;</ns0:ref><ns0:ref type='bibr' target='#b43'>Wiviott et al. 2019</ns0:ref>) Although cardiovascular events were not assessed in this study, the effects of dapagliflozin on several cardiovascular risk factors (e.g., HbA1c, body weight, BP) observed were similar to those demonstrated in the DECLARE-TIMI 58 trial. In addition, change in lipid profile by dapagliflozin was similar to that reported in a pooled analysis of clinical trials. <ns0:ref type='bibr' target='#b21'>(Jabbour et al. 2018)</ns0:ref> While the slightly elevated LDL-C level may be of concern, a study showed that dapagliflozin increases concentration of the less atherogenic large buoyant LDL-C, while the potent atherogenic small dense LDL-C remained suppressed. <ns0:ref type='bibr' target='#b16'>(Hayashi et al. 2017)</ns0:ref> Future study with long-term follow-up would be expected to assess the cardiovascular outcomes with dapagliflozin treatment in the Taiwanese population.</ns0:p><ns0:p>Our study provided clinical effects of dapagliflozin on glycemic, weight, BP control at 6 months using a large representative cohort of patients withT2DM in Taiwan. Besides, comprehensive analyses were performed to identify subgroups and baseline predictors for better dapagliflozin response. 
However, limited by the retrospective observational study design, partial loss to follow-up, missing data, and other confounding factors were inevitable. In addition, the safety data were not prospectively collected, although we did record the reasons for discontinuation due to intolerance. Moreover, disease duration and renal function (eGFR) were not recorded, so we were unable to confirm their association with dapagliflozin response in HbA1c, as observed in other studies. <ns0:ref type='bibr' target='#b4'>(Brown et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b14'>Han et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b42'>Wilding et al. 2017</ns0:ref>)</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The LEAD study presented the effectiveness of dapagliflozin observed from the highest number of Chinese patients with T2DM in a real-world setting to date. Add-on therapy showed better glycemic control than switch therapy. The initiation of dapagliflozin was associated with significant improvements in glycemic control, body weight, and BP in Taiwanese patients, which is comparable to that in RCTs and other observational studies. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='18,42.52,178.87,525.00,237.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,42.52,199.12,525.00,393.00' type='bitmap' /></ns0:figure>
</ns0:body>
" | "
Division of Endocrinology and Metabolism, Changhua Christian Hospital
No. 135 Nanhsiao Street, Changhua City 50006, Taiwan
+886-4-7009699
[email protected]
May 19th, 2020
Dear Editors
We truly appreciate the generous comments from the reviewers on the manuscript. Based on the insightful comments provided, we have analyzed the results in detail and revised the manuscript to address their concerns. This makes the current real-world study (LEAD) more comprehensive and clinically valuable.
We believe that the manuscript is now suitable for publication in PeerJ.
Shih Te Tu, MD
On behalf of all authors.
(Page and line numbers refer to the tracked-changes manuscript file)
Reviewer 1
Basic reporting
In this manuscript, Jung-Fu Chen and colleagues evaluated the use and short-term effectiveness of dapagliflozin, the first‐in‐class SGLT-2 inhibitor in Taiwan since 2016, under routine clinical practice conditions. Major strengths of the study include the large sample size (1960 diabetic patients) with nationwide distribution and the assessment of baseline predictors of clinical response.
1. I would suggest the Authors to update the reference list of real-world evidences on SGLT-2 inhibitors with:
Mirabelli M, Chiefari E, Caroleo P, Vero R, Brunetti FS, Corigliano DM, Arcidiacono B, Foti DP, Puccio L, Brunetti A. Long-Term Effectiveness and Safety of SGLT-2 Inhibitors in an Italian Cohort of Patients with Type 2 Diabetes Mellitus. J Diabetes Res. 2019 Nov 4; 2019:3971060. doi: 10.1155/2019/3971060.
Reply: Thank you for the suggestion. We have cited in the introduction part on page 4 line 113-116 and in discussion part on page 8 line 271-273.
Also, epidemiological trends of diabetes mellitus in Taiwan should be integrated and updated with the most recent International Diabetes Federation (IDF) and 2019 Diabetes Atlas references.
Reply: Thanks for the recommendation. We have integrated the report of IDF 2019 Diabetes Atlas in introduction part on page 3 line 87-91.
2. Scatter Plots in Figure 3 are not easy and intuitive to read. I suggest the use of conditional colors and larger fonts, other than a presentation of panels ordered from A to C (and not the current B, A, C).
Reply: Thanks for the suggestion. We have revised Figure 3.
3. The “LEAD” acronym of this multicentric retrospective study should be explained in the introduction section, with the meaning of terms it refers to. The LEAD abbreviation could be mistaken for a well-known clinical trial program assessing the efficacy of GLP1-RA Liraglutide.
Reply: LEAD stands for “Learning from RWE: A Multicenter retrospective study of Dapagliflozin in Taiwan” We mentioned in the introduction part on page 4 line 119-120.
4. Table 1: the mean values of SPB and DBP should be illustrated in two separate rows.
Reply: Thanks for the comment. The mean values of SBP and DBP are illustrated in two separate rows in Table 1.
5. Given its importance, Table S1 should be inserted as an ordinary table and not in supplementary materials.
Reply: Per your comment, Table S1 has been inserted as an ordinary table (Table 3).
6. I would recommend English editing to improve readability of the manuscript and solve minor grammar issues.
Reply: Thank you for your comment. We have further edited the manuscript to improve its readability.
Experimental design
This real-world study adheres to high ethical standards and research integrity principles. However, I would rise some concerns about clarity in the Methods section.
1. It should be described how patient data were retrospectively collected and unified in a single database. Was it an automated procedure or were data extracted manually from the original medical records? The first option would have secured a higher degree of uniformity in the collection and analysis of data from multiple diabetes centers, thereby representing a strength point of this study.
Reply: Thank you for the insightful comment. The patient data were extracted manually from the respective sites by chart review and recorded in a standardized case report form. We have elaborated on the procedure in the revised manuscript on page 4, lines 128-129.
2. The statistical software used for the analysis should be specified.
Reply: Thank you for the comment. SAS version 9.3 was used and we have updated this in the revised manuscript on page 5 line 177-178.
3. The exclusion criteria for participant eligibility are not clear. In particular, the phrase “patients undergoing treatment with other investigational drugs concurrently during the retrospective data collection period” appears confusing. Were you excluding patients concurrently treated with other novel antidiabetic agents (e.g. GLP1 RA) or patients generally involved in clinical trials independently of diabetes?
Reply: Thank you for the comment. We have clarified the exclusion criteria in the revised manuscript on page 5 line 147-149. The phrase “patients undergoing treatment with other investigational drugs concurrently during the retrospective data collection period” represents patients who were included in other clinical trials.
4. The methods section should include how missing data were handled for the statistical analysis.
Reply: In each analysis, patients without the respective data were excluded. For instance:
Baseline characteristics: Patients with baseline data were included (N=1960)
Change from baseline to Month 3: Patients with baseline and Month 3 data were included (n=1344 in terms of HbA1c data)
Change from baseline to Month 6: Patients with baseline and Month 6 data were included (n=1197 in terms of HbA1c data)
The influence of dapagliflozin dosage (5 mg vs 10 mg) on glycemic and weight control has not been assessed. It would be an interesting point, given that the higher dose of dapagliflozin was not routinely administered in this study.
Reply: The proportions of dapagliflozin 10 mg and 5 mg used in this study were 78.9% and 21.1%, respectively. We conducted a subgroup analysis stratified by different doses, which showed similar effectiveness regardless of the dosage of dapagliflozin.
The changes in HbA1c at Month 6 from baseline were -0.74% and -0.71% with 10 mg and 5 mg, respectively. The changes in body weight at Month 6 from baseline were -1.65 kg and -1.43 kg with 10 mg and 5 mg, respectively.
Validity of the findings
This real-world study on dapagliflozin effectiveness in Taiwanese patients covers a central topic in Diabetology, and so Health Sciences. However, given the interesting results, the Discussion section could be much richer.
1. I would suggest the Authors to discuss the paradoxical negative influence on dapagliflozin response by a dual combination therapy (Table S1, OR 0.6). It cannot be excluded that the adjunct of dapagliflozin on distinct dual combination therapy options (e.g. met+sulfonylurea, met+acarbose, met+ insulin etc) may affect its effectiveness. For example, Mirabelli et al. evidenced that the SGLT2 inhibitor-sulfonylurea combination therapy has a negative impact on weight control in owerweight/obese diabetic patients.
Reply: Due to the insulin-independent mechanism of dapagliflozin, its efficacy should not be affected by other glucose-lowering agents. In the multivariate logistic regression analysis, the OR for dual therapy was 0.9 and not statistically significant (p=0.8). As a result, there was no evidence in the current study that the number of glucose-lowering agents used affected the change in HbA1c.
Considering that the number of patients in the respective subgroups may be too small to provide enough statistical power, we did not conduct subgroup analyses of users of specific drugs.
2. Although the primary outcome of this study is the effectiveness of dapagliflozin, the finding of a high rate of treatment interruption within 6 months of follow-up (400 patients, 20.4%) should be better detailed in the Results section and properly discussed. The “intolerance” term suggests genitourinary side effects in most cases. However, given the high prevalence of nephropathy in this study population (34.6%, Table 1) the risk of dapagliflozin-induced kidney injury is also conceivable, together with a loss in glycemic efficacy for a glycosuric agent. Also, how many patients shifted from dapagliflozin to different antidiabetic regimens due to “inadequate HbA1c control”?
Reply: Thank you for the insightful comments. We have replied your comments point-by-point as follows:
• The discontinuation rate within 6 months was 12.6% (n=247); we therefore adjusted the wording on page 7, line 255 to improve its readability.
• Most commonly reported reasons for intolerance include genital tract infection, urinary tract infection, and increased urinary frequency. No patients in this category experienced dapagliflozin-induced kidney injury. This result coincides with an analysis of the DECLARE-TIMI 58 study, which demonstrated less frequent AKI events in dapagliflozin group in comparison to placebo group (HR=0.69; 95% CI=0.55-0.87). [https://doi.org/10.1111/dom.14041]
• 83 of 1960 patients (4.23%) discontinued dapagliflozin due to inadequate HbA1c control. We have updated the number in the revised manuscript on page 7, lines 258-259.
Comments for the author
The study is interesting and well executed, however, greater clarity in the methods description is required, together with the addition of dapagliflozin dosage in the regression logistic model to predict a positive glycemic response. Also, the discussion section should explore the study findings outlined above.
Reviewer 2
Basic reporting
Chen et al. conducted a multicentre retrospective study to examine the effectiveness of dapagliflozin on cardiovascular risk factors in Taiwan. It studied 1960 people with type 2 diabetes (52% men; mean age 58 years, HbA1c 8.8%, BMI 28 kg/m2) who were initiated with or switched to dapagliflozin. At baseline, 8% and 35% had prior coronary artery disease and nephropathy, respectively. Raw data were shared.
A major pitfall in the present study was lack of a control group. The effects of dapagliflozin were in line with existing literature, which was also stated in the Discussion.
Other major comments are as follows:
1. Line 89: Please clarify what did LEAD mean.
Reply: Thanks for the comment. LEAD stands for Learning from RWE: A Multicenter retrospective study of Dapagliflozin in Taiwan.
4. Discussion:
a) Need more details of the study limitations.
Reply: Thanks for the comment. We have added more details of the study limitations on page 10 line 351-353.
b) A few references that can be useful throughout the manuscript:
https://pubmed.ncbi.nlm.nih.gov/29852973/
Reply: Thank you for the recommendation. We added in discussion part on page 9 line 334-337: “Both large-scale RCT and real-world studies (eg. CVD-REAL study) indicate that dapagliflozin lowers hospitalization for heart failure and cardiovascular mortality in T2DM patients with existing or risk of cardiovascular disease.”
https://pubmed.ncbi.nlm.nih.gov/31416989/
Reply: Thanks for the recommendation. We cited the reference in introduction part on page 3 line 84-86: “Type 2 diabetes mellitus (T2DM) is a chronic metabolic disease affecting populations worldwide, and it has become an important public health challenge among Asian countries, especially in ethnic Chinese populations.”
https://pubmed.ncbi.nlm.nih.gov/31902326/
Reply: Thanks for the recommendation. We cited this reference in discussion part on page 9 line 334-337: “Both large-scale RCT and real-world studies (eg. CVD-REAL study) indicate that dapagliflozin lowers hospitalization for heart failure and cardiovascular mortality in T2DM patients with existing or risk of cardiovascular disease.”
https://pubmed.ncbi.nlm.nih.gov/27866701/
Reply: Thanks for the recommendation. We cited the reference in introduction part on page 3 line 84-86: “Type 2 diabetes mellitus (T2DM) is a chronic metabolic disease affecting populations worldwide, and it has become an important public health challenge among Asian countries, especially in ethnic Chinese populations.” And also mentioned in discussion part on page 9 line 327-330: “These findings have significant clinical implications in Asians, as visceral adiposity is known to be higher in Asians than Caucasians at a given BMI, contributing to insulin resistance that leads to cardiovascular and renal complication.”
Experimental design
2. Statistical analysis:
a) How did the authors handle missing data including a loss to follow-up and drug non-adherence? The results shown in Table 2 were per-protocol analysis. Would this introduce bias?
Reply:
In each analysis, patients without the respective data were excluded. For instance:
Baseline characteristics: Patients with baseline data were included (N=1960)
Change from baseline to Month 3: Patients with baseline and Month 3 data were included (n=1344 in terms of HbA1c data)
Change from baseline to Month 6: Patients with baseline and Month 6 data were included (n=1197 in terms of HbA1c data)
Patients who discontinued dapagliflozin treatment were recorded and excluded from the effectiveness results. Due to the low number of patients (12.6%) who discontinued treatment, we consider the risk of bias in the results to be low.
Validity of the findings
3. Results:
a) Given 90% of people were treated with metformin at baseline, did the authors compare the effects of using dapagliflozin as 2nd and 3rd line therapy, as well as in different dual, triple or >3 drug combinations?
Reply: We have performed sub-analyses of HbA1c and body weight stratified by the number of drugs in combination. There were no differences in HbA1c or weight reduction among subgroups. Due to the complexity of the medication regimens and the small number of patients if further divided, we did not analyze by specific dual, triple, or >3 drug combinations.
b) Lines 158-160: What was the proportion of drug initiation/switch in primary and secondary prevention groups?
Reply: Thank you for the comment. Due to the relatively low percentage of patients with prior CAD (7.8%), cerebrovascular disease (1.7%), and heart failure (2.6%), we did not further calculate the proportion of drug initiation/switch in these groups.
c) Lines 181-184: Were the baseline characteristics similar between people who received add-on therapy versus switching therapy?
Reply: Thank you for the comment. Regarding their baseline characteristics, we focused on the diabetic medications and found a pronounced difference in the use of DPP4 inhibitors (27.6% in the add-on group vs. 75.7% in the switching group). We assume this observation is due to the reimbursement restriction in Taiwan, which reimburses either an SGLT2 inhibitor or a DPP4 inhibitor, but not both, for the same patient.
d) Safety and non-adherence data are important and should be reported.
Reply: Thank you for the comment. Due to the retrospective nature of this study, it is relatively difficult to obtain safety data compared with a prospective study. However, we have collected the reasons for discontinuing dapagliflozin due to intolerance. The most commonly reported reasons for intolerance include genital tract infection, urinary tract infection, and increased urinary frequency. We have added these data on page 7, lines 259-260.
e) Table 1: What were the proportions of people with or without cardiovascular disease/nephropathy treated with dapagliflozin at baseline? Did they show similar effectiveness?
Reply: Thank you for the comment. 7.8% of patients had CAD at baseline, whereas 34.6% of patients had nephropathy at baseline. Based on the univariate logistic regression analysis, both CAD and nephropathy were not associated with responses in HbA1c and body weight.
f) Table S1: Did the authors report the logistic regression results of other cardiovascular risk factors e.g. body weight?
Reply: Yes, we have performed the logistic regression for body weight (shown in the table below). The result showed that greater weight reduction was correlated with lower HbA1c, lower LDL-C, and not using insulin.
• Patients with lower HbA1c have good glucose control and may have a lower chance of being treated with insulin, which would cause weight gain.
• This association has not been reported in previous studies. Thus, this may be a chance finding, especially as only 62% of patients in the study had LDL-C data.
• It is reasonable that patients not using insulin have greater body weight reduction after dapagliflozin treatment than patients using insulin.
g) Figure S3: The majority of results showed a greater response in people with worse glycemic control. However, the change in body weight in people with HbA1c >10% was less than those with better glycemic control. What could be the possible reasons?
Reply: T2DM patients with poor glucose control tend to be treated with insulin in the real-world setting. Insulin may increase body weight and thus partly offset the weight-reducing effect of dapagliflozin treatment.
" | Here is a paper. Please give your review comments after reading it. |
664 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Aims/Introduction. To investigate the clinical outcomes of patients with type 2 diabetes mellitus (T2DM) who initiated dapagliflozin in real-world practice in Taiwan.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials and Methods.</ns0:head><ns0:p>In this multicenter retrospective study, adult patients with T2DM who initiated dapagliflozin after May 1 st 2016 either as add-on or switch therapy were included. Changes in clinical and laboratory parameters were evaluated at 3 and 6 months. Baseline factors associated with dapagliflozin response in glycated hemoglobin (HbA1c) were analyzed by univariate and multivariate logistic regression.</ns0:p><ns0:p>Results. A total of 1960 patients were eligible. At 6 months, significant changes were observed: HbA1c by -0.73% (95% confidence interval [CI] -0.80, -0.67), body weight was -1.61 kg (95% CI -1.79, -1.42),</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>and systolic/diastolic blood pressure by -3.6/-1.4 mmHg. Add-on dapagliflozin showed significantly greater HbA1c reduction (-0.82%) than switched therapy (-0.66%) (p=0.0023). The proportion of patients achieving HbA1c <7% target increased from 6% at baseline to 19% at Month 6. Almost 80% of patients experienced at least 1% reduction in HbA1c, and 65% of patients showed both weight loss and reduction in HbA1c. Around 37% of patients had at least 3% weight loss. Multivariate logistic regression analysis indicated patients with higher baseline HbA1c and those who initiated dapagliflozin as add-on therapy were associated with a greater reduction in HbA1c.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions.</ns0:head><ns0:p>In this real-world study with the highest patient number of Chinese population to date, the use of dapagliflozin was associated with significant improvement in glycemic control, body weight, and blood pressure in patients with T2DM. Initiating dapagliflozin as add-on therapy showed better glycemic control than as switch therapy.</ns0:p></ns0:div>
<ns0:div><ns0:head>Use and effectiveness of dapagliflozin in patients with type 2 diabetes mellitus: a multicenter retrospective study in Taiwan</ns0:head><ns0:p>Jung-Fu Chen 1,2 , Yun-Shing Peng 3 , Chung-Sen Chen 4 , Chin-Hsiao Tseng 5 , Pei-Chi Chen 6 , Ting-I Lee 7,8 , Yung-Chuan Lu 9,10 , Yi-Sun Yang 11 , Ching-Ling Lin 12 , Yi-Jen Hung 13 , Szu-Ta Chen 14 , Chieh-Hsiang Lu 15-17 , Chwen-Yi Yang 18 , Ching-Chu Chen 19,20 , Chun-Chuan Lee 21 , Pi-Jung Hsiao 17 , Ju-Ying Jiang 22 , Shih-Te Tu 23</ns0:p></ns0:div>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Type 2 diabetes mellitus (T2DM) is a chronic metabolic disease affecting populations worldwide, and it has become an important public health challenge among Asian countries, especially in ethnic Chinese populations. <ns0:ref type='bibr' target='#b7'>(Cho et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b24'>Lim & Chan 2019;</ns0:ref><ns0:ref type='bibr' target='#b25'>Lim et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b31'>Nanditha et al. 2016)</ns0:ref> According to the 2019 IDF Diabetes Atlas and a nationwide database analysis in Taiwan, the prevalence of diabetes increase from 4.31% in 2000 to 6.6% in 2019 for adults aged 20-79 years, resulting in a more than 70% increase in the total diabetic population (1.23 million in 2019). <ns0:ref type='bibr'>(International Diabetes Federation 2019;</ns0:ref><ns0:ref type='bibr' target='#b22'>Jiang et al. 2012</ns0:ref>) Currently, most guidelines recommend pharmacologic therapy based on evaluating glycated hemoglobin (HbA1c) levels for glycemic control. When the glycemic target is not achieved by lifestyle management and metformin, a second agent may be initiated, considering medication profiles and patient-related factors.(American Diabetes Association 2019; Diabetes Association Of The Republic Of China Taiwan 2019; <ns0:ref type='bibr' target='#b12'>Garber et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b28'>McGuire et al. 2016)</ns0:ref> Sodium-glucose cotransporter-2 (SGLT2) inhibitors are a newer class of oral antidiabetic drugs (OADs) that inhibit glucose reabsorption at the early segments of the proximal convoluted tubule, thereby promoting glucosuria independently of insulin action. <ns0:ref type='bibr' target='#b16'>(Hasan et al. 2014</ns0:ref>) These agents improve glycemic control without increasing the risk of hypoglycemia, and have pleiotropic effects such as weight loss and reduction in blood pressure (BP). Combining SGLT2 inhibitors with metformin has been demonstrated to have additive effect compared to metformin alone in HbA1c and body weight reduction. <ns0:ref type='bibr' target='#b30'>(Molugulu et al. 2017)</ns0:ref> Given that T2DM has been known to have a higher risk of cardiovascular events, significant attention has been paid to the benefit of SGLT2 inhibitors on cardiovascular outcomes in T2DM patients with or without pre-existing cardiovascular disease. <ns0:ref type='bibr' target='#b36'>(Scholtes et al. 2019)</ns0:ref> Dapagliflozin, a member of the SGLT2 inhibitor class, has been shown in randomized controlled trials (RCTs) to improve glycemic control both as monotherapy <ns0:ref type='bibr' target='#b2'>(Bailey et al. 2015)</ns0:ref> and as add-on to other antidiabetic drugs, <ns0:ref type='bibr' target='#b32'>(Nauck et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b38'>Sun et al. 2014)</ns0:ref> along with wellestablished safety profile in a pooled analysis of phase IIb/III trials. <ns0:ref type='bibr' target='#b21'>(Jabbour et al. 2018)</ns0:ref> Complimentary to RCTs, observational studies can provide real-world data reflecting clinical practice patterns and outcomes not collected in RCTs. <ns0:ref type='bibr' target='#b13'>(Garrison et al. 2007)</ns0:ref> The real-world evidence on dapagliflozin has been reported in North America <ns0:ref type='bibr' target='#b4'>(Brown et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b20'>Huang et al. 2018b</ns0:ref>) and Europe, <ns0:ref type='bibr' target='#b10'>(Fadini et al. 
2018;</ns0:ref><ns0:ref type='bibr' target='#b29'>Mirabelli et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b35'>Scheerer et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b37'>Scorsone et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b42'>Wilding et al. 2017</ns0:ref>) whereas data are limited in Asia, especially for ethnic Chinese populations. <ns0:ref type='bibr' target='#b15'>(Han et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b40'>Tobita et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b41'>Viswanathan & Singh 2019)</ns0:ref> Dapagliflozin has been reimbursed by the National Health Insurance (NHI) in Taiwan since May of 2016. The purpose of the LEAD (Learning from RWE: A Multicenter retrospective study of Dapagliflozin in Taiwan) study was to retrospectively investigate the clinical outcomes for patients with T2DM initiating dapagliflozin under real-world setting in Taiwan. Patients were eligible for inclusion if they (1) were diagnosed with T2DM and aged 20 years old, (2) initiated 5 mg or 10 mg of dapagliflozin after May 1 st 2016, either as add-on therapy to existing OAD(s) with or without insulin, or as switch therapy from another OAD, and (3) completed follow-up of at least 6 months regardless of continuation on dapagliflozin therapy. Data were excluded if they (1) received other SGLT2 inhibitors prior to the initiation of dapagliflozin,</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Data source and study population</ns0:head><ns0:p>(2) had a diagnosis of type 1 diabetes, or (3) were included in other clinical trials concurrently during the retrospective data collection period.</ns0:p></ns0:div>
<ns0:div><ns0:head>Assessment and outcome measures</ns0:head><ns0:p>Patient baseline demographics and clinical characteristics were recorded at the time of dapagliflozin initiation. Reasons for starting/switching to dapagliflozin, rate and reasons of dapagliflozin discontinuation were collected. Measurements of HbA1c, body weight, BP, fasting plasma glucose (FPG), and lipid profile (total cholesterol, high-density lipoprotein cholesterol [HDL-C], low-density lipoprotein cholesterol [LDL-C], and triglycerides) were obtained at baseline, 3 and 6 months to assess for changes. Further subgroup analyses were performed to assess changes in HbA1c by baseline HbA1c, manner of dapagliflozin initiation (as add-on or switch), BMI, age, and dosage of dapagliflozin. Changes in body weight were also analyzed by subgroups of baseline BMI, HbA1c, age, and dosage of dapagliflozin. Patients who have one or more times record using 5 mg dapagliflozin during the follow up were defined as 5 mg group. The proportion of patients within each glycemic level (HbA1c <7%, 7-8%, 8-9%, and >9%) was evaluated at baseline and 6 months.</ns0:p></ns0:div>
<ns0:div><ns0:head>Statistical analysis</ns0:head><ns0:p>Descriptive statistics were provided for all variables. Continuous variables were presented with mean ± standard deviation, and numbers or percentages were used for categorical variables. Changes from baseline of the variables were analyzed using evaluable data at the respective time points. Mean changes of each variable from baseline to Month 3, baseline to Month 6, and between 3 and 6 months were evaluated by paired t-test and Wilcoxon signed-rank test in the overall cohort and subgroups. Differences in HbA1c reduction between dapagliflozin add-on and switch groups at Month 3 and Month 6 were analyzed by Wilcoxon rank-sum test. For changes in HbA1c and body weight among other subgroups at Month 3 and Month 6, the Kruskal-Wallis test was used. Patient distributions of changes in HbA1c and body weight at Month 6 were displayed by scatter plots for total, add-on, and switch groups. Baseline factors associated with dapagliflozin response in HbA1c were examined by univariate and multivariate logistic regression analyses. The cutoff for dapagliflozin responders was determined by using the median of change in HbA1c in patients with evaluable data at Month 6. Regarding the missing data, patients without the respective data were excluded in each analysis. For instance, when analyzing change from baseline to Month 3, only patients with both baseline and Month 3 data were included. Statistical significance was set at p value <0.05. All statistical analyses were conducted using SAS version 9.3.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Baseline characteristics</ns0:head><ns0:p>A total of 1960 patients were eligible, with a mean age of 57.8 ± 11.5 years, and 52% were male. The baseline clinical and laboratory characteristics are shown in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>. The mean HbA1c was 8.8 ± 1.4%, and body weight was 75.2 ± 15.8 kg. Baseline systolic blood pressure (SBP) and diastolic blood pressure (DBP) was 136.6 ± 17.6 and 77.8 ± 11.3 mmHg, respectively. Metformin was the most prescribed antihyperglycemic agent (92.4%), followed by sulfonylurea (70.7%) and dipeptidyl peptidase-4 (DPP4) inhibitor (53.5%). Over half of the patients were on either dual (38.2%) or triple (36.2%) therapy prior to initiating dapagliflozin. A large proportion of patients (78.9%) initiated dapagliflozin at the higher dose (10 mg). Regarding the manner of initiation, more than half (53.8%) of the patients were switched to dapagliflozin, while the others (46.2%) started dapagliflozin as add-on therapy. Main reasons for starting dapagliflozin were for its HbA1c lowering efficacy (81.2%), followed by less concern for weight gain (26.8%) and lower risk of hypoglycemia (11.4%).</ns0:p></ns0:div>
<ns0:div><ns0:head>Effectiveness of dapagliflozin on clinical and laboratory parameters</ns0:head><ns0:p>Table <ns0:ref type='table'>2</ns0:ref> shows changes in clinical and laboratory parameters after dapagliflozin initiation at 3 and 6 months. Compared with baseline, statistically significant reductions in HbA1c were observed both at Month 3 (-0.68%; 95% confidence interval [CI] -0.74, -0.62, p <0.001) and at Month 6 (-0.73%; 95% CI -0.80, -0.67, p <0.001). Considering patients with evaluable HbA1c data at baseline and Month 6, the proportion of patients achieving the glycemic target (HbA1c <7%) was 6% at baseline, and it increased to 19% by Month 6 after initiating dapagliflozin. Moreover, the proportion of patients who were poorly controlled (HbA1c >9%) decreased from 34.7% at baseline to 15.9% at Month 6 (Fig <ns0:ref type='figure' target='#fig_0'>S1A</ns0:ref>). Improvements in FPG (-28.3 mg/dL), BMI (-0.60), body weight (-1.61 kg), and SBP/DBP (-3.6/-1.4 mmHg) at Month 6 were also significant (all p <0.001, Table <ns0:ref type='table'>2</ns0:ref>). Aside from BP reduction that plateaued at Month 3, small but significant improvements were observed in HbA1c, FPG, BMI, and body weight from Month 3 to Month 6. Small changes in lipid profiles were also noted (Table <ns0:ref type='table'>2</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Subgroup analyses: HbA1c and body weight</ns0:head><ns0:p>We performed analyses to examine the effects of dapagliflozin treatment on HbA1c and body weight among different subgroups. For HbA1c, a statistically significant trend of greater reduction was observed in patients with higher baseline HbA1c from baseline to 3 and 6 months (Fig. <ns0:ref type='figure' target='#fig_0'>1A</ns0:ref>). In patients with baseline HbA1c between 8-9% and those >10%, a significant greater HbA1c reduction was found at Month 6 compared with Month3. Patients who received dapagliflozin as add-on therapy had a significantly greater reduction in HbA1c (-0.82%) than those who were switched from one antihyperglycemic agent to dapagliflozin at Month 6 (-0.66%, p=0.002, Fig. <ns0:ref type='figure' target='#fig_0'>1B</ns0:ref>). Considering patients with evaluable HbA1c data at baseline and Month 6, the proportion of patients achieving the glycemic target (HbA1c <7%) were 5% and 6% for dapagliflozin add-on and switch groups at baseline, and they subsequently increased to 23% and 15% by Month 6, respectively (Fig. <ns0:ref type='figure' target='#fig_0'>S1B</ns0:ref>). When stratified by baseline BMI, no significant difference in HbA1c reduction was found across subgroups (Fig. <ns0:ref type='figure'>S2A</ns0:ref>). Similarly, no difference among age subgroups was found for HbA1c reduction (Fig. <ns0:ref type='figure'>S2B</ns0:ref>). When stratified by dosage of dapagliflozin, patients receive 10 mg dapagliflozin had significantly greater reduction in HbA1c (-0.74%) than those who receive 5 mg at month 6 (-0.67%, p<0.001, Fig. <ns0:ref type='figure'>S2C</ns0:ref>).</ns0:p><ns0:p>For body weight, treatment with dapagliflozin showed a statistically significant trend of greater weight loss with increasing baseline BMI from baseline to 3 and 6 months (Fig. <ns0:ref type='figure'>2</ns0:ref>). Among patients with evaluable data at both Month 3 and Month 6, further weight reduction was significant at Month 6 compared with Month 3 across all BMI categories, except for those with baseline BMI ≥35. When stratified by baseline HbA1c, significantly greater weight reductions were found in patients with lower baseline HbA1c throughout the study, despite similar baseline body weight among HbA1c categories (Fig. <ns0:ref type='figure'>S3A</ns0:ref>). There was no difference in weight reduction across age subgroups at Month 3, but a significant difference was found at Month 6 (Fig. <ns0:ref type='figure'>S3B</ns0:ref>). Comparing data between 3 and 6 months, significant reductions in weight were found in groups aged 40-65 and 65-75 years. When stratified by dosage of dapagliflozin, patients receive 10 mg dapagliflozin had significantly greater reduction in body weight (-1.65 kg) than those who receive 5 mg at month 6 (-1.43 kg, p=0.011, Fig. <ns0:ref type='figure'>S3C</ns0:ref>).</ns0:p><ns0:p>In addition, patients were stratified into four groups according to the number of antidiabetic therapies at baseline: monotherapy, dual therapy, triple therapy, and >3 drugs combination. Across these subgroups, no significant differences were observed in both HbA1c and body weight reductions (Fig. <ns0:ref type='figure'>S2D</ns0:ref>; Fig. <ns0:ref type='figure'>S3D</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Relationship between changes in HbA1c and body weight</ns0:head><ns0:p>Patient distributions of changes in HbA1c and body weight after initiating dapagliflozin for 6 months were presented in scatter plots (Fig. <ns0:ref type='figure'>3</ns0:ref>). Of the 1094 patients with evaluable data, 79.9% and 77.6% of them experienced reductions in HbA1c and body weight, respectively. Moreover, 64.0% of patients showed simultaneous reductions in both outcomes (Fig. <ns0:ref type='figure'>3A</ns0:ref>). For patients in the add-on (n=529) and switch (n=565) groups, 69.2% and 59.4% had a reduction in both HbA1c and body weight, respectively (Fig. <ns0:ref type='figure'>3B and 3C</ns0:ref>). In terms of clinical meaningful change, 74.4% and 74.2% of patients had at least 0.5% and 1% reduction in HbA1c for 6 months, respectively. 37.1% of patients had at least 3% weight loss for 6 months. The baseline characteristics of at least 1% reduction in HbA1c, 3% weight loss and both were show in the supplement table 1-3, respectively.</ns0:p></ns0:div>
<ns0:div><ns0:head>Baseline factors associated with dapagliflozin response in HbA1c</ns0:head><ns0:p>Baseline factors associated with dapagliflozin response in HbA1c at Month 6 were shown in Table <ns0:ref type='table'>3</ns0:ref>. Responders and non-responders were determined by using the median of change in HbA1c (-0.60%; min, max [-6.6, 3.2]) as the cutoff.</ns0:p><ns0:p>Univariate logistic regression analysis indicated that higher baseline HbA1c, FPG, add-on dapagliflozin, use of insulin were significantly associated with dapagliflozin response in HbA1c reduction, while being on dual therapy at the time of dapagliflozin initiation was significantly associated with less HbA1c reduction. In multivariate logistic regression analysis, significant associations with dapagliflozin response in HbA1c reduction were found in patients with higher baseline HbA1c (odds ratio [OR] 2.10; 95% CI 1.79-2.47, p <0.001), and those who received dapagliflozin as add-on therapy (OR 1.60; 95% CI 1.19-2.14, p=0.002).</ns0:p></ns0:div>
<ns0:div><ns0:head>Discontinue rate and reasons for discontinuation</ns0:head><ns0:p>Among eligible patients, the total number of discontinuation at Month 3 and Month 6 were 153 (7.8%) and 247 (12.6%), respectively. The main reasons for the discontinuation were inadequate HbA1c control (n=83; 4.23%), intolerance (n=46; 2.35%), or poor compliance to the current regimens (n=9; 0.46%). The most commonly reported reasons for intolerance were genital or urinary tract infections (n=15), frequency of urination (n=8), and vaginal itching (n=5).</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>This multicenter retrospective study presented the first nationwide real-world evidence for dapagliflozin in Taiwan. In this cohort of 1960 Taiwanese patients with T2DM, initiating dapagliflozin was associated with significant improvements in glycemic control, body weight, and BP. Overall, the changes from baseline were -0.73% for HbA1c, -28.3 mg/dL for FPG, -1.61 kg (-2.14%) for body weight, and -3.6/-1.4 mmHg for SBP/DBP at 6 months. These clinical benefits of dapagliflozin are comparable to the efficacy data from meta-analyses of RCTs. <ns0:ref type='bibr' target='#b38'>(Sun et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b44'>Zhang et al. 2014)</ns0:ref> In addition, small changes in lipids were noted, including an increase in total cholesterol, LDL-C, HDL-C, and a decrease in triglycerides.</ns0:p><ns0:p>Several observation studies assessing the effectiveness of dapagliflozin have been published recently. <ns0:ref type='bibr' target='#b4'>(Brown et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b10'>Fadini et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b15'>Han et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b20'>Huang et al. 2018b;</ns0:ref><ns0:ref type='bibr' target='#b29'>Mirabelli et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b35'>Scheerer et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b41'>Viswanathan & Singh 2019;</ns0:ref><ns0:ref type='bibr' target='#b42'>Wilding et al. 2017)</ns0:ref> Despite differences in ethnicity, location, and other clinical factors, the baseline HbA1c among studies were similar to our data, ranging 8.5%-9.5%, and many patients were already on two or three OADs, some were even using insulin. The effects of dapagliflozin on glycemic control, body weight, and BP were consistent across these studies: reduction in HbA1c ranged from 0.7%-1.5% over 6 to 12 months of observation, reduction in body weight by percentage ranged 1.75%-3.83%, and reduction for SBP and DBP were of 2.3-3.8 mmHg and 1.1-2.0 mmHg, respectively. In our study, these effects were significant 3 months after initiating dapagliflozin, and aside from the BPlowering effect that plateaued at Month 3, we found small but significant improvements in other key metabolic parameters from Month 3 to Month 6. Also, 65% of patients showed simultaneous reductions in both HbA1c and body weight, comparable to the previous reports. <ns0:ref type='bibr' target='#b10'>(Fadini et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b15'>Han et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b35'>Scheerer et al. 2016</ns0:ref>) These data suggested that dapagliflozin is effective in realworld practice for patients with inadequately controlled T2DM.</ns0:p><ns0:p>The baseline characteristics in our cohort, such as mean age, sex ratio, and HbA1c were similar to the South-East Asia cohort in the recent global DISCOVER study. <ns0:ref type='bibr' target='#b14'>(Gomes et al. 2019)</ns0:ref> Although diabetic patients in South-East Asia (and Western Pacific region) have lower BMI than those in Western countries, the current Taiwanese study had slightly higher BMI compared with the South-East Asia data in the DISCOVER study <ns0:ref type='bibr'>(28.3 vs. 27.3</ns0:ref>). To explore and identify whether some baseline factors may be associated with better treatment response to dapagliflozin, subgroup and logistic regression analyses were performed. 
We observed a significant trend of greater HbA1c reduction in patients with higher baseline HbA1c, and the association was further indicated in multivariate logistic regression analysis. This association is consistent with findings from the literature. In general, studies suggest that patients with higher baseline HbA1c experienced a greater reduction in HbA1c with dapagliflozin treatment, <ns0:ref type='bibr' target='#b4'>(Brown et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b15'>Han et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b18'>Hong et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b35'>Scheerer et al. 2016</ns0:ref>) and one study indicates that a lower baseline HbA1c was associated with the achievement of HbA1c goal (<7%). <ns0:ref type='bibr' target='#b42'>(Wilding et al. 2017)</ns0:ref> Other patient characteristics that may be associated with better dapagliflozin response were male, <ns0:ref type='bibr' target='#b4'>(Brown et al. 2017</ns0:ref>) younger age (<45 years old), <ns0:ref type='bibr' target='#b42'>(Wilding et al. 2017)</ns0:ref> shorter disease duration (<2-4 years), <ns0:ref type='bibr' target='#b4'>(Brown et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b15'>Han et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b42'>Wilding et al. 2017</ns0:ref>) and non-insulin use <ns0:ref type='bibr' target='#b4'>(Brown et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b18'>Hong et al. 2019)</ns0:ref>; however, some of these associations were not identified and other data were not available in the current study. On the other hand, baseline body weight or BMI did not influence the magnitude of reduction in HbA1c, which is similar to previous studies. <ns0:ref type='bibr' target='#b15'>(Han et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b35'>Scheerer et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b42'>Wilding et al. 2017)</ns0:ref> Regarding the effects of dapagliflozin by manners of initiation, we found patients who initiated dapagliflozin as add-on therapy had a significantly greater reduction in HbA1c (-0.82%) than those who were switched (-0.66%) from other OADs. Two recent studies also showed treatment with SGLT2 inhibitors (i.e., dapagliflozin or empagliflozin) as add-on therapy had a greater glucose-lowering effect than as switch therapy. <ns0:ref type='bibr' target='#b15'>(Han et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b18'>Hong et al. 2019</ns0:ref>) Notably, we found DPP4 inhibitors accounted for the majority (69.5%) of the switched agents (data not shown), a finding which is likely due to the fact that Taiwan's NHI only covers either a DPP4 inhibitor or an SGLT2 inhibitor. A single-center retrospective study in Taiwan recently reported that patients who switched from a DPP4 inhibitor to an SGLT2 inhibitor (i.e., empagliflozin) for 6 months had significant reductions in HbA1c and body weight, whereas those who remained on a DPP4 inhibitor did not experience significant changes. <ns0:ref type='bibr' target='#b19'>(Huang et al. 2018a</ns0:ref>) Moreover, data from long-term clinical trials demonstrated sustained efficacy and tolerability of dapagliflozin as addon therapy. <ns0:ref type='bibr' target='#b1'>(Bailey et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b8'>Del Prato et al. 2015;</ns0:ref><ns0:ref type='bibr' target='#b23'>Leiter et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b27'>Matthaei et al. 
2015)</ns0:ref> Taken together, switching to an SGLT2 inhibitor from other OADs such as DPP4 inhibitors may be a suitable option for antidiabetic treatment, and add-on SGLT2 inhibition could provide better clinical benefit than switch therapy in patients already taking other OADs without adequate glycemic control. <ns0:ref type='bibr' target='#b18'>(Hong et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b19'>Huang et al. 2018a)</ns0:ref> In our study, we observed a significant trend of greater weight loss with increasing baseline BMI. This trend remained when data were calculated by the percentage of weight reduction, with about 1% weight reduction in the BMI <24 category and over 2% in those with BMI ≥30 (data not shown). Besides, we also observed a significant weight loss in patients received 10 mg dapagliflozin compared with 5 mg. This result is similar with a previous study that have demonstrate a dose-dependent reduction in body weight with dapagliflozin in Chinese patients under placebo-controlled trial conditions. <ns0:ref type='bibr' target='#b5'>(Cai et al. 2018</ns0:ref>) Loss of fat mass has been known to account for approximately 70% of total weight loss observed with dapagliflozin treatment. <ns0:ref type='bibr' target='#b3'>(Bolinder et al. 2012)</ns0:ref> These findings have significant clinical implications in Asians, as visceral adiposity is known to be higher in Asians than Caucasians at a given BMI, contributing to insulin resistance that leads to cardiovascular and renal complication. <ns0:ref type='bibr' target='#b25'>(Lim et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b26'>Ma & Chan 2013)</ns0:ref> Given its effectiveness on HbA1c, FPG, body weight, and BP, dapagliflozin could serve as a promising OAD for Asian population to achieve better glycemic control and reduce future risk of cardiovascular diseases.</ns0:p><ns0:p>Since several cardiovascular outcome trials of SGLT2 inhibitors have been published, their effects on cardiorenal protection have come under the spotlight. <ns0:ref type='bibr' target='#b36'>(Scholtes et al. 2019</ns0:ref>) Both largescale RCT and real-world studies (e.g. CVD-REAL study) indicate that dapagliflozin lowers hospitalization for heart failure and cardiovascular mortality in T2DM patients with existing or risk of cardiovascular disease. <ns0:ref type='bibr' target='#b6'>(Cavender et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b33'>Norhammar et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b34'>Raparelli et al. 2020;</ns0:ref><ns0:ref type='bibr' target='#b43'>Wiviott et al. 2019</ns0:ref>) Although cardiovascular events were not assessed in this study, the effects of dapagliflozin on several cardiovascular risk factors (e.g., HbA1c, body weight, BP) observed were similar to those demonstrated in the DECLARE-TIMI 58 trial. In addition, change in lipid profile by dapagliflozin was similar to that reported in a pooled analysis of clinical trials. <ns0:ref type='bibr' target='#b21'>(Jabbour et al. 2018)</ns0:ref> While the slightly elevated LDL-C level may be of concern, a study showed that dapagliflozin increases concentration of the less atherogenic large buoyant LDL-C, while the potent atherogenic small dense LDL-C remained suppressed. <ns0:ref type='bibr' target='#b17'>(Hayashi et al. 
2017)</ns0:ref> Future study with long-term follow-up would be expected to assess the cardiovascular outcomes with dapagliflozin treatment in the Taiwanese population.</ns0:p><ns0:p>Our study provided clinical effects of dapagliflozin on glycemic, weight, BP control at 6 months using a large representative cohort of patients withT2DM in Taiwan. Besides, comprehensive analyses were performed to identify subgroups and baseline predictors for better dapagliflozin response. However, there are several limitations in this study. First, due to a lack of comparison arm in prospective fashion, the reported effect size might not reflect the true differences. In addition, limited by the retrospective observational study design, partial loss of follow-up, missing data, and other confounding factors were inevitable. Due to the relatively low percentage of patients with prior CAD (7.8%), cerebrovascular disease (1.7%), and heart failure (2.6%), we were unable to further analyzed the outcomes stratified by primary and secondary prevention groups. Besides, the safety data was not prospectively collected. However, we have recorded the reasons for discontinuation due to intolerance. In addition, disease duration and renal function (eGFR) were not recorded, so we were unable to confirm their association with dapagliflozin response in HbA1c, as observed in other studies. <ns0:ref type='bibr' target='#b4'>(Brown et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b15'>Han et al. 2018;</ns0:ref><ns0:ref type='bibr' target='#b42'>Wilding et al. 2017</ns0:ref>)</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The LEAD study presented the effectiveness of dapagliflozin observed from the highest number of Chinese patients with T2DM in a real-world setting to date. Add-on therapy showed better glycemic control than switch therapy. The initiation of dapagliflozin was associated with significant improvements in glycemic control, body weight, and BP in Taiwanese patients, which is comparable to that in RCTs and other observational studies. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,42.52,178.87,525.00,247.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,42.52,199.12,525.00,393.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Baseline characteristics in 1960 patients.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>1 Age (years)</ns0:cell><ns0:cell>57.8 ± 11.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Sex (male)</ns0:cell><ns0:cell>1020 (52%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Weight (kg)</ns0:cell><ns0:cell>75.2 ± 15.8</ns0:cell></ns0:row><ns0:row><ns0:cell>BMI (kg/m 2 )</ns0:cell><ns0:cell>28.3 ± 4.9</ns0:cell></ns0:row><ns0:row><ns0:cell>BMI categories</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell><24</ns0:cell><ns0:cell>17.6%</ns0:cell></ns0:row><ns0:row><ns0:cell>≥24 -<27</ns0:cell><ns0:cell>26.0%</ns0:cell></ns0:row><ns0:row><ns0:cell>≥27 -<30</ns0:cell><ns0:cell>24.9%</ns0:cell></ns0:row><ns0:row><ns0:cell>≥30 -<35</ns0:cell><ns0:cell>22.3%</ns0:cell></ns0:row><ns0:row><ns0:cell>≥35</ns0:cell><ns0:cell>9.2%</ns0:cell></ns0:row><ns0:row><ns0:cell>HbA1c (%)</ns0:cell><ns0:cell>8.8 ± 1.4</ns0:cell></ns0:row><ns0:row><ns0:cell>HbA1c distribution</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell><7%</ns0:cell><ns0:cell>5.8%</ns0:cell></ns0:row><ns0:row><ns0:cell>≥7% -<8%</ns0:cell><ns0:cell>25.1%</ns0:cell></ns0:row><ns0:row><ns0:cell>≥8% -<9%</ns0:cell><ns0:cell>31.4%</ns0:cell></ns0:row><ns0:row><ns0:cell>>9% -≤10%</ns0:cell><ns0:cell>19.6%</ns0:cell></ns0:row><ns0:row><ns0:cell>>10%</ns0:cell><ns0:cell>18.1%</ns0:cell></ns0:row><ns0:row><ns0:cell>FPG (mg/dL)</ns0:cell><ns0:cell>173.0 ± 53.9</ns0:cell></ns0:row><ns0:row><ns0:cell>SBP / DBP(mmHg)</ns0:cell><ns0:cell>136.6 ± 17.6 / 77.8 ± 11.3</ns0:cell></ns0:row><ns0:row><ns0:cell>DBP (mmHg)</ns0:cell><ns0:cell>77.8 ± 11.3</ns0:cell></ns0:row><ns0:row><ns0:cell>Total cholesterol (mg/dL)</ns0:cell><ns0:cell>164.3 ± 36.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Triglycerides (mg/dL)</ns0:cell><ns0:cell>171.2 ± 133.6</ns0:cell></ns0:row><ns0:row><ns0:cell>LDL-C (mg/dL)</ns0:cell><ns0:cell>89.8 ± 27.8</ns0:cell></ns0:row><ns0:row><ns0:cell>HDL-C (mg/dL)</ns0:cell><ns0:cell>44.8 ± 13.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Medical history</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Hypertension</ns0:cell><ns0:cell>60.5%</ns0:cell></ns0:row><ns0:row><ns0:cell>CAD</ns0:cell><ns0:cell>7.8%</ns0:cell></ns0:row><ns0:row><ns0:cell>Cerebrovascular disease</ns0:cell><ns0:cell>1.7%</ns0:cell></ns0:row><ns0:row><ns0:cell>PAD</ns0:cell><ns0:cell>7.4%</ns0:cell></ns0:row><ns0:row><ns0:cell>Heart failure</ns0:cell><ns0:cell>2.6%</ns0:cell></ns0:row><ns0:row><ns0:cell>Nephropathy</ns0:cell><ns0:cell>34.6%</ns0:cell></ns0:row><ns0:row><ns0:cell>Retinopathy</ns0:cell><ns0:cell>10.7%</ns0:cell></ns0:row><ns0:row><ns0:cell>Neuropathy</ns0:cell><ns0:cell>14.1%</ns0:cell></ns0:row><ns0:row><ns0:cell>Antihyperglycemic therapy</ns0:cell><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ reviewing PDF | (2020:02:45716:2:0:CHECK 12 Aug 2020)Manuscript to be reviewed 2 Values are in n (%), mean ± SD, or percent of total population. The following parameters had 3 patients (n) with missing data:Weight, n = 192; BMI, n = 206; HbA1c, n = 119; FPG, n = 380; </ns0:note></ns0:figure>
<ns0:note place='foot'>PeerJ reviewing PDF | (2020:02:45716:2:0:CHECK 12 Aug 2020)Manuscript to be reviewed</ns0:note>
</ns0:body>
" | "
Division of Endocrinology and Metabolism, Changhua Christian Hospital
No. 135 Nanhsiao Street, Changhua City 50006, Taiwan
+886-4-7009699
[email protected]
Aug 12nd, 2020
Dear Editors
We truly appreciate the generous comments from the reviewers on the manuscript. Based on the insightful comments provided, we have analyzed the results in detail and revised the manuscript to address their concerns. This make the current real-world study-LEAD more comprehensive and clinically valuable.
We believe that the manuscript is now suitable for publication in PeerJ.
Shih Te Tu, MD
On behalf of all authors.
(Pages and lines were according to the tracked changes manuscript files)
Reviewer 1
Basic reporting
The manuscript has been improved, addressing this reviewer’s comments with careful attention. There is only a minor point that should still require Authors’ consideration.
Please provide these novel subgroup analyses, evaluating the real-world effects of dapagliflozin dosage (5mg vs 10 mg) on HbA1c and body weight, to the reader as a supplementary figure, with differences (p values) among subgroups at 3 and 6 months.
Reply: Thanks for the comment and we have add this novel subgroup analysis in the supplementary figure 2C and 3C.
Subgroup stratification by dapagliflozin dosage should be cited in material and methods, and findings should be shown and briefly discussed in the appropriate manuscript sections, taking into account that a dose-dependent reduction in body weight with dapagliflozin has been demonstrated in Chinese patients under placebo-controlled trial conditions.
(Cai X. et al. The Association Between the Dosage of SGLT2 Inhibitor and Weight Reduction in Type 2 Diabetes Patients: A Meta‐Analysis. Obesity (Silver Spring). 2018 Jan;26(1):70-80. doi: 10.1002/oby.22066 ).
Reply: Thanks for the comment and we have mention on page 4 line 138, page 5 line 153-155, page 6 line 219-221, page 7 line 231-233 and page 9 line 326-329.
Experimental design
no comment
Validity of the findings
no comment
Reviewer 2
Basic reporting
Thank you for the revision.
A few comments:
Line 87-92: Current professional guidelines recommend the use of glucose-lowering drugs based on compelling indications while HbA1c is an important tool in glycemic monitoring. Suggest rephrase.
Reply:
Thanks for the comment and we have rephrased it accordingly on page 3 line 87-92.
Line 244-245 and throughout the manuscript, tables and figures: Odds ratios and 95% confidence intervals in 2 decimal points and P-values in 3 decimal points would have provided sufficient precision.
Reply:
Thank you for your comment. We have adjusted the OR and 95% CI in 2 decimal points and p-value in 3 decimal points throughout the manuscript.
Experimental design
Line 157-171: The authors should state that only patients with complete data at baseline, month 3 and month 6 were included. Information on the handling of missing data is required either being described in the Statistical Analysis section or in a supplementary table.
Reply:
Thank you for the comment. We have described how we handle the missing data in the Statistical Analysis section on page 5 line 170-172.
Validity of the findings
Line 71-72 and Line 230: Please clarify the definitions of “improvement in HbA1c” and “simultaneous reduction in HbA1c and body weight”. Readers need to know if these improvements were clinically meaningful e.g. the proportion of patients with at least 0.5% or 1% HbA1c reduction; at least 3% weight loss. Based on these, we can know the characteristics and medication profiles of patients who had a very good response to dapagliflozin. Fig 3 only provided crude visualization.
Reply: Thank you for this truly insightful and clinically useful comment. We have revised base on novel data on page 3 line 71-73 and page 7 line 242-244. Also, we have further analyzed the baseline characteristics and medication profiles in patients with very good response (e.g. 1% reduction in HbA1c and/or 3% reduction in weight loss). In fact, their baseline characteristics are generally similar to the overall population. No significant prognostic factors can be identified.
Response to Comment 3 of Reviewer 2: The 2 bar charts showing the effectiveness of dapagliflozin on HbA1c and body weight by different combination therapy did report some useful data. This reviewer opined that the results should be reported.
Reply: Thanks for the comment and we have add this subgroup analysis in the supplementary figure 2D and 3D and mentioned on page 7 line 235-238.
Study limitations:
- As mentioned in the previous review, the authors should discuss the limitation related to a lack of comparison arm and thus, the reported effect sizes might not reflect the true differences.
- The inability to analyse by primary and secondary prevention groups.
Reply:
Thank you for these two insightful comments regarding the study limitations. We have added them on page 9-10 line 344-350.
" | Here is a paper. Please give your review comments after reading it. |
665 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. This study aimed to assess whether ecological inferences from isotopic functional indices (IFIs) are impacted by changes in isotopic baselines in aquatic food webs. We used sudden CO 2 -outgassing and associated shifts in DIC-δ 13 C brought by waterfalls as an excellent natural experimental set-up to quantify impacts of changes in algal isotopic baselines on ecological inferences from IFIs.</ns0:p><ns0:p>Methods. Carbon (δ 13 C) and nitrogen (δ 15 N) stable isotopic ratios of invertebrate communities sharing similar structure were measured at above- and below-waterfall sampling sites from five rivers and streams in Southern Quebec (Canada). For each sampled invertebrate community, the six Layman's IFIs were then calculated in the δ-space (δ 13 C vs. δ 15 N).</ns0:p><ns0:p>Results. As expected, isotopic functional richness indices, measuring the overall extent of community trophic space, were strongly sensitive to changes in isotopic baselines, unlike other IFIs. Indeed, the other IFIs were calculated based on the distribution of species within the δ-space and were not strongly impacted by changes in the vertical or horizontal distribution of specimens in the δ-space. Our results highlighted that IFIs exhibited different sensitivities to changes in isotopic baselines, leading to potential misinterpretations of IFIs in river studies where isotopic baselines generally show high temporal and spatial variability. The identification of isotopic baselines and their associated variability, and the use of independent trophic tracers to identify the actual energy pathways through food webs, must be a prerequisite to IFI-based studies to strengthen the reliability of ecological inferences of food web structural properties.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Stable isotope analysis of aquatic consumers, mainly of carbon and nitrogen, is a common technique to provide quantitative and qualitative measurements of energy flows in food webs <ns0:ref type='bibr' target='#b6'>(Cabana and Rasmussen 1996;</ns0:ref><ns0:ref type='bibr' target='#b33'>Post et al. 2002;</ns0:ref><ns0:ref type='bibr' target='#b44'>Vander Zanden et al. 2016)</ns0:ref>. Consumer isotopic ratios are often represented in a δ-space (i.e. δ 13 C-δ 15 N biplot), where species trophic interactions can be assessed using a large variety of analytical tools <ns0:ref type='bibr' target='#b24'>(Layman et al. 2012</ns0:ref>). Among them, isotopic functional indices (IFIs) are based on the distribution and the dispersion of species in the δ-space and have been developed to calculate measures of the trophic structure of food webs <ns0:ref type='bibr' target='#b25'>(Layman et al. 2007;</ns0:ref><ns0:ref type='bibr' target='#b23'>Jackson et al. 2011;</ns0:ref><ns0:ref type='bibr' target='#b10'>Cucherousset and Villéger 2015)</ns0:ref>. Briefly, IFIs allow inference of food web structural properties and can be grouped according to three major components of trophic diversity. First, isotopic functional richness provides a quantitative indication of the extent of the isotopic space of the entire community <ns0:ref type='bibr' target='#b25'>(Layman et al. 2007;</ns0:ref><ns0:ref type='bibr' target='#b23'>Jackson et al. 2011)</ns0:ref>. Second, isotopic functional divergence provides information on the average degree of trophic diversity within a δ-space <ns0:ref type='bibr' target='#b25'>(Layman et al. 2007)</ns0:ref>. Third, isotopic functional evenness quantifies the regularity in species distribution and may also be seen as an indicator of trophic redundancy in food webs <ns0:ref type='bibr' target='#b25'>(Layman et al. 2007;</ns0:ref><ns0:ref type='bibr'>Rigolet et al. 2013)</ns0:ref>.</ns0:p><ns0:p>The IFI concept is, however, based on two main assumptions: that two close species have similar roles in food webs; and that isotopic metrics are good proxies of food web structural properties <ns0:ref type='bibr' target='#b25'>(Layman et al. 2007</ns0:ref>), but too few empirical studies have been conducted to evaluate the validity of these underlying assumptions <ns0:ref type='bibr' target='#b39'>(Syväranta et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b22'>Jabot et al. 2017)</ns0:ref>. Several authors have, however, pointed out that overlaps and variability in isotopic baselines could be major pitfalls of IFIs and hamper identification of actual food web structure <ns0:ref type='bibr' target='#b21'>(Hoeinghaus and Zeug 2008;</ns0:ref><ns0:ref type='bibr' target='#b22'>Jabot et al. 2017)</ns0:ref>, but very few studies have empirically tested for the sensitivity of IFIs to these issues <ns0:ref type='bibr' target='#b23'>(Jackson et al. 2011)</ns0:ref>. Moreover, differences in IFI sensitivities to changes in isotopic baselines could be inherently driven by differences in calculation methods: being higher for IFIs based on the dispersion of species in the δ-space than for others based on their distribution. For instance, several authors have suggested that isotopic functional richness indices (i.e.
measuring species dispersion in the δ-space) are strongly influenced by changes in the ranges of consumer δ 13 C and δ 15 N values <ns0:ref type='bibr'>(Brind'Amour and Dubois 2013;</ns0:ref><ns0:ref type='bibr'>Syväranta et al. 2013)</ns0:ref>, and ecological inferences of food web structural properties from these scale-dependent IFIs are therefore highly sensitive to changes in isotopic baselines.</ns0:p><ns0:p>Carbon of aquatic consumers sustained by autochthonous food resources (i.e., algae) is derived from the fixation of dissolved inorganic carbon (DIC) by autochthonous primary producers during photosynthesis. In river ecosystems, many biological and biochemical processes (e.g. respiration, water flow velocity) can influence DIC-δ 13 C values (see also <ns0:ref type='bibr' target='#b18'>Finlay 2003)</ns0:ref>, leading to strong spatial/temporal variability in algal δ 13 C values <ns0:ref type='bibr' target='#b19'>(France and Cattaneo 1998;</ns0:ref><ns0:ref type='bibr' target='#b17'>Finlay 2001;</ns0:ref><ns0:ref type='bibr' target='#b36'>Rasmussen 2010</ns0:ref>). Due to this large variability in isotopic baselines over time and space, different diets could lead to similar isotopic ratios of aquatic consumers, and conversely the same diet could yield different isotopic ratios. Changes in isotopic ratios of basal food resources can thus lead to potential misinterpretations of IFIs in river studies comparing food webs across sites and/or over time, and complementary empirical studies are needed to better assess whether ecological inferences from IFIs are impacted by variability in isotopic baselines.</ns0:p><ns0:p>Artificial variability in river algal δ 13 C values can be acquired by manipulating DIC-δ 13 C values <ns0:ref type='bibr' target='#b9'>(Cole et al. 2002)</ns0:ref>. In that vein, artificial tracer studies (i.e., 13 C-tracer addition experiments) have been conducted in small streams to induce changes in the isotopic baselines of algal food resources and track the fate of algal biomass in stream food webs <ns0:ref type='bibr' target='#b20'>(Hotchkiss and Hall 2015)</ns0:ref>, but this strategy is, however, not suitable for larger ecosystems (Sánchez-Carrillo and Álvarez-Cobelas 2017). Waterfalls decrease the thickness of the boundary layer at the air/water interface, leading to massive gaseous exchanges with the atmosphere over short distances <ns0:ref type='bibr' target='#b8'>(Chen et al. 2004;</ns0:ref><ns0:ref type='bibr' target='#b41'>Teodoru et al. 2015;</ns0:ref><ns0:ref type='bibr' target='#b26'>Leibowitz et al. 2017)</ns0:ref>. Hence, waterfalls induce CO 2 -outgassing and associated shifts in DIC-δ 13 C values in acidic running waters where carbonate dissolution cannot compensate for the loss of CO 2 <ns0:ref type='bibr' target='#b31'>(Palmer et al. 2001;</ns0:ref><ns0:ref type='bibr' target='#b12'>Doctor et al. 2008)</ns0:ref>. Rapid degassing and equilibration to atmospheric values and the associated shifts in DIC-δ 13 C below waterfalls should induce changes only in algal δ 13 C values, and theoretically not affect the isotopic ratios of allochthonous organic matter.</ns0:p><ns0:p>As algal production has been shown to be an important source of C in similar streams/rivers (see also <ns0:ref type='bibr' target='#b36'>Rasmussen 2010)</ns0:ref>, we expected a shift in scale-dependent IFIs linked to a shift in algal δ 13 C values.
Therefore, waterfall systems could provide an excellent natural experimental set-up to quantify impacts of changes in isotopic baselines on ecological inferences from IFIs in a range of rivers and streams varying in size.</ns0:p><ns0:p>The aim was to study impacts of changes in isotopic baselines on the evaluation of food web structure using IFIs, and we hypothesized that DIC isotopic shifts brought by CO 2 -outgassing at waterfall sites should induce changes in food web structure inferences based on IFIs. Similarities in food web structure at above- versus below-waterfall sampling sites from five rivers and streams in Southern Quebec (Canada) were tested by comparing taxonomic composition and δ 15 N values to assess the positioning of trophic guilds. We also compared IFIs (calculated in the δ 13 C-δ 15 N space) for invertebrate communities at above- and below-waterfall sites, and differences in IFIs within waterfall paired sites were interpreted as a result of changes in algal isotopic baselines brought by waterfall-induced DIC isotopic shifts. We hypothesized that IFIs exhibit different sensitivities to changes in isotopic baselines due to calculation methods. IFIs calculated using the dispersion of species in a δ-space (isotopic functional richness) are therefore scale-dependent and should be more sensitive to changes in isotopic baselines than those based on the distribution of species within a δ-space (isotopic functional evenness and divergence).</ns0:p></ns0:div>
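To make this scale-dependence argument concrete, the short R sketch below illustrates the expected behaviour; it is illustrative only, and the consumer coordinates, algal-reliance fractions and the +3 ‰ baseline shift are invented assumptions rather than data from this study. Each consumer's δ 13 C is shifted in proportion to its assumed reliance on algal carbon, and the δ 13 C range (CR) and the mean distance to centroid (CD) are recomputed: in relative terms, the range-based metric responds far more strongly than the centroid-based one.

# Illustrative R sketch (hypothetical values): a shift in the algal delta13C
# baseline propagates to consumers in proportion to their reliance on algae.
d13C <- c(-31.5, -31.2, -29.0, -29.5, -30.8, -30.5)  # consumer delta13C (per mil), above-waterfall
d15N <- c(  3.0,   3.5,   2.5,   2.0,   6.0,   6.5)  # consumer delta15N (per mil)
algal_reliance <- c(0.9, 0.85, 0.4, 0.5, 0.75, 0.7)  # assumed fraction of algal carbon per consumer
baseline_shift <- 3                                  # assumed increase in algal delta13C (per mil)
d13C_shifted <- d13C + baseline_shift * algal_reliance

cr <- function(x) max(x) - min(x)                    # delta13C range (CR), a richness metric
cd <- function(x, y) mean(sqrt((x - mean(x))^2 + (y - mean(y))^2))  # mean distance to centroid (CD)

# CR changes much more, in relative terms, than CD under the baseline shift
c(CR_above = cr(d13C), CR_below = cr(d13C_shifted))
c(CD_above = cd(d13C, d15N), CD_below = cd(d13C_shifted, d15N))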
<ns0:div><ns0:head>Material and methods</ns0:head></ns0:div>
<ns0:div><ns0:head n='1.'>Site description and sampling protocol</ns0:head><ns0:p>Five waterfalls (with vertical drops ranging from 18 to 72 m), from small streams to large rivers (widths ranging from 6 to 50 m), were studied in Southern Quebec, Canada (between 46-47°N and 72-73°W). Their catchment areas are situated on the Canadian Shield (corresponding to a metamorphic geological formation), making the running water weakly conductive and slightly acidic (ranging from 20 to 50 µS.cm -1 with an average pH value around 6.3 ± 0.4 at investigated sites). To use changes in DIC-δ 13 C values brought by waterfall CO 2 -outgassing as an integrative indicator of changes in algal isotopic baselines, each site was sampled at two locations immediately upstream and downstream of the waterfall (hereafter above- and below-waterfall), and the maximum distance between the two sampling points was 300 m. Paired sampling locations were also selected to have similar environmental conditions (water velocity, riverbed substrates, water depth, surrounding vegetation cover, canopy cover, etc.). Therefore, as habitat structures within each waterfall paired site were similar, food web structures of above- and below-waterfall invertebrate communities were also expected to be similar.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>C-gas sampling and carbon stable isotope analysis of DIC</ns0:head><ns0:p>Water chemistry (partial pressure of CO 2 : pCO 2 , DIC-δ 13 C) was measured at each sampling site to characterize the biogeochemical effects of waterfalls and quantify the expected shifts in algal δ 13 C values.</ns0:p><ns0:p>The selected sites were visited 1 to 2 times in spring and summer (in early May and late June 2016). pCO 2 was measured using the headspace method (Campeau and del Giorgio 2014). 30 mL of water sample was collected from approximately 10 cm below the water surface, using a 60 mL polypropylene syringe, and 30 mL of ambient air was added into the syringe to create a 1:1 ratio (ambient air: water sample). Then, the syringe was vigorously shaken for 1.5 min to equilibrate the gases in the water and air fractions. 30 mL of the headspace was then injected into a 40 mL glass vial, prefilled with saturated NaCl solution (360 g.L -1 at 20°C), through a butyl rubber septum. A second needle was used to evacuate the excess of NaCl solution. Vials were kept inverted for storage, and headspaces were analysed using a Shimadzu GC-8A Gas Chromatograph with flame ionization detector at University of Quebec at Montreal (Montreal, Canada). pCO 2 in water samples was then retro-calculated using the headspace ratio, water temperature and ambient air concentrations of CO 2 at the studied sites. pCO 2 measurements were performed in duplicate for each sampling site. Supersaturation ratios (SR) were also calculated by dividing the partial pressure of CO 2 in water by the atmospheric CO 2 partial pressure.</ns0:p><ns0:p>At each sampling site, 500 mL of water sample was also collected in duplicate in early May and late June 2016 to analyse DIC-δ 13 C. Water samples were filtered at 0.2 µm using nitrocellulose membrane filters and stored for a maximum of 72 h in the dark at 4°C until analysis. 150 µL of phosphoric acid (H 3 PO 4 ; 85 %) was added into 12.5 mL amber borosilicate vials to ensure that all DIC content in the water sample would be converted into CO 2 . Then, vials were flushed with helium for 10 min to ensure a full evacuation of ambient air. 4 mL of water sample was injected into the He-flushed vials through the rubber septa using fine needles, and vials were equilibrated at 20°C for 18 h. DIC-δ 13 C was obtained using a ThermoFinnigan Gas Bench II coupled to an Isotope Ratio Mass Spectrometer (IRMS), and results were expressed as the delta notation with Vienna Pee Dee Belemnite as the standard: </ns0:p><ns0:formula xml:id='formula_0'>δ 13 C (‰) = ([R sample /R standard ] -1) × 1000</ns0:formula></ns0:div>
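For illustration, the following R sketch shows one way to implement this headspace retro-calculation and the supersaturation ratio. It is a simplified sketch rather than the exact routine used in this study: it assumes ideal-gas behaviour, a 1:1 water:air headspace ratio and a van 't Hoff temperature correction of Henry's constant, uses approximate constants, ignores the salting-out effect of the NaCl preservative, and the example concentrations are hypothetical.

# Minimal sketch of the headspace retro-calculation (approximate constants;
# salting-out by the NaCl preservative is ignored; example numbers are hypothetical).
headspace_pco2 <- function(pco2_eq_uatm,  # CO2 measured in the equilibrated headspace (uatm)
                           pco2_air_uatm, # ambient air CO2 added to the syringe (uatm)
                           temp_c,        # water temperature during equilibration (deg C)
                           v_water = 0.030, v_gas = 0.030) {  # volumes in litres (1:1 ratio)
  R  <- 0.082057                    # ideal gas constant, L atm mol-1 K-1
  Tk <- temp_c + 273.15
  kh <- 0.034 * exp(2400 * (1 / Tk - 1 / 298.15))  # Henry's constant for CO2 (mol L-1 atm-1), van 't Hoff form
  # CO2 mass balance: water + ambient air before shaking = water + headspace at equilibrium
  pco2_eq_uatm + (pco2_eq_uatm - pco2_air_uatm) * v_gas / (R * Tk * kh * v_water)
}

pco2_water <- headspace_pco2(pco2_eq_uatm = 1500, pco2_air_uatm = 410, temp_c = 15)
sr <- pco2_water / 410  # supersaturation ratio relative to the atmospheric CO2 level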
<ns0:div><ns0:head n='3.'>Invertebrate sampling and stable isotope analysis</ns0:head><ns0:p>Each sampling station was sampled in early July 2016, and benthic invertebrates were collected in riffle sections using a kick-net (0.1 m², 600 µm mesh size). Equal sampling effort was applied to each habitat type within above- and below-waterfall sites. Invertebrate specimens were sorted immediately in the field into taxonomic groups and transported in the dark at 4°C back to the laboratory 4-8 hours later to be frozen at -20°C until analysis. A small isotopic deviation can be observed using this method (see also <ns0:ref type='bibr' target='#b16'>Feuchtmayr and Grey 2003;</ns0:ref><ns0:ref type='bibr' target='#b46'>Wolf et al. 2016</ns0:ref>), but we assumed that this effect was the same for all samples. Invertebrates were identified at the genus level <ns0:ref type='bibr' target='#b28'>(Merritt and Cummins, 1996)</ns0:ref>, and specimens were then classified into different feeding groups as herbivores, detritivores, and predators <ns0:ref type='bibr' target='#b43'>(Thorp 1991;</ns0:ref><ns0:ref type='bibr' target='#b28'>Merritt and Cummins 1996;</ns0:ref><ns0:ref type='bibr' /> Electronic supplementary material S2). Samples were then dried at 60°C for 72 h and ground into fine powder. Carbon (δ 13 C) and nitrogen (δ 15 N) stable isotopic ratios were then analysed using an Isotope Ratio Mass Spectrometer interfaced with an Elemental Analyser (EA-IRMS) at University of Quebec at Trois-Rivieres (Trois-Rivieres, Canada). Results were expressed according to the delta notation (in ‰), with Vienna Pee Dee Belemnite and atmospheric N 2 as reference standards for carbon and nitrogen, respectively.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.'>Isotopic functional indices and data analysis</ns0:head><ns0:p>Non-metric Multidimensional Scaling (NMDS) was used to visualize dissimilarities among/within invertebrate communities at waterfall sites, and the Bray-Curtis index was used to measure dissimilarities of invertebrate communities based on presence/absence data. T-tests were also performed on trophic guild δ 15 N values to compare their trophic positions in food webs at above- and below-waterfall sites.</ns0:p><ns0:p>Means of δ 13 C and δ 15 N values of all individuals for each species calculated at each sampling site were used to derive six IFIs following <ns0:ref type='bibr' target='#b25'>Layman et al. (2007)</ns0:ref>: δ 13 C range (CR), δ 15 N range (NR), total area of the convex hull encompassing all the observations (TA), mean distance to centroid (CD), mean nearest neighbour distance (MNND) and standard deviation of nearest neighbour distance (SDNND). Layman's IFIs can be grouped into isotopic functional richness (CR, NR and TA); isotopic functional divergence (CD); and isotopic functional evenness (MNND and SDNND).</ns0:p><ns0:p>As isotopic functional richness indices (CR, NR and TA) provide a quantitative indication of the extent of the isotopic niche space of the entire community and are calculated using the dispersion of species in the δ-space <ns0:ref type='bibr' target='#b25'>(Layman et al. 2007;</ns0:ref><ns0:ref type='bibr' target='#b23'>Jackson et al. 2011)</ns0:ref>, we hypothesized that those IFIs should be more sensitive than those based on the distribution of species in the δ-space (CD, MNND and SDNND; Appendix 1). All indices were calculated using the SIAR package for R <ns0:ref type='bibr' target='#b32'>(Parnell and Jackson 2013)</ns0:ref>. Principal component analysis (PCA) was also performed to display changes in structural properties of above- and below-waterfall invertebrate communities and provide an overview of relationships between IFIs and changes in DIC-δ 13 C values. All statistical analyses and plots were performed using the R 3.5.2 software (R Core Team 2018).</ns0:p></ns0:div>
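For transparency about how these metrics are obtained, the following base-R sketch re-implements the six Layman metrics directly from species mean δ 13 C and δ 15 N values. It is a simplified illustration rather than the SIAR routine used for the published analysis, and the example community values are invented: CR and NR are ranges, TA is the convex-hull area (shoelace formula), CD is the mean distance to the centroid, and MNND/SDNND summarize nearest-neighbour distances.

# Base-R sketch of the six Layman metrics from species mean delta values
# (illustrative re-implementation; the example community is hypothetical).
layman_metrics <- function(d13C, d15N) {
  CR <- max(d13C) - min(d13C)                       # delta13C range
  NR <- max(d15N) - min(d15N)                       # delta15N range
  h  <- chull(d13C, d15N)                           # vertices of the convex hull
  x  <- d13C[h]; y <- d15N[h]
  TA <- 0.5 * abs(sum(x * c(y[-1], y[1]) - c(x[-1], x[1]) * y))  # hull area (shoelace formula)
  CD <- mean(sqrt((d13C - mean(d13C))^2 + (d15N - mean(d15N))^2))  # mean distance to centroid
  dmat <- as.matrix(dist(cbind(d13C, d15N)))        # pairwise Euclidean distances
  diag(dmat) <- NA
  nnd <- apply(dmat, 1, min, na.rm = TRUE)          # nearest-neighbour distance per species
  c(CR = CR, NR = NR, TA = TA, CD = CD, MNND = mean(nnd), SDNND = sd(nnd))
}

# Hypothetical species means (per mil) for one sampling site
layman_metrics(d13C = c(-31.2, -30.5, -29.8, -28.9, -30.1),
               d15N = c(2.1, 4.0, 3.2, 5.5, 6.3))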
<ns0:div><ns0:head>Results</ns0:head><ns0:p>A total of 36 water samples were analysed for pCO 2 and DIC-δ 13 C. In our study, sampled running waters were slightly acidic (with an average pH value around 6.3 ± 0.4) and carbonate dissolution cannot compensate for the loss of CO 2 . Therefore, waterfalls induced a consistent increase in DIC-δ 13 C values through rapid CO 2 -outgassing (Fig. <ns0:ref type='figure' target='#fig_5'>1</ns0:ref>). Below-waterfall DIC-δ 13 C values were always higher than those of above-waterfall samples, and results showed an average increase of 2.2 ‰ (ranging from -3.8 ± 0.2 ‰ to -0.9 ± 0.2 ‰; Fig. <ns0:ref type='figure' target='#fig_5'>1</ns0:ref>). Temporal comparisons between the two sampling periods (early May and late June 2016) revealed strong differences in pCO 2 and water temperature (rising from 5.3 ± 1°C in May to 19.3 ± 2.6°C in June), but smaller effects for DIC-δ 13 C values relative to the above- vs. below-waterfall sites (Fig. <ns0:ref type='figure' target='#fig_5'>1</ns0:ref>).</ns0:p><ns0:p>Most of the genera caught at each above-waterfall site were also found at below-waterfall sites (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>). The Bray-Curtis index between each pair of waterfall sites ranged from 0.13 to 0.4, whereas the average Bray-Curtis index value calculated among sites was 0.38 ± 0.09, suggesting that compositional differences among communities were higher among waterfalls than within each paired site, and the high similarity of invertebrate community composition within each waterfall paired site was also further validated through an NMDS plot (Fig. <ns0:ref type='figure'>2</ns0:ref>). The δ 15 N values of invertebrate specimens ranged from 0.8 to 10.2 ‰ (Fig. <ns0:ref type='figure'>3</ns0:ref>), and no significant difference was observed for each trophic guild between above- and below-waterfall samples (p-value > 0.05; Fig. <ns0:ref type='figure'>4</ns0:ref>). The δ 13 C values of invertebrate specimens ranged from -33.3 ‰ to -23 ‰ (Fig. <ns0:ref type='figure'>3</ns0:ref>) and showed consistent increases in δ 13 C values for below-waterfall samples. The δ 13 C-δ 15 N biplots visually highlighted strong similarities between above- and below-waterfall invertebrate communities, showing large overlaps in the isotopic spaces encompassing all species locations (Fig. <ns0:ref type='figure'>3</ns0:ref>). Large differences in δ 13 C values were also observed among trophic guilds (Fig. <ns0:ref type='figure'>5</ns0:ref>).</ns0:p><ns0:p>Calculations of IFIs showed notable changes between above- and below-waterfall IFI values (Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>). Isotopic functional evenness and divergence indices (MNND and SDNND, and CD, respectively) showed relatively small changes between above- and below-waterfall sites (Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>). In contrast, isotopic functional richness indices (mainly TA and especially CR, as expected, but also NR to a lesser extent) showed marked changes between above- and below-waterfall sites, and with few exceptions isotopic functional richness indices were lower at below-waterfall sites than at above-waterfall sites (Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>). Principal component analysis (PCA) gave an overview of differences between above- and below-waterfall invertebrate communities and illustrated the overall relationships between all IFIs. The first two PCA axes explained 54.7 % and 29.1 % of the total variance, respectively (Fig.
<ns0:ref type='figure' target='#fig_7'>7</ns0:ref>.A). The first PCA axis mainly explained isotopic functional divergence and evenness indices (Fig. <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>.B), whereas the second PCA axis explained CR and NR (Fig. <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>.B). As expected, additional projection of DIC-δ 13 C values in the factorial map revealed a visual correlation with CR (i.e., the δ 13 C range of invertebrate specimens; Fig. <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>.B). Overall, PCA1 and PCA2 scores of below-waterfall invertebrate communities were higher than those of above-waterfall communities (Fig. <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>.A), suggesting that IFI values were often lower at below-waterfall sampling sites than at above-waterfall locations.</ns0:p></ns0:div><ns0:div><ns0:head>Discussion</ns0:head></ns0:div><ns0:div><ns0:head n='1.'>Waterfalls, community structure and basal resources</ns0:head><ns0:p>NMDS (Fig. <ns0:ref type='figure'>2</ns0:ref>) and the taxonomic list (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>) showed only small changes in taxonomic composition and suggested that compositional differences among communities were higher among waterfalls than within each paired site. Furthermore, no changes in δ 15 N values of aquatic consumers (Fig. <ns0:ref type='figure'>4</ns0:ref>) were reported, suggesting that the analyzed organisms occupied similar trophic positions in the food webs <ns0:ref type='bibr' target='#b6'>(Cabana and Rasmussen 1996)</ns0:ref> and might therefore feed on similar diets above and below the waterfall sites. Therefore, in our study, waterfalls did not significantly impact the food web structure of invertebrate communities, and these results could strengthen previous findings showing the absence of a major effect of waterfalls on invertebrate community composition in four tropical rivers <ns0:ref type='bibr' target='#b1'>(Baker et al. 2016)</ns0:ref>.</ns0:p><ns0:p>Waterfall CO 2 -outgassing and the associated shift in DIC-δ 13 C values should induce localized changes in algal δ 13 C values and could also help to decipher the respective contributions of allochthonous and autochthonous carbon to lotic food webs. With few exceptions, δ 13 C values of trophic guilds were higher at below-waterfall sites than those of above-waterfall samples (Fig. <ns0:ref type='figure'>3</ns0:ref>), supporting the view that aquatic invertebrates mainly feed on in-stream algae <ns0:ref type='bibr' target='#b40'>(Tanentzap et al. 2017</ns0:ref>). However, differences among trophic guilds observed in our study might also suggest varying reliance on algae (Fig. <ns0:ref type='figure'>5</ns0:ref>). Surprisingly, large isotopic shifts were also observed for detritivores (but were in general smaller than those for herbivores), suggesting an important dependence on autochthonous
<ns0:div><ns0:head n='2.'>Sensitivity of IFI to changes in isotopic baselines</ns0:head><ns0:p>IFIs have become increasingly used in aquatic ecology <ns0:ref type='bibr' target='#b30'>(Olsson et al. 2009;</ns0:ref><ns0:ref type='bibr' target='#b0'>Abrantes et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b11'>Dézerald et al. 2018;</ns0:ref><ns0:ref type='bibr'>Burdon et al. 2019</ns0:ref>), but the IFI concept mainly relies on untested assumptions.</ns0:p><ns0:p>In this study, we considered waterfall systems as a natural experimental set-up to quantify the impacts of changes in isotopic baselines on ecological inferences from IFIs. We reasoned that waterfall CO 2 -outgassing and the associated shift in DIC-δ 13 C values should induce localized changes in algal δ 13 C values and therefore help to understand how changes in isotopic baselines impact upon IFIs.</ns0:p><ns0:p>Notable differences in IFI values, in the expected direction, were reported between above- and below-waterfall sites (Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>), and changes were likely driven by shifts in DIC-δ 13 C values (Fig. <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>.B). Isotopic functional evenness and divergence indices (CD, SDNND and MNND) were only slightly impacted by changes in isotopic baselines (Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>). Indeed, those indices were calculated based on the distribution of species in the δ-space <ns0:ref type='bibr' target='#b25'>(Layman et al. 2007)</ns0:ref>, and ecological inferences were therefore not strongly impacted by changes in the vertical or horizontal distribution of specimens in the isotopic space. In contrast, isotopic functional richness indices (CR, NR and TA), calculated from the dispersion of species in the δ-space and providing a quantitative indication of the extent of the isotopic niche space of the entire community, were strongly influenced by changes in algal δ 13 C values (Figs. <ns0:ref type='figure' target='#fig_7'>6 and 7</ns0:ref>). Moreover, differences in CR, NR and TA values between above- and below-waterfall sites often exceeded the range of changes previously reported in the literature <ns0:ref type='bibr' target='#b37'>(Rigolet et al. 2015)</ns0:ref>. These results could strengthen previous findings that these indices are very sensitive to changes in the ranges of consumer δ 13 C and δ 15 N values <ns0:ref type='bibr'>(Brind'Amour and Dubois 2013;</ns0:ref><ns0:ref type='bibr'>Syvaränta et al. 2013)</ns0:ref>. Hence, isotopic functional richness indices (CR, NR and TA) might be frequently misinterpreted in river studies comparing food webs across sites and/or over time with fluctuating isotopic baselines.</ns0:p><ns0:p>Our study suggested that the reliability of IFI inferences can be strengthened by identifying changes in IFIs driven by variability in isotopic baselines. In that vein, different propositions have already been made in the literature to improve the understanding of food web structure and resource partitioning in consumers (e.g. <ns0:ref type='bibr' target='#b22'>Jabot et al. 2017)</ns0:ref>. The most promising approach likely consists of increasing the number of isotopes studied (like hydrogen and sulphur: <ns0:ref type='bibr' target='#b14'>Doucett et al. 2007;</ns0:ref><ns0:ref type='bibr' target='#b34'>Proulx and Hare 2014)</ns0:ref> and including other types of data (such as gut contents, fatty acid contents or compound-specific isotope analysis). 
Indeed, the combination of these complementary proxies will provide new insights into actual energy pathways through food webs. By enabling a better understanding of trophic interactions in food webs, future IFI-based studies will help to better document food web structural properties.</ns0:p></ns0:div>
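The IFIs discussed above are Layman's six community-wide metrics, computed from the spread of species mean δ 13 C and δ 15 N values in the two-dimensional δ-space. The short Python sketch below illustrates one way these indices can be computed; it is only an illustration under our own conventions (the function and variable names are ours), not the code used for the analyses, which are typically run in R.

import numpy as np
from scipy.spatial import ConvexHull, distance_matrix

def layman_metrics(d13c, d15n):
    """Community-wide isotopic metrics (Layman et al. 2007) for one community.
    d13c, d15n: arrays of species (or guild) mean isotope values."""
    pts = np.column_stack([np.asarray(d13c, float), np.asarray(d15n, float)])
    cr = pts[:, 0].ptp()                                   # CR: delta13C range
    nr = pts[:, 1].ptp()                                   # NR: delta15N range
    ta = ConvexHull(pts).volume                            # TA: convex hull area (2-D)
    centroid = pts.mean(axis=0)
    cd = np.mean(np.linalg.norm(pts - centroid, axis=1))   # CD: mean distance to centroid
    d = distance_matrix(pts, pts)
    np.fill_diagonal(d, np.inf)
    nnd = d.min(axis=1)                                    # nearest-neighbour distances
    return {"CR": cr, "NR": nr, "TA": ta, "CD": cd,
            "MNND": nnd.mean(), "SDNND": nnd.std()}

Note that a uniform shift of all consumer δ 13 C values leaves every metric unchanged; it is the differential propagation of a baseline shift across consumers (e.g. through diet and tissue turnover) that stretches or compresses CR, NR and TA.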
<ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>Our study demonstrated that changes in isotopic baselines can impact the evaluation of river food web structure using IFIs, but these effects depended on IFI types (i.e. being higher for IFIs measuring species distribution in the δ-space than for other IFIs), leading to potential misinterpretations of IFIs in river studies where isotopic baselines generally show high temporal and spatial variabilities. The identification of isotopic baselines and their associated variability, and the use of independent trophic tracers to identify the actual energy pathways through food webs, must be a prerequisite to IFI-based studies to strengthen the reliability of ecological inferences of food web structural properties.</ns0:p><ns0:p>Table 1. Taxonomic list of macroinvertebrates collected in the sampling sites.</ns0:p><ns0:p>Specimens were classified into different functional groups according to their theoretical feeding behaviours: herbivore, detritivore, and predator <ns0:ref type='bibr'>(Thorp and Covich, 1991;</ns0:ref><ns0:ref type='bibr' target='#b28'>Merritt and Cummins, 1996)</ns0:ref>. Waterfall systems are abbreviated to the first four letters.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>δ 13 C = (R sample /R standard -1) x 1000; where R = 13 C/ 12 C. Sample measurement replications from internal standards (C1 = -3.0 ‰, and C5 = -22.0 ‰) produced analytical errors (1σ) of ± 0.3 ‰ (n = 17).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Isotope ratios are expressed in the delta notation (see above). Sample measurement replications from three internal standards (STD67: δ 13 C = -37 ‰ and δ 15 N = 8.6 ‰, UTG40: δ 13 C = -26.2 ‰ and δ 15 N = -4.5 ‰, and BOB1: δ 13 C = -27 ‰ and δ 15 N = 11.6 ‰) produced analytical errors (1σ) of ± 0.02 ‰ for δ 13 C values and ± 0.2 ‰ for δ 15 N values (n = 148).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 7 (</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,250.12,525.00,222.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,256.87,525.00,330.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,236.62,525.00,347.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,240.82,525.00,308.25' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 (on next page)</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
</ns0:body>
" | "Reviewer 1
Basic reporting
The manuscript by Belle and Cabana entitled “Effects of changes in isotopic baselines on the evaluation of food web structure using isotopic functional indices” tested the validity of Layman’s IFIs in natural streams. The results indicated that some indices strongly depend on isotopic baselines, but some were not. The information is useful in applying “isotope space” study in food web research.
Experimental design
The use of waterfall as a controlled experiment in natural streams is a good idea. The experimental design is simple and straightforward.
One uncertain point is that the authors sampled DIC in early May and late June 2016, but stream invertebrates in early June. As the environmental conditions seem to change drastically from the snow-melting season to summer in Canada, the growth of each living organism seems to be high. Concentrations and isotope ratios of DIC are instantaneous, and short-lived algae may change their isotope ratio rapidly, but long-lived predators may integrate the signals through time. Do the authors consider that the isotope ratios of DIC and invertebrate communities are comparable as in Fig. 5 and 6?
Reply: We observed consistent shifts in δ13C in May and June, consistent with degassing (change in pCO2). This physical degassing process associated with waterfalls should keep happening throughout the year, even though temporal shifts in the upstream DIC might also occur. Therefore, a shift in the downstream food web relative to the upstream food web should be observed in large long-lived organisms. Indeed, large organisms which have long growing periods, spanning more than a year in some cases (e.g. Plecoptera, Odonata, and crayfish collected here), time-integrated the waterfall-induced shift in our study. Stream invertebrates do grow in early spring in Canada and therefore should respond to the isotopic shift in early May. Algae will respond rapidly to changes in δ13C, and organisms with different turnover times will integrate these changes in different ways. Here we compared the same species above and below the waterfall in a paired design in many cases, and we think that this should alleviate the problem of differential time-integration across species. A continuous monitoring over the whole spring-summer-fall was beyond the scope of our study.
Validity of the findings
The final conclusion is clear. I agree with the authors that we need to test various isotopic indices to compare for relevant types of questions.
L. 120 the same environmental conditions: Could you add simple descriptions among sites? Readers may want to make use of similar settings.
Reply: Environmental conditions were not quantitatively measured in the field.
L120: “Paired sampling locations were also selected to have similar environmental conditions (water velocity, riverbed substrates, water depth, surrounding vegetation cover, canopy cover, etc.).”
L. 152 delta-definition: The factor 1000 is an extraneous numerical factor and should be deleted (Coplen2011 Rapid Commun. Mass Spectrom).
Reply: We decided to keep our definition which is commonly used in the ecological literature.
L. 153 and other standards, too. Internal standards (C1 = -3 ‰, and C5 = -22 ‰): We don’t know each value itself, but the values should be written with significant figures, like -3.0 ‰.
Reply: Modification have been made.
L153. “Sample measurement replications from internal standards (C1 = -3.0 ‰, and C5 = -22.0 ‰) produced analytical errors (1σ) of ± 0.3 ‰ (n = 17).”
L. 158 benthic invertebrates were sampled: Exact where did the authors sample invertebrates? Riffles, pools, or mixed?
Reply: We sampled in riffle sections. Modification has been made.
L158. “Each sampling station was sampled in early July 2016, and benthic invertebrates were collected in riffle section using a kick-net (0.1 m², 600 µm mesh size).”
L. 162 A small isotopic deviation can be observed using this technique: Indicate the reasons why the methods may cause the deviation.
Reply: The reasons can be found in the quoted reference.
L. 190 Remove “calculate”.
Reply: Modification has been made.
L189.” As isotopic functional richness indices (CR, NR and TA) provide a quantitative indication of the extent of the isotopic niche space of the entire community and are calculated using the dispersion of species in the δ-space (Layman et al. 2007; Jackson et al. 2011),”
L. 205 “increase of -2.2 ‰” should be “increase of 2.2 ‰”.
Reply: Modification has been made.
L204. “Below-waterfall DIC-δ13C values were always higher than those of above-waterfall samples, and results showed an average increase of 2.2 ‰”
L. 212 displayed high similarity: How did the authors identify “high similarity”?
Reply: Modification has been made.
L178. “Non-metric Multidimensional Scaling (NMDS) was used to visualize dissimilarities among/within invertebrate communities at waterfalls sites, and the Bray–Curtis index was used to measure dissimilarities of invertebrate communities based on presence/absence data.”
L210. “Most of the genera caught at each above-waterfall site were also found at below-waterfall sites (Table 1). Bray-Curtis index between each paired waterfall sites ranged 0.13-0.4, whereas the average Bray–Curtis index value calculated among sites was 0.38 ± 0.09, suggesting that compositional differences among communities were higher among waterfalls than within each paired-sites, and high similarity of invertebrate community composition of each paired waterfall sites was also further validated through an NMDS plot (Fig 2).”
L. 221: (Fig. 6): Fig. 5 comes later. Thus Fig. 6 should change to Fig. 5.
Reply: Modification has been made.
L221. “Large differences in δ13C values were also observed among trophic guilds (Fig 5). Calculations of IFIs showed notable changes between above- and below-waterfall IFI values (Fig. 6).”
L. 271 (Fig. 4): Fig. 6?
Reply: Modification has been made.
L274. “Notable differences in IFI values, in the expected direction, were reported between above- and below-waterfall sites (Fig. 6), and changes were likely driven by shifts in DIC-δ13C values (Fig. 7.B).”
L. 272 (Fig. 5.B): (Fig. 7.B)?
Reply: Modification has been made.
L274. “Notable differences in IFI values, in the expected direction, were reported between above- and below-waterfall sites (Fig. 6), and changes were likely driven by shifts in DIC-δ13C values (Fig. 7.B).”
L. 284 and/or over time with fluctuating isotopic baselines: This seems true, but the authors didn’t test the hypothesis (see 2. Experimental design).
Reply: Same reply as above.
We observed consistent shifts in δ13C in May and June, consistent with degassing (change in pCO2). This physical degassing process associated with waterfalls should keep happening throughout the year, even though temporal shifts in the upstream DIC might also occur. Therefore, a shift in the downstream food web relative to the upstream food web should be observed in large long-lived organisms. Indeed, large organisms which have long growing periods, spanning more than a year in some cases (e.g. Plecoptera, Odonata, and crayfish collected here), time-integrated the waterfall-induced shift in our study. Stream invertebrates do grow in early spring in Canada and therefore should respond to the isotopic shift in early May. Algae will respond rapidly to changes in δ13C, and organisms with different turnover times will integrate these changes in different ways. Here we compared the same species above and below the waterfall in a paired design in many cases, and we think that this should alleviate the problem of differential time-integration across species. A continuous monitoring over the whole spring-summer-fall was beyond the scope of our study.
Table 1 Explain “1” and “0”, also “A” and “B”.
Reply: Modifications have been made.
Table 1 Taxonomic list of macroinvertebrates collected in the sampling sites (1/0 refer to presence/absence). Specimens were classified into different functional groups according to their generally accepted feeding behaviors as recorded in the literature: herbivore, detritivore, and predator (Thorp and Covich, 1991; Merritt and Cummins, 1996). Waterfall systems are abbreviated to the first four letters. A refers to above-waterfall sites and B to below-waterfall sites.
Fig. 5 Trophic guilds are abbreviated to the first four letters (e.g., Herbivore becomes “Herb”): but not abbreviated.
Reply: Modifications have been made.
Figure 5 Changes in δ13C values for DIC and consumers belonging to different trophic guilds between above- and below-waterfall samples (with Δδ13C = δ13Cbelow - δ13Cabove). Dots represent the average per trophic guild, and vertical lines, standard deviations.
Reviewer 2
It seems that this manuscript is the revised version I have reviewed previously in another journal. The content has been improved considerably with newly added analyses such as NMDS, PCA, and IFIs, all of which were not shown in the previous manuscript. The results suggest that these indices are not very much sensitive to environmental change between above- and below-water falls. I think that the authors did a good job to wrap up their research, except for one critical issue as follows.
The main aim (Lines 93-94) and conclusion (Lines 286-288) of this study should be reframed, because both autochthonous and allochthonous baseline data (e.g., algae and terrestrial plants) were unfortunately not present. I strongly disagree to use DIC as baseline, because there is isotopic fractionation against 13C between DIC and algae. Its size and variation are both greater than the delta13C difference in DIC between above- and below-water falls.
Reply: Here we do not use d13C-DIC as an endmember in a scheme to estimate allo- vs. autochthony. Rather, we use it as a means to see how Layman metrics would react to expected changes in the range of consumer d13C brought by degassing, assuming the food web is the same above and below the waterfall.
However, this criticism might be avoidable if the authors used herbivores and detritivores as aquatic- and terrestrial-baselines, respectively (Post 2002 Ecology). This is advantageous over using benthic algae as baseline, which show a large variation in delta13C depending on environmental heterogeneity such as water velocity (Finlay et al. 1999 L&O) and on DI13C variation (Lines 67-70).
Reply: There is a long tradition in stream ecology to debate whether “detritivores” are truly only feeding on detritus, and what proportion of aquatic OM they might be assimilating. For example, Pteronarcys, a supposed classic stonefly shredder, has been seen to have d13C around -20 permil in hard-water forests, clearly out of the terrestrial vegetation in these sites (no C4 plants) (unpub. data , GC). Therefore, we strongly disagree that one can a priori find a true detritivore in streams and use it as an endmember for terrestrial OM and, as stated above, this was not our goal.
Reviewer 3
Basic reporting
This study performed the analysis of isotopic functional indices (IFIs) for a natural experimental set-up to quantify impacts of changes in algal isotopic baselines on ecological inference. This study was well conducted and very interesting for isotope ecology. However I have few minor comments.
For NMDS, please describe NMDS stress for the Figure 2.
Reply: Modification has been made.
Figure 2 NMDS ordination biplot (A: samples points; B: individual taxa) of invertebrate communities at waterfall sites based on Bray–Curtis index (presence/absence; NMDS stress = 0.124). Waterfall sites are abbreviated to the first four letters, and each pair of sampling site are linked with a dotted black line.
Figure 4 I could not see x and y ticks.
Reply: Modification has been made.
Figure 5 What the error bar means here?
Reply: Modification has been made.
Figure 5 Changes in δ13C values for DIC and consumers belonging to different trophic guilds between above- and below-waterfall samples (with Δδ13C = δ13Cbelow - δ13Cabove). Dots represent the average per trophic guild, and vertical lines, standard deviations.
" | Here is a paper. Please give your review comments after reading it. |
666 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. There exists great interest in using dried bloodspots across the clinical, public health, and nutritional sciences to characterize circulating levels of essential elements yet current methods face several challenges related to instrumentation, quality control, and matrix effects. Elemental analysis via total X-ray fluorescence (TXRF) may help overcome these challenges. The objective of this study was to develop and apply a novel TXRF-based analytical method to quantify essential elements (copper, selenium, zinc) in dried bloodspots. Methods. Analytical methods were developed with human whole blood standard reference materials from the Institut National de Santé Publique du Québec (INSPQ). The method was developed in careful consideration of several quality control parameters (e.g., analytical accuracy, precision, linearity, and assay range) which were iteratively investigated to help refine and realize a robust method. The developed method was then applied to a quantitative descriptive survey of punches (n=675) taken from residual dried bloodspots from a newborn screening biobank program (Michigan BioTrust for Health). Results. The analytical method developed to quantify the 3 target elements in dried bloodspots fared well against a priori quality control criteria (i.e., analytical accuracy, precision, linearity and range). In applying this new method, the average (± SD) blood copper, selenium, and zinc levels in the newborn samples were 1,117.0 ± 627.1 μg/L, 193.1 ± 49.1 μg/L, and 4,485 ± 2,275 μg/L respectively. All the elements were normally distributed in the sample population, and the measured concentrations fall within an expected range. Conclusions. This study developed and applied a novel and robust method to simultaneously quantify three essential elements. The method helps overcome challenges in the field concerning elemental analysis in dried bloodspots and the findings help increase understanding of nutritional status in newborns.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Trace element quantification is critical in activities such as clinical assessments, nutritional research, and public health interventions <ns0:ref type='bibr' target='#b0'>(1,</ns0:ref><ns0:ref type='bibr' target='#b1'>2)</ns0:ref>. There remains increasing interest in the development and use of biomarkers to study such elements <ns0:ref type='bibr' target='#b2'>(3,</ns0:ref><ns0:ref type='bibr' target='#b3'>4)</ns0:ref> though many biomarkers remain costly to analyze and suffer from logistical and ethical constraints. For example, venous whole blood is frequently viewed as the 'gold standard' for many analytes though its collection is invasive and sampling requires trained professionals and specialized supplies.</ns0:p><ns0:p>Dried blood spots (DBS) are being hailed as an alternative to venipuncture <ns0:ref type='bibr' target='#b3'>(4,</ns0:ref><ns0:ref type='bibr' target='#b4'>5)</ns0:ref>. They provide a minimally invasive and low-cost method for collecting capillary blood onto filter paper, and thus represent an ethical, practical, and economical alternative to venipuncture. Among other benefits <ns0:ref type='bibr' target='#b3'>(4,</ns0:ref><ns0:ref type='bibr' target='#b4'>5)</ns0:ref>, the blood does not need to be processed after collection thus reducing complexities associated with sampling, transport, and storage <ns0:ref type='bibr' target='#b5'>(6,</ns0:ref><ns0:ref type='bibr' target='#b7'>7)</ns0:ref>. Despite the potential benefits of DBS for elemental analysis, outstanding gaps have impaired widespread adoption <ns0:ref type='bibr' target='#b8'>(8)</ns0:ref><ns0:ref type='bibr' target='#b9'>(9)</ns0:ref><ns0:ref type='bibr' target='#b10'>(10)</ns0:ref><ns0:ref type='bibr' target='#b11'>(11)</ns0:ref><ns0:ref type='bibr' target='#b12'>(12)</ns0:ref>. Notably, quantification of target essential elements in DBS often require instrumentation that can achieve low detection limits while maintaining accuracy and precision, and the effects of storage temperature and method as well as the paper filter matrix need careful consideration.</ns0:p><ns0:p>Moving forward, some of the aforementioned challenges associated with the measurement of essential elements in DBS may be overcome through the use of Total Reflection X-Ray Fluorescence (TXRF) Spectroscopy. Comparisons with accepted methods of elemental analysis such as atomic absorption spectroscopy (AAS) and inductively-coupled plasma mass spectroscopy (ICPMS) demonstrate that TXRF is a practical, accurate, and reliable alternative <ns0:ref type='bibr' target='#b13'>(13)</ns0:ref><ns0:ref type='bibr' target='#b14'>(14)</ns0:ref><ns0:ref type='bibr' target='#b15'>(15)</ns0:ref>. Like these spectroscopy methods, TXRF can detect multiple elements in a range of sample types though for TXRF the approach entails simpler preparation methods and reduced sample volumes and run times, and these in turn help reduce analytical costs. In addition, the matrix effect is minimized in TXRF as aqueous samples are dried onto a quartz carrier disc thus reducing absorption or secondary excitation <ns0:ref type='bibr' target='#b17'>(16)</ns0:ref>. This may be particularly advantageous for studying elements in DBS.</ns0:p><ns0:p>The objective of this study was to develop and apply a novel method of quantifying essential elements in DBS using TXRF spectroscopy. The elements selected for this study were copper (Cu), selenium (Se), and zinc (Zn). 
We focused on these elements as they are classified as 'essential' elements and more specifically they play essential roles in metabolism, have antioxidant properties, and mediate proper reproductive and other important health outcomes as reviewed by the U.S. Institute of Medicine (IOM) Panel on Micronutrients <ns0:ref type='bibr' target='#b0'>(1,</ns0:ref><ns0:ref type='bibr' target='#b1'>2)</ns0:ref>. For each of these elements we first developed a new analytical assay and then we applied the developed method to measure these three elements in residual DBS obtained from newborns (n=675) from the State of Michigan's newborn screening biobank program (Michigan BioTrust for Health).</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>General Overview</ns0:head><ns0:p>In study phase #1, the method was developed using human whole blood standard reference material (SRM) with assigned elemental concentrations (Supplemental Materials Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>). Artificial DBS were created in the laboratory using these SRMs, and then these DBS were used to evaluate several quality control criteria (i.e., assay linearity, range, accuracy, precision) <ns0:ref type='bibr' target='#b18'>(17)</ns0:ref> to help establish a suitable analytical method. Next, in study phase #2, the established method was applied to quantify essential elements in DBS from newborns (n=675) from the Michigan BioTrust for Health program as part of a larger effort concerning newborn hearing loss. The elements focused on were copper (Cu), selenium (Se), and zinc (Zn). While TXRF can detect multiple elements, we focused the current work to develop methods for these three essential elements which play key roles in nutrition, health and disease <ns0:ref type='bibr' target='#b0'>(1,</ns0:ref><ns0:ref type='bibr' target='#b1'>2)</ns0:ref>.</ns0:p><ns0:p>Institutional Review Board (IRB) approval for this work was obtained from McGill University (A06-M29-16B), the University of Michigan (HUM000771006), and the Michigan Department of Health and Human Services (201212-05-XA-R).</ns0:p></ns0:div>
<ns0:div><ns0:head>Dried Bloodspots</ns0:head><ns0:p>In the methods development phase, human whole blood SRMs (n=7) of varying elemental concentrations from the Institut National de Santé Publique du Québec (INSPQ) were used (Supplemental Materials Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>). Artificial DBS were created by pipetting a 60 μL sample of whole blood SRM onto Whatman© 903 filter paper (GEO Healthcare Services, Mississauga, ON, Canada) and drying it overnight at room temperature in a Class 100 ISO Cleanhood. After drying, the DBS cards were stored in plastic bags at ambient temperature until use. The 60 μL DBS was sub-sampled using a 3mm punch (Harris Corporation, Melbourne, FL, USA). A punch of this size is often used in studies of DBS and assumed to contain 3.1 μL of blood <ns0:ref type='bibr' target='#b18'>(17)</ns0:ref>. Blank filter paper adjacent to the DBS was also analyzed from approximately 10% of all DBS cards. Punched DBS samples were placed in a metal-free microcentrifuge tube (Rose Scientific Ltd.) until analysis. We then used these punched DBS samples to evaluate a series of parameters (Supplemental Materials Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>) to yield a suitable analytical method in terms of analytical accuracy, precision, linearity and range, as per bioanalytical assay development recommendations from ICH (Reports Q2A and Q2B) and ISO 17025 as summarized in Huber <ns0:ref type='bibr' target='#b19'>(18)</ns0:ref>.</ns0:p><ns0:p>In study phase #2, we focused on newborn DBS collected between 2003 and 2015 from the Michigan BioTrust for Health. The analyses of these samples spanned 38 batch runs, with each batch containing a maximum of 24 samples composed of 18 individual newborn DBS samples, 3 DBS SRMs, 2 method blanks, and 1 DBS sample from which a duplicate punch was analyzed. For batch runs #1-10 (n=180), the DBS were stored at ambient temperature until analysis. The rest of the samples (n=495) were stored at -20°C between collection and analysis. For the first 12 batches analyzed, one punch (3mm diameter) was taken from the edge of a single spot of a DBS card. For the remaining batches, the Michigan BioTrust for Health provided rectangular punches of 2 mm x 6 mm in size, or the equivalent of two 3mm diameter punches.</ns0:p><ns0:p>To bring the dried blood into solution, 15 μL of concentrated HCl with 5 mmol EDTA was added to the 3mm punch in the microcentrifuge tube. Note that in the case of the rectangular punches, double volumes were applied. The sample was then vortexed thoroughly and digested for 1.5 hours at 55°C. Following the digestion, the sample was centrifuged for 15 minutes at 25°C at 12000 rpm. An 8 μL portion of the extraction fluid was removed and placed into a second microcentrifuge vial, to which a 4 μL solution containing a mixture of gallium (internal standard, 100 μg/L final concentration) and polyvinyl alcohol (1% vol/vol) was added. This solution was mixed, and then an 8 μL aliquot was placed onto a Serva-conditioned quartz sample disc carrier. The sample was covered and allowed to dry overnight in a lab oven set at 55°C, after which analysis was performed. A representative photo of an extract dried onto the sample disc carrier is provided (Supplemental Materials Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>).</ns0:p></ns0:div>
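To make the dilution chain implied by this protocol concrete, the short Python sketch below back-calculates a whole-blood-equivalent concentration from a value measured in the final diluted solution, using only the nominal volumes given above. This is our own illustration of the arithmetic, not a calculation reported in the manuscript; it ignores extraction efficiency, filter-paper background and hematocrit effects, and it assumes the nominal 3.1 μL blood volume per 3mm punch.

BLOOD_UL = 3.1      # assumed blood volume in a 3mm punch
EXTRACT_UL = 15.0   # HCl/EDTA extraction volume added to the punch
ALIQUOT_UL = 8.0    # portion of the extract carried forward
IS_MIX_UL = 4.0     # gallium/polyvinyl alcohol internal-standard mix added

def whole_blood_equivalent(measured_ug_per_l):
    """Scale a concentration measured in the final solution back to whole blood."""
    dilution = (EXTRACT_UL / BLOOD_UL) * ((ALIQUOT_UL + IS_MIX_UL) / ALIQUOT_UL)
    return measured_ug_per_l * dilution

# Example: a selenium reading of ~26 ug/L in the final solution corresponds to
# roughly 26 x 7.3, i.e. about 190 ug/L in whole blood.
print(round(whole_blood_equivalent(26.0), 1))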
<ns0:div><ns0:head>Multi-Element Analysis</ns0:head><ns0:p>Multi-element measurement (Cu, Se, Zn) was carried out using TXRF spectroscopy (S2 PICOFOX, Bruker AXS Microanalysis GmbH, Germany; technical specifications are in Supplemental Materials Table <ns0:ref type='table'>3</ns0:ref>) as detailed previously by others <ns0:ref type='bibr' target='#b15'>(15)</ns0:ref>. Samples were read for 2,500 seconds and the results were analyzed using the instrument's software, Spectra 7 (Bruker AXS Inc.). Elemental concentrations were quantified using a gallium internal standard and a seven-point matrix-matched calibration curve for each element. A representative spectrum is provided (Supplemental Materials Figure <ns0:ref type='figure'>2</ns0:ref>).</ns0:p></ns0:div>
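For readers unfamiliar with internal-standard quantification in TXRF, the underlying relation is that an analyte's concentration scales with its net peak intensity relative to that of the internal standard, weighted by the relative elemental sensitivities. The sketch below states that general relation in Python; it is a conceptual illustration only, not the Spectra 7 implementation, and in this work the results were additionally anchored to the matrix-matched calibration curves described above.

def txrf_concentration(net_counts, sensitivity, is_counts, is_sensitivity, is_conc):
    """Single internal-standard TXRF quantification (conceptual form):
    C_x = (N_x / S_x) / (N_IS / S_IS) * C_IS,
    where N are net peak intensities and S are relative elemental sensitivities."""
    return (net_counts / sensitivity) / (is_counts / is_sensitivity) * is_conc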
<ns0:div><ns0:head>Quality Control</ns0:head><ns0:p>Each run batch contained a maximum of 24 samples including a range of quality control samples as previously detailed. The punched DBS made from SRMs (n=7; Supplemental Materials Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>) were used to establish matrix-matched calibration curves and characterize analytical accuracy, precision, linearity, and assay range. Analytical percent accuracy was calculated by comparing the observed value to the accepted concentration value of the SRM (i.e., as percent recovery). Intra-day assay precision was assessed by analyzing DBS samples from the Michigan BioTrust for Health study in duplicate, and calculated as the relative percent difference between the two measures. Inter-day assay precision was assessed by comparing the values of the SRMs across batch runs and calculating a coefficient of variability (%CV). To determine background elemental levels in the filter paper, we analyzed blank filter paper removed from areas adjacent to the blood spots of select DBS samples. The elemental concentrations of the filter paper were not subtracted from the results of the accompanying blood spot, as discussed further below.</ns0:p></ns0:div>
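The three quality-control statistics described above are simple to state explicitly. The Python sketch below shows one way to express them; it is for illustration only (the actual calculations were carried out in the spreadsheet and statistical software listed under Data Analyses), and the function names are illustrative.

import statistics

def percent_accuracy(observed, assigned):
    """Accuracy as percent recovery of the assigned SRM value."""
    return 100.0 * observed / assigned

def percent_cv(values):
    """Inter-day (inter-assay) precision: coefficient of variability across batch runs."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

def percent_rpd(a, b):
    """Intra-day (intra-assay) precision: relative percent difference between duplicate punches."""
    return 100.0 * abs(a - b) / ((a + b) / 2.0)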
<ns0:div><ns0:head>Data Analyses</ns0:head><ns0:p>Data from Spectra 7 (Bruker AXS Inc.) were first analyzed using descriptive statistics and graphical plots to understand basic features of the dataset. For the methods development phase of the work, findings were compared against assay performance criteria (i.e., assay linearity, range, accuracy, precision) <ns0:ref type='bibr' target='#b19'>(18)</ns0:ref> to help establish a working method. For the application phase of this work (i.e., analysis of newborn DBS from the Michigan BioTrust for Health), measures of central tendency (mean, median) and associated variances (standard deviation, inter-quartile ranges) were calculated. Further, t-tests and ANOVAs were run to test if elemental levels varied according to batch number (n=38), punch type (3mm diameter vs. rectangular punch), and storage temperature (ambient vs. -20°C). Outliers were detected using the generalized extreme studentized deviate (ESD) test. The significance level was set at α=0.05 for all tests. All analyses were conducted using Microsoft® Excel Version 15.4 and JMP®Pro 13.0.0. Data are represented as mean ± standard deviation unless otherwise noted.</ns0:p></ns0:div>
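The generalized ESD procedure used for outlier screening is iterative: the most extreme studentized deviate is computed, compared against a critical value, removed, and the process repeated up to a pre-set maximum number of suspected outliers. The Python sketch below follows the standard Rosner (1983) formulation; it is provided only to make the procedure concrete (the test itself was run in the statistical software noted above), and the parameter values shown are illustrative.

import numpy as np
from scipy import stats

def generalized_esd(x, max_outliers=10, alpha=0.05):
    """Generalized extreme studentized deviate test; returns indices of flagged outliers."""
    x = np.asarray(x, dtype=float)
    n = x.size
    idx = np.arange(n)
    trials = []
    for i in range(1, max_outliers + 1):
        mean, sd = x[idx].mean(), x[idx].std(ddof=1)
        dev = np.abs(x[idx] - mean)
        j = int(np.argmax(dev))                  # most extreme remaining point
        r_i = dev[j] / sd
        p = 1.0 - alpha / (2.0 * (n - i + 1))
        t = stats.t.ppf(p, n - i - 1)
        lam_i = (n - i) * t / np.sqrt((n - i - 1 + t**2) * (n - i + 1))
        trials.append((idx[j], r_i > lam_i))
        idx = np.delete(idx, j)
    # declare as outliers all points up to the last i where R_i exceeded lambda_i
    last = max((k for k, (_, exceeded) in enumerate(trials) if exceeded), default=-1)
    return [trials[k][0] for k in range(last + 1)]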
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Assay Quality Control</ns0:head><ns0:p>The linearity of the developed method was assessed by measuring elemental concentrations in DBS created in the laboratory with seven different whole blood SRMs (purposefully chosen as their assigned values spanned a range of concentrations deemed to be physiologically relevant). From these artificially created DBS, sub-samples were punched from the edge of a DBS and were analyzed individually. The average of three samples was calculated and compared to the assigned elemental concentration to develop a matrix-matched calibration curve, and to also assess linearity and accuracy.</ns0:p><ns0:p>For Cu, the resulting linear regression comparing concentrations analyzed on the TXRF (i.e., punches of DBS taken from whole blood SRM added to filter paper cards) and the assigned SRM concentrations in the whole blood was Y = 0.99X -7.2, with a coefficient of determination (R 2 ) of 0.98 (Supplemental Materials Figure <ns0:ref type='figure'>3</ns0:ref>). The assay linearity for Cu was thus deemed to be acceptable. The average recovery of Cu from the DBS of the SRMs during the methods development phase was 102.3 ± 5.9%, and later during the application phase ranged from 100.3 to 117.2% for the 3 focal SRMs used (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>). For Se, the resulting linear regression comparing concentrations analyzed on the TXRF (of the DBS) and the assigned SRM concentrations in the whole blood was Y = 1.2X + 9.6, with an R 2 of 0.98 (Supplemental Materials Figure <ns0:ref type='figure'>4</ns0:ref>). The assay linearity was thus deemed to be acceptable. The average recovery of Se from the DBS of the SRMs during the methods development phase was 100.9 ± 8.6%, and later during the application phase ranged from 90.9 to 95.7% for the 3 focal SRMs used (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>). For Zn, the resulting linear regression comparing concentrations analyzed on the TXRF (of the DBS) and the assigned SRM concentrations in the whole blood was Y = 1.1X -585.5, with a coefficient of determination of 0.975 (Supplemental Materials Figure <ns0:ref type='figure'>5</ns0:ref>). As with Se and Cu, the assay linearity was again deemed to be acceptable. The average recovery of Zn from the DBS of the SRMs during the methods development phase was 102.3 ± 5.6%, and later during the application phase ranged from 104.9 to 115.2% for the 3 focal SRMs used (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>).</ns0:p><ns0:p>Assay precision was evaluated using two different approaches. First, inter-assay precision (as %CV) was evaluated by comparing the SRMs analyzed across the run batches. SRMs from Batch 1 were damaged prior to analysis due to human error and the internal standard of SRMs from Batch 30 was not quantified, so those values were removed. Second, intra-assay precision (expressed as relative percent difference, %RPD) was addressed by measuring two individual punches from a given sample in the Michigan BioTrust for Health samples. The inter- and intra-assay precision of the three selected SRMs for Cu, Se and Zn are provided (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>). For Cu, the inter-assay precision of the SRM samples ranged from 10.9 to 16.8%, and the intra-assay precision ranged from 0.8 to 39.6%; five of the replicates had a %RPD greater than 30% (Supplemental Materials Figure <ns0:ref type='figure'>6</ns0:ref>). 
For Se, the inter-assay precision of the SRM samples ranged from 13.6 to 23.2%, and the intra-assay precision ranged from 0.1 to 53.2%; three of the replicates had a %RPD greater than 30% (Supplemental Materials Figure <ns0:ref type='figure'>7</ns0:ref>). For Zn, the inter-assay precision ranged from 12.2 to 15.7%, and the intra-assay precision ranged from 0.4 to 70.5%; four of the replicates had a %RPD greater than 30% (Supplemental Materials Figure <ns0:ref type='figure'>8</ns0:ref>).</ns0:p><ns0:p>The elements were measured in punches of blank filter paper taken adjacent to the DBS. One 3 mm circular punch of blank filter paper was taken from approximately 10% of the blotted DBS cards analyzed in the Michigan BioTrust for Health cohort, and two blank filter papers were analyzed in every batch. For Cu, the average of all the blank filter papers was 818.0 ± 1413.9 μg/L. However, this value is driven by two extremely high blank values (6,194.8 and 8,262.4 μg/L). When these values were removed, the resulting average was 629.5 ± 874.1 μg/L. Se was only detected in one sample of blank filter paper at a concentration of 13.4 μg/L. For Zn, the average of all the blank filter paper measurements was 2,005.3 ± 1,622.2 μg/L.</ns0:p></ns0:div>
<ns0:div><ns0:head>Assay Application: Michigan BioTrust for Health Project</ns0:head><ns0:p>The three target essential elements were measured in DBS samples obtained from 675 newborns from the Michigan BioTrust for Health Project. For each element some outliers were identified through the generalized ESD test, and the final sample size for each element is provided in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>.</ns0:p><ns0:p>The average (± SD) blood Cu, Se, and Zn levels in the punches sampled from the newborn DBS were 1,117.0 ± 627.1 μg/L, 193.1 ± 49.1 μg/L, and 4,485 ± 2,275 μg/L, respectively (Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>; Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>). All the elements were normally distributed in the sample population (Supplemental Materials Figure <ns0:ref type='figure'>9</ns0:ref>). In terms of variation across the 38 different batch runs, some differences were calculated for each of the 3 elements, with mean values in some of the batches being significantly different (Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>). Specifically, the number of batches with significantly different mean values was 3 for Cu, 5 for Se, and 7 for Zn. Variation within and across batches is expected, and we did not find any systematic bias, as other quality control measures across the batches (e.g., inter-assay precision measures) performed well.</ns0:p><ns0:p>We also investigated the potential variation between the two different types of punches. The first 200 samples were punched using a 3mm circular punch from the perimeter of the DBS in the lab, while the remaining samples were provided by the Michigan BioTrust for Health as rectangular punches of 2mm x 6 mm in size. Therefore, the location of the latter samples on the DBS is unknown and could be from the center or edge, increasing the potential variability due to differences in rates of deposition of elements across the DBS. The mean results of the two different punches were compared and there were no statistically significant differences for any of the elements.</ns0:p><ns0:p>The newborn DBS samples were collected from 2003 to 2015 and were analyzed in the lab in 2016, thus indicating a maximum potential storage time of 13 years. Some of the samples were stored at room temperature whereas others were stored frozen. The DBS cards that were stored at ambient temperature (n=380) prior to analysis versus the 295 cards that were kept frozen at -20°C had Cu and Se measurements that were not different. For Zn, values in the frozen cards were significantly (p < 0.001) lower than those taken from the cards stored at ambient temperature (3,959.6 versus 4,892.7 μg/L).</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>This study developed and applied a novel method to simultaneously quantify 3 essential elements in DBS of relevance to clinical and public health sciences. While DBS are being hailed as a cost-effective, minimally invasive, and thus practical and attractive method for collecting blood from clinical and other settings (e.g., remote field sites) (4), there remain outstanding questions particularly related to the quality of the analytical measurements taken <ns0:ref type='bibr' target='#b4'>(5,</ns0:ref><ns0:ref type='bibr' target='#b19'>18)</ns0:ref>. The work presented here helps overcome some challenges present in the literature concerning elemental analysis in DBS.</ns0:p><ns0:p>The current study established a working method by carefully examining relevant quality control parameters, and then applied this method to a large cohort of 675 newborns in order to increase understanding of blood levels of Cu, Se, and Zn in newborns. The importance of these target elements, especially in newborns, is firmly established <ns0:ref type='bibr' target='#b0'>(1,</ns0:ref><ns0:ref type='bibr' target='#b1'>2)</ns0:ref>, though generalizing the results from the current work to a broader population is challenging as few comparison populations exist; few studies have quantified essential elements in early life-stage groups owing to the aforementioned ethical and practical challenges associated with sampling venous blood particularly from newborns (as well as infants). Thus, the development of an analytical method using DBS opens up possibilities especially since these samples are collected from newborns as part of screening programs that are routine, and in some cases required by law, in many jurisdictions <ns0:ref type='bibr' target='#b4'>(5,</ns0:ref><ns0:ref type='bibr' target='#b19'>18,</ns0:ref><ns0:ref type='bibr' target='#b22'>19)</ns0:ref>.</ns0:p><ns0:p>The average blood Cu in the newborns studied here was 1,117.0 ± 627.1 μg/L, and this is comparable to a reference value of blood Cu in adults (970 ± 130 μg/L) <ns0:ref type='bibr' target='#b23'>(20)</ns0:ref>. We are not aware of another review reporting upon Cu in newborn blood though <ns0:ref type='bibr'>Krachler et al. (21)</ns0:ref> measured this element in serum and reported a range of values between 590 and 1,390 μg/L. Further, in a study of 3,210 children aged 0-14 from Lu'an (China), the median whole blood copper level in a subset of 397 children below the age of 1 was about 1,306 μg/L with a range of 758 - 2,420 μg/L <ns0:ref type='bibr' target='#b25'>(22)</ns0:ref>. For blood Se, the concentrations measured here in the newborns (193.1 ± 49.1 μg/L) were at the upper end of the adult reference range (58 to 234 μg/L) reported in a review of 20 datasets <ns0:ref type='bibr' target='#b23'>(20)</ns0:ref>. We are not aware of another dataset reporting upon Se in newborn whole blood though Galinier et al. <ns0:ref type='bibr' target='#b26'>(23)</ns0:ref> established a reference range of 47.4 ± 7.9 μg/L in the umbilical cord serum of neonates (n=241). For blood Zn, the average concentration measured here in the newborn DBS was 4,485 ± 2,274 μg/L. In the aforementioned study from Lu'an, China, the median whole blood zinc level in a subset of 397 children below the age of 1 was about 4,315 μg/L with a min-max range of 3,360 - 6,479 μg/L <ns0:ref type='bibr' target='#b25'>(22)</ns0:ref>. In adults, for comparison, Iyengar and Woittiez (20) determined blood Zn concentrations to be 6,500 ± 1,100 μg/L. 
In general, these comparisons suggest that the data from the current study are within an expected range though they also emphasize the need for more information on this topic so that adequate reference ranges can be established for newborns.</ns0:p><ns0:p>The measurements taken in the current study were carefully evaluated for several quality control criteria by consulting the resource of Huber <ns0:ref type='bibr' target='#b18'>(17)</ns0:ref>, and in general the developed method performed well in terms of linearity, range, accuracy and precision. Nonetheless, there are always areas to improve upon. For example, here the internal gallium standard was added to the processed sample and in the future this standard could be added to the DBS prior to processing. With respect to the presence of these elements in the blank filter paper, there were some potential challenges noted with the Cu and Zn data. Contamination in the filter paper may arise during the manufacturing of the cards, blood collection, storage, transportation as well as sample preparation. Several studies have found high variability in elemental levels within a card and across cards <ns0:ref type='bibr' target='#b8'>(8,</ns0:ref><ns0:ref type='bibr' target='#b10'>10,</ns0:ref><ns0:ref type='bibr' target='#b27'>24,</ns0:ref><ns0:ref type='bibr' target='#b28'>25)</ns0:ref>. The variability in contamination could explain some of the outliers in this analysis and thus serve as a barrier to accurate elemental quantification of DBS. Unfortunately, here we could not account for such variation on an individual sample basis though in future studies one may consider analyzing paired sample punches (i.e., DBS punch and a nearby punch of the blank filter paper) from each card.</ns0:p><ns0:p>The work presented here helps overcome some challenges present in the literature concerning elemental analysis though there are notable study limitations that warrant attention. One of the greatest challenges in the analysis of DBS is the unknown blood volume in a punch. While punches allow for a consistent area to be analyzed, the sample volume remains unknown due to variances in sample collection methods and human physiology. For example, an individual's hematocrit affects the spread of blood on the filter paper, and this affects the blood volume in a given punch. Hematocrit has a wide range of normal values that can change based on sex, age, and health status <ns0:ref type='bibr' target='#b29'>(26)</ns0:ref>. Furthermore, the hematocrit range for infants under two years of age is from 28% to 55%, compared to 41% to 50% in adult males and 36% to 44% in adult females <ns0:ref type='bibr' target='#b30'>(27)</ns0:ref>. As a result, it is difficult to calculate an accurate concentration and here we adopted an estimation that a 3mm diameter punch contains a 3.1 μL volume <ns0:ref type='bibr' target='#b19'>(18)</ns0:ref>. Researchers have also shown that there may be differences in analyte distribution between the perimeter and center of the blood spot <ns0:ref type='bibr' target='#b12'>(12,</ns0:ref><ns0:ref type='bibr' target='#b29'>26)</ns0:ref>.</ns0:p><ns0:p>Normalization may help overcome contamination in elemental analysis as well as account for unknown sample volume. Normalization can be done with one or multiple elements that have a narrow physiological distribution and absence from the blank filter paper <ns0:ref type='bibr' target='#b8'>(8,</ns0:ref><ns0:ref type='bibr' target='#b31'>28)</ns0:ref>. <ns0:ref type='bibr'>Langer et al. 
(8)</ns0:ref> suggested the use of potassium (K) as it is found at low levels in unspotted filter paper and could be used to normalize volume differences. Additional elements, such as magnesium (Mg) and calcium (Ca), have a narrow range in blood and could also be used to normalize values. However, Langer et al. <ns0:ref type='bibr' target='#b8'>(8)</ns0:ref> suggests they may be inappropriate due to high concentrations and variability in blank filter paper samples. Although these elements can be quantified with the current TXRF method, the SRMs used in this study are not certified for Ca, K, or Mg and, thus, accuracy and precision currently cannot be calculated to explore this method. We do note that the SRMs used in the current study have been assigned values for other notable elements both toxic (e.g., Pb, Cd) and essential (e.g., Cr, Mn). Moving ahead, the multi-element capabilities of TXRF make it a promising instrument to concurrently measure a range of elements and thus future work is necessary (e.g., develop and validate methods for other elements; generate appropriate reference materials with assigned levels of other elements) to enable such expansion particularly in this era of the exposome.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The objective of this study was to develop and apply a novel TXRF-based analytical method to quantify essential elements (copper, selenium, zinc) in DBS. While there is great demand across the clinical, public health and ecological sciences for such a method, current approaches face several challenges related to instrumentation, quality control, and matrix effects. Here we demonstrate that elemental analysis of DBS with TXRF may help overcome these challenges, and that the developed method can be scaled-up in relatively large study settings. The analytical method developed to quantify the 3 target elements fared well against a priori quality control criteria (i.e., analytical accuracy, precision, linearity and range). We demonstrate the possibility of using the method to characterize essential elements in DBS from residual newborn screening programs, and by extension the method can be applied in a range of other settings such as, for example, demographic surveys in low- and middle-income countries and research in remote sites where there remain logistical barriers to sampling venous whole blood.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 Boxplots</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Analytical accuracy and precision of elemental measurements taken in dried blood spots using different whole blood standard reference materials (SRM) from the Institut National de Santé Publique du Québec (INSPQ) as detailed in Supplemental Materials Table1.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Element</ns0:cell><ns0:cell>SRM</ns0:cell><ns0:cell>Assigned SRM</ns0:cell><ns0:cell>Measured</ns0:cell><ns0:cell>Accuracy (%)</ns0:cell><ns0:cell>Precision</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Concentration</ns0:cell><ns0:cell>Concentration in</ns0:cell><ns0:cell /><ns0:cell>(% CV)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>(g/L)</ns0:cell><ns0:cell>DBS (g/L)</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Copper</ns0:cell><ns0:cell>QM-B-</ns0:cell><ns0:cell>3094.9</ns0:cell><ns0:cell>3103.7 ± 365.9</ns0:cell><ns0:cell>100.3 ± 11.8</ns0:cell><ns0:cell>11.8</ns0:cell></ns0:row><ns0:row><ns0:cell>(Cu)</ns0:cell><ns0:cell>Q1313</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>QM-B-</ns0:cell><ns0:cell>813.4</ns0:cell><ns0:cell>953.4 ± 160.2</ns0:cell><ns0:cell>117.2 ± 19.6</ns0:cell><ns0:cell>16.8</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Q1505</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>QM-B-</ns0:cell><ns0:cell>3037.7</ns0:cell><ns0:cell>3215.4 ± 349.3</ns0:cell><ns0:cell>105.8 ± 11.5</ns0:cell><ns0:cell>10.9</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Q1506</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Selenium</ns0:cell><ns0:cell>QM-B-</ns0:cell><ns0:cell>290.6</ns0:cell><ns0:cell>264.4 ± 35.9</ns0:cell><ns0:cell>90.9 ± 12.2</ns0:cell><ns0:cell>13.6</ns0:cell></ns0:row><ns0:row><ns0:cell>(Se)</ns0:cell><ns0:cell>Q1313</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>QM-B-</ns0:cell><ns0:cell>172.1</ns0:cell><ns0:cell>158.2 ± 36.7</ns0:cell><ns0:cell>91.9 ± 21.3</ns0:cell><ns0:cell>23.2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Q1505</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>QM-B-</ns0:cell><ns0:cell>226.6</ns0:cell><ns0:cell>216.9 ± 33.1</ns0:cell><ns0:cell>95.7 ± 14.6</ns0:cell><ns0:cell>15.3</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Q1506</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Zinc (Zn)</ns0:cell><ns0:cell>QM-B-</ns0:cell><ns0:cell>7910.9</ns0:cell><ns0:cell>8297.1 ± 1104.1</ns0:cell><ns0:cell>104.9 ± 13.9</ns0:cell><ns0:cell>13.3</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Q1313</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>QM-B-</ns0:cell><ns0:cell>6335.3</ns0:cell><ns0:cell>7297.5 ± 1145.7</ns0:cell><ns0:cell>115.2 ± 18.1</ns0:cell><ns0:cell>15.7</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Q1505</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>QM-B-</ns0:cell><ns0:cell>10853.1</ns0:cell><ns0:cell>11591.1 ± 1412.1</ns0:cell><ns0:cell>106.8 ± 13.0</ns0:cell><ns0:cell>12.2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Q1506</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ An. Chem. 
reviewing PDF | (ACHEM-2019:09:41194:1:0:NEW 26 Oct 2019)Manuscript to be reviewedChemistry JournalsAnalytical, Inorganic, Organic, Physical, Materials Science</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 (on next page)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Concentrations (μg/L) of copper, selenium and zinc measured in newborn dried bloodspot samples</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Concentrations (μg/L) of copper, selenium and zinc measured in newborn dried bloodspot samples from the Michigan BioTrust for Health program.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Percentile</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "AUTHOR RESPONSE TO REVIEWER QUESTIONS
Please find below our responses to each reviewer question prefaced with “RESPONSE:” and highlighted in yellow.
Editor comments (Pawel Urban)
According to the reviewer reports, this work has merit. However, the reviewers suggest some changes and additions to improve the report. Please respond to their comments in an appropriate revision
RESPONSE: We thank Dr. Urban for handling our manuscript.
Reviewer 1 (Peter Wobrauschek)
Basic reporting
there is no language problem - perfectly written throughout
literature references adequate
abstract, text and the complete submitted article is well structured and clear
RESPONSE: We thank Dr. Wobrauschek for reviewing our paper and the thoughtful comments.
I am missing in the abstract and the paper the following information:
1. why are the chosen elements Cu Zn and Se of importance
RESPONSE: We added some sentences to the end of the Introduction to let the reader know why these are essential elements of health importance, and notably we cite key papers from the US Institute of Medicine’s Panel on Micronutrients, which is a thorough resource for those seeking more information.
2. give some examples of the importance of these elements for the nutritional status of newborns, and in general what more could be expected in understanding the composition of DBS
RESPONSE: Please see above. We focus the paper on the development of this bioanalytical method realizing the interested reader will be one who has relatively strong knowledge on these essential elements, though for those seeking more information we cite key papers from the US Institute of Medicine’s Panel on Micronutrients. We do provide a few additional notes on these elements.
3. using TXRF you get multi-element information in each spectrum; were there other elements considered carrying interesting information?
RESPONSE: This is a good point. See the final paragraph of the Discussion in which we mention the need to assay other elements but also point out that proper matrix-matched reference materials do not always exist. In other areas (i.e., Introduction) we also mention the multi-element capabilities of TXRF.
4. In conclusion mention some examples instead of 'great demand across clinical and public health'
RESPONSE: Please see the final sentence of the Conclusion in which we point out the greatest areas of interest, notably in newborn screening, demographic health surveys, and in remote locations.
Experimental design
Details in sample preparation would be fine
1. p7/111 back up the assumption that in a 3 mm punch you find 3.1 µL blood
RESPONSE: We included the reference to Li and Lee (2014) that provides lots of discussion on this particular subject matter. As we do not experimentally investigate the matter here, we feel uncomfortable in justifying it further and prefer to keep this default estimate that the community is using.
2. p7/114 what parameters do you study to get a suitable analytical method
RESPONSE: This is a good point and discussed later in the paper. However, to better prepare the reader we have added some extra sentences here to let the reader know how we determined “success”.
3. was the purity of HCl checked by TXRF
RESPONSE: No, the purity of the HCl was not checked, though we used a trace metal grade.
4. PVA 1% how much was added ?
RESPONSE: It depends, as a vol/vol concentration of 1% was targeted, and so “vol/vol” was added.
5. p7/134 the reflectors were siliconized by SERVA solution to prepare the surface of the reflectors
RESPONSE: Correct, this is the purpose of Serva.
6. p8/141 a photo of the final sample preparation on the reflector is informative, please add
RESPONSE: Good idea! Just added one to the Supplemental Materials file (Supplemental Figure #1).
7. P8/142 one spectrum of the DBS measured should be added to show the spectroscopic details
RESPONSE: Another good idea! Just added one to the Supplemental Materials file (Supplemental Figure #2).
8. P10/260 any idea why the Zn was lower in frozen cards??
RESPONSE: It is a peculiar observation and we are unable to uncover any mechanisms based on a literature search, and thus we prefer to simply make note of the observation as is.
9. P11/311 does the filter paper blank contain any of the targeted elements Cu Se Zn??
RESPONSE: Yes. As discussed around line 230, we measured these 3 elements in 10% of the blotted DBS cards and report upon the values measured.
p12/343 Mg cannot be measured in air; He flush or vacuum required
RESPONSE: Noted, though in our case we discuss the possibility of measuring Mg in blood samples (not in air) which is routinely done in clinical settings.
Validity of the findings
The use of TXRF for this application is a meaningful approach but includes the problem of dissolving the filter material Whatman 903 - nevertheless results are looking promising and fit in a general technique used in sampling minute volumes of blood which is advantageous.
Specifying more details on a fundamental medical research field where this information is important is required.
RESPONSE: See earlier. We focus the paper on the development of this bioanalytical method realizing the interested reader will be one who has relatively strong knowledge on these essential elements, though for those seeking more information we cite key papers from the US Institute of Medicine’s Panel on Micronutrients. We do provide a few additional notes on these elements which play critical roles in health and disease (e.g., are antioxidants, play key roles in metabolism, mediate proper reproductive health outcomes).
Comments for the Author
Some technical and spectroscopic information, in particular on sample preparation and the influence of blank measurements on filter paper, is missing. Specify whether possible contamination or production impurities in the filter paper can lead to uncontrolled errors in the results. Point to the multi-element capacity of TXRF so that cross readings among other elements are also possible.
RESPONSE: See earlier responses to these important points as we add more technical and spectroscopic information, point the reviewer to where we present the blank filter data, and discuss the multi-element capability of TXRF.
Reviewer 2 (Anonymous)
Basic reporting
.
Experimental design
.
Validity of the findings
.
Comments for the Author
Very informative and interesting work. A useful application of TXRF for analysis of biological samples which are available in less quantity.
- Q1: Line 129: The sample was then vortexed thoroughly and digested for 1.5 hours at 55 °C. At 55 °C do the blood sample and the filter paper both get digested?
RESPONSE: Correct. The blood sample is dried onto the paper, and thus these are digested together.
- Q2: In Line 133 : what was the purpose to add polyvinyl alcohol (1%) ?
RESPONSE: This was to help the extract spread evenly onto the disc.
- Q3: Why Cu, Se and Zn were studied? Do these elements have special significance? Please explain.
RESPONSE: See our responses to Reviewer #1. We focus the paper on the development of this bioanalytical method realizing the interested reader will be one who has relatively strong knowledge on these essential elements, though for those seeking more information we cite key papers from the US Institute of Medicine’s Panel on Micronutrients. We do provide a few additional notes on these elements which play critical roles in health and disease (e.g., are antioxidants, play key roles in metabolism, mediate proper reproductive health outcomes).
- Q4: please give some comments on the Variation in analytical results with respect to time and temp?
RESPONSE: These two parameters were not experimentally tested here and so we cannot provide any strong evidence of impact. We make observations in the paper given that there were delays between sample collection and analyses as well as the use of different storage temperatures, but we are not in a position to make any firm conclusions as these were not experimentally studied.
" | Here is a paper. Please give your review comments after reading it. |
667 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. Quantitative analysis of the active ingredients of traditional Chinese medicine is a current research trend. The objective of this study was to build a novel method, namely detection-confirmation-standardization-quantification (DCSQ), for the quantitative analysis of active components in traditional Chinese medicines without individual reference standards.</ns0:p><ns0:p>Methods. Danshen (the dried root of Salvia miltiorrhiza) was used as the matrix. The 'extraction' function of a high-performance liquid chromatography-mass spectrometry (HPLC-MS) instrument was used to find the peaks corresponding to cryptotanshinone, tanshinone I, and tanshinone IIA in the total ion current (TIC) chromatogram of Danshen. The multicomponent reference standard (MCRS) containing mainly the three tanshinones was prepared by preparative HPLC. Their contents in the resulting MCRS were determined by NMR, and the constituents of the MCRS were confirmed. The MCRS containing known quantities of the three tanshinones was used as the reference standard for the quantitative analysis of cryptotanshinone, tanshinone I, and tanshinone IIA in Danshen samples by analytical HPLC.</ns0:p><ns0:p>Results. The optimized HPLC conditions for the quantitative analysis of the active components in Danshen were established, and the assignments of the extracted peaks were confirmed by analyzing the characteristic fragments in their MS/MS product ion spectra and their UV spectra. The MCRS containing the three tanshinones was then prepared successfully. In the NMR determination of the contents, the lineshapes were fitted with high likelihood and the calibration curves possessed high linearity. The contents determined in Danshen samples through DCSQ exhibited minimal deviations from those obtained with individual reference standards.</ns0:p><ns0:p>Conclusion. The established DCSQ method allows convenient quantitative analysis of the active components in TCMs using the MCRS, without individual reference standards. This method is a substantial advance in the quantitative analysis of complex mixtures, especially TCMs.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Traditional Chinese medicines (TCMs) have been widely used in the treatment of various diseases because of their remarkable and reliable biological activities. Therefore, the determination of the active components in TCMs by chromatographic methods such as high-performance liquid chromatography (HPLC), thin-layer chromatography (TLC), and high-performance capillary electrophoresis (HPCE) is considered the main strategy by which the quality of TCMs may be controlled. Conventionally, the active components of TCMs are quantified with the corresponding reference standards (National Commission of Chinese Pharmacopoeia, 2020). Popular compounds are purchased from authoritative organizations, whereas rare compounds are purified in-house. Another approach, namely single standard to determine multicomponents (SSDMC), has been developed to reduce the reliance of quantification on reference standards (Fang et al., 2017; <ns0:ref type='bibr' target='#b2'>Liu et al., 2017)</ns0:ref>. The reference standards are still needed when assigning peaks and calculating conversion factors (the molar response ratio of reference standard to analyte). NMR spectroscopy is considered a promising quantification method for the direct determination of target compounds in mixtures without reference standards; the amount of analytes can be calculated using the ratios of the signal intensities of the protons of different compounds and an internal reference standard <ns0:ref type='bibr'>(Frezza et al., 2018; Luisa et al., 2017; Petrakis et al., 2017; Chauthe et al., 2012;</ns0:ref> <ns0:ref type='bibr' target='#b7'>Staneva et al., 2011</ns0:ref>). However, a major challenge associated with NMR is the difficulty involved in the quantification of complex mixtures. The NMR spectra of complex mixtures often exhibit severe peak overlap, thereby affecting the accuracy of analyte quantification. To prevent severe peak overlap, Staneva et al. (2011) fractionated the extracts of TCMs before quantifying the target components by NMR <ns0:ref type='bibr' target='#b6'>(Chauthe et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b7'>Staneva et al., 2011)</ns0:ref>. This procedure often leads to the dissociation of target components into separate parts or their absorption by sorbents. Consequently, the results do not accurately reflect the contents of target compounds in TCMs.</ns0:p><ns0:p>We proposed a novel method for the quantitative analysis of the active components in TCMs without individual reference standards, namely detection-confirmation-standardization-quantification (DCSQ). Danshen (the dried root of Salvia miltiorrhiza), which is a popular traditional Chinese medicinal herb best known for its putative cardioprotective and antiatherosclerotic effects, was used as the matrix <ns0:ref type='bibr' target='#b8'>(Jia et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b9'>Liu et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b11'>Wang et al., 2011)</ns0:ref>. The main components responsible for its pharmacological properties are hydrophilic depsides and lipophilic diterpenoid quinones <ns0:ref type='bibr' target='#b12'>(Wang et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b13'>Li et al., 2018</ns0:ref>). An application of the DCSQ method for the quantitative analysis of three main diterpenoid quinones, cryptotanshinone, tanshinone I, and tanshinone IIA (Fig.
<ns0:ref type='figure' target='#fig_16'>1, A-C</ns0:ref>) was reported for the first time.</ns0:p><ns0:p>HPLC is a suitable technique for the quantitative analysis of the active components in TCMs because they contain numerous compounds. The separation of target components from a complex mixture in TCMs can be achieved by optimizing the HPLC conditions. Based on the combination of HPLC, MS, and NMR techniques, the quantitative analysis of the target components in TCMs without reference standards can be performed as follows: First, the 'extraction' function of the HPLC-MS instrument was used to find the peaks corresponding to the target components in the complex TIC chromatogram of TCMs, even without adequate separation (Wang et al., 2017). Next, the peaks were confirmed by the analysis of their MS/MS spectra. Finally, the HPLC conditions were optimized for the quantitative analysis of target components. The mixture mainly consisting of the target components, referred to as the multicomponent reference standard (MCRS), was prepared by preparative HPLC. The contents of the target components in the MCRS were determined directly by NMR, and the MCRS was used as the reference standard instead of individual reference standards.</ns0:p></ns0:div>
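The 'extraction' step described in this Introduction can be pictured with a small illustrative sketch: an extracted ion chromatogram (XIC) keeps, for each scan, only the intensity within a narrow m/z window around the target protonated molecular ion (e.g., m/z 297 for cryptotanshinone). The function name, data layout, and numbers below are illustrative assumptions, not part of the original work or of any vendor software.

# Illustrative sketch: build an extracted ion chromatogram (XIC) from centroided
# MS scans by keeping only intensity near a target m/z (hypothetical data).
def extracted_ion_chromatogram(scans, target_mz, tol=0.5):
    """scans: list of (retention_time, [(mz, intensity), ...]) tuples."""
    xic = []
    for rt, peaks in scans:
        intensity = sum(i for mz, i in peaks if abs(mz - target_mz) <= tol)
        xic.append((rt, intensity))
    return xic

# Made-up scans: the XIC for m/z 297 should peak near 17.2 min in this toy data.
scans = [
    (17.0, [(297.1, 1.0e4), (277.1, 2.0e3)]),
    (17.2, [(297.1, 8.0e4), (295.1, 1.0e3)]),
    (17.4, [(297.1, 1.2e4)]),
]
print(extracted_ion_chromatogram(scans, 297.1))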
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Materials and Chemicals</ns0:head><ns0:p>Danshen was purchased from Qiancao herbal wholesale company (Beijing). The standards of cryptotanshinone (1), tanshinone I (2), and tanshinone IIA (3) (Fig. <ns0:ref type='figure' target='#fig_16'>1, A-C</ns0:ref>) were obtained from Traditional Chinese Medicine Solid Preparation National Engineering Research Center (Nanchang, Jiangxi Province, China), with a purity of 98%. Dimethyl fumarate (4) (Fig. <ns0:ref type='figure' target='#fig_16'>1, D</ns0:ref>) with a purity of 99% was obtained from Alfa Aesar (Ward Hill, Massachusetts, USA). CDCl 3 (99.8% pure) was obtained from Cambridge Isotope Laboratories Inc. (Andover, MA, USA). Acetonitrile and methanol for HPLC were obtained from Fisher Scientific (Fair Lawn, New Jersey, USA). Formic acid (>98% purity) for HPLC was obtained from Sinopharm Chemical Reagent Co. Ltd. (Shanghai, China). All other chemicals were of analytical grade. Deionized water was obtained from Wahaha Company (Hangzhou, Zhejiang, China).</ns0:p></ns0:div>
<ns0:div><ns0:head>Identification of cryptotanshinone, tanshinone I, and tanshinone IIA in Danshen by HPLC-ESI-MS/MS</ns0:head><ns0:p>Sample preparation: Dried Danshen samples were powdered using a mill and sieved through a No. 24 mesh. Approximately 0.3 g of the powder sample was weighed and extracted by refluxing in 50 mL of methanol for 1 h. The weight loss was adjusted with methanol after the extraction. One mL of the sample solution was filtered through a 0.45 μm nylon filter into an amber HPLC sample vial for injection. Data acquisition: An Agilent 1100 series HPLC system (Agilent Technologies, Santa Clara, CA, USA) equipped with a quaternary solvent delivery system, an on-line degasser, an auto-sample injector, a column temperature controller, and a variable wavelength detector (VWD) was coupled to an Agilent 6460 triple quadrupole mass spectrometer (QQQ-MS) equipped with a dual electrospray ion source (ESI) (Agilent Technologies, Santa Clara, CA, USA) to form the HPLC-ESI-MS/MS system. The samples were separated using a Diamonsil C18 column (4.6 × 250 mm, 5 μm, Dikma Technologies Inc.). The mobile phase was a mixture of acetonitrile (mobile phase A) and water containing 0.1% formic acid (mobile phase B). The gradient elution started at 60% A, followed by a linear increase to 80% A at 30 min and 90% A at 40 min, then a linear decrease to 60% A at 41 min, which was held constant until 55 min before the next injection was performed. The flow rate was 1.0 mL min-1. The column temperature was maintained at 22 °C. The injection volume was 20 μL.</ns0:p><ns0:p>The mass spectrometric data were acquired using the positive electrospray mode. The full-scan mass spectrum was recorded over the range of m/z 250-325. N 2 was used as the sheath and auxiliary gas of the mass spectrometer. The capillary voltage for ESI spectra was 3.5 kV, and the capillary temperature was set at 300 °C. Ultra-high-purity helium was used as the collision gas in the collision-induced dissociation (CID) experiments. The MS/MS product ion spectra were acquired through the CID of the peaks with the protonated molecular ion [M + H] + of each analyte. The collision energy for the target protonated molecular ions was set at 20 eV to obtain the appropriate fragment information.</ns0:p></ns0:div>
<ns0:div><ns0:head>Preparation of multicomponent reference standard</ns0:head><ns0:p>Preparation of total tanshinones: The total tanshinones were prepared using the method for preparing tanshinone extract recorded in the Chinese Pharmacopoeia 2020 edition. Fifty grams of the Danshen powder was extracted by refluxing in 500 mL of 95% ethanol for 2 h. The Danshen extract was evaporated under reduced pressure at 40 °C. The resulting solid was washed three times with hot water (80 °C) to remove the water-soluble components and obtain the total tanshinones.</ns0:p></ns0:div>
<ns0:div><ns0:head>Preparation of multicomponent reference standard (MCRS):</ns0:head><ns0:p>The MCRS containing the three tanshinones was prepared using a CXTH LC-3000 semi-preparative HPLC series equipped with a binary solvent delivery system, a Rheodyne 7725i manual injection valve (5 mL sample loop) and a UV-visible detector (Chuangxintongheng Co. Ltd., Beijing, China). The total tanshinones were dissolved in 30 mL of methanol and separated using a Thermo BDS Hypersil C18 preparative column (22.2 × 150 mm, 5 μm, Thermo Scientific) eluting with methanol/water (77:23, v/v). The flow rate was 9.0 mL min-1, and the detection wavelength was set at 200 nm. The injection volume was 1.0 mL. All the effluents containing cryptotanshinone, tanshinone I, and tanshinone IIA were collected and evaporated to dryness. A total of 210 mg of the MCRS was obtained from 30 mL of the total tanshinone solution.</ns0:p><ns0:p>Quantitative determination of each tanshinone in the multicomponent reference standard by QNMR. Generation of NMR calibration series: A series of volumetric solutions containing 20.50, 10.25, 6.83, 3.42, and 1.71 mg/mL of the MCRS and 2.96, 1.48, 0.74, 0.37, and 0.19 mg/mL of dimethyl fumarate in CDCl 3 were prepared and measured by NMR at 600 MHz.</ns0:p><ns0:p>NMR spectral acquisition and processing parameters: The spectra were acquired using a 14.1 T Bruker Avance 600 MHz NMR spectrometer equipped with a 5 mm broad band (BB) inverse detection probe tuned to detect 1 H resonances. The 1 H resonance frequency was 600.13 MHz. All the spectra were acquired at 298 K. A total of 64 scans of 32 K data points were acquired with a spectral width of 9615.4 Hz (16 ppm). A pre-acquisition delay of 6.5 μs, with an acquisition time of 1.7 s, recycle delay of 24 s, and flip angle of 90° were used. The chemical shift of all the peaks was referenced to the tetramethylsilane (TMS) resonance at 0 ppm. The spectra were Fourier transformed to afford a digital resolution in the frequency domain of 0.293 Hz/Point. The phase and baseline corrections of the spectra were carried out manually. Preliminary data processing was carried out using the Bruker software TOPSPIN 2.1.</ns0:p><ns0:p>The signals for the H-15 of cryptotanshinone (4.884 ppm, t, 2H), H-17 of tanshinone I (2.295 ppm, d, 3H), H-17 of tanshinone IIA (2.259 ppm, d, 3H), and olefinic H of dimethyl fumarate (6.864 ppm, s, 2H) were used to determine the contents of cryptotanshinone, tanshinone I, and tanshinone IIA in the MCRS.</ns0:p><ns0:p>The concentrations of the three tanshinones in the MCRS were calculated using the following general equation:</ns0:p><ns0:formula xml:id='formula_0'>Eq. (1) Cx = (Ax × Wi × Ni) / (Ai × Mi × Nx × V)</ns0:formula><ns0:p>where C x (in mM) corresponds to the concentrations of the three individual tanshinones; A x and A i correspond to the peak areas of the tanshinones and the internal standard; W i corresponds to the mass of the internal standard (in mg); N i and N x correspond to the number of protons of the respective signals of the internal standard and tanshinones used for the quantitative analysis; M i corresponds to the molecular weight (in Da) of the internal standard, and V (in L) corresponds to the volume of CDCl 3 .</ns0:p><ns0:p>The contents of the three tanshinones in the MCRS were calculated using the following general equation:</ns0:p><ns0:formula xml:id='formula_1'>Eq.
(2) Px = (Cx × Mx × V) / Wm × 100%</ns0:formula><ns0:p>where P x (%) corresponds to the percentage of the three individual tanshinones in the MCRS; M x corresponds to the molecular weight (in Da) of the three tanshinones, and W m corresponds to the mass of the MCRS (in mg).</ns0:p></ns0:div>
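As a worked illustration of Eqs. (1) and (2), the sketch below computes Cx and Px for one hypothetical tanshinone signal; the peak areas are invented for illustration, while the molecular weights and the internal-standard mass mirror the values given above.

# Worked illustration of Eqs. (1) and (2) with hypothetical peak areas.
# Cx (mM) = (Ax * Wi * Ni) / (Ai * Mi * Nx * V);  Px (%) = Cx * Mx * V / Wm * 100
Ax, Ai = 1.20, 2.00            # NMR peak areas of analyte and internal standard (invented)
Wi, Mi, Ni = 2.96, 144.13, 2   # dimethyl fumarate: mass (mg), MW (Da), protons of its signal
Nx, Mx = 2, 296.36             # analyte signal protons and MW (e.g., cryptotanshinone H-15 triplet)
V = 0.001                      # CDCl3 volume in L (1 mL)
Wm = 20.50                     # mass of MCRS dissolved (mg)

Cx = (Ax * Wi * Ni) / (Ai * Mi * Nx * V)   # concentration in mM
Px = Cx * Mx * V / Wm * 100                # percentage of the MCRS
print(f"Cx = {Cx:.2f} mM, Px = {Px:.1f} %")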
<ns0:div><ns0:head>Quantitative analysis of the three tanshinones in the Danshen sample by HPLC using the MCRS as the reference standard</ns0:head><ns0:p>Sample preparation and HPLC conditions: The Danshen sample was prepared according to the procedure described in the steps above.</ns0:p><ns0:p>Analytical HPLC was carried out using a Shimadzu LC-20AT series equipped with a quaternary solvent delivery system, an on-line degasser, an auto-sample injector, a column temperature controller, and an SPD-M20A diode-array detector (DAD) (Shimadzu Corporation, Kyoto, Japan). The samples were separated using a Diamonsil C18 column (4.6 × 250 mm, 5 μm, Dikma Technologies Inc.). Notably, the HPLC conditions are compatible with those used in the HPLC-ESI-MS experiments.</ns0:p><ns0:p>Construction of calibration curves: Appropriate amounts of the MCRS were weighed and dissolved into 100 mL of acetone. To construct the calibration curves, 2, 4, 8, 12, 16, and 20 μL of the solution were injected in triplicate. The calibration curves were constructed by plotting the peak areas versus the quantities of the three tanshinones in the MCRS.</ns0:p></ns0:div>
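A minimal sketch of the external calibration described above, assuming invented peak areas: the injected amount per level follows from the MCRS solution concentration (here 5 mg per 100 mL) and the injection volume, a least-squares line is fitted, and an unknown is back-calculated from its peak area. The 28% analyte fraction is only an example figure.

# Illustrative calibration sketch (hypothetical peak areas, not data from this study).
injection_volumes = [2, 4, 8, 12, 16, 20]                 # microlitres injected
mass_per_uL = 5.0 * 0.28 / 100_000                        # mg of analyte per microlitre of solution
amounts = [v * mass_per_uL for v in injection_volumes]    # mg of analyte injected per level
areas = [2.1e4, 4.2e4, 8.3e4, 12.5e4, 16.6e4, 20.9e4]     # invented peak areas

# Ordinary least-squares line: area = slope * amount + intercept
n = len(amounts)
mean_x = sum(amounts) / n
mean_y = sum(areas) / n
sxx = sum((x - mean_x) ** 2 for x in amounts)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(amounts, areas))
slope = sxy / sxx
intercept = mean_y - slope * mean_x

unknown_area = 9.0e4
unknown_amount = (unknown_area - intercept) / slope       # mg in the injected volume
print(f"slope={slope:.3e}, intercept={intercept:.3e}, unknown={unknown_amount:.2e} mg")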
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Detection of the peaks corresponding to cryptotanshinone, tanshinone I, and tanshinone IIA from the TIC and HPLC chromatograms of Danshen</ns0:head><ns0:p>The initial investigation focused on locating the target components in the TIC and HPLC chromatograms of Danshen and establishing the optimized HPLC conditions suitable for the quantitative analysis of the active components in Danshen. A rough gradient elution was first performed, after which the peaks corresponding to cryptotanshinone, tanshinone I, and tanshinone IIA were searched for using their protonated molecular ions [M + H] + in the total ion chromatogram of Danshen (Wang et al., 2017). The results (Fig. <ns0:ref type='figure' target='#fig_11'>2, A and C</ns0:ref>) indicate that the peaks with retention times of 6.4, 6.2, and 8.5 min corresponded to m/z 297, 277, and 295, respectively. Under these conditions, cryptotanshinone and tanshinone I were not adequately separated, even though the positions of cryptotanshinone and tanshinone I in the TIC and HPLC spectra of Danshen were successfully assigned. The result obtained by the optimization of gradient elution is shown in Fig. <ns0:ref type='figure' target='#fig_11'>2 (B and D</ns0:ref>); the peaks with retention times of 17.2, 19.0, and 26.1 min corresponded to m/z 297, 277, and 295, respectively, in the TIC of LC-MS (Fig. <ns0:ref type='figure' target='#fig_11'>2, B</ns0:ref>). Moreover, the three tanshinones were adequately separated from other compounds. This condition was adequate for the subsequent quantitative analysis of the three tanshinones in Danshen.</ns0:p></ns0:div>
<ns0:div><ns0:head>Confirmation of the peak assignments by ESI-MS/MS product ion and UV spectra</ns0:head><ns0:p>The ESI-MS/MS product ion spectra were acquired through the CID of the peaks obtained with the protonated molecular ion [M + H] + of the three tanshinones. The proposed MS fragmentation pathways of the compounds with the protonated molecular ions of m/z 297, 277, and 295 are summarized in Fig. <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>, consistent with the MS fragmentation pathways of cryptotanshinone, tanshinone I, and tanshinone IIA as studied by Wang et al. (2017). The UV spectra shown in Fig. <ns0:ref type='figure' target='#fig_4'>4</ns0:ref> were obtained directly from the HPLC experiments. The UV spectra of the three peaks matched those of cryptotanshinone, tanshinone I, and tanshinone IIA, as reported in the current literature <ns0:ref type='bibr' target='#b18'>(Huang et al., 2018)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Preparation of the MCRS containing mainly cryptotanshinone, tanshinone I, and tanshinone IIA</ns0:head><ns0:p>The MCRS containing mainly cryptotanshinone, tanshinone I, and tanshinone IIA was prepared by the preparative HPLC of the total tanshinone to perform a quantitative analysis of the three components, without individual reference standards. The chromatograms of the total tanshinone and MCRS were shown in Fig. <ns0:ref type='figure' target='#fig_15'>5 (picture A</ns0:ref>). Peaks 1, 2, and 3 were assigned to cryptotanshinone, tanshinone I, and tanshinone IIA, respectively, by collecting the main peaks in the preparative HPLC separately and then analyzing them by analytical HPLC.</ns0:p></ns0:div>
<ns0:div><ns0:head>Determination of the contents of cryptotanshinone, tanshinone I, and tanshinone IIA in the MCRS by NMR</ns0:head><ns0:p>The contents of the three tanshinones in the resulting MCRS were determined directly by 1 H NMR analyses. The 1 H NMR spectrum of the MCRS mixed with the internal standard, dimethyl fumarate, is shown in Fig. 6. The main signals were assigned to the three tanshinones according to the references, excluding the signals belonging to dimethyl fumarate (Mei et al., 2019; Zeng et al., 2017; Wu et al., 2015), and the composition of the MCRS was thus confirmed again by analyzing the signals observed in the 1 H NMR spectrum. To perform an accurate quantitative analysis, the Lorentz deconvolution function in the Bruker software TOPSPIN 2.1 was used, and the lineshapes of the signals were fitted by the Lorentz method. Next, the deconvolution was performed, and the peak areas were automatically generated. In the quantitative analysis of mixtures by NMR using this data processing, the accuracy was determined by the likelihood of the lineshape fitting. The results showed that the contents of cryptotanshinone, tanshinone I, and tanshinone IIA in the Danshen sample were 1.635 g/100 g, 0.718 g/100 g, and 1.953 g/100 g, respectively, by the MCRS method.</ns0:p><ns0:p>Based on the prepared MCRS of Danshen, the signals in the NMR spectra used for the quantitative analysis of the three tanshinones were selected, and the validity of the NMR method was assessed. The NMR analytical method achieved a high separation efficiency, as shown in Fig. <ns0:ref type='figure' target='#fig_17'>7 (A-O</ns0:ref>). The NMR spectra were highly reproducible, and the signals of the three target compounds were well separated.</ns0:p></ns0:div>
<ns0:div><ns0:head>Determination of the contents of the three tanshinones in the Danshen sample by HPLC using the MCRS as the reference standard and comparison of the results with those obtained using individual reference standards.</ns0:head><ns0:p>The calibration curves of the MCRS were constructed to investigate the validity of the NMR method. The calibration curves were constructed by plotting the concentrations of the three tanshinones versus that of dimethyl fumarate, and the linear regression lines were then calculated. The calibration graph demonstrating the linearity of the NMR response with increasing concentrations of the three tanshinones is shown in Fig. <ns0:ref type='figure' target='#fig_8'>8</ns0:ref>.</ns0:p><ns0:p>Approximately 5 mg of the MCRS was dissolved into 100 mL of acetone to construct the calibration curves of the three tanshinones, corresponding to 1.5, 0.5, and 1.7 mg of individual cryptotanshinone, tanshinone I, and tanshinone IIA, respectively. The contents of the three tanshinones in the Danshen sample were also determined according to calibration curves based on individual standards. The content of cryptotanshinone was determined to be 1.635 g/100 g and 1.627 g/100 g, respectively, by MCRS-based and individual-standard quantitation. Tanshinone I was detected at 0.718 g/100 g and 0.727 g/100 g, respectively, based on the two methods. Tanshinone IIA was measured at 1.953 g/100 g and 1.886 g/100 g, respectively. The mean contents of the three compounds obtained by the two methods showed no significant difference at the P ≤ 0.05 level. The results were compared with those obtained using individual reference standards. The deviations of the three tanshinones determined by the two methods were minimal (Fig. <ns0:ref type='figure' target='#fig_9'>9</ns0:ref>), and this could be attributed to the maximum likelihood lineshape fitting of their signals in the NMR spectrum of the MCRS.</ns0:p></ns0:div>
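For orientation, the "minimal deviations" stated above can be made explicit by computing the relative differences between the two sets of reported values; the short sketch below only reproduces that arithmetic from the numbers given in this section.

# Relative deviations between MCRS-based and individual-standard results,
# using the values reported above (g/100 g in the Danshen sample).
results = {
    "cryptotanshinone": (1.635, 1.627),
    "tanshinone I":     (0.718, 0.727),
    "tanshinone IIA":   (1.953, 1.886),
}
for name, (mcrs, standard) in results.items():
    dev = (mcrs - standard) / standard * 100
    print(f"{name}: {dev:+.1f} % relative to the individual standard")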
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The multicomponent reference standard (MCRS) of Danshen was prepared successfully and used for the quantitative analysis of the samples. The MCRS containing cryptotanshinone, tanshinone I, and tanshinone IIA was obtained by collecting peaks 1, 2, and 3 together (Fig. <ns0:ref type='figure' target='#fig_15'>5, picture B</ns0:ref>). The preparation of the MCRS was found to be much easier than that of individual reference standards. The presence of some other compounds whose NMR signals do not overlap severely with those of the target components can be tolerated. The target components need not be separated from each other in their preparative HPLC spectrum because they were finally collected together for preparing the MCRS.</ns0:p><ns0:p>Apart from structure elucidation, NMR has progressed to become a useful technique for the direct quantitative analysis of complex mixtures. However, it remains difficult to quantitatively analyze the target components directly from the extracts of TCMs using the NMR technique, owing to the existence of numerous components, which causes severe overlap of the NMR signals. In this study, HPLC was selected for the quantification of the three tanshinones in Danshen because of its outstanding separation ability compared with NMR. Using the 'extraction' function of the HPLC-MS instrument to find the target components and using the MCRS as the reference standard diminished the dependency on individual reference standards in the quantitative analysis of the target components in TCMs. Evidently, an increased number of components in TCM samples would not make the quantitative analysis of target components more difficult, which could be attributed to the efficient separation function of HPLC. Components whose signals overlap with each other could be divided into different MCRSs. The number of MCRSs prepared from one sample could be adjusted according to the components requiring quantification, considering the prevalence of NMR signal overlap.</ns0:p><ns0:p>The results for the MCRS shown in Fig. <ns0:ref type='figure' target='#fig_7'>7</ns0:ref> indicated that the extents of the likelihood of the lineshape fitting of the signals for the three tanshinones performed by the Lorentz deconvolution are different. <ns0:ref type='figure'>Pictures A-E, F-K, L-M, and N-O</ns0:ref> showed the lineshape fitting results of the signals for cryptotanshinone, tanshinone I, tanshinone IIA, and dimethyl fumarate, respectively. The lineshapes of the following signals were fitted with relatively higher likelihood: the triplet in B at 4.884 ppm belonged to cryptotanshinone, the doublets in G and K (right side) belonged to tanshinone I, the doublet in K (left side) belonged to tanshinone IIA, and the singlet in N belonged to dimethyl fumarate.</ns0:p><ns0:p>The calibration graph demonstrating the linearity of the NMR response with increasing concentrations of the three tanshinones in Fig. <ns0:ref type='figure' target='#fig_8'>8</ns0:ref> indicated that the R 2 values calculated from the signals in C and K (left side) (Fig. <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>) belonging to cryptotanshinone and tanshinone IIA were 1.0000 and 0.9999, respectively. The signals in G and K (right side) (Fig. <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>) belonging to tanshinone I also exhibited a high likelihood of lineshape fitting, and the R 2 values calculated from the two signals were 0.9995 and 0.9997, respectively.
The result confirmed that the concentrations of the three tanshinones in the MCRS could be accurately determined using 1 H NMR after deconvolution. The average contents of the three tanshinones were calculated from the NMR spectra of the MCRS at five different concentrations, and the results were 28.01%, 9.43%, and 34.36%, respectively. The signals in B, K (left side and right side) (Fig. <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>), and N (Fig. <ns0:ref type='figure' target='#fig_7'>7</ns0:ref>) were finally selected for the quantitative analysis of the three tanshinones, because their lineshapes were fitted with high likelihood and their calibration curves possessed high linearity.</ns0:p><ns0:p>The calibration curves of the three tanshinones were constructed using the same method as for individual reference standards. A significant advantage of using the MCRS as the reference standard is that it reduces the dependency of quantitative analysis on individual reference standards and also reduces the consumption of reference standards.</ns0:p></ns0:div><ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In this study, the advantageous combination of HPLC, MS, and NMR techniques facilitated the accurate quantification of the target components in TCMs without individual reference standards, which could be defined as detection-confirmation-standardization-quantification (DCSQ). Through the combination of multiple analytical techniques, the MCRS was detected, confirmed, prepared, and quantified successfully. In this study, direct quantitative analysis by NMR was applied to the MCRS even when chromatographic resolution of the mixture was insufficient. The MCRS of Danshen, with its three major constituents, cryptotanshinone, tanshinone I, and tanshinone IIA, was prepared and used for the samples. We provide an innovative method for obtaining reference standards. The MCRS could serve as a novel type of standard for the quantification of the corresponding samples, with less dependence on individual reference standards. The MCRS as the reference standard can be applied to quantitative analysis and to wider industrial applications. It will be accurate and convenient for the target analytes. It is a substantial advance in the quantitative analysis of complex mixtures, especially TCMs.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In this study, the advantageous combination of HPLC, MS, and NMR techniques facilitated the accurate quantification of the target components in TCMs, without individual reference standards, which could be defined as Detection-confirmation-standardization-quantification (DCSQ). By combination of multiple analytical techniques, the MCRS was detected, confirmed, prepared and determinated successfully. In the study, NMR' direct quantitative analysis was utilized to the MCRS even though the resolution of HPLC was not enough in the mixture. The MCRS of danshen with three major constituents, cryptotanshinone, tanshinone I, and tanshinone IIA, was prepared and used for samples. We provide a innovative method to get reference standards. The MCRS could be a novel approach as standards for quantification about corresponding samples, which is less dependent than individual reference standards. The MCRS as the reference standard will be used to quantitative analysis and for the more industrial applications. It will be more accurate and convenient for the target analyte. It is a great advance in quantitative analysis for complex composition, especially TCMs. <ns0:ref type='figure'>C-D</ns0:ref>). The HPLC conditions were as follows: The water phase contained 0.1% formic water. The flow rate was 1.0 mL/min. The detection wavelength was set at 270 nm. The eluting condition of (A-B) was as follows: 85% acetonitrile was maintained for 15 min, and the column temperature was 35 C. The eluting condition of (C-D) was as follows: 60% acetonitrile was maintained for 0-30 min, and the concentration of acetonitrile was increased from 60% to 80% and then to 90% of acetonitrile at 40 min. The column temperature was 22 C. Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed The MCRS was analyzed under the same condition as HPLC-MS experiment (Fig. <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>, eluting gradient of (C-D) ). Manuscript to be reviewed Manuscript to be reviewed </ns0:p><ns0:note type='other'>Chemistry Journals Figure 1</ns0:note><ns0:note type='other'>Chemistry Journals Figure 2</ns0:note><ns0:note type='other'>Chemistry Journals Figure 3</ns0:note><ns0:note type='other'>Chemistry Journals Figure 5</ns0:note><ns0:note type='other'>Figure 6</ns0:note><ns0:note type='other'>Chemistry Journals Figure 7</ns0:note><ns0:note type='other'>Chemistry Journals Figure 8</ns0:note></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>H NMR analyses. The 1 H NMR spectrum of the MCRS mixed with the internal standard, dimethyl fumarate, was shown in Fig. The main signals were assigned to the three tanshinones according to the references, excluding the signals belonging to dimethyl fumarate(Mei et al., 2019; Zeng et al., 2017; Wu et al., 2015). The composition of the MCRS was confirmed again by analyzing the signals observed in the 1 H NMR spectrum. To perform an accurate quantitative analysis, the data processing technique of the Lorentz deconvolution function in the Bruker software TOPSPIN 2.1 was used. The lineshapes of signals were fitted by the Lorentz method.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1 Chemical structures of cryptotanshinone (A), tanshinone I (B), tanshinone IIA (C</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2Total ion current chromatograms of the Danshen extract and extracted ion chromatograms of target protonated molecular ions (A-B) and UV spectrum of the Danshen extract (C-D). The HPLC conditions were as follows: The water phase contained 0.1% formic water. The flow rate was 1.0 mL/min. The detection wavelength was set at 270 nm. The eluting condition of (A-B) was as follows: 85% acetonitrile was maintained for 15 min, and the column temperature was 35 C. The eluting condition of (C-D) was as follows: 60% acetonitrile was maintained for 0-30 min, and the concentration of acetonitrile was increased from 60% to 80% and then to 90% of acetonitrile at 40 min. The column temperature was 22 C.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 Positive ion ESI-MS/MS spectra and the proposed MS fragmentation pathways of peaks 1 (A), 2 (B), and 3 (C).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 UV spectra of the three peaks shown in Fig. 2 (picture D). (A) UV spectrum of the peak with the retention time of 16.8 min; (B) UV spectrum of the peak with the retention time of 18.6 min; (C) UV spectrum of the peak with the retention time of 25.7 min.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure<ns0:ref type='bibr' target='#b4'>5</ns0:ref> Representative preparative HPLC chromatogram of total tanshinones (A) and analytical HPLC chromatogram of the MCRS (B). The MCRS was analyzed under the same condition as HPLC-MS experiment (Fig.2, eluting gradient of (C-D)).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 6 600</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6 600 MHz 1 H NMR spectrum of the MCRS mixed with dimethyl fumarate. The concentrations of the MCRS and dimethyl fumarate in CDCl3 were 20.50 and 2.96 mg•mL -1 , respectively.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7 Representative results of the lineshape fitting of the signals belonging to the three tanshinones and dimethyl fumarate performed by the Lorentz deconvolution. Images A-E reflect the representative lineshape fitting of signals belonging to cryptotanshinone. Images F-K (the right side) reflect the representative lineshape fitting of signals belonging to tanshinone I. The doublet in the left side of K, images L and M (the doublet in the right side) reflect the representative lineshape fitting of signals belonging to tanshinone IIA. Images N and O reflect the lineshape fitting of signals belonging to dimethyl fumarate.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 8</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8 Concentrations of the three tanshinones determined by specific signals.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 9</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9 Deviations of the contents of the three tanshinones in the Danshen sample as determined by the MCRS contrast to individual reference standards.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1 Chemical structures of cryptotanshinone (A), tanshinone I (B), tanshinone IIA (C), and dimethyl fumarate (D).</ns0:figDesc><ns0:graphic coords='15,42.52,199.12,525.00,142.50' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 Total ion current chromatograms of the Danshen extract and extracted ion chromatograms of target protonated molecular ions (A-B) and UV spectrum of the Danshen extract (C-D).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 Positive ion ESI-MS/MS spectra and the proposed MS fragmentation pathways of peaks 1 (A), 2 (B), and 3 (C).</ns0:figDesc><ns0:graphic coords='17,42.52,199.12,525.00,375.75' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 UV spectra of the three peaks</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 Representative preparative HPLC chromatogram of total tanshinones (A) and analytical HPLC chromatogram of the MCRS (B).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Figure 6 600 MHz 1 H</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 6 600 MHz 1 H NMR spectrum of the MCRS mixed with dimethyl fumarate.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7 Representative results of the lineshape fitting of the signals belonging to the three tanshinones and dimethyl fumarate performed by the Lorentz deconvolution.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head>Figure 8</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8 Concentrations of the three tanshinones determined by specific signals.</ns0:figDesc><ns0:graphic coords='23,42.52,178.87,525.00,362.25' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_19'><ns0:head>Figure 9 Deviations</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>al., 2018; Luisa et al., 2017; Petrakis et al., 2017; Chauthe et al., 2012;</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:note place='foot'>Analytical, Inorganic, Organic, Physical, Materials Science</ns0:note>
</ns0:body>
" | "Dear Professor Hai-Long Wu, reviewers and editors:
We are very thankful for your comments and suggestions on the manuscript “Detection-Confirmation-Standardization-Quantification: A novel method for the quantitative analysis of active components in traditional Chinese medicines”. After considering the editor's and reviewers' comments, the manuscript has been carefully revised point by point:
Reviewer 1
Basic reporting
The English language should be improved to ensure that an international audience can clearly understand your text. Some examples where the language could be improved include lines 292, 322 – the word ‘reliability’ makes comprehension difficult, suggest replacing with ‘dependency’.
Answer: Agreed. It has been revised (Manuscript Lines 294 and 323, before Lines 292 and 322). We have also reviewed the full text and updated some expressions to make the manuscript clearer.
Lines 91- Is it necessary the word ‘directly’ because the preparative HPLC is different from the above-mentioned quantitative HPLC.
Answer: Agreed. It has been deleted; the sentence now reads “The mixture mainly consisting of the target components, referred to as the multicomponent reference standard (MCRS), was prepared by preparative HPLC.” (Manuscript Line 84 now, before Line 91).
Please check:
Lines 169- The signals for the H-15 of cryptotanshinone (4.884 ppm, t, 1H), 1H or 2H?
Answer: It was a description error in the manuscript, and it has been revised to “2H” after checking. (Manuscript Line 162, before Line 169).
Lines 218- times of 17.2, 19.0, and 26.1 min, It doesn't match the coordinates in Fig. 2 (A2 and B2);
Answer: All the samples pass through the UV detector first and then enter the mass spectrometer, so the retention times of the peaks in the mass spectra are slightly later than the UV retention times. The times of 17.2, 19.0, and 26.1 min correspond to Fig. 2B; we have also revised the manuscript at Lines 210-212 (before Line 218) to make this clearer.
Lines 304- the doublets in 2b and 2f (left side) belonged to 305 tanshinone I, the doublet in 2f (right side) belongs to tanshinone IIA, there is a contradiction with the annotation of Figure 7.
Answer: It was a description error in the manuscript. Line 306 now reads: the doublets in G and K (right side) belonged to tanshinone I, the doublet in K (left side) belonged to tanshinone IIA, and the singlet in N belonged to dimethyl fumarate. We have made revisions at Lines 306-308 and Lines 469-475 (before Lines 304 and 434).
Lines 309- in 1c and 2f (right side) (Fig. 7) 1c or 1b?
Answer: The signals in C and K (left side) (Fig. 7) belong to cryptotanshinone and tanshinone IIA, respectively. (Manuscript Line 311, before Line 309).
Lines 315- tanshinone IIA calculated from the NMR spectra of the MCRS with five different concentrations, five or three?
Answer: The average contents of the three tanshinones were calculated with five different concentrations from the NMR spectra of the MCRS, and the results were 28.01%, 9.43%, and 34.36%, respectively. (Manuscript Lines 316-318, before Line 315).
Experimental design
no comment
Validity of the findings
no comment
Reviewer: Hai-Long Wu
Basic reporting
The paper reported a novel strategy for the quantitative analysis of active components in traditional Chinese medicines, without individual reference standards. The proposed strategy was used for the quantitative analysis of cryptotanshinone, tanshinone I, and tanshinone IIA in Danshen samples. I think it is worthy research work and has potential in traditional Chinese medicines. However, there are still some details and issues as follows that need further clarification and discussion. Thus, I recommend the acceptance of this work after major revisions.
1. Accurate quantitative analysis of target analytes in complex matrices is very important, so, quantitative results of cryptotanshinone, tanshinone I, and tanshinone IIA in Danshen samples should be provided in the text.
Answer: We have added the quantitative results in the manuscript from Line 247-249.
2. The NMR spectra of complex mixtures often exhibit severe peak overlap, which can affect the accuracy of the quantity of analyte. The separation degree of the peaks in 1b, 2f (Figure 7) and 4a should be calculated.
Answer: We have revised the manuscript from Line 252-254.
3. The experiment process seems a little complicated, whether this will cause the loss of the target analyte?
Answer: Our experiment was designed to proceed from the original medicinal material to the MCRS. The experimental process includes detection by TICs, confirmation by HPLC-ESI-MS/MS and UV spectra, the preparation of the MCRS, and its application. The experiment was designed to obtain more accurate quantitative results. This explanation has been supplemented in the manuscript at Lines 336-338.
4. The results of the proposed method should be verified by using individual reference standards and evaluated by using statistical methods.
Answer: The proposed method was verified by using individual reference standards; we have revised the manuscript at Lines 90-93. The results evaluated by the two methods are shown in the manuscript, and we have also revised the manuscript at Lines 267-272.
Experimental design
sss
Validity of the findings
sss
Editors:
1. Author Names
The author list on your manuscript Author Cover Page, does not match the list you entered online. We only use the information in the metadata provided in the submission system. To make them match, either A) Upload a manuscript with a corrected Author Cover Page or B) Edit your online author list.
Answer: The author names have been amended in the manuscript and online, and the author list on the manuscript Author Cover Page now matches the list entered online. (Manuscript Lines 5-9.)
2. Cover Letter and Highlights
You have provided the cover letter and highlights as Supplemental Files. Please remove them from the Supplemental Files section.
Answer: The files of cover letter and highlights had been deleted as required.
3. Unsupported Files Types
You have uploaded your Figures in an unsupported format. Figures should be saved as EPS, PNG, or PDF (vector images only) file format, measure at least 900 by 900 pixels (but no more than 3000 by 3000 pixels) and eliminate excess white space around the images. JPG must only be used for photographs.
Answer: The format of Figures 1-9 has been adjusted according to the requirements. All figures are submitted as PNG files.
4. Tracked Changes Manuscript Source File
Please could you upload the manuscript with computer-generated tracked changes to the Revision Response Files section. The reviewers and Academic Editor will want to see all of the changes documented and will normally request it if some changes appear to be missing. Please use the Compare Function in Microsoft Word to track all changes made to the manuscript since the last submission.
Answer: All modifications have been marked with computer-generated tracked changes and uploaded to the Revision Response Files section in the file “Tracked changes manuscript”. The modifications are described in detail in the file “rebuttal letter”, so reviewers and editors can review them easily. Thanks!
5. Figures
Figures 1, 2 and 7 have multiple parts. Each figure with multiple parts should have alphabetical (e.g. A, B, C) labels on each part and all parts of each single figure should be submitted together in one file. In this case:
• The 4 parts of Figure 1 should be labeled A-D.
• The 4 parts of Figure 2 should be labeled A-D.
• The 15 parts of Figure 7 should be labeled A-O.
Please provide replacement figures measuring minimum 900 pixels and maximum 3000 pixels on all sides, saved as PNG, EPS, or PDF (vector images only) file format without excess white space around the images here.
Answer: Figures 1, 2, and 7 have been adjusted as the editors suggested. Figure 1 has four parts labeled A-D; Figure 2 has four parts labeled A-D; Figure 7 has fifteen parts labeled A-O. The details can be seen in Figures 1, 2, and 7, and all descriptions of Figures 1, 2, and 7 in the manuscript have been modified accordingly.
6. Figure Accessibility
If possible, please adjust the red/green colors used on Figure 9 to make it accessible to those with color blindness. Please review our color blindness guidelines for figures.
Please provide a replacement figure measuring minimum 900 pixels and maximum 3000 pixels on all sides, saved as PNG, EPS or PDF (vector images) file format without excess white space around the images.
Note: Please do not replace the red/green colors with patterns in your figures.
Answer: Figure 9 has been adjusted, with blue, yellow, and purple replacing the previous colors. The details can be seen in Figure 9.
Looking forward to hearing from you, I remain,
Yours Sincerely,
Associate Professor Dr. Sha Chen
Institute of Chinese Materia Medica,
China Academy of Chinese Medical Sciences.
" | Here is a paper. Please give your review comments after reading it. |
668 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. Quantitative analysis of the active ingredients of traditional Chinese medicine is a current research trend. The objective of this study was to build a novel method, namely detection-confirmation-standardization-quantification (DCSQ), for the quantitative analysis of active components in traditional Chinese medicines without individual reference standards.</ns0:p><ns0:p>Methods. Danshen (the dried root of Salvia miltiorrhiza) was used as the matrix. The 'extraction' function of a high-performance liquid chromatography-mass spectrometry (HPLC-MS) instrument was used to find the peaks corresponding to cryptotanshinone, tanshinone I, and tanshinone IIA in the total ion current (TIC) chromatogram of Danshen. The multicomponent reference standard (MCRS) containing mainly the three tanshinones was prepared by preparative HPLC. Their contents in the resulting MCRS were determined by NMR, and the constituents of the MCRS were confirmed. The MCRS containing known contents of the three tanshinones was used as the reference standard for the quantitative analysis of cryptotanshinone, tanshinone I, and tanshinone IIA in Danshen samples by analytical HPLC.</ns0:p><ns0:p>Results. The optimized HPLC conditions for the quantitative analysis of the active components in Danshen were established, and the assignments of the extracted peaks were confirmed by analyzing the characteristic fragments in their MS/MS product ion spectra and their UV spectra. The MCRS containing the three tanshinones was then prepared successfully. In the NMR determination of the contents, the lineshapes were fitted with high likelihood and the calibration curves possessed high linearity. The contents determined in Danshen samples through DCSQ exhibited minimal deviations from those obtained with individual reference standards.</ns0:p><ns0:p>Conclusion. The established DCSQ method allows convenient quantitative analysis of the active components in TCMs using the MCRS, without individual reference standards. This method is a substantial advance in the quantitative analysis of complex mixtures, especially TCMs.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Traditional Chinese medicines (TCMs) have been widely used in the treatment of various diseases because of their remarkable and reliable biological activities. Therefore, the determination of the active components in TCMs by chromatographic methods such as high-performance liquid chromatography (HPLC), thin-layer chromatography (TLC), and high-performance capillary electrophoresis (HPCE) is considered the main strategy by which the quality of TCMs may be controlled. Conventionally, the active components of TCMs are quantified with the corresponding reference standards (National Commission of Chinese Pharmacopoeia, 2020). Popular compounds are purchased from authoritative organizations, whereas rare compounds are purified in-house. Another approach, namely single standard to determine multicomponents (SSDMC), has been developed to reduce the reliance of quantification on reference standards (Fang et al., 2017; <ns0:ref type='bibr' target='#b2'>Liu et al., 2017)</ns0:ref>. The reference standards are still needed when assigning peaks and calculating conversion factors (the molar response ratio of reference standard to analyte). NMR spectroscopy is considered a promising quantification method for the direct determination of target compounds in mixtures without reference standards; the amount of analytes can be calculated using the ratios of the signal intensities of the protons of different compounds and an internal reference standard <ns0:ref type='bibr'>(Frezza et al., 2018; Luisa et al., 2017; Petrakis et al., 2017; Chauthe et al., 2012;</ns0:ref> <ns0:ref type='bibr' target='#b7'>Staneva et al., 2011</ns0:ref>). However, a major challenge associated with NMR is the difficulty involved in the quantification of complex mixtures. The NMR spectra of complex mixtures often exhibit severe peak overlap, thereby affecting the accuracy of analyte quantification. To prevent severe peak overlap, Staneva et al. (2011) fractionated the extracts of TCMs before quantifying the target components by NMR <ns0:ref type='bibr' target='#b6'>(Chauthe et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b7'>Staneva et al., 2011)</ns0:ref>. This procedure often leads to the dissociation of target components into separate parts or their absorption by sorbents. Consequently, the results do not accurately reflect the contents of target compounds in TCMs.</ns0:p><ns0:p>We proposed a novel method for the quantitative analysis of the active components in TCMs without individual reference standards, namely detection-confirmation-standardization-quantification (DCSQ). Danshen (the dried root of Salvia miltiorrhiza), which is a popular traditional Chinese medicinal herb best known for its putative cardioprotective and antiatherosclerotic effects, was used as the matrix <ns0:ref type='bibr' target='#b8'>(Jia et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b9'>Liu et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b11'>Wang et al., 2011)</ns0:ref>. The main components responsible for its pharmacological properties are hydrophilic depsides and lipophilic diterpenoid quinones <ns0:ref type='bibr' target='#b12'>(Wang et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b13'>Li et al., 2018</ns0:ref>). An application of the DCSQ method for the quantitative analysis of three main diterpenoid quinones, cryptotanshinone, tanshinone I, and tanshinone IIA (Fig.
<ns0:ref type='figure' target='#fig_15'>1, A-C</ns0:ref>) was reported for the first time.</ns0:p><ns0:p>HPLC is a suitable technique for the quantitative analysis of the active components in TCMs because they contain numerous compounds. The separation of target components from a complex mixture in TCMs can be achieved by optimizing the HPLC conditions. Based on the combination of HPLC, MS, and NMR techniques, the quantitative analysis of the target components in TCMs without reference standard can be performed as follows: First, the 'extraction' function in HPLC-MS instrument was used to find the peaks corresponding to the target components from the complex TIC chromatogram of TCMs, even without adequate separation( Wang et al.,2017). Next, the peaks were confirmed by the analysis of their MS/MS spectra. Finally, the HPLC conditions were optimized for the quantitative analysis of target components. The mixture mainly consisting of the target components, referred to as the multicomponent reference standard (MCRS), was prepared by preparative HPLC. The contents of the target components in the MCRS were determined directly by NMR, and the MCRS was used as the reference standard instead of individual reference standards.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Materials and Chemicals</ns0:head><ns0:p>Danshen was purchased from Qiancao herbal wholesale company (Beijing). The standards of cryptotanshinone (1), tanshinone I (2), and tanshinone IIA (3) (Fig. <ns0:ref type='figure' target='#fig_15'>1, A-C</ns0:ref>) were obtained from the Traditional Chinese Medicine Solid Preparation National Engineering Research Center (Nanchang, Jiangxi Province, China), with a purity of 98%. Dimethyl fumarate (4) (Fig. <ns0:ref type='figure' target='#fig_15'>1, D</ns0:ref>) with a purity of 99% was obtained from Alfa Aesar (Ward Hill, Massachusetts, USA). CDCl 3 (99.8% pure) was obtained from Cambridge Isotope Laboratories Inc. (Andover, MA, USA). Acetonitrile and methanol for HPLC were obtained from Fisher Scientific (Fair Lawn, New Jersey, USA). Formic acid (>98% purity) for HPLC was obtained from Sinopharm Chemical Reagent Co. Ltd. (Shanghai, China). All other chemicals were of analytical grade. Deionized water was obtained from Wahaha Company (Hangzhou, Zhejiang, China).</ns0:p></ns0:div>
<ns0:div><ns0:head>Identification of cryptotanshinone, tanshinone I, and tanshinone IIA in Danshen by HPLC-ESI-MS/MS</ns0:head><ns0:p>Sample preparation: Dried Danshen samples were powdered using a mill and sieved through a No. 24 mesh. Approximately 0.3 g of the powder sample was weighed and extracted by refluxing in 50 mL of methanol for 1 h. The weight loss was adjusted with methanol after the extraction. One mL of the sample solution was filtered through a 0.45 m nylon filter into a HPLC amber sample vial for injection. Data acquisition: An Agilent 1100 series HPLC system (Agilent Technologies, Santa Clara, CA, USA) equipped with a quaternary solvent delivery system, an on-line degasser, an autosample injector, a column temperature controller, and a variable wavelength detector (VWD) was coupled to an Agilent 6460 triple quadruple mass spectrometer (QQQ-MS) equipped with a dual electrospray ion source (ESI) (Agilent Technologies, Santa Clara, CA, USA) formed the HPLC-ESI-MS/MS system. The samples were separated using a Diamonsil C18 column (4.6 × 250 mm, 5 μm, Dikma Technologies Inc.). The mobile phase was a mixture of acetonitrile (mobile phase A) and water containing 0.1% formic acid (mobile phase B). The gradient elution started at 60% A, followed by a linear increase to 80% A at 30 min, 90% A at 40 min, a linear decrease to 60% A at 41 min, and held constant at 55 min before the next injection was performed. The flow rate was 1.0 mL min -1 . The column temperature was maintained at 22 C. The injection volume was 20 μL.</ns0:p><ns0:p>The mass spectrometric data were acquired using the positive electrospray mode. The fullscan mass spectrum was recorded over the range of m/z 250-325. N 2 was used as the sheath and auxiliary gas of the mass spectrometer. The capillary voltage for ESI spectra was 3.5 kV, and the capillary temperature was set at 300 C. Ultra-high pure helium was used as the collision gas in the collision-induced dissociation (CID) experiments. The MS/MS product ion spectra were acquired through the CID of the peaks with the protonated molecular ion [M + H] + of each analyte. The collision energy for the target protonated molecular ions was set at 20 eV to obtain the appropriate fragment information.</ns0:p></ns0:div>
<ns0:div><ns0:head>Preparation of multicomponent reference standard</ns0:head><ns0:p>Preparation of total tanshinones: The total tanshinones were prepared according to the method for preparing tanshinone extract recorded in the Chinese Pharmacopoeia 2020 edition. Fifty grams of the Danshen powder was extracted by refluxing in 500 mL of 95% ethanol for 2 h. The Danshen extract was evaporated under reduced pressure at 40 °C. The resulting solid was washed three times with hot water (80 °C) to remove the water-soluble components and obtain the total tanshinones.</ns0:p></ns0:div>
<ns0:div><ns0:head>Preparation of multicomponent reference standard (MCRS):</ns0:head><ns0:p>The MCRS containing the three tanshinones was prepared using a CXTH LC-3000 semi-preparative HPLC series equipped with a binary solvent delivery system, a Rheodyne 7725i manual injection valve (5 mL sample loop) and a UV-visible detector (Chuangxintongheng Co. Ltd., Beijing, China). The total tanshinones were dissolved in 30 mL of methanol and separated using a Thermo BDS Hypersil C18 preparative column (22.2 × 150 mm, 5 μm, Thermo Scientific) eluted with methanol/water (77:23, v/v). The flow rate was 9.0 mL min -1 , and the detection wavelength was set at 200 nm. The injection volume was 1.0 mL. All the effluents containing cryptotanshinone, tanshinone I, and tanshinone IIA were collected and evaporated to dryness. A total of 210 mg of the MCRS was obtained from 30 mL of the total tanshinone solution.</ns0:p><ns0:p>Quantitative determination of each tanshinone in the multicomponent reference standard by qNMR. Generation of the NMR calibration series: A series of volumetric solutions containing 20.50, 10.25, 6.83, 3.42, and 1.71 mg/mL of the MCRS and 2.96, 1.48, 0.74, 0.37, and 0.19 mg/mL of dimethyl fumarate in CDCl 3 were prepared and measured by NMR at 600 MHz.</ns0:p><ns0:p>NMR spectral acquisition and processing parameters: The spectra were acquired using a 14.1 T Bruker Avance 600 MHz NMR spectrometer equipped with a 5 mm broad band (BB) inverse detection probe tuned to detect 1 H resonances. The 1 H resonance frequency was 600.13 MHz. All the spectra were acquired at 298 K. A total of 64 scans of 32 K data points were acquired with a spectral width of 9615.4 Hz (16 ppm). A pre-acquisition delay of 6.5 μs, an acquisition time of 1.7 s, a recycle delay of 24 s, and a flip angle of 90° were used. The chemical shifts of all the peaks were referenced to the tetramethylsilane (TMS) resonance at 0 ppm. The spectra were Fourier transformed to afford a digital resolution in the frequency domain of 0.293 Hz/point. The phase and baseline corrections of the spectra were carried out manually. Preliminary data processing was carried out using the Bruker software TOPSPIN 2.1.</ns0:p><ns0:p>The signals for the H-15 of cryptotanshinone (4.884 ppm, t, 2H), H-17 of tanshinone I (2.295 ppm, d, 3H), H-17 of tanshinone IIA (2.259 ppm, d, 3H), and the olefinic H of dimethyl fumarate (6.864 ppm, s, 2H) were used to determine the contents of cryptotanshinone, tanshinone I, and tanshinone IIA in the MCRS.</ns0:p><ns0:p>The concentrations of the three tanshinones in the MCRS were calculated using the following general equation:</ns0:p><ns0:formula xml:id='formula_0'>Eq. (1) Cx = (Ax × Wi × Ni) / (Ai × Mi × Nx × V)</ns0:formula><ns0:p>where C x (in mM) corresponds to the concentrations of the three individual tanshinones; A x and A i correspond to the peak areas of the tanshinone and the internal standard; W i corresponds to the mass of the internal standard (in mg); N i and N x correspond to the numbers of protons of the respective signals of the internal standard and the tanshinone used for the quantitative analysis; M i corresponds to the molecular weight (in Da) of the internal standard, and V (in L) corresponds to the volume of CDCl 3 .</ns0:p><ns0:p>The contents of the three tanshinones in the MCRS were calculated using the following general equation:</ns0:p><ns0:formula xml:id='formula_1'>Eq.
(2) Px = (Cx × Mx × V) / Wm × 100%</ns0:formula><ns0:p>where P x (%) corresponds to the percentage of the three individual tanshinones in the MCRS; M x corresponds to the molecular weight (in Da) of the three tanshinones, and W m corresponds to the mass of the MCRS (in mg).</ns0:p></ns0:div>
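To illustrate how Eqs. (1) and (2) are applied, the minimal Python sketch below computes the concentration and content of one tanshinone from hypothetical integral values; all numbers are placeholders, not measured data from this study.

```python
# Minimal sketch of Eqs. (1) and (2); input values are hypothetical placeholders.

def tanshinone_concentration(A_x, A_i, W_i, M_i, N_x, N_i, V):
    """Eq. (1): concentration C_x (mM) of a tanshinone from qNMR integrals.
    A_x, A_i: peak areas of analyte and internal standard
    W_i: mass of internal standard (mg); M_i: its molecular weight (Da)
    N_x, N_i: numbers of protons behind the integrated signals
    V: volume of CDCl3 (L)
    """
    return (A_x * W_i * N_i) / (A_i * M_i * N_x * V)

def tanshinone_content(C_x, M_x, V, W_m):
    """Eq. (2): percentage P_x of a tanshinone in the MCRS.
    C_x: concentration (mM); M_x: molecular weight (Da)
    V: volume of CDCl3 (L); W_m: mass of MCRS (mg)
    """
    return C_x * M_x * V / W_m * 100.0

# Example with made-up numbers for a cryptotanshinone-like case:
C_x = tanshinone_concentration(A_x=1.00, A_i=2.50, W_i=2.96, M_i=144.13,
                               N_x=2, N_i=2, V=0.001)
P_x = tanshinone_content(C_x, M_x=296.36, V=0.001, W_m=20.50)
print(f"C_x = {C_x:.3f} mM, P_x = {P_x:.2f} %")
```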
<ns0:div><ns0:head>Quantitative analysis of the three tanshinones in the Danshen sample by HPLC using the MCRS as the reference standard</ns0:head><ns0:p>Sample preparation and HPLC conditions: The Danshen sample was prepared according to the procedure described above.</ns0:p><ns0:p>Analytical HPLC was carried out using a Shimadzu LC-20AT series equipped with a quaternary solvent delivery system, an on-line degasser, an autosampler, a column temperature controller, and an SPD-M20A diode-array detector (DAD) (Shimadzu Corporation, Kyoto, Japan). The samples were separated using a Diamonsil C18 column (4.6 × 250 mm, 5 μm, Dikma Technologies Inc.). Notably, the HPLC conditions were consistent with those used in the HPLC-ESI-MS experiments.</ns0:p><ns0:p>Construction of calibration curves: Appropriate amounts of the MCRS were weighed and dissolved in 100 mL of acetone. To construct the calibration curves, 2, 4, 8, 12, 16, and 20 μL of the solution were injected in triplicate. The calibration curves were constructed by plotting the peak areas versus the quantity of the three tanshinones in the MCRS.</ns0:p></ns0:div>
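As a companion to the calibration procedure above, the short Python sketch below fits a straight line to hypothetical peak areas versus injected amounts; the numbers are placeholders and numpy is assumed to be available, so this is an illustration of the approach rather than the software actually used.

```python
import numpy as np

# Hypothetical injected amounts (µg on column) and observed peak areas for one tanshinone.
amount = np.array([0.1, 0.2, 0.4, 0.6, 0.8, 1.0])       # placeholder quantities
area = np.array([1020, 2050, 4100, 6180, 8150, 10230])  # placeholder peak areas

slope, intercept = np.polyfit(amount, area, 1)
r2 = np.corrcoef(amount, area)[0, 1] ** 2
print(f"area = {slope:.1f} * amount + {intercept:.1f}, R^2 = {r2:.4f}")

# An unknown sample peak area is then converted back to an amount on column:
unknown_area = 5000.0
print("estimated amount:", (unknown_area - intercept) / slope)
```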
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Detection of the peaks corresponding to cryptotanshinone, tanshinone I, and tanshinone IIA from the TIC and HPLC chromatograms of Danshen</ns0:head><ns0:p>The initial investigation focused on determining the target components from the TIC and HPLC chromatograms of Danshen and establishing the optimized HPLC conditions suitable for the quantitative analysis of the active components in Danshen. A rough gradient elution was first performed, after which the peaks corresponding to cryptotanshinone, tanshinone I, and tanshinone IIA were searched with their protonated molecular ion [M + H] + from the total ion chromatogram of Danshen( Wang et al.,2017).The results (Figs. <ns0:ref type='figure' target='#fig_10'>2, A and C</ns0:ref>) indicated that the peak with the retention times of 6.4, 6.2, and 8.5 min corresponded to the m/z of 297, 277, and 295, respectively. Under these conditions, cryptotanshinone and tanshinone I were not adequately separated, even though the positions of cryptotanshinone and tanshinone I in the TIC and HPLC spectra of Danshen were successfully assigned. The result obtained by the optimization of gradient elution is shown in Fig. <ns0:ref type='figure' target='#fig_10'>2 (B and D</ns0:ref>); the peaks with the retention times of 17.2, 19.0, and 26.1 min corresponded to the m/z of 297, 277, and 295, respectively, in the TIC of LC-MS (Fig. <ns0:ref type='figure' target='#fig_10'>2, B</ns0:ref>). Moreover, the three tanshinones were adequately separated from other compounds. This condition was adequate for the subsequent quantitative analysis of the three tanshinones in Danshen.</ns0:p></ns0:div>
<ns0:div><ns0:head>Confirmation of the peak assignments by ESI-MS/MS product ion and UV spectra</ns0:head><ns0:p>The ESI-MS/MS product ion spectra were acquired through the CID of the peaks obtained with the protonated molecular ions [M + H] + of the three tanshinones. The proposed MS fragmentation pathways of the compounds with the protonated molecular ions of m/z 297, 277, and 295 are summarized in Fig. <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>, consistent with the MS fragmentation pathways of cryptotanshinone, tanshinone I, and tanshinone IIA reported by Wang et al. (2017). The UV spectra shown in Fig. <ns0:ref type='figure' target='#fig_3'>4</ns0:ref> were obtained directly from the HPLC experiments. The UV spectra of the three peaks matched those of cryptotanshinone, tanshinone I, and tanshinone IIA, as reported in the literature <ns0:ref type='bibr' target='#b18'>(Huang et al., 2018)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Preparation of the MCRS containing mainly cryptotanshinone, tanshinone I, and tanshinone IIA</ns0:head><ns0:p>The MCRS containing mainly cryptotanshinone, tanshinone I, and tanshinone IIA was prepared by preparative HPLC of the total tanshinones to enable a quantitative analysis of the three components without individual reference standards. The chromatograms of the total tanshinones and the MCRS are shown in Fig. <ns0:ref type='figure' target='#fig_14'>5 (picture A</ns0:ref>). Peaks 1, 2, and 3 were assigned to cryptotanshinone, tanshinone I, and tanshinone IIA, respectively, by collecting the main peaks from the preparative HPLC separately and then analyzing them by analytical HPLC.</ns0:p></ns0:div>
<ns0:div><ns0:head>Determination of the contents of cryptotanshinone, tanshinone I, and tanshinone IIA in the MCRS by NMR</ns0:head><ns0:p>The contents of the three tanshinones in the resulting MCRS were determined directly by 1 H NMR analyses. The 1 H NMR spectrum of the MCRS mixed with the internal standard, dimethyl fumarate, was shown in Fig. <ns0:ref type='figure' target='#fig_5'>6</ns0:ref>. The main signals were assigned to the three tanshinones according to the references, excluding the signals belonging to dimethyl fumarate <ns0:ref type='bibr'>(Mei et</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head>al., 2019; Zeng et al., 2017; Wu et al., 2015).</ns0:head><ns0:p>The composition of the MCRS was confirmed again by analyzing the signals observed in the 1 H NMR spectrum. To perform an accurate quantitative analysis, the Lorentz deconvolution function in the Bruker software TOPSPIN 2.1 was used for data processing. The lineshapes of the signals were fitted by the Lorentz method, the deconvolution was performed, and the peak areas were generated automatically. In the quantitative analysis of mixtures by NMR with this data processing, the accuracy is determined by the likelihood of the lineshape fitting. The results showed that the contents of cryptotanshinone, tanshinone I, and tanshinone IIA in the Danshen sample were 1.635 g/100 g, 0.718 g/100 g, and 1.953 g/100 g, respectively, by the MCRS method.</ns0:p><ns0:p>After the preparation of the MCRS of Danshen, the selection of the signals in the NMR spectra for the quantitative analysis of the three tanshinones and the validation of the NMR method were carried out. The NMR method achieved a high separation efficiency, as shown in Fig. <ns0:ref type='figure' target='#fig_16'>7 (A-O</ns0:ref>). The NMR spectra were highly reproducible, and the three target compounds were well resolved.</ns0:p></ns0:div>
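The deconvolution described above was performed in TOPSPIN; as a rough illustration of the underlying idea only, the following Python sketch fits a single Lorentzian lineshape to a simulated peak using scipy, which is an assumption of this example and not the software used in the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(x, area, x0, fwhm):
    """Lorentzian lineshape parameterized by integrated area, center and width."""
    hwhm = fwhm / 2.0
    return area / np.pi * hwhm / ((x - x0) ** 2 + hwhm ** 2)

# Simulated, noisy NMR peak on a ppm axis (placeholder values).
x = np.linspace(4.80, 4.97, 400)
rng = np.random.default_rng(0)
y = lorentzian(x, area=1.0, x0=4.884, fwhm=0.004) + rng.normal(0, 0.5, x.size)

# Fit the model; the recovered area plays the role of the deconvoluted peak area.
popt, _ = curve_fit(lorentzian, x, y, p0=[0.5, 4.88, 0.01])
print("fitted area, center, FWHM:", popt)
```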
<ns0:div><ns0:head>Determination of the contents of the three tanshinones in the Danshen sample by HPLC using the MCRS as the reference standard and comparison of the results with those obtained using individual reference standards.</ns0:head><ns0:p>The calibration curve of the MCRS was constructed to investigate the validity of the NMR method. The calibration curves were constructed by plotting the concentrations of the three tanshinones versus that of dimethyl fumarate. Next, the linear regression lines were calculated. The calibration graph demonstrating the linearity of the NMR response with increasing concentrations of the three tanshinones was shown in Fig. <ns0:ref type='figure' target='#fig_7'>8</ns0:ref>.</ns0:p><ns0:p>Approximately 5 mg of the MCRS was dissolved into 100 mL of acetone to construct the calibration curve of the three tanshinones, corresponding to 1.5, 0.5, and 1.7 mg of individual cryptotanshinone, tanshinone I, and tanshinone IIA, respectively. The contents of the three tanshinones in the Danshen sample were determined according to the calibration curves based on individual standards. The content of cryptotanshinone was determined at 1.635 g/100g and 1.627 g/100g, respectively, using MCRS and standard quantitation. Tanshinone I was detected at 0.718 g/100g and 0.727 g/100g, respectively, based on the two methods. Tanshinone IIA was measured at 1.953 g/100g and 1.886 g/100g, respectively. The mean contents of the three compounds obtained by the two methods were tested for no significance difference at the P ≤ 0.05 level. The results were compared with those obtained using individual reference standards. The deviations of the three tanshinones determined by the two methods were minimal (Fig. <ns0:ref type='figure' target='#fig_8'>9</ns0:ref>), and this could be attributed to the maximum likelihood lineshape fitting of their signals in the NMR spectrum of the MCRS.</ns0:p></ns0:div>
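The comparison between MCRS-based and individual-standard quantitation reported above can be reproduced in outline with the sketch below; the listed contents are taken from the text, while the paired t-test merely mirrors the stated P ≤ 0.05 criterion (the original comparison was presumably made on replicate measurements for each compound) and scipy is assumed to be available.

```python
from scipy import stats

# Contents (g/100 g) from the text: [cryptotanshinone, tanshinone I, tanshinone IIA]
mcrs_method = [1.635, 0.718, 1.953]
individual_standards = [1.627, 0.727, 1.886]

deviation = [m - s for m, s in zip(mcrs_method, individual_standards)]
t_stat, p_value = stats.ttest_rel(mcrs_method, individual_standards)
print("deviations (g/100 g):", deviation)
print(f"paired t-test: t = {t_stat:.2f}, p = {p_value:.3f}")
```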
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The multicomponent reference standard (MCRS) of Danshen was prepared successfully and used for the quantitative analysis of the samples. The MCRS containing cryptotanshinone, tanshinone I, and tanshinone IIA was obtained by collecting peaks 1, 2, and 3 together (Fig. <ns0:ref type='figure' target='#fig_14'>5, picture B</ns0:ref>). The preparation of the MCRS was found to be much easier than that of individual reference standards. The presence of some other compounds whose NMR signals do not overlap severely with those of the target components can be tolerated, and the target components need not be separated from each other in the preparative HPLC chromatogram because they are ultimately collected together to prepare the MCRS.</ns0:p><ns0:p>Apart from structure elucidation, NMR has progressed to become a useful technique for the direct quantitative analysis of complex mixtures. However, it remains difficult to quantify target components directly in the extracts of TCMs by NMR, owing to the presence of numerous components, which causes severe overlap of the NMR signals. In this study, HPLC was selected for the quantification of the three tanshinones in Danshen for its outstanding separation ability compared with NMR. Using the 'extraction' function of the HPLC-MS instrument to find the target components, and the MCRS as the reference standard, diminished the dependence on individual reference standards in the quantitative analysis of the target components in TCMs. Evidently, an increased number of components in TCM samples would not make the quantitative analysis of target components more difficult, which could be attributed to the efficient separation capability of HPLC. Components whose signals overlap with each other could be divided into different MCRSs. The number of MCRSs prepared from one sample could be adjusted according to the components requiring quantification and the prevalence of NMR signal overlap.</ns0:p><ns0:p>The results for the MCRS shown in Fig. <ns0:ref type='figure' target='#fig_6'>7</ns0:ref> indicated that the likelihoods of the lineshape fitting of the signals for the three tanshinones performed by the Lorentz deconvolution differ. <ns0:ref type='figure'>Pictures A-E, F-K, L-M, and N-O</ns0:ref> show the lineshape fitting results of the signals for cryptotanshinone, tanshinone I, tanshinone IIA, and dimethyl fumarate, respectively. The lineshapes of the following signals were fitted with relatively higher likelihood: the triplet in B at 4.884 ppm belonged to cryptotanshinone, the doublets in G and K (right side) belonged to tanshinone I, the doublet in K (left side) belonged to tanshinone IIA, and the singlet in N belonged to dimethyl fumarate.</ns0:p><ns0:p>The calibration graph demonstrating the linearity of the NMR response with increasing concentrations of the three tanshinones in Fig. <ns0:ref type='figure' target='#fig_7'>8</ns0:ref> indicated that the R 2 values calculated from the signals in C and K (left side) (Fig. <ns0:ref type='figure' target='#fig_6'>7</ns0:ref>) belonging to cryptotanshinone and tanshinone IIA were 1.0000 and 0.9999, respectively. The signals in G and K (right side) (Fig. <ns0:ref type='figure' target='#fig_6'>7</ns0:ref>) belonging to tanshinone I also exhibited a high likelihood of lineshape fitting, and the R 2 values calculated from the two signals were 0.9995 and 0.9997, respectively.
The result confirmed that the concentrations of the three tanshinones in the MCRS could be accurately determined using 1 H NMR after deconvolution. The average contents of the three tanshinones were calculated from the NMR spectra of the MCRS at five different concentrations, and the results were 28.01%, 9.43%, and 34.36%, respectively. The signals in B, K (left side and right side) (Fig. <ns0:ref type='figure' target='#fig_6'>7</ns0:ref>), and N (Fig. <ns0:ref type='figure' target='#fig_6'>7</ns0:ref>) were finally selected for the quantitative analysis of the three tanshinones, because their lineshapes were fitted with high likelihood and their calibration curves possessed high linearity.</ns0:p><ns0:p>The calibration curves of the three tanshinones were constructed using the same method as for individual reference standards. A significant advantage of using the MCRS as the reference standard was that it reduced the dependence of the quantitative analysis on individual reference standards and also reduced the consumption of reference standards.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In this study, the advantageous combination of HPLC, MS, and NMR techniques facilitated the accurate quantification of the target components in TCMs, without individual reference standards, which could be defined as Detection-confirmation-standardization-quantification (DCSQ). By combination of multiple analytical techniques, the MCRS was detected, confirmed, prepared and determinated successfully. In the study, NMR' direct quantitative analysis was utilized to the MCRS even though the resolution of HPLC was not enough in the mixture. The MCRS of danshen with three major constituents, cryptotanshinone, tanshinone I, and tanshinone IIA, was prepared and used for samples. We provided an innovative method to get reference standards. The MCRS could be a novel approach as standards for quantification about corresponding samples, which was less dependent than individual reference standards. The MCRS as the reference standard will be used to quantitative analysis and for the more industrial applications. It will be more accurate and convenient for the target analyte. It was a great advance in quantitative analysis for complex composition, especially TCMs. <ns0:ref type='figure'>C-D</ns0:ref>). The HPLC conditions were as follows: The water phase contained 0.1% formic water. The flow rate was 1.0 mL/min. The detection wavelength was set at 270 nm. The eluting condition of (A-B) was as follows: 85% acetonitrile was maintained for 15 min, and the column temperature was 35 C. The eluting condition of (C-D) was as follows: 60% acetonitrile was maintained for 0-30 min, and the concentration of acetonitrile was increased from 60% to 80% and then to 90% of acetonitrile at 40 min. The column temperature was 22 C. Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed The MCRS was analyzed under the same condition as HPLC-MS experiment (Fig. <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>, eluting gradient of (C-D) ). Manuscript to be reviewed Manuscript to be reviewed </ns0:p><ns0:note type='other'>Chemistry Journals Figure 1</ns0:note><ns0:note type='other'>Chemistry Journals Figure 2</ns0:note><ns0:note type='other'>Chemistry Journals Figure 3</ns0:note><ns0:note type='other'>Chemistry Journals Figure 5</ns0:note><ns0:note type='other'>Figure 6</ns0:note><ns0:note type='other'>Chemistry Journals Figure 7</ns0:note><ns0:note type='other'>Chemistry Journals Figure 8</ns0:note></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1 Chemical structures of cryptotanshinone (A), tanshinone I (B), tanshinone IIA (C</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2Total ion current chromatograms of the Danshen extract and extracted ion chromatograms of target protonated molecular ions (A-B) and UV spectrum of the Danshen extract (C-D). The HPLC conditions were as follows: The water phase contained 0.1% formic water. The flow rate was 1.0 mL/min. The detection wavelength was set at 270 nm. The eluting condition of (A-B) was as follows: 85% acetonitrile was maintained for 15 min, and the column temperature was 35 C. The eluting condition of (C-D) was as follows: 60% acetonitrile was maintained for 0-30 min, and the concentration of acetonitrile was increased from 60% to 80% and then to 90% of acetonitrile at 40 min. The column temperature was 22 C.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 Positive ion ESI-MS/MS spectra and the proposed MS fragmentation pathways of peaks 1 (A), 2 (B), and 3 (C).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 UV spectra of the three peaks shown in Fig. 2 (picture D). (A) UV spectrum of the peak with the retention time of 16.8 min; (B) UV spectrum of the peak with the retention time of 18.6 min; (C) UV spectrum of the peak with the retention time of 25.7 min.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure<ns0:ref type='bibr' target='#b4'>5</ns0:ref> Representative preparative HPLC chromatogram of total tanshinones (A) and analytical HPLC chromatogram of the MCRS (B). The MCRS was analyzed under the same condition as HPLC-MS experiment (Fig.2, eluting gradient of (C-D)).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 6 600</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6 600 MHz 1 H NMR spectrum of the MCRS mixed with dimethyl fumarate. The concentrations of the MCRS and dimethyl fumarate in CDCl3 were 20.50 and 2.96 mg•mL -1 , respectively.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7 Representative results of the lineshape fitting of the signals belonging to the three tanshinones and dimethyl fumarate performed by the Lorentz deconvolution. Images A-E reflect the representative lineshape fitting of signals belonging to cryptotanshinone. Images F-K (the right side) reflect the representative lineshape fitting of signals belonging to tanshinone I. The doublet in the left side of K, images L and M (the doublet in the right side) reflect the representative lineshape fitting of signals belonging to tanshinone IIA. Images N and O reflect the lineshape fitting of signals belonging to dimethyl fumarate.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 8</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8 Concentrations of the three tanshinones determined by specific signals.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 9</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9 Deviations of the contents of the three tanshinones in the Danshen sample as determined by the MCRS contrast to individual reference standards.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1 Chemical structures of cryptotanshinone (A), tanshinone I (B), tanshinone IIA (C), and dimethyl fumarate (D).</ns0:figDesc><ns0:graphic coords='15,42.52,199.12,525.00,142.50' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 Total ion current chromatograms of the Danshen extract and extracted ion chromatograms of target protonated molecular ions (A-B) and UV spectrum of the Danshen extract (C-D).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 Positive ion ESI-MS/MS spectra and the proposed MS fragmentation pathways of peaks 1 (A), 2 (B), and 3 (C).</ns0:figDesc><ns0:graphic coords='17,42.52,199.12,525.00,375.75' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 UV spectra of the three peaks</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 Representative preparative HPLC chromatogram of total tanshinones (A) and analytical HPLC chromatogram of the MCRS (B).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 6 600 MHz 1 H</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 6 600 MHz 1 H NMR spectrum of the MCRS mixed with dimethyl fumarate.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7 Representative results of the lineshape fitting of the signals belonging to the three tanshinones and dimethyl fumarate performed by the Lorentz deconvolution.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head>Figure 8</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8 Concentrations of the three tanshinones determined by specific signals.</ns0:figDesc><ns0:graphic coords='23,42.52,178.87,525.00,362.25' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head>Figure 9 Deviations</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>al., 2018; Luisa et al., 2017; Petrakis et al., 2017; Chauthe et al., 2012;</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:note place='foot'>Analytical, Inorganic, Organic, Physical, Materials Science</ns0:note>
</ns0:body>
" | "Dear editors:
We are very thankful for your comments and suggestions on the manuscript “Detection-Confirmation-Standardization-Quantification: A novel method for the quantitative analysis of active components in traditional Chinese medicines”. After considering the editor's and reviewer's comments, the manuscript was carefully revised point by point:
Comments: It is important to revise the language. The English needs to be improved due to the ambiguous expression, improper expression and grammar mistakes. Please revise the whole manuscript carefully and try to avoid any grammar or syntax errors before submission.
Answer: Thanks for your comments. I have revised the whole manuscript carefully and made corresponding changes. The revisions include wording, grammar, and so on, and can be seen in the “Tracked changes manuscript”, for example at lines 23, 30 and 206.
Looking forward to hearing from you, I remain,
Yours Sincerely,
Associate Professor Dr. Sha Chen
Institute of Chinese Materia Medica,
China Academy of Chinese Medical Sciences.
" | Here is a paper. Please give your review comments after reading it. |
669 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Produced water is the largest by-product of oil and gas production. At offshore installations, the produced water is typically reinjected or discharged into the sea. The water contains a complex mixture of dispersed and dissolved oil, solids and inorganic ions.</ns0:p><ns0:p>A better understanding of its composition is fundamental to 1) improve environmental impact assessment tools and 2) develop more efficient water treatment technologies. The objective of the study was to screen produced water sampled from a producing field in the Danish region of the North Sea to identify the organic compounds it contains. The samples were taken at a test separator and represent an unfiltered picture of the composition before cleaning procedures. The analytes were isolated by liquid-liquid extraction and derivatized using a silylation reagent to increase the volatility of oxygenated compounds.</ns0:p><ns0:p>The final extracts were analyzed by comprehensive multi-dimensional gas chromatography coupled to a high-resolution mass spectrometer. A non-target processing workflow was implemented to extract features and quantify the confidence of library matches by correlation to retention indices and the presence of molecular ions. Approximately 120 unique compounds were identified across nine samples. Of those, 15 were present in all samples. The main types of compounds are aliphatic and aromatic carboxylic acids with a small fraction of hydrocarbons. The findings have implications for developing improved environmental impact assessment tools and water remediation technologies.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Produced water is by volume the largest byproduct in oil and gas production. For reservoirs on the Danish continental shelf, where production is often supported by water flooding, the volume of produced water typically exceeds the volume of oil (The Danish Energy Agency, 2014). This water contains a complex mixture of inorganic and organic compounds. Implementation of offshore water management strategies is challenging due to the large volumes and practical limitations on infrastructure. Current procedures consist of cleaning of the produced water, followed by discharge into the sea or reinjection in producing or disposal wells <ns0:ref type='bibr' target='#b24'>(Lyngbaek & Blidegn, 1991;</ns0:ref><ns0:ref type='bibr' target='#b35'>Røe & Johnsen, 1996;</ns0:ref><ns0:ref type='bibr' target='#b36'>Røe Utvik, 1999;</ns0:ref><ns0:ref type='bibr' target='#b11'>Durell et al., 2004)</ns0:ref>. For discharge, the OSPAR Convention, which entered into force in 1998, sets a limit of 30 mg of dispersed oil per liter of water as an annual average <ns0:ref type='bibr'>(OSPAR Commission, 2001)</ns0:ref>. There is currently no limit on the dissolved compounds. Furthermore, detailed knowledge on their structure and abundance is lacking. In simplified terms, produced water can be seen as the product of aqueous extraction of crude oil by mixing within the reservoir and during production. In reality, the process is complex and the composition is highly dependent on reservoir properties, field history and water injection strategies <ns0:ref type='bibr' target='#b8'>(Bergfors, Schovsbo & Feilberg, 2020)</ns0:ref>. For example, studies have shown that salinity influences the organic content, likely caused by a salting-out effect <ns0:ref type='bibr' target='#b6'>(Barth, Borgund & Riis, 1990;</ns0:ref><ns0:ref type='bibr' target='#b3'>Barth, 1991;</ns0:ref><ns0:ref type='bibr' target='#b7'>Barth & Riis, 1992;</ns0:ref><ns0:ref type='bibr' target='#b12'>Endo, Pfennigsdorff & Goss, 2012;</ns0:ref><ns0:ref type='bibr' target='#b9'>Dudek et al., 2020)</ns0:ref>. At the producing platform, a gravitational separation of oil, water and gas is carried out. The resulting water phase contains dissolved and dispersed oil droplets. The latter are largely removed by physical methods, i.e. separators, hydrocyclones and gas flotation tanks/degassers implemented in series on the platform, typically resulting in levels below 10 mg/L at the discharge point <ns0:ref type='bibr' target='#b26'>(Meldrum, 1988;</ns0:ref><ns0:ref type='bibr' target='#b39'>Saththasivam, Loganathan & Sarp, 2016;</ns0:ref><ns0:ref type='bibr' target='#b10'>Durdevic & Yang, 2018)</ns0:ref>. In contrast, the removal of dissolved organics requires chemical treatment, e.g. degradation via advanced oxidation processes (AOPs) <ns0:ref type='bibr' target='#b15'>(Jiménez et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b21'>Lin et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b22'>Liu et al., 2021)</ns0:ref>. This is challenging to implement on offshore installations due to safety constraints and the requirement for low residence times. Furthermore, the current generation of AOPs is mainly based on Fenton's reagent, which produces toxic chlorinated byproducts when applied to saline water <ns0:ref type='bibr' target='#b16'>(Kiwi, Lopez & Nadtochenko, 2000;</ns0:ref><ns0:ref type='bibr' target='#b18'>De Laat & Le, 2006;</ns0:ref><ns0:ref type='bibr' target='#b42'>Sirtori et al., 2012)</ns0:ref>.
The goal of produced water management is not necessarily zero discharge but zero harmful discharge. Improved knowledge of its composition is thus beneficial not only for developing environmental impact assessment tools. The information may also be used to develop improved and targeted treatment methods, i.e. the specific removal of harmful components rather than the total organic content. This has the potential to increase efficiency and reduce the cost of water management strategies.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Chemicals and reagents</ns0:head><ns0:p>Benzoic acid, phenol, cyclohexanecarboxylic acid, octanoic acid, dichloromethane (LiChroSolv, Merck), n-hexane (SupraSolv for gas chromatography MS, Merck), magnesium sulfate (ReagentPlus, Redi-Dri, Sigma Aldrich), and N,O-bis(trimethylsilyl)trifluoroacetamide containing 1% of trimethylchlorosilane (BSTFA+TMCS, Supelco) were used as received. Deuterated internal standards (naphthalene-d 8 , acenaphthene-d 10 , phenanthrene-d 10 , chrysene-d 12 , Supelco analytical standards) were used to monitor retention time shifts. 1 D retention index calibration was performed using a mixture of linear C7 to C30 saturated alkanes (Supelco, TraceCERT, Sigma Aldrich). High-purity water was obtained from a Milli-Q Advantage A10 unit. All other chemicals and reagents were used as received.</ns0:p></ns0:div>
<ns0:div><ns0:head>Sampling and sample preparation</ns0:head><ns0:p>Produced water samples were donated by Maersk Oil & Gas (now Total E&P). The samples were acquired from a producing field (water-injected) located in the Danish region of the North Sea. The sampling campaign took place between June 2018 and February 2019. The samples were taken at irregular intervals with no replicates. Water samples were collected at the test separator on the production platform following protocols established by the operator. The test separator is a simple gravity-based three-phase separator where the water settles below the oil and can be sampled. No cleaning or further processing of the samples was carried out at the platforms. The samples were received in plastic bottles (1 L), aliquoted (500 mL), and immediately treated with dilute hydrochloric acid (18%, 1 mL per 100 mL sample) for a final pH < 2. Nine samples from the received batch were selected for analysis. The samples were selected due to the absence of dispersed oil droplets as evaluated by visual inspection (using microscope). The samples were stored in the dark at 4 °C until extraction and analysis. Three aliquots (50 mL) of each produced water sample were extracted using separate glassware. The aliquots were filtered through a 0.45 µm PTFE-filter to remove solids and particles. Each 50 mL aliquot was extracted with DCM (50 mL). The organic phase was washed with saturated aqueous sodium chloride (50 mL) and carefully removed in vacuo. The residue was reconstituted in n-hexane (1.5 mL) and dried over MgSO 4 . An aliquot (1000 µL) of the sample was transferred to a 2 mL vial, combined with deuterated internal standards (for monitoring of retention time stability), combined with BSTFA+TMCS (50 µL) and incubated at 70 °C for 30 minutes, whereafter, it was allowed to return to room temperature. The derivatized sample was further diluted tenfold with n-hexane and immediately analyzed on the GC×GC-MS.</ns0:p></ns0:div>
<ns0:div><ns0:head>Extraction recovery and reproducibility</ns0:head><ns0:p>A model produced water containing five representative model compounds (benzoic acid, phenol, 2-naphthoic acid, cyclohexanecarboxylic acid, and octanoic acid, each at 5 ppm, total organics 25 ppm) in synthetic formation water (see supplemental information) was prepared to establish variability in the sample preparation protocol and instrumental analysis. The concentration of total organics was chosen to emulate typical levels encountered at production platforms. The model water was extracted four times in two batches following an identical procedure as for the produced water samples. Three procedural blanks were prepared to establish background levels and experimental sources of contamination. GC×GC-MS Analyses GC×GC-MS data were acquired using an Agilent 7890B GC coupled to a 7200B QTOF high-resolution mass spectrometer (Agilent Technologies, Palo Alto, CA, USA). The system was equipped with a Zoex ZX-2 thermal modulator (Zoex Corporation, Houston, TX, USA). The separation was achieved using a combination of an Agilent DB-5MS UI ( 1 D, 30 m, 0.25 mm i.d., 0.25 µm d f ) and a Restek Rxi-17Sil MS ( 2 D, 2 m, 0.18 mm i.d., 0.18 µm d f ) capillary columns connected using a SilTite µ-union. The oven was temperature programmed as follows; 1 min hold-time at 50 °C, ramp to 320 °C, 3 °C min -1 in constant flow mode (1 mL/min). The modulation period was set to 3 s with a 400 ms hot-jet duration. The MS transfer line was held at 280 °C. The MS acquired spectra in electron ionization mode (70 eV) with a mass range of 45 -500 and an acquisition speed of 50 Hz. The instrument was operated in its 2 GHz sampling rate mode to increase the dynamic range. Automatic mass calibration was performed for every 5 th sample (approximately 7.5 hours).</ns0:p></ns0:div>
<ns0:div><ns0:head>Data processing</ns0:head><ns0:p>Baseline correction <ns0:ref type='bibr' target='#b34'>(Reichenbach et al., 2003)</ns0:ref>, peak detection and library search were performed using GC Image v2.8.3 (Zoex, Houston, TX). Mass spectra were matched against the NIST Mass Spectral Library (National Institute of Standards and Technology, 2017 edition) with a minimum match factor of 700. All compound tables were exported as comma-separated texts for external processing. A data processing workflow was implemented in Python (Python Software Foundation. Python Language Reference, version 3.7.4. Available at http://www.python.org). The script is available as supplementary information deposited in a Zenodo repository (doi:// 10.5281/zenodo.4009045).</ns0:p></ns0:div>
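The actual workflow is available in the deposited Zenodo script; the fragment below is only a simplified sketch of the first steps (reading the exported compound tables and filtering low-scoring library hits), where the file pattern and column names such as 'Library Match Factor' and 'Compound Name' are assumptions for illustration and do not necessarily match the deposited code.

```python
import glob
import pandas as pd

# Read the comma-separated compound tables exported from GC Image.
# The file pattern and column names below are assumptions for illustration only.
frames = []
for i, path in enumerate(sorted(glob.glob("exported_tables/*.csv"))):
    frame = pd.read_csv(path)
    frame["replicate"] = i          # tag each table with its extraction replicate
    frames.append(frame)
peaks = pd.concat(frames, ignore_index=True)

# Remove features whose best NIST library match factor is below 700.
filtered = peaks[peaks["Library Match Factor"] >= 700]

# Keep only compounds detected in all three extraction replicates.
counts = filtered.groupby("Compound Name")["replicate"].nunique()
consistent = filtered[filtered["Compound Name"].isin(counts[counts == 3].index)]
print(consistent["Compound Name"].drop_duplicates().head())
```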
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Sample preparation and analyses</ns0:head><ns0:p>PeerJ An. Chem. reviewing PDF | (ACHEM- <ns0:ref type='table'>2020:08:52271:1:2:NEW 9 Mar 2021)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Previous studies on dissolved organics in oilfield produced water have employed LLE or solid-phase extraction (SPE) <ns0:ref type='bibr' target='#b48'>(Thomas et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b17'>Kovalchik et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b2'>Barros et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b37'>Samanipour et al., 2019)</ns0:ref>. For produced water, LLE using DCM has been shown to have a similar recovery as SPE <ns0:ref type='bibr' target='#b37'>(Samanipour et al., 2019)</ns0:ref>. The main difference was observed in the size distribution, where larger species tend to have a higher recovery using SPE due to their low solubility in DCM. LLE is non-discriminative in comparison to SPE which is used to fractionate compound classes based on adsorption characteristics. Thus, it allowed us to extract the broad range of organics present in produced water. Furthermore, the Norwegian Oil and Gas specialist network recommends LLE for the quantification of phenols in produced water (Norwegian Oil and Gas, 2012). Thus, it would allow us to 'see' what is missing using routine targeted analyses. Approximately 30 samples were received as part of the campaign. The water samples had varying levels of oil, likely due to the sampling and status of the test separator. Samples that contained a clear separate layer of oil, including smaller amounts, were excluded from the study to minimize the risk of contamination (Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>). Optical inspection of these samples showed that they contained large amounts of dispersed oil droplets, even when sampling below the oil layer (Figure <ns0:ref type='figure'>2</ns0:ref>). After exclusion, 9 samples were chosen to be included in the study with extraction and characterization. All samples were filtered through a 0.45 µm PTFE-filter to remove insoluble material and particles. A small aliquot (50 mL) of each sample was extracted in triplicate using an equivalent volume of DCM. The dissolved components of produced water were largely expected to be oxygenated organics, i.e. alcohols (mainly phenols) and acids. To increase the volatility of the aforementioned compounds, the samples were silylated. After removal of the solvent invacuo, each sample was reconstituted in 1.5 mL of n-hexane. A 1 mL aliquot of the reconstituted extract was treated with BSTFA-TMCS and incubated at 70 °C for 30 minutes. After derivatization, the samples were further diluted ten-fold, meaning that the samples ultimately were concentrated approximately three-fold (from 50 mL to 15 mL). The final concentration factor was chosen as a compromise between the detection of trace compounds and avoiding column and detector overload. Due to the large concentration range of our analytes, it would be beneficial to run each sample at multiple dilution factors. However, due to the long run time (90 minutes) and the number of extraction replicates (3) this was not feasible due to time constraints. It is important to remember that this strategy has two effects; 1) high concentration compounds may overload the detector with spectral skewing and poor library match as result, and 2) trace-compounds may be diluted to a level below the limit of detection. Three procedural blanks were extracted and analyzed to determine background levels and potential sources of contamination. 
Minor amounts of fatty acids were detected in addition to a series of polysiloxanes (discrete peaks, not common column bleed). The source of the latter could not be identified but as it lacked retention in the 2 D it did not interfere with our analytes of interest. To establish recovery and repeatability, a model produced water was prepared by spiking four organic acids and phenol into synthetic formation water (see Experimental section and supplemental data). All model compounds were detected in one or more produced water samples. The model water was extracted in four replicates. The recovery values were calculated based on peak volumes in comparison to those obtained by analyzing pure stock solutions (Table <ns0:ref type='table'>1</ns0:ref>). The recovery varied from 36% for benzoic acid up to 90% for phenol. Considering the multi-step sample extraction, including a derivatization reaction, this was deemed acceptable. The relative standard deviation of the peak volumes (measured after baseline correction using the GC Image package) of model compounds as detected in a representative produced water sample varied from 2% to 23% (calculated from three extraction replicates). Chromatography Crude oil is an ultracomplex mixture of saturated and aromatic hydrocarbons with a smaller fraction of N,S,O-containing compounds <ns0:ref type='bibr' target='#b25'>(Marshall & Rodgers, 2004;</ns0:ref><ns0:ref type='bibr' target='#b14'>Hsu et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b31'>Palacio Lozano et al., 2020)</ns0:ref>. This complexity will be reduced but reflected in the produced water. Based on previous studies of Danish oils we know that the dominant oxygenated species belong to the O1 and O2 classes with a large diversity in aromaticity <ns0:ref type='bibr' target='#b45'>(Sundberg & Feilberg, 2020)</ns0:ref>. Thus, the compositional variation in the produced water was assumed to be dominated by carbon number and level of saturation. Ultimately, the boiling point range was assumed to be larger than the variation in saturated versus aromatic structures. Therefore, we choose to use a nonpolar column in the 1 D (providing the highest separation power) with a shorter medium polarity column in the 2 D. Conventional polar columns are based on polyethyleneglycol (PEG) chemistry and are incompatible with silylation reagents. Therefore, we choose to use a 50%-phenyl-type column where aromaticity is the largest factor affecting retention. By using this column combination, the retention in both dimensions will increase with aromaticity. For example, cyclohexane acetic acid has a retention of 17.2 min/0.84 s in 1 D/ 2 D as compared to 24.6 min/1.33 s for benzeneacetic acid (measured as the corresponding trimethylsilyl esters). In contrast, alkylation of a core aromatic or saturated structure will only affect retention in the 1 D. For example, phenol and its alkylated homologs (methyl, ethyl and propyl) have 1 D retention times of 13.4, 16.9, 19.9 and 23.3 minutes, respectively, where the retention in 2 D is within 0.89 to 0.95 seconds. By investigation of a typical chromatogram, two things become obvious; 1) the desired separation of saturates, mono-and diaromatics is achieved and 2) a large portion of the 2 D space is relatively uncopied due to the small number of polycyclic species where the majority of aromatics are benzene derivatives. A representative chromatogram with selected analytes marked is presented in Figure <ns0:ref type='figure'>3</ns0:ref>. 
A 3 °C min -1 temperature gradient was found to be the optimal compromise between peak width, resolution and run time. At this rate, the typical peak width was 10 s in the first dimension. Thus, using a modulation period of 3 s allowed us to obtain the recommended minimum of three modulations per peak <ns0:ref type='bibr' target='#b27'>(Murphy, Schure & Foley, 1998)</ns0:ref>. Due to instrumental complexity and long run times, retention time shifts are commonly observed both in inter-sample and inter-batch runs. Deuterated PAH standards were used to monitor retention times over time. The 1 D/ 2 D retention time variability is presented in Table <ns0:ref type='table'>1</ns0:ref>. Both 1 D and 2 D were shown to be stable over the whole analysis run, covering 27 injections (9 samples with three extraction replicates) and 7 days.</ns0:p></ns0:div>
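The retention indices used in the identification scoring below are derived from the C7 to C30 alkane calibration series; a minimal sketch of the van den Dool and Kratz (linear, temperature-programmed) retention index calculation is given here, with placeholder retention times that are not measured values from this study.

```python
def linear_retention_index(rt, alkane_rts):
    """van den Dool-Kratz retention index for temperature-programmed GC.
    alkane_rts: dict mapping carbon number -> 1D retention time (min)."""
    carbons = sorted(alkane_rts)
    for n, n_next in zip(carbons, carbons[1:]):
        t_n, t_next = alkane_rts[n], alkane_rts[n_next]
        if t_n <= rt <= t_next:
            return 100 * (n + (rt - t_n) / (t_next - t_n))
    raise ValueError("retention time outside the calibrated alkane range")

# Placeholder alkane retention times (min) for part of the series.
alkanes = {10: 12.1, 11: 15.3, 12: 18.4, 13: 21.3}
print(linear_retention_index(16.9, alkanes))
```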
<ns0:div><ns0:head>Non-target screening and compound identification</ns0:head><ns0:p>Approximately 1500 compounds were detected in each sample. A data processing workflow was implemented to sort, organize and score the data. A schematic representation is presented in Figure <ns0:ref type='figure'>4</ns0:ref>. The associated files are available as supplementary information. The detected features were matched against the NIST EI Mass Spectral Library (2017). All compounds with a match factor below 700 were removed. To increase the annotation confidence, two additional factors were included: retention index (Kovats) and the presence of the molecular ion. To quantify the identification confidence, the following scoring rules were implemented:</ns0:p><ns0:p>1. A match factor above 800 gives a score of 10 points. 2. A match factor between 700 and 800 gives a score of 5 points. 3. A retention index match within 50 units gives a score of 5 points. 4. The detection of the molecular ion within 20 ppm mass accuracy gives a score of 5 points. A total score above 10 (i.e. a match factor of at least 700 and either a retention index or a molecular ion match) was classified as a probable match, whereas a total score below 10 was classified as a tentatively identified structure <ns0:ref type='bibr' target='#b40'>(Schymanski et al., 2014)</ns0:ref>. To reduce experimental errors, only features that were present in all three replicates were included in the final table of compounds. Furthermore, all duplicate compounds were removed. This is a crude step that inherently removes, for example, isomeric species which would not be differentiated using automatic library search. However, for our purpose the identification of isomers is not an inherent goal. The aim of the study was to identify the presence of broad compound-class types and dominant species. The correct identification of isomeric species or species with highly similar fragmentation patterns requires manual intervention and is beyond the scope of this study. The workflow reduced the number of features from approximately 1) 1500 detected, to 2) 200 library hits with a match factor > 700, to 3) 50-70 when duplicates based on name were removed. Ultimately, the merging of inter-sample features resulted in 120 unique hits across all samples. Of those, 42 had the maximum score of 20 (i.e. match factor > 800, molecular ion detected and retention index within 50 units). Nearly all (87%) identified compounds are oxygen-containing, with amines, sulfides and hydrocarbons being the remaining constituents. Only 15 compounds (after removal of internal standards and background species) were present in all samples (Table <ns0:ref type='table'>2</ns0:ref>).</ns0:p></ns0:div>
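A minimal re-implementation of the four scoring rules described above is sketched in Python below; function and argument names are illustrative and do not correspond to the deposited script, and the example values are hypothetical.

```python
def identification_score(match_factor, ri_measured=None, ri_library=None,
                         mz_measured=None, mz_theoretical=None):
    """Score a library hit according to the four rules described in the text."""
    score = 0
    if match_factor > 800:
        score += 10
    elif match_factor >= 700:
        score += 5
    if ri_measured is not None and ri_library is not None:
        if abs(ri_measured - ri_library) <= 50:
            score += 5
    if mz_measured is not None and mz_theoretical is not None:
        ppm_error = abs(mz_measured - mz_theoretical) / mz_theoretical * 1e6
        if ppm_error <= 20:
            score += 5
    return score

# Example: a hit with match factor 845, RI within 12 units and the molecular ion found.
score = identification_score(845, ri_measured=1262, ri_library=1250,
                             mz_measured=194.0940, mz_theoretical=194.0943)
label = "probable match" if score > 10 else "tentatively identified"
print(score, label)
```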
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>A few papers have previously described non-target screening of produced water, primarily from the Norwegian continental shelf. Sørensen et. al. described the characterization of unpurified DCM extracts with comparisons to the polar and non-polar fractions as isolated by SPE <ns0:ref type='bibr' target='#b43'>(Sørensen et al., 2019)</ns0:ref>. In their study, a high concentration of hydrocarbons including naphthalenes and linear alkanes was detected in the produced water. This is in contrast to our study, where only a few hydrocarbons, in relative trace amounts, were observed as measured by extracted ion chromatograms of known species. The aqueous solubility of saturated hydrocarbons is low. However, BTEX-type (benzene, toluene and xylene) compounds have relatively high solubility but were still not identified. Furthermore, hydrocarbons could be present in the water phase as dispersed oil droplets. We observed that the presence of such droplets in a water sample is heavily dependent on the oil-water ratio and that optical inspection using a microscope was required for their detection. Oil droplets were not observed in the samples included in our study which we believe explain why only small amounts of hydrocarbons were detected. A second explaination could be losses during sample transport and storage, either via volatilization/diffusion through the plastic container or microbial degradation. However, this cannot be validated without further sampling with more control of the process. Only a small fraction of the detected compounds were identified with an acceptable confidence level. 36 unique compounds (across all samples) received the maximum identification score, i.e. a match factor > 800, retention index match and detection of the molecular ion. The match factor had the most severe impact on feature reduction. Lowering the match factor limit to >600 increased the number of tentatively identified features by approximately 40% (compared to match factor > 700). A manual evaluation showed that although several hits were chemically reasonable based on structure and retention index, the lower limit also led to multiple apparent false positives. Added confidence to questionable identities could be obtained by a corroborative study using soft ionization techniques, i.e. chemical ionization or low voltage electron ionization, where the molecular ion is better observed for non-aromatic species. Looking at the obtained data, two conclusions were made; 1) samples were dominated by oxygenated organics, and 2) sample-to-sample variation was large, both in terms of composition and relative abundance. Oxygenated hydrocarbons form during diagenesis but may also be the result of microbial and or chemical processes during oil production <ns0:ref type='bibr' target='#b0'>(Aitken, Jones & Larter, 2004;</ns0:ref><ns0:ref type='bibr' target='#b13'>Head, Gray & Larter, 2014;</ns0:ref><ns0:ref type='bibr' target='#b32'>Pannekens et al., 2019)</ns0:ref>. The oxygenation leads to high partitioning into the aqueous phase during oil-water separation, and these compounds will likely require attention when developing successful water management technologies. Approximately 50% of the compounds were aromatic, primarily benzene-derivatives with few naphthalenes present. The molecular structure of six representative compounds are presented in Figure <ns0:ref type='figure'>5</ns0:ref>. Some of the identified compounds are suspected residual production chemicals, e.g. 
hexadecanol, which does not occur naturally in crude oil. Even when comparing two samples taken from the same well and only three days apart, the difference was substantial. As we lack more detailed information on the sampling step, it is difficult to conclude where these differences stem from. The sample-to-sample variation can be an effect of sampling and of the oil-to-water ratio in the test separator. The level of dispersed and/or layered oil in the samples will likely influence the aggregation and solubility of organics in the aqueous phase. A more controlled sampling campaign is required to identify the source of variability.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The composition of produced water is highly complex with several unknowns. Our study aimed to narrow this gap by a broad identification of dissolved organics. The implemented identification workflow excluded approximately 95% of the detected compounds (1000 -1500 per sample), resulting in 50 -80 tentatively identified compounds per sample. This demonstrates both the power and the pitfalls of non-target screening: more than 100 compounds were identified with an acceptable level of confidence, while more than 1000 compounds remain unknown. To our knowledge, this is the most comprehensive list of identified compounds in produced water published to date. However, being a screening study, quantification was not carried out for any compounds. As this is an important factor for environmental assessment, the obtained compound lists should be used to develop targeted methods to determine absolute concentrations. Furthermore, it would be beneficial to reduce the number of unknowns by using complementary techniques (e.g. HPLC-MS and other soft ionization methods), improved custom libraries and in-silico mass spectral prediction.</ns0:p><ns0:p>When performing environmental impact assessments, both structure and concentration have to be taken into consideration <ns0:ref type='bibr' target='#b46'>(Tang et al., 2019)</ns0:ref>. A large proportion of the detected compounds are present at trace levels. Their concentrations will be further reduced upon discharge to sea, where rapid dilution into a large body of water occurs. However, potential cocktail effects, where the combined effect of a series of micro-pollutants is harmful even though the individual species are not, should be accounted for <ns0:ref type='bibr' target='#b33'>(Di Poi et al., 2018)</ns0:ref>. Here, it would be highly beneficial to link non-target screening studies with toxicological measurements. Ultimately, we hope that our study contributes a small piece of the puzzle and a stepping stone towards further studies to uncover the full picture.</ns0:p><ns0:note type='other'>Figure 4</ns0:note><ns0:p>Schematic of the data processing workflow.</ns0:p><ns0:p>Graphic description of how the data processing workflow scores tentatively identified features for added confidence. The scoring was implemented in a Python script and is available as supplementary data.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 2 (on next page)</ns0:head><ns0:p>A list of compounds that were detected in a minimum of 6 of 9 samples.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_0'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Five representative compounds have been annotated in the chromatogram. The annotation corresponds to: a) Cyclopentylcarboxylic acid, TMS derivative b) Octanoic acid, TMS derivative c) 1-Methylnaphthalene d) Benzoic acid, 3-methyl-, trimethylsilyl ester e) 2-Ethyl-1-decanol, TMS derivative</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>*</ns0:head><ns0:label /><ns0:figDesc>The MS-ready name corresponds to the non-derivatized parent compound. Calculated XlogP values were obtained from the PubChem database. #Samples correspond to the number of samples in which the compound was detected.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>Analytical, Inorganic, Organic, Physical, Materials Science </ns0:figDesc><ns0:table><ns0:row><ns0:cell>Compound</ns0:cell><ns0:cell>MS-ready name</ns0:cell><ns0:cell>Formula</ns0:cell><ns0:cell>Mol. Weight</ns0:cell><ns0:cell>XlogP</ns0:cell><ns0:cell>SMILES</ns0:cell><ns0:cell>Classification</ns0:cell><ns0:cell>#Sample s</ns0:cell></ns0:row><ns0:row><ns0:cell>(±)-2-Phenylpropanoic Acid, trimethylsilyl</ns0:cell><ns0:cell>2-phenylpropanoic acid</ns0:cell><ns0:cell>C9H10O2</ns0:cell><ns0:cell>150.17</ns0:cell><ns0:cell>1.9</ns0:cell><ns0:cell>CC(C1=CC=CC=C1)C(=O)O</ns0:cell><ns0:cell>Aromatic acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>ester</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>1-Naphthoic acid, TMS derivative</ns0:cell><ns0:cell>naphthalene-1-carboxylic acid</ns0:cell><ns0:cell>C11H8O2</ns0:cell><ns0:cell>172.18</ns0:cell><ns0:cell>3.1</ns0:cell><ns0:cell>C1=CC=C2C(=C1)C=CC=C2C(=O)O</ns0:cell><ns0:cell>Aromatic acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>3,4-Dimethylbenzoic acid, TMS derivative</ns0:cell><ns0:cell>3,4-dimethylbenzoic acid</ns0:cell><ns0:cell>C9H10O2</ns0:cell><ns0:cell>150.17</ns0:cell><ns0:cell>2.7</ns0:cell><ns0:cell>CC1=C(C=C(C=C1)C(=O)O)C</ns0:cell><ns0:cell>Aromatic acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Trimethylsilyl 2,3-dimethylbenzoate</ns0:cell><ns0:cell>2,3-dimethylbenzoic acid</ns0:cell><ns0:cell>C9H10O2</ns0:cell><ns0:cell>150.17</ns0:cell><ns0:cell>2.8</ns0:cell><ns0:cell>CC1=C(C(=CC=C1)C(=O)O)C</ns0:cell><ns0:cell>Aromatic acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Trimethylsilyl 4-propylbenzoate</ns0:cell><ns0:cell>4-propylbenzoic acid</ns0:cell><ns0:cell>C10H12O2</ns0:cell><ns0:cell>164.2</ns0:cell><ns0:cell>3.4</ns0:cell><ns0:cell>CCCC1=CC=C(C=C1)C(=O)O</ns0:cell><ns0:cell>Aromatic acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>4-tert-Butylphenol, TMS derivative</ns0:cell><ns0:cell>4-tert-butylphenol</ns0:cell><ns0:cell>C10H14O</ns0:cell><ns0:cell>150.22</ns0:cell><ns0:cell>3.3</ns0:cell><ns0:cell>CC(C)(C)C1=CC=C(C=C1)O</ns0:cell><ns0:cell>Aromatic alcohol</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>3-Methyl-1-cyclohexanecarboxylic acid,</ns0:cell><ns0:cell>3-methylcyclohexane-1-carboxylic</ns0:cell><ns0:cell>C8H14O2</ns0:cell><ns0:cell>142.2</ns0:cell><ns0:cell>2.1</ns0:cell><ns0:cell>CC1CCCC(C1)C(=O)O</ns0:cell><ns0:cell>Saturated acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>trimethylsilyl ester (stereoisomer 2)</ns0:cell><ns0:cell>acid</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>3-Methylbutanoic acid, TMS derivative</ns0:cell><ns0:cell>3-methylbutanoic acid</ns0:cell><ns0:cell>C5H10O2</ns0:cell><ns0:cell>102.13</ns0:cell><ns0:cell>1.2</ns0:cell><ns0:cell>CC(C)CC(=O)O</ns0:cell><ns0:cell>Saturated acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Cyclohexaneacetic acid, TMS derivative</ns0:cell><ns0:cell>2-cyclohexylacetic acid</ns0:cell><ns0:cell>C8H14O2</ns0:cell><ns0:cell>142.2</ns0:cell><ns0:cell>2.5</ns0:cell><ns0:cell>C1CCC(CC1)CC(=O)O</ns0:cell><ns0:cell>Saturated acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Cyclohexanecarboxylic acid, TMS derivative</ns0:cell><ns0:cell>cyclohexanecarboxylic 
acid</ns0:cell><ns0:cell>C7H12O2</ns0:cell><ns0:cell>128.17</ns0:cell><ns0:cell>1.9</ns0:cell><ns0:cell>C1CCC(CC1)C(=O)O</ns0:cell><ns0:cell>Saturated acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Cyclopentylcarboxylic acid, TMS derivative</ns0:cell><ns0:cell>cyclopentanecarboxylic acid</ns0:cell><ns0:cell>C6H10O2</ns0:cell><ns0:cell>114.14</ns0:cell><ns0:cell>1.3</ns0:cell><ns0:cell>C1CCC(C1)C(=O)O</ns0:cell><ns0:cell>Saturated acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Heptanoic acid, TMS derivative</ns0:cell><ns0:cell>heptanoic acid</ns0:cell><ns0:cell>C7H14O2</ns0:cell><ns0:cell>130.18</ns0:cell><ns0:cell>2.5</ns0:cell><ns0:cell>CCCCCCC(=O)O</ns0:cell><ns0:cell>Saturated acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Nonanoic acid, TMS derivative</ns0:cell><ns0:cell>nonanoic acid</ns0:cell><ns0:cell>C9H18O2</ns0:cell><ns0:cell>158.24</ns0:cell><ns0:cell>3.5</ns0:cell><ns0:cell>CCCCCCCCC(=O)O</ns0:cell><ns0:cell>Saturated acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Octanoic acid, TMS derivative</ns0:cell><ns0:cell>octanoic acid</ns0:cell><ns0:cell>C8H16O2</ns0:cell><ns0:cell>144.21</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>CCCCCCCC(=O)O</ns0:cell><ns0:cell>Saturated acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>1-Hexadecanol, TMS derivative</ns0:cell><ns0:cell>hexadecan-1-ol</ns0:cell><ns0:cell>C16H34O</ns0:cell><ns0:cell>242.44</ns0:cell><ns0:cell>7.3</ns0:cell><ns0:cell>CCCCCCCCCCCCCCCCO</ns0:cell><ns0:cell>Saturated alcohol</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Benzenepropanoic acid, TMS derivative</ns0:cell><ns0:cell>3-phenylpropanoic acid</ns0:cell><ns0:cell>C9H10O2</ns0:cell><ns0:cell>150.17</ns0:cell><ns0:cell>1.8</ns0:cell><ns0:cell>C1=CC=C(C=C1)CCC(=O)O</ns0:cell><ns0:cell>Aromatic acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>m-Toluic acid, TMS derivative</ns0:cell><ns0:cell>3-methylbenzoic acid</ns0:cell><ns0:cell>C8H8O2</ns0:cell><ns0:cell>136.15</ns0:cell><ns0:cell>2.4</ns0:cell><ns0:cell>CC1=CC(=CC=C1)C(=O)O</ns0:cell><ns0:cell>Aromatic acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>2,4-Di-tert-butylphenol</ns0:cell><ns0:cell>2,4-ditert-butylphenol</ns0:cell><ns0:cell>C14H22O</ns0:cell><ns0:cell>206.32</ns0:cell><ns0:cell>4.9</ns0:cell><ns0:cell>CC(C)(C)C1=CC(=C(C=C1)O)C(C)(C)C</ns0:cell><ns0:cell>Aromatic alcohol</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>3-Ethylphenol, TMS derivative</ns0:cell><ns0:cell>3-ethylphenol</ns0:cell><ns0:cell>C8H10O</ns0:cell><ns0:cell>122.16</ns0:cell><ns0:cell>2.4</ns0:cell><ns0:cell>CCC1=CC(=CC=C1)O</ns0:cell><ns0:cell>Aromatic alcohol</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>4-Isopropylphenol, TMS derivative</ns0:cell><ns0:cell>4-propan-2-ylphenol</ns0:cell><ns0:cell>C9H12O</ns0:cell><ns0:cell>136.19</ns0:cell><ns0:cell>2.9</ns0:cell><ns0:cell>CC(C)C1=CC=C(C=C1)O</ns0:cell><ns0:cell>Aromatic alcohol</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>o-Cresol, TMS derivative</ns0:cell><ns0:cell>2-methylphenol</ns0:cell><ns0:cell>C7H8O</ns0:cell><ns0:cell>108.14</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>CC1=CC=CC=C1O</ns0:cell><ns0:cell>Aromatic alcohol</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>2-Methylbutanoic acid, TMS derivative</ns0:cell><ns0:cell>2-methylbutanoic acid</ns0:cell><ns0:cell>C5H10O2</ns0:cell><ns0:cell>102.13</ns0:cell><ns0:cell>1.2</ns0:cell><ns0:cell>CCC(C)C(=O)O</ns0:cell><ns0:cell>Saturated acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>3-Methylvaleric acid, TMS</ns0:cell><ns0:cell>3-methylpentanoic 
acid</ns0:cell><ns0:cell>C6H12O2</ns0:cell><ns0:cell>116.16</ns0:cell><ns0:cell>1.6</ns0:cell><ns0:cell>CCC(C)CC(=O)O</ns0:cell><ns0:cell>Saturated acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>2-Octanol, TMS derivative</ns0:cell><ns0:cell>octan-2-ol</ns0:cell><ns0:cell>C8H18O</ns0:cell><ns0:cell>130.23</ns0:cell><ns0:cell>2.9</ns0:cell><ns0:cell>CCCCCCC(C)O</ns0:cell><ns0:cell>Saturated alcohol</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Benzeneacetic acid, TMS derivative</ns0:cell><ns0:cell>2-phenylacetic acid</ns0:cell><ns0:cell>C8H8O2</ns0:cell><ns0:cell>136.15</ns0:cell><ns0:cell>1.4</ns0:cell><ns0:cell>C1=CC=C(C=C1)CC(=O)O</ns0:cell><ns0:cell>Aromatic acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Benzenebutanoic acid, TMS derivative</ns0:cell><ns0:cell>4-phenylbutanoic acid</ns0:cell><ns0:cell>C10H12O2</ns0:cell><ns0:cell>164.2</ns0:cell><ns0:cell>2.4</ns0:cell><ns0:cell>C1=CC=C(C=C1)CCCC(=O)O</ns0:cell><ns0:cell>Aromatic acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>m-Cresol, TMS derivative</ns0:cell><ns0:cell>3-methylphenol</ns0:cell><ns0:cell>C7H8O</ns0:cell><ns0:cell>108.14</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>CC1=CC(=CC=C1)O</ns0:cell><ns0:cell>Aromatic alcohol</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>2-Hydroxy-4-methylquinoline, trimethylsilyl</ns0:cell><ns0:cell>4-methyl-1H-quinolin-2-one</ns0:cell><ns0:cell>C10H9NO</ns0:cell><ns0:cell>159.18</ns0:cell><ns0:cell>1.2</ns0:cell><ns0:cell>CC1=CC(=O)NC2=CC=CC=C12</ns0:cell><ns0:cell>Aromatic</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>ether</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>amine/alcohol</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>4-Methylvaleric acid, TMS derivative</ns0:cell><ns0:cell>4-methylpentanoic acid</ns0:cell><ns0:cell>C6H12O2</ns0:cell><ns0:cell>116.16</ns0:cell><ns0:cell>1.4</ns0:cell><ns0:cell>CC(C)CCC(=O)O</ns0:cell><ns0:cell>Saturated acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>2-Ethylphenol, TMS derivative</ns0:cell><ns0:cell>2-ethylphenol</ns0:cell><ns0:cell>C8H10O</ns0:cell><ns0:cell>122.16</ns0:cell><ns0:cell>2.5</ns0:cell><ns0:cell>CCC1=CC=CC=C1O</ns0:cell><ns0:cell>Aromatic alcohol</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>4-Trimethylsilylphenol</ns0:cell><ns0:cell>phenol</ns0:cell><ns0:cell>C6H6O</ns0:cell><ns0:cell>94.11</ns0:cell><ns0:cell>1.5</ns0:cell><ns0:cell>C1=CC=C(C=C1)O</ns0:cell><ns0:cell>Aromatic alcohol</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Benzoic acid, 4-ethoxy-, ethyl ester</ns0:cell><ns0:cell>ethyl 4-ethoxybenzoate</ns0:cell><ns0:cell>C11H14O3</ns0:cell><ns0:cell>194.23</ns0:cell><ns0:cell>3.2</ns0:cell><ns0:cell>CCOC1=CC=C(C=C1)C(=O)OCC</ns0:cell><ns0:cell>Aromatic ester</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Naphthalene, 1,7-dimethyl-</ns0:cell><ns0:cell>1,7-dimethylnaphthalene</ns0:cell><ns0:cell>C12H12</ns0:cell><ns0:cell>156.22</ns0:cell><ns0:cell>4.4</ns0:cell><ns0:cell>CC1=CC2=C(C=CC=C2C=C1)C</ns0:cell><ns0:cell>Aromatic hydrocarbon</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>3-Methyl-1-cyclohexanecarboxylic acid,</ns0:cell><ns0:cell>3-methylcyclohexane-1-carboxylic</ns0:cell><ns0:cell>C8H14O2</ns0:cell><ns0:cell>142.2</ns0:cell><ns0:cell>2.1</ns0:cell><ns0:cell>CC1CCCC(C1)C(=O)O</ns0:cell><ns0:cell>Saturated acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>trimethylsilyl ester (stereoisomer 1)</ns0:cell><ns0:cell>acid</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell 
/></ns0:row><ns0:row><ns0:cell>Pentanoic acid, TMS derivative</ns0:cell><ns0:cell>pentanoic acid</ns0:cell><ns0:cell>C5H10O2</ns0:cell><ns0:cell>102.13</ns0:cell><ns0:cell>1.4</ns0:cell><ns0:cell>CCCCC(=O)O</ns0:cell><ns0:cell>Saturated acid</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>2-(1-Adamantyl)ethanol, TMS derivative</ns0:cell><ns0:cell>2-(1-adamantyl)ethanol</ns0:cell><ns0:cell>C12H20O</ns0:cell><ns0:cell>180.29</ns0:cell><ns0:cell>3.4</ns0:cell><ns0:cell>C1C2CC3CC1CC(C2)(C3)CCO</ns0:cell><ns0:cell>Saturated alcohol</ns0:cell><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ An. Chem. reviewing PDF | (ACHEM-2020:08:52271:1:2:NEW 9 Mar 2021)</ns0:note></ns0:figure>
</ns0:body>
" | "Dear Oliver Jones and reviewers,
Thank you for taking the time to read, evaluate and constructively review our manuscript titled “Non-target screening of organic compounds in offshore produced water by GC×GC-MS”.
We have carefully read and addressed the reviewers’ comments, most of which we agree with. Please see our responses below. For clarity, all reviewer comments have been marked as bold italic. Our rebuttals are written as normal text. All changes in the manuscript have been marked with yellow. The captions for Figures 1 to 5 have been updated. Figure 3 has been graphically revised to add identification of selected compounds to better illustrate the chromatographic separation. Furthermore, the missing information on the composition of the synthetic formation water has been uploaded as a supplemental file.
We believe the revision has improved the manuscript, and that it now is ready for publication. We look forward to hearing your response.
Best regards, Jonas Sundberg (on behalf of all authors)
Researcher, Centre for Oil & Gas
Technical University of Denmark, Elektrovej 375, 2800 Kgs. Lyngby
Denmark
Reviewer 1 (anonymous)
Basic reporting
The basic reporting is good and the manuscript is structured well. The authors should double check for typos (although they are only minor) - for example, line 25 'furtthermore' should be corrected to 'furthermore.'
Thank you. The revised manuscript has carefully been proofread to remove grammatical errors and typos.
Experimental design
While this is an interesting manuscript and good concept, I think the experimental design is flawed. From my understanding of the manuscript, you are using the NIST library to identify compounds. However, you only use TMS derivatisation to derivatise your samples for running on the GCxGC-MS. This means you are missing many samples or will be unable to identify them using the NIST library. The prime example is fatty acids. Those greater than C12 in length, in particular. While they can and are routinely TMS derivatised, this is not the gold standard for derivatisation and, therefore, much of this data is missing/lacking from the NIST library. These compounds are rountinely derivatised to make methyl esters and it is the MS data from these that are found in the NIST library. For such as scoping study, it would be better to use LC-MS as derivatisation isn't (always) needed.
We agree that several compounds will be missed using GC-MS with automatic library matching. Any analytical method will only cover a subset of the compounds present. When working with complex samples, it is not possible to get full compound coverage using a single technique. Furthermore, due to the large number of features, we must to a certain extent rely on semi-automatic workflows, which have several limitations. We believe our sample preparation and data processing workflow was the correct choice given the assumption that produced water would contain a complex mixture of semi-volatile hydrocarbons and their oxygenated analogs. Thus, we disagree that LC-MS would be better for a scoping study, for several reasons.
Although carboxylic acids are routinely analyzed using LC-MS, they also suffer from poor ionization efficiency using electrospray ionization. Thus, here we might miss compounds present only at trace levels. Secondly, our samples contain a mixture of semi-polar and non-polar analytes. Here, GC-MS provides a good compromise, both in terms of chromatography and in terms of more general ionization capabilities. Furthermore, these are highly complex samples and 1D-LC would probably have caused a large number of co-eluting peaks. We are currently not set up for using 2D-LC.
Secondly, one of the strengths of using GC-MS-based methods is the high reproducibility of mass spectra (independent of experimental conditions) and the availability of large libraries, e.g. NIST/Wiley. Non-target screening using LC-MS is more dependent on formula annotations, in-silico fragmentation, and in-house libraries. While powerful, it also has severe drawbacks especially at a stage where the composition and compound class distribution is largely unknown.
Regarding the choice of derivatization reagent, we routinely use BSTFA (containing 1% trimethylsilyl chloride as a catalyst) for the analysis of carboxylic acids and alcohols. Although not perfect, we have found it to work well when validated using model compounds, including fatty acids. The NIST library (2017 edition) contains the TMS derivatives of all fatty acids up to C30. For a comparison of various derivatization reagents towards naphthenic acids, see Energy & Fuels 2010, 24, 2300–2311 (doi:10.1021/ef900949m). Here, BSTFA was found to be more efficient than methyl esterification using BF3/MeOH.
Ultimately, we agree that the study would benefit from the use of complementary analyses, which we also argue for in the conclusion. However, due to time restraints and the current COVID-19 situation with limited access to lab facilities, we were not able to widen the scope beyond GCxGC-MS. The first author defended her Ph.D. thesis in October 2020, and it was therefore decided to limit the current study. We believe that we have generated a strong dataset that covers an important fraction of the organic compounds present in produced water.
Validity of the findings
Unfortunately, not all supplemental data was available to me. There was only jpeg images of 2D scans with no figure legends. Therefore, it was difficult to assess many of the points in the manuscript where it said to look at supplementary data.
Due to size limitations at PeerJ, the full supplemental data was deposited at a Zenodo repository (doi:// 10.5281/zenodo.4009045). This was stated on the first page of the review PDF. However, it may not have been made clear enough in the manuscript and we have now changed the text to better reflect this. The repository contains all feature tables (raw, merged and scored), Python code and one raw GCxGC chromatogram (Agilent-format) as an example. The full sample set contains more than 500 Gb of chromatographic data which is beyond the limit at Zenodo. However, we will more than happily share all data with anyone interested.
Comments for the author
This is a very interesting study and does show that there is more compounds than we think in produced water.
Thank you!
Line 69 - reference requires a date
The correct year of publication has been added.
Line 76 - '(LLE)' needs to be placed after the word 'extraction'
The abbreviation has been moved to the correct location.
Line 130 - why was 25 ppm chosen? This is not made clear in the manuscript
The OSPAR regulation requires that produced water contains less than 30 ppm of dispersed oil for discharge to sea. However, produced water is a much more complex mixture containing both dispersed and dissolved oil, where the individual components are present at lower levels. It is always difficult to create a realistic model sample when balancing cost, time and availability of model compounds. The 25 ppm value was chosen to mimic somewhat representative conditions. A short sentence on this has been added to the manuscript.
Line 130 - this information is not in the supplemental data
The specification for our synthetic formation water was unfortunately omitted during manuscript submission. The associated data has now correctly been deposited as supplemental data.
Line 132 - it says the synthetic water was extracted 4 times, but it doesn't say how. More detailed is required, even if it is just to say 'using the above method.'
The sentence has been modified to clarify that the same extraction procedure was used for both the real-life samples and the synthetic seawater.
Line 158 - the script isn't in the supplementary information provided
The script (implemented in a Jupyter Notebook) is included in the dataset deposited at the Zenodo repository (doi:// 10.5281/zenodo.4009045).
Line 186 - there is no need to say '1.5 mL of n-hexane' because this is actually a false number, because in the actual protocol you are summarising 1 mL is transferred to be derivatised
This is correct, only 1.0 mL of the total 1.5 mL sample volume was used for derivatization. The sentence has been modified to reflect this.
Figure legends need to be improved to make it easier for the reader to understand what is happening in the figure and they should be able to stand alone of the text.
We have revised all figure legends to better represent the content and to allow them to stand alone from the text.
Reviewer 2 (Anonymous)
Basic reporting
See comments grouped below.
Experimental design
See comments grouped below.
Validity of the findings
See comments grouped below.
Comments for the Author
The author study the important problem of produced water, released into the environment after treatment, using GCxGC-MS tools.
- lines 80-91: the authors just mentioned GCxGC, now they write about GC and LC. The authors should clarify if this text (lines 80-91) is about unidimensional or multidimensional GC and LC.
The main point of this section was to briefly discuss why gas and not liquid chromatography was chosen as the analytical methodology. We now see that it was slightly unclear/unfocused, and the section has been revised in an attempt to make it more obvious.
- lines 107-114: were the samples more or less regularly spaced (in time) during this period? Were there some replicate samples?
The samples were obtained at irregular intervals with no replicates. We were “at the mercy of” the operator, and the samples were also for use in related projects. The lack of control over sampling conditions, timing and storage is frustrating for us as analytical chemists. Ultimately, a better-planned sampling campaign would have been beneficial. However, we believe we did the best we could considering the difficult conditions.
- line 112: might the plastic from the bottles have affected the results (partitioning of hydrophobic molecules to the plastic?)
Yes! There is a large uncertainty involved in the sampling, transport and storage of these samples. We were not allowed to receive samples in glass bottles due to safety concerns on the platform. We were in control of neither when the samples were taken nor how they were stored during transport onshore. To make the best of the situation, 500 mL aliquots of each sample were immediately transferred to glass bottles upon retrieval. The aliquots were acidified with 18% hydrochloric acid and stored at 4 °C until analysis. Again, many factors were beyond our control and we did our best to reduce any variability from storage, handling etc.
- line 116: 'extracted in three experimental replicates'. Maybe clarify if three 'analysis replicates' or 'aliquot replicates' are meant. The use of the word 'experimental' feels a bit unclear it seems. (Or was there always three replicate samples collected on the platform at each time point?) (If what is described here is the same as at lines 180-181, the text at lines 180-181 is more clear.)
We agree. The correct wording is “aliquot replicates”, i.e. three 50 mL volumes of each sample were extracted in order to reduce sample preparation variation and establish repeatability. The text has been altered to better reflect this.
- lines 107-114: it would help to specify at what stage in the process the samples were taken (upon entering the test separator? upon leaving it (effluent)?). What were the different steps of processing before the sample was taken, and are there further steps that are performed on the platform before release into the sea? (It is basically somehow unclear if the sample corresponds to raw water coming out of the well, partly-treated water, or treated water ready for release/disposal). It is also unclear what is meant with 'test separator'. additionally: it would help to specify the sample volume.
The test separator is a simple three-phase gravity-based separating system. Here, the oil separates from the water by layering (like a large-scale separatory funnel). Water samples can then be obtained by tapping. This occurs immediately after production, i.e. before any type of water treatment (or further separation) has been carried out. A sentence explaining its function, as well as the sample volume, has been added to the Experimental section.
- line 177 (Figure 1): could the authors specify in the caption to that figure (or in the text) which ones of the bottles on Figure 1 were rejected? Also, could the authors clarify if the nine samples mentioned at line 107 were the number of samples that passed this test, or if (how many?) of these nine samples were excluded because of a visible oil layer.
A total of nine (9) samples were included in the study (i.e. extracted, analyzed and processed). The number of received samples was larger but of highly varying quality. Some included a very large portion of crude oil, others a thin layer. Therefore, it was decided to inspect the samples using an optical microscope. Only samples with very little to no oil were included, as it was difficult to sample the others (oil sticking to pipettes, glassware etc.). We have revised the text to better reflect the sample number and the situation regarding which samples were included or excluded.
- line 177 (Figure 1): the caption to Figure 1, it is said that these are the samples 'as received'. Are these really plastic bottles (line 112)? (I may have a wrong feeling, it looks like glass bottle with a plastic cap...)
You are correct. The water samples are as received, after transfer to glass bottles (from the original plastic bottles). The figure caption has been revised to reflect this.
- line 179 (Figure 2): the caption to Figure 2 is unclear. What are the two panels?
The title and caption to Figure 2 have been updated for improved clarity.
- lines 183-184: what about small aromatics like benzene, toluene, etc.? Are these not expected to be present? Are these lost (volatilized) during the produced water treatment on the platform?
We were surprised by the lack of hydrocarbons, especially BTEX-type compounds that have relatively high solubility in water. It’s difficult to explain the lack of such compounds. We can only hypothesize. One scenario is that they aggregate in the oil phase to a higher extent than assumed and thus are extracted from the aqueous phase in the test separator/during storage. Another scenario is that they are lost during sample storage and transport, either through volatilization/diffusion through plastics or via microbial degradation (which would give oxygenated transformation products as intermediates, i.e. same type of compounds we observe). However, this is speculative and such discussion was not included in the manuscript.
We know from experience working with crude oil samples that it is very easy to lose volatile to semi-volatile hydrocarbons during sample preparation. Therefore, the evaporation step of our sample preparation protocol was carefully undertaken without excessive heat and stopped immediately before all of the solvent was removed. We therefore believe that these hydrocarbon compounds are lost prior to the samples arriving at our laboratory.
- lines 195-197: is retention time shifting also a problem (peak deformation, leading to shifting of the peak apex, which is sometime taken as peak position). Or are retention times irrelevant in the authors's analysis? (Line 257 mentions Kovats retention indices, and I think these must be calculated from retention times (?)).
As discussed under the Chromatography section, retention time shifting was monitored using deuterated standards. The retention times in both the first and second dimension were stable over the whole batch. Kovats retention indices were calculated based on the calibration using a C7 to C30 normal alkane external standard. The RIs were matched against library values and included in the final scoring of each tentatively identified compound.
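For reference, below is a minimal sketch of the linear (van den Dool and Kratz) retention-index calculation that such an alkane calibration supports. The alkane retention times and the analyte value are illustrative placeholders, not measured data from this study, and the actual processing script may differ.

```python
import bisect

# Hypothetical retention times (minutes) of the n-alkane ladder (C7 ... C30).
# Illustrative, monotonically increasing placeholders, not measured values.
alkane_carbons = list(range(7, 31))
alkane_rt = [2.10 + 1.05 * (n - 7) for n in alkane_carbons]

def linear_retention_index(rt, carbons=alkane_carbons, rts=alkane_rt):
    """van den Dool & Kratz retention index for temperature-programmed GC."""
    if not rts[0] <= rt <= rts[-1]:
        raise ValueError("analyte elutes outside the alkane calibration range")
    i = bisect.bisect_right(rts, rt) - 1     # bracketing alkane below the analyte
    i = min(i, len(rts) - 2)                 # keep an upper bracket available
    n, t_n, t_n1 = carbons[i], rts[i], rts[i + 1]
    return 100 * (n + (rt - t_n) / (t_n1 - t_n))

# Example: an analyte eluting halfway between C10 and C11 gives an RI of ~1050.
print(round(linear_retention_index(5.775)))
```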
- lines 204-205: what is 'synthetic produced water'? clarify the water used..
The composition of the synthetic formation water (referenced in the Experimental section) used to prepare the model produced water was unfortunately omitted during the first submission. It is now included in the supplemental data as an Excel sheet containing precise ionic composition.
- line 207: 'peak volumes': the authors may like to clarify how peak volumes were determined. This may be relatively software-independent for such well-separated peaks, but in some cases peak volumes may be very dependent on the approach use (e.g. software, baseline approach, etc., see Samanipour et al., 2015).
Baseline correction and peak/blob detection were carried out using the GC Image software as mentioned under the Experimental section. More details have been added to the text to clarify.
- line 210: I believe that the authors mean 'relative standard', not 'relative-standard'.
Yes. The dash has been removed.
- line 211: 'the model compounds': do the author mean 'peak volumes' of these compounds(?)
Yes. The wording has been corrected in the text.
- line 247 (and throughout): 'experimental replicates': as written above, is 'experimental' really the best choice of word?
We agree that the wording might cause confusion. As pointed out, “experimental replicates” can be interpreted to mean replicate at the sampling stage, which is incorrect in our case. We are referring to three separate extractions of sample aliquots. This was carried out to reduce/confirm sample preparation and analytical variability. We have changed the term into extraction replicates throughout the manuscript.
- line 253: this is shown in Figure 4, not Figure 3. (By the way, for figure 3, up to the authors, but it might help to annotate the chromatogram to show the positions of a few compounds known to the authors--it seems they describe some in the text. I am personally more familiar with a different set of columns and with crude oil chromatograms, and I'd love to be provided such information on that figure to guide my understanding... (this is a mere suggestion, maybe resulting from my ignorance...))
We agree. Displaying 2D chromatography in a static format is quite difficult, as the information content is extremely high and the “physical” space is limited. This results in highly busy plots. We initially tried to annotate the example chromatogram, but felt we did not succeed very well. We have now tried again and updated the figure to contain structural information on a few selected representative compounds. Hopefully it makes sense and creates a better understanding of the chromatographic separation.
- around line 294: it would help to provide additional explanation of the two contrasting studies. Was the study by Sorensen et al. for produced water containing droplets? Was the sample at a different stage of processing of the produced water? Was there any difference in the analysis method that can explain that?
Sørensen et al. do not provide any details on how and where the produced water was sampled (beyond being ‘collected from platforms’). Based on the sample shown in the abstract, it looks like the sample contains a large amount of dispersed oil (visibly brownish appearance). However, there was no discussion regarding the state of the samples in their manuscript, and we were very careful when trying to interpret why this difference is seen. We were surprised to not see higher levels of hydrocarbons, and our hypothesis is that these compounds have aggregated at the oil droplets during sample storage. We have updated the section to reflect this as much as possible without introducing too much “guesswork”.
- line 318: relative concentrations: it might be appropriate for the authors to provide a quantitative statement? ('high' is very unspecific...)
As this was a qualitative and not a quantitative study, we cannot provide absolute values. Although we agree that the statement can be seen as unspecific, it is difficult to be more precise. What we mean is that some samples show very low to no abundance of certain compounds, whereas others have a high abundance of the same compound. We have slightly changed the wording, replacing “high” with “large” and “concentration” with “abundance”.
- line 328: 'from the same well and only three days apart,': there is a lack of detail about the sampling times in the methods, which limit the ability to appreciate this statement...
Yes. Unfortunately, we only have information on the sampling well and date. Still, we believe it was valid to include this statement, as it indicates that the water composition is dynamic on a relatively short timeframe. We cannot identify the source of this variability in the current study, as it would have required a differently designed sampling campaign. Hopefully, this can be done in further studies. We have added two sentences to clarify the situation.
- line 129: 'sampling and test separator status' is unclear.
We agree, this sentence was poorly constructed. We have changed the section to better reflect what we intended to say.
- supplementary files: the chromatograms are given file names that probably pertain to sample ID, but this info is (it seems) not available to the author. Maybe add a table providing understanding for the readers.
All samples were donated to us by an industrial partner. The sampling date and field are known to us, but we were kindly asked to not include this information in the manuscript. As the information does not provide additional value to the objective of the study (i.e. identification of new and unknown compounds), all samples have been assigned anonymized ID names. These sample IDs are consistent throughout the full supplementary data, i.e. raw and scored feature tables as provided as supplementary data at the Zenodo repository (doi://).
" | Here is a paper. Please give your review comments after reading it. |
670 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Characterization of crude oil remains a challenge for analytical chemists. With the development of multi-dimensional chromatography and high-resolution mass spectrometry, an impressive number of compounds can be identified in a single sample.</ns0:p><ns0:p>However, the large diversity in structure and abundance makes it difficult to obtain full compound coverage. Sample preparation methods such as solid-phase extraction and SARA-type separations are used to fractionate oil into compound classes. However, the molecular diversity within each fraction is still highly complex. Thus, in the routine analysis, only a small part of the chemical space is typically characterized. Obtaining a more detailed composition of crude oil is important for production, processing and environmental aspects. We have developed a high-resolution fractionation method for isolation and preconcentration of trace aromatics, including oxygenated and nitrogencontaining species. The method is based on semi-preparative liquid chromatography. This yields high selectivity and efficiency with separation based on aromaticity, ring size and connectivity. By the separation of the more abundant aromatics, i.e. monoaromatics and naphthalenes, trace species were isolated and enriched. This enabled the identification of features not detectable by routine methods. We demonstrate the applicability by fractionation and subsequent GC-MS analysis of 14 crude oils sourced from the North Sea.</ns0:p><ns0:p>The number of tentatively identified compounds increased by approximately 60 to 150% compared to solid-phase extraction and GC×GC-MS. Furthermore, the method was used to successfully identify an extended set of heteroatom-containing aromatics (e.g. amines, ketones). The method is not intended to replace traditional sample preparation techniques or multi-dimensional chromatography but acts as a complementary tool. An in-depth comparison to routine characterization techniques is presented concerning advantages and disadvantages.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>The use of petroleum as a feedstock for energy production is declining. However, certain critical functions cannot safely be replaced by renewable energy ('Net Zero by 2050 A Roadmap for the Global Energy Sector,' 2021). Secondly. petroleum is a fundamental feedstock for the production of a large number of chemical starting materials <ns0:ref type='bibr' target='#b0'>(Aftalion, 2001;</ns0:ref><ns0:ref type='bibr'>Yadav, Yadav & Patankar, 2020)</ns0:ref>. Therefore, reducing the environmental impact of oil production is an important goal. This requires a better understanding of petroleum on the molecular level. Crude oil is a complex mixture of saturated and aromatic hydrocarbons with a smaller fraction of heteroatom-containing compounds, i.e. the resins and asphaltenes. The molecular distribution typically ranges from 16 to 1000 amu <ns0:ref type='bibr'>(Marshall & Rodgers, 2008)</ns0:ref>. The number of unique compounds is extensive and more than 240 000 molecular species have been resolved in a single sample <ns0:ref type='bibr'>(Krajewski, Rodgers & Marshall, 2017;</ns0:ref><ns0:ref type='bibr'>Palacio Lozano et al., 2019)</ns0:ref>. Due to this complexity, a large portion of the petroleum chemical space is structurally unknown. We have previously looked at the resins fraction (i.e. polar heteroatom-containing species) of North Sea oils <ns0:ref type='bibr'>(Sundberg & Feilberg, 2020)</ns0:ref>. Herein, we extend our work with a focus on aromatics. Within this fraction, the dominant species (in terms of abundance) are monoaromatic followed by a smaller amount of polycyclic aromatic hydrocarbons (PAHs) <ns0:ref type='bibr'>(Requejo et al., 1996;</ns0:ref><ns0:ref type='bibr'>Wei et al., 2018)</ns0:ref>. The PAHs class is dominated by smaller (2 to 3 rings) PAHs, with larger species (e.g. chrysene, coronene) being present at trace levels. It also contains small amounts of heteroatomic-containing ring structures <ns0:ref type='bibr'>(Mössner & Wise, 1999;</ns0:ref><ns0:ref type='bibr'>Zhang et al., 2018;</ns0:ref><ns0:ref type='bibr'>Carvalho Dias et al., 2020)</ns0:ref>. Due to their toxicity, PAHs have been extensively studied <ns0:ref type='bibr'>(Lawal, 2017)</ns0:ref>. A large focus has been on the 16 priority pollutants PAHs defined by the U.S. Environmental Protection Agency <ns0:ref type='bibr'>(Keith, 2015)</ns0:ref>. However, this list is not representative of crude oils which contain a more structurally diverse PAH set <ns0:ref type='bibr'>(Andersson & Achten, 2015;</ns0:ref><ns0:ref type='bibr'>Stout et al., 2015, p. 16</ns0:ref>). Low molecular weight PAHs are susceptible to weathering, primarily by volatilization, whereas high molecular weight aromatics are more resilient <ns0:ref type='bibr'>(John, Han & Clement, 2016)</ns0:ref>. Therefore, these are useful targets for oil-oil and oil-source correlation and spill identification and environmental monitoring <ns0:ref type='bibr'>(Pampanin & Sydnes, 2017;</ns0:ref><ns0:ref type='bibr'>Poulsen et al., 2018)</ns0:ref>. Comprehensive identification of the aromatics is challenging due to the large concentration variance. 
Traditionally, petroleum analysis is based on pre-fractionation using silica chromatography or solid-phase extraction (SPE) cartridges followed by GC-MS n <ns0:ref type='bibr'>(Wang, Fingas & Li, 1994;</ns0:ref><ns0:ref type='bibr' target='#b1'>Alzaga et al., 2004;</ns0:ref><ns0:ref type='bibr'>Pillai et al., 2005;</ns0:ref><ns0:ref type='bibr'>Gilgenast et al., 2011)</ns0:ref>. SPE is a low-efficiency separation technique, depending on chemical selectivity. This allows crude isolation of the aromatics fraction, but not separation of the compounds within it (Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>). Thus, an aromatic fraction obtained by SPE contains both the benzenes, naphthalenes and larger rings. Here, a typical crude oil will have a high abundance of monoaromatics, with diminishing concentrations with increasing ring size. The appropriate GC on-column concentration of the naphthalenes typically results in the larger ring systems being below the limit of detection (LOD). By increasing concentration to push trace aromatics above the LOD, both the column and detector will be saturated by the more abundant compounds. This leads to high background levels which affect quantitation and may obscure mass spectra complicating structural identification of unknowns <ns0:ref type='bibr'>(Zhao et al., 2014;</ns0:ref><ns0:ref type='bibr'>Wilton, Wise & Robbat, 2017)</ns0:ref>. Furthermore, the poor resolution of SPE often leads to an overlap between the saturated and aromatic hydrocarbons, which interferes with subsequent analysis. Thus, although SPE is efficient for routine applications, a large portion of the sample remains undetected. Comprehensive multi-dimensional chromatography (GC×GC) is often used as an alternative to simplify or remove the need for sample pre-fractionation <ns0:ref type='bibr'>(Nizio, McGinitie & Harynuk, 2012;</ns0:ref><ns0:ref type='bibr'>Jennerwein et al., 2014;</ns0:ref><ns0:ref type='bibr'>Stilo et al., 2021)</ns0:ref>. However, it does not solve the issue with variable abundance and column/detector overload. Thus, a complete qualitative and explorative oil analysis requires a more selective sampleprefractionation method. Herein, we present a high-performance liquid-chromatography (HPLC) method for the automated high-resolution fractionation of crude oil using commercially available columns. The method can resolve aromatics based on ring size and connectivity, i.e. fused and non-fused rings (e.g. naphthalene versus biphenyl). The fractions may be diluted or concentrated, depending on the target, for subsequent analysis and can thus be used to concentrate trace species. The method is easily modified to selectively collect only fractions of interest, and the aromatics may be collected either as one or several fractions. We demonstrate the method's applicability by fractionation of fourteen crude oils with subsequent GC-MS analysis. The method is compared to data obtained using SPE and GC×GC-MS. We demonstrate how it's especially suitable for the analysis and identification of trace aromatics by the successful tentative identification of several compounds not observed using comparable methods.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Chemicals and reagents</ns0:head><ns0:p>Chloroform, dichloromethane, n-hexane, deuterated standards and model compounds (ethylbenzene, naphthalene, biphenyl, phenanthrene, 1-benzylnaphthalene and chrysene) were purchased from Sigma Aldrich and used as received.</ns0:p></ns0:div>
<ns0:div><ns0:head>Samples</ns0:head><ns0:p>Fourteen crude oils sampled from producing fields in the Danish region of the North Sea were obtained from Maersk Oil (now Total E&P). The samples were received in metal containers (jerrycans) and transferred to glass bottles upon arrival. The samples were stored at room temperature protected from light.</ns0:p></ns0:div>
<ns0:div><ns0:head>Sample preparation</ns0:head><ns0:p>Solid-phase extraction Crude oil (10 µL) was combined with 100 µL of a solution containing alkane internal standards (decane-D22, hexadecane-d34 and eicosane-D42, 400 µg/mL in n-hexane), 50 µL of PAH internal standards (naphthalene-D8, phenanthrene-D10, acenaphthene-d10, chrysene-d12 and perylene-d12, 30 µg/mL in n-hexane) and further diluted with nhexane (840 µL). A solid-phase extraction column (Phenomenex EPH Strata, 200 µm, 70 Å, 500 mg / 3 mL) was cleaned and conditioned by CH 2 Cl 2 (3x1 mL) followed by nhexane (3x1 mL). 100 µL of oil solution was carefully applied to the column and was allowed to settle for 5 minutes. Saturated hydrocarbons were eluted into one fraction with three portions of n-hexane (3x600 µL). Aromatic hydrocarbons were eluted using dichloromethane (1x1800 µL). The solvent level of each fraction was reduced to 500 µL under a gentle stream of nitrogen without applied heating to avoid losses of volatile components.</ns0:p></ns0:div>
<ns0:div><ns0:head>Liquid chromatography fractionation</ns0:head><ns0:p>Fractionation of crude oil was carried out on a Dionex UltiMate 3000 HPLC equipped with a DAD-3000 diode array, a RefractoMax RI-521 refractive index (RI) detector and an AFC-3000 fraction collector. The system was fitted with one six-port/two-way and one ten-port/two-way port to enable selective backflush of the primary column. A Thermo Scientific Hypersil Gold NH 2 (4.6 mm i.d., 3 µm, 150 mm) and a Hypersil Silica (4.6 mm i.d., 3 µm, 150 mm) were connected in series. The sample manager was kept at 20 °C and the column oven at 30 °C. The injection volume was 50 µL. Samples were diluted at 1:2000 in n-hexane and stored at -20 °C for 24 hours to precipitate asphaltenes. The samples were centrifuged and an aliquot of the mother liquor was carefully transferred to an autosampler vial for analysis. Separation of saturates and aromatics was achieved via isocratic n-hexane elution during which 30 s wide fractions were collected. After elution of aromatics, the primary column was rinsed using a backflush gradient from n-hexane to 1:1 2-propanol:chloroform. The collected fractions were diluted (saturates, mono-and di-aromatics) or concentrated (tri-aromatics and larger) for analysis on GC-MS. For enrichment experiments, consecutive fractionations (typically 3 to 6) were performed with pooling of the eluents followed by solvent reduction under a gentle stream of N 2 at 30 °C.</ns0:p></ns0:div>
<ns0:div><ns0:head>Analytical methods</ns0:head></ns0:div>
<ns0:div><ns0:head>GC-MS</ns0:head><ns0:p>GC-MS data were recorded using an Agilent 5977B GC-MSD as follows; 250 °C inlet, 320 °C transfer line, splitless injection (1 µL), Agilent DB-5MS (30 m, 0.25 mm i.d., 0.25 µm). The oven temperature gradient was programmed as follows; 50 (1 min. hold-time) -320 °C (8 min hold-time, 10 °C/min.), helium carrier gas at 1.5 mL/min. in constant flow mode. GC×GC-MS data were recorded using an Agilent 7200B GC-QTOF equipped with a Zoex ZX-2 thermal modulator (Zoex Corporation, Houston, TX, USA) as follows; 250 °C inlet, 320 °C transfer line, splitless injection (1 µL), Agilent DB-5MS UI (1D, 30 m, 0.25 mm i.d., 0.25 µm df) and a Restek Rxi-17Sil MS (2D, 2 m, 0.18 mm i.d., 0.18 µm df) capillary columns connected using a SilTite µ-union. The oven was temperature programmed as follows; 50 (1 min hold-time) -320 °C (3 °C/min.), helium carrier gas at 1 mL/min. in constant flow mode. The modulation period was set to 6 s with a 400 ms hot-jet duration. Data processing Data were screened using Masshunter Qualitative Navigator (Agilent, B.08.00). Peak detection and compound identification were performed using MassHunter Unknowns Analysis (Agilent, B.09.00) and the NIST Standard Reference Database (1A v17, Gaithersburg, MD, USA). Feature tables were exported as CSV files and imported into a Jupyter Notebook for further processing using the Python scripting language. Duplicates based on the CAS number were removed from the feature tables. All compounds containing silica and halogens were removed. The double-bond equivalent values were calculated for each compound and all features with a DBE of less than 4 were excluded. Finally, experimental and literature retention indices (RI) were compared with flagging of all compounds where the difference was larger than 50 units.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results and Discussion</ns0:head></ns0:div>
<ns0:div><ns0:head>Method development</ns0:head><ns0:p>The objective of the method was to 1) separate saturates and aromatics and 2) intraclass separation of the aromatics with enrichment capabilities. A dual-column setup using normal phase analytical LC-columns provided the required selectivity and efficiency. The primary column (Thermo Scientific Hypersil Gold NH 2 ) acted as a retainer for polar components, whereas a secondary pure silica-based column (Thermo Scientific Hypersil Silica) was required for the separation of saturated and aromatic hydrocarbons. The separation was optimized using six model compounds commonly found in crude oil (Figure <ns0:ref type='figure'>2</ns0:ref>). An isocratic n-hexane elution yielded separation of saturated and mono-aromatic hydrocarbons, as well as separation of polycyclic aromatics based on ring size and connectivity (Figure <ns0:ref type='figure'>3</ns0:ref>). The fraction collector was programmed to collect 30 second wide fractions based on the peak widths of the model compounds. At this fraction width, we observed only a minor overlap of fractions with co-elution of the most abundant components. A reduction of the fraction width can be set if a higher peak purity is required. The cost is a slight loss of recovery. After elution of the last aromatics as observed by UV/Vis, the flow path was selectively reversed for the primary column. The column was then rinsed using a gradient from 100% n-hexane to 50:50 chloroform:2-propanol. This effectively removed the adsorbed resins on the amide column. The fraction collector is within the flow path during all stages of chromatography and the resins may therefore be isolated for further analysis <ns0:ref type='bibr'>(Sundberg & Feilberg, 2020)</ns0:ref>. The final step is a re-equilibration of the whole system by a return to isocratic n-hexane and flushing at an increased flow rate to remove the polar solvents from the flow path. Improper re-equilibration resulted in a severe loss of retention in subsequent fractionations due to the adsorption of 2isopropanol on the silica phase. Recovery values were calculated using the model compounds by comparison of peak areas obtained on GC-MS from LC-fractions compared to direct analysis of the standards (Table <ns0:ref type='table'>1</ns0:ref>). Three analytes have recovery values slightly above 100%. This is likely due to a discrepancy between programmed and real injection volume on the HPLC auto-sampler. In contrast, the recovery is less than 90% for chrysene. For this compound, we observe peak broadening due to the high capacity factor. As the fraction collection width is static during the full run, the low recovery is attributed to the peak being wider than the collection width. To evaluate the reproducibility of complex samples, a single oil was fractionated three consecutive times. Each fraction was analyzed on GC-MS and the relative standard deviation was determined from peak areas. 1,2,4-Trimethylbenzene, naphthalene and phenanthrene gave 4.6,8.1 and 6.4% respectively. The results are similar to those obtained using a model mixture. This shows that the method performs consistently in the presence of a highly complex oil matrix. Furthermore, the method successfully removes interferences and yields a high signal-to-noise ratio for the target analytes in each fraction (Figure <ns0:ref type='figure'>4</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Applicability in crude oil analyses</ns0:head><ns0:p>The applicability of the method was demonstrated by fractionation and analysis of 14 crude oils. The oil samples were sourced from producing fields in the Danish region of the North Sea. Crude oils from this region typically have an aromatics content of 25 -30%, of which the majority are BTEX-type monoaromatics (benzene, toluene, ethylbenzene, xylene) with a continuous decrease in abundance with increasing ring size <ns0:ref type='bibr'>(Sundberg & Feilberg, 2020)</ns0:ref>. The primary fraction is the saturated hydrocarbons followed by the resins (up to 5%) with only traces of asphaltenes. This is evident from the fractionation, where a typical dilution factor of 50/20 had to be applied to the saturated and monoaromatic fractions respectively (Table <ns0:ref type='table'>2</ns0:ref>). The fractions containing larger aromatics were analyzed either undiluted or concentrated by solvent reduction. The first fraction contains the paraffins and naphthenes and is poorly retained on the primary LC-column (Figure <ns0:ref type='figure'>5</ns0:ref>). The second silica column is required to separate them from the monoaromatics, which elute as the second fraction (Figure <ns0:ref type='figure'>6</ns0:ref>). The third and fourth fractions contain diaromatic species, with the latter non-fused ring systems (e.g. naphthalene versus biphenyl). Fractions 5 and 6 contain the triaromatics (e.g. phenanthrene versus 1-phenylnaphthalene). Here, the abundance starts to diminish and the sixth fraction had to be concentrated for subsequent GC-MS analysis. Fractions 7 and above contain larger ring systems, e.g. chrysene, perylene. These fractions are less well-defined, likely because compounds eluting within this retention range are fewer in number and present in trace amounts. We also observed a slight loss of resolution, with minor overlap and cross-contamination. This is a result of two things; 1) diffusion and peak broadening during the liquid chromatography 2) collection of low abundance (undiluted/concentrated) fraction after a high abundance (diluted) fraction. If higher purity peaks are required the fraction collection width can be reduced. Attempts to concentrate fractions 9 and later were not successful and the gas chromatograms were dominated by background contaminants likely originating from the solvents, HPLC-tubing and glassware (e.g. siloxanes, surfactants).</ns0:p></ns0:div>
<ns0:div><ns0:head>Performance comparison with SPE-GC-MS and GC×GC-MS</ns0:head><ns0:p>Solid-phase extraction of crude oil into its saturated and aromatic fraction is a wellestablished sample preparation method. The physical properties of SPE adsorbents (large particle size, low mass loadings) result in limited separation power <ns0:ref type='bibr'>(Berrueta, Gallo & Vicente, 1995;</ns0:ref><ns0:ref type='bibr'>Buszewski & Szultka, 2012)</ns0:ref>. Thus, the technique is mainly applicable for the crude separation of different compound classes. It does not provide sufficient resolution to separate closely related compounds within subfractions. To compare our method to SPE we fractionated each oil using a Phenomenex Strata EPH (200 µm, 70 Å, 500 mg / 3 mL). The cartridge contains a proprietary phase specifically developed to separate hydrocarbon fractions <ns0:ref type='bibr'>(Countryman, Kelly & Garriques, 2005)</ns0:ref>. In terms of spectral quality, an approximately 10-fold reduction in background noise is observed in the LC fractions as compared to SPE. A comparison of the extracted mass spectra for the peak corresponding to 1,3-dimethylpyrene is presented in Figure <ns0:ref type='figure'>7</ns0:ref>. Selected ion monitoring can be used to reduce background interferences for target species but results in loss of spectral detail for qualitative analysis. Furthermore, when using low-resolution instruments, i.e. single quadrupole MS, there is a large risk of overlap in complex samples <ns0:ref type='bibr'>(Rosenthal, 1982;</ns0:ref><ns0:ref type='bibr'>Davis & Giddings, 1983)</ns0:ref>. The reduction in background noise improved library matching, especially for analytes present at trace levels.</ns0:p><ns0:p>To evaluate identification performance, peak picking and library matching were carried out using MassHunter Unknowns Analysis and the NIST mass spectral library. The match factor limit was set to 700. The number of compounds was compared both on a sample-to-sample basis and by merging all features from all samples (with duplicates removal based on CAS number). A comparison of the merged compound tables of all samples shows that using the LC-GC method we can identify 957 compared to 601 compounds using SPE. This is an increase of 37.2%. To increase the match confidence, we applied a retention index (RI) filter, only retaining compounds with a match within 100 units of the library value. By doing so, we identified 426 compared to 300 (42% increase). This excludes all compounds of which a library RI is not available (approximately 1% of our feature set). However, a large portion of the compounds only have computationally approximated retention indices and not experimentally determined values. Thus, all filtering and data analyses should be carried out with care and manual intervention. The SPE fraction contains approximately 190 unique compounds with 404 compounds overlapping both analyses. Manual inspection reveals this list contains several petroleum-type compounds and not predominantly background noise or contamination (e.g. plasticizers, column contamination). One plausible source is errors occurring during the automatic processing routines. Small differences in mass spectra (e.g. due to abundance or background level) can lead to closely related library matches being given similar (but different) priority (e.g. isomeric species). Figure <ns0:ref type='figure'>8</ns0:ref> shows the DBE distributions of assigned compounds uniquely observed from SPE-GC and LC-GC methods. 
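To illustrate the data-processing step described above, the following minimal Python sketch (not the authors' actual MassHunter/NIST workflow; the column names are assumptions) merges per-sample hit tables, removes duplicates by CAS number and applies the match-factor and retention-index filters:

import pandas as pd

def merge_and_filter(hit_tables, mf_min=700, ri_window=100):
    # hit_tables: list of DataFrames with assumed columns
    # ['cas', 'name', 'match_factor', 'ri_observed', 'ri_library']
    merged = pd.concat(hit_tables, ignore_index=True)
    # keep only hits at or above the library match-factor limit
    merged = merged[merged['match_factor'] >= mf_min]
    # retention-index filter: keep hits within +/- ri_window of the library value;
    # hits lacking an experimental library RI still need manual review
    merged = merged[(merged['ri_observed'] - merged['ri_library']).abs() <= ri_window]
    # duplicate removal based on CAS number before counting identified compounds
    return merged.drop_duplicates(subset='cas')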
Noticeably, the DBE distributions are significantly different between the two methods. The distribution of unique compounds from SPE-GC is centered around low DBE 4 and 5 (e.g. monoaromatic), whereas unique compounds from LC-GC are distributed more evenly at higher DBE values. This is expected as the LC-GC method isolates and enriches high aromaticity fractions. These findings showcase the ability of LC-GC as a high-resolution fractionation method for crude oil. For comparison to comprehensive multi-dimensional chromatography, the samples were analyzed by our in-house routine GC×GC-MS method (i.e. solvent dilution, filtration and analysis) (Figure <ns0:ref type='figure'>9</ns0:ref>). The objective of the GC×GC method is not to maximize feature ID but enable multi-class analysis/fingerprinting with minimal to no sample preparation. Furthermore, the SPE-LC-GC and GC×GC analyses were carried out on different instruments which makes direct comparison challenging. For GC×GC an Agilent 7200B QTOF high-resolution mass spectrometer was used. For SPE-LC-GC, an Agilent 5977B single quadrupole equipped with a High-Efficiency Source (HES) was used. The HES has both higher sensitivity and dynamic range. Secondly, for GC×GC, the dilution factor was adjusted so that the analytes with the highest abundance were at detector saturation. Here, we see that although GC×GC is not restricted in terms of peak capacity, it does fall short in terms of dynamic range. After blob detection, library matching and filtering we obtain 63 tentative hits in a single sample. With corresponding processing settings, we identified 143 compounds by multi-fraction LC-GC-MS analysis of the same sample. This is an increase of 127%. In terms of manual intervention, ease of use and time of analysis, GC×GC is preferred compared to LC-GC-MS. However, the amount of data generated using the latter is more comprehensive in our case. A Venn diagram was constructed to compare three methods (Figure <ns0:ref type='figure'>8</ns0:ref>). The number within each colored circle represents the number of assigned unique compounds for each method, whereas numbers in overlapped zones represent the number of compounds that have been co-assigned from corresponding methods. The amount of compositions obtained from LCxGC (957) significantly surpasses GC×GC (181) and SPE-GC (601). We obtained approximately 50% unique compounds with GC×GC and LC-GC and 32% with SPE-GC. The Venn diagram also shows that co-assigned compounds of those three methods cover a narrow range of overall chemical composition (59 co-assigned compounds) of crude oil. Again, it is worth noting that there are differences in terms of dilution factor and instrumental parameters for those methods. Therefore, the comparison is biased but still relevant to evaluate the LC-GC method for trace components analysis.</ns0:p></ns0:div>
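The Venn-diagram comparison amounts to set arithmetic on the assigned CAS numbers; a small hypothetical sketch in the same spirit:

def venn_counts(lc_gc, gcxgc, spe_gc):
    # each argument is a set of CAS numbers assigned by one method
    return {
        'LC-GC only': len(lc_gc - gcxgc - spe_gc),
        'GCxGC only': len(gcxgc - lc_gc - spe_gc),
        'SPE-GC only': len(spe_gc - lc_gc - gcxgc),
        'co-assigned by all three': len(lc_gc & gcxgc & spe_gc),
    }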
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>We have developed a method for high-resolution fractionation of complex crude oil matrices. By using sub-micron LC columns we obtained high efficiency and resolution which allowed intra-class compound separation. This is in contrast with traditional methods, e.g. SPE, which yields a single aromatics fraction. The method is especially advantageous for the isolation of trace species. Multiple compounds not observed by SPE-GC-MS were pre-concentrated yielding high abundance and spectral quality. The increase in the number of tentatively identified peaks is thus a result of both reduced coelution and an increase in analyte signal-to-noise ratio. By characterization of 14 crude oils, we extended the identification to a large number of hydrocarbon and N,S,O-containing aromatics. Of the 517 uniquely identified compounds, 69% (357) contain either N,S,O (or a combination of) atoms (Table <ns0:ref type='table'>3</ns0:ref>). The structures of five representative compounds are presented in Figure <ns0:ref type='figure' target='#fig_0'>10</ns0:ref>. Aromatic nitrogen and sulfur compounds are detrimental in petroleum processing. Furthermore, they potentially have biological activity and may pose an environmental and toxicological hazard <ns0:ref type='bibr'>(López García et al., 2002;</ns0:ref><ns0:ref type='bibr'>Anyanwu & Semple, 2015;</ns0:ref><ns0:ref type='bibr'>Zhang et al., 2018;</ns0:ref><ns0:ref type='bibr'>Vetere, Pröfrock & Schrader, 2021)</ns0:ref>. Therefore, their characterization is an important pursuit. They are routinely analyzed by direct infusion mass spectrometry that provides the molecular formula but not connectivity <ns0:ref type='bibr'>(Guan, Marshall & Scheppele, 1996;</ns0:ref><ns0:ref type='bibr'>Purcell et al., 2007b,a;</ns0:ref><ns0:ref type='bibr'>Corilo, Rowland & Rodgers, 2016)</ns0:ref>. Thus, isolation and GC-MS analysis with library matching provide valuable information on their presence in oil samples. The relatively long fractionation time (60 minutes) and the number of fractions generated lead to a full sample analysis time of 6 hours (when characterizing the first 7 fractions using GC-MS). Several steps require manual intervention, i.e. dilution and preconcentration of fractions and moving the samples from the LC to the GC. It would therefore be beneficial to implement more automation, e.g. by using liquid handling robotics (ultimately with direct hyphenation to the GC). We observed minor co-elution during the analysis of latter fractions. Combining the LC-fractionation with subsequent GC×GC analysis would increase the power of the method further. However, it would require an intense data processing workflow with high demands in computational power. Something that is already challenging in comprehensive GC×GC studies <ns0:ref type='bibr'>(Reichenbach et al., 2019;</ns0:ref><ns0:ref type='bibr'>Wilde et al., 2020;</ns0:ref><ns0:ref type='bibr'>Stefanuto, Smolinska & Focant, 2021)</ns0:ref>. </ns0:p><ns0:note type='other'>Figure 10</ns0:note><ns0:p>The molecular structures of five unique compounds identified in separate fractions (2 to 6) obtained by using the LC-method.</ns0:p><ns0:p>PeerJ An. Chem. reviewing PDF | (ACHEM-2021:07:63902:1:0:NEW 21 Oct 2021)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:figure xml:id='fig_0'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
</ns0:body>
" | "
Dear Oliver Jones and reviewers,
Thank you for taking the time to thoroughly read and evaluate our manuscript titled ‘Extended characterization of petroleum aromatics using off-line LC-GC-MS’. We were very happy to read your kind comments and appreciate the detail of your review finding all the small errors that had slipped our attention in the writing process.
We have carefully read all comments and addressed the issues in the manuscript. All changes have been marked with yellow in the tracked document. Figure numbering, legends and filenames have been updated. Furthermore, the data processing script has been uploaded as a supplemental file. A brief discussion of the questions from reviewer 2 regarding the experimental design is included below.
We look forward to hearing your response.
Best regards, Jonas Sundberg (on behalf of all authors)
Researcher, Centre for Oil & Gas
Technical University of Denmark, Elektrovej 375, 2800 Kgs. Lyngby
Denmark
Reviewer 1
Additional comments
1. Experimental results are presented in three tables but two of them (Tables 1 and 2) are not mentioned anywhere in the text relevant for Table 1 (lines 216 to 230, and for Table 2 (lines 239 to 242).
Tables 1 and 2 have been properly referenced in the main text of the revised document.
2. There is a discrepancy between the figure numbers (Figures 5–9) listed in related manuscript paragraphs and the numbers and content of attached figures:
line 244: Figure 4 should be corrected to Figure 5
line 245: Figure 5 should be corrected to Figure 6
line 274: Figure 6 should be corrected to Figure 7
lines 301 and 328: Figure 7 should be corrected to Figure 8
line 312: Figure 8 should be corrected to Figure 9
Consequently, figure number should be corrected also in the Supplemental file:
Figure_5_Full_chromatograms to Figure_6_Full_chromatograms
All figure numbers have been corrected and renamed appropriately.
3. In Table 3 title GCxGC should be added. The first sentence should read: Comparison of the number of tentatively identified species in SPE-GC and GCxGC versus LC-GC.
The table title has been updated to include all techniques compared.
4. Figure 1: SPE enables separation of saturates and aromatics (not saturates and saturates).
The figure has been updated with the correct compound classes.
5. Line 72: corect (M. Pampanin & O. Sydnes, 2017) to (Pampanin & Sydnes, 2017)
The reference has been corrected.
6. Lines 145 and 146: correct µM to µm
Particle sizes have been updated to the correct unit.
7. Line 236: correct BTEX-type monoaromatics (benzene, toluene, xylene) to BTEX-type monoaromatics (benzene, toluene, ethylbenzene, xylenes)
Ethylbenzene has been added to the abbreviation definition.
8. Line 244: delete the word secondary
The superfluous wording was removed.
9. Line 270: correct (Countryman, Kelly & Garriques) to (Countryman, Kelly & Garriques, 2005)
The reference has been corrected with the publication year.
10. Lines314 and 316: abbreviation SPE/LC-GC is confusing; I would suggest writing SPE-GC and LC-GC instead.
We completely agree, and SPE-LC-GC is also used in other parts of the paper. The abbreviations have been updated.
11. References:
Line 408: correct Countryman S, Kelly K, Garriques M. to Countryman S, Kelly K, Garriques M. 2005.
12. Line 410: correct Davis JM, Giddings JCalvin. to Davis JM, Giddings JC.
13. Line 442: correct M. Pampanin D, O. Sydnes M to Pampanin DM, Sydnes MO.
All references have been corrected both in the reference list and in the main text.
Reviewer 2
Basic reporting
Tables 1 and 2 are not referenced in the body of the text
Tables 1 and 2 have been properly referenced in the main text of the revised document.
Figure 9 is not referenced in the text
There was an error with the figure numbering. All numbers have been properly updated, so Figure 9 is now referenced on line 312.
The supplemental figure 6 has a file name denoting figure 5
The filename has been updated to the correct figure numbering.
Line 181 - there is an erroneous '29,30'
Good catch! The erroneous numbers have been removed.
Is the Python script you used for the data processing available?
The script has been uploaded to the supplemental files as a Jupyter Notebook. However, it comes with no documentation but should be relatively self-explanatory.
The figure legends need more information. As they stand, the figures require far too much reader intepretation to guess what they are showing.
We agree. The legends were written a tad too fast. All figure titles and legends have been revised to become more descriptive.
Experimental design
An interesting premise, a DB-5 column was used for the GC portion of the experiment. Why was this preferred to the more routinely used DB-1 column? Or even a PAH column as this separates all isomers? Would using a different column have an impact on the identification on isolated chemicals?
We have previously done extensive PAH analysis using the DB-5MS column. In our experience it works well for separation of petroleum aromatics and yields better baseline separation compared to DB-1 for larger ring structures (due to the phenyl content). One of the purposes of this project was to compare our established method with a more comprehensive pre-fractionation. Therefore, we focused mainly on the development of the LC-method as to further improve the GC-based separation.
We agree that it would be interesting to try a wider range of columns (including a PAH column). Using a different column chemistry would most likely yield a slightly different result. This is important to remember, especially when working with complex mixtures. Everything from sample preparation method to column selection and even instrument choice will introduce bias to your results. The difficulty is that when you optimize your conditions for one compound class (or even specific analytes), you de-optimize them for another. So it is always a fine balance of finding the optimal conditions for as wide a range of compounds as possible.
Secondly, as with all projects there is a time limitation and issue of getting access to shared instruments.
You used one type of SPE sorbent for the SPE experimentation. Were other SPE sorbents tested and did these have any affect on the extraction?
No other SPE cartridges were evaluated in this project. We have previously tested the more common SPE sorbents for separation of saturates and aromatics and found that the Strata EPH column gave the best results. It's a well-established sorbent for this purpose. We believe that using a different SPE chemistry would mainly be of benefit for isolating the non-hydrocarbon compounds. However, sorbents capable of this typically do not provide any separation of saturated and aromatic structures. Again, it would be interesting to do more extensive testing on this part (as with the GC column chemistry), but time limitations (especially in the current pandemic) have prohibited us from doing so.
" | Here is a paper. Please give your review comments after reading it. |
671 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Background</ns0:head><ns0:p>The main process limitation of microalgae biofuel technology is lack of cost-effective and efficient lipid extraction methods. Thus, the aim of this study was to investigate the effectiveness and efficiency of six caprolactam-based ionic liquids (CPILs) namely, Caprolactamium chloride, Caprolactamium methyl sulphonate, Caprolactamium trifluoromethane sulfonate, Caprolactamium acetate, Caprolactamium hydrogen sulphate and Caprolactamium trifluoromethane-acetate -for extraction of lipids from wet and dry Spirulina platensis microalgae biomass. Of these, the first three are novel CPILs.</ns0:p></ns0:div>
<ns0:div><ns0:head>Methods</ns0:head><ns0:p>The caprolactam-based ionic liquids (CPILs) were formed by a combination of caprolactam with different organic and inorganic Brønsted acids, and used for lipid extraction from wet and dry Spirulina platensis microalgae biomass. Extraction of microalgae was performed in a reflux at 95 o C for 2 h using pure CPILs and mixtures of CPIL with methanol (as co-solvent) in a ratio of 1:1 (w/w). The microalgae biomass was mixed with the ILs/ methanol in a ratio of 1:19 (w/w) under magnetic stirring.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>The yield by control experiment from dry and wet biomass was found to be 9.5 % and 4.1 %, respectively. A lipid recovery of 10 % from dry biomass was recorded with both caprolactamium acetate (CPAA) and caprolactamium trifluoroacetate (CPTFA), followed by caprolactamium chloride (CPHA, 9.3 ± 0.1 %). When the CPILs were mixed with methanol, observable lipids' yield enhancement of 14 % and 8 % (CPAA), 13 % and 5 % (CPTFA), and 11 % and 6 % (CPHA) were recorded from dry and wet biomass, respectively. The fatty acid composition showed that C 16 and C 18 were dominant, and this is comparable to results obtained from the traditional solvent (methanol-hexane) extraction method. The lower level of pigments in the lipids extracted with CPHA and CPTFA is one of the advantages of using CPILs because they lower the cost of biodiesel production by reducing the purification steps.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>In conclusion, the three CPILs, CPAA, CPHA and CPTFA can be considered as promising green solvents in terms of energy and cost saving in the lipid extraction and thus biodiesel production process.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science PeerJ An. Chem. reviewing PDF | (ACHEM-2021:09:65625:1:0:NEW 29 Dec 2021) 80 Biodiesel is a clean and renewable energy source that is considered as an important 81 option to petroleum consumption <ns0:ref type='bibr'>(Gonçalves et al., 2013)</ns0:ref>. Petroleum retails at a high 82 cost, thus threatening energy security, in addition to causing global climate change 83 concerns <ns0:ref type='bibr' target='#b53'>(Pragya et al. 2013)</ns0:ref>. Biodiesel is primarily made from oil obtained from both 84 edible and non-edible plants, and residual waste <ns0:ref type='bibr' target='#b53'>(Pragya et al., 2013)</ns0:ref>. The use of these 85 plants has serious drawbacks, including high costs, food shortages, and a lack of steady 86 and reliable supply. These difficulties could be mitigated by the synthesis of biodiesel from 87 microalgae, that has long been considered as a promising potential alternative biomass 88 for biodiesel production due to its extremely fast biomass productivity rate <ns0:ref type='bibr'>(Arumugam et 89 al., 2013)</ns0:ref>. Other advantages of microalgae include higher lipid accumulation capacity and 90 its requirement for lesser land compared to other biofuel crops. 91 92 Nevertheless, the major constraint in the biofuel production from microalgae, is the lack 93 of cost-effective and efficient extraction and transesterification of lipids. Although higher 94 lipid yields have been recorded after pre-treatment of microalgae with various cell 95 disruption methods <ns0:ref type='bibr' target='#b29'>(Halim et al., 2012)</ns0:ref> such as bead milling, microwave, and 96 ultrasonication, the additional energy needed makes the process economically unviable. <ns0:ref type='bibr'>97</ns0:ref> On the other hand, conventional lipid extraction methods also require refluxing with 98 flammable and highly toxic organic solvents. Therefore, the exploration of alternative 99 microalgal lipid processing methods that are simpler, cost-effective, and environmentally 100 friendly, has become increasingly necessary. <ns0:ref type='bibr'>(Zhao & Baker, 2013)</ns0:ref>. Moreover, ILs can dissolve essential 106 biopolymers like cellulose and lignin and, as a result, induce the structure disruption of 107 algae cells or affect the permeability of cell walls, depending on their cation and anion 108 structure <ns0:ref type='bibr' target='#b14'>(Cevasco & Chiappe, 2014)</ns0:ref>. availability and applicability <ns0:ref type='bibr'>(Andreani & Rocha, 2012)</ns0:ref>; <ns0:ref type='bibr' target='#b25'>(George et al., 2015)</ns0:ref>. Thus, the 113 use of protic ionic liquids (PILs) has attracted significant attention as a novel technology 114 for microalgal lipid extraction and biodiesel production <ns0:ref type='bibr' target='#b34'>(Kim et al., 2012)</ns0:ref>; <ns0:ref type='bibr'>(Kim et al.,</ns0:ref><ns0:ref type='bibr'>115</ns0:ref> 2013); <ns0:ref type='bibr' target='#b18'>(Choi et al., 2014)</ns0:ref>; <ns0:ref type='bibr' target='#b17'>(Chiappe et al., 2016)</ns0:ref>. PILs are substantially less expensive 116 than common ILs because they can be synthesized by neutralizing a selected base with 117 a protic acid under mild conditions <ns0:ref type='bibr' target='#b27'>(Greaves & Drummond, 2008)</ns0:ref>; <ns0:ref type='bibr' target='#b31'>(Hayes et al., 2015)</ns0:ref>; 118 <ns0:ref type='bibr' target='#b56'>(Xu & Angell, 2003)</ns0:ref> . 
Besides, PILs are less toxic <ns0:ref type='bibr' target='#b52'>(Oliveira et al., 2016)</ns0:ref>; <ns0:ref type='bibr'>(Mukund et al., 119 2019)</ns0:ref>; <ns0:ref type='bibr' target='#b13'>(Bodo et al., 2021)</ns0:ref> and are known to form strong hydrogen bonds due to their 120 labile protons <ns0:ref type='bibr' target='#b16'>(Chhotaray et al., 2014)</ns0:ref>. In particular, caprolactam-based ionic liquids 121 <ns0:ref type='bibr'>(CPILs)</ns0:ref> have recently been identified as lipid extraction solvents <ns0:ref type='bibr' target='#b45'>(Mukund et al., 2019)</ns0:ref> 122 and catalysts for lipid transesterification reactions <ns0:ref type='bibr' target='#b43'>(Luo et al., 2017)</ns0:ref>. The findings showed 123 the capability of these CPILs to disrupt cells and extract lipids in a single step.</ns0:p><ns0:p>124 Despite their many potential advantages, CPILs are rarely synthesized and their 125 application is therefore limited. However, CPILs researchers are currently working to 126 produce new forms of CPILs that could be used as green solvents. In spite of these 127 efforts, the efficacy of CPILs extraction of lipids from S. platensis is hardly reported.</ns0:p><ns0:p>128 To establish whether it is possible to improve the long-term viability and sustainability of 129 the extraction procedure for lipids, we have investigated the effectiveness and efficiency 130 of six CPILs -Caprolactamium chloride (CPHA), Caprolactamium methyl sulphonate 131 (CPMS), Caprolactamium trifluoromethane sulfonate (CPTFS), Caprolactamium acetate 132 (CPAA),Caprolactamium hydrogen sulphate (CPSA) and Caprolactamium 133 trifluoromethane-acetate (CPTFA) -for extraction of lipids from wet and dry S. platensis 134 microalgae biomass. Of these, the first three are novel ILs <ns0:ref type='bibr' target='#b50'>(Naiyl et al., 2021)</ns0:ref>, whereas 135 the others -except for Caprolactam acetate -were used for the first time in lipids' 136 extraction. Extractions with both pure ionic liquids and ionic liquids/methanol mixtures 137 were done to establish whether organic co-solvents could improve lipid extraction yields.</ns0:p><ns0:p>PeerJ An. Chem. reviewing PDF | (ACHEM-2021:09:65625:1:0:NEW 16 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>followed by stirring at room temperature for 24h. Four of these ILs are liquids at room 153 temperature, namely CPAA, CPSA, CPTFA and CPTFS. The methodologies for 154 synthesis and characterization are explained in detail elsewhere in the literature <ns0:ref type='bibr'>(Naiyl et 155 al., 2021)</ns0:ref>.</ns0:p><ns0:p>156 Extraction of lipids using the traditional Hexane: Methanol method 157 As described by <ns0:ref type='bibr'>(Chiappe et al., 2016) a hexane-methanol mixture (54:46, v/v, 150 ml)</ns0:ref> 158 was used to extract lipids from wet (80 %) and dry microalgae (3.0 g) by the Soxhlet 159 extraction method, which has three main compartments. 250 ml round bottom flask 160 holding the solvent, extraction chamber and condenser. First, the sample was placed in 161 a porous thimble, the flask was heated, and then the solvent was evaporated and carried 162 to a condenser, where it was converted to a liquid and collected in the extraction chamber 163 containing the sample. As the solvent passed through the sample, the lipids were 164 extracted and transported to the flask. This process lasted 10 hours. After extraction, a 165 rotary evaporator was used to remove the solvent. Then extracted lipids fraction was 166 transferred into a weighed beaker and dried in an oven at 60 °C until it reached a constant 167 PeerJ An. Chem. reviewing PDF | (ACHEM- <ns0:ref type='table'>2021:09:65625:1:0:NEW 16 Jan 2022)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science PeerJ An. Chem. reviewing PDF | (ACHEM- <ns0:ref type='table'>2021:09:65625:1:0:NEW 29 Dec 2021)</ns0:ref> weight. The experiments were carried out in duplicate and the crude lipids extraction yield 168 was then calculated using the following formula:</ns0:p><ns0:formula xml:id='formula_0'>𝑅 𝑙𝑖𝑝𝑖𝑑 % = 𝑊 𝑙𝑖𝑝𝑖𝑑 𝑊 𝑏𝑖𝑜𝑚𝑎𝑠𝑠 × 100 (1) 169 170</ns0:formula><ns0:p>Where the R lipid and W lipid are the recovery and the weight of crude lipid extracts, 171 respectively. Wbiomass is the initial dry biomass weight (g).</ns0:p><ns0:p>173 Lipid extraction using ionic liquids 174 The effect of reaction time and temperature on lipid extraction.</ns0:p><ns0:p>175 Three ionic liquids, CPAA, CPHA, and CPSA were utilized for lipid extraction for 5 h at 176 75 °C and for 2 h at 95 °C using reflux (150 ml round bottom flask fitted with a condenser).</ns0:p><ns0:p>The dry biomass of microalgae was mixed with the ionic liquid in a ratio of 1:19 (w/w) 177 under magnetic stirring. After extraction, a tri-phasic system was obtained by 178 centrifugation. The top phase contained lipids, the middle phase contained IL with 179 methanol, and the bottom phase contained the algae residue. The upper lipid phase 180 could not be easily retrieved due to the small scale of the experiment and, therefore, the 181 mixture was treated with n-hexane (10 ml) or a mixture of hexane: methanol 2:1 (v/v) to 182 ascertain the actual lipid yield. The recovered n-hexane phase was washed two times 183 with water to remove polar compounds. The lipid fraction was dried in a thermostat oven 184 at 60 °C until it reached a constant weight, and the residue was weighed to calculate the 185 gravimetric yield using formula (1). The overall lipid extraction process is shown in Fig <ns0:ref type='figure' target='#fig_5'>1</ns0:ref>.</ns0:p><ns0:p>186 Lipid extraction using pure ionic liquids 187 Lipid extraction using the six-caprolactam ionic liquids was performed at 95 o C for 2 h. 188</ns0:p><ns0:p>Afterwards, lipids were extracted from the ionic liquids following the same procedure 189 described above. In the case of CPSA, CPMS, and CPHA, hexane: methanol 2:1 (v/v) 190 was used, rather than hexane, because these CPILs solidify after mixing with hexane. showed that using CPAA for extraction resulted in the lowest yield compared to the control 254 <ns0:ref type='bibr' target='#b45'>(Mukund et al., 2019)</ns0:ref>. The reason for this can be due to the fact that the cell wall 255 structures of microalgae, which contains cellulose, glycoprotein, silica, and peptidoglycan 256 <ns0:ref type='bibr'>(Zhou et al., 2019)</ns0:ref>, may vary from one type to another. Therefore, the ability of different 257</ns0:p><ns0:p>ILs to penetrate the cell wall may also vary. Hence, the wall structure of S. platensis might 258 be more affected by CPAA. On the other hand, lower yields (circa. 5%) were obtained 259 using CPSA (P-value = 0.003), CPTFS (P-value = 0.031), and CPMS (P-value = 0.01). 260</ns0:p><ns0:p>Overall, the CPILs containing sulphate and sulphonate anions recorded the lowest lipids 261 yield relative to the control experiment. negative control for comparative purposes and the obtained lipids yield was (1.31 ± 266 0.27%). The highest yield was (14.2 ± 0.11%, P-value = 0.007) and (13.1 ± 0.1%, P-value 267 = 0.02) for the CPAA/ MeOH and CPHA/ MeOH mixtures, respectively. 
Here, a significant 268 increase (P˂ 0.05) over that of pure CPAA and CPHA was observed. However, the lipids 269 yield of the CPTFA/ MeOH mixture (11.1 ± 0.13 %, P-value = 0.028) is slightly increased 270 compared to the pure one, but was still significantly different from that of the control 271 experiment (P˂ 0.05). This can be attributed to different interaction mechanisms between 272 the ILs and methanol. The mixtures of methanol with CPSA, CPTFA, and CPMS showed 273 an improvement in lipid yield of around (6%).</ns0:p><ns0:p>274 The co-solvent effect can be explained by enhancement of microalgae cell disruption 275 through the action of the polar methanol, which improves the efficiency of lipid extraction 276 from biomass <ns0:ref type='bibr' target='#b29'>(Halim et al., 2012)</ns0:ref>; <ns0:ref type='bibr' target='#b20'>(Dong et al., 2016)</ns0:ref>. The reason behind this is that 277 some non-polar lipids are found in the cytoplasm as a complex with polar lipids. This 278 complex is strongly bound to proteins in the cell membrane via hydrogen bonds. 279</ns0:p><ns0:p>PeerJ An. Chem. reviewing PDF | (ACHEM-2021:09:65625:1:0:NEW 16 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science</ns0:p></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Manuscript to be reviewed which makes lipid transfer easier <ns0:ref type='bibr'>(Zhou et al., 2019)</ns0:ref>. The same author has also reported 285 that, the addition of methanol may reduce the viscosity of the ILs, boosting the possibility 286 of hydrogen bonds forming between fibres on the microalgae cell wall and ionic liquids.</ns0:p><ns0:p>287 Moreover, the differences in intra molecular interactions of these ionic liquids seem to be 288 the main reason for their capability to form hydrogen bonds with the microalgae cell walls, stage, which is considered a major cause for the high cost of biodiesel production.</ns0:p><ns0:p>294 For this investigation, we used the three mixtures of CPAA/ MeOH, CPTFA/ MeOH and 295 CPHA / MeOH, which recorded the highest lipid yields from dry biomass. The control 296 sample of wet S. platensis biomass (80 %) provided a lipid yield of (4.1 ± 0.06 %). The 297 results show that CPAA/ MeOH mixture provided the maximum yield of (8.07 ± 0.09 %, 298 Pvalue = 0.001), followed by CPTFA/ MeOH (6.1 ± 0.1 %, P-value = 0.005) and CPHA/ 299</ns0:p><ns0:p>MeOH mixture recorded the lowest yield of (5.1 ± 0.08 %, P-value = 0.008). As can be 300 seen the three mixtures provided a higher yield (P˂ 0.05) over that of the control (Fig <ns0:ref type='figure' target='#fig_9'>5</ns0:ref>). 301</ns0:p><ns0:p>On the other hand, the lipids yield of CPAA is almost similar (P-value = 0.047) to that of 302 the control sample from dry biomass (9.5 ± 0.23%). Therefore, from an economic point of 303 view, CPAA could be the most promising ionic liquid for production of biodiesel from S. Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Chemistry Journals Figure 1</ns0:note><ns0:note type='other'>Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science Figure 2</ns0:note></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science Manuscript to be reviewed</ns0:p><ns0:note type='other'>Figure 3</ns0:note></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science Manuscript to be reviewed</ns0:p><ns0:note type='other'>Figure 4</ns0:note></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science Manuscript to be reviewed</ns0:p><ns0:note type='other'>Figure 5</ns0:note></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science Manuscript to be reviewed</ns0:p><ns0:note type='other'>Figure 6</ns0:note></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science Manuscript to be reviewed</ns0:p><ns0:note type='other'>Figure 7</ns0:note></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science Manuscript to be reviewed</ns0:p><ns0:note type='other'>Figure 8</ns0:note></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>101102</ns0:head><ns0:label /><ns0:figDesc>Ionic liquids (ILs) have lately been identified as promising green solvents in the extraction 103 of microalgal lipids based on their fascinating physicochemical properties such as being 104 non-volatile, non-flammable, chemically and thermally stable, and having the potential for 105 recovery and design</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>191 The solidification happened, due to the fact that, CPHA and CPMS are solid at room 192 temperature, whereas CPSA is highly viscous. Thus, when hexane was added at room 193 PeerJ An. Chem. reviewing PDF | (ACHEM-2021:09:65625:1:0:NEW 16 Jan 2022) Manuscript to be reviewed Chemistry Journals Germany) equipped with a flame ionization detector (FID) and an Agilent CPSil 88 223 capillary column was used to analyze the recovered FAME, using nitrogen as a carrier 224 gas and other gases such as hydrogen and air. The FID and the injector port temperatures 225 were kept at 260 °C and 240 °C, respectively. The injection volume was 0.5 µL and gas 226 flow rate was 100 ml/min. The temperature program was held at 150 °C for 1 min, 227 increased to 220 °C at 10 °C/min and held for 2 min, then increased to 240 °C at 3 °C/min, 228 and finally maintained at 240 °C for 8 min. For external calibration, a 37-component 229 FAMEs standard mixture was used. 230 Data Analysis 231 The analysis was performed in duplicate and the obtained data was expressed as (Mean 232 ± standard deviation). The T-test was used to compare results with controls, and an effect 233 was considered to be significant when P ≤ 0.05. 234 Results and discussion 235 Optimization of lipid extraction time and temperature 236 Figure 2 shows the comparison of lipids yield between extraction for 5 h at 75 o C and for 237 2 h at 95 o C of three selected CPILs. In order to minimize reaction time, long period/low 238 temperature and short period/high temperature were compared. The results showed that 239 there was no significant difference in lipids extraction yields (P-value = 0.23) of CPHA, 240 CPAA and CPSA at 75 o C for 5 h and at 95 o C for 2 h. Therefore, a synthetic ionic liquid 241 was used for lipids extraction at 95 o C for 2 hours. 242 Extraction of lipids by a conventional method and pure ILs 243 The total recovered lipids obtained by conventional Soxhlet extraction using (Hexane-244 MeOH) as solvents, as well as the extraction yields of the synthesized CPILs, are shown 245 in Fig. 3. The yield from the conventional organic solvent extraction method was 9.5 ± 246 0.23 %, which is similar to the yield obtained by (Mandal, Patnaik, Singh, & Mallick, 2013), 247 who used the following conditions: Chloroform: methanol (2:1), 6 h at r.t. Three of the 248 CPILs had no significant differences in extraction yields compared to the control 249 PeerJ An. Chem. reviewing PDF | (ACHEM-2021:09:65625:1:0:NEW 16 Jan 2022) Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science PeerJ An. Chem. reviewing PDF | (ACHEM-2021:09:65625:1:0:NEW 29 Dec 2021)experiment (P > 0.05). In particular, CPAA, CPTFA and CPHA had lipid yields of (10.1 ± 250 0.28 %, P-value = 0.153), (10.1 ± 0.25%, P-value =0.159), and (9.3 ± 0.1 %, P-value = 251 0.326), respectively. This contrasts with previous findings on lipid extracts from dried and 252 dehydrated marine Nannochloropsis oculata and Chlorella salina microalgae, which 253</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>262</ns0:head><ns0:label /><ns0:figDesc>The effect of organic co-solvents on ionic liquid extraction of lipids 263 Fig 4 shows the yields for extraction of lipids from dry biomass using mixtures of ionic 264 liquids and methanol (as co-solvent). Methanol (MeOH) was also used separately as a 265</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Analytical, Inorganic, Organic, Physical, Materials Science PeerJ An. Chem. reviewing PDF | (ACHEM-2021:09:65625:1:0:NEW 29 Dec 2021)Therefore, the ILs and MeOH, which are hydrophilic in nature, can disrupt lipid-protein 280 associations by forming hydrogen bonds with the polar lipids of the complex(Halim et al., 281 2012). As a result, whereas ILs improve the permeability of the cell wall, methanol 282 accelerates the precipitation of lipids from the cell(Zhou et al., 2019). It is also thought 283 that the action of the ILs -Methanol system creates a more hydrophobic environment, 284</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>289 and thus lead to the differences in their effectiveness for lipid extraction.290 The effect of IL/Methanol mixture on lipid extraction from wet biomass 291 In order to reduce the cost of extraction of lipids from microalgae, we have also studied 292 the potential of these CPILs to extract the lipids from wet biomass to avoid the drying 293</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. A schematic presentation of the lipid extraction process</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Fig. 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Fig. 2. The yield of extracted Lipids by ionic liquids at75 o C for 5 h and 95 o C for 2 h</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Fig 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Fig 3. The yields of extracted lipid by pure ionic liquids and Hexane: MeOH (control)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Fig. 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Fig. 4. Comparison of lipid extraction yields by ILs and mixtures of IL/Methanol (1:1)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Fig 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Fig 5. Lipid extraction yields from wet Spirulina platensis biomass</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Fig. 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Fig. 6. The relative fatty acids composition in lipids recovered by (a) control (Hexane: MeOH), (b) by CPAA/MeOH, (c) by CPHA/MeOH, and (d) by CPTFA/MeOH mixtures</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Fig. 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Fig. 7. Saturated and unsaturated fatty acid methyl esters profile in biodiesel produced by IL/MeOH mixtures</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure. 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure. 8. Extracted lipid samples by CPAA (A), CPHA (B), and CPTFA (C).</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>of fatty acids composition in the lipid fraction</ns0:head><ns0:label /><ns0:figDesc>would increase the number and length of biodiesel production processes, resulting 342 in less sustainable economics. Fig.8(A)-(C) shows equal lipids fractions (w/w, diluted in 343 hexane to facilitate color observation) produced by CPAA/Me, CPHA/Me and CPTFA/Me 344 mixtures, respectively. The color intensity of the lipids extracted by CPAA in Figure8(A) 345 seems to be much darker than the lipids extracted by CPHA and CPTFA in Figures8(B)346 and (C), respectively. This indicates that utilization of the last two CPILs causes pigments' 347 deterioration, particularly CPHA -which produces a yellow extract, a beneficial outcome 348 that would potentially minimize the number of steps required in processing biodiesel. In 349 general, the results reveal that the three CPILs can efficiently extract lipids from S. 350 platensis microalgae and that CPHA aids in lowering the pigment content in the lipid 351 samples.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>352 Conclusions</ns0:cell></ns0:row><ns0:row><ns0:cell>353 Six caprolactam-based ionic liquids (CPILs) were investigated for lipid extraction from wet Zhao, H., & Baker, G. A. (2013). Ionic liquids and deep eutectic solvents for</ns0:cell></ns0:row><ns0:row><ns0:cell>biodiesel 354 and dried S. platensis microalgae, using both pure CPILs and co-solvent mixtures 355 455 synthesis: A review. Journal of Chemical Technology and Biotechnology, 88(1), 3-</ns0:cell></ns0:row><ns0:row><ns0:cell>(CPILs/methanol), and compared to the conventional organic solvent (methanol/hexane) 356 456 12. https://doi.org/10.1002/jctb.3935</ns0:cell></ns0:row><ns0:row><ns0:cell>extraction method. The pure forms and IL/methanol mixtures of three of these CPILs -357</ns0:cell></ns0:row><ns0:row><ns0:cell>Caprolactamium acetate (CPAA), Caprolactamium chloride (CPHA), and caprolactam 358</ns0:cell></ns0:row><ns0:row><ns0:cell>trifluoromethane acetate (CPTFA) -showed higher or similar lipid recovery efficiency from 359</ns0:cell></ns0:row><ns0:row><ns0:cell>dry biomass compared to the conventional organic solvent (hexane -methanol) extraction 360</ns0:cell></ns0:row><ns0:row><ns0:cell>method. The use of CPAA provided a maximum lipid recovery of 14 % and 8 % from dry 361</ns0:cell></ns0:row><ns0:row><ns0:cell>and wet biomass, respectively. On other hand, CPHA and CPTFA minimized pigment co362</ns0:cell></ns0:row><ns0:row><ns0:cell>304 extraction, resulting in reduced purification steps in biodiesel production. Furthermore, 363 the</ns0:cell></ns0:row><ns0:row><ns0:cell>platensis biomass -in terms of energy and cost savings, when compared to conventional 305 lipids profiles of the three CPILs were dominated by palmitic acid, oleic and stearic 364 fatty</ns0:cell></ns0:row><ns0:row><ns0:cell>extraction processes. acids, comparable to those produced by the conventional method. Therefore, the 365 three</ns0:cell></ns0:row><ns0:row><ns0:cell>CPILs are promising green solvents, with potential energy and cost savings in 366 biodiesel</ns0:cell></ns0:row><ns0:row><ns0:cell>production from microalgae. 
Further studies should investigate the intra 367 molecular</ns0:cell></ns0:row><ns0:row><ns0:cell>interactions of these ILs and their effectiveness for extraction of lipids.</ns0:cell></ns0:row><ns0:row><ns0:cell>368</ns0:cell></ns0:row></ns0:table><ns0:note>306 Determination 307 The experiments of CPILs lipids extraction with dried microalgae show that CPAA/MeOH, 308 CPHA/MeOH, and CPTFA/MeOH mixtures gave the highest lipid yields. However, it is 309 PeerJ An. Chem. reviewing PDF | (ACHEM-2021:09:65625:1:0:NEW 16 Jan 2022) Manuscript to be reviewed Chemistry Journals colour 457 Zhou, W., Wang, Z., Alam, A., Xu, J., Zhu, S., Yuan, Z., … Ma, L. (2019). Repeated 458 Utilization of Ionic Liquid to Extract Lipid from Algal Biomass. 2019.459</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 (on next page)Table 1 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Caprolactam-based ionic liquids used in the study.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Water content</ns0:cell></ns0:row><ns0:row><ns0:cell>(w/w %)</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ An. Chem. reviewing PDF | (ACHEM-2021:09:65625:1:0:NEW 16 Jan 2022) Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science Analytical, Inorganic, Organic, Phy Abbreviation sical, Materials Scie Structural formula nce pt to be reviewed PeerJ An. Chem. reviewing PDF | (ACHEM-2021:09:65625:1:0:NEW 16 Jan 2022)</ns0:note></ns0:figure>
<ns0:note place='foot'>Analytical, Inorganic, Organic, Physical, Materials Science </ns0:note>
<ns0:note place='foot' n='149'>The CPILs used for lipid extraction experiments are listed in (Table1), and were 150 synthesized by adding equimolar quantities of caprolactam and an acid (hydrochloric, 151 methane sulphonic, trifluoromethanesulphonic, acetic, Trifluoroacetic, and sulfuric), 152</ns0:note>
<ns0:note place='foot'>PeerJ An. Chem. reviewing PDF | (ACHEM-2021:09:65625:1:0:NEW 16 Jan 2022)Manuscript to be reviewed</ns0:note>
</ns0:body>
" | "Dear editor,
We thank the reviewers for their valuable comments on the manuscript, and we have edited it to address their concerns.
We believe that the manuscript can now be published in PeerJ.
Rania Awad Naiyl
On behalf of all authors
Reviewer 1 (Anonymous)
Basic reporting
The authors here describe a protocol of extracting lipids from wet and dried Spirulina platensis microalgae using ionic liquids. This is indeed an interesting work towards a step to follow green methodology for finding resources of biofuels. The manuscript may be published in the PeerJ Analytical Chemistry after considering the points mentioned in the detailed report.
Experimental design
In some cases, the authors have mentioned the experimental procedure to be found in literatures, such as, 'Soxhlet extraction'. Authors may consider to provide a brief description of all such methods to help the readers to immediately have an idea while going through the paper.
We appreciate these keen observations. We have reviewed this and made corrections. We hope this is agreeable. See highlighted part in L157 and L 176.
Validity of the findings
The results are interesting which are validated by reproducible data set.
Thank you for the commendation
Additional comments
1. The authors may clarify if the ionic liquids (ILs) used in the study are the room temperature ILs. Mentioning their melting point is important as a reader may find different uses of these molecules.
Four of the ILs are liquid at room temperature, namely CPAA, CPSA, CPTFS and CPTFA, whereas CPHA and CPMS are solid at room temperature. Their melting points have not been determined because of instrumental limitations, and we have recommended this aspect for future studies. However, we observed that the solid ILs melted below 90 ˚C during lipid extraction. Therefore, we can say that the first four ILs are room-temperature ILs. I have added a clarifying comment at lines 152-153. More information about the physical properties is available in our article (Naiyl et al. 2021).
2. All the ILs have same cation but differing in their anions. The different effectiveness of extracting lipids seems to depend on the type of anions. Why so?
The reason may be that disruption and penetration of the microalgae cell wall proceed through the formation of hydrogen bonds between the ionic liquids and the lipid complexes on the cell wall, which releases the lipids from the cells. Therefore, the differences in the intramolecular interactions of these ionic liquids seem to be the main reason for their differing capability to form hydrogen bonds with the microalgae cell walls, and hence for the differences in their effectiveness for lipid extraction. We have therefore recommended a deeper study of the cation-anion interactions of these ILs in the future, and we have added this comment and recommendation to the discussion (lines 276 - 280 and 287 - 289) and conclusion (lines 367 - 368).
3. The authors primarily have described their extracted lipids to be used for biofuel. Are they suitable for preparing model membrane, such as, vesicles? A discussion may be added to extend the uses of the lipids.
This is an interesting suggestion. Yes, these lipids can be used for preparing vesicles; specifically, palmitic (C16) and oleic (C18) acids have been used for this purpose in some studies. However, in this study we focus only on finding a solution to the process limitations of microalgae biofuel technology, so we will consider this suggestion in our future work.
Reviewer 2 (Anonymous)
Basic reporting
Species names should be properly mentioned (following the guidelines and put in italic, etc).
Corrected, except the one at lines 319 - 320, because the reference does not specify the species.
Furthermore, I suggest the authors not to overgeneralize the statement from literature and I invite the authors to perform critical reflections on the results.
Agreed. We have added another reference that supports our claim at lines 119 - 120. The critical reflection on the results at L245 - L251 of the annotated manuscript is detailed on pages 7 & 8 of this rebuttal.
Experimental design
The water content was unfortunately not reported, thus the different ionic liquids are not directly comparable.
You are right. The water content has been added in Table 1.
I especially disagree with the method to determine the yield, which I address several times in the annotated PDF. This could result in misleading interpretation, serious claims and wrong conclusion.
This is quite correct, and I sincerely apologize for this mistake. I have reviewed and corrected the method for determination of the lipid yield so that it describes the exact one that was used (lines 168 - 172).
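For reference, the corrected gravimetric yield expression now given as Eq. (1) of the revised manuscript is
\[ R_{\mathrm{lipid}}\,(\%) = \frac{W_{\mathrm{lipid}}}{W_{\mathrm{biomass}}} \times 100, \]
where \(W_{\mathrm{lipid}}\) is the weight of the crude lipid extract (g) and \(W_{\mathrm{biomass}}\) is the initial dry biomass weight (g).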
Validity of the findings
Due to the improper method (described above), the authors obtained peculiar data, which seems to be 'too good to be true'. A critical reflection on the obtained results is not observed from the report.
I completely agree and I have corrected it as explained above. I hope this is agreeable.
Replicates or standard deviations in some sections were not reported.
The method was used to identify the fatty acid methyl esters (FAMEs) in the lipid extracts and to determine their relative percentages, using a 37-component FAME standard mixture for identification by retention time and the total FAME peak areas present in each sample for quantification. Therefore, replication is not needed in this case; it would become important if we determined the biodiesel yield using the EN 14103:2011 standard test method. I have added the use of the standard mixture (lines 228 – 229).
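In other words (our notation, not taken from the manuscript), the relative composition reported for each FAME follows from area normalisation,
\[ \mathrm{FAME}_i\,(\%) = \frac{A_i}{\sum_j A_j} \times 100, \]
where \(A_i\) is the GC peak area of FAME \(i\) and the sum runs over all FAME peaks in the sample; identities are assigned by matching retention times against the 37-component standard.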
Annotated manuscript
The reviewer has also provided an annotated manuscript as part of their review:
L 80-82: This sentences is too long, please consider to split it for better readability.
Done.
L 90: the transition from biofuel story to the cell disruption and lipid extraction is not sufficient.
Agreed. I have deleted the part of cell disruption because it had already been sufficiently captured in lines 105 - 108.
L 105: The authors made several sloppy mistakes of double commas.
Corrected.
L 112: literature is required to support the statement of high cost of imidazole-based IL. What about imidazole-based protic IL? Is it still more expensive than caprolactam-based IL?
We have added two references to support the statement at line 112, namely (Andreani & Rocha, 2012) and (George et al., 2015). Yes, imidazole-based protic ILs are still more expensive than caprolactam-based ILs: according to Sigma-Aldrich, the current prices of the starting materials are € 41.8 and € 207 per kg for caprolactam and imidazole, respectively.
L 119: the author state here that PILs are less toxic. However, in the literature that they referred, it is not specifically mentioned that PILs, in general, are less toxic. Instead, caprolactam-based ILs have low toxicity. Have not convinced with the author generalization that PILs are less toxic than aprotic IL.
Agreed. We have added other references that support our claim at L 119 – 120, namely (Oliveira et al., 2016), (Mukund et al., 2019) and (Bodo et al., 2021).
L 126: Are there particular reason why CPILs are rarely synthesized?
No particular reason is reported in the literature, despite their attractive properties, such as being less expensive and less toxic, and the availability of caprolactam in large quantities from industry. Nevertheless, they have attracted considerable attention in recent years, including applications in lipid extraction and biodiesel production, as mentioned in this manuscript. Another application of CPILs reported in the literature (Liu et al. 2013) is their capability to sorb SO2. I did not include this last application in the manuscript because the focus is on lipid extraction and biodiesel production, but it could be included if necessary.
L 131-133: Please also include the abbreviation from Table 1 here.
Have included.
L 153: Ionic liquids are hygroscopic, which absorbs moisture. Is there water removal step prior to the use of the ILs? If there is no water removal step, CPHA [caprolactamium chloride] would be diluted in water. Please mind that the starting hydrochloric acid is not pure HCl, but concentrated aqueous solution this would imply that ILs used in this study are not comparable to one another.
Not all ionic liquids are hygroscopic. For example, caprolactamium trifluoroacetate (CPTFA), which was first studied by Du et al. 2005, is moisture stable - as are all ILs that were used in our study. Following is the procedure that we used to prepare the IL as reported in our previous study (Naiyl et al 2021), including the water removal step and determination of water content:
Water (10 mL) was added to a 100 mL flask containing 11.32 g of caprolactam (0.1 mol) and stirred. Then 3.65 g of hydrochloric acid (0.1 mol) was added slowly to the flask over 30 min in an ice bath. The flask was left to stand for 24 hours at room temperature. Thereafter, water was removed by vacuum distillation and the mixture was washed with toluene, followed by drying at 80 ˚C in a vacuum oven, and the resultant percentage yield was calculated. The water content of the synthesized CPILs was then measured. I have added the water content measurements to Table 1.
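As a quick arithmetic check of the equimolar amounts quoted above (using standard molar masses, which are not stated in the letter, of about 113.16 g/mol for caprolactam and 36.46 g/mol for HCl):
\[ n_{\mathrm{caprolactam}} = \frac{11.32\ \mathrm{g}}{113.16\ \mathrm{g\,mol^{-1}}} \approx 0.100\ \mathrm{mol}, \qquad n_{\mathrm{HCl}} = \frac{3.65\ \mathrm{g}}{36.46\ \mathrm{g\,mol^{-1}}} \approx 0.100\ \mathrm{mol}. \]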
L 167-168: A negative control is required for this experiment. At high temperature, the cell disruption might already occur even without ILs. A negative control with water or other non- IL solvent would give a good baseline.
Of course. Therefore, we have used methanol separately for comparative purposes, which can be considered a negative control. We have already mentioned that in the results and discussion section, at lines 264 - 265 and Fig 4. We have added this comment to the methodology section at line 199.
L 171 please be consistent about the unit, use ml instead of cm3.
Corrected.
L 172: The addition of hexane might enhance the lipid extraction from the biomass. However, the solubility of hexane in the ILs was not reported. This would complicate the process description if hexane is partially soluble in the ILs.
In fact, the ILs in this study are hydrophilic in nature (more details are available in our article (Naiyl et al. 2021), which means that they are insoluble in hexane but miscible with methanol. Therefore, a tri-phasic system was obtained by centrifugation. The top phase contained lipids, the middle phase contained IL with methanol, and the bottom phase contained the algae residue as shown in Fig 1. We have added this comment (L178 - L180). I hope it is clarified.
L 174: The yield can be determine directly from the obtained lipid fraction, right. Or did the author still use the equation from line 165? If so, the complete separation of biomass and ILs is required.
Yes, the yields were determined directly from the obtained lipid fraction as corrected previously. I have added a clarification at line 186.
L 179: should this be CPSA
Corrected.
L 180: CPSA and CPHA were both also used in the previously described method (evaluation of time and temperature), but methanol was required there. Or did I not interpret the method correctly? Furthermore, is there an explanation why the solidification happened when mixed with only hexane and not with hexane/methanol?
Methanol was not required in the evaluation of time and temperature method. In this method, hexane was added only in the case of CPAA, whereas the mixture of hexane and methanol (2:1) was added for CPSA and CPHA. The only purpose of this step is to ascertain the actual lipids yield, as explained in the method. I have added the comment in the method (evaluation of time and temperature) at line 182. Furthermore, there was already a clarification on the addition of hexane/MeOH mixture in the method (Lipid extraction using pure ionic liquids) at lines 189 - 190.
The solidification happened because CPHA and CPMS are solid at room temperature, whereas CPSA is highly viscous. Thus, when hexane, which is immiscible with the ILs, was added at room temperature (in our case this ranged from 18 - 20 ˚C), the resulting decrease in temperature led to solidification of the mixture. Therefore, methanol, which is miscible with the ILs, was added to ease their transfer to centrifuge tubes. We have added this explanation at lines 191 – 196.
L 195: Were replicates included in this analysis? Replicates is vital in this analysis, especially if the lipid concentration is small.
This method was used to identify and determine the percentage of free fatty acid methyl esters (FAMEs) content in the lipids’ extracts using a 37-component FAMEs standard mixture, based on retention time and the total areas of FAMEs present in each sample, respectively. This was essentially qualitative analysis and, therefore, there was no need for replicates in this case. Replication is important when we determine the biodiesel yield using EN 14103:2011 standard test method. I have added the use of the standard mixture (L 229 - 230).
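To illustrate how such a qualitative FAME profile is typically worked up, the sketch below matches sample peaks to the standard mixture by retention time and expresses each identified FAME as a percentage of the total FAME peak area; the retention times, tolerance and areas are illustrative values of ours, not data from the study.

```python
# Sketch: express each identified FAME as % of total FAME peak area.
# Peaks are matched to the standard mixture by retention time within a tolerance.
# Retention times and areas below are made-up illustrative numbers.
STANDARD_RT = {"C16:0": 18.2, "C18:1n9c": 22.7, "C18:2n6c": 23.9}  # minutes

def assign_and_percentage(sample_peaks, rt_tolerance=0.1):
    matched = {}
    for rt, area in sample_peaks:
        for name, std_rt in STANDARD_RT.items():
            if abs(rt - std_rt) <= rt_tolerance:
                matched[name] = matched.get(name, 0.0) + area
    total = sum(matched.values())
    return {name: 100.0 * area / total for name, area in matched.items()}

print(assign_and_percentage([(18.23, 1.5e6), (22.68, 3.2e6), (23.95, 0.8e6)]))
```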
L 230-233: Did the CPILs only extract lipids? Or there could be other compound extracted? This is related to the yield determination described in line 163.
No, they did not. Polar compounds (i.e. proteins, polar lipids and cellulose) may also be extracted, but these are all dissolved in the CPILs. However, the CPILs can release the non-polar lipids, such as triglycerides, from the cell through the formation of hydrogen bonds with the cell fibre; the non-polar lipids are not soluble in the CPILs and thus are easy to recover, as explained in the manuscript. Thus, the other compounds are not related to the lipid yield determination.
L 243: Is there explanation, why S-containing CPILs did not extract the lipids well?
I am attempting to find out the reason in our current ongoing study.
L 245-251: The authors reported that methanol/ILs extracted 50% more lipids compared to the standard protocol (Soxhlet). This is a big claim, given that Soxhlet extraction is widely accepted to extract 100% of the lipids from biomass. Thus, I invite the authors to critically evaluate this result. Other compounds might also be isolated during this extraction, contributing non-lipid weight to the measured lipids.
1. The claim that the Soxhlet method extracts 100 % of the lipids from biomass depends on the solvent system, the ratios of the solvents used for the extraction, and the volume of the solvent as well. The 'conventional method' as used in this MS refers to the use of organic solvents (pure or mixtures). In any case, there is evidence in the literature that some methods are more efficient than Soxhlet extraction. For instance, a study published by Pohndorf et al. 2016 compared the cold and hot extraction methods of lipids from Spirulina sp. biomass; the methods were as follows:
I. In the cold method: to 2 g of biomass, 40 ml of chloroform:methanol (2:1 v/v) was added and stirred for 1 h.
II. In the hot method: to 2 g of biomass, 150 ml of hexane was added and refluxed for 6 h in a Soxhlet apparatus.
The lipid yields from the cold and hot extractions are 5.86 ± 0.37 and 1.33 ± 0.03, respectively. Thus, this finding refutes the above claim and gives credence to our finding that the methanol/ILs method could be more efficient than the Soxhlet method.
2. As for the possibility of isolation of other compounds, I have detailed these in the response to your previous comment at lines 230-233 in the annotated manuscript. Moreover, the washing step of the lipid/hexane layer was also performed to ensure that the extract was free of other compounds.
L 270-271: From this result, I distill that the CPAA/MeOH is 100 % better for wet extraction than the Soxhlet method. I could understand that the presence of water decrease the efficiency of the organic solvent, but 2-fold value is good, even a bit suspicious for me. And the unit of the extraction yield is unclear to me. Did the authors use the initial and final wet biomass for this?
The efficiency of CPAA/MeOH in lipid extraction from wet biomass may be due to its ability to penetrate the cell wall, even in the presence of water. I have explained this in the response to reviewer #1, and at lines 276 - 280 and 287 - 289 in the MS.
Regarding the extraction yield, it was determined directly from the obtained lipid fraction as explained previously. However, the initial weight of wet biomass was not used for calculation of the lipid yield. We used 1 g dry weight equivalent wet microalgae biomass (80 %) as explained under the sub-section, “Extraction of lipids from wet biomass” at line 206. I have clarified this in lines 210 and 172.
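As a quick check of what "1 g dry weight equivalent wet biomass (80 %)" implies, the arithmetic below is our illustration, under the assumption that the 80 % figure refers to the moisture content of the wet paste.

```python
# Sketch: wet paste mass corresponding to 1 g of dry biomass at 80% moisture (our assumption).
dry_mass = 1.0           # g dry-weight equivalent
moisture_fraction = 0.80
wet_mass = dry_mass / (1.0 - moisture_fraction)
print(f"{wet_mass:.1f} g of wet paste contains {dry_mass} g of dry biomass")  # 5.0 g
```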
Note:
Some references have been added (see the highlighted part in the reference section)
" | Here is a paper. Please give your review comments after reading it. |
672 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. The applications of Cu and CuNPs based on the earth-abundant and inexpensive Cu metal have generated a great deal of interest in recent years, including medical applications. A novel, specific, precise, accurate and sensitive reverse-phase high-performance liquid chromatography (RP-HPLC) method with UV detection has been developed and validated to quantify copper (Cu) and copper nanoparticles (CuNPs) in different biological matrices and pharmaceutical products. Methods. The developed method has been validated for linearity, precision, sensitivity, specificity and accuracy. Cu concentration was detected in pharmaceutical products without an extraction process.</ns0:p><ns0:p>Moreover, liver, serum and muscle tissues were used as biological matrices. High Cu recovery in biological samples was afforded by using citric acid as a green chelating agent, an exact extraction time and pH adjustment. Cu pharmaceutical and biological samples were eluted by acetonitrile: ammonium acetate (50mM) with 0.5 mg/ml EDTA (30:70 v:v) as an isocratic mobile phase. EDTA reacted with Cu ions, forming a Cu-EDTA coloured complex that was separated on the C18 column and detected by UV at 310nm. Results. The developed method was specific, with a short retention time of 4.95 min. It achieved high recovery, from 100.3%-109.9% in pharmaceutical samples and 96.8%-105.7% in biological samples. The precision RSD percentage was less than two. The method was sensitive, achieving low detection limits (DL) and quantification limits (QL). Conclusion. The validated method was efficient and economical for detecting Cu and CuNPs using readily available chemicals such as EDTA and citric acid with a C18 column, which gave the best results on RP-HPLC.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>Copper (Cu) is an essential micronutrient due to its vital role in the body's biological and biochemical processes. It is found in all body tissues and plays a role in making red blood cells and maintaining nerve cells and the immune system <ns0:ref type='bibr' target='#b39'>(Soetan et al., 2010)</ns0:ref>. However, Cu is very toxic in excessive doses and leads to some metabolic disorders and more tissue accumulation and damage <ns0:ref type='bibr' target='#b9'>(Chen et al., 2021 and</ns0:ref><ns0:ref type='bibr' target='#b31'>Ognik et al., 2016)</ns0:ref>. In recent years, copper nanoparticles (CuNPs) have had a strong focus on health-related processes since they possess antibacterial properties and antifungal activity besides their catalytic, optical, and electrical properties <ns0:ref type='bibr' target='#b3'>(Argueta-Figueroa et al., 2014)</ns0:ref>. The development of these CuNPs is constantly growing and progressing for future technologies <ns0:ref type='bibr' target='#b4'>(Camacho-Flores et al., 2015)</ns0:ref>.</ns0:p><ns0:p>Recently, some studies declared the effect of nanoparticles in their metallic form as CuNPs to be antiviral <ns0:ref type='bibr'>(Tomaszewski et al., 2017)</ns0:ref>. Nanometric-sized particles are also efficient in drug delivery, ionizing agents and diagnostic imaging. Additionally, a rapid increase in the improper usage of medications such as antibiotics has led the medical field to investigate new alternatives of biocides against infectious diseases <ns0:ref type='bibr' target='#b33'>(Patra et al., 2018)</ns0:ref>. The emergence of the CuNPs analysis lies in the growing area of applications, and it enhances the knowledge of this new material's nature <ns0:ref type='bibr'>(Khan et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Nano-metals can penetrate biological membranes due to their high physiological solubility and physicochemical properties <ns0:ref type='bibr' target='#b2'>(Awaad et al., 2021)</ns0:ref>. Thus, the data declared by <ns0:ref type='bibr' target='#b15'>Escalona et al. (2017)</ns0:ref> proved that CuNPs had higher plasma ceruloplasmin activity than cupper sulfide (CuS). In PeerJ An. Chem. reviewing PDF | (ACHEM-2021:08:65001:1:2:NEW 15 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science addition, much excretion of CuS than that of CuNPs confirmed that Cu administered as CuNPs was better dissolved than CuS in an acidic environment and probably better absorbed in the digestive tract <ns0:ref type='bibr' target='#b6'>(Cholewińska et al.,2018)</ns0:ref>.</ns0:p><ns0:p>There are several methods for the characterization of CuNPs. One of the standard methods to analyze the shape and size of the CuNPs is the Transmission Electron Microscopy (TEM). Several other methods, such as Dynamic Light Scattering (DLS) and X-Ray Scattering at Small Angles (SAXS), are also used to measure the particle size. Besides this, only the TEM analysis gives authentic images of the morphology and the shape of the nanostructures. The inorganic material's morphological information is collected using an instrument known as the Scanning Electron Microscope (SEM). The most crucial usage of high-resolution EDS/SEM (~100Å) is to achieve three-dimensional images with large depth fields using a simple sample preparation <ns0:ref type='bibr' target='#b8'>(Choudhary et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Copper colloids and all other metals are usually absorbed in the ultraviolet-visible (UV-Vis) range because of the excitation of surface plasmon resonance (SPR). However, UV-Vis spectroscopy is considered to be a convenient method to characterize CuNPs. On the macroscopic scale, some of the colloidal metal materials are comparably different, and in the visible region, some give distinct absorption peaks. The metals such as copper, silver, and gold have shown prominent absorption peaks <ns0:ref type='bibr' target='#b29'>(Moniri et al., 2017)</ns0:ref>. Meanwhile, these methods cannot quantify Cu concentration either in its original or nano form in different pharmaceutical products and biological matrices.</ns0:p><ns0:p>Different techniques and instruments accomplished the detection of Cu ions concentration as atomic absorption spectrometry (AAS) <ns0:ref type='bibr' target='#b45'>(Wang et al., 2014)</ns0:ref>, inductively coupled plasma mass spectrometry (ICP-MS) <ns0:ref type='bibr' target='#b11'>(Cao et al., 2020)</ns0:ref>, ion-pair HPLC <ns0:ref type='bibr'>(Shen et al., 2006)</ns0:ref> Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science diverse analytes types, from small organic molecules and ions to large biomolecules and polymers with highly reproducibility, sensitivity, specificity, precision and robustness <ns0:ref type='bibr' target='#b14'>(Dong, 2013)</ns0:ref>.</ns0:p><ns0:p>Although UV-Vis detectors are the most common type of detectors used for HPLC because of their relative ease-of-use and high sensitivity, it is not applicable for Cu concentration detection.</ns0:p><ns0:p>This study presents a method a novel insight into the quantification of Cu and CuNPs in biological matrices and pharmaceutical products by the development and validation of the UV-HPLC method.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>Materials and Methods</ns0:head></ns0:div>
<ns0:div><ns0:head n='2.1.'>Detection Theory</ns0:head><ns0:p>Cu ions cannot be detected by the UV detector for RP-HPLC with the ordinary extraction method.</ns0:p><ns0:p>This study depends on the reaction between Cu ions and EDTA to form a stable complex. EDTA is a strong chelating agent. It can form very stable complexes with the transition elements because EDTA is a hexadentate ligand (Al-Qahtani, 2017).</ns0:p><ns0:p>This concept was nearly similar to that in the studies by <ns0:ref type='bibr' target='#b27'>Khuhawar and Lanjwani (1995)</ns0:ref> and Rasul Fallahi and Khayatian (2017), who used colored reagents to detect metal ions. This Cu-EDTA complex (Figure <ns0:ref type='figure'>1</ns0:ref>) is easily detected by the UV detector at 310nm.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2.'>Standards, Drugs, and Chemicals:</ns0:head><ns0:p>Copper nitrate (Cu(NO 3 ) 2 ) standard in HNO₃ (0.5mol/L) 1000mg/L was purchased from Merck. Acetonitrile (ACN) and methanol (MeOH) were of HPLC grade (Fischer). Ammonium acetate was purchased from Riedel-de Haen (Buchs, SG, Switzerland). Ethylenediaminetetraacetic acid (ETDA) was from Oxford Lab Chem, India. The analytical grade chemicals and reagents were supplied from BDH Laboratories Supplies (BDH Chemical Ltd., Poole, U.K.). The used </ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.'>Instrumentation:</ns0:head></ns0:div>
<ns0:div><ns0:head n='2.3.1.'>HPLC System:</ns0:head><ns0:p>Agilent Series 1200 quaternary gradient pump, Series 1200 autosampler, Series 1200 UV-Vis detector, and HPLC 2D ChemStation software (Hewlett-Packard, Les Ulis, France). Agilent C18 column (4.6mm id, 150mm, 5µm particle size).</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.2.'>Inductively coupled plasma mass spectrometry (ICP-MS):</ns0:head><ns0:p>Thermo ICP-MS model iCAP-RQ.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.4.'>Sample Preparation</ns0:head></ns0:div>
<ns0:div><ns0:head n='2.4.1.'>Pharmaceutical Samples:</ns0:head><ns0:p>Drug samples were prepared by taking an accurate drug volume in 1% nitric acid to give a final concentration of 1mg/mL. Variable concentrations were then prepared by dilution with the mobile phase.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.4.2.'>Biological Matrices Extraction</ns0:head><ns0:p>Liver and muscle (blank) tissues were obtained from SPF chickens aged 28 days. These chickens were supplied by the QRD experimental farm for research and scientific services, Giza, Egypt, which is owned by QVETeh, a veterinary services company for developing the animal health industry in the Middle East, Cairo, Egypt. The slaughtered chickens were transported in an icebox to the laboratory, where blank and spiked samples were prepared. Tissues were ground, homogenized and kept at -70 °C until the analysis started. Two grams of tissue samples were weighed. The extraction procedure started with 2ml of 10mM citric acid, adjusted to pH 2.3 with 10% sodium hydroxide (NaOH), and 2ml of methanolic ammonium acetate (50mM) with 0.5mg/ml EDTA. The mixture was vortexed thoroughly for two minutes. The samples were then shaken for two hours at 200rpm at room temperature. Two millilitres of chloroform were added and mixed well. The samples were centrifuged at 3000rpm for ten minutes. The organic phase was separated, and another 2ml of chloroform was added to repeat the extraction. The extract (4ml) was evaporated, and the residues were dissolved in 1ml of the mobile phase. The sample (20µl) was injected onto the HPLC column.</ns0:p><ns0:p>The same procedure was adopted for the serum sample extraction. The difference was in the sample volume (0.5mL serum), with equal volumes of the other extraction chemicals. Also, the evaporated sample was reconstituted with 250µl of the mobile phase.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.5.'>Chromatographic Conditions (SIELC Technologies):</ns0:head><ns0:p>The elution mixture consisted of ACN: ammonium acetate (50mM) with 0.5mg/ml EDTA (30:70 v: v) as an isocratic mobile phase. The flow rate was 1ml/min with UV wavelength detection at 310 nm. The stop time was eight minutes with a post time of one minute.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.6.'>Standard Preparation and Calibration Curve:</ns0:head><ns0:p>The copper stock standard solution was prepared at a 1mg/mL concentration in 1% nitric acid and diluted to prepare fortified solution at a 100µg/ml concentration. Calibration standards were prepared at various concentrations (0.05, 0.1, 0. Quality control samples for pharmaceutical products and nanocomposite were 10 and 1µg/mL, respectively. Accuracy was determined by the standard addition of 50ng/mL on three pharmaceutical products levels at 5, 10, and 15µg/mL and 0.5, 1, and 1.5µg/mL nanocomposite.</ns0:p><ns0:p>Quality control serum samples were prepared at three different concentrations 0.8, 1.6, and 2.4µg/mL. Moreover, liver quality control levels were at 1, 2, and 3.5µg/g and muscle samples at 0.2, 0.4, and 0.6µg/g.</ns0:p></ns0:div>
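As an illustration of how such calibration data are typically processed, the sketch below fits a least-squares line to peak response versus concentration and back-calculates an unknown; the responses are simulated from the Cu standard regression equation reported in Table 1 (y = 72.149x - 0.104), and the unknown value is a placeholder of ours, not the study's data.

```python
# Sketch: linear calibration (response vs concentration) and back-calculation of an unknown.
import numpy as np

conc = np.array([0.05, 0.1, 0.2, 0.4, 0.8, 1, 5, 10, 15, 20])  # µg/mL standards (illustrative list)
area = 72.149 * conc - 0.104                                    # simulated responses from Table 1 equation

slope, intercept = np.polyfit(conc, area, 1)                    # least-squares fit
r = np.corrcoef(conc, area)[0, 1]
print(f"y = {slope:.3f}x + {intercept:.3f}, r^2 = {r**2:.4f}")

unknown_area = 350.0                                            # placeholder response
print(f"back-calculated conc: {(unknown_area - intercept) / slope:.2f} µg/mL")
```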
<ns0:div><ns0:head n='2.7.'>Method Validation:</ns0:head><ns0:p>This was accomplished in concrete terms according to ICH,2005 and USP,2019 as specificity, linearity, and range, precision, recovery, and accuracy, detection limit (DL) and quantification limit (QL), and robustness and system suitability test (SST).</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.8.'>Statistical Evaluation</ns0:head><ns0:p>The obtained results were analyzed using SPSS Inc., version 22.0, Chicago, IL, the USA to calculate the mean, standard deviation (SD) and the relative standard deviation (RSD) <ns0:ref type='bibr' target='#b28'>(Morgan et al., 2019)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>Results</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.1.'>High-Performance Liquid Chromatograph (HPLC):</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.1.1.'>Method Validation:</ns0:head><ns0:p>1. Linearity, range, precision, accuracy, DL, and QL were illustrated in Table <ns0:ref type='bibr' target='#b0'>(1)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Linearity and range:</ns0:head><ns0:p>The linearity of Cu and CuNPs was evaluated by the calibration curve on a range of eight concentrations. The correlation coefficient (R) ranged from 0.9995 to 0.9999.</ns0:p></ns0:div>
<ns0:div><ns0:head>Precision:</ns0:head><ns0:p>The precision of a method is the degree of agreement among individual test results when the procedure is applied repeatedly to multiple samplings. It is carried out on intra-day and inter-day precisions. It is expressed as the relative standard deviation (coefficient of variance, CV) of a series of measurements. Intra-day precision was performed on six replicates of the analyte on the same day. Inter-day precision was performed on different days and by different analyzers.</ns0:p><ns0:p>Intra-and inter-day precisions RSD percentage in dosage form were evaluated and found to be 0.696 and 1.04, respectively. In biological samples, intra-day precision RSD percentage ranged from 0.52% to 0.6%, while the inter-day precision range was from 0.85% to 1.36%.</ns0:p></ns0:div>
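For clarity, the RSD quoted here is simply the standard deviation of the replicate results expressed as a percentage of their mean; the minimal sketch below uses made-up replicate values for illustration.

```python
# Sketch: RSD% (coefficient of variance) for a set of replicate measurements (illustrative values).
import statistics

replicates = [10.02, 9.95, 10.08, 10.01, 9.98, 10.05]  # e.g. six same-day injections
rsd_percent = 100 * statistics.stdev(replicates) / statistics.mean(replicates)
print(f"RSD = {rsd_percent:.2f}%")
```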
<ns0:div><ns0:head>Accuracy (standard addition):</ns0:head><ns0:p>The accuracy of an analytical method is the degree of agreement of test results generated by the actual value. The accuracy must be done on three levels: 50%, 100%, and 150%.</ns0:p></ns0:div>
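The recovery behind such a standard-addition check is the found increment over the added increment; the sketch below shows the generic calculation with illustrative numbers of ours, not the study's results.

```python
# Sketch: percent recovery from a standard-addition experiment (illustrative numbers).
def recovery_percent(found_spiked, found_unspiked, added):
    """Recovery (%) of the added amount; all values in the same concentration units."""
    return 100.0 * (found_spiked - found_unspiked) / added

print(f"{recovery_percent(found_spiked=10.05, found_unspiked=10.00, added=0.05):.1f}%")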
<ns0:div><ns0:head>Sensitivity:</ns0:head><ns0:p>The sensitivity was determined by the detection limit (DL) and the quantification limit (QL).</ns0:p><ns0:p>DL is defined as the lowest concentration at which the instrument can detect but not quantify the analyte, and the noise-to-signal ratio for DL should be 1:3. QL is defined as the lowest concentration at which the instrument can detect and quantify the analyte. The noise-to-signal ratio for QL should be 1:10 (Rao, 2018 and <ns0:ref type='bibr' target='#b43'>Uddin et al., 2008)</ns0:ref>.</ns0:p></ns0:div>
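One common way to express these criteria numerically is from the baseline noise and the peak height of a dilute standard, as sketched below with illustrative values (signal-to-noise of 3 for DL and 10 for QL, equivalent to the 1:3 and 1:10 noise-to-signal ratios quoted above).

```python
# Sketch: DL and QL estimated from the signal-to-noise ratio of a dilute standard (illustrative values).
def sn_based_limits(conc, peak_height, baseline_noise):
    """Return (DL, QL) as the concentrations expected to give S/N = 3 and S/N = 10."""
    sn = peak_height / baseline_noise
    return 3 * conc / sn, 10 * conc / sn

dl, ql = sn_based_limits(conc=0.1, peak_height=25.0, baseline_noise=0.8)
print(f"DL ≈ {dl:.3f} ppm, QL ≈ {ql:.3f} ppm")
```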
<ns0:div><ns0:head n='2.'>Robustness:</ns0:head><ns0:p>It was performed by slight changes of the mobile phase composition, the UV detection wavelength, and the column temperature, which did not lead to any essential changes in the chromatographic system's performance, such as specificity and system suitability parameters. The method was robust, as shown by the pooled RSD percentage calculated for all shifts at a certain concentration level (1µg/mL), presented in Table <ns0:ref type='table'>1</ns0:ref>. The acceptance criterion for the pooled RSD percentage is ≤ 6%.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>Specificity:</ns0:head><ns0:p>The specificity was demonstrated by the chromatogram through a short, specific retention time (4.955 minutes), as there was no interference from impurities between the extracted samples and the pure standard (Figure <ns0:ref type='figure'>3</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1.2'>System Suitability Test (SST):</ns0:head><ns0:p>The method was shown to perform suitably under the optimized conditions, and the RSD percentage was found to be less than 1% for the system suitability parameters in Table <ns0:ref type='table'>2</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1.3'>Application of Pharmaceutical Products:</ns0:head><ns0:p>The method developed here was applied to various concentrations (5, 10, and 15µg/mL) of solutions made from pharmaceutical products for determining the content of Cu and CuNPs (0.5, 1, 1.5µg/mL). The values of the overall drug percentage recoveries and the RSD value of Cu are 100.3-101.1% and 0.01-0.2%, respectively, and were 103.4-109.9% and 0.1-1.1% for CuNPs as presented in Table <ns0:ref type='table'>3</ns0:ref>, indicating that these values are acceptable and the method is accurate and precise. Furthermore, there was no interference and no degradation products. High specificity of this method was confirmed by absence of interference from the sample excipients at the detection wavelength.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2.'>Inductively coupled plasma mass spectrometry (ICP-MS):</ns0:head><ns0:p>Biological samples and pharmaceutical products were analyzed for copper detection by ICP-MS (Varian 810/820-MS ICP Mass Spectrometer) to confirm the HPLC results. The results are illustrated in Table <ns0:ref type='bibr' target='#b3'>(4)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.'>Discussion</ns0:head><ns0:p>Copper (Cu) is an essential trace element but causes toxic effects with high doses. In this study, Cu and CuNPs were detected by a precise, accurate, and selective UV RP-HPLC method in dosage form and different matrices (serum, liver, and muscles). Cu detection in pharmaceutical products was done without extraction procedures. It is an economical method as fewer solvents and rapid detection are used. This is unlike the findings by Takele et al. (2017), who detected bromazepamcopper (II) complex through an extraction process. The developed method for Cu detection in biological matrices was validated after some modifications in the original one <ns0:ref type='bibr' target='#b27'>(Khuhawar and Lanjwani, 1995)</ns0:ref>. Copper is one of the heavy metals that requires potent chelating agents like citric acid and EDTA to be extracted and detected <ns0:ref type='bibr' target='#b25'>(Jafri et al., 2017)</ns0:ref>. Green chelating agents are readily biodegradable and safer chemicals with less phytotoxicity <ns0:ref type='bibr' target='#b12'>(Chauhan et al., 2015)</ns0:ref>. In this study, citric acid was used as a green chelating agent for the recovery of copper, which is in accordance with the study by <ns0:ref type='bibr' target='#b1'>Asemave (2018)</ns0:ref>, who estimated its recovery by 88%. Besides, pH adjustment and time of extraction give higher recovery. Herein, citric acid is a natural organic acid, and the green chelating agents afford safe and powerful extraction and support the concept of sustainable chemistry <ns0:ref type='bibr' target='#b20'>(Gómez-Garrido et al., 2018)</ns0:ref>.</ns0:p><ns0:p>The mixing of the extraction solvent to have EDTA, ammonium acetate, and citric acid gave the best chelating power for the purification of Cu <ns0:ref type='bibr' target='#b23'>(Hu et al., 2013)</ns0:ref>. On keeping this line, EDTA is the most widely used acid in modified forms in extracting cationic micronutrients as Cu 1+ . Copper is an electron donor in its oxidative state (Cu 1+ and Cu 2+ ) in enzyme synthesis and cofactor in ceruloplasmin for iron hemostasis <ns0:ref type='bibr' target='#b46'>(Wolf et al., 2015)</ns0:ref>. The solvent mixture chelated Cu and formed a complex soluble in aqueous-methanolic solution and double extracted by chloroform to be easily Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science transferred from the aqueous phase to the organic one. Chloroform (organic phase) was evaporated to get very stable hexadentate ligand complex, and the mobile phase eluted this complex. The mobile phase had EDTA as the visualizing agent of Cu in the Cu-EDTA coloured complex. This coloured complex is a stable chelate and is detected by UV <ns0:ref type='bibr' target='#b32'>(Pati et al., 2019)</ns0:ref>. This method is selective to Cu cations rather than the other cations that do not form coloured complexes with EDTA. Cu-EDTA complex was detected through the C18 column, which gave the best results on reverse-phase HPLC.</ns0:p><ns0:p>The developed method is more economical, and there are less health and environmental hazards. This is because of using easily applicable chemicals with RP-HPLC, which is the most conventional chromatography technique. This is in line with the theory of green analytical chemistry, which is part of the concept of sustainable development (Płotka-Wasylka et al., 2021). 
It also minimizes the analytical equipment required and shortens the time elapsed between conducting the analysis and obtaining reliable results <ns0:ref type='bibr' target='#b42'>(Turner, 2013)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.'>Conclusion</ns0:head><ns0:p>This advanced RP-HPLC method has been validated for accuracy, precision, linearity, and reproducibility following ICH and USP guidelines. The limits of detection and quantification are very low for both pharmaceutical products and biological matrices, indicating high sensitivity. RP-HPLC is used to quantify copper and CuNPs via potent chelating agents, such as citric acid and EDTA, for their extraction and detection. The Cu-EDTA complex was successfully detected by UV with high selectivity for Cu cations. The Cu-EDTA complex was separated by a C18 column, which gave the best results on reversed-phase HPLC. The method is novel, simple, sensitive, selective, precise, and accurate analytically for Cu and CuNPs quantification.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>2, 0.4, 0.8, 1, 5, 10, 15, and 20µg/ml) from the PeerJ An. Chem. reviewing PDF | (ACHEM-2021:08:65001:1:2:NEW 15 Jan 2022) Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science fortified solution by diluting with 1% nitric acid to ascertain the actual concentration of Cu in pharmaceutical products and Cu-nanoparticles preparation (CuNPs).The calibration curve of biological matrices was prepared by spiking blank samples (serum, liver, and muscle) with various fortified solution concentrations to have calibration samples (0.1, 0.2, 0.4, 0.8, 1, 5, 10, and 15µg/g).</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,178.87,525.00,294.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,294.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>, and colorimetric methods (Zheng et al., 2018, Xu et al., 2010, and Ge et al., 2014). HPLC has high applicability to</ns0:figDesc><ns0:table><ns0:row><ns0:cell>PeerJ An. Chem. reviewing PDF | (ACHEM-2021:08:65001:1:2:NEW 15 Jan 2022)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table ( 1): Validation parameters results of Cu analytical method: Parameter Cu standard Cu in Serum Cu in Liver Cu in Muscle</ns0:head><ns0:label>(</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Range (ppm)</ns0:cell><ns0:cell>0.05-20</ns0:cell><ns0:cell /><ns0:cell>0.1-15</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Regression</ns0:cell><ns0:cell>y = 72.149x -</ns0:cell><ns0:cell>y=73.234x</ns0:cell><ns0:cell>y = 73.263x +</ns0:cell><ns0:cell>y = 75.323x +</ns0:cell></ns0:row><ns0:row><ns0:cell>equation</ns0:cell><ns0:cell>0.104</ns0:cell><ns0:cell>+ 0.128</ns0:cell><ns0:cell>0.3742</ns0:cell><ns0:cell>0.0594</ns0:cell></ns0:row><ns0:row><ns0:cell>Correlation</ns0:cell><ns0:cell>0.9999</ns0:cell><ns0:cell>0.9999</ns0:cell><ns0:cell>0.9995</ns0:cell><ns0:cell>0.9998</ns0:cell></ns0:row><ns0:row><ns0:cell>coefficient(r 2 )</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Intraday precision</ns0:cell><ns0:cell>0.696</ns0:cell><ns0:cell>0.6</ns0:cell><ns0:cell>0.52</ns0:cell><ns0:cell>0.58</ns0:cell></ns0:row><ns0:row><ns0:cell>(RSD%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Inter-day precision</ns0:cell><ns0:cell>1.04</ns0:cell><ns0:cell>1.18</ns0:cell><ns0:cell>0.85</ns0:cell><ns0:cell>1.36</ns0:cell></ns0:row><ns0:row><ns0:cell>(RSD%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Recovery%</ns0:cell><ns0:cell>99.13-</ns0:cell><ns0:cell cols='2'>96.8-100.7 99.98-104.75</ns0:cell><ns0:cell>98.4-105.7</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>101.01</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Accuracy</ns0:cell><ns0:cell cols='2'>100.03±0.46 99.4±1.4</ns0:cell><ns0:cell>101.5±1.88</ns0:cell><ns0:cell>101.65±2.3</ns0:cell></ns0:row><ns0:row><ns0:cell>DL (ppm)</ns0:cell><ns0:cell>0.038</ns0:cell><ns0:cell>0.06</ns0:cell><ns0:cell>0.055</ns0:cell><ns0:cell>0.023</ns0:cell></ns0:row><ns0:row><ns0:cell>QL (ppm)</ns0:cell><ns0:cell>0.115</ns0:cell><ns0:cell>0.19</ns0:cell><ns0:cell>0.166</ns0:cell><ns0:cell>0.069</ns0:cell></ns0:row><ns0:row><ns0:cell>Robustness (pooled</ns0:cell><ns0:cell>2.22</ns0:cell><ns0:cell>1.1</ns0:cell><ns0:cell>2.23</ns0:cell><ns0:cell>2.25</ns0:cell></ns0:row><ns0:row><ns0:cell>RSD%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ An. Chem. reviewing PDF | (ACHEM-2021:08:65001:1:2:NEW 15 Jan 2022)</ns0:note></ns0:figure>
</ns0:body>
" | "Researcher of pharmacology
Chemistry department
Animal Health Research Institute
7 Nadi Elsaid st.
Tel: 02 33372934
Fax: +202 35722609
December 29th ,2021
Dear Editors
We thank the reviewers for their generous comments on the manuscript and we have edited the manuscript to address their concerns.
In particular all of the code we wrote is available and I have included multiple links throughout the paper to the appropriate code repositories. We believe that the manuscript is now suitable for publication in PeerJ.
Dr. Mai A Fadel
Researcher of pharmacology
On behalf of all authors.
Reviewer 1
Basic reporting
The work concerns the quantification of copper in biological matrices and pharmaceutical preparations. The RP-HPLC method with DAD detection was selected for analysis. The principle of detection is based on the formation of a complex of copper with citric acid and EDTA as known complexing agents. My doubts are raised by the lack of copper specifications, and although the article shows that the authors denote copper in a metallic form and in the form of nanoparticles, it is doubtful, because they use nitric acid for dissolution. No confirmation of the presence of nanoparticles in the tested matrices, e.g. using TEM or SEM. In my opinion, the authors actually mean Cu (II) and therefore in ionic form.
Thanks for reviewer comment.
[1] Concerning the doubt about the lack of copper specification: EDTA is a strong chelating agent. According to the following reference:
Al-Qahtani K. M. A. Extraction Heavy Metals from Contaminated, Water Using Chelating Agents. Orient J Chem 2017;33(4).
It can form very stable complexes with the transition elements because EDTA is a hexadentate ligand.
In addition, the mobile phase used for elution in this method was previously reported on the SIELC website at the following link: https://www.sielc.com/Application-HPLC-UV-Analysis-of-Copper-Ions-in-Salt-Mixture.html.
Our method development involved the extraction of copper from different tissue matrices and pharmaceutical products, instead of the salt mixture in the original report.
[2] Use of nitric acid was in diluted form (1%) to form copper nitrate according to the following equation:
3Cu+8HNO3(dil.) →3Cu(NO3)2+4H2O+2NO
It is the same as for the standard used for calibration. The subsequent dilutions were done in the mobile phase so that the copper could be chelated by EDTA.
[3] Confirmation of the presence of nanoparticles: the following image shows the presence of the Cu-chitosan nanocomposite in the extracted matrices before HPLC injection, with an average size of 24.71±1.68 nm.
Fig (2) TEM of Cu-Chitosan nanocomposite showed 24.71±1.68 nm with polydispersity index (PdI):0.691±0.02, sphere shape and no aggregation (Central lab. in NRC).
The manuscript should be reworded appropriately and accurately name the form they denote. Validation meets the criteria and it is a pity to waste this effort. I don't really understand the choice of method. In fact, simple colorimetry would suffice in this case. What are the benefits of using HPLC which is a separation method as there is only one analyte? Authors should justify their choice.
Most pharmaceutical products contain multiple salts with organic acids. These factors lead to a lower sensitivity of the colorimetric method for the detection of copper in pharmaceutical products. The colorimetric method has some disadvantages, as it does not work in the UV or IR spectrum and reflects light at the surface. These parameters lead to lower specificity and sensitivity.
An example of low specificity was reported by Zentner et al. (2018), who found that the specificity of a colorimetric method to identify HIV/tuberculosis was 50%.
Zentner I., Modongo, C., Zetola, N. M., Pasipanodya, J. G., Srivastava, S., Heysell, S. K.,& Vinnard, C. (2018). Urine colorimetry for therapeutic drug monitoring of pyrazinamide during tuberculosis treatment. International Journal of Infectious Diseases, 68, 18-23.
In contrast, HPLC has high applicability to diverse analyte types, from small organic molecules and ions to large biomolecules and polymers, with high reproducibility, sensitivity, specificity, precision and robustness (Dong, 2013).
Dong, M. (2013). The essence of modern HPLC: advantages, limitations, fundamentals, and opportunities.
Also, as we reported in the introduction section of this study at lines 88 and 89, the UV-Vis detector is easy to use and highly sensitive.
There is also no comparison of the precision and accuracy of the method with the determinations described so far. Therefore, it is not known what benefits result from the methodology used.
Biological samples and pharmaceutical products were analyzed for copper detection by ICP-MS (Thermo ICP-MS model iCAP-RQ) to confirm the HPLC results. The results are illustrated in Table (4).
Table (4): Comparison of the results obtained by the developed HPLC method and ICP-MS for the determination of copper in spiked biological samples, pharmaceutical product and CuCNPs composite preparation:
Sample | Spiked (PPM) | HPLC method | Recovery (%) | ICP/MS (PPM)
Serum | 0.5 | 0.503±0.002 | 100.7 | 0.51±0.001
Serum | 1 | 1.029±0.005 | 102.9 | 1.03±0.004
Liver | 0.5 | 0.498±0.006 | 99.7 | 0.501±0.002
Liver | 1 | 1.026±0.003 | 102.6 | 1.022±0.005
Muscle | 0.5 | 0.502±0.002 | 100.5 | 0.499±0.01
Muscle | 1 | 1.017±0.01 | 101.8 | 1.012±0.01
Pharmaceutical product (0.5ppm) | 0.05 | 0.54±0.03 | 98.2 | 0.55±0.1
Pharmaceutical product (0.5ppm) | 0.1 | 0.603±0.2 | 100.5 | 0.599±0.01
CuCNPs (0.5ppm) | 0.05 | 0.55±0.01 | 109.9 | 0.54±0.1
CuCNPs (0.5ppm) | 0.1 | 0.598±0.02 | 99.7 | 0.601±0.2
Fig. 2 - A peak that needs to be identified appears in the chromatogram.
The peak was already identified and labelled above with the symbol for copper (Cu), with its specific retention time (4.955 minutes).
Fig.3- There are actually two unresolved peaks in this chromatogram so the quantification is questionable.
The early-eluting peak is a solvent peak, i.e. the solvent in which the sample is contained. This peak is not pertinent to the results (European Commission, 2002). The quantifiable peak is the one with the specific retention time (4.955 minutes), as mentioned in the Results section at line 186.
Reference:
European Commission. (2002). Commission Decision 2002/657/EC of 12 August 2002 implementing Council Directive 96/23/EC concerning the performance of analytical methods and the interpretation of results. Official Journal of the European Communities, 50, 8-36.
No UV-VIs spectrum of the analyte determined.
UV-Vis spectrum was illustrated on all the supplied figures (The detection wavelength was 310nm).
Experimental design
Experimental design should be improved.
DONE
Validity of the findings
It should be improved.
DONE
Reviewer 2
Basic reporting
English language used is not professional and requires extensive editing with the help of a colleague proficient in English.
The paper was edited by professional English editing entity and the editing certificate is provided as a separate PDF file.
Some errors are listed below:
Line 209: reference is not appropriate
The reference was removed. It was in lines 225 to 227 in the revised manuscript.
Abstract:
• This following sentence is not clear to understand: “The use of citric acid as a green chelating agent and extraction time with pH adjustment afforded high Cu recovery in biological samples.”
The statement was modified as the following:
High Cu recovery in biological samples was afforded by using citric acid as a green chelating agent, exact extraction time and pH adjustment.
• Also correct for appropriate symbols in” The precision RSD% was ˂ 2.”
This sentence was retyped in the revised manuscript to be “The precision RSD percentage was less than two”. It was yellow highlighted in line 35.
Title:
• Line 2: Use “copper nanoparticles” instead of “nano copper”
Corrected
Text:
• et al., needs to be in italics in all references.
Corrected
• line 54: ionizing agents and diagnostic imaging
Corrected.
• line 57 and 60: inaccurate sentence structure
Corrected.
• line 84: detectors
Corrected. In line 88 at the revised manuscript.
• line 86: This study presents a method (this is not a review)
Corrected. In line 91 at the revised manuscript.
• line 219: Cu1+
Corrected. In line 237 at the revised manuscript.
• line 239: RP HPLC is used to quantify copper and Cu NPs via potent
Corrected. In line 258 at the revised manuscript.
• line 211-213: inaccurate sentence structure
Corrected. From lines 228 to 231 at the revised manuscript.
Experimental design
A reverse phase HPLC method to quantify copper and copper nanoparticles is reported in this study. However, this is not a novel method. Along with the references provided, SiELC website has a page with similar method using a different column is published on its website: https://www.sielc.com/Application-HPLC-UV-Analysis-of-Copper-Ions-in-Salt-Mixture.html. The authors have used an existing method and validated its use with different biopharmaceutical samples.
This method was used for the detection of copper in a salt mixture. It also supplied the mobile phase and other HPLC conditions.
Our method development involved the extraction of copper from different tissue matrices and pharmaceutical products, instead of the salt mixture in the original report.
Validity of the findings
The validation of the method was executed and data presented accordingly.
Additional comments
Editing is required to accurately describe the results and scope of this study.
Done
" | Here is a paper. Please give your review comments after reading it. |
673 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Optical biosensors such as those based on surface plasmon resonance (SPR) are a key analytical tool for understanding biomolecular interactions and function, as well as for the quantitative analysis of analytes in a wide variety of settings. The advent of portable SPR instruments enables analyses in the field. A critical step in method development is the passivation and functionalisation of the sensor surface. We describe the assembly of a surface of thiolated oleyl ethylene glycol/biotin oleyl ethylene glycol and its functionalisation with streptavidin and reducing-end biotinylated heparin for a portable SPR instrument. Such surfaces can be batch prepared and stored. Two examples of the analysis of heparin-binding proteins are presented: the binding of fibroblast growth factor 2, and competition for the binding of a heparan sulfate sulfotransferase by a library of selectively modified heparins and suramin, which identify the selectivity of the enzyme for sulfated structures in the polysaccharide and demonstrate suramin as a competitor for the enzyme's sugar acceptor site. Heparin-functionalised surfaces should have wide applicability, since this polysaccharide is a close structural analogue of the host cell surface polysaccharide, heparan sulfate, a receptor for many endogenous proteins and viruses.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Manuscript to be reviewed Chemistry Journals <ns0:ref type='bibr'>Analytical, Inorganic, Organic, Physical, Materials Science</ns0:ref> The regulation of biochemical processes depends on the dynamic interactions of biological molecules. The characterisation of these interactions represents an important step in gaining an understanding of function and how this may be perturbed in a biological experiment. Moreover, the ability of biological molecules such as antibodies and receptors, as well as various host entities, to recognise molecular partners with high selectivity has become integral to (bio)analytical techniques. A large number of different surface measurement platforms have been adapted for such analyses and of these, surface plasmon resonance and related techniques are some of the most popular, perhaps due to their early commercialisation <ns0:ref type='bibr' target='#b3'>(Brigham-Burke et al. 1992;</ns0:ref><ns0:ref type='bibr' target='#b33'>Watts et al. 1994)</ns0:ref>, rapidity and the ease of measurement. The technique has since evolved to encompass both analysis of functional molecular interactions and quantification of the amount of an analyte in a sample. The recent development of portable, low-cost instrumentation, e.g., <ns0:ref type='bibr' target='#b36'>(Zhao et al. 2015)</ns0:ref>, has substantially increased the reach of the technique. In an SPR measurement, the ligand is immobilised on the sensor surface and the analyte is usually flowed over the sensor, though some systems do not use fluidics. The instruments use changes in the refractive index near the surface to measure in real-time the interactions of analyte with ligand.</ns0:p><ns0:p>Like any analytical method, SPR has limitations. One relates to the surface itself, while fluidics systems, which are characterised by inefficient exchange between the bulk, flowing liquid and the stationary layer, are a source of further artefacts. These have been reviewed extensively e.g., <ns0:ref type='bibr' target='#b30'>(Schuck & Zhao 2010)</ns0:ref>. Another limitation relates to signal and noise. The principal source of noise is the interaction of the analyte with the surface to which the ligand is immobilised. Thus, an important part of experimental design is the synthesis of surfaces suitable for capturing ligands that exhibit very low analyte binding. Gold binds many groups that include thiols, amines and carbonyls, so has to be passivated. Dextran-based hydrogels were an early solution, but these are prone to high non-specific binding, as well as to generating artefacts arising from the collapse or expansion of the hydrogel upon interaction with an analyte <ns0:ref type='bibr' target='#b30'>(Schuck & Zhao 2010)</ns0:ref>. Since then, substantial advances involving useful cross-fertilisation between nanoparticle and surface sensing fields have been made in passivating gold surfaces with self-assembled monolayers (SAMs). These approaches include mercaptoproprionic acid <ns0:ref type='bibr' target='#b21'>(Mauriz et al. 2006)</ns0:ref>, supported lipid bilayers <ns0:ref type='bibr' target='#b13'>(Ferhan et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b20'>Marques et al. 2014)</ns0:ref>, peptides <ns0:ref type='bibr' target='#b0'>(Bolduc et al. 2009;</ns0:ref><ns0:ref type='bibr' target='#b1'>Bolduc et al. 2011;</ns0:ref><ns0:ref type='bibr' target='#b2'>Bolduc et al. 
2010</ns0:ref>) and thiolated oleyl ethylene glycol (OEG) <ns0:ref type='bibr' target='#b22'>(Migliorini et al. 2014)</ns0:ref>.</ns0:p><ns0:p>SAMs have many advantages; in particular, functionalisation can be achieved in the same step as passivation by the inclusion of a mole percent of a functional ligand. This allows statistical control of the functionalisation of the SAM and thereby the surface. Control of the surface density of functionalisation provides a means by which steric hindrance artefacts can be avoided <ns0:ref type='bibr' target='#b12'>(Edwards et al. 1995)</ns0:ref>. Since the SAMs consist of small molecules, the immobilised ligand will also be close to the surface. Thus, a SAM affords greater sensitivity than a hydrogel, because the SPR signal decreases exponentially with the distance from the surface. No passivation system offers a universal panacea, however, so having access to several distinct SAMs increases the probability of finding a surface suitable for a particular measurement.</ns0:p><ns0:p>We have adapted an existing strategy for the functionalisation of gold surfaces with a monolayer of streptavidin that is particularly resistant to non-specific binding <ns0:ref type='bibr' target='#b22'>(Migliorini et al. 2014)</ns0:ref> to the SPR chips of a portable SPR instrument <ns0:ref type='bibr' target='#b36'>(Zhao et al. 2015)</ns0:ref>. Self-assembled monolayers of defined mixtures of thiolated oleyl ethylene glycol (OEG) and biotin OEG were synthesised on the gold surface and used to capture streptavidin. Reducing-end biotinylated heparin <ns0:ref type='bibr'>(Thakar et al. 2014</ns0:ref>) was then captured on the streptavidin surface. Heparin is a convenient experimental proxy for the sulfated domains of cellular heparan sulfate, which by virtue of the latter's more than 800 extracellular protein partners is a major regulator of cell function in development, homeostasis and disease <ns0:ref type='bibr' target='#b26'>(Nunes et al. 2019)</ns0:ref>. Indeed, this surface has recently been used to characterise the interaction of the spike protein of SARS-CoV-2 with heparin <ns0:ref type='bibr'>(Mycroft-West et al. 2020</ns0:ref>). Here we provide an in-depth description of the preparation of this surface, benchmark it against the well-characterised interaction of fibroblast growth factor-2 (FGF2) with heparin <ns0:ref type='bibr' target='#b27'>(Rahmoune et al. 1998</ns0:ref>) and demonstrate its application in the characterisation of heparan sulfate sulfotransferases, in this instance a fusion protein of glutathione-S-transferase (GST) and 3-O heparan sulfate sulfotransferase 3A1 (HS3ST3A1). We demonstrate that suramin, a previously characterised inhibitor of HS2ST1 that was postulated to compete for both the donor and acceptor sites of the enzyme <ns0:ref type='bibr' target='#b6'>(Byrne et al. 2018b)</ns0:ref>, did indeed compete with GST-HS3ST3A1 binding to heparin. Additionally, the OEG/biotin OEG surfaces can be stored and re-used, and complement the existing peptide surfaces <ns0:ref type='bibr' target='#b0'>(Bolduc et al. 2009;</ns0:ref><ns0:ref type='bibr' target='#b1'>Bolduc et al. 2011;</ns0:ref><ns0:ref type='bibr' target='#b2'>Bolduc et al. 2010</ns0:ref>) that have been published for this portable SPR instrument.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Materials</ns0:head><ns0:p>The chemically modified heparins (Table <ns0:ref type='table'>1</ns0:ref>) were produced and characterised as described <ns0:ref type='bibr' target='#b35'>(Yates et al. 1996)</ns0:ref>. Human fibroblast growth factor 2 (FGF2) was produced as described by <ns0:ref type='bibr' target='#b11'>Duchesne et al. (Duchesne et al. 2012)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Oxime biotinylation of heparin at the reducing end</ns0:head><ns0:p>Porcine intestinal heparin (H4784, Sigma) was biotinylated at the reducing end using hydroxylamine biotin, as described by <ns0:ref type='bibr' target='#b32'>Thakar et al.,(Thakar et al. 2014)</ns0:ref>, as this provides for a stable product. Heparin (4 mM) was reacted with 4 mM N-(aminooxyacetyl)-N'-(D-Biotinoyl) hydrazine, (A10550, ThermoFisher Scientific, UK) in 100 mM aniline and 100 mM acetate buffer, pH4.6, total volume 120 µl, at 37 °C for 48 hours. After, unreacted biotin was removed and the buffer exchanged to PBS (Thermo Scientific Oxoid, phosphate-buffered saline tablets, BR0014G) by six rounds of filtration at 10,000 g for 10 min in a 3k MWCO centrifugal filter <ns0:ref type='bibr'>(ThermoFisher Scientific,</ns0:ref><ns0:ref type='bibr'>88512)</ns0:ref>. The concentration of biotin-heparin was then measured using the carbazole assay. Briefly, 50 µL of 10-fold dilution of the biotin-heparin and a range of heparin standards at known concentration were added to a 96-well-plate, followed by the addition of 200 µL H 2 SO 4 and then incubated at 98 °C for 20 min. After cooling to room temperature over 10 min, 20 µL of 1.25 mg/mL carbazole was added to the wells, which were then incubated at 98 °C for 10 min. After cooling to room temperature for 10 min the absorbance in the wells was measured at 535 nm, with the heparin standard curve enabling quantification of the biotin-heparin.</ns0:p></ns0:div>
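For the final quantification step of this assay, the biotin-heparin concentration is read off a standard curve of A535 against the known heparin standards, corrected for the 10-fold dilution; the sketch below is our illustration with placeholder absorbance values, not the study's readings.

```python
# Sketch: quantify biotin-heparin from the carbazole assay via a heparin standard curve.
# Absorbance values are placeholders; the dilution factor reflects the 10-fold dilution used above.
import numpy as np

std_conc = np.array([0.0, 0.25, 0.5, 1.0, 2.0])      # mg/mL heparin standards (illustrative)
std_a535 = np.array([0.02, 0.11, 0.21, 0.40, 0.79])  # illustrative A535 readings

slope, intercept = np.polyfit(std_conc, std_a535, 1)  # linear standard curve

sample_a535 = 0.33
dilution = 10
conc = dilution * (sample_a535 - intercept) / slope
print(f"biotin-heparin ≈ {conc:.2f} mg/mL")
```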
<ns0:div><ns0:head>Expression and Purification of GST-HS3ST3A1</ns0:head><ns0:p>A cDNA fragment encoding the catalytic domain of human HS3ST3A1 (Uniprot accession Q9Y663, residues 139-406, purchased from Thermo Fisher, UK) was cloned into the pGEX4T3 vector using EcoRI and NotI. Glutathione-S-transferase-(GST) HS3ST3A1 was expressed and purified following published procedures for GST-HS3ST3A1 <ns0:ref type='bibr' target='#b23'>(Moon et al. 2004</ns0:ref>) and GST-HS3ST1 <ns0:ref type='bibr' target='#b34'>(Wheeler et al., 2021)</ns0:ref> fusion proteins. Protein was expressed in C41 (DE3) E. coli, induced with 200 µM isopropyl 1-thio-ß-D-galactopyranoside (IPTG) at 22°C overnight.</ns0:p><ns0:p>Following centrifugation (15 min, 4,150x g) and resuspension in lysis buffer (100 mM NaCl, 50 mM Tris-Cl, pH 7.4) cells were lysed by sonication (six 1 min 30 s cycles of sonication on ice) and the lysate cleared by centrifugation (45 min, 38 000 g). Soluble GST-HS3ST3A1 in the supernatant was purified by application to a 2 mL glutathione resin (Genscript Biotech Corporation, Netherlands), washing with 100 mM NaCl, 50 mM Tris, pH 7.4 and eluting with 10 PeerJ An. Chem. reviewing PDF | (ACHEM-2021:10:67210:1:1:NEW 18 Jan 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science mM reduced glutathione, 100 mM NaCl, 50 mM Tris, pH 7.4. The eluate was immediately applied to a 1 mL heparin (Affi-Gel hep, Bio-Rad, UK) column, washed with 50 mM NaCl, 50 mM Tris, pH 7.4, and eluted with 600 mM NaCl, 50 mM Tris, pH 7.4 (Supplementary Fig. <ns0:ref type='figure'>1</ns0:ref>).</ns0:p><ns0:p>The specific extinction coefficient for GST-HS3ST3A1 at 280 nm, calculated from its amino acid sequence was used for quantification and protein was snap frozen in liquid nitrogen and stored in aliquots at -80°C.</ns0:p></ns0:div>
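As an illustration of the quantification step described above, the protein concentration follows from the Beer-Lambert law using the sequence-derived extinction coefficient; the epsilon value and absorbance below are placeholders of ours, not the actual figures for GST-HS3ST3A1.

```python
# Sketch: protein concentration from A280 using a sequence-derived extinction coefficient
# (Beer-Lambert law). The epsilon value is a placeholder, not the real GST-HS3ST3A1 coefficient.
def concentration_uM(a280, epsilon_M_cm, path_cm=1.0):
    """Concentration in µM from absorbance at 280 nm in a cuvette of the given path length."""
    return 1e6 * a280 / (epsilon_M_cm * path_cm)

EPSILON = 85000  # M^-1 cm^-1, placeholder value
print(f"{concentration_uM(a280=0.42, epsilon_M_cm=EPSILON):.1f} µM")
```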
<ns0:div><ns0:head>Preparation of functionalised gold surfaces</ns0:head><ns0:p>The functionalisation of the gold sensor surface was based on an existing method <ns0:ref type='bibr'>(Migliorini et al. 2014)</ns0:ref>. The detailed protocol for this is available at protocols.io <ns0:ref type='bibr' target='#b31'>(Su & Fernig 2021)</ns0:ref>. Briefly, mixtures of thiolated oleyl ethylene glycol (OEG) and thiolated oleyl ethylene glycol-biotin (OEG-biotin) at the stated % mole/mole and at a final concentration of 100 mM were prepared in ethanol. A plasma cleaned gold SPR sensor chip was incubated in this solution for 36 h to form an OEG/OEG-biotin SAM. The sensor chip was washed in ethanol and could be stored at 4 °C in ethanol for at least 3 months. The sensor chip was placed in the P4SPR, a multi-channel SPR instrument (Affinté Instruments; Montréal, Canada), and the three measurement channels and the control, background channel were then equilibrated in PBS at a flow rate of 500 µL/min using an Ismatec peristaltic pump. A high flow rate was chosen to minimize mass transport limitations <ns0:ref type='bibr' target='#b30'>(Schuck & Zhao 2010)</ns0:ref>, which are particularly acute when molecular interactions are driven by electrostatic interactions, as for proteins binding heparin, though lower flow rates are often appropriate, depending on the characteristics of the interaction and the assay. All subsequent steps used this flow rate. After equilibration in PBS, streptavidin (Sigma), 1 mg/mL in PBS, was injected over the four channels. Often a second injection of streptavidin was performed to determine whether the surfaces were fully functionalised. Streptavidin not specifically bound to biotin was removed by two regeneration steps, the first using 2 M NaCl, the second 20 mM HCl. Then 1 mL of 90 µg/mL biotinylated heparin was injected over the three measurement channels. Again, a repeat injection of biotinylated heparin was performed to see if the streptavidin in the three measurement channels was fully derivatised. Finally, the three measurement channels were regenerated by sequential 1 mL injections of 2 M NaCl and 20 mM HCl (Fig. 1). The heparin functionalised surfaces could be stored in PBS at 4 °C for at least 10 days.</ns0:p></ns0:div>
<ns0:div><ns0:head>Measurement of protein-heparin interactions</ns0:head><ns0:p>All measurements of the binding of protein analytes to immobilised heparin used PBS supplemented with 0.01% (v/v) Tween 20 (PBST) as the running buffer, to prevent adsorption of analyte to the fluidics, at a flow rate of 500 µL/min. FGF2 and HS3ST3A1 were injected over the three measurement channels and the control channel at the concentrations indicated in the figure legends. The biotin-heparin surfaces were regenerated with 1 mL injections of 2 M NaCl and 20 mM HCl or 0.25 % (w/v) SDS and 20 mM HCl. In some experiments the protein analyte was pre-mixed with a potential competitor for its binding to heparin prior to injection.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results and Discussion</ns0:head><ns0:p>SAMs of thiolated OEG:thiolated biotin OEG functionalised with streptavidin on gold have been shown to be very resistant to non-specific binding and have been extensively characterised in terms of areal density of immobilized streptavidin and biotinylated polysaccharides <ns0:ref type='bibr' target='#b22'>(Migliorini et al. 2014)</ns0:ref>. This approach also allows control of the surface density of streptavidin, and hence of the biotinylated ligand immobilised on the surface, by simply controlling the molar stoichiometry of thiolated OEG:thiolated biotin OEG (Fig. 1). Moreover, by capturing a biotinylated ligand, its orientation can be pre-determined if the biotinylation reaction is selective. Combined, these features reduce the likelihood of artefacts associated with diffusion in fluidics systems <ns0:ref type='bibr' target='#b30'>(Schuck & Zhao 2010)</ns0:ref> and steric hindrance of the binding site(s) of the immobilised ligand <ns0:ref type='bibr' target='#b12'>(Edwards et al. 1995)</ns0:ref>. Three (mole/mole) ratios were chosen: 1 %, 0.3 % and 0.1 %. Assuming equal probability of incorporation of the thiolated biotin OEG into the monolayer and a diameter of streptavidin of ~5.5 nm, these should yield, respectively, a very tightly packed monolayer, a monolayer, and partial surface coverage of streptavidin.</ns0:p><ns0:p>Following plasma cleaning and incubation of the gold surfaces with the thiolated OEG / thiolated biotin OEG mixture for 36 h, the surfaces were washed with ethanol. The OEG/biotin-OEG SAMs were found to be stable for at least three months at 4 °C, which enabled batch preparation of surfaces. The thiolated OEG surface was placed in the instrument and equilibrated in running buffer (PBS, 500 µL/min), and then 1 mL streptavidin (20 µg/mL in PBS) was injected at the same flow rate, providing a final response of ~2 nm (Fig. 2A). The surface was then subjected to a wash with 2 M NaCl and then 20 mM HCl (1 mL each) to remove any non-specifically bound streptavidin. A second injection of streptavidin only resulted in a very small increase in bound protein (Fig. 2A, red arrows, 800 min to 1100 min), indicating that the biotinylated OEG on the surface was near saturated with streptavidin after the first injection. Nevertheless, all subsequent surfaces were prepared with two sequential injections of streptavidin, to ensure that this was indeed the case. The surface was again washed sequentially with 2 M NaCl and 20 mM HCl.</ns0:p><ns0:p>Biotinylated heparin (1 mL, 90 µg/mL) was then injected over the three measurement channels, A-C. Only a small response was observed, as the refractive index of this sulfated polysaccharide is at the lower limit of detection (Fig. 2B, red arrows, 1850 min to 2000 min). After washing the surface with 2 M NaCl and 20 mM HCl, biotin heparin was again injected over the three measurement channels, A-C, but this did not cause any further increase in signal, demonstrating that the first single injection was sufficient to occupy the available sites on the immobilised streptavidin. Following washes with 2 M NaCl and 20 mM HCl, the surface was returned to running buffer. The heparin functionalised surfaces could be removed from the instrument and stored at 4 °C in PBS for up to 10 days without any detectable loss of analyte binding.</ns0:p><ns0:p>To determine whether the amount of streptavidin immobilised on the surface could indeed be controlled by changing the percentage thiolated OEG in the SAM, SAMs with three different percentages of thiolated biotin OEG were functionalised with streptavidin. The data demonstrate a dependence of the level of streptavidin captured on the SAM on the percentage thiolated biotin OEG in the original ligand mixture used to assemble the SAM (Fig. 1C). This in turn enables the density of biotinylated immobilised ligand to be controlled. In the previous work, maximal areal density of streptavidin was obtained with 0.1 % mole/mole thiolated biotin OEG <ns0:ref type='bibr'>(Migliorini et al. 2014)</ns0:ref>, but in the present work this did not afford anything like full coverage of the surface. That 1% (mole/mole) is likely to do so is supported by several lines of evidence. Supported lipid bilayers on glass require 5% mole/mole biotinylated lipid for full coverage by streptavidin, which is higher than that obtained on the gold surface, likely due to the mobility of the lipids increasing the streptavidin's packing density <ns0:ref type='bibr' target='#b22'>(Migliorini et al. 2014)</ns0:ref>. OEG is considerably smaller than a lipid and, with streptavidin occupying ~23 nm², 1% mole/mole thiolated biotin OEG will be sufficient to provide a streptavidin monolayer. This monolayer will have gaps, simply due to the inefficient packing that is a consequence of the thiolated OEG molecules being immobilized on the gold surface.</ns0:p><ns0:p>The heparin functionalised surfaces were then tested for FGF2 binding, as the interaction of this growth factor with heparin has been well characterised, including the measurement of binding kinetics in optical biosensors, e.g., <ns0:ref type='bibr' target='#b8'>(Delehedde et al. 2002)</ns0:ref>. First the running buffer was changed to PBST. FGF2 (1 mL, 100 nM) was injected over the three measurement channels, previously functionalised with biotin heparin immobilised on streptavidin (Fig. 3A), and the control channel (functionalised solely with streptavidin, Fig. 3A, dotted line). There was little response in the control channel, indicating that, as previously observed, FGF2 did not bind streptavidin <ns0:ref type='bibr' target='#b27'>(Rahmoune et al. 1998)</ns0:ref> or the underlying OEG SAM <ns0:ref type='bibr' target='#b22'>(Migliorini et al. 2014)</ns0:ref>. In contrast, in the measurement channels there was a significant response (~1.2 nm). When the channels were returned to running buffer, the response was stable. The lack of dissociation (or very slow dissociation) is a hallmark of the rebinding of a dissociated ligand before it can diffuse into the flowing bulk solution <ns0:ref type='bibr' target='#b30'>(Schuck & Zhao 2010)</ns0:ref>. Rebinding is particularly acute with interactions possessing a significant electrostatic component, as is the case with protein-sulfated glycosaminoglycan binding, since these have fast association rate constants due to electrostatic steering. Rebinding can be overcome by injection of non-biotinylated ligand, in this case heparin <ns0:ref type='bibr' target='#b28'>(Sadir et al. 1998)</ns0:ref>. As expected, this resulted in a dissociation curve (Fig. 4A). Washing the surface with 1 mL 2 M NaCl and then 20 mM HCl completely regenerated the surface, as the baseline returned to its original value (Fig. 3A). The second 20 mM HCl regeneration step was included to ensure both full regeneration of the surface and that no analyte remained associated with the fluidics.</ns0:p><ns0:p>Recently, we have described two new assays for the identification of inhibitors of heparan sulfate sulfotransferases (HSSTs) <ns0:ref type='bibr' target='#b6'>(Byrne et al. 2018b)</ns0:ref>, the enzymes responsible for the sulfation of precursors of heparan sulfate and heparin. These assays can measure whether a compound interacts with a sulfotransferase resulting in a change to its thermal stability or the activity of the sulfotransferase towards a model oligosaccharide substrate. The latter assay can also determine whether the compound is competitive with 3'-phosphoadenosine-5'-phosphosulfate (PAPS), the universal sulfate donor. However, this assay cannot determine whether a compound is competing with the sugar acceptor, since the sulfation of the sugar acceptor is measured. Consequently, it was of interest to determine whether it would be possible to develop SPR assays to explore the selectivity of a HSST for particular patterns of sulfation of the sugar acceptor, and whether heparin-binding might be used to identify likely acceptor competitors. When 50 nM GST-HS3ST3A1 was injected over the control, streptavidin functionalised surface, the small response observed returned to baseline as soon as the injection ended and the surface had been returned to running buffer (Fig. 3B, dotted line). This response is characteristic of a bulk shift, presumably arising from the refractive index of the GST-HS3ST3A1 and its buffer being higher than that of running buffer. Thus, HS3ST3A1 does not interact with streptavidin or the underlying OEG SAM.</ns0:p><ns0:p>Injection of GST-HS3ST3A1 over the heparin-derivatised surface resulted in a response of 0.5 nm (Fig. 3B). As observed with FGF2 (Fig. 3A), when the surface returned to running buffer, there was little dissociation (Fig. 3B). Again, dissociation could be induced by the injection of heparin over the surface (Fig. 4B). The heparin-derivatised surface was successfully regenerated with a wash with 2 M NaCl followed by one with 20 mM HCl (Fig. 3B). However, upon multiple injections, a small, but measurable rise in baseline was sometimes observed, indicative of incomplete removal of the bound analyte. Consequently, the surface was thereafter regenerated with a wash of 0.25 % (w/v) SDS; a second wash with 20 mM HCl was undertaken to ensure that no residual SDS was present on the fluidics or the surface (Fig. 3C). These latter data demonstrate that 0.25 % (w/v) SDS does not remove bound streptavidin or damage the OEG SAM, but did afford full regeneration when many injections of the enzyme were performed.</ns0:p><ns0:p>We next examined the competition for GST-HS3ST3A1 binding to the heparin-derivatised surface by soluble heparin. GST-HS3ST3A1 (50 nM final concentration) was pre-mixed with heparin to yield the indicated final concentration. Samples were injected over the heparin-derivatised surface and then regenerated with 2 M NaCl and 20 mM HCl prior to the next injection. The data demonstrate dose-dependent inhibition of binding by soluble heparin with an IC50 of under 0.34 µg/mL heparin (Fig. 5A). To identify sulfation patterns preferentially recognised by GST-HS3ST3A1, competition assays using a library of modified heparins were performed. Selective removal of sulfate groups to produce the different modified heparins (Table 1) was expected to reduce their inhibition of the binding of the enzyme to the immobilised heparin, as seen previously with other heparin/HS binding proteins <ns0:ref type='bibr' target='#b17'>(Li et al. 2016)</ns0:ref>. Consequently, a final concentration for the heparin derivatives of 1.7 µg/mL was chosen, since with the parent heparin this provided close to 100% inhibition (Fig. 5A). Selective removal of any one of the sulfate groups on heparin reduced somewhat the effectiveness of the derivative as a competitor, although these different derivatives, D2 to D4, were not equivalent, indicating that effective interaction also involved a degree of conformational compatibility. Thus, heparin derivatives with sulfate at the C2 position of the iduronate residue and sulfate at either the C6 or the C2 positions of the glucosamine residue, D2 and D4, respectively, were nearly as successful competitors as the parent heparin (Fig. 5B). In contrast, the heparin with sulfate at C2 and C6 of glucosamine residues, D3, was a less effective inhibitor. These data indicate that GST-HS3ST3A1 has a preference for a polysaccharide containing 2-O sulfated iduronate and either 6-O or N-sulfated glucosamine. Loss of any two, or all three, sulfate groups resulted in heparin derivatives that were less effective competitors (Fig. 5A). Persulfated heparin (D9) was as effective a competitor as the parent heparin, but not more effective, which indicated that simple charge density was not entirely responsible for the interaction. These data are consistent with the observation that GST-HS3ST3A1 is a 'gD' HS 3-O sulfotransferase <ns0:ref type='bibr' target='#b23'>(Moon et al. 2004)</ns0:ref>, which sulfates the hydroxyl group at the C3 position of N-sulfated glucosamine residues in the context of flanking 2-O sulfated iduronate residues, to produce structures that bind the gD protein of herpes simplex virus <ns0:ref type='bibr' target='#b23'>(Moon et al. 2004)</ns0:ref>.</ns0:p><ns0:p>Suramin, one of the compounds identified in a recent screen of inhibitors of HS2ST1, was predicted by modelling to compete at least in part for the sugar acceptor site on the enzyme <ns0:ref type='bibr' target='#b6'>(Byrne et al. 2018b)</ns0:ref>. This was consistent with the documented ability of suramin to compete with, for example, the binding of FGF2 to heparan sulfate <ns0:ref type='bibr' target='#b14'>(Fernig et al. 1992)</ns0:ref>. The observation that suramin was also an inhibitor of tyrosylprotein sulfotransferase 1 <ns0:ref type='bibr' target='#b5'>(Byrne et al. 2018a)</ns0:ref>, however, suggested that suramin might be a fairly generic sulfotransferase inhibitor that competed with the universal sulfate donor PAPS. We therefore tested whether suramin would compete for GST-HS3ST3A1 binding to heparin. Suramin was premixed with 50 nM GST-HS3ST3A1 to provide the indicated final concentrations (Fig. 6). The data demonstrate a dose-dependent inhibition of heparin binding by suramin with an IC50 of around 5 µM (Fig. 6). This provides experimental support for the modelling studies presented previously <ns0:ref type='bibr' target='#b6'>(Byrne et al. 2018b)</ns0:ref>. Thus, the heparin-derivatised surface also provides the means to identify compounds that compete for the polysaccharide acceptor substrate and that are likely to be inhibitors of the HSSTs.</ns0:p></ns0:div>
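For readers who wish to reproduce the competition analysis numerically, the sketch below shows one way of converting responses into percentage binding (relative to the no-competitor control) and estimating an approximate IC50 by log-linear interpolation between the bracketing concentrations. It is a minimal illustration only; the concentrations and responses are invented placeholders, not the measured data, and fitting a full dose-response model would normally be preferred.

```cpp
// Sketch: percentage binding relative to the no-competitor control and a
// log-linear interpolation of IC50. All values are hypothetical placeholders.
#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    double control = 0.50;  // response (nm) with no competitor (hypothetical)
    std::vector<double> conc = {0.1, 0.34, 1.0, 3.4, 10.0};    // competitor, µg/mL
    std::vector<double> resp = {0.45, 0.30, 0.12, 0.04, 0.01}; // responses, nm

    std::vector<double> pct;
    for (double r : resp) pct.push_back(100.0 * r / control);

    // Find the first interval bracketing 50 % and interpolate on log10(conc).
    double ic50 = -1.0;  // stays -1 if 50 % is never crossed
    for (size_t i = 1; i < pct.size(); ++i) {
        if (pct[i - 1] >= 50.0 && pct[i] < 50.0) {
            double x0 = std::log10(conc[i - 1]), x1 = std::log10(conc[i]);
            double f = (pct[i - 1] - 50.0) / (pct[i - 1] - pct[i]);
            ic50 = std::pow(10.0, x0 + f * (x1 - x0));
            break;
        }
    }
    std::printf("Approximate IC50 = %.2f ug/mL\n", ic50);
    return 0;
}
```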
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The surface assembled here provides a versatile means by which to capture biotinylated ligands and is characterised by a low level of non-specific binding. At each stage of assembly, these surfaces may be stored and, in the case of the final heparin functionalised surface, re-used, and are resistant to a range of regeneration methods. The thiolated OEG/thiolated biotin OEG surfaces can be stored at 4 °C in ethanol for at least three months, while streptavidin and the biotin heparin functionalised surfaces can be stored at 4 °C in PBS for at least 10 days, and the latter re-used over a week. Moreover, functionalised surfaces can be plasma cleaned and re-used 2-3 times. Consequently, these surfaces are entirely compatible with the low cost and portability of the instrument and provide an alternative to the peptide surface employed previously, e.g., <ns0:ref type='bibr' target='#b0'>(Bolduc et al. 2009;</ns0:ref><ns0:ref type='bibr' target='#b1'>Bolduc et al. 2011;</ns0:ref><ns0:ref type='bibr' target='#b2'>Bolduc et al. 2010)</ns0:ref>. The stability of the heparin derivatised surface makes this an interesting route to the development of generic capture surfaces for 'sandwich' type assays, since many proteins associated with pathological changes <ns0:ref type='bibr' target='#b26'>(Nunes et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b29'>Salamat-Miller et al. 2007)</ns0:ref>, including viral ones, bind heparin, a close structural analogue of the naturally occurring cell surface receptor for many viruses, including SARS-CoV-2 <ns0:ref type='bibr' target='#b15'>(Kim et al. 2020;</ns0:ref><ns0:ref type='bibr' target='#b18'>Liu et al. 2021;</ns0:ref><ns0:ref type='bibr'>Mycroft-West et al. 2020)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 1 (on next page)</ns0:head><ns0:note type='other'>Figure 2</ns0:note><ns0:p>Capture of streptavidin on biotin OEG/OEG self-assembled monolayers and functionalisation with reducing end biotinylated heparin.</ns0:p><ns0:note type='other'>Figure 4</ns0:note><ns0:p>Dissociation induced by soluble heparin. Channels A-C on a chip with streptavidin captured on a 1% mole/mole biotin OEG SAM were functionalised with biotinylated heparin. 1 mL (A) 100 nM FGF2 and (B) 100 nM GST-HS3ST3A1 were injected at 500 µL/min. After the surface was returned to running buffer, 1 mL heparin in PBS was injected over the surface.</ns0:p><ns0:note type='other'>Figure 5</ns0:note><ns0:p>Competition by heparin derivatives for HS3ST3A1 binding to immobilised heparin. Channels A-C on a chip with streptavidin captured on a 1% mole/mole biotin OEG SAM were functionalised with biotinylated heparin. GST-HS3ST3A1 (50 nM, 1 mL) was mixed with increasing concentrations of (A) heparin, and (B) 1.7 µg/mL heparin and heparin derivatives (Table 1). Between each injection the surface was regenerated by a 1 mL injection of 2 M NaCl followed by 1 mL 20 mM HCl. The response was the difference in nm of signal between the initial baseline in PBST and the baseline achieved after the surface returned to PBST following the injection of analyte; the percentage binding was calculated with respect to the response in the absence of added competitor. The data in this figure and in Figure 6 were acquired on the same surface. All values were measured by a single injection over the three measurement channels, so as triplicate technical repeats. However, the control values for 50 nM GST-HS3ST3A1 were determined independently eight times, twice at the start, then every fourth injection, to ensure the surface was responding consistently throughout. The control value is, therefore, the mean of eight independent triplicate repeats and is the same for Panels (A) and (B) and for Figure 6. Error bars are the means ±SD.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Following ex-situ assembly of the biotin OEG/OEG monolayer, the chip was inserted into the P4SPR instrument and a flow rate of 500 µL/min PBS was maintained. (A) One mL of 20 µg/mL streptavidin in PBS was injected over a 1 % (mole/mole) biotin OEG/OEG SAM in channels A-C and D, after which unbound streptavidin was removed by a 1 mL wash with 2 M NaCl in 10 mM (buffer), pH 7.2, and, after returning the surface to PBS, with 1 mL 20 mM HCl. A repeat injection of 1 mL 20 µg/mL streptavidin in PBS over channels A-C and D (800 min, red arrows, inset) did not result in the capture of any further streptavidin on the surface. (B) Reducing end biotinylated heparin (90 µg/mL) was injected over the streptavidin-derivatised 1% (mole/mole) biotin OEG SAM in channels A-C from panel (A). After washing the surfaces with 2 M NaCl and 20 mM HCl to remove any loosely bound biotin heparin, a second injection of reducing end biotinylated heparin (90 µg/mL) was performed over channels A-C (2260 min, red arrows, inset). (C) Effect of altering the mole percentage of biotin OEG in the SAM on the amount of streptavidin captured. Three surfaces with SAMs assembled using the indicated mole percentage biotin OEG were prepared. Streptavidin (1 mL, 20 µg/mL) was injected over the surfaces. The traces from each experiment are superimposed to enable comparison. Data for the three measurement channels A-C are the mean of the response in these channels.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6</ns0:figDesc></ns0:figure>
</ns0:body>
" | "Dear Dr Urban,
We would like to thank the reviewers for their time in evaluating our manuscript. The paper had its genesis in our frustration in reproducing published work from collaborators, due to conventional methods sections of papers being inadequate. Indeed, I had to contact authors on this paper to obtain key details to successfully produce functional surfaces. Thus, the data we present serve to exemplify applications and while there is new information in some of the measurements, e.g., HS3ST3A1, this is not the main thrust of the work.
Below we provide our response to the reviewers’ comments and note where the manuscript has been changed. All changes in the manuscript are tracked.
Yours sincerely,
Dave Fernig
General comments
The journal editorial office felt that the method described for the expression and purification of HS3ST3A1 was too similar to that already described for HS3ST1. This is indeed the case, as the methods are identical, though the two proteins are different and encoded by different genes. We have not put the methods up on Protocols.io, because we are currently exploring high throughput methods to optimise expression. Thus, while the method described is the one used, we may have an improved method in due course. With the Wheeler et al., paper now published, we cite this and have paraphrased the text to reduce the amount of text re-use.
Lines 122-138
1: Re-used Text
As part of our pre-submission checks, we noticed that your manuscript has an unacceptable level of apparently re-used text. We are unable to consider your submission unless you address this issue and submit a new version of the manuscript.
https://pubs.rsc.org/en/content/articlelanding/2022/OB/D1OB02071D Lines 130-134
Despite the changes to lines 122-128, the editorial office felt that lines 130-134 were still too similar to the previously published method. The passage in question (with duplicated text highlighted) is:
Lines 129-134
Soluble GST-HS3ST3A1 in the supernatant was purified by application to a 2 mL glutathione resin (Genscript Biotech Corporation, Netherlands), washing with 100 mM NaCl, 50 mM Tris, pH 7.4 and eluting with 10 mM reduced glutathione, 100 mM NaCl, 50 mM Tris, pH 7.4. The eluate was immediately applied to a 1 mL heparin (Affi-Gel hep, Bio-Rad, UK) column, washed with 50 mM NaCl, 50 mM Tris, pH 7.4, and eluted with 600 mM NaCl, 50 mM Tris, pH 7.4 (Supplementary Fig. 1).
There is not much that can be done, since most of the duplicated text relates to specific experimental conditions, without which there isn’t an adequate, so reproducible, description of the method. As the two proteins are different, it is important to provide a complete method, to avoid the uncertainty that might arise should a reader instead be directed to the Wheeler et al. paper, which describes the purification of recombinant HS3ST1.
2: Equal Authorship
You have designated equal co-authors in the submission system, but not on the manuscript cover page. If your manuscript is accepted, we will only use the information entered in the system.
Drs Hao and Li are indeed equal co-authors. As the system information is correct, this is fine. However, to ensure consistency, the manuscript cover page has been altered accordingly.
3: Figure Quality
Please reformat Figure 3B, 5A and 6 by removing the line break in the middle of your data range. Truncating the data range makes your measurements appear closer than they actually are.
In Figure 3B the data visualisation problem is that the signal from the binding of HS3ST3A is small compared to the refractive index change that occurred during the regeneration steps. We resorted to two line breaks, one on the x-axis and another on the y-axis for the regeneration step. We have now provided the figure in a different format, with single x- and y-axes, and an inset showing an expansion of the y-axis for the binding of HS3ST3A, which is much more satisfactory.
In Figs 5A and 6, the x-axis is a log scale, appropriate for such a dose response. There is of course no value for log 0, so the zero value will in fact be an arbitrary distance from the first non-zero value. For this reason, the line breaks on the x-axes and the data lines should remain.
Please provide a replacement figure measuring minimum 900 pixels and maximum 3000 pixels on all sides, saved as PNG, EPS, or vector PDF file format without excess white space around the image.
We now provide all figures in PNG format of the appropriate size.
We have also corrected some minor typos throughout the manuscript.
Reviewer 1
The reviewer is fully satisfied with the paper and description of the methodology to produce a streptavidin functionalised OEG SAM, as well as the examples of the applications of such surfaces.
Reviewer 2
The reviewer is satisfied that the paper describes original research that is relevant and meaningful and has some queries, which are in bold italic. We address these as follows.
Articles structure generally OK, except that the authors go back & forth in citing Figures, particularly Figure 3-6
This is unavoidable due to the structure of the paper. We felt it important to take the reader through binding, dissociation and regeneration in that order. Given the issue of analyte rebinding, which is often ignored, having a separate figure for dissociation in the presence of non-biotinylated ligand, helps emphasise the importance of considering this part of the measurement carefully. However, this then necessitates returning the reader’s attention to Fig. 3. Since, the data in Figures 5 and 6 depend on the rather stringent regeneration required for HS3STA1, we again need to draw the reader’s attention to the earlier data. We feel that this is appropriate for a paper that is heavily focussed on methodology, but which in doing so enables the acquisition of new information on HS3ST3A1.
Validity of the findings
Supplemental/Raw data was not provided
Zip files of Excel spreadsheets containing the raw data were uploaded. However, we now realise that for the data in Figs 5 and 6, we only uploaded the spreadsheet with the calculated values and not the original data. We now supply these too.
Replicate information was not clear
Did the authors performed replicate injections and only the representative data were provided?
We thank the reviewer for bringing this to our attention, as the text was not clear on this point. As is best practice, the level of binding of 50 nM GST-HS3ST3A1 was measured repeatedly in the experiment: twice at the start, then after several measurements with a competitor and finally at the end. This provides a means to check that the surface functionality is not changing due to repeated regenerations or exposure to a competitor. In contrast, the response in the presence of competitors was measured once, so with three technical repeats. Moreover, as these are inhibition measurements, the control value and its uncertainty are extremely important. We have pooled these 8 values. We have altered the text in the Figure legends to make these points clear.
New text
Figure 5 legend
“The data in this figure and in Figure 6 were acquired on the same surface. All values were measured by a single injection over the three measurement channels, so as triplicate technical repeats. However, the control values for 50 nM GST-HS3ST3A1 were determined independently eight times, twice at the start, then every fourth injection, to ensure the surface was responding consistently throughout. The control value is, therefore, the mean of eight independent triplicate repeats and is the same for Panels (A) and (B) and for Figure 6. Error bars are the means ±SD.”
Figure 6 legend
“Results are the mean ±SD of three technical repeats and are expressed as a percentage of the response measured in the absence of suramin, which was determined in eight independent triplicate repeats, and is the same as that in Figure 5.”
Did the authors test percentage of biotin-OEG over >1%? If yes, could the data be shown? If not, why not?
This is an interesting point, as the areal density of streptavidin reported in Fig. 3A of Migliorini et al., 2014 with 0.1% (mole/mole) biotin OEG is near the maximum possible (200 ng/cm2). It is worth noting that the formation of the SAM is done ex situ so there is no direct measurement relating to its formation or density. In our hands 0.1% (mole/mole) biotin OEG did not afford high coverage by streptavidin and we assumed there was an order of magnitude error in reporting the method in the paper we cite. This conclusion is consistent with the work on supported lipid bilayers described in the same paper, which at 5% biotinylated lipid (mole/mole) produces a quasi 2-dimensional streptavidin monolayer.
While SPR is not suitable for measuring areal density, it is reasonable to assume that the density of the biotin OEG in the SAM reflects its relative concentration. A perfectly packed SAM of OEG is not attainable, but 1% (mole/mole) biotin OEG, which has ten ethylene glycol units, is sufficiently small in comparison to the lipids used in the supported lipid bilayers to provide for more than one biotin molecule for the area occupied by streptavidin, which is of the order of 23 nm2.
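As a rough arithmetic check (assuming, as an illustrative figure rather than a measured value, a typical alkanethiolate packing density on gold of roughly 4-5 molecules per nm²), 1% mole/mole biotin OEG corresponds to about 0.04-0.05 biotin groups per nm², i.e., on the order of one biotin within the ~23 nm² footprint of a streptavidin molecule, consistent with the argument above.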
Thus, we did not test relative concentrations of biotin OEG higher than 1% (mole/mole). We have added text to alert the reader that this is not an experimentally determined level for maximum streptavidin coverage.
New text
Lines 204-205 “and have been extensively characterized in terms of areal density of immobilized streptavidin and biotinylated polysaccharides”
Lines 242-252 “In the previous work, maximal areal density of streptavidin was obtained with 0.1 % mole/mole thiolated biotin OEG (Migliorini et al. 2014), but in the present work this did not afford anything like full coverage of the surface. That 1% (mole/mole) is likely to do so is supported by several lines of evidence. Supported lipid bilayers on glass require 5% mole/mole biotinylated lipid for full coverage by streptavidin, which is higher than that obtained on the gold surface, likely due to the mobility of the lipids increasing the streptavidin’s packing density (Migliorini et al. 2014). OEG is considerably smaller than a lipid and with streptavidin occupying ~23 nm2 then at 1% mole/mole, thiolated biotin OEG will be sufficient to provide a streptavidin monolayer. This monolayer will have gaps, simply due to inefficient packing that are the consequence of the thiolated OEG molecules being immobilized on the gold surface.”
The authors optimized/prepared surfaces using PBS as the running buffer but then switched to PBST for their FGF2 measurements. Is there a reason for it? If measuring FGF2 in PBST is important, I suggest that the surfaces be prepared in PBST as well.
To produce the best quality SAM, we did not include any other reagents, as we cannot be sure that these would not occupy space in the SAM and then subsequently be displaced by an analyte, as they are not thiolated and so not covalently bound to the gold surface. The inclusion of Tween-20, which contains ethylene glycol units like OEG, in binding buffers is standard and serves to reduce adsorption of the analyte to surfaces. This includes the fluidics. When working in the nM concentration range or lower, such adsorption, followed by desorption (e.g., when a competitor binds the analyte and induces a conformational change), can result in considerable noise.
We have added some text to explain why Tween-20 was included in the running buffer in protein binding experiments.
New text
Lines 195-196 “to prevent adsorption of analyte to the fluidics”
Why the authors choose 50 nM GST-HS3ST3A1 for Figure 3B and not 100 nM as they did in Figure 4B? This is important for a proper comparison between streptavidin/OEG and heparin binding efficiencies.
We are not measuring the concentration-dependence of binding, but of course a higher concentration of analyte should, until saturation is reached, result in greater binding. Thus, to illustrate a dissociation with good signal-to-noise we chose to double the concentration of HS3ST3A1.
In Figure 3C, a side-by-side comparison of SDS/HCl and NaCl/HCl will be useful for comparison.
The need for SDS/HCl regeneration became apparent to us in our initial work, which is not included in the paper. Over a series of injections of HS3ST1 that were followed by NaCl/HCl regeneration steps, we noticed that there was a progressive deterioration of the surface in terms of the binding capacity, which correlated with a modest rise in the baseline. We attributed this to imperfect regeneration, which only became apparent after many repeat injections. When we switched to SDS/HCl, the binding capacity over repeat injections remained stable. A side-by-side comparison is difficult to illustrate in this instance, since differences are only apparent with the NaCl/HCl regeneration after several injections. We have added a comment alerting the reader to this point and added text on regeneration strategies.
New text
Lines 270-272 “The second 20 mM HCl regeneration step was included to ensure both full regeneration of the surface and that no analyte remained associated with the fluidics.”
Lines 293-295 “However, upon multiple injections, a small, but measurable rise in baseline was sometimes observed, indicative of incomplete removal of the bound analyte. Consequently, the surface was thereafter”
Line 299 “but did afford full regeneration when many injections of the enzyme were performed”
Line 351 "and are resistant to a range of regeneration methods"
What is the motivation for using 0.25% SDS in Figure 3C? The authors should explain this in the text as it appears as a surprise!
This was not well explained, we trust the text added in response to the comment above provides clarity.
In Figure 3C, Ch_D dropped below baseline after HCl, why?
The response at the initial baseline (4454-4474 s) is 625.90± 0.03 nm (error is SD of the points) and after the HCl regeneration step (5330-5350 s) is 625.85±0.03 nm. The difference is 0.05 nm and the variation on each baseline is 0.03 nm. Thus, the ‘drop’ is well within the noise envelope.
Reviewer 3
The reviewer is satisfied that the paper is clear and describes new research and has some queries, which are in bold italic, addressed as follows.
The authors don't cite enough references to claim the necessary of using heparin of this research.
The focus of the paper is on the methodology for assembling a SAM that can be functionalised with any biotinylated ligand. The reviewer’s point is well made in that the importance of heparin as a ligand is only alluded to in the context of infectious diseases (line 89), which perhaps does not do justice to the conclusion, lines 356-362, and consequently the more general utility of the surface for capturing biotinylated ligands was not clear. We have added text and a reference to the extensive heparin-binding extracellular proteome to cover these points.
New text
Lines 85-88
“Heparin is a convenient experimental proxy for the sulfated domains of cellular heparan sulfate, which by virtue of the latter’s more than 800 extracellular protein partners is a major regulator of cell function in development, homeostasis and disease (Nunes et al.,). Indeed, “
The flow rate of the running buffer is 500μL/min, while it is too fast for molecular reaction and cost a lot of reaction reagent in the SPR experiment The authors should explain the reason for choosing this flow rate which is always lower than 50 μL/min in other common SPR instrument.
As highlighted in the paper from Schuck that we cite and other papers from the same author, the problem of laminar flow in these systems is substantial. We now state that we used the highest flow rate possible to reduce these issues, and alert the reader to the importance of considering flow rate and mass transport. The problem is particularly acute with protein-heparin interactions, as binding is kinetically dominated by electrostatic interactions, which due to steering have very fast association rate constants.
New text
Lines 154-157
“A high flow rate was chosen to minimize mass transport limitations (Schuck & Zhao 2010), which are particularly acute when molecular interactions are driven by electrostatic interactions, as for proteins binding heparin, though lower flow rates are often appropriate, depending on the characteristics of the interaction and the assay.”
Why two regeneration steps were conducted, and usually acid or alkali solution can achieve one-step regeneration. It should be more clearly discussed.
With some protein-heparin interactions a single regeneration with 2M NaCl is sufficient. We avoid alkali with these polysaccharides due to the possibility of alkali elimination. Acid was used to ensure full regeneration and to remove any analyte adsorbed to the fluidics. While an acid regeneration would likely be sufficient for FGF2, to reduce the contact time with acid or the need for repeat regenerations we first used the 2 M NaCl step.
The need for the SDS regeneration is discussed above in response to reviewer 2.
The other point to remember is that the demonstration of different regeneration steps provides the reader with a range of approaches in their own work, as some users may be less familiar with what approaches may be applicable to their regeneration problem.
" | Here is a paper. Please give your review comments after reading it. |
674 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Fizzy extraction (FE) facilitates analysis of volatile solutes by promoting their transfer from the liquid to the gas phase. A carrier gas is dissolved in the sample under moderate pressure (Δp ≈ 150 kPa), followed by an abrupt decompression, which leads to effervescence. The released gaseous analytes are directed to an on-line detector due to a small pressure difference. FE is advantageous in chemical analysis because the volatile species are released in a short time interval, allowing for pulsed injection, and leading to high signal-to-noise ratios. To shed light on the mechanism of FE, we have investigated various factors that could potentially contribute to the extraction efficiency, including: instrument-related factors, method-related factors, sample-related factors, and analyte-related factors. In particular, we have evaluated the properties of volatile solutes, which make them amenable to FE. The results suggest that the organic solutes may diffuse to the bubble lumen, especially in the presence of salt. The high signal intensities in FE coupled with mass spectrometry are partly due to the high sample introduction rate (upon decompression) to a mass-sensitive detector. However, the analytes with different properties (molecular weight, polarity) reveal distinct temporal profiles, pointing to the effect of bubble exposure to the sample matrix. A sufficient extraction time (~ 12 s) is required to extract less volatile solutes. The results presented in this report can help analysts to predict the occurrence of matrix effects when analyzing real samples. They also provide a basis for increasing extraction efficiency to detect low-abundance analytes.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Sample preparation-whether performed in manual or automated manner-is frequently an unavoidable step in chemical analysis workflows <ns0:ref type='bibr' target='#b28'>(Prabhu & Urban, 2017;</ns0:ref><ns0:ref type='bibr' target='#b1'>Alexovič et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b27'>Poole, 2020;</ns0:ref><ns0:ref type='bibr' target='#b47'>Zheng, 2020)</ns0:ref>. It can rely on analyte transfer between different phases in liquid-liquid, solid-liquid, liquid-gas, or solid-gas extraction systems. One of the available approaches is the recently introduced fizzy extraction (FE) approach, which relies on dissolution of a carrier gas in liquid sample under slightly elevated pressure, followed by a sudden decompression of the sample headspace leading to effervescence <ns0:ref type='bibr' target='#b3'>(Chang & Urban, 2016;</ns0:ref><ns0:ref type='bibr' target='#b44'>Yang, Chang & Urban, 2017)</ns0:ref>. Although the pressure applied to the sample is higher than the atmospheric pressure (Δp ≈ 150 kPa), it is still very low in comparison with the pressures utilized in supercritical fluid methods (~ 8-61 MPa) <ns0:ref type='bibr' target='#b13'>(Hawthorne, 1990)</ns0:ref>. On decompression, multiple (micro)bubbles gush toward the sample surface bringing analyte molecules to the gas phase. The effervescence in FE resembles the phenomenon occurring in shaken soda bottle or in blood vessels of an individual suffering from caisson disease. The surge of analyte molecules in the headspace (on the onset of effervescence) gives rise to high transient analyte signals. FE may be regarded as a pressure-controlled effervescence-assisted emulsification liquid-gas extraction approach.</ns0:p><ns0:p>FE was originally coupled with atmospheric pressure chemical ionization (APCI) mass spectrometry (MS), which enabled real-time monitoring of the released medium-polarity volatile and semivolatile compounds <ns0:ref type='bibr' target='#b3'>(Chang & Urban, 2016)</ns0:ref>. Chemical analysis by FE hyphenated with APCI-MS provides satisfactory detectability because the volatile organic compounds (VOCs) are liberated and reach the ion source of mass spectrometer in a short period of time (few seconds). The entire workflow is expeditious (< 5 min). Recently, FE was also hyphenated with gas chromatography <ns0:ref type='bibr'>(Yang & Urban, 2019)</ns0:ref>, and the extraction procedure was supplemented with automated features <ns0:ref type='bibr' target='#b46'>(Yang, Chang & Urban, 2019)</ns0:ref>. Partial automation is critical for repetitive control of the saturation and effervescence steps. Although the FE system has not yet been commercialized, it can readily be built by chemists using inexpensive modules (cf. <ns0:ref type='bibr' target='#b41'>Urban, 2018)</ns0:ref>.</ns0:p><ns0:p>The earlier work on FE raised questions about the possible mechanism responsible for the release of molecules into the gas phase. With the lack of a plausible description of the underlying principles, it is unclear in which cases the procedure can be applied and how to boost its performance. Therefore, in the present study, we attack the problem of FE mechanism. For that purpose, we first identify a number of factors that can potentially influence FE performance.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Materials and samples</ns0:head><ns0:p>Ethanol (anhydrous, 99.5+%) was from Echo Chemical (Miaoli, Taiwan). Ethyl propionate (EPR), ethyl pentanoate (EPE), ethyl hexanoate (EHX), ethyl nonanoate (EN), ethyl undecanoate (EUD), and polyethylene glycol (PEG) 400 were from Alfa Aesar (Ward Hill, MA, USA). Ethyl butyrate (EB) was from Acros Organics (Pittsburgh, PA, USA). Ethyl heptanoate (EHP) was from TCI (Tokyo, Japan). Ethyl octanoate (EO), ethyl decanoate (ED), gum arabic (GA, from acacia tree), and (R)-(+)-limonene (LIM) were from Sigma-Aldrich/Merck (St. Louis, MO, USA). 2-Propanol (Emsure ACS, ISO, Reag. Ph Eur grade), methanol (LC-MS-grade), and water (LC-MS-grade) were from Merck (Darmstadt, Germany). Sodium chloride was from Showa Chemical (Tokyo, Japan). Sodium dodecyl sulfate (SDS; >90%) was from Aencore Chemical (Melbourne, Australia).</ns0:p><ns0:p>Typically, the stock solutions of chemical standards were prepared in pure alcohols <ns0:ref type='bibr'>(methanol, ethanol, and isopropanol)</ns0:ref>. The stock solution concentration (EPR, EPE, EHP, EN, EUD, and LIM) was 1 M. The test samples containing ethyl esters with various carbon chains and LIM were prepared in alcohol/water mixture at varied percentage (depending on the experiments). The final concentration of chemical standards in the samples used in most experiments was 5×10 -6 M.</ns0:p></ns0:div>
<ns0:div><ns0:head>Apparatus</ns0:head><ns0:p>The extraction process is facilitated by two microcontrollers (chipKIT Uno32 and Arduino Mega 2560), which are programmed in C++. The program-controlled pressure changes lead to repeatable bubbling in the extraction chamber. The system also features a number of functions, which reduce human effort. Some of these functions are utilized in the present study (controlling valves and motor at defined time points, displaying experimental conditions on an LCD screen, and triggering MS data acquisition). V1 (solenoid valve) is used to control the flow of the carrier gas (typically, carbon dioxide), and it is connected to the sample chamber (20 mL screw top headspace glass vial with septum cap; cat. no. 20-HSVST201-CP; Thermo Fisher Scientific, Waltham, MA, USA) via a 20-cm section of polytetrafluoroethylene (PTFE) tubing (I.D. = 0.8 mm, O.D. = 1.6 mm, cat. no. 58700-U, Supelco; Sigma-Aldrich). A 14-cm section of soft rubber-like tubing (e. </ns0:p></ns0:div>
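The firmware itself is not reproduced in this report; only its functions (timed control of the valves and motor, an LCD status display, and triggering of MS acquisition) are described above. As a purely illustrative sketch of what such timed valve sequencing can look like, the Arduino-style C++ fragment below steps through a flushing/saturation/extraction cycle. The pin assignments and all durations are hypothetical placeholders and do not reproduce the authors' code.

```cpp
// Hypothetical Arduino-style sketch illustrating timed valve control for a
// fizzy-extraction cycle. Pin numbers and durations are placeholders only.
const int V1_PIN = 7;       // carrier-gas (inlet) solenoid valve
const int V2_PIN = 8;       // headspace (outlet) valve toward the detector
const int TRIGGER_PIN = 9;  // contact closure to start MS data acquisition

void setup() {
  pinMode(V1_PIN, OUTPUT);
  pinMode(V2_PIN, OUTPUT);
  pinMode(TRIGGER_PIN, OUTPUT);
}

void runCycle() {
  digitalWrite(TRIGGER_PIN, HIGH);                       // start MS acquisition
  // Flushing: sweep headspace volatiles toward the detector.
  digitalWrite(V1_PIN, HIGH); digitalWrite(V2_PIN, HIGH);
  delay(30000);                                           // placeholder duration
  // Saturation: close the outlet so carrier gas dissolves under pressure.
  digitalWrite(V2_PIN, LOW);
  delay(60000);                                           // placeholder duration
  // Extraction, stage 1: close inlet, open outlet -> decompression/effervescence.
  digitalWrite(V1_PIN, LOW); digitalWrite(V2_PIN, HIGH);
  delay(2000);                                            // placeholder duration
  // Extraction, stage 2: reopen inlet to scavenge remaining headspace vapors.
  digitalWrite(V1_PIN, HIGH);
  delay(28000);                                           // placeholder duration
  digitalWrite(V1_PIN, LOW); digitalWrite(V2_PIN, LOW);
  digitalWrite(TRIGGER_PIN, LOW);
}

void loop() {
  runCycle();
  while (true) {}  // single extraction per reset in this sketch
}
```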
<ns0:div><ns0:head>Procedure</ns0:head><ns0:p>An automated FE system was disclosed earlier <ns0:ref type='bibr' target='#b46'>(Yang, Chang & Urban, 2019)</ns0:ref>, and it was applied in this study following some modifications (e.g. removing the T-junction and Valve 3, the thermal printer, wireless control, and real-time data processing) (Fig. 1). The FE process consists of three steps <ns0:ref type='bibr' target='#b3'>(Chang & Urban, 2016)</ns0:ref> (see the Supporting Information text and Table S1). In the stage 2 of the extraction step (open V1 and V2), the carrier gas flow is switched on again to assist in the transfer of the remaining gas-phase analytes present in the sample headspace (28 s). The default conditions used in most experiments are as follows: sample, 5×10⁻⁶ M analytes dissolved in 5 vol. % alcohol/water mixture; carrier gas, carbon dioxide; extract transfer tubing, I.D. = 1.0 mm; extract transfer tubing length, 60 cm.</ns0:p></ns0:div>
<ns0:div><ns0:head>Mass spectrometry</ns0:head><ns0:p>Unless noted otherwise, the sample chamber was coupled via V2 and 60-cm (ETFE) extract transfer tubing with a triple quadrupole mass spectrometer (LCMS-8030; Shimadzu, Tokyo, Japan). It was used in conjunction with a DUIS ion source (Shimadzu) operated in the APCI positive-ion mode. The potential applied to the APCI needle was 4.5 kV. Nebulizer gas (nitrogen) flow rate was 2.5 L min⁻¹. Drying gas (nitrogen) flow rate was 15 L min⁻¹. The temperature of the desolvation line was set to 250 °C, while the temperature of the heated block was set to 300 °C. The data acquisition was performed in multiple reaction monitoring (MRM) mode (see Table S1 for transitions). The pressure of the collision gas (argon) was set to 230 kPa. The dwell time was 100 ms.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data treatment</ns0:head><ns0:p>Enhancement factor (EF) is defined as the maximal signal intensity (I_max-extraction) in the extraction step divided by the mean signal intensity (mean_flushing) during the flushing step (from 0.49 to 1.00 min):</ns0:p><ns0:formula xml:id='formula_1'>EF = I_max-extraction / mean_flushing (1)</ns0:formula><ns0:p>Signal-to-noise ratio (S/N) was calculated using the maximal signal intensity in the extraction step minus the average intensity of the saturation step (local baseline, from 1.35 to 1.85 min) divided by the root mean square of the blank sample in the extraction step (from 2.04 to 2.97 min):</ns0:p><ns0:formula xml:id='formula_2'>S/N = (I_max-extraction − mean_flushing) / RMS_extraction(blank) (2)</ns0:formula><ns0:p>In one part of this study-in order to compensate for differences in ionization efficiencies of the tested analytes-the corrected signal (S_corrected) was calculated based on the maximal signal intensity obtained in FE (I_max-FE) multiplied by a correction factor:</ns0:p><ns0:formula xml:id='formula_3'>S_corrected = I_max-FE × CF (3)</ns0:formula><ns0:p>The correction factor (CF) is defined as the average (averaging interval: 1 min) extracted ion current (EIC) of the target analyte (I_target) obtained in direct liquid infusion to APCI-MS (sample flow rate, 40 μL min⁻¹; sample, 5×10⁻⁶ M analytes dissolved in 5 vol. % ethanol/water mixture) divided by the average intensity of EPR (I_EPR, used as a reference; averaging interval: 1 min):</ns0:p><ns0:formula xml:id='formula_4'>CF = I_target / I_EPR (4)</ns0:formula></ns0:div>
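To make Eqs (1) and (2) concrete, the short sketch below evaluates EF and S/N for a set of hypothetical intensity readings; the time windows and values are placeholders, and in practice the quantities are read from the recorded MRM chromatograms.

```cpp
// Illustrative calculation of the enhancement factor (Eq. 1) and the
// signal-to-noise ratio (Eq. 2) from hypothetical intensity traces.
#include <cmath>
#include <cstdio>
#include <vector>

static double mean(const std::vector<double>& v) {
    double s = 0; for (double x : v) s += x; return s / v.size();
}
static double rms(const std::vector<double>& v) {
    double s = 0; for (double x : v) s += x * x; return std::sqrt(s / v.size());
}

int main() {
    // Hypothetical readings (arbitrary units).
    std::vector<double> flushing = {120, 110, 130, 125};      // flushing-step window
    std::vector<double> blank_extraction = {15, 12, 18, 14};  // blank sample, extraction window
    double i_max_extraction = 9500;                           // analyte maximum in the extraction step

    double ef = i_max_extraction / mean(flushing);                              // Eq. (1)
    double sn = (i_max_extraction - mean(flushing)) / rms(blank_extraction);    // Eq. (2), as printed
    std::printf("EF = %.1f, S/N = %.1f\n", ef, sn);
    return 0;
}
```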
<ns0:div><ns0:head>Results</ns0:head><ns0:p>To shed light on the mechanism of FE, we have studied the influence of a number of parameters on the extraction process. These parameters are grouped into four categories: (1) instrument-related; (2) method-related; (3) sample-related; (4) analyte-related.</ns0:p></ns0:div>
<ns0:div><ns0:head>Instrument-related factors affecting FE Extract transfer tubing diameter and length</ns0:head><ns0:p>First, we tested 30-cm sections of four types of polytetrafluoroethylene (PTFE) tubing with different inner diameters (0.3, 0.6, 0.8, and 1.0 mm) as extract transfer tubing. The EFs and S/N increased as the inner diameter increased, especially for EPE, EHP, EN and LIM (Fig. 2A). More gas-phase analyte molecules could be transferred to the MS ion source per time unit when a tubing with a larger diameter was used. We further verified the influence of the extract transfer tubing length (30, 40, 50, 60 cm; I.D. = 1.0 mm) on the extraction process. The EFs and S/N of medium-volatility compounds (EPE, EHP, and LIM) increased with the increasing tubing length (Fig. 2B). This result may be related to the fact that the use of longer tubing leads to a lower gas flow rate <ns0:ref type='bibr' target='#b7'>(Coelho & Pinho, 2007)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Method-related factors affecting FE</ns0:head></ns0:div>
<ns0:div><ns0:head>Gas type</ns0:head><ns0:p>It is known that different gases form bubbles with different sizes <ns0:ref type='bibr' target='#b11'>(Hanafizadeh et al., 2015)</ns0:ref>. While carbon dioxide is mainly used to produce fizzy drinks, nitrogen is occasionally used <ns0:ref type='bibr' target='#b15'>(Hildebrand & Carey, 1969)</ns0:ref>. Here, five easily available gases (carbon dioxide, nitrogen, air, argon, and helium), with different physical properties, were investigated. Based on the recorded EFs and S/N, all the tested gases can be used in FE (Fig. 3A). However, extraction of one compound (EPE) was clearly affected by the type of gas used. In the case of carbon dioxide, the EF was the lowest, while in the case of helium, it was the highest. According to Eq. (1), a high EF can be due either to a high signal amplitude in the extraction step or to a low signal amplitude in the flushing step. Solubility of gases in water <ns0:ref type='bibr' target='#b10'>(Gevantman, 2000)</ns0:ref> can influence the dissolution of gases during pressurization, thus affecting the formation of bubbles. Moreover, the densities of the tested gases are dissimilar <ns0:ref type='bibr' target='#b31'>(Rathakrishnan, 2004)</ns0:ref>. These different densities can also contribute to some selectivity in VOC removal from the sample headspace in the flushing step. Owing to the poor solubility of helium in water, and its low density, helium may predominantly flow in the upper part of the headspace in the sample chamber, scavenging highly volatile analytes, which easily diffuse to the upper section of the sample chamber. Therefore, the signals of VOCs during headspace flushing are generally low when using helium as the carrier gas (Fig. S1).</ns0:p></ns0:div>
<ns0:div><ns0:head>Fast decompression vs. slow decompression</ns0:head><ns0:p>The rationale of this test builds on a daily-life observation: When one slightly loosens cap of a carbonated drink bottle, numerous microbubbles or foam are formed. Conversely, when one unscrews the cap rapidly, the emerging bubbles quickly coalesce, leading to a smaller number of large bubbles. Microbubbles have higher surface area-to-volume ratio than large bubbles <ns0:ref type='bibr' target='#b48'>(Zimmerman et al., 2008)</ns0:ref>. Thus, liquid-gas phase equilibrium should be established in microbubbles faster than in large bubbles.</ns0:p><ns0:p>To emulate the process of slow decompression, we pulsed V2 with different open duty cycles while the cycle duration was fixed at 400 ms. At first, we only applied pulsations in the stage 1 of the extraction step (for 2 s). For all the tested conditions, the results did not show clear trends (Fig. <ns0:ref type='figure'>3B</ns0:ref>), and the temporal profiles recorded in different variants of decompression were similar (Fig. <ns0:ref type='figure' target='#fig_4'>S2</ns0:ref>). In order to verify the effect of slow decompression on FE, we further applied pulsations not only in the stage 1 but also in the stage 2 of the extraction step (for 30 s). Although no obvious changes were observed, EN exhibited a slightly higher EF and S/N with 25% open duty cycle than in other conditions (Fig. <ns0:ref type='figure'>3C</ns0:ref>). Moreover, the tested analytes revealed distinct temporal profiles when open duty cycles were varied (Fig. <ns0:ref type='figure'>S3</ns0:ref>). The areas under the curve (ion current of the extraction step) increased with decreasing open duty cycles (Fig. <ns0:ref type='figure'>S4</ns0:ref>). This observation shows that, when the gaseous extract is transferred to MS in pulses (rather than continuously), the sample can still be extracted continuously providing high VOC signals.</ns0:p><ns0:p>We also conducted an experiment, in which we changed the duration of the stage 1 of the extraction step. The EFs and S/N ratios were not affected to a great extent in most of the tested compounds except EHP (Fig. <ns0:ref type='figure'>3D</ns0:ref>). The decline in EF of EHP with increasing extraction time may be because the extracted EHP molecules were not concentrated in the headspace of the sample chamber sufficiently, and were transferred to the detector at different time points leading to two temporal peaks (Fig. <ns0:ref type='figure'>S5</ns0:ref>, extraction time, 10 s). The appearance of the first peak is linked to the VOCs that were extracted during effervescence (stage 1), while the second peak is linked to the scavenging of the headspace vapors by pumping the carrier gas (stage 2). The observation that the second peak was higher than the first peak (EHP and EN, 10 s) could be explained in the following way: The amount of dissolved carbon dioxide was insufficient to extract and scavenge the analytes with higher molecular weights and lower polarities in the stage 1. Scavenging of these analytes continued after opening V1. <ns0:ref type='figure'>Figures 4A-4C</ns0:ref> show the influence of organic solvents-present in the sample-on the extraction efficiency. Notably, the EFs and S/N decreased with increasing ethanol concentration. This effect is rationalized in the following way: Medium and low-polarity VOCs partition to the liquid phase because of their good solubility in ethanol. 
Moreover, the surface tension of sample matrix decreases with increasing ethanol concentration, which results in larger bubble size (Fig. <ns0:ref type='figure'>S6C</ns0:ref>). Overall, FE can be performed on samples containing common alcohols in a broad concentration range (0.1-25 vol. %). The three tested alcohols provide similar extraction performances. However, ethanol is the solvent of choice due to its lower toxicity. In addition, ethanol is present in alcoholic beverages, which can readily be analyzed by FE. In this case, the sample pretreatment can be reduced to simple dilution with pure water.</ns0:p></ns0:div>
<ns0:div><ns0:head>Sample-related factors affecting FE Sample solvent</ns0:head></ns0:div>
<ns0:div><ns0:head>Presence of salt</ns0:head><ns0:p>Addition of an electrolyte into an aqueous solution can affect partitioning of the organic solutes between the liquid phase and gas phase. Here, we investigated the effect of NaCl at varied concentrations on extraction efficiency of the test analytes. Although the EF of the chosen compounds did not reveal major differences, the S/N showed clear ascending trends for the increasing concentrations of NaCl, especially for the highly volatile compounds (EPR, EPE, EHP; Fig. <ns0:ref type='figure'>4D</ns0:ref>). This result can be explained with the occurrence of 'salting out effect' (see below) <ns0:ref type='bibr' target='#b16'>(Hyde et al., 2017)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Presence of surfactants and related additives</ns0:head><ns0:p>It is known that surface-active agents (so-called 'surfactants') can drastically influence bubble behavior; reduce rising velocity, prevent coalescence, and deplete/enhance mass transfer <ns0:ref type='bibr' target='#b38'>(Takagi & Matsumoto, 2011)</ns0:ref>. Here, we evaluated the effect of two commonly used surfactants (GA and SDS)-at concentrations below the critical micelle concentrations <ns0:ref type='bibr' target='#b25'>(Moroi, Motomura & Matuura, 1974)</ns0:ref>-on the extraction of the target analytes. As expected, the foam height increased notably with the increasing concentration of GA (Fig. <ns0:ref type='figure'>S6C</ns0:ref>). This phenomenon can be explained with the reduction of solution surface tension by GA, which promotes liberation of the dissolved gas <ns0:ref type='bibr' target='#b2'>(Cao et al., 2013)</ns0:ref>, and stabilization of the interfacial film <ns0:ref type='bibr' target='#b43'>(Wyasu & Okereke, 2012)</ns0:ref>. Nonetheless, we only observed a slight increase of EF and S/N in LIM (Fig. <ns0:ref type='figure'>4E</ns0:ref>). Similar to GA, the foaming height increases in the presence of SDS (Fig. <ns0:ref type='figure'>S6C</ns0:ref>), an anionic surfactant. The EF and S/N of medium polarity compounds (EHP, EN, LIM) revealed a slightly ascending trend for increasing concentrations of SDS (Fig. <ns0:ref type='figure'>4F</ns0:ref>).</ns0:p><ns0:p>Because interfacial rheological properties can greatly alter bubble stability by affecting the mass transfer across the interface <ns0:ref type='bibr' target='#b26'>(Pelipenko et al., 2012)</ns0:ref>, we took advantage of the strongly hydrophilic and viscous nature of PEG400 (a surfactant-related additive) <ns0:ref type='bibr' target='#b12'>(Harris, 2013)</ns0:ref> to evaluate its effect on the extraction performance of the selected analytes. The foaming height of the PEG400containing sample was greater than the one without adding PEG400 (Fig. <ns0:ref type='figure'>S6C</ns0:ref>). However, only the EF and S/N ratios of two compounds (EHP, LIM) were slightly increased when the concentration of PEG400 was increased (Fig. <ns0:ref type='figure'>4G</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Presence of gas bubble nucleation sites</ns0:head><ns0:p>One technology used by beer industry to control bubbling involves coating the inner walls of bottles with cellulose fibers to increase the abundance of nucleation sites <ns0:ref type='bibr' target='#b21'>(Liger-Belair, Polidori & Jeandet, 2008;</ns0:ref><ns0:ref type='bibr' target='#b18'>Lee & Devereux, 2011)</ns0:ref>. Formation of nucleation sites in carbonated liquids with low supersaturation ratios (e.g. carbonated beverages, sparkling wines) requires the presence of gas pockets with radii of curvature larger than the critical nucleation radius (typically, < 1 μm) <ns0:ref type='bibr' target='#b17'>(Jones, Evans & Galvin, 1999;</ns0:ref><ns0:ref type='bibr' target='#b19'>Liger-Belair, 2005)</ns0:ref>. At this condition, the nucleation energy barrier is overcome, thus promoting bubble growth (type IV non-classical nucleation, according to the nomenclature by <ns0:ref type='bibr'>Jones et al.)</ns0:ref>. <ns0:ref type='bibr' target='#b17'>(Jones, Evans & Galvin, 1999;</ns0:ref><ns0:ref type='bibr' target='#b23'>Lubetkin, 2003)</ns0:ref> The tiny and hollow cavities within cellulose fibers (lumens) are responsible for the production of bubble trains, and are regarded as one of the most abundant sources of nucleation sites <ns0:ref type='bibr' target='#b20'>(Liger-Belair, Voisin & Jeandet, 2005)</ns0:ref>. Therefore, we intentionally introduced different numbers of cotton fibers to the sample chamber to increase the abundance of nucleation sites. Although no obvious differences in EFs were observed, a slight increase of S/N ratios could be seen (Fig. <ns0:ref type='figure'>4H</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Analyte-related factors affecting FE Analyte properties</ns0:head><ns0:p>To characterize general applicability of FE, we tested a series of ethyl esters with different carbon numbers (C 5 -C 13 ) as model analytes (Table <ns0:ref type='table'>S2</ns0:ref>). The dependencies of EFs and S/N ratios on the investigated physical properties showed distinct trends revealing the amenability of most of the tested compounds to analysis by FE-APCI-QqQ-MS (Fig. <ns0:ref type='figure'>5</ns0:ref>). This result is in line with the result obtained by FE-GC-Q-MS <ns0:ref type='bibr'>(Yang & Urban, 2019)</ns0:ref>.</ns0:p><ns0:p>In the case of the highly volatile compounds (e.g. EPR, EB), the intensities of EICs in the flushing and extraction steps are similar, resulting in low EFs. This observation is explained with the fact that the liquid-gas equilibrium is established rapidly. In addition, these polar short-chain ethyl esters strongly interact with solvent molecules, thus limiting their transfer to the gas phase (bubble, headspace) during effervescence. On the other hand, in the case of less volatile compounds (e.g. ED, EUD), the poor EFs may be attributed to their low vapor pressures and high molecular weights. Since the mass transfer of big molecules into the bubbles (predominantly by diffusion) is limited, only a small portion of these low-volatility species can be transferred to the gas phase, leading to low MS signal intensities.</ns0:p><ns0:p>Ionization efficiencies may differ for the tested analytes, possibly leading to misinterpretation of the results. To compensate for the anticipated ionization bias, we applied correction factors (see Eq. ( <ns0:ref type='formula'>3</ns0:ref>)) obtained by direct liquid infusion to the FE results. Overall, the analytes characterized with satisfactory EFs, S/N, and corrected signal intensities are those that contain 7-10 carbon atoms, have boiling points in the range of 420-480 K, logarithm of octanol-water partition coefficient values in the range of 1.8-3.3, surface tensions in the range of 0.0265-0.0285 N m -1 , vapor pressure values in the range of 0.03-0.63 kPa, and solubilities in water in the range of 0.08-3.71 g kg -1 (Fig. <ns0:ref type='figure'>5</ns0:ref>). They are highly amenable to FE (EFs ranging from 15 to 75).</ns0:p></ns0:div>
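The property windows quoted above lend themselves to a simple screening rule. The helper below (Python) encodes them directly; the numeric thresholds are the ones reported in this section, while the function itself, its name, and its use as a yes/no filter are only an illustrative convenience and are no substitute for measuring actual EFs.

```python
def likely_amenable_to_fe(carbons, boiling_point_K, log_kow, surface_tension_N_per_m,
                          vapor_pressure_kPa, water_solubility_g_per_kg):
    """Rough screen based on the property windows reported for analytes with EFs of 15-75."""
    return (7 <= carbons <= 10
            and 420 <= boiling_point_K <= 480
            and 1.8 <= log_kow <= 3.3
            and 0.0265 <= surface_tension_N_per_m <= 0.0285
            and 0.03 <= vapor_pressure_kPa <= 0.63
            and 0.08 <= water_solubility_g_per_kg <= 3.71)

# Example call with made-up property values for a hypothetical mid-chain ethyl ester:
print(likely_amenable_to_fe(8, 461, 2.8, 0.027, 0.2, 1.0))
```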
<ns0:div><ns0:head>Note on the data variability</ns0:head><ns0:p>To verify the reliability of the results obtained in this study, we further evaluated the reproducibility of the FE technique at the default conditions (carrier gas, carbon dioxide; extract transfer tubing, I.D. = 1.0 mm; extract transfer tubing length, 60 cm; solvent, 5 vol. % ethanol in water). In the case of EFs, the dense distribution of the data points implies good reproducibility among numerous replicates (n = 33; 11 days; Fig. <ns0:ref type='figure' target='#fig_5'>S7</ns0:ref>). However, in the case of S/N, the data cloud shows a broader distribution, especially for EPR, EPE, and EHP (Fig. <ns0:ref type='figure' target='#fig_5'>S7</ns0:ref>). This can be rationalized with the technical variability of the instrument conditions, which may contribute to different MS signal intensities and spectral noise on different days. It is evident that using EFs is an appropriate way to evaluate the extraction performance of FE. By calculating EFs, one can quantify the differences between FE and direct headspace vapor flushing in a single experiment. The use of EFs ensures satisfactory (inter-day) reproducibility and mitigates the effect of experimental variability, which would otherwise obscure relevant trends.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>It was earlier demonstrated that removal of VOCs from aqueous samples can be greatly enhanced with the aid of microbubbles generated by purging such samples with gases delivered to the bottom of the sample vessel <ns0:ref type='bibr' target='#b42'>(Wang & Lenahan, 1984;</ns0:ref><ns0:ref type='bibr' target='#b36'>Shimoda et al., 1994)</ns0:ref>. However, the use of relatively long sample column and large sample volumes are critical to achieve satisfactory extraction efficiencies. To boost analytical performance, the effluent gas extracts are often trapped prior to analysis, as it is in purge-closed-loop and purge-and-trap methods <ns0:ref type='bibr' target='#b42'>(Wang & Lenahan, 1984;</ns0:ref><ns0:ref type='bibr' target='#b0'>Abeel, Vickers & Decker, 1994)</ns0:ref>. In general, the gas stripping approach-relying on concentration of the liberated analytes into small volumes-is time-consuming, requires the supply of energy (electricity for heating) and additional consumable materials (sorbent, cryogenic agent).</ns0:p><ns0:p>In FE-unlike in the sparging techniques-numerous microbubbles are formed in situ during fast decompression (Fig. <ns0:ref type='figure'>6</ns0:ref>). These microbubbles further grow due to absorption of dissolved gas <ns0:ref type='bibr' target='#b9'>(Fang et al., 2016)</ns0:ref>, and coalesce with other (micro)bubbles <ns0:ref type='bibr' target='#b4'>(Chaudhari & Hofmann, 1994)</ns0:ref>. Previously, we showed that increasing carbon dioxide pressure (0-150 kPa) during saturation leads to increased analyte peak areas during FE <ns0:ref type='bibr' target='#b3'>(Chang & Urban, 2016)</ns0:ref>. This effect can be attributed to the greater number of microbubbles formed when the concentration of the dissolved gas is higher. However, due to practical and safety-related issues, we did not further utilize higher pressures than 150 kPa. The addition of surfactant can reduce the surface tension, and alter the viscosity of the solution, thus increasing the number of microbubbles. However, the use of surfactants may also cause unmanageable foaming problems leading to contamination of flow lines with sample droplets and foam. In fact, the foaming height increased notably in the presence of surfactant-related additives (Fig. <ns0:ref type='figure'>S6C</ns0:ref>). Thus, in order to prevent contamination of the system, the concentration of surfactants cannot be too high. Only few compounds (EHP, LIM) showed slightly higher EFs and S/N when the surfactant concentration was increased (Fig. <ns0:ref type='figure'>4E-4G</ns0:ref>). Accordingly, one does not anticipate major matrix effects at low concentrations of surfactants present in the analyzed samples.</ns0:p><ns0:p>Microbubbles can readily form at nucleation sites (e.g. rough surface, lumen of cellulose fibers) <ns0:ref type='bibr' target='#b17'>(Jones, Evans & Galvin, 1999)</ns0:ref>. Thus, in one experiment, we introduced cotton fibers into the sample chamber to provide nucleation sites. The fact that the improvement of FE performance was only minor (Fig. <ns0:ref type='figure'>4H</ns0:ref>) could be due to an uneven distribution of cotton fibers or insufficient number of microbubbles generated with the aid of the air trapped within fiber lumens. It should be noted that the air entrapment is attributed to the fact that the time required for the liquid to fully invade the lumen by capillary action is longer than the time required to submerge the fibers (Liger-Belair, 2012). 
However, mechanical stirring (applied in stage 2 of the extraction step) can enhance contact between fiber lumens and solvent, thus eliminating the trapped air and reducing the formation of microbubbles. On the other hand, stirring itself can promote formation of bubbles <ns0:ref type='bibr' target='#b8'>(Dean, 1944)</ns0:ref>.</ns0:p><ns0:p>FE showed poor performance at high concentrations (> 25 vol. %) of alcohols (Fig. <ns0:ref type='figure'>4A-C</ns0:ref>). This result is explained with an elevated partitioning of organic solutes to the liquid phase. In addition, according to Stokes' equation <ns0:ref type='bibr' target='#b37'>(Stokes, 1851)</ns0:ref>, when the bubble size becomes large, the rise rate of such bubbles increases:</ns0:p><ns0:formula xml:id='formula_3'>(5) R = ρgD² / (18μ)</ns0:formula><ns0:p>where R is bubble rise rate (m s−1), ρ is density (kg m−3), D is diameter (m), and μ is dynamic viscosity (kg m−1 s−1). High rise rates increase the probability of bubble coalescence, thus reducing bubble density <ns0:ref type='bibr' target='#b30'>(Rajib, Farzeen, & Ali, 2017)</ns0:ref>. The observed reduction of bubbling and large bubble size at high alcohol concentration (Fig. <ns0:ref type='figure'>S6C</ns0:ref>) can also be attributed to the decreased surface tension <ns0:ref type='bibr' target='#b30'>(Rajib, Farzeen, & Ali, 2017)</ns0:ref>. Consequently, a few large bubbles provide a smaller liquid-gas interface area than many microbubbles, while the high rise rate shortens the time of their exposure to the sample, which, along with the increased partitioning to the liquid phase, leads to low extraction yields.</ns0:p><ns0:p>Enrichment of organic solutes by 'bursting bubble aerosols' is based on the assumption that analyte molecules can adsorb on the bubble surface during its exposure to the sample matrix, and are then released to the headspace via microdroplets ejected from the bubble surface <ns0:ref type='bibr' target='#b5'>(Chingin et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b6'>Chingin et al., 2018)</ns0:ref>. The number of adsorbed molecules can gradually increase as bubbles grow. Moreover, other solutes can expel some of the analytes from the liquid phase, contributing to adsorption of these analytes onto the bubble surface. The so-called 'salting-out effect' is described by the Setschenow equation <ns0:ref type='bibr' target='#b35'>(Setschenow, 1889)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_4'>(6) log(S/S_o) = −K_salt × C_salt</ns0:formula><ns0:p>where S is the solubility of the organic solute in the aqueous solution, S_o is the solubility of the organic solute in pure water, K_salt is the empirical Setschenow constant, and C_salt is the molar concentration of the electrolyte. As the concentration of salt increases, the solubility of the organic solutes in the salt solution decreases due to the electrostatic interactions between the salt ions and water dipoles, which reduce the number of freely available water molecules in the solution <ns0:ref type='bibr' target='#b16'>(Hyde et al., 2017)</ns0:ref>. Hydrophobic interactions are promoted, leading to aggregation of the organic solutes <ns0:ref type='bibr' target='#b39'>(Thomas & Elcock, 2007)</ns0:ref>. Therefore, the organic species distribute near the liquid-air interface, and are concentrated in the headspace by the bursting bubbles. In the case of highly volatile and polar compounds, a notable increase in S/N ratios but no major differences in EFs were observed in the presence of salt (Fig. 
<ns0:ref type='figure'>4D</ns0:ref>). However, no clear trend was observed for the less volatile and less polar compounds. These results suggest that-when a bubble rises up-it preferentially scavenges lowmolecular-weight (here, polar) compounds. An adsorption equilibrium at the bubble surface may be established rapidly when large quantities of low-molecular-weight polar compounds are present near the interface, and only limited space is available for non-polar solutes (with higher molecular weight) to adsorb. Insufficient adsorption space at the bubble surface may be responsible for poor extraction efficiency of less volatile compounds in FE.</ns0:p><ns0:p>The type of carrier gas only had a minor effect on FE. The solubility of the gas greatly influences the amount of bubbles produced during effervescence. Therefore, different MS ion signal intensities are recorded in the extraction step when different gases are used (Fig. <ns0:ref type='figure' target='#fig_3'>S1</ns0:ref>). Moreover, the five tested gases have non-polar character. Other available gases (e.g. with higher polarity) were not considered mainly due to safety reasons. It is imaginable that compounds with non-polar moieties may adsorb on the bubble surface. However, the signals of low-polarity compounds were lower than those of medium-polarity compounds (Fig. <ns0:ref type='figure'>5C</ns0:ref>).</ns0:p><ns0:p>It is also imaginable that the low-molecular-weight (volatile) molecules diffuse into the bubble lumen, or are incorporated into the bubble during bubble growth ('co-precipitation' into the gas phase). Since the applied gases are non-polar, the lower-polarity analytes should, in principle, be preferentially extracted by the bubbles, entering their lumens. However, against intuition, the lowest polarity compounds in the tested set do not show the highest EFs, S/N, and corrected signal intensities (Fig. <ns0:ref type='figure'>5C</ns0:ref>). Furthermore, the diffusion coefficients of the solutes in the solvent and across the solvent-gas interface can alter their transfer to the bubble surface and across the interface. Note that the diffusion coefficients of gases are related to their molecular weight <ns0:ref type='bibr' target='#b24'>(Mason & Kronstadt, 1967)</ns0:ref>. Turbulence-due to stirring and effervescence-can improve the mass transfer <ns0:ref type='bibr' target='#b34'>(Sandoval-Robles, Delmas & Couderc, 1981)</ns0:ref> of solutes to the proximity of liquid-gas interface (bubbles, headspace) but it may not necessarily affect the mass transfer across the liquid-gas boundary around the bubbles. When salt is present in the sample, the organic solutes may partition into this boundary. However, the extraction of the lower-polarity analytes was not improved by the addition of salt (Fig. <ns0:ref type='figure'>4D</ns0:ref>). This may be because of the inefficient mass transfer of big molecules across the liquid-gas interface into the bubble lumens (dominated by diffusion) as well as competition with small molecules (solvents, other analytes). Thus, the efficiency of mass transfer across the liquid-gas interface-rather than chemical similarity (polarity)-may be a more important factor determining extraction rates. 
Although the low-molecular-weight analytes are highly polar, they more readily 'escape' the 'crowded' liquid-phase environment (in favor of the 'dilute' gas-phase (bubble) environment) than the higher-molecular-weight analytes.</ns0:p><ns0:p>Following exposure of the newly formed bubbles to the sample matrix, the rising bubbles reach the surface of liquid-headspace interface. In the case of jet drops, few (< 10) big droplets are formed when the bubble internal cavity collapses <ns0:ref type='bibr' target='#b33'>(Resch, Darrozes & Afeti, 1986)</ns0:ref>. However, in the case of film drops, high numbers (e.g. hundreds, depending on the bubble size) of tiny droplets are produced when the disintegration of the bubble film cap takes place near the liquid-gas interface <ns0:ref type='bibr' target='#b33'>(Resch, Darrozes & Afeti, 1986)</ns0:ref>. It is worth noting that-according to the previous tests <ns0:ref type='bibr' target='#b3'>(Chang & Urban, 2016)</ns0:ref>-no liquid droplets (which might come from stirred sample and numerous bubbles from effervescence) enter the detection system. This implies that all the solvent molecules within the droplets evaporate before they reach the detector. Therefore, only gas-phase molecules are ionized and detected by APCI-MS.</ns0:p><ns0:p>The FE technique involves pressurization of carrier gas in the sample chamber, which leads to pressure increase in the headspace. The released gas-phase molecules are directed to the detector due to the pressure difference between the sample chamber headspace and detection system. Increasing extract transfer tubing length (Fig. <ns0:ref type='figure' target='#fig_4'>2B</ns0:ref>), and emulating slow decompression by pulsing V2 at a low duty cycle (Fig. <ns0:ref type='figure'>S3</ns0:ref>), reveals the influence of gas flow rate on FE performance. Low flow rate of the gaseous extract extends FE duration and increases the exposure of bubbles to the liquid matrix. The temporal profiles of the carbon dioxide flow rate showed that the gas flow rate increases abruptly when depressurization occurs (V2 open), then it drastically decreases within ~ 0.7 s after opening V2 (Fig. <ns0:ref type='figure'>S8</ns0:ref>). On turning on the carrier gas supply (V1 open), the gas flow rate again slightly increases, and finally stabilizes (~ 1.2 s after opening V1). Interestingly, the temporal profile of the initial gas flow rate bears a resemblance to the temporal profiles of the ion currents of some analytes (e.g. EPE, EHP; Fig. <ns0:ref type='figure'>S8 and S5</ns0:ref>, 2 s). It must be pointed out that APCI-MS is a mass-sensitive detector, and the ion signal is proportional to the absolute amount of the analyte molecule entering the detection system in a time unit (sample introduction rate) <ns0:ref type='bibr' target='#b40'>(Urban, 2016;</ns0:ref><ns0:ref type='bibr' target='#b29'>Prabhu, Witek & Urban, 2019)</ns0:ref>. Additionally, a sudden release of considerable amount of gaseous effluents into the ion chamber may lead to a transient carryover effect within the ion source. Extended residence of the large amount of gaseous extract in the ion source can lead to peak tailing. Interestingly, different compounds reveal distinct temporal profiles, and their signals reach maxima at different time points (Fig. <ns0:ref type='figure'>S5</ns0:ref>). This may be because the VOCs with various diffusivities are stratified in the headspace, or the fact that it takes more time to extract less volatile compounds (e.g. 
EN) by the emerging bubbles (Fig. <ns0:ref type='figure' target='#fig_5'>7</ns0:ref>). Nevertheless, with the aid of carrier gas flow (V1 open), larger molecules eventually arrive in the detection system following a short delay (~ 12 s after opening V2).</ns0:p></ns0:div>
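For a rough numerical feel for the Stokes and Setschenow relations used in this discussion (Eqs. 5 and 6), the short Python sketch below compares the rise rate of a microbubble with that of a larger bubble and estimates the salting-out drop in solubility. The water density and viscosity, the bubble diameters, and the Setschenow constant are illustrative values chosen here, not measurements from this study.

```python
def stokes_rise_rate(density_kg_m3, diameter_m, viscosity_Pa_s, g=9.81):
    """Eq. (5): rise rate R = rho * g * D^2 / (18 * mu)."""
    return density_kg_m3 * g * diameter_m ** 2 / (18.0 * viscosity_Pa_s)

def setschenow_solubility(s_pure_water, k_salt, c_salt_mol_L):
    """Eq. (6): S = S_o * 10^(-K_salt * C_salt)."""
    return s_pure_water * 10.0 ** (-k_salt * c_salt_mol_L)

# Water at ~25 degrees C; 50-um microbubble vs 0.5-mm bubble (hypothetical sizes).
for d in (50e-6, 500e-6):
    print(f"D = {d * 1e6:.0f} um -> rise rate ~ {stokes_rise_rate(997.0, d, 8.9e-4):.4f} m/s")

# Hypothetical Setschenow constant of 0.2 L/mol and 1 M salt: solubility drops to ~63 % of S_o.
print(setschenow_solubility(s_pure_water=1.0, k_salt=0.2, c_salt_mol_L=1.0))
```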
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>We have characterized the FE process taking into account various factors related to the instrument, method, sample, and analytes. Some of the tested factors (e.g. diameter and length of the extract transfer tubing, alcohol concentrations, presence of salt) have a significant effect on the extraction performance, while others (e.g. gas type, presence of surfactants and nucleation sites) do not affect it to a great extent. This information provides the basis for boosting extraction efficiency to detect low-abundance analytes. It can also help analysts to predict the occurrence of matrix effects when analyzing real samples. It is proposed that volatile analytes are extracted into bubble lumens. Small molecules more readily diffuse into the bubble lumens, while the diffusion of bigger molecules takes more time. Thus, the analytes with different molecular weights reveal distinct temporal profiles, and are detected at different times. The high amplitudes of the MS signals corresponding to small molecules are attributed to high extraction yields of such species as well as the characteristics of the APCI-MS detection system, which is vulnerable to changes in sample flow rate. Overall, the results delimit the applicability of the FE technique, thus allowing one to predict which analytes and sample matrices are amenable to analysis by this approach.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>A section of soft rubber-like tubing (e.g. silicone tubing, I.D. = 0.9 mm, O.D. = 2.0 mm) is attached to V2 (pinch valve), and it is connected in between a 2-cm section of PTFE tubing (I.D. = 0.3 mm, O.D. = 1.6 mm, cat. no. 58702, Supelco; Sigma-Aldrich) and a 60-cm section of ethylene tetrafluoroethylene (ETFE) extract transfer tubing (I.D. = 1.0 mm, O.D. = 1.6 mm, part no. 1517L; IDEX Health & Science, Lake Forest, IL, USA). During the optimization of extract transfer tubing diameter, various PTFE tubings (I.D. = 0.3 mm, O.D. = 1.6 mm, cat. no. 58702; I.D. = 0.6 mm, O.D. = 1.6 mm, cat. no. 58701; I.D. = 0.8 mm, O.D. = 1.6 mm, cat. no. 58700-U; Supelco; Sigma-Aldrich as well as I.D. = 1.0 mm, O.D. = 1.5 mm, from an unspecified supplier) were used instead of the ETFE tubing. The gaseous extract is transferred from the extraction chamber via those tubing sections (PTFE, soft rubber-like, and ETFE) to the ion source of the mass spectrometer.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>I. Flushing step: Flushing headspace vapors with carrier gas (typically, carbon dioxide) (60 s) (open Valves 1 and 2 (V1 and V2)). II. Saturation step: Pressurizing carrier gas in the extraction chamber (open V1, and close V2). Stirring is applied in this step to assist the dissolution of the carrier gas in the sample matrix (60 s). III. Extraction step: Depressurizing the sample chamber and transferring the gaseous extract to APCI-MS. In stage 1 of this step (close V1, and open V2), the dissolved carrier gas is released from the sample matrix, leading to effervescence. The VOCs are extracted during this short period of time (2 s), and transferred to APCI-MS.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Data processing was done using Excel (version 16.0; Microsoft, Redmond, WA, USA) and Matlab software (version R2017a; MathWorks, Natick, MA, USA). Origin software (version 2018b; OriginLab, Northampton, MA, USA) was used to plot the figures.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,178.87,525.00,377.25' type='bitmap' /></ns0:figure>
</ns0:body>
" | "Response to Reviewers
“On the mechanism of automated fizzy extraction” (modified title)
Manuscript ID: #ACHEM-2019:10:42609:0:1:REVIEW
by Chun-Ming Chang, Hao-Chun Yang, and Pawel L. Urban
We would like to thank the Editor and both Reviewers for their feedback on our manuscript. We have considered all the comments while revising the text. All the alterations are indicated in the tracked manuscript file, and documented below. The line numbers correspond to the previous review PDF file.
-----------------------------
Editor's comments:
Please proceed with major revision by taking into account all comments and suggestions by both reviewers.
Response:
We have revised the manuscript accordingly.
-----------------------------
Reviewer 1
Basic reporting
A fizzy extraction (FE) is a novel liquid-gas separation system with emulsification of aqueous sample by effervescence effect. The pressure-controlled changes in the extraction environment heads to the immediate analyte flow through gaseous headspace to the on-line coupled analyser. In this regard, we may consider it as an pressure-controlled effervescence-assisted emulsification liquid-gas extraction approach.
Response:
We have added the sentence “FE may be regarded as a pressure-controlled effervescence-assisted emulsification liquid-gas extraction approach.” to Introduction (line 45).
As a very recently developed extraction, it deserves attention to be studied and gradually improved if it would have brought better sensitivity, selectivity, robustness and so forth.
Response:
In fact, this report is the 6th in a series of papers about FE. The previous 5 reports demonstrated: (i) the initial concept of FE-APCI-MS (Analytical Chemistry 2016, 88, 8735-8740); (ii) the FE protocol in a visual way (JoVE Journal of Visualized Experiments 2017, e56008); (iii) FE-GC-MS (Analytical and Bioanalytical Chemistry 2019, 411, 2511-2520); (iv) automation of FE-APCI-MS (Heliyon 2019, 5, e01639); and (v) the detailed protocol for making electronic control system for FE (Nature Protocols accepted). The present report focuses on the study of FE mechanism. The following reports (based on the on-going work with different co-authors) will focus on applications of FE in food science, and miniaturization of the FE system.
The mechanism of FE and important analytical parameters, have been studied in this work. The instruments, sample, and chemicals were investigated to eliminate potential interfering agents. The chemical conditions were also optimised to get the best mass spectrometry signal.
I miss to stress more on the automation as it gave refinment in repetitive control of the bubbling and effervescence.
Response:
We have added the following sentence: “Partial automation is critical for repetitive control of the saturation and effervescence steps.” to Introduction (line 53).
(Introduction, Note):
In the very first part of first paragraph, I miss the general starting of how important is the sample preparation when sensitive analysis is to be used. For the record, I would like to give you an example and related citations which significantly contributed in the area. Also there have been important contributions in unattended approaches applied in chemical laboratory. As this field gradually increases in analytical chemistry the also deserve mentioning.
(Introduction, Upgrade):
Sample preparation (whether done in manual or automated manner), which work on the basis of analyte transfer between different phases such as liquid-liquid, solid-liquid, liquid-gas, solid-gas etc., is frequently unavoidable step before analysis [https://www.sciencedirect.com/science/article/pii/S0165993616302916, https://www.sciencedirect.com/science/article/pii/B9780128169117000013, https://www.sciencedirect.com/science/article/pii/S1570023218306214, https://www.sciencedirect.com/science/article/pii/B9780128169063000212]. The one of them is a very recently introduced fizzy extraction (FE) which relies on dissolution of a carrier gas in liquid sample under ...
Response:
Following this suggestion, we have added the following phrases and references to the beginning of Introduction: “Sample preparation—whether performed in manual or automated manner—is frequently an unavoidable step in chemical analysis workflows (Prabhu & Urban, 2017; Alexovič et al., 2018; Poole, 2020; Zheng, 2020). It can rely on analyte transfer between different phases in liquid-liquid, solid-liquid, liquid-gas, or solid-gas extraction systems. One of the available approaches is the recently introduced fizzy extraction (FE) approach, which relies on dissolution of a carrier gas in liquid sample under…” (line 36).
Experimental design
Again, in this manuscript, would that do not be better to state and discuss more as it is automated method?
For example in title: On the mechanism of automated fizzy extraction?
Response:
We have revised the title accordingly.
It is known the extraciton automation has brought many advantages and solely thanks to instrumental performance the analytical parameters improved.
The program controlled pressure changes can head to a repetitive bubbling manner, in your extraction chamber.
Response:
We have stressed this point in the following sentence: “The program-controlled pressure changes lead to repeatable bubbling in the extraction chamber.” (line 83).
Validity of the findings
The mechanism of FE is exhaustively studied with necessary parameters impacting the extraction efficacy.
However, I do not see using any aqueous real sample analysis. I would say, it would be positive to validate the method applicability in doing so.
Response:
Please note that—in this particular study—we focused exclusively on the investigation of FE mechanism. In the previous reports on FE, we demonstrated the possibility to analyze several real matrices, and provided validation data. In one of the next reports (with a different set of co-authors), which will be focused on applications, we are also going to provide data from analyses of numerous real samples.
Comments for the Author
Authors present a fizzy extraction for the volatile compounds analysis.
The system is modified in comparsion with prior fizzy extractions.
The novelty are clearly declared as detail study of FE mechanism.
Response:
Thank you.
-----------------------------
Reviewer 2
Basic reporting
The authors of the manuscript 'On the mechanism of fizzy extraction' have presented a systematic study on fizzy extraction in order to shed light on its underlying mechanism. Although, fizzy extraction is a variant of conventional 'purge and trap', the new technique is lot faster than 'purge and trap' and therefore possesses good potential in modern high throughput laboratories. The manuscript is well written and properly presented. The references look okay. The rational provided to validate the hypothesis is scientifically fair.
Response:
Thank you.
Experimental design
Experimental design is okay except that the authors missed to evaluate the impact of pressure more systematically on the solubility of carrier gas and the correlation between the soluble carrier gas concentration and the EF values.
Response:
Thank you for pointing this out. We have not included this experiment in the current study because we had performed it earlier, and discussed it (including gas dissolution) in one of the previous reports (Analytical Chemistry 2016, 88, 8735-8740). Following this comment, we have recalled that result, and discussed it further in the narrative text (line 339): “Previously, we showed that increasing carbon dioxide pressure (0-150 kPa) during saturation leads to increased analyte peak areas during FE (Chang & Urban, 2016). This effect can be attributed to the greater number of microbubbles formed when the concentration of the dissolved gas is higher. However, due to practical and safety-related issues, we did not further utilize higher pressures than 150 kPa.”.
Validity of the findings
The presented results are valid. However, the schematic presented in Figure 6 needs additional explanation in the text. It is unclear as to why a polar or hydrophilic compound would move into the bubbles?
Comments for the Author
Although the authors tried to explain the mechanism of fizzy extraction, some additional factors need to be evaluated such as dependence of EF on the soluble gas concentration. The mechanism or hypothesis as to why a polar compound would move to bubbles needs further explanation.
Response:
Please note that the most polar and hydrophilic compounds are also those that have the smallest molecular weights and highest diffusivities. Thus, the mass transfer of such molecules toward the bubbles and across the liquid-gas interface is fastest. The average inter-molecular distance in the gas phase is large. Hence, the polarity of the gas and analyte-extractant interactions play little role in determining analyte transfer from the liquid to the gas phase. This explanation is summarized in the sentence: “Thus, the efficiency of mass transfer across the liquid-gas interface—rather than chemical similarity—may be a more important factor determining extraction rates.” (line 420). We have further added explanatory notes to the paragraph starting in lines 405-422, e.g. “Although the low-molecular-weight analytes are highly polar, they more readily “escape” the “crowded” liquid-phase environment (in favor of the “dilute” gas-phase (bubble) environment) than the higher-molecular-weight analytes.”.
" | Here is a paper. Please give your review comments after reading it. |
675 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Oligodeoxynucleotides (ODNs) are typically purified and analysed with HPLC equipped with a UV-Vis detector. Quantities of ODNs are usually determined using a UV-Vis spectrometer separately after HPLC, and are reported as optical density at 260 nm (OD 260 ). Here, we describe a method for direct determination of OD 260 of ODNs using the area of the peaks in HPLC profiles. It is expected that the method will save significant time for researchers in the area of nucleic acid research, and minimize the loss of oligonucleotide samples.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Oligodeoxynucleotides (ODNs) synthesized on an automated synthesizer are usually purified using reversed-phase (RP) or ion-exchange HPLC. <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref><ns0:ref type='bibr' target='#b1'>[2]</ns0:ref> Their purity is also usually determined using HPLC. For both preparative and analytical HPLC, the elution profile is mostly generated using a UV-Vis detector with the wavelength set to 260 nm. Quantities of ODNs are typically documented using optical density at 260 nm, which is abbreviated as OD 260 . It is defined as the UV absorbance at 260 nm (A 260 ) of the ODN to be quantified dissolved in 1 mL of water with a light path of 1 centimetre. The value of OD 260 is usually determined separately, after an ODN is purified by HPLC, using a UV spectrometer or other UV-based apparatus such as a NanoDrop. Because the peak area in an HPLC profile is a quantitative measure of the UV absorbance of the ODN eluted within the peak, we reasoned that a separate step for the determination of OD 260 using a UV spectrometer, as we usually do, is unnecessary, and that the OD 260 value can be determined directly using the peak area in the HPLC profile. Here, we describe the establishment of a correlation curve between HPLC peak areas and OD 260 values and demonstrate its use for the determination of OD 260 using HPLC peak area without having to measure UV absorbance separately.</ns0:p></ns0:div><ns0:div><ns0:head>Materials and Methods</ns0:head><ns0:p>ODNs 1b (21-mer, 5'-TTG CCA TGA TTG ACA ACC AAT-3') and 1c (32-mer, 5'-TAG TTT TAT AAT TTC ATC AGC AGT GTT ACC GT-3') were either obtained from a commercial source or synthesized on a MerMade-6 DNA/RNA synthesizer at 1 (1a) or 0.2 (1b-c) µmol scale under standard synthesis, cleavage and deprotection conditions. <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> RP HPLC was carried out under typical conditions described elsewhere. <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref> UV absorbance was obtained on a Horiba Scientific Duetta Fluorescence and Absorbance Spectrometer at 260 nm in a 1 mL quartz cuvette with a 1 cm light path. Water was used as the blank.</ns0:p><ns0:p>For establishing the correlation curve of OD 260 vs HPLC peak area (Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>; see the supporting information for a protocol for establishing the curve), ODN 1a, which was synthesized at 1 µmol scale and purified with trityl-on RP HPLC, was dissolved in 1 mL water. Different volumes of the solution (see Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>) were injected into HPLC. For each injection, the peak area of the ODN was recorded, and the fractions corresponding to the peak were collected, combined and concentrated to dryness in a centrifuge evaporator under vacuum. The ODN was then dissolved in 1 mL water. UV absorbance at 260 nm, which is the OD 260 of the ODN in the corresponding HPLC peak, was obtained on a UV-Vis spectrometer using the solution in a 1 mL cuvette with a 1 cm light path (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>). The correlation curve of the OD 260 values vs the HPLC peak areas for the injections was generated and is presented in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. The slope of the line is 0.00033.</ns0:p><ns0:p>To demonstrate the use of the curve for the determination of OD 260 from HPLC peak area, ODN 1b was synthesized at 0.2 µmol scale on CPG, cleaved, deprotected, and purified with trityl-on RP HPLC. One twentieth of the ODN was injected into HPLC.
The area of the ODN peak was found to be 498.3, which corresponds to an OD 260 of 0.164 according to the line obtained using the UV-Vis spectrometer in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. Thus, the OD 260 for the 0.2 µmol ODN synthesis was 3.29. To validate the result, the OD 260 of the synthesis was also determined using the standard method by measuring the absorbance at 260 nm on a UV-Vis spectrometer. The number obtained was 3.31. The use of the curve was further validated using ODN 1c following the same procedure as for 1b. The OD 260 values calculated from the line obtained using the UV-Vis spectrometer and measured directly on a UV-Vis spectrometer were 7.42 and 7.10, respectively.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results and Discussion</ns0:head><ns0:p>Oligonucleotides including oligodeoxynucleotides (ODNs) are usually quantified by measuring UV absorbance at 260 nm (OD 260 ) in a separate step after HPLC purification or analysis. Once the value of OD 260 of an oligonucleotide is obtained, its mass in micrograms or micromoles can be easily calculated based on its sequence. Such calculations are usually carried out using free online tools by simply inputting the sequence and the OD 260 value. In this report, we describe a method to determine OD 260 directly from the area of the HPLC peak instead of obtaining the value in a separate step.</ns0:p><ns0:p>To use HPLC peak area to determine OD 260 , a correlation curve between OD 260 and HPLC peak area needs to be established first (see supporting information for a protocol). ODN 1a is used to demonstrate the process. A solution of 1a from a 1 µmol synthesis was prepared (see Materials and Methods section). The concentration does not need to be known or accurate, but should be suitable for the generation of HPLC peaks with areas close to those in HPLC profiles a lab typically generates for ODN purification and analysis. For the case of 1a, the ODN from the 1 µmol synthesis was dissolved in 1 mL water. Various volumes as indicated in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> were injected into HPLC. For each injection, the ODN under the correct peak was collected and the area of the same peak was recorded (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>). The ODN was evaporated to dryness, and dissolved in 1 mL of water. The value of OD 260 was then obtained by measuring the absorbance using a UV spectrometer. Plotting the OD 260 numbers against peak areas gave the required correlation curve (Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>). As expected, the data fitted well to a straight line with a slope of 0.00033.</ns0:p><ns0:p>Once the correlation curve is obtained, the OD 260 for any ODN that has an HPLC profile can be easily calculated. The ODN 1b, which was synthesized at a 0.2 µmol scale, is used as an example. One twentieth of the crude ODN was injected into HPLC. The area of the ODN peak was found to be 498.3. Using the correlation line generated with the data from the UV-Vis spectrometer in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>, the area corresponds to an OD 260 of 0.164. Alternatively, the OD 260 can be simply calculated using the slope of the line, which gave the same number (498.3 × 0.00033). With the number for a portion of the sample, the OD 260 for the 0.2 µmol synthesis can be easily calculated, which is 3.29 (0.1644 × 20). To validate the result, the OD 260 of the synthesis was measured in a standard way using a UV-Vis spectrometer, and the value was 3.31, which was close to the value calculated from the graph or the slope. The method was further validated using ODN 1c using the same procedure as for 1b, and the numbers from the graph and from standard measurement were 7.10 and 7.42, respectively.</ns0:p><ns0:p>Although the method for the determination of OD 260 is simple and convenient, it is recommended that the validity of the standard curve or slope be checked whenever the HPLC conditions such as column, eluents, gradient, flow rate and temperature are changed. In addition, if the flow cell of the UV detector is replaced or cleaned, a new curve should be generated.
Gratifyingly, in most labs HPLC is typically performed under consistent conditions and the UV detector flow cell can last for many years, so there is no need to validate the graph frequently. Another suggestion is that when the peak area of an ODN falls outside the range of areas used to generate the correlation graph, caution is needed in using the graph to calculate its OD 260, because the correlation may not be linear when the concentration of the ODN in the eluent is too high. We suggest using the HPLC peak area to determine OD 260 only when the concentration in the eluent is not too high and falls within the linear range of the curve. Finally, it is important to point out that the usually long and thick waste eluent line connecting the UV detector flow cell to the waste container needs to be removed when collecting fractions of eluent to generate the OD 260-HPLC peak area correlation graph. If this is not done, the fractions collected may not correspond to the intended HPLC peak.</ns0:p></ns0:div>
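A minimal Python/NumPy sketch of the calculation described above, using the calibration data from Table 1: a line forced through the origin reproduces the reported slope of about 0.00033, and scaling by the injected fraction gives the OD 260 of the whole synthesis. The choice of a through-origin fit and the function names are ours; an ordinary least-squares fit with an intercept would be an equally reasonable alternative.

```python
import numpy as np

# Peak areas and OD260 values for the ODN 1a injections (Table 1).
peak_area = np.array([485.0, 781.3, 1113.1, 1497.6, 1925.1, 2408.3])
od260 = np.array([0.176, 0.269, 0.395, 0.496, 0.616, 0.778])

# Least-squares slope of a line through the origin (~0.00033 OD260 per area unit).
slope = np.sum(peak_area * od260) / np.sum(peak_area ** 2)

def od260_from_peak_area(area, injected_fraction=1.0):
    """OD260 of the whole sample, given an HPLC peak area and the fraction injected."""
    return slope * area / injected_fraction

# ODN 1b: one twentieth injected, peak area 498.3 -> about 3.3 OD260 for the whole synthesis.
print(round(slope, 6))
print(round(od260_from_peak_area(498.3, injected_fraction=1 / 20), 2))
```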
<ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>In summary, a simple and convenient method for the determination of OD 260 of oligonucleotides using HPLC peak area is described. Although only quantification of ODNs is demonstrated, the same approach is expected to be applicable to other oligonucleotides.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The ODN 1a synthesized at 1 µmol scale was purified with trityl-on HPLC, and dissolved in 1 mL water. Different volumes were injected into HPLC. The peak areas were recorded, and the fractions corresponding to the peak areas were collected. The fractions were evaporated to dryness and dissolved in 1 mL water. The values of UV absorbance at 260 nm (OD 260) were then obtained in a 1 mL cuvette with a 1 cm light path. The listed values were the average of three measurements.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 (on next page)</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>HPLC peak areas and their corresponding OD 260 values.</ns0:figDesc><ns0:table /><ns0:note>a a</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>HPLC peak areas and their corresponding OD 260 values. a</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Entry</ns0:cell><ns0:cell>Volume (µL)</ns0:cell><ns0:cell>HPLC peak area</ns0:cell><ns0:cell>OD 260</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>4.0</ns0:cell><ns0:cell>485.0</ns0:cell><ns0:cell>0.176</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>6.0</ns0:cell><ns0:cell>781.3</ns0:cell><ns0:cell>0.269</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>9.0</ns0:cell><ns0:cell>1113.1</ns0:cell><ns0:cell>0.395</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>12.0</ns0:cell><ns0:cell>1497.6</ns0:cell><ns0:cell>0.496</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>15.0</ns0:cell><ns0:cell>1925.1</ns0:cell><ns0:cell>0.616</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>20.0</ns0:cell><ns0:cell>2408.3</ns0:cell><ns0:cell>0.778</ns0:cell></ns0:row><ns0:row><ns0:cell>a</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Department of Chemistry
http://www.chemistry.mtu.edu
Dr. Shiyue Fang, Professor
Michigan Technological University
1400 Townsend Drive
Houghton, Michigan 49931-1295
Tel 906/487-2023 • Fax 906/487-2061
Email: [email protected]
http://www.mtu.edu/chemistry/department/faculty/fang/
June 16, 2022
Manuscript Title: Determination of optical density (OD) of oligodeoxynucleotide from HPLC peak area
Authors: Komal Chillar, Yipeng Yin, Dhananjani N. A. M. Eriyagama and Shiyue Fang
Dear Editor:
Thank you and the reviewers for evaluating the manuscript mentioned above for publication in PeerJ
Analytical Chemistry. We are pleased to submit the revised version of the manuscript for your consideration.
The following is our response to reviewer comments.
Comments: If they need to construct standard curve very often to get the accurate data, I did not see how this
method will save significant time for researchers.
Responses: The language of the last paragraph in the Results and Discussion section is adjusted so that it
does not give readers an impression that there is a need to construct the curve often. In addition, a protocol
for constructing a standard curve is provided in the revised version, which will make the work easier.
Comments: They should provide the original HPLC raw data. In addition, in Table 1 and Figure 1, they only
performed once for each different sample. They should at least run triplicate samples to validate the
repeatability.
Responses: HPLC data included. Each datum point is now an average of three measurements.
Comments: For Figure 1, they should provide the standard curve equation. More importantly, they should
expand their range of standard curves since they used the current standard curve to calculate the OD260 for
ODN 1b. Obviously, the peak area of 1690.6 is out of the detection range.
Responses: Equation is provided in Figure 1. We measured ODN 1b again so that it is within the range of the
curve.
Comments: To validate their method, they synthesize ODN 1b and 1c and compare the data obtained through
the HPLC peak area and the data by measuring the absorbance at 260 nm on a UV-Vis spectrometer. While
they only tested one sample for each ODN. They should at least test three samples for each by synthesizing
at different scales. In addition, they should also calculate the accuracy to check whether they are in an
acceptable range.
Responses: We’d rather not make things so complicated because this will make people less likely to use the
method. In most cases, when people measure OD260 of ODN for their work using traditional methods, they
do not prepare three samples and measure three times. Using the curve method is actually more accurate than what people typically do because the curve itself is an average of several measurements. This is especially true
when the ODN sample is dilute.
Comments: They claimed that by skipping the UV-Vis spectrometer measurement, they could be able to
minimize the loss of oligonucleotide samples. While the volume of samples needed for Nanodrop is around
1-2 µL. Is this really significant to save this small amount of samples by risking the results may not be that
accurate due to the poor quality of the standard curve that was caused by HPLC condition changes? They
should explain more here.
Responses: The greatest value of the new method is that even if students forget to obtain OD of an oligo,
they still have a way to determine it using the HPLC peak area as long as they have the profile, which they
usually do. In addition, traditional Nanodrop measurements are not very accurate, and they still take time even when sample quantity is not a problem. The standard curve method is actually more accurate than traditional methods because the
curve is from several data points.
Comments: Referencing is not appropriate and some original papers on DNA cleavage and purification needs
to be cited. Currently, the authors only cited their own previous works.
Responses: We considered DNA synthesis, cleavage and purification to be a very mature area, and citing the
procedures we are most familiar with seemed simplest. However, several additional papers from other
authors have been added.
Comments: Authors used water as eluent and therefore blank in their measurement. However, for many post
synthesis modifications, purification and quantification is done with gradient flow of organic and aqueous
solutions. How this method can be adopted to these cases? Authors need to show method applicability and
accuracy under such conditions.
Responses: This is a good point. Fortunately, the fractions under the HPLC peak consist mostly of water and
contain very little acetonitrile. Moreover, the data for the two test samples indicate that the curve method works.
Comments: The authors did not specify the mathematic approach they used to calculate the peak area and
how accurate the calculation could be, which is quite important to the theory they claimed here.
Responses: The peak area is calculated using standard software. The specific mathematical approach should
have minimal effect as long as the peak areas of the samples used to generate the curve and of the samples to be
measured are determined with the same method.
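As an illustration of why the particular integration routine matters little as long as it is applied consistently, the peak area is simply the numerical integral of detector intensity over time; a plain trapezoidal version (hypothetical arrays, not the instrument software actually used) is sketched below:

    import numpy as np

    # Hypothetical chromatogram segment containing one peak:
    # retention time in minutes and detector response in arbitrary units.
    time = np.linspace(10.0, 11.0, 61)                              # 1 s spacing
    intensity = 5000.0 * np.exp(-0.5 * ((time - 10.5) / 0.05) ** 2)

    # Subtract a straight baseline drawn between the segment end points,
    # then integrate with the trapezoidal rule.
    baseline = np.linspace(intensity[0], intensity[-1], time.size)
    peak_area = np.trapz(intensity - baseline, time)

    print(f"Peak area: {peak_area:.1f} (intensity x min)")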
Comments: The linear range of this method was not determined. Besides, the author claimed the
concentration of the targeted ODNs is too high, the correlation between the peak area and the absorbance at
260 nm may not be linear and a simple solution to that was to collect fraction of the peak and add the datum
to the graph. In this case, how accurate the graph can be used to determine the concentration of targeted
ODNs with the peak area value in the nonlinear range?
Responses: We removed the data points that are not in the linear range. We found that when the HPLC peak
is not sharp (i.e., saturated), the data fall outside the linear range. In the paper, we now suggest using the
method only when the concentration is within the linear range.
Comments: There was only one data point for each injection volume and peak area included in Table 1 and
Figure 1, and no error bars were included in Figure 1. More replicates need to be added to prove the
reproducibility and consistence of the method.
Responses: Each data point is now an average of three measurements. Standard deviations are given in the raw data
document.
Comments: The correlation coefficient (such as R2) of the two lines needs to be added in Figure 1.
Responses: It is given now. We removed the line from the Nanodrop data for clarity. Also, Nanodrop data
are not accurate in our hands and on our instrument.
Comments: The author mentioned the correlation graph between the peak area and the absorbance needed to
be checked occasionally, especially when the column, eluents, or flow rate were changed. How long one
correlation graph could be consistent to determine the absorbance of ODNs with the peak area? If all the
parameters for the purification process were the same, yet the correlation graph could change, maybe due to
day-to-day or sample-to-sample variations, would it be easier to measure the absorbance after drying the
sample with UV-Vis spectroscopy?
Responses: For most labs, HPLC is routine and conditions are not changed at all (unless projects are aimed
to develop new HPLC analysis methods). As nucleic acid chemists, we use the same HPLC conditions throughout.
In our experience, the data do not change much due to day-to-day or sample-to-sample variation,
which is reasonable because the measurement is simply a UV absorbance reading in the flow cell of the detector,
just as it would be in the cuvette of a traditional UV spectrometer.
Comments: It was indicated for ODN 1b and 1c, the absorbances at 260 nm obtained from the HPLC peak
area and the UV spectrometer were quite different (1b is 12.16 and 11.20, while 1c is 16.92 and 15.40,
respectively). The authors claimed it was possibly due to the loss between the transfer from HPLC eluent to
UV measurements. What is the specific reason for the loss? Besides, does it mean the concentration of
targeted ODNs obtained from the peak area would be different from the real final concentration since there
was a loss in the process of solvent drying or centrifuge evaporation? How big the difference would be, and
which result people should rely on?
Responses: We repeated the measurements and paid closer attention to sample loss. The new data are
given in the paper, and the language has been changed accordingly.
Comments: Proper references need to be added to the introduction section.
Responses: Added several more references.
Comments: The word “intuitively” from the last paragraph of the result section is not proper for a scientific
article. Is there any specific reason or evidence that the fractions may not be the corresponding ones if
without removal of the eluent collection tube?
Responses: The word has been removed and the language has been adjusted. Yes, there is a reason. The
waste line is much thicker than the lines earlier in the flow path. Once the ODN reaches there, it will diffuse
into a larger volume of eluent.
Sincerely,
Dr. Shiyue Fang
" | Here is a paper. Please give your review comments after reading it. |
676 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Lipids are an integral part of cellular membranes that allow cells to alter stiffness, permeability, and curvature. Among the diversity of lipids, phosphonolipids uniquely contain a phosphonate bond between carbon and phosphorus. Despite this distinctive biochemical characteristic, few studies have explored the biological role of phosphonolipids, although a protective function has been inferred based on chemical and biological stability. We analyzed two species of marine mollusks, the blue mussel Mytilus edulis and the Pacific oyster Crassostrea gigas, and determined the diversity of their phosphonolipids and their distribution in different organs. High-resolution spatial metabolomics revealed that the lipidome varies significantly between tissues within one organ. Despite their chemical similarity, we observed a high heterogeneity of phosphonolipid distributions that originated from minor structural differences. Some phosphonolipids are ubiquitously distributed, while others are present almost exclusively in the layer of ciliated epithelial cells. This distinct localization of certain phosphonolipids in tissues exposed to the environment could support the hypothesis of a protective function in mollusks. This study highlights that the tissue-specific distribution of an individual metabolite can be a valuable tool for inferring its function and guiding functional analyses.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Lipids are a diverse class of biomolecules universally present across all kingdoms of life. They are building blocks of cellular membranes, store chemical energy, and are crucial effector molecules of the cell cycle, for instance by inducing proliferation and inhibition of apoptosis <ns0:ref type='bibr' target='#b23'>(Hoeferlin et al. 2013)</ns0:ref>. The physio-chemical properties and the localization of a given lipid within a cell depend on the lipid's molecular structure, but a lipid's biological function cannot be predicted from chemical structure alone. Some lipids, like phosphatidylcholines (PC) and phosphatidylethanolamines (PE), are ubiquitous components in eukaryotic membranes, whereas other lipids are specific to individual species and their lifestyles <ns0:ref type='bibr' target='#b7'>(Corcelli et al. 2004;</ns0:ref><ns0:ref type='bibr' target='#b9'>Dembitsky & Levitsky 2004;</ns0:ref><ns0:ref type='bibr' target='#b66'>van Meer et al. 2008)</ns0:ref>. The diversity of lipids with common headgroups like PCs and PEs is well represented in in current metabolite databases (e.g. LIPIDMAPS <ns0:ref type='bibr' target='#b15'>(Fahy et al. 2007;</ns0:ref><ns0:ref type='bibr' target='#b63'>Sud M. 2006</ns0:ref>)(accessed November 23 rd 2021).</ns0:p><ns0:p>In contrast lipids with uncommon headgroups such as arseno-, phosphono-or sulfo-lipids have often been overlooked in standard lipidomics workflows. The major difficulty in analyzing these less common lipids stems from an underrepresentation of their chemical diversity in databases used for metabolite annotation.</ns0:p><ns0:p>Analyzing lipid inventories with liquid chromatography mass spectrometry (LC-MS) on extracted tissue samples is an established lipidomics approach <ns0:ref type='bibr' target='#b42'>(Long et al. 2020)</ns0:ref>. While highly sensitive, LC-MS studies reveal averaged lipid profiles of homogenized samples, so differentiating celland tissue-specific metabolomes remains challenging. However, determining these locationspecific metabolomes can be critical when defining the metabolic phenotype and revealing the function of individual cell types of an organ <ns0:ref type='bibr' target='#b19'>(Guo et al. 2021)</ns0:ref>. Spatial metabolomics, enabled by techniques such as matrix-assisted laser desorption ionization mass spectrometry imaging (MALDI-MSI), can locate metabolites in tissues <ns0:ref type='bibr' target='#b58'>(Rappez et al. 2021</ns0:ref>) and complement our knowledge on the functions of lipids and other metabolites <ns0:ref type='bibr' target='#b13'>(Ellis et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b17'>Geier et al. 2020;</ns0:ref><ns0:ref type='bibr' target='#b41'>Liebeke et al. 2015)</ns0:ref>. These technologies recently reached single cell resolution; it is now possible to link individual cell types to their characteristic lipidome <ns0:ref type='bibr' target='#b50'>(Niehaus et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b58'>Rappez et al. 2021</ns0:ref>) and detect metabolic heterogeneity even within the same cell type <ns0:ref type='bibr' target='#b17'>(Geier et al. 2020;</ns0:ref><ns0:ref type='bibr' target='#b54'>Prade et al. 
2020)</ns0:ref>.</ns0:p><ns0:p>While modern mass spectrometry (MS) methods can detect hundreds of signals from one sample, their annotation often follows an automated approach using a variety of tools <ns0:ref type='bibr' target='#b43'>(Misra 2021</ns0:ref>) that rely on metabolite databases such as HMDB or LIPIDMAPS <ns0:ref type='bibr' target='#b15'>(Fahy et al. 2007;</ns0:ref><ns0:ref type='bibr' target='#b73'>Wishart et al. 2018)</ns0:ref>. Still, the majority of peaks remains unidentified in MS experiments, and has been called 'dark metabolome' <ns0:ref type='bibr' target='#b8'>(da Silva et al. 2015)</ns0:ref>. In addition to classic compound databases, combinatorial chemistry allows for the identification of new structures and exploit small modifications of known metabolites, such as the degree of saturation or the chain length of a fatty acid, to elucidate the dark metabolome through dereplication <ns0:ref type='bibr'>(Wang et al. 2016)</ns0:ref>.</ns0:p><ns0:p>Despite those improvements, entire known lipid classes are still underrepresented in common metabolite/lipid databases <ns0:ref type='bibr' target='#b1'>(Aimo et al. 2015;</ns0:ref><ns0:ref type='bibr' target='#b15'>Fahy et al. 2007;</ns0:ref><ns0:ref type='bibr' target='#b63'>Sud M. 2006</ns0:ref>) and missed by database-dependent annotation workflows. Among these overlooked lipids, phosphonolipids are a class of lipids characterized by a carbonphosphorous bond, i.e. a phosphonate moiety, in their polar headgroup. The phosphonate headgroup is linked to a typical fatty acid backbone. For all known phospholipids there is a potential phosphonolipid analog. For example, replacing the phosphoethanolamine (PE) or phosphocholine (PC) headgroup with phosphonoethanolamine or phosphonocholine would produce the corresponding phosphonolipids (PnE and PnC). In the same fashion, the phosphonolipid analogs of ceramides (PnE-Cer) combine a sphingolipid base with phosphonoethanolamine. PnE, PnC and PnE-Cer have been found in across phyla, from bacteria to eukaryotes like the unicellular ciliate Tetrahymena pyriformis or the parasitic Trypanosoma cruzi, and even multicellular organisms including bivalves, vertebrates such as mammals and birds (C. <ns0:ref type='bibr' target='#b5'>Moschidis 1984;</ns0:ref><ns0:ref type='bibr' target='#b16'>Ferguson et al. 1982;</ns0:ref><ns0:ref type='bibr' target='#b31'>Keck et al. 2011;</ns0:ref><ns0:ref type='bibr' target='#b62'>Smith et al. 1970;</ns0:ref><ns0:ref type='bibr' target='#b65'>Tamari & Kametaka 1972)</ns0:ref>.</ns0:p><ns0:p>Phosphonates are abundant especially in the marine environment <ns0:ref type='bibr' target='#b36'>(Kolowith et al. 2001)</ns0:ref>, and phosphonolipids specifically have previously been detected in marine invertebrates <ns0:ref type='bibr' target='#b25'>(Imbs et al. 2021)</ns0:ref>. In bivalves, including species of Mytilus, Crassostrea and Bathymodiolus, phosphonolipids are highly abundant and have been used as tissue biomarkers <ns0:ref type='bibr' target='#b24'>(Hori et al. 1967;</ns0:ref><ns0:ref type='bibr' target='#b33'>Kellermann et al. 2012;</ns0:ref><ns0:ref type='bibr' target='#b61'>Sampugna et al. 1972)</ns0:ref>.</ns0:p><ns0:p>The phosphono-ceramides have been reported primarily in numerous species of marine invertebrates and seem to be common in this group of animals <ns0:ref type='bibr' target='#b24'>(Hori et al. 1967;</ns0:ref><ns0:ref type='bibr' target='#b49'>Mukhamedova & Glushenkova 2000)</ns0:ref>. 
The abundance of this lipid class underlines its importance -it was found to be one of the three major classes of phosphorous-containing lipids, next to PC and PE lipids, in a study of 32 mollusks <ns0:ref type='bibr' target='#b38'>(Kostetsky & Velansky 2009)</ns0:ref>. However, phosphonolipids cannot be assigned exclusively to one biological clade, all three mentioned lipid classes, PnE, PnC and PnE-Cer are also described in the insect Cicada oni <ns0:ref type='bibr'>(Moschidis 1987)</ns0:ref>.</ns0:p><ns0:p>Environmental and seasonal effects have previously been shown to influence the phosphonolipid content and composition in different species. A decrease in temperature led to an increase of phosphonolipids in cultured Tetrahymena pyriformis <ns0:ref type='bibr' target='#b22'>(Hirobumi et al. 1976)</ns0:ref>.</ns0:p><ns0:p>Oysters (Crassostea virginica) showed an increased relative abundance of phosphonolipids over phospholipids at the end of their reproductive cycle which could be reproduced by starving oysters in the laboratory <ns0:ref type='bibr' target='#b64'>(Swift 1977)</ns0:ref>. The selective conservation of phosphonolipids over phospholipids points towards an important function for the animal. Despite their abundance in nature, the biological role of phosphonolipids is poorly understood, though a protective function has been suggested <ns0:ref type='bibr' target='#b29'>(Kariotoglou & Mastronicolis 1998)</ns0:ref>.</ns0:p><ns0:p>Specifically the incorporation of phosphonate moieties in cell-surface structures has been suggested as a protective feature <ns0:ref type='bibr' target='#b0'>(Acker et al. 2022;</ns0:ref><ns0:ref type='bibr' target='#b71'>White & Metcalf 2007)</ns0:ref>. Phosphonates in general are resistant to abiotic hydrolysis by low pH and withstand boiling in concentrated acid <ns0:ref type='bibr' target='#b65'>(Tamari & Kametaka 1972)</ns0:ref>. Phosphonates can be potent inhibitors of enzymes while resisting hydrolysis because they are structurally similar to phosphate esters. Phosphonolipids in particular are resistant to hydrolysis by phospholipase enzymes (Kafarski 2019) and therefore their catabolism requires phosphonate-specific enzymes. This chemical and biological stability could make phosphonolipids well suited as protective lipids against harmful environmental abiotic factors, such as heat and pH changes, and biotic factors, as most marine bacteria lack the enzymatic machinery to degrade phosphonates <ns0:ref type='bibr' target='#b68'>(Villarreal-Chiu et al. 2012)</ns0:ref>.</ns0:p><ns0:p>As filter feeders, mussels can pump tens of liters of water through their gills and body cavity per day <ns0:ref type='bibr' target='#b59'>(Riisgård et al. 2011)</ns0:ref>. This feeding mechanism, unique to aquatic environments, exposes their epithelia to diverse microbes, including pathogens, and toxic metabolites <ns0:ref type='bibr' target='#b12'>(Eggermont et al. 2017)</ns0:ref>. Phosphonolipids may improve the barrier function of mollusk tissues exposed to the environment. The accumulation of one such phosphonate lipid in epithelia exposed to the environment has recently been demonstrated in deep-sea mussels. High resolution spatial metabolomics revealed that one phosphonolipid is abundant in ciliated epithelial cells of the animal's gills, but absent from non-ciliated neighboring cells that harbor bacterial symbionts <ns0:ref type='bibr' target='#b17'>(Geier et al. 2020)</ns0:ref>. The stark contrast between the phosphonolipid's abundance in ciliated vs. 
colonized non-ciliated cells led us to the hypothesis that in mollusks phosphonolipids are enriched or even confined to ciliated epithelia, where they serve a protective role.</ns0:p><ns0:p>To investigate the hypothesis that phosphonolipids have a protective role we analyzed lipid profiles of tissue samples from two species of marine bivalves, Mytilus edulis and Crassostrea gigas. Both Mytilus and Crassostrea species, better known as blue mussels and oysters, represent globally important fishery resources. We sampled gill and mantle tissue from both species, as well as the foot of M. edulis. These organs were chosen because they are outlined by a ciliated epithelial layer that is constantly in direct contact with the environment. Lipid extracts of these organs were screened by LC-MS and phosphonolipids were identified with MS 2 . The spatial distribution of phosphonolipids in tissue sections was imaged and resolved to a pixel size of around 10 µm using high resolution atmospheric pressure scanning microprobe MALDI-MSI (AP-SMALDI-MSI) <ns0:ref type='bibr' target='#b60'>(Römpp & Spengler 2013)</ns0:ref>, which allowed us to locate lipids even within epithelial monolayers. Correlating spatial metabolomics data with optical microscopy on consecutive tissue sections subjected to histological staining allowed a clear co-localization of phosphonolipids with distinctly stained cell populations in gill, mantle and foot. MSI revealed that while some phosphonolipids are ubiquitously distributed, others are present only in the ciliated epithelial cells.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Animals</ns0:head><ns0:p>Live Mytilus edulis and Crassostrea gigas were purchased at a local store as live animals imported from France. Specimens were transported to the lab on ice to anesthetize the animals before the organs were dissected and separated for lipid extraction. Additional parts were cryoembedded for mass spectrometry imaging and histology.</ns0:p></ns0:div>
<ns0:div><ns0:head>Lipid Extraction</ns0:head><ns0:p>Lipids were extracted from mussel tissues by a modified Bligh & Dyer protocol <ns0:ref type='bibr' target='#b3'>(Bligh & Dyer 1959)</ns0:ref>. Small pieces of mussel organs (~50 mg) were submerged in methanol (8 µl * mg -1 tissue) and subjected to mechanical lysis with silica beads (SiLiBeads Ceramic Beads Type ZY-S 1.1 -1.2 mm diameter, Sigmund Lindner GmbH) in a bead-beating device (Fast-prep-24-5G, MP Biomedicals) for two 10 s bursts at 6.5 m * s -1 . The homogenized tissues were transferred into a 3 mL exetainer with chloroform (8 µl * mg -1 tissue). The exetainers were vortexed for 15 s before HPLC-grade water (7.2 µl * mg -1 tissue) was added and vortexed again for 30 s. Phase separation was allowed for 10 minutes. Cell debris was pelleted by a 10 minutes centrifugation step (4 °C, 2500 x g). The lipid fraction (organic solvent, lower phase) was transferred to a HPLC-MS vial via glass syringe. For analysis, 100 µL of a 1:10 dilution of the extract in acetonitrile was transferred to HPLC-MS vials (1.5-HRSV 9 mm Screw Thread Vials, Thermo Fisher™). For each organ, triplicate samples were taken from three specimens and a mixture of all samples from one species served as the quality control samples. All solvents were prechilled and the samples were kept on ice during the extraction procedure.</ns0:p></ns0:div>
<ns0:div><ns0:head>LC-MS/MS</ns0:head><ns0:p>LC-MS/MS analysis was performed on a Vanquish Horizon UHPLC (Thermo Fisher Scientific) with Accucore C30 column (150 × 2.1 mm, 2.6 μm, Thermo Fisher Scientific) at 40 °C connected to a Q Exactive Plus orbitrap mass analyzer with a HESI source (Thermo Fisher Scientific).</ns0:p><ns0:p>A solvent gradient of acetonitrile:water (60:40; vol./vol.) with 10 mM ammonium formate and 0.1% formic acid (buffer A ) and 2-propanol:acetonitrile (90:10; vol./vol.) with 10 mM ammonium formate and 0.1% formic acid (buffer B) <ns0:ref type='bibr' target='#b4'>(Breitkopf et al. 2017</ns0:ref>) was used at a flow rate of 350 μl * min −1 . The gradient started at 0% buffer B and reached 97% B in 25 minutes, and was then followed by 7.5 minutes isocratic elution.</ns0:p><ns0:p>Per sample 10 μl extract were injected. MS measurements were acquired alternating between positive-ion and negative-ion mode in a range of m/z 150-1500. The mass resolution was set to 70000 for MS scans and 35000 for MS/MS scans at m/z 200.</ns0:p><ns0:p>PeerJ An. Chem. reviewing PDF | (ACHEM-2021:10:66735:1:0:NEW 3 Jun 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
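For orientation, the gradient program described in this section can be expressed as a small helper function; this is only a sketch of the stated profile, not the vendor method file used on the instrument:

    def percent_buffer_b(t_min: float) -> float:
        """%B of the LC gradient at time t: 0% to 97% B over 25 min,
        followed by a 7.5 min isocratic hold at 97% B."""
        if t_min <= 0.0:
            return 0.0
        if t_min <= 25.0:
            return 97.0 * t_min / 25.0
        return 97.0  # isocratic hold until the end of the 32.5 min run

    # Example: mobile phase composition at 5-minute intervals
    for t in range(0, 35, 5):
        print(t, round(percent_buffer_b(t), 1))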
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>After each full MS scan dynamic data acquisition recorded MS/MS scans of the eight most abundant precursor ions with dynamic exclusion enabled for 30 s, followed by polarity switching.</ns0:p><ns0:p>Fragments were generated by collision-induced dissociation (higher-energy C-trap dissociation) at an energy level of 30 eV. Raw data was analyzed with FreeStyle (Thermo Fisher Scientific Inc., Version 1.6.75.20). LC-MS data are available at Metabolights (https://www.ebi.ac.uk/metabolights/) accession number MTBLS2960 <ns0:ref type='bibr' target='#b21'>(Haug et al. 2020)</ns0:ref>.</ns0:p><ns0:p>Lipids were identified via MS/MS comparison and exact mass match using Lipidmaps as database query and match to theoretical sum formulas of phosphonolipids. Specifically, phosphono-ceramides are named by number in brackets for number of carbon atoms:number of double bonds in the fatty acid, -OH indicating a hydroxyl group on the sphingosine base.</ns0:p></ns0:div>
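The exact-mass matching mentioned above amounts to comparing observed precursor m/z values with theoretical adduct masses within a small ppm tolerance. A schematic example in Python, using the PnE-Cer(34:1) masses given in Figure 1 (the 5 ppm tolerance is an assumption for illustration, not the threshold used by the authors):

    PROTON = 1.007276  # Da

    def ppm_error(observed_mz: float, theoretical_mz: float) -> float:
        """Mass deviation in parts per million."""
        return (observed_mz - theoretical_mz) / theoretical_mz * 1e6

    # Neutral monoisotopic mass of PnE-Cer(34:1), as given in Figure 1.
    neutral_mass = 644.525711
    theo_neg = neutral_mass - PROTON   # theoretical [M-H]-
    theo_pos = neutral_mass + PROTON   # theoretical [M+H]+

    # Observed precursor ions reported in Figure 1.
    for label, observed, theoretical in [("[M-H]-", 643.5184, theo_neg),
                                         ("[M+H]+", 645.5330, theo_pos)]:
        err = ppm_error(observed, theoretical)
        verdict = "match" if abs(err) <= 5.0 else "no match"
        print(f"{label}: {err:+.2f} ppm -> {verdict}")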
<ns0:div><ns0:head>Mass Spectrometry Imaging Sample Preparation</ns0:head><ns0:p>Tissue samples were embedded in precooled 20 mg * ml -1 carboxymethyl cellulose (MW ~700,000, Sigma Aldrich) and snap frozen in liquid nitrogen <ns0:ref type='bibr' target='#b30'>(Kawamoto 2003)</ns0:ref>. Tissue sections for -MSI were prepared from embedded samples cut to a 10 µm thickness in a cryotome (Leica CM3050 S, -30 °C chamber temperature, -20 °C object holder temperature) and thaw-mounted on poly-L-lysine coated slides (Thermo Fisher). Slides were stored in a desiccator with silica beads (Carl Roth) under reduced pressure to avoid lipid oxidation before analysis. For all sections from M. edulis and C. gigas gill and mantle tissue an ionization matrix composed of a mixture of 2,5-dimethoxy and 2-hydroxy-5-methoxybenzoic acid (Super-DHB, Sigma Aldrich) in acetone:water (60:40; vol./vol.) with 0.1% TFA was applied by the SMALDIPrep sprayer (TransMIT GmbH). Over 30 minutes 225 µl of a 30 mg * ml -1 solution was deposited with nitrogen as carrier gas in a chamber containing a rotating sample slide. For one M. edulis foot section the ionization matrix was 2′,5′-Dihydroxyacetophenone (DHAP), applied as previously reported <ns0:ref type='bibr' target='#b2'>(Bien et al. 2021</ns0:ref>) via a Sono-Tek SimCoat sprayer with ACCUMIST ultrasonic nebulizer (Sono-Tek Corporation). A 15 mg * ml -1 solution of DHAP was sprayed with a flowrate of 50 µl*min -1 at a nitrogen pressure of 1 psi and ultrasonic frequency of 48 kHz. Per slide, 20 layers of matrix with a line distance of 1.8 mm were applied in a meandering pattern, alternating along the X-and Y-axis.</ns0:p></ns0:div>
<ns0:div><ns0:head>Mass Spectrometry Imaging</ns0:head><ns0:p>Mass spectrometry imaging was done with an AP-SMALDI10 (TransMIT GmbH) ion source at atmospheric pressure coupled to an orbitrap mass spectrometer (Q Exactive HF, Thermo Fisher Scientific). Laser focus was achieved by carefully adjusting the z-distance between sample and source until a minimal spot size was reached. All datasets were acquired at a resolution (pixel size) of 8-11 µm without oversampling. Spectral data was recorded in positive mode with a m/z range of either 350-1500 or 300-1200 and a mass resolution of 240000 at m/z 200 (see Table <ns0:ref type='table'>S2</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>MSI data conversion and analysis</ns0:head><ns0:p>Raw data was converted to centroided .mzML format with MSConvert GUI (ProteoWizard, Version 3.0.9810) and subsequently to .imzML using the imzML Converter version 1.3.0 <ns0:ref type='bibr' target='#b56'>(Race et al. 2012)</ns0:ref>. Datasets were then imported to SCiLS Lab v2020b and total ion count (TIC) normalized ion maps were exported from SCiLS Lab for use in figures.</ns0:p><ns0:p>The imzML files of all acquired datasets have been uploaded to Metaspace2020 (www.metaspace2020.eu) <ns0:ref type='bibr' target='#b53'>(Palmer et al. 2017)</ns0:ref> and can be publicly browsed and downloaded.</ns0:p><ns0:p>Colocalization analysis was performed based on the median-threshold cosine distance algorithm implemented in www.metaspace2020.eu <ns0:ref type='bibr' target='#b51'>(Ovchinnikova et al. 2020)</ns0:ref>.</ns0:p><ns0:p>All raw data was uploaded to https://www.ebi.ac.uk/metabolights/. See table <ns0:ref type='table'>S2</ns0:ref> for datasets and settings for measurements.</ns0:p></ns0:div>
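The colocalization score referenced here is, in essence, a cosine similarity computed after median thresholding of the ion images. A simplified, self-contained illustration on synthetic arrays (not the actual datasets or the METASPACE implementation itself) is given below:

    import numpy as np

    def median_threshold_cosine(img_a: np.ndarray, img_b: np.ndarray) -> float:
        """Cosine similarity of two ion images after setting pixels at or
        below each image's median intensity to zero."""
        a = img_a.astype(float).ravel()
        b = img_b.astype(float).ravel()
        a[a <= np.median(a)] = 0.0
        b[b <= np.median(b)] = 0.0
        denom = np.linalg.norm(a) * np.linalg.norm(b)
        return float(a @ b / denom) if denom > 0 else 0.0

    # Synthetic 50 x 50 ion images: one confined to the tissue rim
    # (mimicking an epithelium-specific lipid), one distributed everywhere.
    rng = np.random.default_rng(0)
    rim = np.zeros((50, 50))
    rim[:2, :] = rim[-2:, :] = rim[:, :2] = rim[:, -2:] = 1.0
    ubiquitous = rng.random((50, 50))

    print(median_threshold_cosine(rim, rim))          # 1.0 (perfect colocalization)
    print(median_threshold_cosine(rim, ubiquitous))   # substantially lower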
<ns0:div><ns0:head>Histology</ns0:head><ns0:p>Tissue sections consecutive to those used for MSI were prepared for histology to allow tissue-specific metabolite correlations. For histological analysis, tissue sections were stained with haematoxylin and eosin (H&E fast staining kit, Carl Roth). In brief, slides were submerged in solution 1 (modified haematoxylin solution), rinsed under de-ionized water for 10 s, submerged in 0.1 % HCl for 10 s, blued under running de-ionized water for 6 minutes, submerged in solution 2 (modified eosin-g solution) for 30 s, and rinsed again for 30 s. Optical images were acquired on a slide scanning microscope (Olympus VS 120, OLYMPUS EUROPA SE & CO. KG) in bright field with 20x magnification and exported as lossless PNG files.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>In total, we identified 20 different phosphonolipids with LC-MS/MS in tissues of M. edulis and C. gigas of which only two are represented in current lipid databases. We could further show the tissue distribution of the identified phosphonolipids with high resolution spatial metabolomics.</ns0:p><ns0:p>PeerJ An. Chem. reviewing PDF | (ACHEM-2021:10:66735:1:0:NEW 3 Jun 2022)</ns0:p><ns0:p>Manuscript to be reviewed Chemistry Journals <ns0:ref type='bibr'>Analytical, Inorganic, Organic, Physical, Materials Science</ns0:ref> After comparing the spatial distribution of the phosphonolipids in the mollusks' tissues we found a subgroup of those lipids which co-localized specifically with epithelial tissue.</ns0:p><ns0:p>Identification and structural diversity of phosphonolipids in marine mollusks By analyzing total lipid extracts with LC-MS/MS we identified a diverse set of phosphonolipids from the class phosphono-ceramide (PnE-Cer) in the organs of two different bivalve species (see fig. <ns0:ref type='figure'>1</ns0:ref>). Phosphonolipids' structural diversity is not reflected in lipid databases even though there is no obvious reason to expect their diversity to be lower than that of lipids with more commonly observed headgroups. Only two members of the PnE-Cer class are listed in LIPIDMAPS (accessed November 23 rd 2021) <ns0:ref type='bibr' target='#b15'>(Fahy et al. 2007;</ns0:ref><ns0:ref type='bibr' target='#b63'>Sud M. 2006</ns0:ref>), one of the major current lipid databases. Thus, automated annotation was not suitable for identifying phosphonolipids in our samples. Instead, we manually screened the LC-MS/MS data of all parent ions in negative ionization mode for the characteristic fragment of the deprotonated phosphonate (m/z 124.0164) <ns0:ref type='bibr' target='#b14'>(Facchini et al. 2016</ns0:ref>) (see fig. <ns0:ref type='figure'>1b,c</ns0:ref>). In positive ionization mode the neutral loss of the 2-aminoethylphosphonate (AEP) moiety m/z 125.0242 was used to confirm the identification of phosphonolipids (see fig. <ns0:ref type='figure'>1d</ns0:ref>). Using this approach, we detected 20 PnE-Cer with 45 possible structural isomers in our five sample types (see fig. <ns0:ref type='figure'>2</ns0:ref>, supporting info Table <ns0:ref type='table'>S1</ns0:ref>). The 20 phosphonolipids showed very different abundances across the species and organs, with signal intensities spanning three orders of magnitude. The most abundant phosphonolipids in the tissue extracts of M. edulis and C. gigas were PnE-Cer(32:1), PnE-Cer(34:2), PnE-Cer(35:3) and PnE-Cer(35:3)-OH. Among the less abundant phosphonolipids, we also detected PnE-Cer(34:1), the only phosphono sphingolipid besides PnE-Cer(32:1) included in LIPIDMAPS (entries: LM_ID LMSP04000002, LMSP04000001). <ns0:ref type='figure'>3</ns0:ref>). In the mantle, a thin tissue layer lining the inside of the shell and covered by an epithelium, those two lipids were limited to the monolayer of epithelial cells at the rim of the organ (see fig. <ns0:ref type='figure'>3</ns0:ref>). In the gills, the respiratory organ comprised almost entirely of epithelial cells with a ciliated surface, the epithelium-specific phosphonolipids were detected throughout the organ (see supporting info, fig. <ns0:ref type='figure'>S1</ns0:ref>).</ns0:p></ns0:div>
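The manual screening described in this section reduces to two checks per MS2 spectrum: the AEP fragment ion at m/z 124.0164 in negative mode, and a fragment explained by a neutral loss of 125.0242 from the precursor in positive mode. A schematic helper is sketched below (hypothetical spectra and a 10 ppm tolerance are assumptions for illustration; only the two diagnostic masses and the PnE-Cer(34:1) precursor values are taken from the manuscript):

    AEP_FRAGMENT_NEG = 124.0164   # deprotonated 2-aminoethylphosphonate
    AEP_NEUTRAL_LOSS = 125.0242   # neutral loss of the AEP headgroup (positive mode)

    def within_tol(mz: float, target: float, ppm: float = 10.0) -> bool:
        return abs(mz - target) <= target * ppm * 1e-6

    def is_phosphonolipid_candidate(precursor_mz, fragment_mzs, polarity) -> bool:
        """Flag an MS2 spectrum as a putative phosphonolipid."""
        if polarity == "negative":
            return any(within_tol(mz, AEP_FRAGMENT_NEG) for mz in fragment_mzs)
        # positive mode: look for the fragment left after losing the AEP headgroup
        return any(within_tol(mz, precursor_mz - AEP_NEUTRAL_LOSS) for mz in fragment_mzs)

    # Hypothetical spectra, loosely based on PnE-Cer(34:1) from Figure 1.
    print(is_phosphonolipid_candidate(643.5184, [124.0163, 281.25, 306.28], "negative"))  # True
    print(is_phosphonolipid_candidate(645.5330, [520.509, 264.27], "positive"))           # True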
<ns0:div><ns0:head>Spatial distribution of phosphonolipids in different organs of M. edulis and C. gigas</ns0:head><ns0:p>PnE-Cer(35:3) (m/z 677.4999, [M+Na] + ), which is among the most abundant phosphonolipids in all analyzed tissues, showed a homogenous distribution throughout the organs. It is uniformly distributed in the mantle tissues of both M. edulis and C. gigas (see fig. <ns0:ref type='figure'>3</ns0:ref>). A similar, homogenous distribution of PnE-Cer(35:3) was present in the gill tissues, where it was also detected in the basal membrane covered by endothelial cells. In those cells the epithelium-specific lipids PnE-Cer(34:1) and PnE-Cer(35:3)-OH were absent (see fig. <ns0:ref type='figure'>S1</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Distribution of phosphonolipids in the foot of Mytilus edulis</ns0:head><ns0:p>The foot of M. edulis is the organ used for locomotion and chemical sensing, while in C. gigas the foot is regressed in adults <ns0:ref type='bibr' target='#b6'>(Cannuel & Beninger 2006;</ns0:ref><ns0:ref type='bibr' target='#b40'>Lane & Nott 1975)</ns0:ref>. Thus, a comparison of lipid distributions in foot tissues between the two bivalve species is not possible.</ns0:p><ns0:p>We extended our study by testing a different matrix, dihydroxyacetophenone (DHAP) and a respective application protocol <ns0:ref type='bibr' target='#b2'>(Bien et al. 2021</ns0:ref>) which resulted in a higher number of metabolite annotations compared to sample preparation with DHB (see table <ns0:ref type='table'>S2</ns0:ref>).</ns0:p><ns0:p>Phosphonolipids showed a tissue-specific distribution in the M. edulis foot sample (see fig. <ns0:ref type='figure'>3</ns0:ref>), similar to the gill and mantle. In the foot we could again precisely localize the phosphonolipid PnE-Cer(34:1) (m/z 667.5155, [M+Na] + ) to the epithelial cells outlining the tissue (see fig. <ns0:ref type='figure'>3j</ns0:ref>).</ns0:p><ns0:p>The lipid was almost absent from other foot tissues, such as musculature and gland cells. The ubiquitous phosphonolipid PnE-Cer(35:3) and others were detected throughout the organ (see fig. <ns0:ref type='figure'>3j,k</ns0:ref>).</ns0:p><ns0:p>PeerJ An. Chem. reviewing PDF | (ACHEM-2021:10:66735:1:0:NEW 3 Jun 2022)</ns0:p><ns0:p>Manuscript to be reviewed Chemistry Journals <ns0:ref type='bibr'>Analytical, Inorganic, Organic, Physical, Materials Science</ns0:ref> As expected, other lipids such as phosphoserines (PS) are present throughout the organ, comparable to the phosphonolipids we classified as ubiquitous (see fig. <ns0:ref type='figure'>4c</ns0:ref>).</ns0:p><ns0:p>To evaluate the distribution of epithelia-specific phosphonolipids in comparison to other lipids, we performed a colocalization analysis. We analyzed the M. edulis foot dataset as it provided the highest number of annotated lipids in the study (252 annotation @ FDR 10% in the LipidMaps database, see table <ns0:ref type='table'>S2</ns0:ref>). A number of lipids were found with high colocalization values to the phosphonolipid that outlines the organ (PnE-Cer(34:1), m/z 667.5155, [M+Na] + , see fig. <ns0:ref type='figure'>4d</ns0:ref>). However, none of the 252 annotated metabolites showed the same fine scale distribution confined to the outer epithelial region of the tissue sections. This shows that the distribution of the epithelia-specific phosphonolipids is unique among lipids and points towards a specialized function.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Despite their abundance in bacteria, protists and animals, especially in marine invertebrates, little is known about phosphonolipids biological functions. We discovered diverse spatial patterns of phosphonolipids of which some where highly tissue-specific. These distributions in the organs of marine mollusks possibly point towards specialized functions.</ns0:p><ns0:p>We found a high diversity of phosphonolipids in bivalve tissues and confirmed their identity with LC-MS/MS. Among phosphonolipids that were identified with high confidence we found 18 that are not covered in public databases, although these lipids have previously been reported in marine mollusks and other animals <ns0:ref type='bibr' target='#b14'>(Facchini et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b24'>Hori et al. 1967;</ns0:ref><ns0:ref type='bibr'>Kafarski 2019;</ns0:ref><ns0:ref type='bibr' target='#b35'>Kennedy & Thompson 1970;</ns0:ref><ns0:ref type='bibr' target='#b49'>Mukhamedova & Glushenkova 2000)</ns0:ref>. We envision that the addition of phosphonolipids to public databases including MS/MS spectra could substantially improve the annotation and identification of this class of so far little-studied lipids. High-resolution spatial metabolomics revealed that phosphonolipids show unique distributions in bivalve tissues, allowing us to analyze the metabolic properties that underlie tissue functions, for example the epithelial barrier. While some lipid species are homogenously distributed others are spatially correlated with specific tissues within organs. Across different species of marine bivalves, we consistently found a subset of phosphonolipids matching the distribution of the ciliated epithelia outlining organs exposed to the environment. Across all analyzed organs, this subset of phosphonolipids was almost completely absent from other tissues. Similarly, In a deep-sea mussel species it was previously shown that the relative abundance of the phosphonolipid PnE-Cer(34:1) is five to ten times higher in the animals' gill compared to the foot <ns0:ref type='bibr' target='#b33'>(Kellermann et al. 2012</ns0:ref>). This can be explained by higher relative PeerJ An. Chem. reviewing PDF | (ACHEM-2021:10:66735:1:0:NEW 3 Jun 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science abundance of epithelial cells in the gill and partly confirms our hypothesis that phosphonolipids are confined to the tissues of the mollusks directly exposed to the environment.</ns0:p></ns0:div>
<ns0:div><ns0:head>Could the enrichment of phosphonolipids provide stability and protection to cells in tissues</ns0:head><ns0:p>exposed to the environment?</ns0:p><ns0:p>The most striking protective feature of bivalves is their shell. However, once the shell is opened and the mussel begins to filter feed and breathe, much of the animal's tissues are exposed directly to the environment. Those exposed and often mucous-covered soft tissues are a potential target site for microbial pathogens, which are constantly pumped along with water through the animals' mantle cavity. Lining the inner sides of the shell, the mantle of bivalves is the outermost organ of the animal covered by a ciliated epithelium. The gills, the respiratory organ of bivalves, are comprised almost entirely of epithelial cells and consist of many parallel filaments, arranged thusly to increase surface area and facilitate gas exchange. Extensive surface enlargement comes at a price; history shows that extensive borders are harder to defend against possible intruders. It is thus not surprising that both the gill and mantle have been shown to be sites of parasite infection in wild Mytilus mussels <ns0:ref type='bibr' target='#b45'>(Mladineo et al. 2012</ns0:ref>). In the tissue sections of M. edulis and C. gigas gills, the distribution of certain phosphonolipids follows the exposed cells. In these exposed cells a protective lipid with high chemical and biological stability, as exhibited by phosphonolipids, would be most effective.</ns0:p><ns0:p>Ciliation can act as a mechanical protection but also increases surface area and topography, and thereby possible infection sites. The cilia on gill and mantle epithelia of mussels generate a steady water current for respiration and transport food particles through ciliary movement <ns0:ref type='bibr' target='#b26'>(Jones et al. 1990</ns0:ref>). Currently, MSI cannot spatially resolve if phosphonolipids are specific to just the ciliary membrane, or if they are also present in the epithelial cell membrane of ciliated cells. Without sub-micrometer resolution for MSI, additional experiments, such as LC-MS/MS on isolated cilia <ns0:ref type='bibr' target='#b44'>(Mitchell 2013</ns0:ref>) would be needed. Notably, the analysis of cilia isolated through shearing forces revealed that phosphonolipids made up the majority of the ciliary lipids in a free-living protozoan <ns0:ref type='bibr' target='#b62'>(Smith et al. 1970)</ns0:ref>. Regardless of this finer differentiation, the abundance of some phosphonolipids on the outermost, exposed regions of organs could point towards a protective measure.</ns0:p><ns0:p>While the localization of a metabolite is no proof of its function, the Bauhaus principle of </ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>We were able to map the distribution of phosphonolipids in tissues of different environmentally and economically important mollusks species to reveal a possible role of phosphonolipids in those animals. For those phosphonolipids in the ciliated epithelia, a protective role is plausible, however, to prove this protective function, manipulative experiments are required. Future studies could investigate the fitness of pathogen-challenged mussels with and without phosphonolipids. Similarly, the infectivity of pathogenic Vibrio strains with a knock-out of phosphonate degradation genes should be tested against the wild type in a mussel infection system. In the future, correlative approaches with spatial metabolomics, such as spatial transcriptomics and proteomics, may resolve the biochemistry of phosphonolipid degradation during pathogen infection in mollusks.</ns0:p><ns0:p>We conclude that a functional annotation cannot be generalized for the entire class of lipids. It is evident from our study that phosphonolipids in marine mussels are chemically diverse and possibly have versatile functions based on their tissue distributions.</ns0:p><ns0:p>PeerJ An. Chem. reviewing PDF | (ACHEM-2021:10:66735:1:0:NEW 3 Jun 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:note type='other'>Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science Figure 1</ns0:note><ns0:p>Mussel extracts contain a high number of phosphonolipids Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science</ns0:p><ns0:note type='other'>Figure 2</ns0:note><ns0:p>Relative abundance of identified phosphonolipids in analyzed mussels and selected metabolite distributions in M. edulis mantle tissue</ns0:p><ns0:p>Heatmap table shows relative abundance of ion intensity counts from bulk LC-MS analysis, number of carbon atoms and double bonds in lipid backbone annotated as 'Lipid-sidechain', isobaric isomers of a lipid are summarized and displayed as one value for the sum of all ions.</ns0:p><ns0:p>For details of all [M-H] -ions, their sum formula and exact mass see Table <ns0:ref type='table'>S1</ns0:ref>). Mass Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Lipids are an integral part of cellular membranes that allow cells to alter stiffness, permeability, and curvature. Among the diversity of lipids, phosphonolipids uniquely contain a phosphonate bond between carbon and phosphorous. Despite this distinctive biochemical characteristic, few studies have explored the biological role of phosphonolipids, although a protective function has been inferred based on chemical and biological stability. We analyzed two species of marine mollusks, the blue mussel Mytilus edulis and pacific oyster Crassostrea gigas, and determined the diversity of their phosphonolipids and their distribution in different organs. High-resolution spatial metabolomics revealed that the lipidome varies significantly between tissues within one organ. Despite their chemical similarity we observed a high heterogeneity of phosphonolipid distributions that originated from minor structural differences. Some phosphonolipids are ubiquitously distributed, while others are present almost exclusively in the layer of ciliated epithelial cells. This distinct localization of certain phosphonolipids in tissues exposed to the environment could support the hypothesis of a protective function in mollusks.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Using spatial metabolomics with MSI we examined the distribution of the phosphonolipids identified with LC-MS/MS within organs of the two marine mollusks. We analyzed tissue sections of the gill and mantle of M. edulis and C. gigas as well as the foot of M. edulis. The 20 phosphonolipids showed variable spatial distributions even though they only differ by their fatty PeerJ An. Chem. reviewing PDF | (ACHEM-2021:10:66735:1:0:NEW 3 Jun 2022) Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science acid moieties (see fig.2). Some phosphonolipids were mostly confined to the epithelial layers of the organs, while others showed a homogenous distribution throughout the organs. The most evident examples for a tissue-specific distribution, PnE-Cer(34:1) (m/z 667.5155, [M+Na] + ) in M. edulis and PnE-Cer(35:3-OH) (m/z 693.4948, [M+Na] + ) in C. gigas, were only present in epithelial tissue (see fig.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>design 'form follows function' holds true in many biological examples, especially in the context PeerJ An. Chem. reviewing PDF | (ACHEM-2021:10:66735:1:0:NEW 3 Jun 2022) Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science of protective barriers against biotic and abiotic stressors in the environment. The spatial metabolite distribution (form), following a specific cell type or tissue, in many cases reflects the local function of a metabolite. Dotriacontanal, a wax lipid, is present only on the cuticle of maize plants where it protects from UV radiation and water loss(Dueñas et al. 2019). In the giant clam Tridacna crocea, UV-protective secondary mycosporines are localized in the outer layer of the epithelium(Goto-Inoue et al. 2020). A cocktail of antibiotics, produced by symbiotic bacteria, is found in high concentration only on the outer surface of the beewolf digger wasp cocoon it protects(Kroiss et al. 2010). The spatial distribution of these metabolites which shape tissues functionality, was revealed by MSI after their biological function was already known. We show that by applying MSI techniques, tissue-specific localization of metabolites complements our understanding of tissue functioning on a biochemical level. Teasing apart tissue chemistry is essential to translate and test our findings for potential applications in medicine or biotechnology. Understanding the role of phosphonolipids in defense against pathogens could open up a new line of research and identify a potential target for drug development to protect shellfish hatcheries.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>A) LC-MS chromatogram (base peak) of M. edulis gill tissue extract, measured in positive mode. (B) Ion trace for the 2-aminoethylphosphonate (AEP) headgroup m/z 124.0160 ± 5 ppm in all acquired MS2 scans (negative mode). Putative identification of phosphonolipid PnE-Cer 34:1 (exact mass 644.525711) in (C) negative ionization mode (m/z 643.5184 [M-H] -) and (D) positive ionization mode (m/z 645.5330 [M+H] + ). Indicative AEP occurs as neutral loss in positive ionization mode and as fragment ion in negative ionization mode (m/z 124.01). PeerJ An. Chem. reviewing PDF | (ACHEM-2021:10:66735:1:0:NEW 3 Jun 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure>
<ns0:note place='foot'>Analytical, Inorganic, Organic, Physical, Materials Science </ns0:note>
</ns0:body>
" | "Response Letter for manuscript peerJ achem-66735 Bourceau et al.
We thank the editor Robert Winkler and all reviewers for their ideas and criticisms, to which we respond in detail here; we have made text and figure changes as recommended. Our responses are displayed in blue font.
Editor's Decision MAJOR REVISIONS
Besides the comments of the three reviewers, I want to add that you claim the importance of localization for biological function. However, no test, for example of antimicrobial activity, is presented in your paper.
We thank the editor for pointing out this aspect. As we did not provide functional data by testing for antimicrobial activity, we added a short discussion citing examples of reported biological functions and phrased the title and text more carefully.
The new title reads: “Spatial metabolomics shows contrasting phosphonolipid distributions in tissues of marine bivalves”.
Our aim was to close the gap between bulk metabolomics and functional experiments by complementing known functions of tissues with their metabolic fingerprint. Notably, we do not suggest that imaging-based spatial metabolomics alone can be proof of a metabolite’s function. To clarify, we changed the manuscript text to convey this better. The following text was inserted:
While the localization of a metabolite is no proof of its function, the Bauhaus principle of design “form follows function” holds true in many biological examples, especially in the context of protective barriers against biotic and abiotic stressors in the environment. The spatial metabolite distribution (form), following a specific cell type or tissue, in many cases reflects the local function of a metabolite. Dotriacontanal, a wax lipid, is present only on the cuticle of maize plants where it protects from UV radiation and water loss(Dueñas et al. 2019). In the giant clam Tridacna crocea UV-protective secondary mycosporines are localized in the outer layer of the epithelium(Goto-Inoue et al. 2020). A cocktail of antibiotics, produced by symbiotic bacteria, is found in high concentration only on the outer surface of the beewolf digger wasp cocoon it protects(Kroiss et al. 2010). The spatial distribution of these metabolites was revealed by MSI after their biological function was already known. We think we show that by applying MSI techniques, new hypotheses can be generated regarding possible functions of metabolites based on their distribution. These putative functional annotations can then be tested in a targeted approach as we have suggested in the conclusions. We have also added to the introduction which underlying biochemical properties of phosphonate metabolites suggest a protective role besides the spatial distribution observed in our samples.
Comments regarding the image quality in the manuscript file for reviewers made us realize that in the review pdf our figures are rendered as low-resolution pixel graphics. In addition to the primary files we have also provided a high resolution pdf with all figures in the review info section.
Reviewer: Abigail Moreno Pedraza
Basic reporting
The manuscript entitled “Spatial metabolomics reveals functional diversity of phosphonolipids in marine bivalves” explains the detection and localization of lipids in two species of marine mollusks. Overall, the manuscript has an interesting approach combining MS imaging, histology, and LC-MS/MS techniques.
We thank the reviewer Abigail M. Pedraza for the positive recognition of our work.
However, the manuscript explains not so well the methodology, the ionization technique is mentioned as MALDI but the instrument used in AP-SMALDI10 (TransMIT GmbH). The authors did not go in detail in the imaging part, the quality of the figures should be improved in order to reflect much better what is in the title of the manuscript 'spatial metabolomics'.
Indeed, we used atmospheric pressure MALDI; we now specify this in more detail in the methods.
The results reflect moderately what they hypothesized, phosphonolipids as protective molecules present on the outside mollusk tissues but should be explained and discussed in more detail.
We did hypothesize a protective function, reflected in a localization to the epithelial tissue, and we agree that our results support this hypothesis only moderately. We now emphasize that the role of phosphonolipids in these species is likely more diverse. We adapted the title of our work and added a paragraph about the connection between localization and function (see also the answer to the editor).
There are references poorly written, incomplete, and even nonpresent in the bibliography list. The figures have poor quality and contain errors.
We now provide high-quality images as extra files and took care to provide a correct reference list. Thanks for highlighting the figure problems; we provide a detailed list of changes at the end of the letter. We used the template provided by PeerJ for the references and carefully checked them before resubmission.
Experimental design
The question and experimental design in principle follow what is expected in this type of analysis including a complementary approach using LC-MS/MS and a histological analysis.
We appreciate the recognition of our experimental design.
In the subsection: MALDI-MSI sample preparation, it is not clear nor explained why the authors used two different approaches to apply the matrix, two different matrices and two instruments e.g. gill and mantle sections 2,5-dimethoxy and 2-hydroxy-5-methoxybenzoic acid (Super-DHB, Sigma Aldrich) was applied by the SMALDIPrep sprayer (TransMIT GmbH). For M. edulis foot 2′,5′-Dihydroxyacetophenone (DHAP) was applied via a Sono-Tek SimCoat sprayer with ACCUMIST ultrasonic nebulizer (Sono-Tek Corporation). The authors did not explain the reason, not in the experimental design not in the results or discussion sections.
We thank the reviewer for the comment on the different spraying techniques; we now explain in the text why we used two different approaches and state this explicitly in the methods.
In general, a combination of different MALDI matrices yields a broader spectrum of detected metabolites in untargeted metabolomics experiments. All of our MSI datasets were acquired with the most common matrix, DHB, and we extended our analysis by testing an additional matrix (DHAP) on one sample (Mytilus foot). As a result, complementing DHB with the DHAP matrix resulted in more putatively annotated metabolites (see metaspace MPIMM_177_QE_P_Medulis). However, we do not have an ultrasonic sprayer in our institute and used the lab of a collaborator to spray the DHAP sample. We were therefore not able to extend this part of the study to the other tissue samples. We chose to include the DHAP dataset, as it extends the number of annotated lipids for this mussel species and gives an outlook on the possibilities for MSI with DHAP. To avoid confusion among readers regarding the two different matrices, we pointed out our matrix choices and their effect on the number of metabolite annotations (see results line 331 and new table S2).
In the subsection: MALDI-MSI measurements, Datasets were acquired with an AP-SMALDI10 (TransMIT GmbH) ion source, the ionization method needs clarification: MALDI or Atmospheric Pressure MALDI.
Please see answer to second question.
The authors must clarify the resolution, the data conversion, and how the data were analyzed (open software, commercial one)
Thanks for pointing us to the missing information. We included all used software and updated the methods section with the requested information. The resolution of each dataset is included in the new table S2.
imzML data from the foot of Mytilus edulis is missing in Metaspace2020.
For a better overview of all the datasets we included table S2 with all hyperlinks and parameters.
The dataset of the M. edulis foot sample is on www.metaspace2020.eu under the name “MPIMM_177_QE_P_Medulis”.
Validity of the findings
The results section, including the figures, does not represent remarkable results. Figure 1, for example, is not a good figure and is poorly labeled (see Fig1a is written positive mode but in the caption, it is written negative mode), the figure does not contribute too much to be presented in the main text.
We thank the reviewer for this careful inspection of our figures and corrected them accordingly. Showing bulk LC/MS data provides the overview and allows for high-throughput MS2 measurements, which are currently not possible with MSI. It is also not very common to link both techniques on one sample type, and we show the power of this approach by combining spatial information from MSI with chemical information obtained by LC/MS2. We feel it is important to show how we identified phosphonolipids that could not be identified by database matches, and therefore kept figure 1 in the main text.
Figure 2 uses too much space for the heat map and needs to be modified (see intensity bar).
We agree and collated the isobaric compounds (detected by LC/MS with different retention times), but kept the MS images. We also increased the size of intensity bars for better readability.
Figure 3 the intensity bar is illegible in all cases. In the caption 'l' is erroneously added, but the last figure is marked as 'k'.
We addressed the issues with readability in figure 3, increased the size of the intensity bars, and corrected the issue with the labels.
More MS images of lipid distribution, comparison, and a discussion with possible protective function are expected in the results section.
We agree and added a new figure (now fig. 4) with a diverse set of lipid distributions supporting our study. First, we show two different lipids with high colocalization to the tissue outline. Only with the high spatial resolution of our dataset can we show that these distributions differ on the fine scale. The distinct localization on the very edge of the tissue for the phosphonolipid PnE-Cer(34:1) (m/z 667.5155 [M+Na]+) indicates a functional difference from other lipids. A protective function seems like a reasonable assumption based on distribution and reported biochemical properties. However, we followed the reviewer and editor suggestions to not focus entirely on this hypothesis. (See also answer to editor).
Additional comments
The localization of molecules in tissues represents an extraordinary method to investigate and decipher the possible functions of molecules in organisms. I suggest that the authors re-evaluate the data already acquired, attention should be paid to the details, avoiding repetitive ideas. Improve the quality of the figures.
We appreciate the constructive feedback and changed text and figures as proposed to improve the overall appearance. We also provide an additional pdf with all figures in original resolution (see answer to editor). As suggested, we re-evaluated the existing data including colocalization analysis.
Reviewer 2
Basic reporting
NO comment
Experimental design
No comment
Validity of the findings
The findings in this manuscript are novel and are of potential interest. The experiments are clearly presented, and the study question is certainly of great interest. The combined use of a metabolomic approach is a strength, much of the study is well done and the group is very capable of such laboratory analyses.
We thank the reviewer for their very positive assessment of our study.
Additional comments
There are minor concerns that the authors need to address for the manuscript.
(1) It is not clear to the readers why authors choose gill, mantle tissue, foot for sampling?
We agree, we did not make our choice of sampled organs clear in the manuscript. The analyzed tissues were chosen because they are associated with ciliated epithelia that are directly exposed to the environment. (See text change line 164)
(2) How do the authors rule out the possibility of the Phosphonolipid composition could be due to different environmental conditions or due to differences in seasonal modifications?
This is an interesting question. We are aware that the MSI represents a snapshot of the physiology at the sampling time point. We want to highlight that the phosphonolipids are differently distributed within one organ of one species. It would be interesting to also check how environmental factors influence the spatial pattern of phosphonolipids in a follow-up study. We like the reviewer's suggestion and added information on other factors to the introduction, including relevant literature. (See line 127)
(3) Perhaps it was overlooked but could the authors please describe briefly about seasonal changes in phosphono lipid composition and also the interplay of environmental factors such as temperature and food quality.
Please see answer to question 2.
(4) The authors have to mention if their study obtained ethical approval
In Germany, where the study was conducted, an ethical approval is not required for work with the two bivalve species in our study. The German Animal Welfare Act does not require a permit or notification for animal experiments with invertebrates, unless they are cephalopods or decapods, or other invertebrates that are on a 'sensory-physiological developmental stage corresponding to vertebrates'. The relevant passages in the German Animal Protection Act (TierSchG) are §7, (2), point 4, §8 paragraph (1), §8a, paragraph (1), §8a, paragraph (6).
Reviewer: Venkata Reddy Chirasani
Basic reporting
It is difficult read some of the labels on figures (Figure 1 and Figure 2). Please render images at high resolution and increase the font size.
Unfortunately, the final review pdf was not at the resolution we submitted (see answer to editor). We have now made sure the material for review is of the highest quality and provided an additional pdf with all figures in their original resolution. We also increased the size of annotations in the figures.
Experimental design
No comment
Validity of the findings
It seems the authors have not thoroughly analyzed the relative abundance of Pne-Cer lipid subtypes in different organs or tissues of M. edulis and C.gigas. For eg: the localization of Pne-Cer 34:1 is not prominent in the mantle of C.gigas. Similarly, 34:2 is not abundant in the gill of C.gigas and 36:2 is not expressed in M.edulis. The authors need to evaluate these differential expressions within/in between the species and explain with respect to their phenotypes.
We could show that there are indeed differences in phosphonolipid profiles between species as well as organs. We chose to present a summary of the data in the heatmap of figure 2 and included the underlying data in table S1. These differences can be attributed to species but also environmental factors and nutrition (see answer to reviewer 2, question 2). Finding a detailed biological explanation for the differences between species is beyond the scope of our study where we focused on the spatial distribution of phosphonolipids within tissues. We chose two species to evaluate whether observed patterns are consistent.
2. The authors stated that phosphonolipids only exist in invertebrates (Line 115-116). However, phosphonolipids are known to exist in variety of other species such as Tetrahymena pyriformis and single celled organisms (protozoa) - Trypanosoma cruzi. The phosphonolipids appear less abundantly in various bovine tissues and even in human aorta. Can authors comment on chemical composition. nomenclature differences, or side chain dissimilarities of these phosphonolipids in comparison to the lipids in mollusks species?
Many thanks for pointing this out, we added a number of examples and cite relevant literature to underline that phosphonolipids are indeed widespread in nature.
We extended the introduction and results to cover the nomenclature with several examples. To further underline the diversity of phosphonolipids and their widespread distribution we give several examples for different classes and cite relevant literature. We also point out that, while common for marine invertebrates, phosphono ceramides (PnE-Cer) are not exclusive to this clade (see line 104).
3. Although the authors precisely identify the distribution of PnE-Cer in the foot of M. edulis, they didn't correlate the functional significance. In order to do so, the authors must answer following questions:
• Why the phosphonolipid PnE-Cer (34:1) was absent in other foot cell types, such as musculature and gland cells.
• What is the relative distribution of phosphonolipid Pne-Cer (35:3) and others in the foot compared to gill and mantle of M.edulis?
These are interesting questions; however, it is not possible to answer all of them based on our dataset and within this study. We did not aim for a comprehensive species comparison on the organ/tissue level, so our dataset notably lacks the foot, which is absent in adult C. gigas. We cannot give an explanation why PnE-Cer (34:1) was absent in other foot cell types, such as musculature and gland cells.
For M. edulis the relative distribution of PnE-Cer (35:3) is shown in blue alongside PnE-Cer (34:1) in yellow in figures 3 & S1. Below we show additional phosphonolipid distributions (PnE-Cer (34:2), (34:3) and (36:2)) for all analyzed tissues of M. edulis as well as C. gigas. In the main figures we present the distributions that are most different within one species, to underline the contrasting patterns.
Figure 1 – Additional distributions of phosphonolipids from datasets presented in the study. Scale bars 1 mm.
Additional comments
1. The authors must differentiate the nomenclature of Pne-Cer phosphonolipids. For instance, Fig. 2 summarize the relative abundance of Pne-Cer lipids in mussels and mantle tissue. However, the sidechain nomenclature 34:2, 35:1, 36:2...etc are repeating throughout the plot.
We showed several isobaric isomers with the same number of carbon atoms and double bonds in the fatty acid tail region. We used the same annotation for these isobars as they cannot be distinguished in the MSI datasets where all isobars show up as one m/z peak. We collapsed the isobars in the heatmap in figure 2 and present all data in supplement table S1. Please see line 219 for clarification of the nomenclature.
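For illustration, the sum-composition shorthand behind these annotations can be expressed as a small sketch; the chain combinations shown are hypothetical examples, not assignments from our data.

```python
def collapse_annotation(chains):
    # Sum (carbons, double bonds) over the chains, as in the shorthand 'PnE-Cer(34:1)';
    # different chain combinations with the same totals give the same annotation
    carbons = sum(c for c, db in chains)
    double_bonds = sum(db for c, db in chains)
    return f"{carbons}:{double_bonds}"

# Two hypothetical chain combinations that collapse to the same 34:1 annotation
print(collapse_annotation([(18, 1), (16, 0)]))   # e.g. 18:1 base / 16:0 acyl -> '34:1'
print(collapse_annotation([(16, 1), (18, 0)]))   # e.g. 16:1 base / 18:0 acyl -> '34:1'
```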
2. Is there any chemical similarity between currently available therapeutics for shellfish hatcheries and phosphonolipids?
Several phosphonate antibiotics are known, of which currently only fosfomycin is used clinically (Cao et al. 2019). We are not aware of any therapeutics marketed specifically towards shellfish hatcheries.
3. It is interesting to see that the phosphonolipids protect mucous covered soft tissues against microbial pathogen invasions. Do the phosphonolipids also protect against harmful UV rays and other environmental pollutants?
The phosphonolipids we discuss in our study do not absorb strongly in the UV spectrum, so a protective effect against UV radiation seems unlikely. A recent spatial metabolomics study, which we now cite, revealed other UV-protective metabolites, secondary mycosporines, in the giant clam Tridacna crocea (Goto-Inoue et al. 2020). Changes in lipid composition have been observed in mussels exposed to pollutants, so it seems likely that phosphonolipids also play a role in this response (Bakhmet et al. 2009).
4. HMDB is wrongly abbreviated in the manuscript as HMBD. Please correct this minor thing.
We thank the reviewer for spotting this mistake. We corrected it accordingly.
Changes in Figures
Figure 1 – Legend corrected.
Figure 2 – We combined isobaric phosphonolipids in the heat map and adjusted intensity bar accordingly. The intensity bars and annotation for the ion-maps were increased in size to aid readability.
Figure 3 – We increased the size of the intensity bars and annotation for better readability. Scale bars of f) and j) were halved to accommodate larger intensity bars.
Figure 4 – New figure, showing the distribution of phospholipids in relation to an epithelia specific phosphonolipid.
Figure S1 - We increased the size of the intensity bars and annotation for better readability.
Bakhmet IN, Fokina NN, Nefedova ZA, and Nemova NN. 2009. Physiological–biochemical properties of blue mussel Mytilus edulis adaptation to oil contamination. Environmental Monitoring and Assessment 155:581-591. 10.1007/s10661-008-0457-5
Cao Y, Peng Q, Li S, Deng Z, and Gao J. 2019. The intriguing biology and chemistry of fosfomycin: the only marketed phosphonate antibiotic. RSC Advances 9:42204-42218. 10.1039/C9RA08299A
Dueñas ME, Larson EA, and Lee YJ. 2019. Toward Mass Spectrometry Imaging in the Metabolomics Scale: Increasing Metabolic Coverage Through Multiple On-Tissue Chemical Modifications. Frontiers in Plant Science 10. 10.3389/fpls.2019.00860
Goto-Inoue N, Sato T, Morisasa M, Yamashita H, Maruyama T, Ikeda H, and Sakai R. 2020. Mass spectrometry imaging reveals differential localization of natural sunscreens in the mantle of the giant clam Tridacna crocea. Scientific Reports 10:656. 10.1038/s41598-019-57296-9
Kroiss J, Kaltenpoth M, Schneider B, Schwinger M-G, Hertweck C, Maddula RK, Strohm E, and Svatoš A. 2010. Symbiotic streptomycetes provide antibiotic combination prophylaxis for wasp offspring. Nature Chemical Biology 6:261-263. 10.1038/nchembio.331
" | Here is a paper. Please give your review comments after reading it. |
677 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Sacred lotus (Nelumbo nucifera) is a large economic crop, which is also cultivated as a horticultural crop. This study performed a systematic qualitative and quantitative determination of five anthocyanins and 18 non-anthocyanin flavonoids from the petals of 207 lotus cultivars. Among the compounds identified in this study, quercetin 3-O-pentose-glucuronide, quercetin 7-O-glucoside, laricitrin 3-O-hexose, and laricitrin 3-O-glucuronide were discovered for the first time in sacred lotus. The relationships between these pigments and petal colors were also evaluated. A decrease in the total content of anthocyanins and an increase in the content of myricetin 3-O-glucuronide resulted in a lighter flower color. Furthermore, petals were yellow when the contents of quercetin 3-O-neohesperidoside and myricetin 3-O-glucuronide were increased, whereas petals were red when the total anthocyanin content was high and the quercetin 3-O-sambubioside content was low. These investigations contribute to the understanding of mechanisms that underlie the development of flower color and provide a solid theoretical basis for the further study of sacred lotus.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Sacred lotus (Nelumbo nucifera) is a large economic crop that is also cultivated as a horticultural crop; its seeds and underground stems are commonly used as vegetables, and its flower has an ornamental value. It has a long history of cultivation and rich varietal resources. Based on morphological characteristics <ns0:ref type='bibr'>(Guo, 2009;</ns0:ref><ns0:ref type='bibr'>Mukherjee et al., 2009)</ns0:ref>, more than 500 cultivars of N. nucifera exist and are native to Asia and Australia, while N. lutea has yellow petals and is native to North America. It is acknowledged that sacred lotus petals present different colors, including red, pink, yellow, white, and red/white pied. The lotus cultivars 'Feihong', 'Fenhonglingxiao', 'Guoqinghong', 'Honghuajianlian', 'Shaoxinghonglian' and 'Yanyangtian' attract widespread attention because of their bright red color, while 'Yuwan', 'Xueju', 'Baijunzixiaolian' and 'Baixuegongzhu' are loved for their pure white color. It is worth mentioning that yellow occupies a special position among lotus flower colors, derived mainly from 'Meizhouhuanglian' and its hybrid descendants. Moreover, anthocyanins are known to be the key factors in the diversity of sacred lotus colors <ns0:ref type='bibr' target='#b4'>(Deng et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b12'>Yang et al., 2009)</ns0:ref>. Because of the presence of anthocyanins and non-anthocyanin flavonoids, lotus exhibits many beneficial biological activities, such as antioxidant, anti-inflammatory, antibiotic, antiallergic and antitumor activities <ns0:ref type='bibr' target='#b2'>(Chen et al., 2012;</ns0:ref><ns0:ref type='bibr'>Jung, 2008;</ns0:ref><ns0:ref type='bibr'>Jung et al., 2003;</ns0:ref><ns0:ref type='bibr'>Juranić and Žižak, 2005;</ns0:ref><ns0:ref type='bibr' target='#b19'>Zhu et al., 2013)</ns0:ref>. <ns0:ref type='bibr' target='#b4'>Deng et al. (2013)</ns0:ref> systematically analyzed the composition and content of anthocyanins, flavones, and flavonols in 108 sacred lotus cultivars with different petal colors. Furthermore, <ns0:ref type='bibr' target='#b3'>Chen et al. (2013)</ns0:ref> proposed a putative flavonoid biosynthetic pathway in sacred lotus; however, a branch of the suggested pathway remains incomplete. To explore the flower coloration mechanism in sacred lotus, <ns0:ref type='bibr' target='#b5'>Deng et al. (2015)</ns0:ref> conducted a comparative proteomics analysis of petals from red and white cultivars and found that different methylation intensities on the promoter sequences of the anthocyanin synthase gene may contribute to the diversity of petal colors. In addition, <ns0:ref type='bibr'>Sun et al. (2016)</ns0:ref> validated that NnMYB5 is a transcription activator of anthocyanin synthesis and that the color difference between red and yellow sacred lotus species may be related to a variation in the MYB5 gene. These studies have shown that it is pertinent to investigate the mechanism that underlies color formation in sacred lotus, and further study is required.</ns0:p></ns0:div>
<ns0:div><ns0:head>It appears that a correlation between chemical composition and color phenotype may exist in sacred lotus.</ns0:head><ns0:p>To further investigate the coloring mechanism of sacred lotus petals, a large number of sacred lotus samples were collected from all over the world, comprising examples of almost all the colors that exist in this species. Based on this collection, we systematically identified and quantified the anthocyanins and non-anthocyanin flavonoids in 207 sacred lotus cultivars, and measured the petal color phenotypes using spectrophotometry. In addition, correlations between petal color and the presence of different pigments were analyzed. This work may benefit our understanding of the relationship between the composition of flavonoids and petal color in sacred lotus, while providing a basis for subsequent research on this important plant species.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Chemicals and materials</ns0:head><ns0:p>The anthocyanin standard petunidin 3-O-glucoside (≥98.0%) and flavonol standards hyperoside, astragalin, and isorhamnetin 3-O-glucoside (≥98.0%) were purchased from Chengdu Push Bio-technology Co., Ltd. Acetonitrile and formic acid were obtained from Sigma-Aldrich (St. Louis, MO, USA), which were applied as eluent and eluent additive in ultra high-performance-liquid chromatography (UPLC) and UPLC-mass spectrometry (UPLC-MS). Other analytical grade reagents were purchased from the Beijing Chemistry Factory (Beijing, China). UPLC-grade water was obtained from Watsons water. Millipore membranes (0.22 μm) were acquired from Alltech Scientific (Beijing, China). The samples were powdered in liquid nitrogen using an analytical mill (IKA A11 basic machine, Germany).</ns0:p></ns0:div>
<ns0:div><ns0:head>Plant materials</ns0:head><ns0:p>Petals of 207 sacred lotus cultivars (Supplementary Table <ns0:ref type='table' target='#tab_0'>S1</ns0:ref>) were grown in the United Lotus Germplasm Resource of the Amway Plant Research and Development Center and the Chinese Academy of Traditional Chinese Medicine (Wuxi, China; lat. 31°57' N, long. 120°29') in same-sized containers (height: 90 cm, diameter: 70 cm), while receiving the same fertilization and disease control treatments. Two days after the bracts emerged, three biological replicates of petals from each cultivar were manually collected, during May and June of 2018 (ambient temperature, 26-30°C), from three individual plants. The fresh petals were immediately frozen in liquid nitrogen, powdered with an analytical mill (IKA A11 basic, IKA), and then stored at -80°C until later use.</ns0:p></ns0:div>
<ns0:div><ns0:head>Color analysis</ns0:head><ns0:p>The fresh petals were compared to the Royal Horticultural Society Color Chart (RHSCC) and sorted into four color groups: purple-red, red, yellow, and white. The colors of the lotus flowers were measured using a spectrophotometer (NF555, Nippon Denshoku, Japan). For each lotus flower, petals were randomly selected, except for those in the outermost and innermost layers. The selected petals were measured at a viewing angle of 2° under Illuminant C, and ColorMate software (version 5) was used to collect and process the values of L*, a*, b*, C*, and h. The L* value represents the lightness of the color; as L* increases, the color becomes lighter, from black (L* = 0) to white (L* = 100). In addition, positive and negative a* values represent red and green, respectively, while positive and negative b* values represent yellow and blue, respectively. Two derived parameters, chroma [C* = (a*^2 + b*^2)^1/2] and hue angle [h = arctan(b*/a*)], were calculated from a* and b*. The chroma parameter describes the saturation of the color, while the hue angle is read counterclockwise around a continuously fading hue circle <ns0:ref type='bibr'>(Gonnet, 1998</ns0:ref><ns0:ref type='bibr'>, 1999)</ns0:ref>. The co-pigment index (CI) value [CI = TF/TA], which represents the co-pigmentation effect, is calculated from the total content of non-anthocyanin flavonoids (TF) and the total content of anthocyanins (TA). TF and TA are described in the section 'Anthocyanin and non-anthocyanin flavonoid profiles in sacred lotus petals'.</ns0:p></ns0:div>
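As a concrete illustration of these definitions, the following minimal Python sketch computes C*, h, and CI from example values; the petal reading and pigment totals used below are hypothetical, and only the formulas given above are taken from the text.

```python
import math

def chroma(a_star, b_star):
    # C* = (a*^2 + b*^2)^(1/2): saturation of the color
    return math.hypot(a_star, b_star)

def hue_angle(a_star, b_star):
    # h = arctan(b*/a*), expressed in degrees on the 0-360 hue circle
    return math.degrees(math.atan2(b_star, a_star)) % 360.0

def copigment_index(tf, ta):
    # CI = TF / TA: total non-anthocyanin flavonoids over total anthocyanins
    return tf / ta

# Hypothetical readings for one red-petal cultivar (not measured values from this study)
L, a, b = 45.2, 38.6, -4.1        # CIELAB values from the spectrophotometer
TF, TA = 1500.0, 300.0            # pigment totals in ug/g FW from the UPLC quantification
print(f"C* = {chroma(a, b):.1f}, h = {hue_angle(a, b):.1f} deg, CI = {copigment_index(TF, TA):.1f}")
```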
<ns0:div><ns0:head>Extraction of anthocyanins and non-anthocyanin flavonoids</ns0:head><ns0:p>The petals were ground into fine powders in liquid nitrogen using an analytical mill (IKA A11 basic machine, Germany). All of the collected samples were extracted according to the method reported by <ns0:ref type='bibr' target='#b4'>Deng et al. (2013)</ns0:ref> and <ns0:ref type='bibr' target='#b3'>Chen et al. (2013)</ns0:ref>, with the following modifications: a solvent system comprising methanol, water, and formic acid (70:28:2, v:v:v) was applied in the extraction, and 1 g of sacred lotus petals was extracted with 8 mL of extraction buffer and sonicated for 20 min at room temperature. The extracts were centrifuged at 5000 × g for 10 min, and the supernatants filtered through a 0.22 μm Millipore filter (Alltech Scientific Corporation) prior to UPLC analysis.</ns0:p></ns0:div>
<ns0:div><ns0:head>HPLC analysis of flavonoids</ns0:head><ns0:p>The analysis of flavonoids was carried out using a Waters H-Class UPLC system consisting of an auto-sampler and quaternary pump arrangement (Waters Corporation, USA) coupled to a UV-vis detector. Compared with previous reports <ns0:ref type='bibr' target='#b2'>(Chen et al., 2012;</ns0:ref><ns0:ref type='bibr'>Lin and Harnly, 2007;</ns0:ref><ns0:ref type='bibr'>Nováková et al., 2006)</ns0:ref>, our UPLC method showed higher separation efficiency and resulted in a shorter run-time. A 5 μl aliquot of each sample solution was injected and analyzed on a Waters Xselect C18 column (150 mm × 4.6 mm, 3.5 μm, Waters, USA). In the solvent system, eluent A was 10% formic acid in water and eluent B, the organic phase, was acetonitrile containing 0.1% formic acid. Chromatograms were acquired at 520 nm and 350 nm for anthocyanins and non-anthocyanin flavonoids, respectively. The gradient elution conditions for the separation of the extracted flavonoids were as follows: 0-10 min, 8-15% B; 10-19 min, 15-21% B; 19-22 min, 21% B; 22-23 min, 21-98% B; 23-35 min, 98% B; 35-35.1 min, 98-8% B; 35.1-50 min, 8% B; flow rate, 0.5 mL min-1; and temperature, 30℃.</ns0:p><ns0:p>Anthocyanins and non-anthocyanin flavonoids were quantitatively analyzed with reference to external standards (petunidin 3-O-glucoside and hyperoside). The calibration curves showed good linear regression within the test concentration ranges, with R2 = 0.9972 and 0.9994, respectively. The limits of detection of the optimized method, calculated at a signal-to-noise ratio of three, were 0.059 μg mL-1 and 0.016 μg mL-1 for petunidin 3-O-glucoside and hyperoside, respectively, while the limits of quantification, at a signal-to-noise ratio of 10, were 0.228 μg mL-1 and 0.063 μg mL-1, respectively. In addition, the newly developed method provided satisfactory precision and accuracy, with overall intra-day and inter-day variations of 0.09-3.41% and 0.66-3.91%, respectively. These results indicated that the optimized UPLC method was stable and suitable for use in the quantitative analysis of flavonoids in sacred lotus petals. The contents of compounds 3, 11, 19, and 21 were quantified by comparison with external standards, while compounds 1, 2, 4, and 5 are given as ug/g FW equivalents of petunidin 3-O-glucoside. The other non-anthocyanin flavonoids were quantified as hyperoside.</ns0:p></ns0:div>
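To make the external-standard quantification and the signal-to-noise based LOD/LOQ concrete, a minimal sketch is given below; the calibration points, peak areas, and noise level are invented placeholders rather than the actual calibration data of this study.

```python
import numpy as np

# Hypothetical calibration series for one external standard (e.g. hyperoside)
conc = np.array([0.5, 1.0, 5.0, 10.0, 50.0])                 # ug/mL injected
area = np.array([1.2e4, 2.3e4, 1.18e5, 2.35e5, 1.17e6])       # peak area at 350 nm

slope, intercept = np.polyfit(conc, area, 1)                   # linear calibration curve
r2 = np.corrcoef(conc, area)[0, 1] ** 2                        # linearity check

def quantify(peak_area):
    # Convert a sample peak area into ug/mL via the external-standard curve
    return (peak_area - intercept) / slope

# LOD and LOQ from baseline noise at signal-to-noise ratios of 3 and 10
noise = 4.0e2                                                  # hypothetical baseline noise
lod, loq = 3 * noise / slope, 10 * noise / slope
print(f"R2 = {r2:.4f}, LOD = {lod:.3f} ug/mL, LOQ = {loq:.3f} ug/mL")
```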
<ns0:div><ns0:head>UPLC-ESI-Q-TOF-MS/MS analysis for determination of flavonoids</ns0:head><ns0:p>Flavonoids in the sacred lotus petal extracts were identified using an Agilent 1290 photodiode array and 6540 triple quad mass time-of-flight (Q-TOF) mass spectrometer, equipped with a dual electrospray ionization (ESI) detector (Agilent, Palo Alto, CA, USA). Nitrogen auxiliary gas was provided. ESI was performed in the negative ionization (NI) mode for both MS and tandem MS (MS/MS) analysis to provide fragmentation information about the molecular weights of the molecules being screened. The ESI source operation parameters were optimized as follows: gas temperature, 350℃; drying gas, 8 L min -1 ; nebulizer, 45 psig; sheath gas temp, 350℃; sheath gas flow, 11 L min -1 ; Vcap, 3500 V; nozzle voltage, 1500 V; and scan range, m/z 100-1100 units. A collision energy of 20 eV was used during MS/MS analysis. Purine and HP-0921 were used as internal references in real time and, in NI mode, their m/z ratios were 119.0363 and 1033.9881, respectively. The MS data, retention times, and UV-vis spectra were used to identify the flavonoids contained in the sacred lotus petals.</ns0:p></ns0:div>
<ns0:div><ns0:head>RNA extraction and qRT-PCR analysis</ns0:head><ns0:p>Total RNA was isolated from petals (B88, white petal; A89, yellow petal; B121, red petal) using a quick RNA Isolation Kit for po(, Beijing, China). Each RNA sample was treated with RNase-free DNase I (TaKaRa) prior to the reverse transcription (RT) reaction to eliminate contaminating genomic DNA. RT-PCR was performed following the standard instructions of the PrimeScript RT Reagent Kit with gDNA Eraser (TaKaRa).</ns0:p><ns0:p>As previously described in Sun et al., 2016, the qRT-PCR was carried out using a StepOne Real-Time PCR system (Applied Biosystems, Foster City, CA, USA). A total reaction volume of 25 uL was used, containing 10 uL of 2 x TransStart Green PCR Supermix UDG (S602, TRANS), 4 uM of each primer, and about 100 ng of template cDNA. The amplification conditions were as follows: incubation at 95 °C for 2 min, then denaturation at 95 °C for 5 s, annealing at 60 °C for 10 s, and extension at 72 °C for 10 s, for a total of 40 cycles. The actin gene of lotus (GenBank ID: EU131153) served as a constitutive control. Relative expression levels of the target genes were calculated by the 2^-ΔΔCt comparative threshold cycle (Ct) method, and three biological replicates were conducted. Primer sequences (Supplementary Table <ns0:ref type='table'>S2</ns0:ref>) were designed based on whole-genome resequencing data of N. nucifera (~30X coverage depth). Three main genes in the flavonoid biosynthesis pathway of lotus, DFR (Gene_ID: NW_010729118.1_renew:02005922_02019073), UFGT (Gene_ID: NW_010729304.1_renew:00079217_00080656), and OMT (Gene_ID: NW_010729121.1_renew:03748642_03755897), were investigated by quantitative reverse transcription PCR (qRT-PCR).</ns0:p></ns0:div>
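The 2^-ΔΔCt calculation used for the relative expression values can be illustrated with a short sketch; the Ct values are hypothetical, and the lotus actin gene is assumed as the reference, as described above.

```python
def relative_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    # 2^-ddCt: target-gene expression relative to a calibrator sample,
    # normalised to the reference gene (here the lotus actin gene)
    d_ct_sample = ct_target - ct_ref
    d_ct_calibrator = ct_target_cal - ct_ref_cal
    dd_ct = d_ct_sample - d_ct_calibrator
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: DFR in a red-petal sample versus a white-petal calibrator
fold = relative_expression(ct_target=22.1, ct_ref=18.3,
                           ct_target_cal=26.0, ct_ref_cal=18.5)
print(f"DFR fold change (red vs. white) = {fold:.1f}")   # about 13-fold with these numbers
```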
<ns0:div><ns0:head>Statistical analysis</ns0:head><ns0:p>Data were analyzed using SPSS 24.0 for Windows ® . The color parameters and pigment contents of petals from 207 cultivars were compared by analysis of variance, combined with Duncan's multiple range tests. Multiple linear regression (MLR) was used to analyze the relationship between color parameters and pigment contents.</ns0:p></ns0:div>
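A minimal sketch of this kind of multiple linear regression (ordinary least squares on a small, hypothetical table of pigment contents) is shown below; it is only an illustration of the analysis concept, not the SPSS procedure actually used.

```python
import numpy as np

# Hypothetical data table: each row is a cultivar, columns are pigment predictors
# (e.g. TA, TF, Qc-3-Neo content); y holds one color parameter (e.g. L*).
X = np.array([[464.5, 1200.0, 2.1],
              [171.7, 1800.0, 3.5],
              [  5.0, 3500.0, 9.8],
              [  2.0, 2200.0, 4.2],
              [900.0,  900.0, 1.0]])
y = np.array([42.0, 58.0, 83.0, 88.0, 31.0])

# Ordinary least squares with an intercept column, the core of an MLR fit
A = np.column_stack([np.ones(len(X)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("intercept:", round(coef[0], 3), "slopes:", np.round(coef[1:], 4))
```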
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Identification of anthocyanins and non-anthocyanin flavonoids</ns0:head><ns0:p>Flavonoids were identified according to the accurate molecular and fragment ion information obtained using MS and MS/MS, UV-vis spectra, and retention times on the C18 column, as revealed by HPLC and HPLC-MS. Ultimately, five anthocyanins and 18 non-anthocyanin flavonoids were identified (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, Figure <ns0:ref type='figure' target='#fig_3'>1A</ns0:ref>). Flavonoids glycosylated with monosaccharide glycosides show characteristic mass spectrometric behavior when analyzed by ESI-MS in NI mode. As reported <ns0:ref type='bibr' target='#b0'>(Ablajan et al., 2006)</ns0:ref>, when the abundance of the radical aglycone, annotated as [A−2H]−, is notably higher than that of the aglycone product ion, annotated as [A−H]−, the glycoside is usually linked at the 3-position, and the opposite abundance trend occurs when the conjugation occurs at the 7-position. However, as an exception, only the aglycone product ion is produced when there is a glucuronic acid group, no matter whether the linked position is 3 or 7. The neutral losses of 146 and 176 mass units from the deprotonated precursor at m/z 623.1475 [M-H]- for peak 8 implied that a pentose and a glucuronide were linked at the 3-position. Moreover, the aglycone product ion at m/z 301.0467 [A-H]- in NI mode indicated that the flavonoid aglycone is quercetin, and thus peak 8 was tentatively identified as quercetin 3-O-pentose-glucuronide (Qc-3-Pen-Gln) (Figure <ns0:ref type='figure' target='#fig_3'>1B</ns0:ref>). As for peak 9, the neutral loss of 162 mass units from the deprotonated precursor at m/z 463.0910 [M-H]-, together with the intense aglycone product ion at m/z 301.0362 [A-H]-, indicated that the linked position is 7 (Figure <ns0:ref type='figure' target='#fig_3'>1C</ns0:ref>); thus peak 9 was tentatively identified as quercetin 7-O-glucoside (Qc-7-Glu). Furthermore, based on the MS/MS spectral data, both peak 13 and peak 14 were tentatively assigned as laricitrin monosaccharides, which have been reported in grapes <ns0:ref type='bibr'>(Jin et al., 2009)</ns0:ref>. The radical aglycone ion at m/z 330.0482 [A-2H]- and the corresponding ion at m/z 315.0228 [A-2H-CH 3 ]-, with a neutral loss of 163 mass units, demonstrated that a hexose substituent was linked at the 3-position (Figure <ns0:ref type='figure' target='#fig_3'>1D</ns0:ref>). Hence, peak 13 was tentatively identified as laricitrin 3-O-hexose (Lar-3-hex); further work is required to identify the nature of this hexose-based compound. Meanwhile, according to the aglycone product ion at m/z 331.0581 [A-H]- and the deprotonated precursor at m/z 507.0999 [M-H]-, the loss of 176 mass units indicated that a glucuronic acid glycoside was conjugated at the 3-position. Together with the corresponding ion at m/z 316.0342 [A-H-CH 3 ]- (Figure <ns0:ref type='figure' target='#fig_3'>1E</ns0:ref>), peak 14 was assigned as laricitrin 3-O-glucuronide (Lar-3-Gln).</ns0:p><ns0:p>The chromatographic and MS data for the anthocyanins and non-anthocyanin flavonoids separated and identified from the sacred lotus petals are listed in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>.
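The diagnostic logic used above, a neutral loss naming the sugar and the relative abundance of the radical aglycone [A-2H]- versus the aglycone [A-H]- ion suggesting a 3- or 7-linked glycoside, can be sketched as a small helper; the loss table and the example intensities are illustrative assumptions, not the software used for peak assignment in this work.

```python
# Monoisotopic neutral losses (Da) for the glycosyl groups assigned above
GLYCOSYL_LOSSES = {162.053: "hexose", 176.032: "glucuronide"}

def match_neutral_loss(precursor_mz, fragment_mz, tol=0.01):
    # Name the sugar lost between the deprotonated precursor and a fragment ion
    loss = precursor_mz - fragment_mz
    for ref_mass, name in GLYCOSYL_LOSSES.items():
        if abs(loss - ref_mass) <= tol:
            return name
    return None

def glycosylation_position(radical_aglycone_int, aglycone_int, glucuronide=False):
    # Heuristic after Ablajan et al. (2006): [A-2H]- >> [A-H]- suggests a
    # 3-O-glycoside, the reverse a 7-O-glycoside; glucuronides only give the
    # aglycone ion, so the rule does not apply to them.
    if glucuronide:
        return "undetermined (glucuronide)"
    return "3-O" if radical_aglycone_int > aglycone_int else "7-O"

# Illustrative values in the spirit of peak 9 (Qc-7-Glu): hexose loss, [A-H]- dominant
print(match_neutral_loss(463.0910, 301.0362))                               # -> 'hexose'
print(glycosylation_position(radical_aglycone_int=1e3, aglycone_int=5e4))   # -> '7-O'
```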
The data from the MS analysis in NI mode provided valuable information, including molecular weights and information about the presence of aglycones and sugars, with their linkage positions. Peaks 1-5 were identified as the following anthocyanins: delphinidin 3-O-glucoside (Dp-3-Glu, 1), cyanidin 3-Oglucoside (Cy-3-Glu, 2), petunidin 3-O-glucoside (Pt-3-Glu, 3), peonidin 3-O-glucoside (Pn-3-Glu, 4), and malvidin 3-O-glucoside (Mv-3-Glu, 5), as previously reported <ns0:ref type='bibr' target='#b12'>(Yang et al., 2009)</ns0:ref>. Peaks 6-23 were identified as non-anthocyanin flavonoids: myricetin 3-O-glucuronide (Myr-3-Gln, 6) <ns0:ref type='bibr' target='#b4'>(Deng et al., 2013)</ns0:ref>, quercetin 3-O-xylopyranosyl-(1→2)-glucopyranoside (quercetin 3-O-sambubioside/Qc-3-Sam, 7) <ns0:ref type='bibr' target='#b6'>(Deng et al., 2009)</ns0:ref>, quercetin 3-O-pentose-glucuronide (Qc-3-Pen-Gln, 8), quercetin 7-O-glucoside (Qc-7-Glu, 9), quercetin 3-O-rhamnopyranosyl-(1→2)galactopyranoside (quercetin 3-O-neohesperidoside/Qc-3-Neo, 10) <ns0:ref type='bibr'>(Li et al., 2014b)</ns0:ref>, quercetin 3-O-galactoside (Qc-3-Gal/Hyperoside, 11) <ns0:ref type='bibr' target='#b9'>(Suzuki et al., 2008)</ns0:ref>, quercetin 3-O-glucuronide (Qc-3-Gln, 12) <ns0:ref type='bibr' target='#b12'>(Yang et al., 2009)</ns0:ref>, laricitrin 3-O-hexose (Lar-3-hex, 13), laricitrin 3-Oglucuronide (Lar-3-Gln, 14), kaempferol 3-O-rhamnopyranosyl-(1→2)-glucopyranoside (kaempferol 3-O-neohesperidoside/Kae-3-Neo, 15) <ns0:ref type='bibr'>(Lim et al., 2006;</ns0:ref><ns0:ref type='bibr'>Sharma et al., 2017)</ns0:ref>, kaempferol 3-O-galactoside (Kae-3-Gal, 16) <ns0:ref type='bibr'>(Jung et al., 2003)</ns0:ref>, kaempferol 3-Orhamnopyranosyl-(1→6)-glucopyranoside (kaempferol 3-O-rutinoside/Kae-3-Rut, 17) <ns0:ref type='bibr'>(Hyun et al., 2017;</ns0:ref><ns0:ref type='bibr'>Sharma et al., 2017)</ns0:ref>, isorhamnetin 3-O-rutinoside (Iso-3-Rut, 18) <ns0:ref type='bibr' target='#b12'>(Yang et al., 2009)</ns0:ref>, kaempferol 3-O-glucoside (Kae-3-Glu/astragalin, 19) <ns0:ref type='bibr' target='#b2'>(Chen et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b12'>Yang et al., 2009)</ns0:ref>, syringetin 3-O-hexose (Syr-3-Hex, 20) <ns0:ref type='bibr'>(Guo, 2009)</ns0:ref>, isorhamnetin 3-O-glucoside (Iso-3-Glu, 21) <ns0:ref type='bibr'>(Sharma et al., 2017)</ns0:ref>, isorhamnetin 3-O-glucuronide (Iso-3-Gln, 22) <ns0:ref type='bibr' target='#b2'>(Chen et al., 2012)</ns0:ref>, and syringetin 3-O-glucuronide (Syr-3-Gln, 23) <ns0:ref type='bibr'>(Li et al., 2014b)</ns0:ref>. These 18 non-anthocyanin flavonoids were classified into six groups: quercetin (Qc), kaempferol (Kae), isorhamnetin (Iso), myricetin (Myr), syringetin (Syr), and laricitrin (Lar), based on the aglycones they contain.</ns0:p><ns0:p>In this study, the four non-anthocyanin flavonoids discussed above (Figure <ns0:ref type='figure' target='#fig_3'>1</ns0:ref>), including Qc-3-Pen-Gln (8), Qc-7-Glu (9), Lar-3-Hex (13) and Lar-3-Gln ( <ns0:ref type='formula'>14</ns0:ref>), were discovered for the first time in sacred lotus petals using the newly developed UPLC-DAD-ESI-Q-TOF-MS/MS technique. Therefore, our study has further refined the research carried out by <ns0:ref type='bibr' target='#b3'>Chen et al. (2013)</ns0:ref> and <ns0:ref type='bibr' target='#b4'>Deng et al. (2013)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Anthocyanin and non-anthocyanin flavonoid profiles in sacred lotus petals</ns0:head><ns0:p>By comparison with the RHSCC, the 207 sacred lotus cultivars were grouped into four color groups: purple-red, red, yellow, and white. Although there were five different anthocyanins and 18 non-anthocyanin flavonoids detected in the flower petals, the compositions and total contents varied significantly among different cultivars. The most abundant anthocyanin was Mv-3-Glu (5), which accounted for 50% of the compounds in the purple-red group and 47% of those in the red group. This was followed by Dp-3-Glu (1), which accounted for 19% of the compounds in the purple-red group and 17% of those in the red group. As for the nonanthocyanin flavonoids, quercetin-, kaempferol-, and isorhamnetin-derivatives were the dominant non-anthocyanin flavonoids in all of the four color groups; however, quercetin derivatives were the most abundant (Supplementary Figure <ns0:ref type='figure' target='#fig_3'>S1</ns0:ref>). Of note, in yellow cultivars, quercetin derivatives were seen to be the most important flavonoids, as these were present in significantly higher amounts (up to 64%) than the other five flavone aglycone derivatives, suggesting a link between these compounds and the yellow petal color. Additionally, kaempferol and isorhamnetin derivatives were the second and third most abundant complexes in yellow petals, accounting for 16% and 9% of the TF, respectively. Conversely, quercetin and kaempferol derivatives showed equal importance in their contribution to the TF of the purplered, red, and white color groups, making up approximately 85% of the TF (Table <ns0:ref type='table'>2</ns0:ref>). Among all the cultivars tested, the highest TA was detected in cultivars of the purple-red group, with a mean content of 464.47 ug/g fresh weight (FW), followed by the red group, with an average content of 171.74 ug/g FW. Generally, the petals containing more anthocyanins displayed darker colors; for example, the purple-red cultivar 'Cuifuhongya' ( <ns0:ref type='formula'>27</ns0:ref>) had the highest mean anthocyanin content (1133.61 ug/g FW) among all the cultivars. In terms of germplasm assessment, some cultivars, such as 'Qiuwanluoshan' (5), 'Ti-13' (74), and 'Ti-13-I' (9), which contain very high contents of Mv-3-Glu (5) and relatively higher TA, may be ideal candidates for breeding purple-red flowers and studying the anthocyanin biosynthesis pathway in sacred lotus. What's more, the TF was the highest in yellow petals, with an average content of 3517.93 ug/g FW. The yellow petal cultivar 'Jintaiyang' (165) contained the highest TF, with a concentration of 7149.35 ug/g FW, and would be a candidate cultivar for the study of the coloring mechanism in yellow sacred lotus (Figure <ns0:ref type='figure'>2</ns0:ref>).</ns0:p><ns0:p>What's more, significant differences were observed in the contents of anthocyanins and non-anthocyanin flavonoids among cultivars of differing color. In order to visualize these differences, individual anthocyanin and non-anthocyanin flavonoid contents were normalized using the Z-score and expressed as a heat map (Figure <ns0:ref type='figure'>3</ns0:ref>). In purple-red cultivars, Mv-3-Glu (5) and Dp-3-Glu (1) were the two major anthocyanins, while Qc-3-Gln (12) and Kae-3-Glu (19) were the two dominant non-anthocyanin flavonoids. 
The red cultivars exhibited similar profiles, with Mv-3-Glu (5) being the most highly concentrated anthocyanin, with Qc-3-Gln (12) and Kae-3-Glu (19) the most abundant non-anthocyanin flavonoids. The yellow and white cultivars, however, demonstrated a deficiency of anthocyanins and an abundance of non-anthocyanin flavonoids. In yellow cultivars, the concentration of Qc-3-Gln (12) was found to be the greatest among the 18 non-anthocyanin flavonoids, which may contribute to the yellow color. Furthermore, the concentrations of Myr-3-Gln (6), Qc-3-Neo (10), Qc-3-Gal (11), Iso-3-Glu (21), Iso-3-Gln ( <ns0:ref type='formula'>22</ns0:ref>), Syr-3-Gln ( <ns0:ref type='formula'>23</ns0:ref>), Syr-3-Hex (20), and Iso-3-Rut (18) showed varying degrees of increase, in association with purple-red and red colored cultivars. In white cultivars, the TF exhibited similar trends to those of the yellow cultivars, but at relatively lower levels. Differences in the distribution of the contents of these ingredients suggested that the color of the sacred lotus may be related to a single anthocyanin or non-anthocyanin flavonoid.</ns0:p></ns0:div>
<ns0:div><ns0:head>Relationships between petal color, anthocyanins, and non-anthocyanin flavonoids</ns0:head><ns0:p>Researchers have reported that flavones and flavonols are responsible for flower color <ns0:ref type='bibr'>(Li et al., 2008)</ns0:ref>, but the relationships between these factors are unknown in sacred lotus. In maize, non-anthocyanin flavonoids are considered to be co-pigments, alongside anthocyanins <ns0:ref type='bibr'>(Stafford, 1998)</ns0:ref>. Hence the co-pigmentation index is an important indicator of the co-pigmentation effect, which occurs, in the main, when CI > 5 <ns0:ref type='bibr' target='#b1'>(Asen et al., 1971;</ns0:ref><ns0:ref type='bibr'>He et al., 2011)</ns0:ref>. According to the formula CI = TF/TA, we found that, in the most purple-red sacred lotus petals, CI was <5, while in the other three color groups, CI tended to be >5, indicating that non-anthocyanin flavonoids had a significant effect on the color of sacred lotus petals, especially when the anthocyanin content was low. This result was in line with the suggestion that co-pigmentation between anthocyanins and non-anthocyanin flavonoids may result in distinct petal colors <ns0:ref type='bibr'>(Li et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b14'>Zhang et al., 2011b;</ns0:ref><ns0:ref type='bibr' target='#b18'>Zhu et al., 2012)</ns0:ref>.</ns0:p><ns0:p>To analyze the relationship between petal color and pigment content in sacred lotus, Pearson's correlation coefficients were calculated among color parameters, anthocyanin contents, and non-anthocyanin flavonoids contents, and displayed as a heat map (Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>). Strong correlations were seen. For example, L* values were significantly negatively correlated with a*, C*, and h (P < 0.01), and significantly positively correlated with b* and CI (P < 0.01). The individual anthocyanin contents of the five groups and TA demonstrated significant negative correlations with L*, b*, and CI (P < 0.01), and positive correlations with a*, C*, and h (P < 0.01). In addition, TF, and most of the individual non-anthocyanin flavonoid contents, were negatively correlated with a*, C*, and h, but positively correlated with L*, b*, and CI. Moreover, Qc-3-Neo (10), Qc-3-Gal (11), Kae-3-Rut (17), Iso-3-Rut (18), Syr-3-Hex (20), Iso-3-Gln ( <ns0:ref type='formula'>22</ns0:ref>), and Syr-3-Gln (23)were significantly correlated with all of the six color parameters (L*, a*, b*, C*, h, and CI), while Qc-3-Pen-Gln (8) and Kae-3-Glu (19) had no apparent correlations with most of these color parameters. Correlations among anthocyanin and non-anthocyanin flavonoid metabolism were also evaluated. Strong positive correlations were found between different anthocyanins (P < 0.01) and between different non-anthocyanin flavonoids. However, significant negative correlations were observed between anthocyanins and most non-anthocyanin flavonoids (P < 0.05), as was found for TA and TF. The correlation analysis indicated that many pigments influence sacred lotus petal color. Thus, MLR analysis was used to estimate the type of pigment that dramatically affects petal color. Color parameters, including L*, a*, and b*, were chosen as dependent variables, and 25 indexes, containing 23 various pigment components, plus TA and TF, were selected as independent variables. To study the interactions between these pigment compositions and color formation, regression equations were established. 
Significant statistical results were acquired as follows:</ns0:p><ns0:p>L* = 76.083 -0.028 TA + 0.022 Myr-3-GlcA ( <ns0:ref type='formula'>6</ns0:ref> The MLR analysis showed that there are many factors affecting petal color, including TA, TF and the levels of Myr-3-GlcA (6), Qc-3-Sam (7), Kae-3-Rut (17), Qc-3-Neo (10), Myr-3-Gln (6), Syr-3-GlcA (23), Pn-3-Glc (4) and Syr-3-Hex (20), among which, TA was the major factor, with positive effects on the values of a*, but negative effects on the value of L*. Myr-3-GlcA (6) was another important factor that exhibited positive effects on the L* value. TA was found to be the primary factor positively influencing the a* value, whereas Qc-3-Sam (7) negatively affected the a* value. Furthermore, Qc-3-Neo (10), Myr-3-Gln (6) and Syr-3-GlcA (23) had positive effects on value of b*. Based on these findings, an increase in TA was determined to push up the values of a*, but lower the value of L*, indicating that the flower colors would become red and darker. The L* value indicates that with less TA and higher Myr-3-GlcA (6) contents, flowers are lighter or white in color. Parameter a* suggests that higher TA and Kae-3-Rut (17) contents, with lower Qc-3-Sam (7) contents and TF, lead to a deeper red flower color, whereas b* indicates that higher Qc-3-Neo (10), Myr-3-Gln (6) and Syr-3-GlcA (23) levels deepen the yellow color of petals. In summary, increasing the content of Myr-3-GlcA (6) and decreasing the TA results in a lighter flower color, whereas a rise in Qc-3-Neo (10) and Myr-3-GlcA (6) turns petals yellow, while a higher TA and Kae-3-Rut (17) contents and lower Qc-3-Sam (7) contents turns flowers red.</ns0:p></ns0:div>
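To make the regression output concrete, the sketch below plugs hypothetical pigment contents into the reported a* equation; the input values are invented for illustration, and only the fitted coefficients are taken from the results above.

```python
def predict_a_star(ta, qc3_sam, kae3_rut, tf):
    # a* predicted from the fitted MLR equation reported above (R2 = 0.630)
    return 15.783 + 0.052 * ta - 0.575 * qc3_sam + 0.233 * kae3_rut - 0.003 * tf

# Hypothetical red cultivar: high total anthocyanins, modest Qc-3-Sam content
print(round(predict_a_star(ta=400.0, qc3_sam=10.0, kae3_rut=20.0, tf=1500.0), 1))
# 15.783 + 20.8 - 5.75 + 4.66 - 4.5 = 31.0, i.e. a strongly red petal
```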
<ns0:div><ns0:head>Putative flavonoid biosynthesis pathway of lotus</ns0:head><ns0:p>In our study, 5 anthocyanins and 18 non-anthocyanin flavonoids were simultaneously detected, identified and quantified in 207 sacred lotus cultivars, among which four components were discovered for the first time in sacred lotus petals. Combining these newly detected compounds with previous studies <ns0:ref type='bibr' target='#b3'>(Chen et al., 2013;</ns0:ref><ns0:ref type='bibr'>Li et al., 2014b)</ns0:ref>, the sacred lotus biosynthetic pathway for the detected anthocyanins and non-anthocyanin flavonoids was deduced in greater depth (Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>). The precursors of flavonoid biosynthesis are 4-coumaroyl-CoA and malonyl-CoA, which are condensed to form naringenin under the catalysis of chalcone synthase (CHS) and chalcone isomerase (CHI). Then, with the help of flavonoid 3-hydroxylase (F3H), flavonoid 3'-hydroxylase (F3'H) and flavonoid 3'5'-hydroxylase (F3'5'H), dihydrokaempferol, dihydroquercetin and dihydromyricetin are synthesized, which are the essential precursor compounds used to synthesize the corresponding anthocyanins and non-anthocyanin flavonoids. As the biosynthesis of flavonols is closely related to that of anthocyanins <ns0:ref type='bibr'>(Jeong et al., 2006)</ns0:ref>, the pathway then divides into five sub-pathways. Flavonol synthase (FLS) plays a decisive role in producing the aglycones of non-anthocyanin flavonoids, while dihydroflavonol reductase (DFR) determines the generation of anthocyanins. The main difference lies in sub-pathway 3, where the flavonol kaempferol is synthesized from dihydrokaempferol, while the anthocyanin pelargonidin is lacking in sacred lotus <ns0:ref type='bibr'>(Li et al., 2014b)</ns0:ref>. Finally, both anthocyanins and non-anthocyanin flavonoids, with the assistance of UDP flavonoid glycosyltransferase (UFGT), O-methyltransferase (OMT) and other enzymes, undergo different structural modifications at the linkage position. In particular, glycosylation is a key mechanism that coordinates the bioactivity, metabolism and location of small molecules in living cells <ns0:ref type='bibr'>(Pfeiffer & Hegedűs, 2011)</ns0:ref>. As shown in our study, it appears to be much simpler to glycosylate at the 3-O-position in sacred lotus petals, with the exception of quercetin. The compound Qc-7-Glu ( <ns0:ref type='formula'>9</ns0:ref> ), glycosylated at the 7-O-position, was detected for the first time in sacred lotus. More importantly, the first discovery of laricitrin derivatives supplemented sub-pathway 5, which was supported by the large data resources and chemical technologies used here.</ns0:p><ns0:p>To further validate the flavonoid biosynthetic pathway we proposed, total RNA was isolated from three lotus cultivars with different petal colors, and qRT-PCR was conducted to observe the expression of these genes (DFR, OMT, UFGT) in lotus petals of different colors (Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref>). The qRT-PCR results showed that the expression of DFR and UFGT in red petals was significantly higher than that in yellow and white petals, consistent with the higher anthocyanin content in red petal cultivars. In addition, the expression levels of OMT genes in yellow petals were significantly higher than those in red and white petals, suggesting the higher non-anthocyanin flavonoid contents in yellow petal cultivars.
The qRT-PCR results further validated the putative flavonoid biosynthetic pathway.</ns0:p></ns0:div>
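The backbone of the putative pathway described above can be summarized in a small sketch; the branch order of the hydroxylases follows the general flavonoid pathway and should be read as an assumption, with Figure 5 remaining the authoritative description.

```python
# Condensed sketch of the putative pathway described above, as an adjacency map;
# enzyme labels follow the text, and this is an illustration rather than a full model.
PATHWAY = {
    "naringenin": [("F3H", "dihydrokaempferol")],
    "dihydrokaempferol": [("F3'H", "dihydroquercetin"),
                          ("F3'5'H", "dihydromyricetin"),
                          ("FLS", "kaempferol")],          # no DFR branch: pelargonidin absent
    "dihydroquercetin": [("FLS", "quercetin"), ("DFR", "cyanidin-type anthocyanins")],
    "dihydromyricetin": [("FLS", "myricetin"), ("DFR", "delphinidin-type anthocyanins")],
}

def routes_from(node, prefix=()):
    # Depth-first enumeration of enzyme-labelled routes through the sketch
    for enzyme, product in PATHWAY.get(node, []):
        step = prefix + (f"{node} -{enzyme}-> {product}",)
        yield " ; ".join(step)
        yield from routes_from(product, step)

for route in routes_from("naringenin"):
    print(route)
```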
<ns0:div><ns0:head>Discussion</ns0:head></ns0:div>
<ns0:div><ns0:head>Pigments determine the different colors of various sacred lotus cultivars</ns0:head><ns0:p>Since the mid-19th century, pigments have been extracted from colorful flowers to study their components, and a wide variety of pigments have been discovered, such as carotenoids and flavonoids. Carotenoids are considered to be the most widely distributed pigments in nature; they are found not only in flowers, but also in fruits and storage organs of higher plants <ns0:ref type='bibr' target='#b17'>(Zhu et al., 2010)</ns0:ref>. Previous studies showed that carotenoids exist in the petals of different plant species and contribute to yellow color; in Osmanthus fragrans, for example, butter yellow and golden yellow petals contain α-carotene and β-carotene <ns0:ref type='bibr'>(Han et al., 2013)</ns0:ref>. Carotenoids may be related to the yellow petal color of N. lutea, but their physical and chemical properties differ greatly from those of flavonoids and require separate, in-depth research. It is acknowledged that flavonoids are a large class of secondary metabolites, which are also widely distributed in lotus <ns0:ref type='bibr'>(Li et al., 2014a)</ns0:ref>. Previous studies have shown that anthocyanins belong to the red series of pigments and control flower colors from pink to blue violet, while non-anthocyanin flavonoids belong to the pure yellow series, controlling colors from deep yellow to light yellow and approaching colorlessness <ns0:ref type='bibr'>(He et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b16'>Zhao et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b15'>Zhao and Tao, 2015)</ns0:ref>. Cyanidin appears in red flowers, while delphinidin leans petals toward the blue spectrum <ns0:ref type='bibr' target='#b8'>(Sun et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b13'>Zhang et al., 2011a)</ns0:ref>. In tropical water lilies, the cultivars in which delphinidin 3-galactoside was detected presented an amaranth color, whereas those containing delphinidin 3'-galactoside appeared blue <ns0:ref type='bibr' target='#b18'>(Zhu et al., 2012)</ns0:ref>. However, an overview of the relationship between color phenotype and chemical composition was still lacking in sacred lotus. In this study, 5 anthocyanins and 18 non-anthocyanin flavonoids were simultaneously detected, identified and quantified in 207 sacred lotus cultivars, with four components discovered for the first time. The composition and content of these anthocyanins and non-anthocyanin flavonoids were also investigated across the 207 lotus varieties. The results revealed that the contents of non-anthocyanin flavonoids were far higher than those of anthocyanins, and the distribution of these components differed significantly in lotus petals of different colors (Table <ns0:ref type='table'>2</ns0:ref>). Anthocyanins were mainly distributed in purple-red and red lotus cultivars, and were hardly produced in yellow and white varieties. Unlike anthocyanins, non-anthocyanin flavonoids existed universally in all the lotus cultivars and accumulated to the highest levels in yellow petals. As anthocyanins and non-anthocyanin flavonoids are known to contribute greatly to lotus colors <ns0:ref type='bibr'>(Li et al., 2008)</ns0:ref>, significance analysis was conducted, and notable differences were observed in the contents of anthocyanins and non-anthocyanin flavonoids among cultivars of differing color.
In addition, strong correlations were seen among color parameters, anthocyanin contents, and non-anthocyanin flavonoid contents (Figure <ns0:ref type='figure'>3</ns0:ref>). Moreover, MLR analysis showed that there are many factors affecting the petal color of sacred lotus. TA is considered the essential factor responsible for sacred lotus color: with an increase in TA, the a* value increased and the L* value decreased, indicating a more intense red and darker color of sacred lotus petals. In addition, Qc-3-Neo (10) was found to be the primary factor positively influencing the b* value, suggesting that the higher the Qc-3-Neo (10) content, the deeper the yellow petal color. Thus, it can be speculated that the red color of sacred lotus petals is due to anthocyanin content, while the yellow and white colors are due to non-anthocyanin flavonoid content (Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Regulatory genes contribute to flower coloration of lotus</ns0:head><ns0:p>In the present study, a model explaining the biosynthetic pathway of flavonoids in lotus was proposed to give us a better understanding of the connection between flower coloration and the modified patterns of anthocyanins and non-anthocyanin flavonoids (Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>). To date, many studies has reported that variation in regulatory genes is central to variation in pattern and intensity of pigmentation through the genetic basis of flower coloration <ns0:ref type='bibr'>(Schwinn et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b11'>Yamagishi et al., 2010)</ns0:ref>. Sun et al. identified and isolated several regulatory genes from sacred lotus, and a striking difference in MYB5 gene was detected in two sacred lotus species through introducing NnMYB5 into Arabidopsis plants, indicating MYB5 is a functional transcription activator of anthocyanin synthesis, and related to the flower color difference between red flowers and yellow flowers <ns0:ref type='bibr'>(Sun et al., 2016)</ns0:ref>. However, it still needs further effort to investigate the regulation mechanism of flavonoid biosynthesis in sacred lotus. Based on the biosynthetic pathway of flavonoids in lotus that we put forward, we further verified the expression of pathway genes in lotus petals of different colors (Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref>). The qRT-PCR results showed that the expression of DFR in red petals was significantly higher than that of yellow and white petals, which was consistent with high anthocyanin content in red petal cultivars. The expression levels of OMT genes in yellow petals were significantly higher than in red and white flowers, which was consistent with high non-anthocyanin flavonoids contents in yellow petal cultivar. Previously, we sequenced these 207 sacred lotus cultivars. Combined with the metabolome data in this study, the regulatory patterns and metabolic pathways of flavonoids are expected to be analyzed. Further, the relationship between the compositions of flavonoids and petal colors in sacred lotus will be hopefully explained at the molecular level.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In this study, we developed an analytical method to detect a wide range of anthocyanins and non-anthocyanin flavonoids simultaneously in a dramatically shortened time period in the petals of 207 sacred lotus cultivars. Among the five anthocyanins and 18 non-anthocyanin flavonoids identified, four of the latter were reported for the first time in sacred lotus petals. Furthermore, the relationships between flower color and pigment composition and content were elucidated. The results showed that Mv-3-Glu (5) is the dominant anthocyanin, while Qc-3-Gln (12) accounts for most of non-anthocyanin flavonoids in sacred lotus cultivars. Moreover, there are significant differences in the anthocyanin and non-anthocyanin flavonoid contents among different cultivars, and MLR analysis confirmed that the TA was the most essential factor for determining petal color. A higher content of Qc-3-Neo (10) and Myr-3-GlcA (6) results in yellow flowers, while an increased TA and reduced Qc-3-Sam (7) content lend petals to turn red.</ns0:p><ns0:p>These results are indispensable to investigating the relationship between the compositions of anthocyanins, non-anthocyanin flavonoids, and petal colors in sacred lotus. The findings will contribute to our understanding of flavonoid biosynthesis, which may provide a theoretical basis for developing sacred lotus petals as a natural source of anthocyanins and non-anthocyanin flavonoids. In addition, this research will lay a solid foundation for subsequent investigations into metabolic and biosynthetic pathways in sacred lotus. <ns0:ref type='bibr'>Gonnet, J.-F. (1998)</ns0:ref>. Colour effects of co-pigmentation of anthocyanins revisited-1. A colorimetric definition using the CIELAB scale. Food Chem. 63(3), <ns0:ref type='bibr'>409-415. doi: 10.1016/S0308-8146(98)</ns0:ref> <ns0:ref type='bibr'>00053-3. Gonnet, J.-F. (1999)</ns0:ref>. Colour effects of co-pigmentation of anthocyanins revisited-2.A colorimetric look at the solutions of cyanin co-pigmented byrutin using the CIELAB scale. Food Chem. 66(3), 387-394. doi: 10.1016/S0308-8146(99)00088-6. <ns0:ref type='bibr'>Guo, H. (2009)</ns0:ref>. Cultivation of lotus (Nelumbo nucifera Gaertn. ssp. nucifera) and its utilization in China. Genet Resour Crop Evol. 56(3), 323-330. doi: 10.1007/s10722-008-9366-2. Han, Y., Wang, X., Chen, W., Dong, M., Yuan, W., <ns0:ref type='bibr' target='#b4'>Liu, X., et al. (2013)</ns0:ref>. Differential expression of carotenoid-related genes determines diversified carotenoid coloration in flower petal of Osmanthus fragrans. Tree Genetics & Genomes. 10(2), 329-338. doi: 10.1007/s11295-013-0687-8. He, Q., Shen, Y., Wang, M., Huang, M., <ns0:ref type='bibr'>Yang, R., Zhu, S., et al. (2011)</ns0:ref>. Natural variation in petal color in Lycoris longituba revealed by anthocyanin components. PLoS One. 6(8), e22098. doi: 10.1371/journal.pone.0022098. Hyun, S.K., Yu, J.J., Chung, H.Y., Jung, H.A., and Choi, J.S. ( <ns0:ref type='formula'>2006</ns0:ref>). Isorhamnetin glycosides with free radical and ONOO-scavenging activities from the stamens of Nelumbo nucifera. Arch Pharm Res. 29(4), 287-292. doi: 10.1007/bf02968572. Jeong, S.T., Goto-Yamamoto, N., <ns0:ref type='bibr'>Hashizume, K., and Esaka, M. (2006)</ns0:ref>. Expression of the flavonoid 3' -hydroxylase and flavonoid 3' ,5' -hydroxylase genes and flavonoid composition in grape (Vitis vinifera). Plant Sci. 170(1), 61-69. Jin, Z.M., He, J.J., Bi, H.Q., Cui, X.Y., and Duan, C.Q. 
<ns0:ref type='bibr'>(2009)</ns0:ref>. Phenolic compound profiles in berry skins from nine red wine grape cultivars in northwest China. <ns0:ref type='bibr'>Molecules. 14(12), 4922-4935. doi: 10.3390/molecules14124922. Jung, H.A., Jung, Y.J., Yoon, N.Y., Jeong, D.M., Bae, H.J., Kim, D.W., et al. (2008)</ns0:ref>. Inhibitory effects of Nelumbo nucifera leaves on rat lens aldose reductase, advanced glycation endproducts formation, and oxidative stress. Food Chem <ns0:ref type='bibr'>Toxicol. 46(12), 3818-3826. doi: 10.1016</ns0:ref><ns0:ref type='bibr'>/j.fct.2008</ns0:ref><ns0:ref type='bibr'>.10.004. Jung, H.A., Kim, J.E., Chung, H.Y., and Choi, J.S. (2003)</ns0:ref>. Antioxidant principles of Nelumbo nucifera stamens. Arch Pharm Res. 26(4), <ns0:ref type='bibr'>279-285. doi: 10.1007/BF02976956. Juranić, Z., and</ns0:ref><ns0:ref type='bibr'>Žižak, Ž. (2005)</ns0:ref>. Biological activities of berries: from antioxidant capacity to anti-cancer effects. <ns0:ref type='bibr'>BioFactors. 23(4), 207-211. doi: 10.1002/biof.5520230405. Li, C., Wang, L., Shu, Q., Xu, Y., and</ns0:ref><ns0:ref type='bibr'>Zhang, J. (2008)</ns0:ref>. Pigments composition of petals and floral color change during the blooming period in Rhododendron mucronulatum. Acta Horticulture <ns0:ref type='bibr'>Sinica. 35, 1023</ns0:ref><ns0:ref type='bibr'>-1030</ns0:ref><ns0:ref type='bibr'>. doi: 10.3724/SP.J.1005</ns0:ref><ns0:ref type='bibr' target='#b9'>.2008</ns0:ref><ns0:ref type='bibr'>.01083. Li, S., Wu, Q., Yuan, R., Shao, S., Zhang, H., and Wang, L. (2014a)</ns0:ref>. Recent advances in metabolic products of flavonoids in Nelumbo. <ns0:ref type='bibr'>Chin. Bull. Bot. 49(6), 738. doi: 10.3724/SP.J.1259</ns0:ref><ns0:ref type='bibr'>.2014</ns0:ref><ns0:ref type='bibr'>.00738. Li, S.S., Wu, J., Chen, L.G., Du, H., Xu, Y.J., Wang, L.J., et al. (2014b)</ns0:ref>. Biogenesis of Cglycosyl flavones and profiling of flavonoid glycosides in lotus (Nelumbo nucifera). PLoS One. 9(10), e108860. doi: 10.1371/journal.pone.0108860. Li, Y., <ns0:ref type='bibr'>Yang, S., Gao, B., and Fu, X. (2011)</ns0:ref>. Co-pigmentation effect and color stability of flavonoids on red dayberry (Myrica rubra Sieb. et Zucc) Anthocyanins. Food Science. 32(13), 37-39. doi: 10.1631/jzus.B1000185. Lim, S.S., Jung, Y.J., Hyun, S.K., Lee, Y.S., and Choi, J.S. ( <ns0:ref type='formula'>2006</ns0:ref>). Rat lens aldose reductase inhibitory constituents of Nelumbo nucifera stamens. Phytother Res. 20(10), <ns0:ref type='bibr'>825-830. doi: 10.1002</ns0:ref><ns0:ref type='bibr'>/ptr.1847</ns0:ref><ns0:ref type='bibr'>. Lin, L.-Z., and Harnly, J.M. (2007)</ns0:ref>. A screening method for the identification of glycosylated flavonoids and other phenolic compounds using a standard analytical approach for all plant materials. J Agric Food Chem. 55(4), 1084-1096. doi: 10.1021/jf062431s. Mukherjee, P.K., Mukherjee, D., Maji, A.K., Rai, S., and Heinrich, M. ( <ns0:ref type='formula'>2009</ns0:ref>). The sacred lotus (Nelumbo nucifera)-phytochemical and therapeutic profile. J Pharm Pharmacol. 61(4), 407-422. doi: 10.1211/jpp.61.04.0001. Nováková, L., Matysová, L., and Solich, P., <ns0:ref type='bibr'>(2006)</ns0:ref>. Advantages of application of UPLC in pharmaceutical analysis. Talanta. 68(3), 908-918. doi: 10.1016/j.talanta.2005.06.035. Pfeiffer, P., Hegedűs, A. (2011). Review of the molecular genetics of flavonoid biosynthesis in fruits. Acta <ns0:ref type='bibr'>Aliment. 40, 150-163. 
doi: 10.1556/AAlim.40.2011.Suppl.15.</ns0:ref> Schwinn, K., Venail, J., Shang, Y., Mackay, S., Alm, V., Butelli, E., et al. (2006). A small family of MYB-regulatory genes controls floral pigmentation intensity and patterning in the genus Antirrhinum. Plant Cell. 18(4), 831-851. doi: 10.2307/20076645. Figure 1 caption: (A) HPLC chromatograms of anthocyanins at 520 nm (peaks 1-5) and of non-anthocyanin flavonoids at 350 nm (peaks 6-23). Peak numbers in this figure correspond to compound numbers in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. These data were obtained from the mixture of three cultivars: 'Qiaoshou-I' (71), 'Jinlingningcui' (151), and 'Silian13-I' (178).</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>(R² = 0.408, P = 3.008E-24); a* = 15.783 + 0.052 TA - 0.575 Qc-3-Sam (7) + 0.233 Kae-3-Rut (17) - 0.003 TF (R² = 0.630, P = 1.079E-42); b* = 0.719 + 0.219 Qc-3-Neo (10) + 0.033 Myr-3-GlcA (6) + 0.121 Syr-3-GlcA (23) - 0.196 Pn-3-Glc (4) - 0.325 Syr-3-Hex (20) (R² = 0.570, P = 1.006E-35)</ns0:figDesc></ns0:figure>
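The a* and b* models quoted above are ordinary multiple linear regressions of CIELAB color coordinates on pigment contents. As a small illustration of how such fitted equations are applied (the L* model is truncated above and therefore omitted), the sketch below evaluates the published a* and b* coefficients for one sample; the pigment contents plugged in are hypothetical placeholders.

# Applying the published MLR models for petal color coordinates a* and b*.
# Coefficients follow the equations quoted above; the pigment contents in
# `sample` are hypothetical values used only to demonstrate the calculation.

def predict_a_star(TA, Qc_3_Sam, Kae_3_Rut, TF):
    return 15.783 + 0.052 * TA - 0.575 * Qc_3_Sam + 0.233 * Kae_3_Rut - 0.003 * TF

def predict_b_star(Qc_3_Neo, Myr_3_GlcA, Syr_3_GlcA, Pn_3_Glc, Syr_3_Hex):
    return (0.719 + 0.219 * Qc_3_Neo + 0.033 * Myr_3_GlcA + 0.121 * Syr_3_GlcA
            - 0.196 * Pn_3_Glc - 0.325 * Syr_3_Hex)

sample = dict(TA=120.0, Qc_3_Sam=8.0, Kae_3_Rut=15.0, TF=900.0,
              Qc_3_Neo=30.0, Myr_3_GlcA=40.0, Syr_3_GlcA=5.0,
              Pn_3_Glc=2.0, Syr_3_Hex=1.0)

a_star = predict_a_star(sample["TA"], sample["Qc_3_Sam"], sample["Kae_3_Rut"], sample["TF"])
b_star = predict_b_star(sample["Qc_3_Neo"], sample["Myr_3_GlcA"], sample["Syr_3_GlcA"],
                        sample["Pn_3_Glc"], sample["Syr_3_Hex"])
print(f"predicted a* = {a_star:.1f}, b* = {b_star:.1f}")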
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Sharma, B.R., Gautam, L.N., Adhikari, D., and Karki, R. (2017). A comprehensive review on chemical profiling of Nelumbo nucifera: potential for drug development. Phytother Res. 31(1), 3-26. doi: 10.1002/ptr.5732. Stafford, H.A. (1998). Teosinte to maize - some aspects of missing biochemical and physiological data concerning regulation of flavonoid pathways. Phytochemistry. 49(2), 285-293. doi: 10.1016/S0031-9422(98)00175-7. Sun, S.S., Gugger, P.F., Wang, Q.F., and Chen, J.M. (2016). Identification of a R2R3-MYB gene regulating anthocyanin biosynthesis and relationships between its variation and flower color difference in lotus (Nelumbo Adans.). PeerJ. 4, e2369.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>a Retention time on a C18 column. b Compounds identified by standards. c Compounds identified for the first time in sacred lotus. ... of different pigment types in different colors of lotus (each value is the mean ± standard deviation).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>). (B-E) Structures of four flavonoids found in lotus petals. MS/MS spectra (in NI mode) of quercetin 3-O-pentose-glucuronide (B, 8), quercetin 7-O-glucoside (C, 9), laricitrin 3-O-hexose (D, 13), and laricitrin 3-O-glucuronide (E, 14), produced by each precursor.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4 A</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,70.87,440.47,672.95' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,262.12,525.00,275.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='29,42.52,306.37,525.00,170.25' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 . Identification of flavonoids in petals of lotus.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Pe ak No .</ns0:cell><ns0:cell>Rt(mi n) a</ns0:cell><ns0:cell>λmax (nm)</ns0:cell><ns0:cell>Parent ion(m/z)(me asured value)</ns0:cell><ns0:cell>Molecular fomular</ns0:cell><ns0:cell>Parent ion(m/z)( calculate d value)</ns0:cell><ns0:cell>Error (ppm)</ns0:cell><ns0:cell>Fragmentation (Relative abundance %) profile(m/z)</ns0:cell><ns0:cell>Identification</ns0:cell><ns0:cell>References</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell cols='2'>4.765 523.9,</ns0:cell><ns0:cell>463.0904[M</ns0:cell><ns0:cell cols='3'>C 21 H 21 O 12 + 463.0882 -4.75</ns0:cell><ns0:cell>301.0376(100),300.035</ns0:cell><ns0:cell>delphinidin 3-O-</ns0:cell><ns0:cell>(Yang et al.,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>272.2</ns0:cell><ns0:cell>-2H] -</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>8(95.28)</ns0:cell><ns0:cell>glucoside(Dp-3-Glu)</ns0:cell><ns0:cell>2009)</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell cols='2'>5.959 519.0,</ns0:cell><ns0:cell>447.0951[M</ns0:cell><ns0:cell cols='3'>C 21 H 21 O 11 + 447.0933 -4.03</ns0:cell><ns0:cell>285.0448(100),284.037</ns0:cell><ns0:cell>cyanidin 3-O-</ns0:cell><ns0:cell>(Yang et al.,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>279.3</ns0:cell><ns0:cell>-2H] -</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>2(57.11)</ns0:cell><ns0:cell>glucoside(Cy-3-Glu)</ns0:cell><ns0:cell>2009)</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell cols='2'>6.770 523.9,</ns0:cell><ns0:cell>477.1046[M</ns0:cell><ns0:cell cols='3'>C 22 H 23 O 12 + 477.1038 -1.68</ns0:cell><ns0:cell>315.0547(100),314.048</ns0:cell><ns0:cell>petunidin 3-O-</ns0:cell><ns0:cell>(Yang et al.,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>279.3</ns0:cell><ns0:cell>-2H] -</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>5(87.66)</ns0:cell><ns0:cell>glucoside(Pt-3-Glu) b</ns0:cell><ns0:cell>2009)</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell cols='2'>8.418 517.8,</ns0:cell><ns0:cell>461.1099[M</ns0:cell><ns0:cell cols='3'>C 22 H 23 O 11 + 461.1089 -2.17</ns0:cell><ns0:cell>299.0597(100),298.051</ns0:cell><ns0:cell>peonidin 3-O-</ns0:cell><ns0:cell>(Yang et al.,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>279.3</ns0:cell><ns0:cell>-2H] -</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>9(55.27)</ns0:cell><ns0:cell>glucoside(Pn-3-Glu)</ns0:cell><ns0:cell>2009)</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell cols='2'>9.271 527.6,</ns0:cell><ns0:cell>491.1208[M</ns0:cell><ns0:cell cols='3'>C 23 H 25 O 12 + 491.1195 -2.65</ns0:cell><ns0:cell>329.0711(100),328.064</ns0:cell><ns0:cell>malvidin 3-O-</ns0:cell><ns0:cell>(Yang et al.,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>277.0</ns0:cell><ns0:cell>-2H] -</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>4(69.32)</ns0:cell><ns0:cell>glucoside(Mv-3-Glu)</ns0:cell><ns0:cell>2009)</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell>11.59</ns0:cell><ns0:cell>355.4,</ns0:cell><ns0:cell>493.0640[M</ns0:cell><ns0:cell cols='3'>C 21 H 18 O 14 493.0624 -3.25</ns0:cell><ns0:cell>317.0418(100),318.045</ns0:cell><ns0:cell>myricetin 3-O-</ns0:cell><ns0:cell>(Deng et al.,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>1</ns0:cell><ns0:cell>264.7</ns0:cell><ns0:cell>-H] 
-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>1(24.47)</ns0:cell><ns0:cell>glucuronide(Myr-3-Gln)</ns0:cell><ns0:cell>2013)</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell>12.26</ns0:cell><ns0:cell>356.6,</ns0:cell><ns0:cell>595.1309[M</ns0:cell><ns0:cell cols='3'>C 26 H 28 O 16 595.1305 -0.67</ns0:cell><ns0:cell>300.0380(100),301.043</ns0:cell><ns0:cell>quercetin 3-O-</ns0:cell><ns0:cell>(Deng et al.,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>8</ns0:cell><ns0:cell>252.0</ns0:cell><ns0:cell>-H] -</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>3(21.23)</ns0:cell><ns0:cell>sambubioside(Qc-3-</ns0:cell><ns0:cell>2009)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Sam/Qc-3-Xyl-Glu)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>12.60</ns0:cell><ns0:cell>356.6,</ns0:cell><ns0:cell>623.1475[M</ns0:cell><ns0:cell cols='4'>C 27 H 27 O 17 623.1254 -11.72 301.0467(100),302.050</ns0:cell><ns0:cell>quercetin 3-O-pentose-</ns0:cell><ns0:cell>(Ablajan et al.,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>6</ns0:cell><ns0:cell>253.2</ns0:cell><ns0:cell>-H] -</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>2(13.95),446.7323(2.46</ns0:cell><ns0:cell>glucuronide(Qc-3-Pen-</ns0:cell><ns0:cell>2006)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>)</ns0:cell><ns0:cell>Gln) c</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>14.34</ns0:cell><ns0:cell>319.9,</ns0:cell><ns0:cell>463.0910[M</ns0:cell><ns0:cell cols='3'>C 21 H 20 O 12 463.0882 -6.05</ns0:cell><ns0:cell>301.0362(100),300.025</ns0:cell><ns0:cell>quercetin 7-O-</ns0:cell><ns0:cell>(Ablajan et al.,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>9</ns0:cell><ns0:cell>252.0</ns0:cell><ns0:cell>-H] -</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>1(20.05)</ns0:cell><ns0:cell>glucoside(Qc-7-Glu) c</ns0:cell><ns0:cell>2006)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>10 14.68</ns0:cell><ns0:cell>352.9,</ns0:cell><ns0:cell>609.1463[M</ns0:cell><ns0:cell cols='3'>C 27 H 30 O 16 609.1461 -0.33</ns0:cell><ns0:cell>609.1461(100),300.027</ns0:cell><ns0:cell>quercetin 3-O-</ns0:cell><ns0:cell>(Li et al.,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>7</ns0:cell><ns0:cell>268.6</ns0:cell><ns0:cell>-H] -</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>6(50.58),301.0325(36.6</ns0:cell><ns0:cell>neohesperidoside(Qc-3-</ns0:cell><ns0:cell>2014b)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>5)</ns0:cell><ns0:cell>Neo)</ns0:cell><ns0:cell /></ns0:row></ns0:table><ns0:note>1 (Chen et al., PeerJ An. Chem. reviewing PDF | (ACHEM-2022:02:71194:1:2:NEW 5 May 2022)Manuscript to be reviewed</ns0:note></ns0:figure>
<ns0:note place='foot' n='11'>Table 1, peak 11: Rt 15.39 min; λmax 354.1 nm; parent ion 463.0900 [M-H]-; C21H20O12; calculated 463.0882; error -3.89 ppm; fragments 300.0274(100), 301.034...; identification: quercetin 3-O-... (truncated) (Suzuki et al.).</ns0:note>
</ns0:body>
" | "Dear editor and reviewers,
We are very thankful for your comments and suggestions on the manuscript “Identification and quantification of flavonoids in 207 cultivated lotus (Nelumbo nucifera) and their contribution to different colors”. We believe that your commentary has helped us significantly improve the manuscript’s quality and hope that the revisions are met with approval.
The English language of the manuscript has been revised by a professional editing service (see certificate).
Reviewer 1
1. The authors can improve some English language. One example is the phrase in lines 15 to 17 that would be clearer if the authors used: This study performed a systematic qualitative and quantitative determination of five anthocyanins and 18 non-anthocyanin flavonoids from the petals of 207 lotus cultivars.
Answer: We have revised the phrase in lines 15-18. In addition, we have improved the English language throughout the manuscript.
2. Please clarify if the full name of the species is Nelumbo nucifera Gaertn.
Answer: Lotus, known as Nelumbo, consists of two species based on the morphological characters, Nelumbo nucifera Gaertn. and Nelumbo lutea (Willd.) Per. Thus, Nelumbo nucifera Gaertn. is one of the lotus species.
3. Explain the differences between your work presented and discussed in this manuscript and previous works cited between lines 39 to 50. In particular, the differences with the Deng et al. (2013) manuscript.
Answer: Compared to the Deng et al. (2013) manuscript, our manuscript included a larger sample size (207 cultivars versus 108 cultivars) and detected more compounds, with four compounds discovered for the first time in sacred lotus. In addition, we not only studied the content and composition of these pigments among different cultivars but also evaluated the relationships between these pigments and petal colors among cultivars.
Chen et al. (2013) analyzed the contents and compositions among nine different tissues of twelve lotus cultivars, and proposed a putative flavonoid biosynthetic pathway in sacred lotus. Our manuscript focused on the relationship between the flavonoids and the flower colors among 207 lotus cultivars. And based on the newly found compounds in lotus flowers, the flavonoid biosynthetic pathway was further enriched in our manuscript.
Deng et al. (2015) conducted a comparative proteomics analysis on the flower petals between two cultivars with red and white flowers, finding that the different methylation intensities on the promoter sequences of the ANS gene may contribute to the flower color difference between red and white lotus cultivars. Sun et al. (2016) analyzed the content of anthocyanins and the expression levels of four key structural genes in two species with red and yellow flowers, discovering that NnMYB5 is a transcription activator of anthocyanin synthesis. Our manuscript focused on the relationship between the flavonoids and the flower colors among 207 lotus cultivars, and the results will lay a solid foundation for subsequent investigations into the flower coloration mechanism in sacred lotus.
4. The authors should probably join the Results and Discussion part.
Answer: We have supplemented the Results and Discussion part.
5. It would help the readers if table 1 included a column with references, and more ion fragments should be included. The accuracy used in the m/z values is probably less important than the indication of more ion fragments. For example, peak 8 identification is discussed in lines 183 to 187. It is indicated that the loss of m/z 146 and m/z 176 implies the linkage of the sugar unit to C-3. The reference supporting this information is Ablajan et al., 2006? The presence of m/z 447 in table 1 would help to understand.
Answer: We have improved table 1 as required.
6. The authors did not detect the common flavonoid nucleus fragmentation?
Answer: We detected the flavonoids that gave a relatively strong ion response, and we paid less attention to the flavonoid nucleus fragments than to the flavonoids themselves. Thus, we did not present the common flavonoid nucleus fragmentation in our manuscript.
7. Table 2 is confusing. The abbreviations used for the compound’s names are indicated in Table 1, so there is no need to repeat them, and the footnote can be simplified. The footnote b should be for all results, not just for the first one. The table caption should probably indicate that each value is the mean ± standard deviation. Furthermore, it would be better if the notes and the letters used to identify differences differed. It will be less confusing for readers. Finally, are the differences within the column or the row?
Answer: We have improved table 2 as required. And the differences are within the row.
Reviewer 2
Your introduction needs more detail. I suggest that you improve the description at lines 35- 38 using the lotus as an object of the studies. A description/explanation regarding the main cultivars (can be the ones that you described in your study) can be included in lines 30-33. Also, highlight the hypothesis to provide more justification for the research.
Answer: We have improved the description at lines 42-44, and supplemented the description regarding the main cultivars in lines 36-41. In addition, we highlighted the hypothesis in lines 60-61.
Materials & Methods
Lines 94-97: Please, include at the end of these lines which section the authors will describe the total content of non-anthocyanin flavonoids (TF) and the total content of anthocyanins (TA).
Answer: TF and TA will be described in the section “Anthocyanin and non-anthocyanin flavonoid profiles in sacred lotus petals”. And we have included at the end of these lines in the manuscript from lines 106-107.
Line 148: Please, edit the information regarding the “quick RNA Isolation Kit po(, Beijing, China)”. Also, line 150.
Answer: We have revised Line 148, and Line 150 in the manuscript.
Results and Discussion sections
For me, these sections are the most critical point of the study. Therefore, I suggest a rearrangement and an improvement in these sections. For instance, some descriptions could be included in the discussion section and are in the results section (regarding table 2, for example). Moreover, the results of Figure 6 were not described in the results section. Besides that, the discussion is shown in the section “Putative flavonoid biosynthesis pathway of lotus” could be improved. Also, a test of significance can be included in Figure 6.
Answer: We have supplemented the Results and Discussion sections as required. The Figure 6 has been revised, and a test of significance was included in Figure 6.
Lines 385-387: Why could it have happened?
Answer: Under certain conditions, the higher the expression of DFR, the higher the content of anthocyanins. We have revised the sentence in lines 435-437 to read: “The qRT-PCR results showed that the expression of DFR in red petals was higher than that of yellow and white petals, which was consistent with high anthocyanin content in red petal cultivars.”
Line 405: Please, edit this sentence.
Answer: We have reedited this sentence in lines 440-443.
Table 2. Please, standardize the significant letters.
Answer: We have revised table 2.
Figure 1: Why these data were obtained from the mixture of three cultivars: “Qiaoshou-I” (71), “Jinlingningcui” (151), and “Silian13-I” (178)?
Answer: “Qiaoshou-I” (71) represents purple-red cultivars, “Jinlingningcui” (151) represents yellow cultivars, and “Silian13-I” (178) represents white cultivars. Mixed samples of different varieties of lotus are selected to obtain more types of compounds, as there are differences in the types and contents of compounds in different varieties of lotus. Besides, the results of mixed samples are more representative.
Figure 6: Please, correct the spelling of the title on this figure.
Answer: We have corrected the spelling of the title in “Figure Captions”.
I appreciate you taking time to review this revision and look forward to hearing from you soon.
Yours sincerely,
Professor Dr. Sha Chen
Institute of Chinese Materia Media,
China Academy of Chinese Medical Sciences
Attachment 1
" | Here is a paper. Please give your review comments after reading it. |
678 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>In this work, we demonstrate a simple spectrophotometry approach to more accurately quantify and measure paclitaxel (PTX) concentrations. PTX cannot be precisely quantified when mixed with an aqueous solvent, and carries the risk of undergoing crystal precipitation. It is likely that PTX undergoes numerous interactions with aqueous solvents and enters a supersaturated state due to its low solubility. Therefore, a quantitative method is required to measure PTX for quality control before clinical use. Although several high-performance liquid chromatography (HPLC) methods have been reported to date, not all medical facilities have a clinical laboratory with such HPLC devices and analysis techniques. Spectroscopy is a simple and convenient method; however, calibration standards are prepared with an organic solvent, such as methanol and acetonitrile, which, when mixed with PTX, can cause solvent effects that lead to inaccurate results. We generated a calibration curve of PTX at various concentrations (40%, 50%, 60%, 70%, 80%, 90%, and 100%) of methanol and evaluated the relative error from HPLC results. The optimum methanol concentration for quantification of PTX was 65.8%, which corresponded to the minimum relative error. The detection limit and quantification limit were 0.030 μg/mL and 0.092 μg/mL, respectively. It was possible to predict the PTX concentration even when polyoxyethylene castor oil and anhydrous ethanol were added, as in the commercially available PTX formulation, by diluting 32-fold with saline after mixing. Our findings show that PTX can be more accurately quantified using a calibration curve when prepared in a methanol/water mixture without the need for special devices or techniques.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>To properly use an anticancer drug in clinical settings, it is necessary to periodically verify its stability after mixing <ns0:ref type='bibr' target='#b1'>(Badea et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b10'>Kawashima et al., 2015)</ns0:ref>. The hydrophobic anticancer drug paclitaxel (PTX), isolated from the bark of Taxus brevifolia, is one of the most important anticancer drugs and is effective against a variety of human cancers, including breast and ovarian cancer <ns0:ref type='bibr' target='#b8'>(Huizing et al., 1995;</ns0:ref><ns0:ref type='bibr' target='#b20'>Wani et al., 1971)</ns0:ref>. It is extremely lipophilic (log P=3.5) and practically insoluble in water (0.3±0.02 μg/mL) <ns0:ref type='bibr' target='#b0'>(Ahmad et al., 2013)</ns0:ref>, and is therefore commercially available as a suspension in polyoxyethylene castor oil and anhydrous ethanol. Inconveniently, PTX cannot be precisely quantified when mixed with an aqueous solvent such as saline or glucose injection before use, and carries the risk of undergoing crystal precipitation <ns0:ref type='bibr' target='#b10'>(Kawashima et al., 2015)</ns0:ref>. It is likely that PTX undergoes numerous interactions with aqueous solvents and enters a supersaturated state due to its low solubility, although the details are unclear <ns0:ref type='bibr' target='#b5'>(Finney et al., 1980;</ns0:ref><ns0:ref type='bibr' target='#b15'>Ohno, Abe, & Tsuchida, 1978)</ns0:ref>. Therefore, a quantitative method is required to measure PTX for quality control.</ns0:p><ns0:p>Although several reversed-phase high performance liquid chromatography (HPLC) methods for quantification have been reported to date <ns0:ref type='bibr' target='#b2'>(Bonde, Bonde, & Prabhakar, 2019;</ns0:ref><ns0:ref type='bibr' target='#b4'>Choudhury et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b12'>Khan et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b13'>Kim et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b19'>Wang et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b21'>Xavier-Junior et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b22'>Yonemoto et al., 2007)</ns0:ref>, not all medical facilities have a clinical laboratory with such HPLC devices and analysis techniques. In addition, PTX must be used immediately after mixing, and therefore requires a simple, fast, and precise method that can be conducted in the hospital or clinic rather than being outsourced. Spectroscopy is a simple and convenient method <ns0:ref type='bibr' target='#b6'>(Heydari et al., 2016)</ns0:ref>. However, there are important reasons why HPLC is recommended for measuring PTX. Although spectroscopic drug release evaluation can be successfully undertaken if the test specimen can be prepared with a solvent using the same conditions as those used to generate the standard calibration curve <ns0:ref type='bibr' target='#b11'>(Kesarwani et al., 2011)</ns0:ref>, our study focuses on the clinical quality control of non-single component PTX formulations. Namely, while the PTX formulation is diluted with an aqueous solvent in clinical use, as described above, calibration standards for spectroscopy must be accurately prepared with an organic solvent such as methanol and acetonitrile, which, if mixed with PTX, would cause solvent effects and lead to inaccurate results. The HPLC method does not have this problem because the solvent is displaced by the HPLC mobile phase. 
Conventionally, the test specimen is resuspended in an appropriate solvent after lyophilization and diluted with methanol, dimethyl sulfoxide (DMSO), or N,N-dimethylformamide (DMF) to prevent interactions with the solvent from affecting the analysis <ns0:ref type='bibr' target='#b14'>(Ni et al., 2001)</ns0:ref>. However, the lyophilization process causes both freezing and drying stresses, which can cause deactivation and degradation of the drug <ns0:ref type='bibr' target='#b18'>(Wang, 2000)</ns0:ref>. Furthermore, direct dilution of a test specimen with an organic solvent may also cause unknown transformation of the sample.</ns0:p><ns0:p>Here, we demonstrate that PTX mixed with aqueous solvents can be quantified spectrophotometrically using a calibration curve when prepared in a methanol/water mixture without the need for special devices or techniques (Fig. <ns0:ref type='figure'>1</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Reagents</ns0:head><ns0:p>PTX powder and injectable formulation were purchased from Tokyo Chemical Industry (Tokyo, Japan) and Bristol-Myers Squibb Company (Tokyo, Japan), respectively. Methanol, acetonitrile, anhydrous ethanol, and polyoxyethylene(10) castor oil were all purchased from Fujifilm Wako Pure Chemical (Osaka, Japan). Normal saline was purchased from Otsuka Pharmaceutical Factory (Tokushima, Japan).</ns0:p></ns0:div>
<ns0:div><ns0:head>Calibration curve of PTX prepared using various concentrations of methanol</ns0:head><ns0:p>Each concentration of PTX (0.313, 0.625, 1.25, and 2.50 μg/mL) was prepared with 40%, 50%, 60%, 70%, 80%, 90%, and 100% methanol. Absorbance was measured using an ultraviolet (UV) and visible spectrophotometer (UV-1800, Shimadzu, Kyoto, Japan) at a wavelength of 230 nm after blank correction with each solvent. Each absorbance reading was plotted against the corresponding known PTX concentration to generate a calibration curve. This experiment was replicated three times at room temperature.</ns0:p></ns0:div>
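Each calibration series above reduces to an ordinary least-squares fit of absorbance against PTX concentration. The following is a minimal Python sketch of that fit, not the authors' own script; the absorbance readings used are placeholders, since only the fitted slopes and intercepts are tabulated in the paper.

import numpy as np

# Standard PTX concentrations common to every calibration series (ug/mL).
conc = np.array([0.313, 0.625, 1.25, 2.50])

# Hypothetical absorbance readings at 230 nm for one methanol level
# (individual readings are not listed in the manuscript).
absorbance = np.array([0.032, 0.066, 0.133, 0.270])

# Least-squares line: absorbance = slope * concentration + intercept
slope, intercept = np.polyfit(conc, absorbance, 1)
r2 = np.corrcoef(conc, absorbance)[0, 1] ** 2
print(f"slope = {slope:.4f}, intercept = {intercept:.4f}, r^2 = {r2:.4f}")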
<ns0:div><ns0:head>Preparation of saturated PTX in saline</ns0:head><ns0:p>Approximately 2.5 mg of PTX was added to 10 mL of saline, vortexed for 30 s and mixed by rotating for 24 h at 30 rpm at 4°C. The supernatant was collected after centrifuging at 9,000 × g for 10 min at 4°C.</ns0:p></ns0:div>
<ns0:div><ns0:head>Quantification by HPLC</ns0:head><ns0:p>A series of standard solutions of different known concentrations of PTX (0.313, 0.625, 1.25, and 2.50 μg/mL) in methanol and the PTX-saturated saline solution, prepared above, were injected (20 μL each) into an octadecylsilyl silica gel column (5 μm, φ4.6 mm × h250 mm, Osaka Soda, Osaka, Japan) of an Elite LaChrom HPLC system (Hitachi High-Technologies, Tokyo, Japan) with 50% acetonitrile aqueous solution as the mobile phase and a flow rate of 1.2 mL/min <ns0:ref type='bibr' target='#b22'>(Yonemoto et al., 2007)</ns0:ref>. The eluent was monitored at 230 nm. The peak area of PTX in each standard solution was measured and plotted against the PTX concentration to generate a calibration curve. The concentration of PTX in each test specimen (the PTX-saturated saline solution) was subsequently determined using the same conditions as those used to generate the standard calibration curve.</ns0:p></ns0:div>
<ns0:div><ns0:head>Quantification by spectrophotometry</ns0:head><ns0:p>The concentration of PTX-saturated saline was also measured spectrophotometrically at a wavelength of 230 nm using a calibration curve prepared based on the methanol dilution series (40%, 50%, 60%, 70%, and 80%). This experiment was replicated three times at room temperature. The quantitative concentrations were compared to those obtained using HPLC. Accuracy (R) indicates the relative error, which was defined as the deviation from the HPLC results, and was calculated as follows:</ns0:p><ns0:formula xml:id='formula_0'>R = (Cs -Cr)/Cr (1)</ns0:formula><ns0:p>where Cs is the quantitative concentration measured spectrophotometrically and Cr is the concentration measured using HPLC.</ns0:p></ns0:div>
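Equation (1) is applied to the concentration back-calculated from each calibration curve. As a small sketch of that step (again, not the study's own script), the code below uses the slopes and intercepts listed in Table 1 and the HPLC reference value reported in the Results (0.731 μg/mL); the absorbance of the saturated solution is a single illustrative value taken from the reported 0.058-0.074 range.

# Relative error R = (Cs - Cr) / Cr between spectrophotometric and HPLC results.
# Slopes/intercepts are those of Table 1; the absorbance below is one
# illustrative value from the reported 0.058-0.074 range.

curves = {40: (0.1405, 0.0010), 50: (0.1079, -0.0010), 60: (0.1080, -0.0014),
          70: (0.0851, 0.0079), 80: (0.0681, 0.0717)}   # %MeOH: (slope, intercept)

absorbance = 0.066   # PTX-saturated saline at 230 nm (illustrative)
c_hplc = 0.731       # ug/mL, HPLC reference value

for meoh, (slope, intercept) in curves.items():
    c_spec = (absorbance - intercept) / slope   # back-calculated concentration
    r = (c_spec - c_hplc) / c_hplc              # Equation (1)
    print(f"{meoh}% MeOH: Cs = {c_spec:.3f} ug/mL, R = {r:+.3f}")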
<ns0:div><ns0:head>Practical simulation</ns0:head><ns0:p>For quality control against the commercially available PTX formulation, the method was examined in the presence of polyoxyethylene castor oil and anhydrous ethanol. First, 30 mg of PTX was added to 2.5 mL of polyoxyethylene castor oil and 2.5 mL of anhydrous ethanol. The mixture was diluted 100-fold with saline, as in clinical use, and then further diluted from 2-fold to 1,024-fold (final: 200-fold to 102,400-fold dilution) with saline to prepare a 2-fold dilution series. The absorbance of each diluted solution was measured spectrophotometrically at a wavelength of 230 nm. The PTX concentration in each diluted solution was determined using the HPLC method described above. Reference solutions were prepared in the same manner with the exclusion of PTX, and the absorbance was measured spectrophotometrically at a wavelength of 230 nm.</ns0:p></ns0:div>
<ns0:div><ns0:head>Verification of method accuracy</ns0:head><ns0:p>Injectable PTX formulation was quantitatively analyzed using the method established in this study, the accuracy of which was verified. The PTX formulation was diluted 100-fold with saline, as in clinical use, and then further diluted 32-fold (final: 3,200-fold dilution) with saline. This further 32-fold dilution is unique to this study and showed reasonable values in spectroscopy. The PTX concentration was measured spectrophotometrically in the same manner as that described above and compared to the results obtained using HPLC. The experiment was replicated five times and the unpaired t-test was performed using Excel 2010 (Microsoft, Redmond, WA, USA). Differences between the spectrometry and HPLC values were considered statistically significant when the p-value was less than 0.05.</ns0:p></ns0:div>
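The accuracy check compares n = 5 spectrophotometric determinations with n = 5 HPLC determinations by an unpaired t-test (performed in Excel in this study). For readers without Excel, an equivalent check could be written as in the sketch below; the replicate values shown are placeholders, not the study's data.

from scipy import stats

# Hypothetical replicate concentrations (ug/mL); the study used n = 5 per method.
spec = [1.10, 1.45, 1.02, 1.60, 1.23]   # spectrophotometry
hplc = [1.18, 1.25, 1.31, 1.22, 1.27]   # HPLC

res = stats.ttest_ind(spec, hplc)        # unpaired (two-sample) t-test
print(f"t = {res.statistic:.3f}, p = {res.pvalue:.3f}, "
      f"significant at 0.05: {res.pvalue < 0.05}")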
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Calibration curve</ns0:head><ns0:p>We found that the parameters of the PTX calibration curve, including slope and intercept, varied depending on the solvent used (Fig. <ns0:ref type='figure' target='#fig_0'>2</ns0:ref> and Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>). The calibration curves showed high absorbance values at low concentrations of less than 1 μg/mL when the methanol concentration was 80% or higher. While the values obtained at 50% and 60% methanol were comparable, a marked change in the calibration curve was observed at 70-80% methanol. The findings suggest that quantification results for PTX, particularly at low concentrations, may vary substantially if experiments are conducted in solvents that differ from those used for calibration.</ns0:p></ns0:div>
<ns0:div><ns0:head>Optimum methanol concentration</ns0:head><ns0:p>According to HPLC, the average concentration of triplicate tests in the PTX-saturated saline solution was 0.731±0.0438 μg/mL. The concentration in the same PTX-saturated saline solution was also measured spectrophotometrically using each calibration curve, as shown in Fig. <ns0:ref type='figure' target='#fig_0'>2</ns0:ref>. The absorbance of the PTX-saturated saline solution at a wavelength of 230 nm was 0.058-0.074, which could not be measured using calibration curves prepared based on dilution in 90% or 100% methanol. Table <ns0:ref type='table'>2</ns0:ref> shows the spectroscopic quantitative concentrations calculated using each calibration curve, including the accuracy (R). An R value close to '0' indicates that the results from the two methods are comparable.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_1'>3</ns0:ref> shows the correlation between the methanol concentration and relative error from the HPLC results. The x-intercept of the approximate curves indicates the concentration of methanol at which PTX concentrations in saline were comparable to those obtained using HPLC. However, because the x-intercept could not be determined in this study, the solution to the approximate curve equation, 65.8%, corresponding to the minimum relative error (-0.0174; Fig. <ns0:ref type='figure' target='#fig_1'>3</ns0:ref>, arrow), was identified as the optimum methanol concentration for quantification of PTX. The calibration curve prepared using 0.313, 0.625, 1.25, and 2.50 μg/mL PTX in 65.8% methanol as the solvent can therefore be expressed using the regression curve (r 2 = 0.9998) with the slope 0.0486 and the intercept 0.0032 (Fig. <ns0:ref type='figure'>S1</ns0:ref>). The detection limit and quantification limit of PTX were 0.030 μg/mL and 0.092 μg/mL, respectively.</ns0:p></ns0:div>
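The 65.8% figure above is the point on the curve fitted to relative error versus methanol concentration (Fig. 3) at which the error is closest to zero. The individual R values of Table 2 are not reproduced here, so the sketch below uses placeholder points and assumes a simple quadratic fit; it only illustrates the minimization step, not the study's exact numbers.

import numpy as np

# Placeholder (methanol %, relative error) pairs standing in for Table 2;
# a quadratic approximation of the trend in Fig. 3 is assumed.
meoh = np.array([40.0, 50.0, 60.0, 70.0, 80.0])
rel_err = np.array([-0.37, -0.18, -0.05, -0.04, -0.30])

a, b, c = np.polyfit(meoh, rel_err, 2)   # rel_err ~ a*x^2 + b*x + c
x_opt = -b / (2 * a)                     # vertex: relative error closest to zero
print(f"methanol concentration minimizing |relative error|: {x_opt:.1f}%")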
<ns0:div><ns0:head>Simulation</ns0:head><ns0:p>PTX formulations contain polyoxyethylene castor oil and anhydrous ethanol, and the effects of these solvents should be considered when quantifying PTX. The concentrations of PTX, polyoxyethylene castor oil, and anhydrous ethanol used in this experiment were the same as those in the commercially available formulation <ns0:ref type='bibr' target='#b3'>(Chen et al., 2001)</ns0:ref>. Figure <ns0:ref type='figure'>4A</ns0:ref> shows the absorbance at each dilution with or without PTX and the difference in absorbance, which would indicate the absorbance derived from PTX. Figure <ns0:ref type='figure'>4B</ns0:ref> shows the PTX concentration in each dilution compared to that obtained using HPLC, which was calculated using the differential absorbance obtained in Fig. <ns0:ref type='figure'>4A</ns0:ref> and a calibration curve prepared using 65.8% methanol, the optimal concentration for quantification of PTX. There was a higher correlation between the results obtained using the calibration curve and HPLC when the PTX concentration was diluted 32-fold (dilution rate: 0.0313; Fig. <ns0:ref type='figure'>4B, arrow</ns0:ref>) or more. These findings indicate that evaluation of the PTX concentration in a test specimen should be conducted for quality control by mixing at a dilution rate of 0.0313 (32-fold dilution).</ns0:p></ns0:div>
<ns0:div><ns0:head>Application</ns0:head><ns0:p>According to the simulation described above, the commercially available PTX formulation can be quantitatively analyzed without the HPLC method. First, PTX after mixing (100-fold dilution with saline) should be further diluted 32-fold with saline before spectrophotometrically measuring the absorbance at a wavelength of 230 nm (As), which produced a reading of 0.372±0.0168 in this study. The absorbance of the reference solution without PTX at a dilution rate of 0.0313 (32-fold dilution) should be unchanged between lots and can be measured in advance (Ar), producing a reading of 0.307±0.00814 in this study. The difference between As and Ar (As - Ar: 0.0654±0.0168) would provide the absorbance of PTX in the test specimen at a dilution rate of 0.0313, and the PTX concentration (1.28±0.346 μg/mL) can subsequently be determined using a calibration curve prepared based on dilution in 65.8% methanol. Comparison with the results obtained using HPLC showed that there was no significant difference between values (Fig. <ns0:ref type='figure' target='#fig_2'>5</ns0:ref>). Fig. <ns0:ref type='figure' target='#fig_3'>6</ns0:ref> shows a schematic illustration of the methodology established in this study.</ns0:p></ns0:div>
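The quality-control arithmetic described in this section (differential absorbance As - Ar, then back-calculation through the 65.8% methanol calibration curve) can be written out explicitly. The sketch below uses the mean readings reported here and the regression parameters from the Results (slope 0.0486, intercept 0.0032); it is an illustration of the calculation, not the authors' own script.

# Back-calculating the PTX concentration of the 32-fold diluted specimen from
# the differential absorbance, using the 65.8% methanol calibration curve.

A_s = 0.372      # absorbance of the diluted test specimen at 230 nm
A_r = 0.307      # absorbance of the matching PTX-free reference solution
slope, intercept = 0.0486, 0.0032   # calibration curve in 65.8% methanol

A_ptx = A_s - A_r                         # absorbance attributable to PTX
c_diluted = (A_ptx - intercept) / slope   # ug/mL in the 32-fold dilution
print(f"PTX in the measured dilution: {c_diluted:.2f} ug/mL")  # ~1.27, close to the reported 1.28

# Multiplying back the extra 32-fold dilution gives the apparent concentration
# of the clinically mixed (100-fold diluted) solution.
print(f"PTX in the mixed solution: {c_diluted * 32:.1f} ug/mL")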
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>PTX can be more accurately quantified using a calibration curve when prepared in a methanol/water mixture without the need for special devices or techniques. Solvent interactions have been extensively studied since the 1970s and are known to affect not only the solubility but also the stability and reaction rate of a solute <ns0:ref type='bibr' target='#b9'>(Hynes, 1985;</ns0:ref><ns0:ref type='bibr' target='#b16'>Reichardt, 1982)</ns0:ref>. Therefore, it is important to evaluate the interactions of hydrophobic drugs such as PTX in polar aqueous solvents. In most cases, calibration standards for hydrophobic drugs are prepared in non-polar solvents, aprotic polar solvents, and certain alcohols. However, these standards may not be accurate when evaluated using a spectrophotometer due to changes in the absorbance spectra as a result of fundamental solute/solvent interactions or other factors. For verification, methanol and acetonitrile with higher permeability in the target wavelength region (230 nm in this study) may be more suitable than DMSO and DMF. In particular, methanol is less expensive and more friendly to the environment than acetonitrile. Figure <ns0:ref type='figure' target='#fig_0'>2</ns0:ref> shows that a marked change in the calibration curve was particularly observed at 70-80% methanol. Wakisaka and Ohki showed that the hydrogen bonding network (cluster level) in alcohol/water mixtures changes depending on the alcohol concentration, and causes various solvent effects <ns0:ref type='bibr' target='#b17'>(Wakisaka & Ohki, 2005)</ns0:ref>. They found marked cluster-level changes at alcohol concentrations of 5.00-52.3% and 79.3-100%. As the alcohol concentration increased, clusters of water molecules formed in the former range, while clusters of alcohol molecules formed in the 52.3-79.3% range, and then disappeared in the latter range. Because a lipophilic solute is more stable when surrounded by clusters of alcohol molecules, PTX is expected to be more stable at methanol concentrations between 52.3% and 79.3%. We therefore speculate that the absorbance value increased at higher methanol concentrations due to instability, leading to marked changes in the calibration curve. Our findings suggest that quantification results for PTX, particularly at low concentrations, may vary substantially if experiments are conducted in solvents that differ to those used for calibration.</ns0:p><ns0:p>As shown in Fig. <ns0:ref type='figure' target='#fig_1'>3</ns0:ref>, we evaluated the relative error from HPLC results, and 65.8% was identified as the optimum methanol concentration for quantification of PTX. The calibration curve prepared using 0.313, 0.625, 1.25, and 2.50 μg/mL PTX in 65.8% methanol as the solvent can therefore be expressed using the regression curve with the slope 0.0486 and the intercept 0.0032.</ns0:p><ns0:p>On the other hand, PTX formulations contain polyoxyethylene castor oil and anhydrous ethanol, and the effects of these solvents should be considered when quantifying PTX. As shown in Fig. <ns0:ref type='figure'>4</ns0:ref>, there was higher correlation between the results obtained using the calibration curve and HPLC at a 32-fold dilution or less of the PTX concentration. 
These findings indicated that it was possible to predict the PTX concentration even when polyoxyethylene castor oil and anhydrous ethanol were added, as in the commercially available PTX formulation, by diluting 32-fold with saline after mixing, and the accuracy of the methods established in this study (Fig. <ns0:ref type='figure' target='#fig_3'>6</ns0:ref>) was verified using the commercially available PTX formulation (Fig. <ns0:ref type='figure' target='#fig_2'>5</ns0:ref>).</ns0:p><ns0:p>Although the results may differ depending on the chain length of the polyoxyethylene castor oil, the theory and process should be similar for measuring other PTX solutions. While the Beer-Lambert law supports use of the additivity of absorbance for each component in the mixture, in practice, it is necessary to verify whether the drugs and other components follow the law of additivity of absorbance, including the presence or absence of interactions, even if they are commercially available in mixed form, such as PTX formulations. To overcome the need to verify this in our present study, we established an effective method for quantifying PTX in the supersaturated state in saline based on correlations with the HPLC results, and determined the required conditions for measurement using a calibration curve.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>We evaluated a simple and rapid method for determining the concentration of PTX in aqueous solvent using spectrophotometry. Use of a calibration curve prepared based on dilution in 65.8% methanol was effective for analyzing the PTX concentration in saline while minimizing the solvent effect. Even when polyoxyethylene castor oil and anhydrous ethanol were added, as in the commercially available PTX formulation, it was possible to predict the PTX concentration by diluting 32-fold after mixing. This approach may be useful for quality control of PTX before clinical use.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure Captions</ns0:head><ns0:p>Figure <ns0:ref type='figure'>1</ns0:ref> Schematic illustration of the background behind the potential of a simple and rapid spectroscopy method to replace the HPLC method. Hydrophobic drugs such as PTX undergo numerous interactions after mixing with aqueous solvents, some of which cause them to be unquantifiable. We demonstrated the simple concept that PTX in an aqueous solvent can be quantified using a calibration curve when prepared in a methanol/water mixture. Figure <ns0:ref type='figure'>4</ns0:ref> (A) Absorbance of each dilution solution with or without PTX and the difference in absorbance. (B) PTX concentration in each dilution solution compared to that determined using HPLC, which was calculated using the differential absorbance obtained in Figure <ns0:ref type='figure'>4A</ns0:ref> and a calibration curve prepared based on dilution in 65.8% methanol. Arrow indicates the dilution rate at which there was highest correlation between the results obtained using the calibration curve and HPLC. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 Calibration curves of PTX at various concentrations of methanol.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure3Relative error of quantitative PTX results between HPLC and spectrophotometry using calibration curves prepared based on dilution in various concentrations of methanol. Arrow indicates the optimum methanol concentration for quantification of PTX, which corresponds to the minimum relative error.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 Comparison of the concentration of the commercially available PTX formulation determined by spectroscopy versus HPLC.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6 Schematic illustration of the methodology established in this study to determine the concentration of the commercially available PTX formulation.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='12,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='15,42.52,178.87,525.00,225.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='17,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Details of the calibration curves described in Figure2.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Methanol concentration (%)</ns0:cell><ns0:cell>Slope</ns0:cell><ns0:cell>Intercept</ns0:cell><ns0:cell>Correlation coefficient</ns0:cell></ns0:row><ns0:row><ns0:cell>40</ns0:cell><ns0:cell>0.1405</ns0:cell><ns0:cell>0.0010</ns0:cell><ns0:cell>0.9963</ns0:cell></ns0:row><ns0:row><ns0:cell>50</ns0:cell><ns0:cell>0.1079</ns0:cell><ns0:cell>-0.0010</ns0:cell><ns0:cell>0.9999</ns0:cell></ns0:row><ns0:row><ns0:cell>60</ns0:cell><ns0:cell>0.1080</ns0:cell><ns0:cell>-0.0014</ns0:cell><ns0:cell>0.9999</ns0:cell></ns0:row><ns0:row><ns0:cell>70</ns0:cell><ns0:cell>0.0851</ns0:cell><ns0:cell>0.0079</ns0:cell><ns0:cell>1.0000</ns0:cell></ns0:row><ns0:row><ns0:cell>80</ns0:cell><ns0:cell>0.0681</ns0:cell><ns0:cell>0.0717</ns0:cell><ns0:cell>0.9967</ns0:cell></ns0:row><ns0:row><ns0:cell>90</ns0:cell><ns0:cell>0.0508</ns0:cell><ns0:cell>0.1066</ns0:cell><ns0:cell>0.9998</ns0:cell></ns0:row><ns0:row><ns0:cell>100</ns0:cell><ns0:cell>0.0702</ns0:cell><ns0:cell>0.1024</ns0:cell><ns0:cell>0.9968</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ An. Chem. reviewing PDF | (ACHEM-2019:09:41234:1:1:NEW 5 Dec 2019)</ns0:note></ns0:figure>
</ns0:body>
" | "RESPONSE TO REVIEWER 1:
We wish to express our appreciation to the Reviewer for his or her insightful comments, which have helped us significantly improve the paper.
Comment 1: I find a report about the spectrophotometric determination of PTX (International Journal of Advances in Pharmaceutical Sciences 2(1):29-32, 2011). Please explain about the advantage of your work than this report.
Response: We agree that this point requires clarification, and have added the following text to the Introduction (p. 2, lines 63-67) and this reference to the References:
“Although spectroscopic drug release evaluation can be successfully undertaken if the test specimen can be prepared with a solvent using the same conditions as those used to generate the standard calibration curve (Kesarwani et al., 2011), our study focuses on the clinical quality control of non-single component PTX formulations.”
Comments 2: Some analytical parameters of the proposed method such as limit of detection and quantification are not calculated.
Response: We agree that this point requires clarification, and have added the following text to the Abstract (p. 1, lines 33-34) and Results (p. 5, lines 198-199):
“The detection limit and quantification limit were 0.030 μg/mL and 0.092 μg/mL, respectively.”
Comment 3: The manuscript well designed. The below references about the spectrophotometric and HPLC determinations can be added to the introduction.
1) Heydari, R., Hosseini, M., Alimoradi, M., Zarabi, S. A simple method for simultaneous spectrophotometric determination of brilliant blue FCF and sunset yellow FCF in food samples after cloud point extraction. Journal of the Chemical Society of Pakistan, 38, 2016, 438-445
2) Heydari, R., Hosseini, M., Zarabi, S. A simple method for determination of carmine in food samples based on cloud point extraction and spectrophotometric detection. Spectrochimica Acta - Part A: Molecular and Biomolecular Spectroscopy, 150, 2015, 786-791
3) R. Heydaria, F. Bastami, M. Hosseini, M. Alimoradi, Simultaneous determination of Tropaeolin O and brilliant blue in food samples after cloud point extraction. Iranian Chemical Communication, 5, 2017, 242-251
4) Heydari, R., Shamsipur, M., Naleini, N. Simultaneous determination of EDTA, sorbic acid, and diclofenac sodium in pharmaceutical preparations using high-performance liquid chromatography. AAPS PharmSciTech, 14, 2013, 764-769
Response: We appreciate the reviewer’s comment. We agree with the relevance of these references, and have added the reference (1) to the Introduction (p. 2, line 62) and References.
Thank you again for your comments on our paper. We trust that the revised manuscript is suitable for publication.
RESPONSE TO REVIEWER 2:
We wish to express our appreciation to the Reviewer for his or her insightful comments, which have helped us significantly improve the paper.
Comment 1: In some parts of the article English must be improve and some sentences should be rewritten to be more understandable.
I consider some more references must be added.
Response: The paper has been edited and rewritten by an experienced scientific editor, who improved the grammar and stylistic expression of the paper. And the following references have been added to the Introduction and References:
Heydari, et al., 2016. Journal of the Chemical Society of Pakistan 38 (3):438-445 (p. 2, line 62)
Kesarwani, et al., 2011. International Journal of Advances in Pharmaceutical Sciences 2(1):29-32 (p. 2, line 66)
Comment 2: Authors can cite some more new references. In line 56 you can cite some more current article about the determination of paclitaxel by HPLC.
Response: We agree and have added the following references to the Introduction (p. 2, lines 56-58) and References:
Khan, et al., 2016. Journal of Chromatography B 1033-1034:261-270
Xavier-Junior, et al., 2016. Chromatographia 79(7-8):405-412
Bonde, et al., 2019. Microchemical Journal 149:Article ID 103982
Comment 3: In your article you demonstrate that PTX can be quantified spectrophotometrically. However this method will be only useful when one analyte is present in the sample, for example in drugs for the determination of PTX in plasma you will need HPLC. In my opinion in the introduction you should clarify this point.
Response: We appreciate the reviewer’s comment. We agree with the reviewer that we will need HPLC for the determination of PTX in plasma. However, our study specializes in the clinical quality control of PTX formulations containing polyoxyethylene castor oil and anhydrous ethanol. So, we have added the following text to the Introduction (p. 2, lines 66-67):
“our study focuses on clinical quality control of non-single component PTX formulations.”
Comment 4: There are other articles where spectrophotometric methods have been proposed to quantify PTX. You should mention them in the introduction and underline the advantages of your method over these.
Response: We appreciate the reviewer’s comment. We agree that this point requires clarification, and have added the following text to the Introduction (p. 2, lines 63-67):
“Although spectroscopic drug release evaluation can be successfully undertaken if the test specimen can be prepared with a solvent using the same conditions as those used to generate the standard calibration curve (Kesarwani et al., 2011), our study focuses on the clinical quality control of non-single component PTX formulations.”
Comment 5: Figure 1 has a lot of text to be a Figure. The text should be remove from the Figure and placed in the text.
Response: As requested, we have modified Figure 1 by removing some text. However, we understand that Figure 1 is designed as a kind of graphic abstract for readers’ understanding rather than a “Figure”, so some text may be included.
Comment 6: In line 114 you mention, “The concentration of PTX in each test sample was subsequently determined using the same conditions as those used to generate the standard calibration curve”, however you have not mentioned before what the test samples are. I think you should explain before what you will use as test samples.
Response: We appreciate the reviewer’s comment. Accordingly, we have changed this to “the test specimen (the PTX-saturated saline solution)”.
Comment 7: Line 156 and 157 describe some of the results found: “there was no significant difference between the spectrometry and HPLC values”, in my opinion these comment must be placed in Results.
Response: The reviewer’s comment is correct. To clarify, we have changed this to the following text (p. 4, lines 162-164):
“the unpaired t-test was performed using Excel 2010 (Microsoft, Redmond, WA, USA). Differences between the spectrometry and HPLC values were considered statistically significant when the p-value was less than 0.05.”
Comment 8: In Figure 2 the title of the X Axis should be PTX concentration (µg/mL) and clarify that 40 %, 50 %, 60 % is the % of methanol. Moreover, you can use different colours for a better differentiation between the calibrations curves. Moreover, it would be useful if you add the slope and intercept value in this figure or at least in the table related to this figure.
Response: We appreciate the reviewer’s comment. Accordingly, we have changed the title of the X-axis in Figure 2 to “PTX concentration (μg/mL)”, have used different colors between the calibration curves, and have added a new Table 1 with the slopes and intercepts.
Comment 9: In “Calibration curve” you should clarify if you have used methanol and the different % of methanol (in each case) as blank or not. Because the absorbance of the different dissolvent should be taken into account. This analyte can have different absorbance in the different solvents but also the solvents themselves may have different absorbance. I consider this point should be clarify in the article.
Response: We agree that this point requires clarification, and have added the following text to the Materials and Methods (p. 3, lines 97-99):
“Absorbance was measured using an ultraviolet (UV) and visible spectrophotometer (UV-1800, Shimadzu, Kyoto, Japan) at a wavelength of 230 nm after blank correction with each solvent.”
Comment 10: I also miss a figure with the spectrum of PTX in different solvents. Could it be added in a new figure?
Response: We appreciate the reviewer’s interest in additional information on the spectrum of PTX in different solvents. However, we consider that the approach of the study remains the same even if different solvents are used. Rather, it is more important to propose a solvent that offers good permeability at the target wavelength, low cost and low toxicity. So, instead of a new Figure, we have added the following text to the Discussion (p. 6, lines 245-248):
“For verification, methanol and acetonitrile with higher permeability in the target wavelength region (230 nm in this study) may be more suitable than DMSO and DMF. In particular, methanol is less expensive and more friendly to the environment than acetonitrile.”
Comment 11: In line 177 and 178 as it is written it is not clear if you have used several samples of PTX-saturated saline solution with different % of methanol, if you have used the different calibration curves prepared in these solvents, or both things to be in the same solvent the sample and the calibration curve. I think this sentence should be rewritten.
Response: We appreciate the reviewer’s comment. We agree that this point requires clarification, and have changed this to the following text (p. 5, lines 183-184):
“The concentration in the same PTX-saturated saline solution was also measured spectrophotometrically using each calibration curve, as shown in Fig. 2.”
Comment 12: In Figure 3, the title of the X Axis should be MeOH concentration (%).
Response: We appreciate the reviewer’s comment. Accordingly, we have changed the title of the X-axis in Figure 3 to “Methanol concentration (%).”
Comment 13: Line 193: the regression curve should be express with the slope and intercept deviation.
Response: We appreciate the reviewer’s comment. Accordingly, we have changed this to the following text (p. 5, lines 195-198):
“The calibration curve prepared using 0.313, 0.625, 1.25, and 2.50 μg/mL PTX in 65.8% methanol as the solvent can therefore be expressed using the regression curve (r2 = 0.9998) with the slope 0.0486 and the intercept 0.0032 (Fig. S1).”
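For illustration only, the following sketch shows how a calibration line, its r², and a back-calculated concentration can be obtained; the standard concentrations are those quoted above, but the absorbance readings and the unknown are hypothetical, so the fitted slope and intercept will not exactly match the reported values.

```python
import numpy as np

# Standards from the revised text (µg/mL); absorbances at 230 nm are hypothetical.
conc = np.array([0.313, 0.625, 1.25, 2.50])
absorbance = np.array([0.018, 0.033, 0.064, 0.125])

slope, intercept = np.polyfit(conc, absorbance, 1)   # A = slope*C + intercept
r2 = np.corrcoef(conc, absorbance)[0, 1] ** 2
print(f"slope = {slope:.4f}, intercept = {intercept:.4f}, r2 = {r2:.4f}")

# Back-calculate the concentration of a test specimen from its absorbance.
a_test = 0.080   # hypothetical reading for the PTX-saturated saline solution
print(f"estimated PTX concentration = {(a_test - intercept) / slope:.3f} µg/mL")
```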
Comment 14: Figure 4A: PTX (+) or PTX (-) must be replace by dilution with or without PTX in order to be more understandable.
Response: We appreciate the reviewer’s comment. Accordingly, we have changed this to “Dilution with PTX or without PTX.”
Comment 15: Finally, I consider that if you are proposing a new analytical method, you should provide some analytical parameters such as the detection and quantification limit.
Response: We agree that this point requires clarification, and have added the following text to the Abstract (p. 1, lines 33-34) and Results (p. 5, lines 198-199):
“The detection limit and quantification limit were 0.030 μg/mL and 0.092 μg/mL, respectively.”
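The manuscript does not state how these limits were derived; one common convention (ICH-style) estimates them as 3.3σ/S and 10σ/S from the blank standard deviation σ and the calibration slope S. The sketch below uses a hypothetical σ chosen only to show that values of this magnitude are plausible with the reported slope.

```python
# Hypothetical illustration of an ICH-style LOD/LOQ estimate (not the authors' method).
sigma_blank = 0.00044   # SD of blank absorbance readings (hypothetical)
slope = 0.0486          # calibration slope reported in the revised text

lod = 3.3 * sigma_blank / slope
loq = 10.0 * sigma_blank / slope
print(f"LOD ~ {lod:.3f} µg/mL, LOQ ~ {loq:.3f} µg/mL")
```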
Comment 16: Authors must clarify the novelty of this methods and compare it with other spectrophotometric methods previously published.
Response: We appreciate the reviewer’s comment. Again, we have added the following text to the Introduction (p. 2, lines 63-67):
“Although spectroscopic drug release evaluation can be successfully undertaken if the test specimen can be prepared with a solvent using the same conditions as those used to generate the standard calibration curve (Kesarwani et al., 2011), our study focuses on the clinical quality control of non-single component PTX formulations.”
Thank you again for your comments on our paper. We trust that the revised manuscript is suitable for publication.
RESPONSE TO REVIEWER 3:
We wish to express our appreciation to the Reviewer for his or her insightful comments, which have helped us significantly improve the paper.
Comment 1: Line 56-57: Add more recent references to methods for quantification of PTX (maybe something from 2010-2019).
Response: We appreciate the reviewer’s comment. We agree and have added the following references to the Introduction (p. 2, lines 56-58) and References:
Khan, et al., 2016. Journal of Chromatography B 1033-1034:261-270
Xavier-Junior, et al., 2016. Chromatographia 79(7-8):405-412
Bonde, et al., 2019. Microchemical Journal 149:Article ID 103982
Comment 2: The sensitivity of the method depends on the amount of organic solvent (Fig. 2). Please add to the text in the chapter „Calibration curve” the values of the sensitivity factor.
Response: We appreciate the reviewer’s comment. We agree that this point requires clarification, and have listed the details of the calibration curves described in Fig. 2 in a new Table 1. In contrast, we consider that a description of sensitivity is required in the calibration curve prepared using the optimum methanol concentration 65.8%. So, we have added the following text to the Abstract (p. 1, lines 33-34) and Results (p. 5, lines 198-199):
“The detection limit and quantification limit were 0.030 μg/mL and 0.092 μg/mL, respectively.”
Comment 3: Table 1: Add quantitative concentration from spectrophotometry measurement.
Response: We appreciate the reviewer’s comment. The second column of the original Table 1, which has now been revised as the new Table 2, already showed the spectroscopic quantitative concentrations. However, in accordance with the reviewer’s comment, we have changed this to the following text (p. 5, lines 187-188, and Table 2) to make it easier to understand:
“Table 2 shows the spectroscopic quantitative concentrations calculated using each calibration curve, including the accuracy (R).”
Thank you again for your comments on our paper. We trust that the revised manuscript is suitable for publication.
" | Here is a paper. Please give your review comments after reading it. |
679 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. Extracellular vesicles (EVs) are released by most cell types and are involved in multiple basic biological processes. Medium/large EVs (m/lEVs), which differ in size from exosomes, play an important role in blood coagulation and are also secreted by cancer cells, suggesting functions related to malignant transformation. m/lEV levels in blood or urine may help unravel pathophysiological findings in many diseases.</ns0:p><ns0:p>However, it remains unclear how many naturally-occurring m/lEV subtypes exist, or how their characteristics and functions differ from one another. Methods. We used blood and urine samples from 10 healthy donors each for analysis. Using a flow cytometer, we focused on characterizing EVs of large size (>200 nm), which are distinct from exosomes. We also searched for membrane proteins suitable for characterization with a flow cytometer using shotgun proteomics. We then isolated m/lEVs pelleted from plasma and urine samples by differential centrifugation and characterized them by flow cytometry. Results.</ns0:p><ns0:p>Using proteomic profiling, we identified several proteins involved in m/lEV biogenesis including adhesion molecules, peptidases and exocytosis regulatory proteins. In healthy human plasma, we could distinguish m/lEVs derived from platelets, erythrocytes, monocytes/macrophages, T and B cells, and vascular endothelial cells using two or more positive surface antigens. The proportion of phosphatidylserine exposed on the membrane surface differed depending on the cells from which the m/lEVs were derived. In urine, 50% of m/lEVs were Annexin V negative but contained various membrane peptidases derived from renal tubular villi. Urinary m/lEVs, but not plasma m/lEVs, showed peptidase activity. The knowledge of these new characteristics is considered to be useful for diagnostic purposes, and the newly developed method suggests the possibility of clinical application.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Extracellular vesicles (EVs) play essential roles in cell-cell communication and are diagnostically significant molecules. EVs are secreted from most cell types under normal and pathophysiological conditions <ns0:ref type='bibr' target='#b10'>(Iraci et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b20'>Ohno et al. 2013</ns0:ref>). These membrane vesicles can be detected in many human body fluids and are thought to have signaling functions in interactions between cells. Analysis of EVs may have applications in therapy, prognosis, and biomarker development in various fields. The hope is that, using EV analysis, clinicians will be able to detect the presence of disease as well as to classify its progression using noninvasive methods such as liquid biopsy <ns0:ref type='bibr' target='#b2'>(Boukouris & Mathivanan 2015;</ns0:ref><ns0:ref type='bibr' target='#b23'>Piccin 2014;</ns0:ref><ns0:ref type='bibr' target='#b24'>Piccin et al. 2015a;</ns0:ref><ns0:ref type='bibr' target='#b26'>Piccin et al. 2017a;</ns0:ref><ns0:ref type='bibr' target='#b28'>Piccin et al. 2017b;</ns0:ref><ns0:ref type='bibr' target='#b29'>Piccin et al. 2015b)</ns0:ref>.</ns0:p><ns0:p>Medium/large extracellular vesicles (m/lEVs) can be classified based on their cellular origins, biological functions and biogenesis <ns0:ref type='bibr' target='#b6'>(El Andaloussi et al. 2013)</ns0:ref>. In a broad sense, they can be classified into m/lEVs with diameters of 100-1000 nm (membrane blebs) and smaller EVs (e.g. exosomes) with diameters of 30-150 nm <ns0:ref type='bibr' target='#b30'>(Raposo & Stoorvogel 2013;</ns0:ref><ns0:ref type='bibr' target='#b31'>Robbins & Morelli 2014)</ns0:ref>. The m/lEVs are generated by direct outward budding from the plasma membrane (D'Souza-Schorey & Clancy 2012), while smaller EVs (e.g. exosomes) are produced via the endosomal pathway with formation of intraluminal vesicles by inward budding of multivesicular bodies (MVBs) <ns0:ref type='bibr' target='#b30'>(Raposo & Stoorvogel 2013)</ns0:ref>. In this study, we analyzed the physical characteristics of EVs from 200 nm to 800 nm in diameter, which we refer to as m/lEVs as per the MISEV2018 guidelines <ns0:ref type='bibr'>(Thery et al. 2018)</ns0:ref>.</ns0:p><ns0:p>Recently, the clinical relevance of EVs has attracted significant attention. In particular, m/lEVs play an important role in tumor invasion <ns0:ref type='bibr' target='#b4'>(Clancy et al. 2015)</ns0:ref>. m/lEVs in blood act as a coagulant factor and have been associated with sickle cell disease, sepsis, thrombotic thrombocytopenic purpura, and other diseases <ns0:ref type='bibr' target='#b25'>(Piccin et al. 2007;</ns0:ref><ns0:ref type='bibr' target='#b26'>Piccin et al. 2017a;</ns0:ref><ns0:ref type='bibr' target='#b28'>Piccin et al. 2017b)</ns0:ref>. A possible role for urinary m/lEVs in diabetic nephropathy was also reported <ns0:ref type='bibr' target='#b34'>(Sun et al. 2012)</ns0:ref>. In recent years, the clinical applications of exosomes have been developed <ns0:ref type='bibr'>(Yoshioka et al. 2014</ns0:ref>). However, because characterization of exosomes is analytically challenging, determining the cells and tissues from which exosomes are derived can be difficult. m/lEVs are generated differently from exosomes <ns0:ref type='bibr' target='#b17'>(Mathivanan et al. 2010)</ns0:ref> but are similar in size and contain many of the same surface antigens. It is widely hypothesized that complete separation of exosomes and m/lEVs is likely to be a major challenge, and more effective techniques to purify and characterize m/lEVs would be extremely valuable.</ns0:p><ns0:p>In this study, we focused on m/lEVs in plasma and urine, which are representative body fluids in clinical laboratories. We purified m/lEVs by differential centrifugation, characterized them by flow cytometry and mass spectrometry analysis, and described the basic properties (characteristic surface antigens, orientation of phosphatidylserine, and enzyme activities) of m/lEV subpopulations in blood and urine. We confirmed that the samples used for analysis came from healthy donors by measuring blood count and creatinine in blood and total urine protein (Supplementary Table S1).</ns0:p></ns0:div>
<ns0:div><ns0:head>Isolation of plasma m/lEVs</ns0:head><ns0:p>Essentially platelet-free plasma (PFP) was prepared from EDTA-treated blood by double centrifugation at 2,330 ×g for 10 min. To assess residual platelets remaining in this sample, we measured platelet number using the ADVIA® 2120i Hematology System (SIEMENS Healthineers, Erlangen Germany). The number of platelets in this sample was below the limit of detection (1×10 3 cells/μL). We used a centrifugation method to obtain m/lEVs. In an effort to ensure our approach could be applied to clinical testing, we chose a simple and easy method for pretreatment. In an ISEV position paper <ns0:ref type='bibr'>(Mateescu et al. 2017</ns0:ref>), Thery's group referred to vesicles sedimenting at 100,000 ×g as 'small EVs' rather than exosomes, those pelleting at intermediate speed (lower than 20,000 ×g) as 'medium EVs' (including microvesicles and ectosomes) and those pelleting at low speed (e.g., 2000 ×g) as 'large EVs'. Because these definitions are less biologically meaningful but more experimentally tractable than previouslyapplied exosome/microvesicle definitions, we attempted biological characterization through subsequent shotgun and flow cytometry analysis.</ns0:p><ns0:p>In flow cytometric analysis, the volume of PFP used in each assay was 0.6 mL from each donor. In electron microscopy, the volume of PFP used was 3 mL. Samples were independent and were treated individually prior to each measurement. PFP was centrifuged at 18,900 ×g for 30 min in a fixed-angle rotor. The m/lEV pellet obtained after centrifugation was reconstituted by vortex mixing (1-2 min) with an equivalent volume of Dulbecco's phosphate-buffered saline (DPBS), pH 7.4. The solution was centrifuged at 18,900 ×g for 30 min again and the supernatant was discarded.</ns0:p></ns0:div>
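The centrifugation steps above are specified as relative centrifugal force (×g); reproducing them on a different fixed-angle rotor requires recalculating the spin speed from the rotor radius. The sketch below uses the standard RCF-to-rpm relation; the 8 cm radius is an assumed example, not the rotor used in this study.

```python
import math

def rpm_for_rcf(rcf_g: float, radius_cm: float) -> float:
    """Spin speed (rpm) needed to reach a given RCF, using RCF = 1.118e-5 * r_cm * rpm^2."""
    return math.sqrt(rcf_g / (1.118e-5 * radius_cm))

# Hypothetical fixed-angle rotor with an 8 cm maximum radius.
for rcf in (2330, 18900):
    print(f"{rcf:>6} xg -> {rpm_for_rcf(rcf, 8.0):7.0f} rpm")
```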
<ns0:div><ns0:head>Isolation of urinary m/lEVs</ns0:head><ns0:p>For isolation of urinary m/lEVs, we modified a urinary exosome extraction protocol <ns0:ref type='bibr' target='#b7'>(Fernandez-Llama et al. 2010)</ns0:ref>. The centrifugation conditions were identical for plasma and urine so that the size and the density of m/lEVs were similar, enabling comparison of plasma and urinary m/lEVs.</ns0:p><ns0:p>In flow cytometric analysis, the volume of urine used for each assay was 1.2 mL from each donor. In electron microscopy, the volume of urine used was 15 mL. Samples were independent and were treated individually prior to each measurement. Collected urine was centrifuged at 2,330 ×g for 10 min twice. The supernatant was centrifuged at 18,900 ×g for 30 min in a fixed-angle rotor. The m/lEV pellet obtained from centrifugation was reconstituted by vortex mixing (1-2 min) with 0.2 mL of DPBS followed by incubation with DTT (final concentration 10 mg/mL) at 37°C for 10-15 min. The samples were centrifuged again at 18,900 ×g for 30 min and the supernatant was discarded. Addition of DTT, a reducing agent, reduced the formation of Tamm-Horsfall protein (THP) polymers. THP monomers were removed from m/lEVs after centrifugation. DTT-containing DPBS solutions were filtered through 0.1-μm filters (Millipore).</ns0:p></ns0:div>
<ns0:div><ns0:head>Flow cytometric analysis of m/lEVs</ns0:head><ns0:p>After resuspending m/lEV pellets in 60 μL of DPBS, we added saturating concentrations of several labelled antibodies, Annexin V and normal mouse IgG and incubated the tubes in the dark, without stirring, for 15-30 min at room temperature. In one case, we added labelled antibodies directly to 60 μL of PFP for staining. We resuspended stained fractions in Annexin V binding buffer (BD Biosciences: 10 mM HEPES, 0.14 mM NaCl, 2.5 mM CaCl2, pH 7.4) for analysis by flow cytometry. DPBS and Annexin V binding buffer were filtered through 0.1-μm filters (Millipore). Flow cytometry was performed using a FACSVerse™ flow cytometer (BD Biosciences). The flow cytometer was equipped with 405 nm, 488 nm and 638 nm lasers to detect up to 13 fluorescent parameters. The flow rate was 12 μL/min. Forward scatter voltage was set to 381, side scatter voltage was set to 340, and each threshold was set to 200. Details of excitation (Ex.) and emission (Em.) wavelengths as well as voltages are described in the supplements. Flow cytometry was performed using FACSuite™ software (BD Biosciences) and data were analyzed using FlowJo software. The authors have applied for the following patents for the characterization method of m/lEVs isolated from plasma and urine with a flow cytometer: JP2018-109402 (plasma) and JP2018-109403 (urine).</ns0:p></ns0:div>
<ns0:div><ns0:head>Nanoparticle tracking analysis (NTA)</ns0:head><ns0:p>NTA measurements were performed using a NanoSight LM10 (NanoSight, Amesbury, United Kingdom). After resuspending m/lEV pellets in 50 μL of DPBS, samples were diluted eight-fold (plasma) and 100-fold (urine) with PBS prior to measurement. Particles in the laser beam undergo Brownian motion and videos of these particle movements are recorded. NTA 2.3 software then analyses the video and determines the particle concentration and the size distribution of the particles. Twenty-five frames per second were recorded for each sample at appropriate dilutions with a 'frames processed' setting of 1500. The detection threshold was set at '7 Multi' and at least 1,000 tracks were analyzed for each video.</ns0:p></ns0:div>
<ns0:div><ns0:head>Electron microscopy</ns0:head><ns0:p>For immobilization, we added 100 μL of PBS and another 100 μL of immobilization solution (4% paraformaldehyde, 4% glutaraldehyde, 0.1 M phosphate buffer, pH 7.4) to m/lEV pellets.</ns0:p><ns0:p>After stirring, we incubated at 4°C for 1 h. For negative staining, the samples were adsorbed to formvar film-coated copper grids (400 mesh) and stained with 2% phosphotungstic acid, pH 7.0, for 30 s. For observation and imaging, the grids were observed using a transmission electron microscope (JEM-1400Plus; JEOL Ltd., Tokyo, Japan) at an acceleration voltage of 100 kV.</ns0:p><ns0:p>Digital images (3296 × 2472 pixels) were taken with a CCD camera (EM-14830RUBY2; JEOL Ltd., Tokyo, Japan).</ns0:p></ns0:div>
<ns0:div><ns0:head>Protein digestion</ns0:head><ns0:p>We used approximately 50 mL of pooled healthy plasma and 100 mL of pooled healthy male urine from five healthy subjects for digestion of m/lEVs.</ns0:p><ns0:p>For plasma, the initial processing was the same as described in the 'Isolation of plasma m/lEVs' section. We repeated the 18,900 ×g centrifugation washing steps three times to reduce levels of contaminating free plasma proteins and small EVs for shotgun analysis. After the last centrifugation, we removed supernatants and froze the samples.</ns0:p><ns0:p>For urine, the initial processing was the same as described in the 'Isolation of urinary m/lEVs' section. We repeated the washing steps twice (after DTT treatment) to reduce levels of contaminating free urinary proteins and small EVs for shotgun analysis. We removed supernatants and froze the samples.</ns0:p><ns0:p>To discover characteristic surface antigens for flow cytometry, the samples were digested using a phase transfer surfactant-aided procedure so that many hydrophobic membrane proteins could be detected <ns0:ref type='bibr' target='#b3'>(Chen et al. 2017)</ns0:ref>. The precipitated frozen fractions of plasma and urine were thawed at 37°C, and then m/lEVs were solubilized in 250 μL of lysis buffer containing 12 mM sodium deoxycholate and 12 mM sodium lauroyl sarcosinate in 100 mM Tris•HCl, pH 8.5. After incubating for 5 min at 95°C, the solution was sonicated using an ultrasonic homogenizer.</ns0:p><ns0:p>Protein concentrations of the solutions were measured using a bicinchoninic acid assay (Pierce™ BCA Protein Assay Kit; Thermo Fisher Scientific).</ns0:p><ns0:p>Twenty microliters of the dissolved pellet (30 µg protein) were used for protein digestion. Proteins were reduced and alkylated with 1 mM DTT and 5.5 mM iodoacetamide at 25°C for 60 min. Trypsin was added to a final enzyme:protein ratio of 1:100 (wt/wt) for overnight digestion.</ns0:p><ns0:p>Digested peptides were acidified with 0.5% trifluoroacetic acid (final concentration) and 100 μL of ethyl acetate was added for each 100 μL of digested m/lEVs. The mixture was shaken for 2 min and then centrifuged at 15,600 ×g for 2 min to obtain aqueous and organic phases. The aqueous phase was collected and desalted using a GL-Tip SDB column (GL Sciences Inc).</ns0:p></ns0:div>
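A small worked example of the digestion arithmetic described above, namely the 1:100 (wt/wt) trypsin:protein ratio for the 30 µg digest; the BCA-derived protein concentration below is a hypothetical value used only to show the calculation.

```python
# Worked example of the trypsin amount for the digestion described above.
protein_conc_ug_per_ul = 1.5          # hypothetical BCA result (µg/µL)
digest_volume_ul = 20.0               # volume taken for digestion (from the text)
protein_ug = protein_conc_ug_per_ul * digest_volume_ul      # 30 µg, as in the text

trypsin_ug = protein_ug / 100.0       # 1:100 enzyme:protein ratio (wt/wt)
print(f"protein {protein_ug:.0f} µg -> trypsin {trypsin_ug:.2f} µg")
```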
<ns0:div><ns0:head>LC-MS/MS analysis</ns0:head><ns0:p>Digested peptides were dissolved in 40 μL of 0.1% formic acid containing 2% (v/v) acetonitrile and 2 μL were injected into an Easy-nLC 1000 system (Thermo Fisher Scientific). Peptides were separated on an Acclaim PepMap™ RSLC column (15 cm × 50 μm inner diameter) containing C18 resin (2 μm, 100 Å; Thermo Fisher Scientific™), and an Acclaim PepMap™ 100 trap column (2 cm × 75 μm inner diameter) containing C18 resin (3 μm, 100 Å; Thermo Fisher Scientific™). The mobile phase consisted of 0.1% formic acid in ultrapure water (buffer A). The elution buffer was 0.1% formic acid in acetonitrile (buffer B); a linear 200 min gradient from 0%-40% buffer B was used at a flow rate of 200 nL/min. The Easy-nLC 1000 was coupled via a nanospray Flex ion source (Thermo Fisher Scientific™) to a Q Exactive™ Orbitrap (Thermo Fisher Scientific™). The mass spectrometer was operated in data-dependent mode, in which a full-scan MS (from 350 to 1,400 m/z with a resolution of 70,000, automatic gain control (AGC) 3E+06, maximum injection time 50 ms) was followed by MS/MS on the 20 most intense ions (AGC 1E+05, maximum injection time 100 ms, 4.0 m/z isolation window, fixed first mass 100 m/z, normalized collision energy 32 eV).</ns0:p></ns0:div>
<ns0:div><ns0:head>Proteome Data Analysis</ns0:head><ns0:p>Raw MS files were analyzed using Proteome Discoverer software version 1.4 (Thermo Fisher Scientific™) and peptide lists were searched against the Uniprot Proteomes-Homo sapiens FASTA (Last modified November 17, 2018) using the Sequest HT algorithm. Initial precursor mass tolerance was set at 10 ppm and fragment mass tolerance was set at 0.6 Da. Search criteria included static carbamidomethylation of cysteine (+57.0214 Da), dynamic oxidation of methionine (+15.995 Da) and dynamic acetylation (+43.006 Da) of lysine and arginine residues.</ns0:p></ns0:div>
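To make the search tolerances above more concrete, the sketch below converts the 10 ppm precursor tolerance into an absolute window at a few representative precursor m/z values; the m/z values are arbitrary examples, not data from this study.

```python
def ppm_half_window(mz: float, ppm: float = 10.0) -> float:
    """Half-width (in m/z units) of a +/- ppm precursor tolerance window."""
    return mz * ppm * 1e-6

for mz in (500.0, 800.0, 1200.0):   # representative precursor m/z values
    print(f"m/z {mz:6.1f}: +/- {ppm_half_window(mz):.4f} (10 ppm)")
```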
<ns0:div><ns0:head>Gene ontology analysis and gene enrichment analysis</ns0:head><ns0:p>We conducted GO analysis using DAVID (https://david.ncifcrf.gov) to categorize the proteins identified by shotgun analysis and used Metascape (http://metascape.org/gp/index.html#/main/step1) for gene enrichment analysis. We uploaded the UNIPROT_ACCESSION No. for each protein.</ns0:p></ns0:div>
<ns0:div><ns0:head>Extracellular vesicle preparation from isolated erythrocytes</ns0:head><ns0:p>Whole blood was collected by the same method as above and centrifuged at 2,330 ×g for 10 min. After removal of the buffy coat and supernatant plasma, the remaining erythrocytes were washed three times by centrifugation at 2,330 ×g for 10 min and the erythrocyte pellet was resuspended in DPBS. EVs were generated from the washed erythrocytes by stimulation in the presence of 2.5 mM CaCl 2 (10 mM HEPES, 0.14 mM NaCl, 2.5 mM CaCl 2 , pH 7.4) for 1 h at room temperature under rotating conditions. Erythrocytes were removed by centrifugation at 2,330 ×g for 10 min and the EV rich supernatant was subsequently centrifuged (18,900 ×g for 30 min) to pellet the EVs. EVs were resuspended in DPBS.</ns0:p></ns0:div>
<ns0:div><ns0:head>Dipeptidyl peptidase IV (DPP4:CD26) activity assay</ns0:head><ns0:p>DPP4 activity was measured in the plasma and urine of six individuals (different from plasma donors). The method was previously published in part <ns0:ref type='bibr' target='#b14'>(Kawaguchi et al. 2010</ns0:ref>). DPP4 activity was measured via the fluorescence intensity of 7-amino-4-methylcoumarin (AMC) after its dissociation from the synthetic substrate (Gly-Pro-AMC • HBr) catalyzed by DPP4. Experiments were performed in 96-well black plates. Titrated AMC was added to each well to prepare a standard curve. Fluorescence intensity was measured after incubating substrate with urine samples for 10 min. The enzyme reaction was terminated by addition of acetic acid. The fluorescence intensity (Ex. = 380 nm and Em. = 460 nm) was measured using Varioskan Flash (Thermo Fisher Scientific™). DPP4 activity assays were performed by Kyushu Pro Search LLP (Fukuoka, Japan).</ns0:p></ns0:div><ns0:div><ns0:head>Results</ns0:head></ns0:div><ns0:div><ns0:head>Isolation and characterization of m/lEVs from plasma and urine.</ns0:head><ns0:p>The workflow for the isolation and enrichment of m/lEVs for proteomic and flow cytometric analyses is illustrated in Fig. <ns0:ref type='figure'>1A and 1B</ns0:ref>. m/lEVs from human plasma samples were isolated by high-speed centrifugation, an approach used in previous studies <ns0:ref type='bibr' target='#b11'>(Jayachandran et al. 2012)</ns0:ref>. For isolation of m/lEVs from urine, DTT, a reducing agent, was used to remove THP polymers because these non-specifically interact with IgGs.</ns0:p><ns0:p>Transmission electron microscopy revealed that almost all m/lEVs were small, closed vesicles with a size of approximately 200 nm that were surrounded by a lipid bilayer (Fig. <ns0:ref type='figure'>1C-H</ns0:ref>).</ns0:p><ns0:p>In plasma, we observed EVs whose membranes were not stained either inside or on the surface (Fig. <ns0:ref type='figure'>1C</ns0:ref>, 1D); we also observed EVs whose forms were slightly distorted (Fig. <ns0:ref type='figure'>1E</ns0:ref>). In urine, a group of EVs with uneven morphology and EVs with interior structures were observed (Fig. <ns0:ref type='figure'>1F-1H</ns0:ref>). Apoptotic bodies, cellular debris, and protein aggregates were not detected.</ns0:p><ns0:p>No EVs with diameters greater than 800 nm were observed by NTA (Supplementary Fig. <ns0:ref type='figure'>S1</ns0:ref>) and flow cytometry can detect only EVs with diameters larger than 200 nm. Together, these data suggested that we characterized m/lEVs between 200 nm and 800 nm in diameter from plasma and urine by flow cytometry analysis.</ns0:p><ns0:p>Side-scatter events from size calibration beads (diameters: 0.22, 0.45, 0.88 and 1.35 μm) were resolved from instrument noise using a FACSVerse flow cytometer (Supplementary Fig. <ns0:ref type='figure'>S2A</ns0:ref>). Inspection of the side-scatter plot indicated that 0.22 μm was the lower limit for bead detection. More than 90% of m/lEVs isolated from plasma and urine showed side-scatter intensities lower than those of 0.88-μm calibration beads (Fig. <ns0:ref type='figure'>2A-D</ns0:ref>). m/lEVs were heterogeneous in size, with diameters ranging from 200-800 nm in plasma and urine (Fig. <ns0:ref type='figure'>2A-D</ns0:ref>). Fluorescently-labeled mouse IgG was used to exclude nonspecific IgG-binding fractions (Supplementary Fig. <ns0:ref type='figure'>S2 B and C</ns0:ref>). In this experiment, we characterized m/lEVs with diameters ranging from 200-800 nm. NTA analysis showed particles smaller than 100 nm in the plasma fraction extracted by centrifugation, but we focused on particles over 200 nm using a flow cytometer.
Using these methods, we observed an average of 8×10⁵ and 1×10⁵ m/lEVs per mL of plasma and urine, respectively, by flow cytometry.</ns0:p></ns0:div>
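A back-of-the-envelope version of how a per-mL estimate like the one above can be derived from flow cytometric counting is sketched below. The 12 µL/min flow rate and the 0.6 mL plasma / 60 µL resuspension volumes come from the methods; the gated event count and acquisition time are hypothetical, and the sketch ignores dilution by antibodies and binding buffer.

```python
# Hypothetical event-to-concentration conversion for gated m/lEV counts.
events_gated = 192_000               # events in the m/lEV gate (hypothetical)
acquisition_min = 2.0                # acquisition time (hypothetical)
flow_rate_ul_per_min = 12.0          # from the instrument settings

analysed_ml = flow_rate_ul_per_min * acquisition_min / 1000.0
conc_resuspension = events_gated / analysed_ml       # per mL of resuspension

# 0.6 mL plasma concentrated into a 60 µL resuspension -> 10x concentration factor.
conc_plasma = conc_resuspension / 10.0
print(f"~{conc_plasma:.1e} m/lEVs per mL of plasma")
```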
<ns0:div><ns0:head>Shotgun proteomic analysis of plasma and urine EVs.</ns0:head><ns0:p>To analyze the protein components and discover characteristic surface antigens of m/lEVs present in the plasma and urine of five healthy individuals, we performed LC-MS/MS proteomic analysis. A total of 593 and 1,793 proteins were identified in m/lEVs from plasma and urine, respectively (Fig. <ns0:ref type='figure' target='#fig_4'>3A</ns0:ref> and Supplementary Table <ns0:ref type='table'>S2</ns0:ref> and Table <ns0:ref type='table'>S3</ns0:ref>). Scoring counts using the SequestHT algorithm for the top 20 most abundant proteins are shown in Tables <ns0:ref type='table' target='#tab_1'>1 and 2</ns0:ref>. We detected cytoskeleton-related proteins such as actin, ficolin-3 and filamin, and cell-surface antigens such as CD5, band 3 and CD41 in plasma. We also identified actin filament-related proteins such as ezrin, radixin, ankyrin and moesin, which play key roles in cell surface adhesion, migration and organization, in both plasma and urine. In urine, several types of peptidases (membrane alanine aminopeptidase or CD13; neprilysin or CD10; DPP4 or CD26) and MUC1 (mucin 1 or CD227) were detected in high abundance, and these proteins were used to characterize m/lEVs by flow cytometric analysis (Table <ns0:ref type='table'>2</ns0:ref> and Supplementary Table <ns0:ref type='table'>S3</ns0:ref>). We demonstrated that the isolated m/lEVs showed high expression of tubulin and actinin, while the tetraspanins CD9 and CD81 that are often used as exosome markers were only weakly identified. These data suggest that m/lEVs differ from small EVs including exosomes (Supplementary Table <ns0:ref type='table'>S4</ns0:ref>).</ns0:p><ns0:p>As shown in Fig. <ns0:ref type='figure' target='#fig_4'>3A</ns0:ref> and Supplementary Fig <ns0:ref type='table'>S3</ns0:ref>, about 10% of urinary EV proteins were also identified in plasma EVs. The presence of blood-derived proteins in urinary EVs was also observed in previous studies <ns0:ref type='bibr' target='#b33'>(Smalley et al. 2008)</ns0:ref>. These results suggest that m/lEVs in plasma were excreted in the urine via renal filtration and not reabsorbed. Gene ontology analysis of the identified proteins indicated overall similar cellular components in plasma and urine m/lEVs (Fig. <ns0:ref type='figure' target='#fig_4'>3B</ns0:ref>). The results of gene set enrichment analysis by Metascape are shown for plasma and urine m/lEVs (Fig. <ns0:ref type='figure' target='#fig_4'>3C</ns0:ref>, D and Supplementary Table <ns0:ref type='table'>S5</ns0:ref> and Table <ns0:ref type='table'>S6</ns0:ref>). The most commonly-observed functions in both plasma and urine were 'regulated exocytosis', 'hemostasis' and 'vesicle-mediated transport'. In plasma, several functions of blood cells were observed, including 'complement and coagulation cascades' and 'immune response'. Moreover, analysis of urinary EVs showed several characteristic functions including 'transport of small molecules', 'metabolic process' and 'cell projection assembly'. This may reflect the nature of the kidney, the urinary system and tubular villi. These data demonstrate the power of data-driven biological analyses.</ns0:p></ns0:div>
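Overlap statements such as "about 10% of urinary EV proteins were also identified in plasma EVs" follow from a simple set intersection of the two identification lists (e.g., UniProt accessions from Supplementary Tables S2 and S3). A minimal sketch with placeholder accessions:

```python
# Placeholder accession sets; in practice these would be read from Tables S2/S3.
plasma_ids = {"P68871", "P02768", "P04406", "P61224"}
urine_ids = {"P08473", "P15144", "P27487", "P02768", "P04406"}

shared = plasma_ids & urine_ids
pct_of_urine = 100.0 * len(shared) / len(urine_ids)
print(f"{len(shared)} shared proteins = {pct_of_urine:.1f}% of the urine list")
```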
<ns0:div><ns0:head>Characterization of plasma EVs by flow cytometry.</ns0:head><ns0:p>Next, we characterized m/lEVs in plasma by flow cytometry using antibodies against several surface antigens and Annexin V. To eliminate nonspecific adsorption, we excluded the mouse IgG-positive fraction (Supplementary Fig. <ns0:ref type='figure'>S2B</ns0:ref>). Eliminating non-specific antibody reactions is important when using human body fluids as diagnostic materials for immunological measurements. By adding mouse IgG-APC to the system, we obtained accurate flow cytometry images in which specific surface antigens were recognized, based on two points: 1) blocking of non-specific reaction sites, and 2) gating out of non-specific positive reactions. We characterized positive m/lEVs using surface antigens detected by shotgun proteomic analysis and Annexin V (Fig. <ns0:ref type='figure'>4A-L</ns0:ref>).</ns0:p><ns0:p>To characterize m/lEVs derived from erythrocytes, T and B cells, macrophages/monocytes, granulocytes, platelets and endothelial cells, we selected the nine antigens described in Fig. <ns0:ref type='figure'>4A</ns0:ref>. Two or more antigens were used for characterization of m/lEVs: for example, CD59 and CD235a double-positive and CD45-negative m/lEVs were classified as erythrocyte-derived m/lEVs (Supplementary Fig. <ns0:ref type='figure'>S4B</ns0:ref>). We confirmed that m/lEVs generated from isolated erythrocytes in vitro and erythrocyte-derived m/lEVs from plasma were both CD59/CD235a double-positive and CD45-negative (Supplementary Fig. <ns0:ref type='figure'>S5</ns0:ref>). Having determined the positive area by addition of EDTA (Supplementary Fig. <ns0:ref type='figure'>S2D</ns0:ref>), we also show Annexin V staining for the m/lEVs corresponding to these five classifications (Fig. <ns0:ref type='figure'>4B-L</ns0:ref>). We integrated these characterizations and assessed the distribution of EV classifications among ten healthy subjects (Fig. <ns0:ref type='figure'>4M</ns0:ref>). The results suggested that there were no major differences in the ratios of these fractions among the ten subjects, and thus these definitions may be used for pathological analysis.</ns0:p><ns0:p>We found that 10% and 35% of m/lEVs were derived from erythrocytes and platelets, respectively. However, only 0.5%, 0.6% and 0.1% of m/lEVs were derived from macrophages, leukocytes and endothelial cells, respectively, suggesting that the ratio of m/lEVs of different cellular origins is dependent on the number of cells present in plasma (Fig. <ns0:ref type='figure'>4M</ns0:ref>). We also observed that most m/lEVs derived from erythrocytes and macrophages were Annexin V positive (Fig. <ns0:ref type='figure'>4 N and O</ns0:ref>). By contrast, many Annexin V negative m/lEVs were identified among platelet- and T and B cell-derived m/lEVs (Fig. <ns0:ref type='figure'>4 P and Q</ns0:ref>). Regarding erythrocyte-derived m/lEVs in particular, other studies have shown high percentages of phosphatidylserine-positive (Annexin V positive) m/lEVs after red blood cell storage under blood bank conditions, and our results are consistent with these reports <ns0:ref type='bibr' target='#b9'>(Gao et al. 2013</ns0:ref><ns0:ref type='bibr'>)(Xiong et al. 2011)</ns0:ref>.</ns0:p><ns0:p>In general, microparticles in blood are known to expose PS on their surface, which can be verified by Annexin V staining. We found by Annexin V staining that the degree of phosphatidylserine exposure on the membrane surface varied depending on the cell of origin.
Thus, the characteristics of m/lEVs can be determined in detail by combining Annexin V staining with surface antigenicity. These results suggested that the degree of PS exposure is cell-type specific and that release mechanisms may differ among cell types.</ns0:p></ns0:div>
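The cell-of-origin assignments described above are Boolean combinations of thresholded surface markers. The sketch below illustrates the idea with the erythrocyte rule stated in the text (CD59+/CD235a+/CD45−); the event table, thresholds and all other rules are hypothetical placeholders rather than the gating strategy actually used.

```python
import pandas as pd

# Hypothetical per-event marker calls after thresholding (True = positive).
events = pd.DataFrame({
    "CD59":     [True, True, False, True],
    "CD235a":   [True, False, False, True],
    "CD45":     [False, False, True, False],
    "AnnexinV": [True, False, False, True],
})

# Erythrocyte-derived rule from the text: CD59+ and CD235a+ and CD45-.
erythro = events["CD59"] & events["CD235a"] & ~events["CD45"]
events["origin"] = "unclassified"
events.loc[erythro, "origin"] = "erythrocyte-derived"

print(events["origin"].value_counts())
print("Annexin V+ fraction of erythrocyte-derived events:",
      events.loc[erythro, "AnnexinV"].mean())
```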
<ns0:div><ns0:head>Characterization of urinary EVs by flow cytometry and enzyme activity assay.</ns0:head><ns0:p>In urine, we first removed aggregated m/lEVs and residual THP polymers using labelled normal mouse IgG (Supplementary Fig. <ns0:ref type='figure'>S2C</ns0:ref>). By removing the THP polymer with DTT treatment, many immunological non-specific reactions in flow cytometry observation were eliminated, and the remaining non-specific reactions were completely excluded from the observed image by gating out the mouse IgG-positive events (Supplementary Fig. <ns0:ref type='figure'>S2F</ns0:ref>). To characterize urinary m/lEVs, we used surface antigens detected by shotgun proteomic analysis including CD10 (neprilysin), CD13 (alanine aminopeptidase), CD26 (DPP4) and CD227 (MUC1) (Fig. <ns0:ref type='figure'>5A-F</ns0:ref>). Many m/lEVs in the observation area were triple-positive for CD10, CD13 and CD26, but negative for Annexin V (Fig. <ns0:ref type='figure'>5B-D</ns0:ref>, Supplementary Fig. <ns0:ref type='figure'>S6</ns0:ref>). Furthermore, MUC1-positive EVs were Annexin V positive and negative at roughly equivalent frequencies (Fig. <ns0:ref type='figure'>5B, E and F</ns0:ref>). These results suggested that m/lEVs containing peptidases were released by outward budding directly from the ciliary membrane of renal proximal tubule epithelial cells. The results of integrating these characterizations and the distribution of EV classifications among ten healthy subjects are shown
(Fig. <ns0:ref type='figure'>5G-I</ns0:ref>). These data indicated no major differences in the ratio among these populations, suggesting that our methodology was reliable for m/lEV analysis.</ns0:p><ns0:p>We next verified the CD26 peptidase enzyme activities of m/lEVs in plasma and urine from six individuals. We prepared three fractions: (i) 'whole', in which debris were removed after low speed centrifugation, (ii) 'm/lEV' and (iii) 'free (supernatant)', the latter two of which were obtained via high speed centrifugation (18,900 ×g for 30 min) (Fig. <ns0:ref type='figure'>5J</ns0:ref>). We found that more than 20% of DPP4 activity in whole urine was contributed by the EV fraction (Fig. <ns0:ref type='figure'>5K</ns0:ref> and Supplementary Fig. <ns0:ref type='figure'>S7</ns0:ref>). By contrast, there was no peptidase activity associated with plasma m/lEVs (Fig. <ns0:ref type='figure'>5L</ns0:ref>). These results suggested that functional CD26 peptidase activity is present in m/lEVs in urine, which may be useful for pathological analysis.</ns0:p></ns0:div>
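The DPP4 measurements above reduce to two steps: converting fluorescence to released AMC via the standard curve, and expressing the m/lEV fraction as a percentage of whole-urine activity. The sketch below illustrates both steps; all fluorescence readings and the standard curve are hypothetical, with only the 10 min incubation taken from the assay description.

```python
import numpy as np

# Hypothetical AMC standard curve (fluorescence vs. µM AMC) and sample readings.
amc_um = np.array([0.0, 0.5, 1.0, 2.0, 4.0])
rfu = np.array([12.0, 410.0, 805.0, 1620.0, 3230.0])
slope, intercept = np.polyfit(amc_um, rfu, 1)

def amc_released_um(sample_rfu: float) -> float:
    """Convert a fluorescence reading to µM AMC using the standard curve."""
    return (sample_rfu - intercept) / slope

incubation_min = 10.0
readings = {"whole": 1530.0, "m/lEV": 360.0, "free": 1150.0}   # hypothetical RFU

activity = {k: amc_released_um(v) / incubation_min for k, v in readings.items()}
ev_share = 100.0 * activity["m/lEV"] / activity["whole"]
print(activity, f"m/lEV share of whole-urine activity ~ {ev_share:.0f}%")
```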
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>In this study, we analyzed m/lEVs using various analytic techniques and found the following four major results. First, it was possible to characterize m/lEVs using multiple surface markers. Second, m/lEVs bear functional enzymes with demonstrable enzyme activity on the vesicle surface. Third, there are probably differences in membrane lipid asymmetry depending on the cells of origin. Finally, there was little variation in m/lEVs in the plasma and urine of healthy individuals, indicating that our method is useful for identifying cell-derived m/lEVs in these body fluids.</ns0:p><ns0:p>We isolated m/lEVs from plasma and urine that were primarily 200-800 nm in diameter as shown by transmission electron microscopy. A large proportion of proteins detected in m/lEVs using shotgun proteomic analysis were categorized as plasma membrane proteins. Isolation of m/lEVs by centrifugation is a classical technique, but in the present study we further separated and classified the m/lEVs according to their cell types of origin by flow cytometry. The results indicated the validity of the differential centrifugation method <ns0:ref type='bibr' target='#b0'>(Biro et al. 2003;</ns0:ref><ns0:ref type='bibr' target='#b24'>Piccin et al. 2015a</ns0:ref>).</ns0:p><ns0:p>Pang et al. <ns0:ref type='bibr' target='#b21'>(Pang et al. 2018</ns0:ref>) reported that integrin outside-in signaling is an important mechanism for microvesicle formation, in which the procoagulant phospholipid phosphatidylserine (PS) is efficiently externalized to release PS-exposed microvesicles (MVs). These platelet-derived Annexin V positive MVs were induced by application of a pulling force via an integrin ligand such as shear stress. This exposure of PS allows binding of important coagulation factors, enhancing the catalytic efficiencies of coagulation enzymes. We observed that 50% of m/lEVs derived from leukocytes and platelets were Annexin V positive, suggesting that these cells release PS-positive m/lEVs during activation, inflammation, and injury. It would be interesting to further investigate whether the ratio of Annexin V positive m/lEVs from platelets or leukocytes is an important diagnostic factor for inflammatory disease or tissue injury.</ns0:p><ns0:p>In urinary m/lEVs, we identified aminopeptidases such as CD10, CD13 and CD26, which are localized in proximal renal tubular epithelial cells. The functions of these proteins relating to exocytosis were categorized by gene enrichment analysis. The cilium in the kidney is the site at which a variety of membrane receptors, enzymes and signal transduction molecules critical to many cellular processes function. In recent years, ciliary ectosomes -bioactive vesicles released from the surface of the cilium -have attracted attention <ns0:ref type='bibr' target='#b18'>(Nager et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b22'>Phua et al. 2017;</ns0:ref><ns0:ref type='bibr'>Wood & Rosenbaum 2015)</ns0:ref>. We also identified ESCRT complex proteins involved in ciliary ectosome formation (CHAMP; Supplementary Tables <ns0:ref type='table'>S3 and S4</ns0:ref>) in our proteomic analyses, suggesting that our isolation method was valid and raising the possibility that these proteins could serve as biomarkers of kidney disease.
Because triple-peptidase-positive m/lEVs were negative for Annexin V, the mechanism of budding from cells may not be dependent on scramblase <ns0:ref type='bibr'>(Wood & Rosenbaum 2015)</ns0:ref>. Platelet-derived m/lEVs are the most abundant population of extracellular vesicles in blood, and their presence <ns0:ref type='bibr' target='#b25'>(Piccin et al. 2007</ns0:ref>) and connection with tumor formation were reported in a recent study <ns0:ref type='bibr'>(Zmigrodzka et al. 2016)</ns0:ref>. In our study, platelet-derived EVs were observed in healthy subjects and had the highest abundance of Annexin V-positive EVs. In plasma, leukocyte-derived EVs were defined as CD11b/CD66b- or CD15-positive <ns0:ref type='bibr' target='#b32'>(Sarlon-Bartoli et al. 2013)</ns0:ref>. We characterized macrophage/monocyte/granulocyte- and T/B cell-derived EVs based on two specific CD antigens, and we confirmed that EVs derived from these cells were very rare.</ns0:p><ns0:p>Importantly, there was little variation in the cellular origins of m/lEVs in samples from ten healthy individuals, indicating that this method was useful for identifying cell-derived m/lEVs. We plan to examine differences in m/lEVs in patients with these diseases in the near future.</ns0:p><ns0:p>Erythrocyte-derived EVs were also characterized by their expression of CD235a and glycophorin A by flow cytometry <ns0:ref type='bibr' target='#b8'>(Ferru et al. 2014;</ns0:ref><ns0:ref type='bibr'>Zecher et al. 2014)</ns0:ref>.</ns0:p><ns0:p>We also characterized m/lEVs in urine. In kidneys, and particularly in the renal tubule, CD10, CD13 and CD26 can be detected in high abundance by immunohistochemical staining (website: The Human Protein Atlas). CD10/CD13 double-positive labeling can be used for isolation and characterization of primary proximal tubular epithelial cells from human kidney <ns0:ref type='bibr'>(Van der Hauwaert et al. 2013)</ns0:ref>. DPP4 (CD26) is a potential biomarker in urine for diabetic kidney disease and the presence of urinary m/lEV-bound DPP4 has been demonstrated <ns0:ref type='bibr' target='#b34'>(Sun et al. 2012)</ns0:ref>. The presence of peptidases on the m/lEV surface, and their major contribution to peptidase activity in whole urine <ns0:ref type='bibr' target='#b34'>(Sun et al. 2012)</ns0:ref>, may suggest a functional contribution to reabsorption in the proximal tubules. These observations suggested that the ratio of DPP activity between m/lEVs and total urine can be an important factor in the diagnosis of kidney disease. MUC1 can also be detected in kidney and urinary bladder by immunohistochemical staining (website: The Human Protein Atlas). Significant increases of MUC1 expression in cancerous tissue and in the intermediate zone compared with normal renal tissue distant from the tumor were observed <ns0:ref type='bibr' target='#b1'>(Borzym-Kluczyk et al. 2015)</ns0:ref>. In any case, MUC1-positive EVs are thought to be more likely to be derived from the tubular epithelium or the urothelium.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>Use of EVs as diagnostic reagents with superior disease and organ specificity for liquid biopsy samples is a possibility. This protocol will allow further study and in-depth characterization of EV profiles in large patient groups for clinical applications. We will attempt to identify novel biomarkers by comparing healthy subjects and patients with various diseases.</ns0:p></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Fig.</ns0:head><ns0:label /><ns0:figDesc>Fig. Flow cytometry was performed using FACSuite™ software (BD Biosciences) and data were analyzed using FlowJo software. The authors have applied for the following patents for the characterization method of m/lEVs isolated from plasma and urine with a flow cytometer: JP2018-109402(plasma) and JP2018-109403(urine).Nanoparticle tracking analysis (NTA)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,306.37,525.00,387.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 . Twenty most abundant proteins identified in plasma m/lEVs</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Protein name</ns0:cell></ns0:row></ns0:table><ns0:note>1</ns0:note></ns0:figure>
</ns0:body>
" | "Takashi Funatsu
Academic Editor,
PeerJ Analytical Chemistry
Nov 20th 2019
Dear Dr. Takashi Funatsu
Here is presented our revised manuscript entitled “Characterization and function of medium and large extracellular vesicles from plasma and urine by surface antigens and Annexin V” by Igami K. et al.
We thank the reviewers for their evaluation of our manuscript. The comments were carefully considered and proved very useful for improving the manuscript. We have responded to the referees’ comments by performing additional experiments and making corresponding modifications to the manuscript.
We feel that the study has been significantly improved by the reviewer’s thoughtful comments. We are therefore submitting a revised version of the manuscript in the hope that it will meet with your consideration and will be judged suitable for publication as an article in PeerJ.
We look forward to hearing from you at your earliest convenience.
Yours sincerely,
Takeshi Uchiumi, M.D., Ph.D.
Department of Clinical Chemistry and Laboratory Medicine
Kyushu University Graduate School of Medical Sciences
3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan.
Tel: +81-92-642-5750, Fax: +81-92-642-5772
E-mail: [email protected]
Reviewers' 1 Comments to Author
We sincerely appreciate that you provided several critical comments. We are sure that your suggestions were very important for improvement of our manuscript. We added several experiments and extensively revised the manuscript according to your comments.
Comments from Reviewer #1
1. ABSTRACT
This should be rewritten!
1. Need a sentence to introduce first EV then move on introducing m/lEV
2. However give also the size 200-800 nm here, not in the method.
3. please in the method need to mention that samples were drawn by healthy people
Were these blood donors? How do you ensure that they were healthy? Did you measure other key parameters such as for example FBC and urine creatinine? if yes please provide these data in a Table
According to reviewer’s comment, we rewrote the abstract in this revised manuscript.
“Extracellular vesicles (EVs) are released by most cell types and are involved in multiple basic biological processes. Medium/large EVs (m/lEVs), which differ in size from exosomes, play an important role in blood coagulation and are also secreted by cancer cells, suggesting functions related to malignant transformation. m/lEV levels in blood or urine may help unravel pathophysiological findings in many diseases. However, it remains unclear how many naturally-occurring m/lEV subtypes exist as well as how their characteristics and functions differ from one another.
Methods: We used blood and urine samples from 10 healthy donors each for analysis. Using a flow cytometer, we focused on characterizing EVs of large size (>200 nm), which are distinct from exosomes. We also searched for membrane proteins suitable for characterization with a flow cytometer using shotgun proteomics. We then isolated m/lEVs pelleted from plasma and urine samples by differential centrifugation and characterized them by flow cytometry.” The knowledge of these new characteristics is considered to be useful for diagnostic purposes, and the newly developed method suggests the possibility of clinical application.
2. INTRODUCTION
1. line 51
add those more references:
Intern Med J. 2017 Oct;47(10):1173-1183. doi: 10.1111/imj.13550
Transl Res. 2017 Jun;184:21-34. doi: 10.1016/j.trsl.2017.02.001.
Blood Transfus. 2015 Apr;13(2):172-3. doi: 10.2450/2014.0276-14.
Acta Haematol. 2014;132(2):199.
According to reviewer’s comment, we added the four reference in Introduction (line 62-63).
2. line 65,
need to also comment on EV in VOD and on ET use these references:
Intern Med J. 2017 Oct;47(10):1173-1183. doi: 10.1111/imj.13550
Transl Res. 2017 Jun;184:21-34. doi: 10.1016/j.trsl.2017.02.001.
According to reviewer’s comment, we added the two reference in Introduction (line 79).
3. MATERIAL AND METHODS
1. when reporting industrial data or product the copyright sign ' © 'should be included e.g Thermo Fisher, Wako etc
According to reviewer’s comment, we changed the copyright sign © or TM in material and method in this revised manuscript. (line 103, 112, 129, 231,233,237,238,245,276)
2.As above: healthy people how did you ensure this?
Were these blood donors? How do you ensure that they were healty?
Did you measure other key parameters such as for example FBC and urine creatinine?
if yes please provide these data in an additional Table
According to reviewer’s comment, we added the detail data about the healthy donors in Material and Method. We also added the Supplemental Table S1 which show the FBC and urinary creatinine.
In line 123 we added the sentence “In particular, we confirmed that the samples used for analysis by flow cytometry came from healthy donors by measuring blood counts, blood creatinine and total urine protein (Supplementary Table S1).”
3. line 322 we next examined= rephrase it
According to reviewer’s comment, we changed the sentence “We next verified the CD26 peptidase enzyme activities of m/lEVs in plasma and urine from six individuals” (line 404)
4.. DISCUSSION
line 353 delete 'these cells are exposed to shear stress during blood flow and'
According to reviewer’s comment, we deleted the this sentence in this revised manuscript.
(line 437)
5.. CONCLUSION
Delete first sentence: ' We made...derived.'
According to reviewer’s comment, we deleted this first sentence in this revised manuscript. (line 489)
Comments from Reviewer #2
Basic reporting
This manuscript describes the proteome analysis of medium and large extracellular vesicles (m/lEVs) from plasma and urine. I can not find out the importance and novelty of this study from the viewpoint of analytical chemistry, because authors used general analytical procedures and their findings does not directly lead to a new analytical method. Hence, I can not understand why this manuscript was submitted to an analytical chemistry journal. Please add the novelty, importance, and progress of this study in terms of analytical chemistry more concretely.
In this manuscript, we analyzed and characterized medium and large extracellular vesicles (m/lEVs) from plasma and urine. We discovered new properties and confirmed the m/lEV characteristics using two or more antibodies. We also developed a new method that eliminates non-specific reactions in urine samples and found that m/lEVs in urine have enzymatic activity. These findings support the validity and effectiveness of laboratory tests and are considered essential for future analysis of clinical specimens.
Point 1.
In healthy human m/lEVs (from plasma and urine), we have discovered new properties (double-positive membrane antigens), and we suggest that these properties can be applied as new analytical diagnostic indicators by comparing them with patient-specific m/lEVs.
In Plasma m/lEVs
1) We could distinguish m/lEVs derived from platelets, erythrocytes, monocytes/macrophages, T and B cells, and vascular endothelial cells by more than two surface antigens.
2) It was found by annexin V staining that the degree of exposure of phosphatidylserine to the membrane surface differs depending on the derived cells.
In Urine m/lEVs
1) We could distinguish two types of m/lEVs: triple peptidase (CD10, CD13, CD26)-positive and MUC1-positive.
2) Urinary m/lEVs has CD26 enzyme activity actually.
Point 2
Eliminating non-specific reactions to antibodies is important in using human body fluids as diagnostic materials for immunological measurements. When using blood and urine for samples, especially in urine, there are many THP polymers that bind to antibodies, so it was necessary to eliminate nonspecific reactions to these in performing flow cytometry. By adding mouse IgG-APC to the system, we observed accurate flow cytometry image in which specific surface antigens were recognized by following two points: 1) blocking of non-specific reaction sites, 2) gate-out of positive non-specific reaction.
According to reviewer’s comment we changed the some sentences in this revised manuscript.
Experimental design
Although authors said that they collected 200-800 nm m/lEVs by differential centrifugation, EVs larger than 800 nm exist in your samples by the flow cytometric analysis (Figure 2). Authors used nanoparticle tracking analysis (NTA) (line 242) which is a useful instrument for size, size distribution and particle number analyses of the particles, please add the analytical results of your samples by NTA in the revised manuscript.
According to reviewer’s comment we added results of NTA analysis in this revised manuscript (Supplementary Fig.S1) (line 294-295)
We also added the sentence “NTA analysis shows less than 100nm size particles in the plasma fraction extracted by centrifugation, but we focused on vesicles over 200nm using a flow cytometer.” (line 306-308)
Validity of the findings
This study is important in the field of EVs.
Comments from Reviewer #3
Comments for the Author
The manuscript entitled “Characterization and function of medium and large extracellular vesicles from plasma and urine by surface antigens and Annexin V” shows that m/lEVs derived from plasma and urine can be characterized by flow cytometry analysis using some membrane proteins and their methodology can distinguish m/lEVs from each type of cells. Moreover, they revealed that DPP4 activity is present in m/lEVs from urine, but not m/lEVs from plasma. These methods are effective in quality control of m/lEVs from plasma and urine, however, the impact is lost by not indicating the clinical applications and I found no major flaws in this manuscript.
The following points should be addressed:
1. What is the aim of this study? If the authors claim that the novelty of this study is development of m/lEVs detection methods, some researcher already reported the EV detection methods using flow cytometer. Moreover, if the authors want to use this method for clinical applications, the authors need to clearly articulate why this study was undertaken (what clinical situations?).
As the next stage, we are considering testing for blood diseases (lymphoma, leukemia, aplastic anemia, thrombocytopenic purpura, myelodysplastic syndromes) using blood m/lEVs, and for renal diseases and bladder cancer using urine m/lEVs. In order to use patient m/lEVs as diagnostic material, we needed the following two developments.
1. Characterize healthy human EVs in body fluids in detail and discover new properties.
2. Accurately eliminate non-specific reactions in body fluid m/lEV measurements.
1. New properties
Plasma m/lEVs
1) We could distinguish m/lEVs derived from platelets, erythrocytes, monocytes/macrophages, T and B cells, and vascular endothelial cells using two or more surface antigens.
2) Annexin V staining showed that the degree of phosphatidylserine exposure on the membrane surface differed depending on the cell from which the m/lEVs were derived.
Urine m/lEVs
1) We could distinguish two types of m/lEVs: those positive for three peptidases (CD10, CD13 and CD26) and those positive for MUC1 (a urinary epithelial marker for bladder cancer).
2) Urinary m/lEVs indeed possess CD26 enzyme activity.
2. Eliminate non-specific reactions
Eliminating non-specific antibody reactions is important when human body fluids are used as diagnostic materials for immunological measurements. When blood and urine are used as samples, and especially in urine, many THP polymers bind to antibodies, so these non-specific reactions had to be eliminated before performing flow cytometry. By adding mouse IgG-APC to the system, we obtained accurate flow cytometry images in which specific surface antigens were recognized, through two measures: 1) blocking of non-specific reaction sites, and 2) gating out of positive non-specific reactions.
2. The authors described “data not shown” in this manuscript, however, the authors should show the all data in manuscript.
In accordance with the reviewer's comment, we added the results of the NTA analysis to this revised manuscript (Supplementary Fig. S1) (lines 294-295).
We also added the sentence “NTA analysis shows less than 100nm size particles in the plasma fraction extracted by centrifugation, but we focused on particles over 200nm using a flow cytometer.” (line 306-308)
3. Although the authors claimed that CD59 and CD235a double-positive and CD45-negative m/lEVs were classified as erythrocyte-derived m/lEVs etc…, there is no evidence shown in the manuscript. The authors should indicate CD59 and CD235a double-positive and CD45-negative m/lEVs are derived from erythrocyte using pure culture system (I mean the authors culture erythrocyte and purification of m/lEVs from this culture supernatant).
In accordance with the reviewer's comment, we added the characterization of isolated erythrocyte-derived vesicles to Supplementary Fig. S5.
We also added the sentence "We confirmed that m/lEVs isolated from erythrocytes in vitro and erythrocyte-derived m/lEVs from plasma are characterized by the same CD antigens." to this revised manuscript (lines 360-362).
4. Why did the authors use Annexin V as a marker of m/lEVs? Please explain more details.
In accordance with the reviewer's comment, we added the following sentences to the Results (line 379):
“In general, microparticles in blood are known to expose PS on their surface, which is verified by positive Annexin V staining. We found by Annexin V staining that the degree of phosphatidylserine exposure on the membrane surface varied depending on the cell of origin. Thus, the characteristics of m/lEVs can be determined in detail by using Annexin V together with two or more surface antigens. These results suggest that the degree of PS exposure is cell-type specific and that release mechanisms may differ among these cell types.”
These are considered to be new properties that characterize m/lEVs, and they may also provide a useful viewpoint for diagnostic materials.
" | Here is a paper. Please give your review comments after reading it. |
680 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. Extracellular vesicles (EVs) are released by most cell types and are involved in multiple basic biological processes. Medium/large EVs (m/lEVs), which differ in size from exosomes, play an important role in blood coagulation and are also secreted from cancer cells, suggesting functions related to malignant transformation. The levels of m/lEVs in blood or urine may help unravel pathophysiological findings in many diseases. However, it remains unclear how many naturally occurring m/lEV subtypes exist, and how their characteristics and functions differ from one another.</ns0:p><ns0:p>Methods. We used blood and urine samples from ten healthy donors for analysis. Using a flow cytometer, we focused on the characterization of EVs of large size (>200 nm) that are distinct from exosomes. We also searched for membrane proteins suitable for characterization by flow cytometry using shotgun proteomics. We then pelleted m/lEVs from plasma and urine samples by differential centrifugation and characterized them by flow cytometry.</ns0:p><ns0:p>Results. Using proteomic profiling, we identified several proteins involved in m/lEV biogenesis, including adhesion molecules, peptidases and exocytosis regulatory proteins. In healthy human plasma, we could distinguish m/lEVs derived from platelets, erythrocytes, monocytes/macrophages, T and B cells, and vascular endothelial cells using two or more positive surface antigens. The proportion of phosphatidylserine appearing on the membrane surface differed depending on the cellular origin of the m/lEVs. In urine, 50% of m/lEVs were Annexin V negative but contained various membrane peptidases derived from renal tubular villi. Urinary m/lEVs, but not plasma m/lEVs, showed peptidase activity. Knowledge of these new characteristics is considered to be useful for diagnostic applications.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Extracellular vesicles (EVs) play essential roles in cell-cell communication and are diagnostically significant molecules. EVs are secreted from most cell types under normal and pathophysiological conditions <ns0:ref type='bibr' target='#b10'>(Iraci et al. 2016;</ns0:ref><ns0:ref type='bibr' target='#b20'>Ohno et al. 2013</ns0:ref>). These membrane vesicles can be detected in many human body fluids and are thought to have signaling functions in interactions between cells. Analysis of EVs may have applications in therapy, prognosis, and biomarker development in various fields. The hope is that using EV analysis, clinicians will be able to detect the presence of disease as well as to classify its progression using noninvasive methods such as liquid biopsy <ns0:ref type='bibr' target='#b2'>(Boukouris & Mathivanan 2015;</ns0:ref><ns0:ref type='bibr' target='#b23'>Piccin 2014;</ns0:ref><ns0:ref type='bibr' target='#b24'>Piccin et al. 2015a;</ns0:ref><ns0:ref type='bibr' target='#b26'>Piccin et al. 2017a;</ns0:ref><ns0:ref type='bibr' target='#b28'>Piccin et al. 2017b;</ns0:ref><ns0:ref type='bibr' target='#b29'>Piccin et al. 2015b)</ns0:ref>.</ns0:p><ns0:p>Medium/large extracellular vesicles (m/lEVs) can be classified based on their cellular origins, biological functions and biogenesis <ns0:ref type='bibr' target='#b6'>(El Andaloussi et al. 2013)</ns0:ref>. In a broad sense, they can be classified into m/lEVs with diameters of 100-1000 nm diameter (membrane blebs) and smaller EVs (e.g. exosomes) with diameters of 30-150 nm <ns0:ref type='bibr' target='#b30'>(Raposo & Stoorvogel 2013;</ns0:ref><ns0:ref type='bibr' target='#b31'>Robbins & Morelli 2014)</ns0:ref>. The m/lEVs are generated by direct outward budding from the plasma membrane (D'Souza-Schorey & Clancy 2012), while smaller EVs (e.g. exosomes) are produced via the endosomal pathway with formation of intraluminal vesicles by inward budding of multivesicular bodies (MVBs) <ns0:ref type='bibr' target='#b30'>(Raposo & Stoorvogel 2013)</ns0:ref>. In this study, we analyzed the physical characteristics of EVs from 200 nm to 800 nm in diameter which we refer to as m/lEVs as per the MISEV2018 guidelines <ns0:ref type='bibr'>(Thery et al. 2018)</ns0:ref>.</ns0:p><ns0:p>Recently, the clinical relevance of EVs has attracted significant attention. In particular, m/lEVs play an important role in tumor invasion <ns0:ref type='bibr' target='#b4'>(Clancy et al. 2015)</ns0:ref>. m/lEVs in blood act as a coagulant factor and have been associated with sickle cell disease, sepsis, thrombotic thrombocytopenic purpura, and other diseases <ns0:ref type='bibr' target='#b25'>(Piccin et al. 2007;</ns0:ref><ns0:ref type='bibr' target='#b26'>Piccin et al. 2017a;</ns0:ref><ns0:ref type='bibr' target='#b28'>Piccin et al. 2017b)</ns0:ref>. A possible role for urinary m/lEVs in diabetic nephropathy was also reported <ns0:ref type='bibr' target='#b34'>(Sun et al. 2012)</ns0:ref>. In recent years, the clinical applications of exosomes have been developed <ns0:ref type='bibr'>(Yoshioka et al. 2014</ns0:ref>). However, because characterization of exosomes is analytically challenging, determining the cells and tissues from which exosomes are derived can be difficult. m/lEVs are PeerJ An. Chem. 
reviewing PDF | (ACHEM-2019:09:41205:2:0:NEW 20 Dec 2019)</ns0:p><ns0:p>Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science generated differently from exosomes <ns0:ref type='bibr' target='#b17'>(Mathivanan et al. 2010)</ns0:ref> but are similar in size and contain many of the same surface antigens. It is widely hypothesized that complete separation of exosomes and m/lEVs is likely to be a major challenge, and more effective techniques to purify and characterize m/lEVs would be extremely valuable.</ns0:p><ns0:p>In this study, we focused on m/lEVs in plasma and urine, which are representative body fluids in clinical laboratories. We purified for m/lEVs based on differential centrifugation and characterized m/lEVs by flow cytometry and mass spectrometry analysis and described the basic properties (characterizing surface antigen and orientation of phosphatidylserine and activity of the enzymes) of m/lEV subpopulations in blood and urine. healthy donors by measuring blood count and creatinine in blood and total urine protein (Supplementary Table.S1).</ns0:p></ns0:div>
<ns0:div><ns0:head>Isolation of plasma m/lEVs</ns0:head><ns0:p>Essentially platelet-free plasma (PFP) was prepared from EDTA-treated blood by double centrifugation at 2,330 ×g for 10 min. To assess residual platelets remaining in this sample, we measured platelet number using the ADVIA® 2120i Hematology System (SIEMENS Healthineers, Erlangen Germany). The number of platelets in this sample was below the limit of detection (1×10 3 cells/μL). We used a centrifugation method to obtain m/lEVs. In an effort to ensure our approach could be applied to clinical testing, we chose a simple and easy method for pretreatment. In an ISEV position paper <ns0:ref type='bibr'>(Mateescu et al. 2017</ns0:ref>), Thery's group referred to vesicles sedimenting at 100,000 ×g as 'small EVs' rather than exosomes, those pelleting at intermediate speed (lower than 20,000 ×g) as 'medium EVs' (including microvesicles and ectosomes) and those pelleting at low speed (e.g., 2000 ×g) as 'large EVs'. Because these definitions are less biologically meaningful but more experimentally tractable than previouslyapplied exosome/microvesicle definitions, we attempted biological characterization through subsequent shotgun and flow cytometry analysis.</ns0:p><ns0:p>In flow cytometric analysis, the volume of PFP used in each assay was 0.6 mL from each donor. In electron microscopy, the volume of PFP used was 3 mL. Samples were independent and were treated individually prior to each measurement. PFP was centrifuged at 18,900 ×g for 30 min in a fixed-angle rotor. The m/lEV pellet obtained after centrifugation was reconstituted by vortex mixing (1-2 min) with an equivalent volume of Dulbecco's phosphate-buffered saline (DPBS), pH 7.4. The solution was centrifuged at 18,900 ×g for 30 min again and the supernatant was discarded.</ns0:p></ns0:div>
<ns0:div><ns0:head>Isolation of urinary m/lEVs</ns0:head><ns0:p>For isolation of urinary m/lEVs, we modified a urinary exosome extraction protocol <ns0:ref type='bibr' target='#b7'>(Fernandez-Llama et al. 2010)</ns0:ref>. The centrifugation conditions were identical for plasma and urine so that the size and the density of m/lEVs were similar, enabling comparison of plasma and urinary m/lEVs. Manuscript to be reviewed Chemistry Journals <ns0:ref type='bibr'>Analytical, Inorganic, Organic, Physical, Materials Science</ns0:ref> In flow cytometric analysis, the volume of urine used for each assay was 1.2 mL from each donor. In electron microscopy, the volume of urine used was 15 mL. Samples were independent and were treated individually prior to each measurement. Collected urine was centrifuged at 2,330 ×g for 10 min twice. The supernatant was centrifuged at 18,900 ×g for 30 min in a fixed-angle rotor. The m/lEV pellet obtained from centrifugation was reconstituted by vortex mixing (1-2 min) with 0.2 mL of DPBS followed by incubation with DTT (final concentration 10 mg/mL) at 37°C for 10-15 min. The samples were centrifuged again at 18,900 ×g for 30 min and the supernatant was discarded. Addition of DTT, a reducing agent, reduced the formation of Tamm-Horsfall protein (THP) polymers. THP monomers were removed from m/lEVs after centrifugation. DTT-containing DPBS solutions were filtered through 0.1-μm filters (Millipore).</ns0:p></ns0:div>
<ns0:div><ns0:head>Flow cytometric analysis of m/lEVs</ns0:head><ns0:p>After resuspending m/lEV pellets in 60 μL of DPBS, we added saturating concentrations of several labelled antibodies, Annexin V and normal mouse IgG and incubated the tubes in the dark, without stirring, for 15-30 min at room temperature. In one case, we added labelled antibodies directly to 60 μL of PFP for staining. We resuspended stained fractions in Annexin V binding buffer (BD Biosciences: 10 mM HEPES, 0.14 mM NaCl, 2.5 mM CaCl 2 , pH 7.4) for analysis by flow cytometry. DPBS and Annexin V binding buffer were filtered through 0.1-μm filters (Millipore). Flow cytometry was performed using a FACSVerse™ flow cytometer (BD Biosciences). The flow cytometer was equipped with 405 nm, 488 nm and 638 nm lasers to detect up to 13 fluorescent parameters. The flow rate was 12 μL/min. Forward scatter voltage was set to 381, side scatter voltage was set to 340, and each threshold was set to 200. Details of excitation (Ex.) and emission (Em.) wavelengths as well as voltages described in supplements Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Nanoparticle tracking analysis (NTA)</ns0:head><ns0:p>NTA measurements were performed using a NanoSight LM10 (NanoSight, Amesbury, United Kingdom). After resuspending m/lEV pellets in 50 μL of DPBS, samples were diluted eight-fold (plasma) and 100-fold (urine) with PBS prior to measurement. Particles in the laser beam undergo Brownian motion, and videos of these particle movements are recorded. NTA 2.3 software then analyses the videos and determines the particle concentration and size distribution. Twenty-five frames per second were recorded for each sample at the appropriate dilution, with a 'frames processed' setting of 1500. The detection threshold was set at '7 Multi' and at least 1,000 tracks were analyzed for each video.</ns0:p></ns0:div>
<ns0:div><ns0:head>Electron microscopy</ns0:head><ns0:p>For immobilization, we added 100 μL of PBS and another 100 μL of immobilization solution (4% paraformaldehyde, 4% glutaraldehyde, 0.1 M phosphate buffer, pH 7.4) to m/lEV pellets.</ns0:p><ns0:p>After stirring, we incubated at 4°C for 1 h. For negative staining, the samples were adsorbed to formvar film-coated copper grids (400 mesh) and stained with 2% phosphotungstic acid, pH 7.0, for 30 s. For observation and imaging, the grids were observed using a transmission electron microscope (JEM-1400Plus; JEOL Ltd., Tokyo, Japan) at an acceleration voltage of 100 kV.</ns0:p><ns0:p>Digital images (3296 × 2472 pixels) were taken with a CCD camera (EM-14830RUBY2; JEOL Ltd., Tokyo, Japan).</ns0:p></ns0:div>
<ns0:div><ns0:head>Protein digestion</ns0:head><ns0:p>We used approximately 50 mL of pooled healthy plasma and 100 mL of pooled healthy male urine from five healthy subjects for digestion of m/lEVs.</ns0:p><ns0:p>In plasma the following process is the same as 'Isolation of plasma m/lEVs' section. We repeated 18,900 ×g centrifugation washing steps three times to reduce levels of contaminating free plasma proteins and small EVs for shotgun analysis. After the last centrifugation, we removed supernatants and froze the samples.</ns0:p><ns0:p>In urine the following process is the same as 'Isolation of urinary m/lEVs' section. We repeated washing steps twice (after DTT treatment) to reduce levels of contaminating free urinary proteins and small EVs for shotgun analysis. We removed supernatants and froze the samples.</ns0:p><ns0:p>PeerJ An. Chem. reviewing PDF | (ACHEM- <ns0:ref type='table' target='#tab_1'>2019:09:41205:2:0:NEW 20 Dec 2019)</ns0:ref> Manuscript to be reviewed Chemistry Journals <ns0:ref type='bibr'>Analytical, Inorganic, Organic, Physical, Materials Science</ns0:ref> To discover characterizing surface antigen by flowcytometry, the sample was digested using a phase transfer surfactant-aided procedure so that many hydrophobic membrane proteins could be detected <ns0:ref type='bibr' target='#b3'>(Chen et al. 2017)</ns0:ref>. The precipitated frozen fractions of plasma and urine were thawed at 37°C, and then m/lEVs were solubilized in 250 μL of lysis buffer containing 12 mM sodium deoxycholate and 12 mM sodium lauroyl sarcosinate in 100 mM Tris•HCl, pH 8.5. After incubating for 5 min at 95°C, the solution was sonicated using an ultrasonic homogenizer.</ns0:p><ns0:p>Protein concentrations of the solutions were measured using a bicinchoninic acid assay (Pierce™ BCA Protein Assay Kit; Thermo Fisher Scientific).</ns0:p><ns0:p>Twenty microliters of the dissolved pellet (30 µg protein) were used for protein digestion.</ns0:p><ns0:p>Proteins were reduced and alkylated with 1 mM DTT and 5.5 mM iodoacetamide at 25°C for 60 min. Trypsin was added to a final enzyme:protein ratio of 1:100 (wt/wt) for overnight digestion.</ns0:p><ns0:p>Digested peptides were acidified with 0.5% trifluoroacetic acid (final concentration) and 100 μL of ethyl acetate was added for each 100 μL of digested m/lEVs. The mixture was shaken for 2 min and then centrifuged at 15,600 ×g for 2 min to obtain aqueous and organic phases. The aqueous phase was collected and desalted using a GL-Tip SDB column (GL Sciences Inc).</ns0:p></ns0:div>
<ns0:div><ns0:head>LC-MS/MS analysis</ns0:head><ns0:p>Digested peptides were dissolved in 40 μL of 0.1% formic acid containing 2% (v/v) acetonitrile and 2 μL were injected into an Easy-nLC 1000 system (Thermo Fisher Scientific). Peptides were separated on an Acclaim PepMap™ RSLC column (15 cm × 50 μm inner diameter) containing C18 resin (2 μm, 100 Å; Thermo Fisher Scientific™), and an Acclaim PepMap™ 100 trap column (2 cm× 75 μm inner diameter) containing C18 resin (3 μm, 100 Å; Thermo Fisher Scientific™). The mobile phase consisted of 0.1% formic acid in ultrapure water (buffer A). The elution buffer was 0.1 % formic acid in acetonitrile (buffer B); a linear 200 min gradient from 0%-40% buffer B was used at a flow rate of 200 nL/min. The Easy-nLC 1000 was coupled via a nanospray Flex ion source (Thermo Fisher Scientific™) to a Q Exactive™ Orbitrap (Thermo Fisher Scientific™). The mass spectrometer was operated in data-dependent mode, in which a full-scan MS (from 350 to 1,400 m/z with a resolution of 70,000, automatic gain control (AGC) 3E+06, maximum injection time 50 ms) was followed by MS/MS on the 20 most intense ions PeerJ An. Chem. reviewing PDF | (ACHEM- <ns0:ref type='table' target='#tab_1'>2019:09:41205:2:0:NEW 20 Dec 2019)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:p>(AGC 1E+05, maximum injection time 100 ms, 4.0 m/z isolation window, fixed first mass 100 m/z, normalized collision energy 32 eV).</ns0:p></ns0:div>
<ns0:div><ns0:head>Proteome Data Analysis</ns0:head><ns0:p>Raw MS files were analyzed using Proteome Discoverer software version 1.4 (Thermo Fisher Scientific™) and peptide lists were searched against the Uniprot Proteomes-Homo sapiens FASTA (Last modified November 17, 2018) using the Sequest HT algorithm. Initial precursor mass tolerance was set at 10 ppm and fragment mass tolerance was set at 0.6 Da. Search criteria included static carbamidomethylation of cysteine (+57.0214 Da), dynamic oxidation of methionine (+15.995 Da) and dynamic acetylation (+43.006 Da) of lysine and arginine residues.</ns0:p></ns0:div>
<ns0:div><ns0:head>Gene ontology analysis and gene enrichment analysis</ns0:head><ns0:p>We conducted GO analysis using DAVID (https://david.ncifcrf.gov) to categorize the proteins identified by shotgun analysis and used Metascape (http://metascape.org/gp/index.html#/main/step1) for gene enrichment analysis. We uploaded the UNIPROT_ACCESSION No. for each protein.</ns0:p></ns0:div>
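As a practical note on the enrichment step described above, the sketch below shows one way the accession list could be assembled for upload to DAVID or Metascape. It is only an illustration: the input file name "proteins.csv" and the column header "Accession" are assumptions, since the manuscript does not state how the identified-protein table was exported. Both web tools accept a plain list of identifiers, one per line.

# Illustrative sketch only: collect UniProt accession numbers from an exported
# protein table and write them one per line for upload to DAVID/Metascape.
# "proteins.csv" and the "Accession" column are assumptions, not taken from the paper.
import csv

accessions = set()
with open("proteins.csv", newline="") as handle:
    for row in csv.DictReader(handle):
        acc = row["Accession"].strip()
        if acc:
            accessions.add(acc)

with open("uniprot_accessions_for_upload.txt", "w") as out:
    out.write("\n".join(sorted(accessions)))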
<ns0:div><ns0:head>Extracellular vesicle preparation from isolated erythrocytes</ns0:head><ns0:p>Whole blood was collected by the same method as above and centrifuged at 2,330 ×g for 10 min. After removal of the buffy coat and supernatant plasma, the remaining erythrocytes were washed three times by centrifugation at 2,330 ×g for 10 min and the erythrocyte pellet was resuspended in DPBS. EVs were generated from the washed erythrocytes by stimulation in the presence of 2.5 mM CaCl 2 (10 mM HEPES, 0.14 mM NaCl, 2.5 mM CaCl 2 , pH 7.4) for 1 h at room temperature under rotating conditions. Erythrocytes were removed by centrifugation at 2,330 ×g for 10 min and the EV rich supernatant was subsequently centrifuged (18,900 ×g for 30 min) to pellet the EVs. EVs were resuspended in DPBS.</ns0:p></ns0:div>
<ns0:div><ns0:head>Dipeptidyl peptidase IV (DPP4:CD26) activity assay</ns0:head><ns0:p>DPP4 activity was measured in the plasma and urine of six individuals (different from plasma donors). The method was previously published in part <ns0:ref type='bibr' target='#b14'>(Kawaguchi et al. 2010</ns0:ref>). DPP4 activity was measured via the fluorescence intensity of 7-amino-4-methylcoumarin (AMC) after its dissociation from the synthetic substrate (Gly-Pro-AMC • HBr) catalyzed by DPP4. The workflow for the isolation and enrichment of m/lEVs for flow cytometric analyses is illustrated in Fig. <ns0:ref type='figure'>1A and 1B</ns0:ref>. m/lEVs from human plasma samples were isolated by high-speed centrifugation, an approach used in previous studies <ns0:ref type='bibr' target='#b11'>(Jayachandran et al. 2012)</ns0:ref>. For isolation of m/lEVs from urine, DTT, a reducing agent, was used to remove THP polymers because these non-specifically interact with IgGs.</ns0:p><ns0:p>Transmission electron microscopy revealed that almost all m/lEVs were small, closed vesicles with a size of approximately 200 nm that were surrounded by lipid bilayer (Fig. <ns0:ref type='figure'>1C-H</ns0:ref>).</ns0:p><ns0:p>In plasma, we observed EVs whose membranes were not stained either inside or on the surface (Fig. <ns0:ref type='figure'>1C</ns0:ref>, 1D); we also observed EVs whose forms were slightly distorted (Fig. <ns0:ref type='figure'>1E</ns0:ref>). In urine, a group of EVs with uneven morphology and EVs with interior structures were observed (Fig. <ns0:ref type='figure'>1F-1H</ns0:ref>). Apoptotic bodies, cellular debris, and protein aggregates were not detected.</ns0:p><ns0:p>No EVs with diameters greater than 800 nm were observed by NTA (Supplementary Fig. <ns0:ref type='figure'>S1</ns0:ref>) and flow cytometry can detect only EVs with diameters larger than 200 nm. Together, these data suggested that we characterized m/lEVs between 200 nm and 800 nm in diameter from plasma and urine by flow cytometry analysis. We observed the m/lEVs less than 100 nm by NTA because of some contamination or degradation after purification (Supplementary Fig. <ns0:ref type='figure'>S1</ns0:ref>).'</ns0:p><ns0:p>Side-scatter events from size calibration beads of (diameters: 0.22, 0.45, 0.88 and 1.35 μm) were resolved from instrument noise using a FACS Verse flow cytometer (Supplementary Fig. <ns0:ref type='figure'>S2A</ns0:ref>). Inspection of the side-scatter plot indicated that 0.22 μm was the lower limit for bead detection. More than 90% of m/lEVs isolated from plasma and urine showed side-scatter intensities lower than those of 0.88-μm calibration beads (Fig. <ns0:ref type='figure'>2A-D</ns0:ref>). m/lEVs were heterogeneous in size, with diameters ranging from 200-800 nm in plasma and urine (Fig. <ns0:ref type='figure'>2A-D</ns0:ref>). Fluorescently-labeled mouse IgG was used to exclude nonspecific IgG-binding fractions (Supplementary Fig. <ns0:ref type='figure'>S2 B and C</ns0:ref>). In this experiment, we characterized m/lEVs with diameters ranging from 200-800 nm. NTA analysis shows less than 100nm size particles in the plasma fraction extracted by centrifugation, but we focused on particles over 200nm using a flow Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:p>cytometer. Using these methods, we observed an average of 8×10⁵ and 1×10⁵ m/lEVs per mL of plasma and urine, respectively, by flow cytometry.</ns0:p></ns0:div>
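For readers unfamiliar with how such per-millilitre figures are obtained, the sketch below illustrates the conversion from gated event counts to particles per mL. Only the 12 μL/min flow rate comes from the Methods; the acquisition time, event count and dilution factor are hypothetical example numbers chosen to land on the reported order of magnitude, not the actual settings used in this study.

# Illustrative sketch: convert flow cytometer event counts into particles per mL.
# Only the flow rate (12 uL/min) is taken from the Methods; every other number
# below is a made-up example, not data from this study.
FLOW_RATE_UL_PER_MIN = 12.0

def events_to_per_ml(events, acquisition_min, dilution_factor=1.0):
    """Estimated particle concentration (per mL) in the original sample."""
    analysed_volume_ul = FLOW_RATE_UL_PER_MIN * acquisition_min
    return (events / analysed_volume_ul) * 1000.0 * dilution_factor

# Example: 9,600 gated events in a 2-minute acquisition of a 2-fold diluted
# sample -> 400 events/uL -> 8.0e5 particles/mL, the order of magnitude
# reported above for plasma m/lEVs.
print(events_to_per_ml(9600, 2, dilution_factor=2.0))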
<ns0:div><ns0:head>Shotgun proteomic analysis of plasma and urine EVs.</ns0:head><ns0:p>To analyze the protein components and discover characterizing surface antigen of m/lEVs present in plasma and urine of five healthy individuals, we performed LC-MS/MS proteomic analysis. In this analysis, in order to prevent small EVs contamination, the washing process by centrifugation was increased compared to other analyses (Materials and Methods). A total of 593 and 1,793 proteins were identified in m/lEVs from plasma and urine, respectively (Fig. <ns0:ref type='figure' target='#fig_8'>3A</ns0:ref> and Supplementary Table <ns0:ref type='table'>S2</ns0:ref> and Table <ns0:ref type='table'>S3</ns0:ref>). Scoring counts using the SequestHT algorithm for the top 20 most abundant proteins are shown in Table <ns0:ref type='table' target='#tab_1'>1 and 2</ns0:ref>. We detected cytoskeleton-related protein such as actin, ficolin-3 and filamin and cell-surface antigen such as CD5, band3 and CD41 in plasma. We also identified actin filament-related proteins such as ezrin, radixin, ankylin and moesin which play key roles in cell surface adhesion, migration and organization in both plasma and urine. In urine, several types of peptidases (membrane alanine aminopeptidase or CD13; neprilysin or CD10; DPP4 or CD26) and MUC1 (mucin 1 or CD227) were detected in high abundance, and these proteins were used to characterize m/lEVs by flow cytometric analysis (Table <ns0:ref type='table'>2</ns0:ref> and Supplementary Table <ns0:ref type='table'>S3</ns0:ref>). We demonstrated that the isolated m/lEVs showed high expression of tubulin and actinin, while the tetraspanins CD9 and CD81 that are often used as exosome markers were only weakly identified. Especially in plasma, small EV (exosome) markers TSG101, VPS4 and Alix were not observed in this m/lEVs fraction (Supplementary Table <ns0:ref type='table'>S4</ns0:ref>). These data suggest that m/lEVs differ from small EVs including exosomes.</ns0:p><ns0:p>As shown in Fig. <ns0:ref type='figure' target='#fig_8'>3A</ns0:ref> and Supplementary Fig <ns0:ref type='table'>S3</ns0:ref>, about 10% of urinary EVs proteins were also identified in plasma EVs. Urinary EVs in the presence of blood contaminants were also observed in previous studies <ns0:ref type='bibr' target='#b33'>(Smalley et al. 2008)</ns0:ref>. These result suggest that m/lEVs in plasma were excreted in the urine via renal filtration and not reabsorbed. Gene ontology analysis of the identified proteins indicated overall similar cellular components in plasma and urine m/lEVs (Fig. <ns0:ref type='figure' target='#fig_8'>3B</ns0:ref>). The results of gene set enrichment analysis by metascape are shown for plasma and <ns0:ref type='table'>S5 and Table S6</ns0:ref>). The most commonlyobserved functions in both plasma and urine were 'regulated exocytosis', 'hemostasis' and 'vesicle-mediated transport'. In plasma, several functions of blood cells were observed, including 'complement and coagulation cascades' and 'immune response'. Moreover, analysis of urinary EVs showed several characteristic functions including 'transport of small molecules', 'metabolic process' and 'cell projection assembly'. This may reflect the nature of the kidney, the urinary system and tubular villi. These data demonstrate the power of data-driven biological analyses.</ns0:p></ns0:div>
<ns0:div><ns0:head>Characterization of plasma EVs by flow cytometry.</ns0:head><ns0:p>Next, we characterized m/lEVs in plasma by flow cytometry using antibodies against several surface antigens and Annexin V. To eliminate nonspecific adsorption, we excluded the mouse IgG-positive fraction. (Supplementary Fig. <ns0:ref type='figure'>S2B</ns0:ref>) Eliminating non-specific reactions to antibodies is important in using human body fluids as diagnostic materials for immunological measurements. By adding mouse IgG-APC to the system, we observed accurate flow cytometry image in which specific surface antigens were recognized by following two points: 1) blocking of non-specific reaction sites, 2) gate-out of positive non-specific reaction. We characterized positive m/lEVs using surface antigens detected by shotgun proteomic analysis and Annexin V (Fig. <ns0:ref type='figure'>4A-L</ns0:ref>).</ns0:p><ns0:p>To characterize m/lEVs derived from erythrocytes, T and B cells, macrophages/monocytes, granulocytes, platelets and endothelial cells, we selected nine antigens described in Fig. <ns0:ref type='figure'>4A</ns0:ref>. Two or more antigens were used for characterization of m/lEVs: for example, CD59 and CD235a double-positive and CD45-negative m/lEVs were classified as erythrocyte-derived m/lEVs (Supplementary Fig. <ns0:ref type='figure'>S4B</ns0:ref>). We confirmed that m/lEVs isolated from erythrocytes in vitro and erythrocytes derived m/lEVs from plasma are characterized by CD59 and CD235a double-positive and CD45-negative (Supplementary Fig. <ns0:ref type='figure'>S5</ns0:ref>). Determined positive area by addition of EDTA (Supplementary Fig. <ns0:ref type='figure'>S2D</ns0:ref>), we also show Annexin V staining for the m/lEVs corresponding to these five classifications (Fig. <ns0:ref type='figure'>4B-L</ns0:ref>). We integrated these Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science (Fig. <ns0:ref type='figure'>4M</ns0:ref>). The results suggested that no major differences in the ratios of fractions in these ten subjects and thus these definitions may be used for pathological analysis.</ns0:p><ns0:p>We found that 10% and 35% of m/lEVs were derived from erythrocytes and platelets, respectively. However, only 0.5%, 0.6% and 0.1% of m/lEVs were derived from macrophages, leukocytes and endothelial cells, respectively suggesting that the ratio of m/lEVs of different cellular origins is dependent on the number of cells present in plasma (Fig. <ns0:ref type='figure'>4M</ns0:ref>). We also observed that most m/lEVs derived from erythrocytes and macrophages were Annexin V positive (Fig. <ns0:ref type='figure'>4 N and O</ns0:ref>). By contrast, many Annexin V negative m/lEVs were identified among plateletand T and B cell-derived m/lEVs (Fig. <ns0:ref type='figure'>4 P and Q</ns0:ref>). Especially about erythrocyte-derived m/lEVs other studies have shown high percentages of phosphatidylserine-positive(:Annexin V positive) m/lEVs after red blood cell storage under blood bank conditions that these results are consistent <ns0:ref type='bibr' target='#b9'>(Gao et al. 2013</ns0:ref><ns0:ref type='bibr'>)(Xiong et al. 2011)</ns0:ref>.</ns0:p><ns0:p>In general, it is known that microparicle in blood are known to be exposed to PS on the surface, which is verified by being positive by Annexin5 staining. We found that the degree of exposure of phosphatidylserine to the membrane surface was vary depending on the cell derived from annexin V staining. Thus, the characteristics of m/lEVs can be determined in detail by using AnnexinV and antigenicity. These results suggested that the degree of exposing PS are cell-type specific and that release mechanisms may differ among cell types.</ns0:p></ns0:div>
<ns0:div><ns0:head>Characterization of urinary EVs by flow cytometry and enzyme activity assay.</ns0:head><ns0:p>In urine, we first removed aggregated m/lEVs and residual THP polymers using labelled normal mouse IgG (Supplementary Fig. <ns0:ref type='figure'>S2C</ns0:ref>). By removing the THP polymer by DTT treatment, many immunological non-specific reactions in flow cytometry observation were eliminated, and the remaining non-specific reactions were completely excluded from the observed image by mouse IgG-positive gating-out (Supplementary Fig. <ns0:ref type='figure'>S2F</ns0:ref>). To characterize urinary m/lEVs, we used surface antigens detected by shotgun proteomic analysis including CD10 (neprilysin), CD13 (alanine aminopeptidase), CD26 (DPP4) and CD227 (MUC1) (Fig. <ns0:ref type='figure'>5A-F</ns0:ref>). Many m/lEVs in the observation area were triple-positive for CD10, CD13 and CD26, but negative for Annexin V (Fig. <ns0:ref type='figure'>5B-D</ns0:ref>, Supplementary Fig. <ns0:ref type='figure'>S6</ns0:ref>). Furthermore, MUC1-positive EVs were both Annexin V PeerJ An. Chem. reviewing PDF | (ACHEM-2019:09:41205:2:0:NEW 20 Dec 2019)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science positive and negative in roughly equivalent frequencies (Fig. <ns0:ref type='figure'>5B, E and F</ns0:ref>). These results suggested that m/lEVs containing peptidases were released by outward budding directly from the cilial membrane of renal proximal tubule epithelial cells. The results of integrating these characterizations and the distribution of EV classifications among ten healthy subjects are shown (Fig. <ns0:ref type='figure'>5G-I</ns0:ref>). These data indicated no major differences in the ratio among these populations, suggesting that our methodology was reliable for m/lEV analysis.</ns0:p><ns0:p>We next verified the CD26 peptidase enzyme activities of m/lEVs in plasma and urine from six individuals. We prepared three fractions: (i) 'whole,' in which debris were removed after low speed centrifugation, (ii) 'm/lEVs' and (iii) 'free (supernatant)' both of which were obtained via high speed centrifugation (18,900 ×g for 30 min) (Fig. <ns0:ref type='figure'>5J</ns0:ref>). We found that more than 20% of DPP4 activity in whole urine was contributed by the EV fraction (Fig. <ns0:ref type='figure'>5K</ns0:ref> and Supplementary Fig. <ns0:ref type='figure'>S7</ns0:ref>). By contrast, there was no peptidase activity associated with plasma m/lEVs (Fig. <ns0:ref type='figure'>5L</ns0:ref>). These results suggested that functional CD26 peptidase activity is present in m/lEVs in urine, which may be useful for pathological analysis. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>In this study, we analyzed m/lEVs using various analytic techniques and found the following four major results. First, it was possible to characterize m/lEVs using multiple surface markers. Second, m/lEVs bear functional enzymes with demonstrable enzyme activity on the vesicle surface. Trird, there are probability of differences in asymmetry of membrane lipids by derived cells. Finally, there was little variation m/lEVs in the plasma and urine of healthy individuals, indicating that our method is useful for identifying cell-derived m/lEVs in these body fluids.</ns0:p><ns0:p>We isolated m/lEVs from plasma and urine that were primarily 200-800 nm in diameter as shown by transmission electron microscopy. A large proportion of proteins detected in m/lEVs using shotgun proteomic analysis were categorized as plasma membrane proteins.</ns0:p><ns0:p>Isolation of m/lEVs by centrifugation is a classical technique, but in the present study we further separated and classified the m/lEVs according to their cell types of origin by flow cytometry.</ns0:p><ns0:p>The results indicated the validity of the differential centrifugation method <ns0:ref type='bibr' target='#b0'>(Biro et al. 2003;</ns0:ref><ns0:ref type='bibr' target='#b24'>Piccin et al. 2015a</ns0:ref>).</ns0:p><ns0:p>Pang et al. <ns0:ref type='bibr' target='#b21'>(Pang et al. 2018</ns0:ref>) reported that integrin outside-in signaling is an important mechanism for microvesicle formation, in which the procoagulant phospholipid phosphatidylserine (PS) is efficiently externalized to release PS-exposed microvesicles (MVs).</ns0:p><ns0:p>These platelet-derived Annexin V positive MVs were induced by application of a pulling force via an integrin ligand such as shear stress. This exposure of PS allows binding of important coagulation factors, enhancing the catalytic efficiencies of coagulation enzymes. We observed that 50% of m/lEVs derived from leukocytes and platelets were Annexin V positive, suggesting that release PS-positive m/lEVs during activation, inflammation, and injury. It would be interesting to further investigate whether the ratio of Annexin V positive m/lEVs from platelets or leukocytes was an important diagnostic factor for inflammatory disease or tissue injury.</ns0:p><ns0:p>In urinary m/lEVs, we identified aminopeptidases such as CD10, CD13 and CD26 which are localized in proximal renal tubular epithelial cells. The functions of these proteins relating to exocytosis were categorized by gene enrichment analysis. The cilium in the kidney is Manuscript to be reviewed Chemistry Journals the site at which a variety of membrane receptors, enzymes and signal transduction molecules critical to many cellular processes function. In recent years, ciliary ectosomes -bioactive vesicles released from the surface of the cilium -have attracted attention <ns0:ref type='bibr' target='#b18'>(Nager et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b22'>Phua et al. 2017;</ns0:ref><ns0:ref type='bibr'>Wood & Rosenbaum 2015)</ns0:ref>. We also identified ciliary ectosome formation ESCRT complexes proteins (CHAMP; Supplementary Table <ns0:ref type='table'>S3 and 4</ns0:ref>) in proteomic analyses, suggesting that the possibility that these proteins were biomarkers of kidney disease. Because triple peptidase positive m/lEVs were negative for Annexin V, the mechanism of budding from cells may not be dependent on scramblase. 
<ns0:ref type='bibr'>(Wood & Rosenbaum 2015)</ns0:ref> Platelet-derived m/lEVs are the most abundant population of extracellular vesicles in blood, and their presence <ns0:ref type='bibr' target='#b25'>(Piccin et al. 2007</ns0:ref>) and connection with tumor formation were reported in a recent study <ns0:ref type='bibr'>(Zmigrodzka et al. 2016)</ns0:ref>. In our study, platelet-derived EVs were observed in healthy subjects and had the highest abundance of Annexin V-positive EVs. In plasma, leukocyte-derived EVs were defined as CD11b/CD66b-or CD15-positive <ns0:ref type='bibr' target='#b32'>(Sarlon-Bartoli et al. 2013)</ns0:ref>. We characterized macrophage/monocyte/granulocyte-and T/B cell-derived EVs based on two specific CD antigens, and we confirmed that EVs derived from these cells were very rare.</ns0:p><ns0:p>Importantly, there was little variation in the cellular origins of m/lEVs in samples from ten healthy individuals, indicating that this method was useful for identifying cell-derived m/lEVs. We plan to examine m/lEVs differences in patients with these diseases in the near future.</ns0:p><ns0:p>Erythrocyte-derived EVs were also characterized by their expression of CD235a and glycophorin A by flow cytometry <ns0:ref type='bibr' target='#b8'>(Ferru et al. 2014;</ns0:ref><ns0:ref type='bibr'>Zecher et al. 2014)</ns0:ref>.</ns0:p><ns0:p>We also characterized m/lEVs in urine. In kidneys and particularly in the renal tubule, CD10, CD13, CD26 can be detected in high abundance by immunohistochemical staining (website: The Human Protein Atlas). CD10/CD13-double positive labeling can be used for isolation and characterization of primary proximal tubular epithelial cells from human kidney <ns0:ref type='bibr'>(Van der Hauwaert et al. 2013)</ns0:ref>. DPP4 (CD26) is a potential biomarker in urine for diabetic kidney disease and the presence of urinary m/lEV-bound DPP4 has been demonstrated <ns0:ref type='bibr' target='#b34'>(Sun et al. 2012)</ns0:ref>. The presence of peptidases on the m/lEV surface, and their major contribution to peptidase activity in whole urine <ns0:ref type='bibr' target='#b34'>(Sun et al. 2012)</ns0:ref> Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science reabsorption in the proximal tubules. These observations suggested that the ratio of DPP activity between m/lEVs and total urine can be an important factor in the diagnosis of kidney disease. MUC1 can also be detected in kidney and urinary bladder by immunohistochemical staining (website: The Human Protein Atlas). Significant increases of MUC1 expression in cancerous tissue and in the intermediate zone compared with normal renal tissue distant from the tumor was observed <ns0:ref type='bibr' target='#b1'>(Borzym-Kluczyk et al. 2015)</ns0:ref>. In any case, MUC1-positive EVs are thought to be more likely to be derived from the tubular epithelium or the urothelium.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>Use of EVs as diagnostic reagents with superior disease and organ specificity for liquid biopsy samples is a possibility. This protocol will allow further study and in depth characterization of EV profiles in large patient groups for clinical applications. We are going to attempt to identify novel biomarkers by comparing healthy subjects and patients with various diseases. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Fig.</ns0:head><ns0:label /><ns0:figDesc>Fig. Flow cytometry was performed using FACSuite™ software (BD Biosciences) and data were analyzed using FlowJo software. The authors have applied for the following patents for the characterization method of m/lEVs isolated from plasma and urine with a flow cytometer: JP2018-109402(plasma) and JP2018-109403(urine).Nanoparticle tracking analysis (NTA)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ An. Chem. reviewing PDF | (ACHEM-2019:09:41205:2:0:NEW 20 Dec 2019) Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science Experiments were performed in 96-well black plates. Titrated AMC was added to each well to prepare a standard curve. Fluorescence intensity was measured after incubating substrate with urine samples for 10 min. The enzyme reaction was terminated by addition of acetic acid. The fluorescence intensity (Ex. = 380 nm and Em. = 460 nm) was measured using Varioskan Flash (Thermo Fisher Scientific™). DPP4 activity assays were performed by Kyushu Pro Search LLP (Fukuoka, Japan). PeerJ An. Chem. reviewing PDF | (ACHEM-2019:09:41205:2:0:NEW 20 Dec 2019) Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science Results Isolation and characterization of m/lEVs from plasma and urine.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ An. Chem. reviewing PDF | (ACHEM-2019:09:41205:2:0:NEW 20 Dec 2019)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ An. Chem. reviewing PDF | (ACHEM-2019:09:41205:2:0:NEW 20 Dec 2019) Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science urine m/lEVs (Fig. 3C, D and Supplementary Table</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>characterizations and assessed the distribution of EV classifications among ten healthy subjects PeerJ An. Chem. reviewing PDF | (ACHEM-2019:09:41205:2:0:NEW 20 Dec 2019)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ An. Chem. reviewing PDF | (ACHEM-2019:09:41205:2:0:NEW 20 Dec 2019)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ An. Chem. reviewing PDF | (ACHEM-2019:09:41205:2:0:NEW 20 Dec 2019)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>, may suggest a functional contribution to PeerJ An. Chem. reviewing PDF | (ACHEM-2019:09:41205:2:0:NEW 20 Dec 2019)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,306.37,525.00,387.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 . Twenty most abundant proteins identified in plasma m/lEVs</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Protein name</ns0:cell></ns0:row></ns0:table><ns0:note>1</ns0:note></ns0:figure>
<ns0:note place='foot'>Analytical, Inorganic, Organic, Physical, Materials Science </ns0:note>
</ns0:body>
" | "Takashi Funatsu
Academic Editor,
PeerJ Analytical Chemistry
Dec 20th 2019
Dear Dr. Takashi Funatsu
Please find enclosed our re-revised manuscript entitled “Characterization and function of medium and large extracellular vesicles from plasma and urine by surface antigens and Annexin V” by Igami K. et al.
We thank the reviewers for their evaluation of our manuscript. The comments were carefully considered and proved very useful for improving the manuscript. We have responded to the two referees' comments by making corresponding modifications to the manuscript.
We feel that the study has been significantly improved by the reviewers' thoughtful comments. We are therefore submitting a re-revised version of the manuscript in the hope that it will meet with your approval and be judged suitable for publication as an article in PeerJ.
We look forward to hearing from you at your earliest convenience.
Yours sincerely,
Takeshi Uchiumi, M.D., Ph.D.
Department of Clinical Chemistry and Laboratory Medicine
Kyushu University Graduate School of Medical Sciences
3-1-1 Maidashi, Higashi-ku, Fukuoka 812-8582, Japan.
Tel: +81-92-642-5750, Fax: +81-92-642-5772
E-mail: [email protected]
Comments from Reviewer #2
Experimental design
Authors mentioned that they characterized medium/large EVs (m/lEV) and their size ranges were from 200 to 800 nm in the manuscript. However, the size of more than half plasma EVs measured by NTA was less than 100 nm after centrifugation (Supplementary Fig. S1). How did you eliminate the effects of proteins in small EVs in shotgun proteomic analysis?
We sincerely appreciate your critical comments. We are sure that your suggestions were very important for improving our manuscript, and we have extensively revised the manuscript according to your comments.
With future clinical laboratory applications in mind, we chose simple isolation processes based on centrifugation. Thus, the flow cytometry, NTA and electron microscopy results can be obtained by performing two 18,900 ×g centrifugal wash steps, as shown in Fig. 1. In the shotgun proteomic analysis, the main objective was to discover markers for m/lEV characterization; we therefore increased the number of washing steps compared with the other analyses to prevent contamination by small EVs (described in the Materials and Methods, line 206). Therefore, a more purified m/lEV fraction would be expected for shotgun analysis than for NTA. In the plasma NTA results, many fractions below 100 nm were observed, but in the shotgun analysis of plasma, the small EV (exosome) markers TSG101, VPS4 and Alix were not observed (Supplementary Table S4).
The NTA measurements were performed three days after m/lEV purification. Thus, we observed smaller m/lEVs of less than 100 nm because of degradation after purification.
In accordance with the reviewer's comment, we changed the sentences at lines 298, 318 and 332 in this re-revised version.
In line 298
“No EVs with diameters greater than 800 nm were observed by NTA (Supplementary Fig.S1) and flow cytometry can detect only EVs with diameters larger than 200 nm. Together, these data suggested that we characterized m/lEVs between 200 nm and 800 nm in diameter from plasma and urine by flow cytometry analysis. We observed the m/lEVs less than 100 nm in diameter by NTA due to degradation after purification (Supplementary Fig.S1).”
In line 318
“In this analysis, in order to prevent small EVs contamination, the washing process by centrifugation was increased compared to other analyses (Materials and Methods).”
In line 332
“Especially in plasma, small EV (exosome) markers TSG101, VPS4 and Alix were not observed in this m/lEVs fraction (Supplementary Table S4).”
Validity of the findings
Although you added the importance of this study from the viewpoints of analytical chemistry, I couldn't understand the importance of this study even in the revised manuscript. For example, you found CD26 enzyme activity from urine samples in this study. Why is this important from an analytical chemistry viewpoint? Please explain the novelty and importance of this study in a simpler and easier way from an analytical chemistry viewpoint.
In this manuscript, we characterized medium and large extracellular vesicles (m/lEVs) from plasma and urine. In particular, we found that urinary m/lEVs isolated by centrifugation have CD26 activity, whereas plasma m/lEVs do not. We consider that CD26 activity, including that carried by m/lEVs, is a potential biomarker for renal diseases such as diabetic kidney disease and acute glomerulonephritis.
In this paper, we established a newly developed method to characterize m/lEVs from plasma and urine, and found that this method enables more accurate analysis and yields new findings. Secondly, knowledge of these new characteristics is considered to be useful for diagnostic materials, and the newly developed method suggests the possibility of clinical application.
" | Here is a paper. Please give your review comments after reading it. |
681 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Water has been described as a universal solvent, and this is perhaps the strength behind its many uses. Despite this unique property, anthropogenic activities along its course and natural factors often determine the composition of water. In the current research, the stretch of the River Nworie flowing past Owerri town was sampled in the dry season of 2017 to determine its ionic composition at predetermined points and to relate this composition to its physicochemical characteristics. Studies relating physicochemical properties and dissolved toxic ions in water could develop a body of knowledge that enables detection and quantification of the potential risk of ions such as heavy metals in natural water to aquatic ecosystems, animals and human health without directly involving aquatic organisms, animals or humans. Clean sterile plastic bottles were used for collecting surface water. A total of 30 sub-samples from 5 points, 300 m apart, were collected in the morning. Physicochemical properties were determined using standard methods, and the ionic composition of the water was determined according to APHA methods. Results revealed that Ca²⁺, with a mean of 23.60±0.67 mg/l, was the highest, while K⁺, with a mean of 0.72±0.30 mg/l, was the least abundant among the major cations. Among the major anions, Cl⁻ had a mean of 31.58±4.47 mg/l, while the mean of PO₄³⁻ was 1.42±0.13 mg/l. The ionic balance, calculated as % balance error, showed high values for all sampling sites, ranging from 30 to 39.42%, indicating massive input from anthropogenic activities. The computed relationships for selected heavy metals, cations and anions revealed R² values ranging between ±0.012 and 1, indicating that some form of relationship exists. The water pH correlated only weakly with the dissolved cations and anions, and at best moderately, owing to the narrow pH range (5.2-6.2). The cations and anions were more influenced by the water temperature than the heavy metals were. Therefore, the high temperature range of 31-32.4°C will favour greater dissolution of cations and anions in natural water.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Water has been described as a universal solvent and this is perhaps the strength behind its many uses. Despite this unique property anthropogenic activities along its course and natural factors often determine the composition of water. In the current research the portion of River Nworie having past Owerri town was sampled in the dry season 2017 to determine its ionic composition at predestinated points and to relate such properties to its physicochemical characteristics. Studies relating physicochemical properties and dissolved toxic ions in water could develop a body of knowledge that could enable detection and quantification of potential risk of ions such as heavy metals from natural water to aquatic ecosystem, animal and human health without actually involving aquatic organism, animal and human. Clean sterile plastic bottles were used for collecting surface water. A total of 30 sub-samples from 5 points at 300 m apart were sampled in the morning. Physicochemical properties were determined using standard methods and ionic composition of water was determined according methods of APHA. Results revealed that Ca 2+ had a mean 23.60±0.67 mg/l and was the highest while K + with a mean 0.72±0.30 was the least amongst major cations. Amongst the major anions Cl had mean of 31.58±4.47 mg/l while mean of PO 4 3-was 1.42±0.13 mg/l. The ionic balance calculate as % balance error showed high values for all sampling sites ranging from 30 to 39.42% indicating that there is massive input from anthropogenic activities. The computed relationships for selected heavy metals, cations and anions revealed that R 2 values were ranging between ±0.012 to 1 indicating some form of relationship existing. The water pH weakly correlated with dissolved cations and anions while moderate with pH only, due to the pH level (5.2-6.2). The cations and anions were more influenced by the water temperature than the heavy metals. Therefore, high temperature ranges of 31-32.4°C will favour more dissolution of cations and anions in natural water. Cations showed stronger relationship with EC while only heavy metals showed no relationship with DO (Dissolved oxygen). Dissolved oxygen relationship with cations and anions was in the order; K respectively. Information here could be used to predict the effects of using this water for various purposes including water for agricultural purpose, in the management of ions polluted waters and also inform on the mitigation process to be taken.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>All waters in the environment contain dissolved salts existing in ionic forms. However, some species occur more frequently and at greater concentrations than others. The concentrations of dissolved salts in water are influenced by anthropogenic and natural factors such as industrial and domestic effluents and sewage, agricultural effluents, radioactive wastes, thermal pollution, oil pollution, topography, geology, and inputs through rainwater, water/rock interaction and climate <ns0:ref type='bibr'>(Verla et. al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b20'>Isiuku and Enyoh, 2019)</ns0:ref>. The occurrence of ions such as cations, heavy metals and anions in excess of natural load is of current concern not only to researchers, governmental and non-governmental organizations. The concern stems from their persistence in the environment and biopersistence when ingested. Therefore, causing damage to ecosystem and posing a serious health threat to the immediate population.</ns0:p><ns0:p>The distribution, solubility and mobility of ions in water are of importance to water as a media due to potential toxicity to man, plants and animals <ns0:ref type='bibr'>(Enyoh et. al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b20'>Isiuku and Enyoh, 2019)</ns0:ref>. The toxicity of dissolved ions such as heavy metals is due to their ability to bind to oxygen, nitrogen, and sulphur groups in proteins, resulting in alterations of enzymatic activity. Most organ/systems are affected by heavy metal toxicity; the most common include the hematopoietic, renal, and cardiovascular organs/systems <ns0:ref type='bibr' target='#b34'>(Verla et. al., 2019b)</ns0:ref>. Movement and chemical stability of ions in water are controlled by a complex series of biogeochemical processes that depend on physicochemical properties of the water such as pH, temperature, redox potential including adsorption, precipitation and ion exchange reactions in water <ns0:ref type='bibr'>(Verla et. al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b13'>Enyoh et. al., 2018)</ns0:ref>. Generally, for most metals decreasing pH causes an increase in metal solubility in many forms except metals present in the form of oxyanions or amphoteric species. Studies have shown that metals solubility correlated positively with pH <ns0:ref type='bibr' target='#b13'>(Enyoh et. al., 2018)</ns0:ref> and becomes limited at pH range of 5.5 to 6.0 while more distributed at temperature range of <ns0:ref type='bibr'>15-35 ∘C (Pérez-Esteban et al. 2013</ns0:ref><ns0:ref type='bibr'>, Yang et al. 2006</ns0:ref><ns0:ref type='bibr' target='#b21'>, Jing et al. 2007;</ns0:ref><ns0:ref type='bibr'>Yuanxing et. al., 2017)</ns0:ref>. Therefore, the distribution, solubility and toxicity of ions in water can be controlled by controlling the physicochemical properties of the water.</ns0:p><ns0:p>River system, especially the ones flowing through urban cities is greatly threatened by pollution. Nworie River which flows through Owerri, the capital city of Imo State in South-eastern Nigeria is an example of such river system. Numerous studies have confirmed that the river is polluted by ions such as Cd, Ni, Fe, Hg, As, Co, Cu, Mn, SO 4 2-, PO 4 3and NO 3 - <ns0:ref type='bibr'>(Ukagwu et. al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b36'>Verla et. al., 2019)</ns0:ref> and without treatment could pose serious health issues to the user. 
However, a study focusing on the distribution of its ionic composition and the relationship with its physicochemical properties is lacking. It is possible to characterize waters by performing a chemical analysis of their ionic composition. Such a study reveals the nature of weathering and a variety of other natural and anthropogenic processes. Clearly, the precise chemical composition of the water will depend upon the types of rock and soil with which the water has been in contact, and this can be used to characterize a particular water by determining its chemical make-up and to suggest a pollution mitigation strategy. Furthermore, studies focusing on the relationship between physicochemical parameters and the ionic composition of natural water are scarce, whereas laboratory studies have been well conducted. Such studies relating physicochemical properties and dissolved toxic ions in water could develop a body of knowledge enabling the detection and quantification of the potential risk of ions such as heavy metals in natural water to aquatic ecosystems, animals and human health without actually involving aquatic organisms, animals or humans.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.0'>Materials and methods</ns0:head></ns0:div>
<ns0:div><ns0:head n='2.1'>Study area</ns0:head><ns0:p>Nworie River is the study area (Figure <ns0:ref type='figure'>1</ns0:ref>). The river originates from Mbiatolu LGA of Imo State between latitudes 5°28′N and 5°31′N (Figure <ns0:ref type='figure'>1</ns0:ref>). It passes through Owerri Municipal of Imo State and then empties into the Otamiri River at Nekede, Owerri West LGA, Imo State. The measured length of the river is approximately 9 kilometers.</ns0:p><ns0:p>Photographs showing the declining state of some points along River Nworie and the anthropogenic activities there are presented in Figures <ns0:ref type='figure'>2A-2D</ns0:ref> and Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, respectively.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Collection of water sample</ns0:head></ns0:div>
<ns0:div><ns0:p>A total of 30 sub-samples were collected to make five composite samples (six sub-samples per sampling location).</ns0:p><ns0:p>Samples were collected following a 'W'-shaped design along the longitudinal course of the river using the grab technique <ns0:ref type='bibr' target='#b36'>(Verla et. al., 2019)</ns0:ref>, from where the river enters Owerri municipal to where it leaves. Geographically, the sampling points lie between latitude 05.52°N and longitude 07.03°E (NDW1 and NDW2, Amakohia-Alvan; NDW3, Holy Ghost College) and latitude 5.479°N and longitude 07.027°E (NDW4, Wetheral), with NDW5 located where the river leaves the town (Figure <ns0:ref type='figure'>1</ns0:ref>). The water samples were collected during the dry season period. The points of sample collection were at least 300 m apart, and sampling was done in the morning against the water current. Clean plastic bottles were used for the collection.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3'>Analytical procedure of water sample</ns0:head><ns0:p>The electrical conductivity (EC) was measured in μS/cm using a HANNA HI8733 EC meter, which was calibrated with KCl. The EC probe was dipped into the water sample for 60 seconds and the reading was recorded from the meter screen.</ns0:p><ns0:p>The pH was determined using a JENWAY 3510 pH meter, which was calibrated with buffer 4 and buffer 7 prepared by dissolving one capsule of each in 100 ml of distilled water. The pH was determined by introducing the probe of the pH meter into a water sample (taken from a larger sample) and recording the reading on the meter screen.</ns0:p><ns0:p>Dissolved oxygen (DO) concentrations were determined with a Jenway 9071 DO analyzer by inserting the probe into the water samples. TDS (total dissolved solids) and TSS (total suspended solids) (mg/L) were determined according to the method described by <ns0:ref type='bibr'>Verla et al., (2018a)</ns0:ref>, while phosphate (PO 4 3-), nitrate (NO 3 -) and chloride were determined according to American Public Health Association method 4110 (APHA, 2005).</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.4'>Determination of heavy metals and macro elements</ns0:head><ns0:p>The water sample was digested using aqua regia (HCl + HNO 3 in a ratio of 3 to 1). A 1 mL aliquot of the water sample was digested for 3 days in a test tube with 24 mL of aqua regia <ns0:ref type='bibr' target='#b13'>(Enyoh et. al., 2018;</ns0:ref><ns0:ref type='bibr'>Verla et. al., 2018;</ns0:ref><ns0:ref type='bibr'>Verla et. al., 2019a)</ns0:ref>. The digested filtrates were used for the total metal quantification of Na, K, Mg, Ca, Mn, Pb, Cd, Fe, Zn, and Cu in mg/l (ppm) using an Atomic Absorption Spectrophotometer (Perkin Elmer AAnalyst 400). The characteristic wavelengths of the metals to be determined were first set using the hollow cathode lamp, and the digested filtrate samples were then aspirated directly into the flame. To ensure accuracy of the data, the equipment was calibrated for each element using a standard sample prepared as a control with every set of samples. The instrumental parameters for the particular metals analysed are presented in Table 2. The same procedure was followed for all five composite samples.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.5'>Data analysis</ns0:head><ns0:p>The data were analyzed for basic descriptive statistics, such as the minimum, maximum, mean and standard deviation of triplicate analyses, and the level of significance was determined using Microsoft Excel and IBM SPSS Statistics version 20. Correlation, principal component and linear regression analyses were conducted to establish relationships between the physicochemical properties and the ionic composition at a 5% level of significance. P-values were considered significant when less than 0.05. To interpret the r-value for the correlation and linear relationships, the classification presented in Table <ns0:ref type='table'>3</ns0:ref> was adopted. Hierarchical cluster analysis (HCA) was also used to find similarities between the physicochemical properties and the ionic composition.</ns0:p></ns0:div>
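The descriptive-statistics and correlation steps described above were carried out in Excel and SPSS. The following is a minimal sketch of an equivalent calculation in Python, assuming the site-by-parameter measurements are held in a CSV file; the file name and column names are hypothetical and are not part of the original workflow.

```python
# Minimal sketch (not the authors' SPSS/Excel workflow): descriptive statistics
# and a Pearson correlation matrix for the site-by-parameter data.
# Assumes a hypothetical CSV with one row per sampling point (NDW1-NDW5) and
# one column per measured parameter (pH, EC, DO, TDS, TSS, Na, K, ..., Cl).
import pandas as pd

data = pd.read_csv("nworie_dry_season.csv", index_col="site")  # hypothetical file

# Basic descriptive statistics of the kind reported in Tables 4 and 5
summary = data.agg(["min", "max", "mean", "std"]).T

# Pearson correlation matrix of the ions (compare with Table 6); pairwise
# p-values for the 5% significance test can be obtained with scipy.stats.pearsonr.
ions = ["Na", "K", "Mg", "Ca", "Cu", "Cd", "Mn", "Zn", "Fe", "Pb",
        "NO3", "SO4", "PO4", "Cl"]
corr = data[ions].corr(method="pearson")

print(summary.round(2))
print(corr.round(3))
```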
<ns0:div><ns0:head n='3.0'>Results</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.1'>Surface water characteristics</ns0:head><ns0:p>The surface water characteristics are presented in Table <ns0:ref type='table' target='#tab_2'>4</ns0:ref>. The results are compared with the World Health Organization permissible limits <ns0:ref type='bibr'>(WHO, 2007)</ns0:ref>. The mean values of temperature and total dissolved solids (TDS) exceeded the recommended limits. The mean pH (5.62) was below the acceptable range, while the other physicochemical parameters did not exceed the permissible limits. The highest and lowest temperatures were recorded at NDW4 and NDW1, respectively. The overlying water at NDW4 had the highest DO value (3.89 mg/L), and the lowest was recorded at NDW5 (2.27 mg/L). The DO expected according to the WHO (2007) permissible limit is 4 mg/L, suggesting relatively poor water quality in River Nworie.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Ionic composition and distribution</ns0:head><ns0:p>The results for the ionic composition and the percentage distributions are presented in Table 5 and Figure <ns0:ref type='figure'>3</ns0:ref>. Calcium, iron, nitrate, sulphate, phosphate and chloride showed mean concentrations lower than the WHO permissible limits (Table <ns0:ref type='table' target='#tab_3'>5</ns0:ref>); the other ions showed higher mean concentrations. The order of ionic composition was Cl - > SO 4 2- > Ca 2+ > Zn 2+ > Mg 2+ > NO 3 - > PO 4 3- > Na + > Mn 2+ > others. Chloride, calcium and zinc showed the highest distributions among the anions, cations and heavy metals, respectively.</ns0:p></ns0:div>
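As a simple illustration of how the percentage distribution shown in Figure 3 can be derived, each ion's mean concentration is expressed as a share of the sum of all mean ionic concentrations. The sketch below uses the mean values from Table 5 and is an assumed reconstruction of that arithmetic, not a transcript of the authors' spreadsheet.

```python
# Percent distribution of each ion: mean concentration divided by the sum of all
# mean ionic concentrations (mean values taken from Table 5, in mg/L).
means = {
    "Na+": 1.41, "K+": 0.72, "Mg2+": 2.13, "Ca2+": 23.60,
    "Cu": 0.35, "Cd": 0.05, "Mn": 0.41, "Zn": 2.25, "Fe": 0.13, "Pb": 0.22,
    "NO3-": 1.92, "SO42-": 24.36, "PO43-": 1.42, "Cl-": 31.58,
}
total = sum(means.values())
percent = {ion: 100 * c / total for ion, c in means.items()}
for ion, p in sorted(percent.items(), key=lambda kv: -kv[1]):
    print(f"{ion}: {p:.2f} %")
# Closely reproduces the reported shares, e.g. Cl- ~34.9 %, SO42- ~26.9 %, Ca2+ ~26.1 %.
```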
<ns0:div><ns0:head n='3.3'>Ion balancing of the river</ns0:head><ns0:p>When a water quality sample has been analysed for the major ionic species, one of the most important validation tests can be conducted: the cation-anion balance <ns0:ref type='bibr'>(HPTM, 1999)</ns0:ref>. The principle of electroneutrality requires that the sum of the positive ions (cations) must equal the sum of the negative ions (anions). Thus the error in a cation-anion balance can be written as (<ns0:ref type='formula'>1</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_0'>\% \text{Balance error} = \frac{\sum \text{cations in the river} - \sum \text{anions in the river}}{\sum \text{cations in the river} + \sum \text{anions in the river}} \times 100 \qquad (1)</ns0:formula><ns0:p>The computed cation-anion balance is presented in Figure <ns0:ref type='figure'>4</ns0:ref>. For surface water, the % balance error should be less than 10%. Here, a balance error ranging from 30 to 39.42% was recorded for the different sampling points of the river. The high balance errors suggest that the major ionic concentrations in River Nworie are not equal (i.e., major cations ≠ major anions).</ns0:p></ns0:div>
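A worked illustration of Equation (1): summing the mean cation and anion concentrations from Table 5 (with the heavy metals counted among the cations, as stated in the discussion) and applying the formula gives a balance error of roughly 31% in magnitude, consistent with the 30-39.42% range reported per site. The snippet below is a sketch of that arithmetic and assumes the concentrations are used in mg/L as reported.

```python
# Cation-anion balance error (Equation 1), illustrated with the mean
# concentrations from Table 5 (mg/L). Per-site data give the 30-39.42 % range.
cations = 1.41 + 0.72 + 2.13 + 23.60                 # Na+, K+, Mg2+, Ca2+
metals  = 0.35 + 0.05 + 0.41 + 2.25 + 0.13 + 0.22    # Cu, Cd, Mn, Zn, Fe, Pb
anions  = 1.92 + 24.36 + 1.42 + 31.58                # NO3-, SO42-, PO43-, Cl-

total_cations = cations + metals
balance_error = 100 * (total_cations - anions) / (total_cations + anions)
print(f"% balance error = {balance_error:.1f}")      # about -31, i.e. ~31 % in magnitude
```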
<ns0:div><ns0:head n='3.4.'>Correlation analysis and principal component analysis of various ions</ns0:head><ns0:p>The result of the Pearson's correlation analysis of the various ions in River Nworie is presented in Table 6. This model has been widely used by researchers to determine contamination sources of pollutants in the environment <ns0:ref type='bibr' target='#b13'>(Enyoh et. al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b7'>Duru et. al., 2017;</ns0:ref><ns0:ref type='bibr'>Verla et. al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b18'>Ibe et. al., 2017)</ns0:ref>. The value of r is always between +1 and -1. Most ions exhibited strong relationships, suggesting that their presence in the water is from similar anthropogenic source(s). The major cations exhibited strong relationships among themselves. Na + further showed strong relationships with some heavy metals (Cu, Cd, Mn and Fe) and with the anion PO 4 3- . Similar relationships were also exhibited by Mg 2+ and Ca 2+ . Strong relationships were exhibited among the heavy metals and between the heavy metals and Cl - and PO 4 3- , except for Zn. Most anions showed negative relationships with one another, except for NO 3 - /SO 4 2- (0.54). To determine the precise contamination source(s) of the ions, as predicted by the correlation analysis, we conducted a principal component analysis following standard procedures <ns0:ref type='bibr' target='#b9'>(Dragović, et. al., 2008;</ns0:ref><ns0:ref type='bibr'>Franco-Uria et. al., 2009)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>We used the varimax rotation with Kaiser Normalization because it better explained the possible groups or sources that influence the water system and maximises the sum of the variance of the factor coefficients <ns0:ref type='bibr' target='#b15'>(Gotelli & Ellinson, 2013)</ns0:ref>. The factor loadings for the ions in the river water are shown in Table <ns0:ref type='table' target='#tab_5'>7</ns0:ref>. Three components, extracted based on eigenvalues > 1, best described the sources.</ns0:p></ns0:div>
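For readers wishing to reproduce this step outside SPSS, the sketch below outlines a principal component extraction with varimax rotation, retaining components with eigenvalues greater than 1 (the Kaiser criterion). The factor_analyzer package, the file name and the data layout are assumptions; the original analysis was performed in IBM SPSS.

```python
# Sketch of PCA with varimax rotation and Kaiser criterion (eigenvalue > 1),
# analogous to the SPSS procedure used here. factor_analyzer and the CSV layout
# are assumptions, not part of the original workflow.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from factor_analyzer import FactorAnalyzer

ions = pd.read_csv("nworie_ions.csv", index_col="site")        # hypothetical file
z = StandardScaler().fit_transform(ions)

# First pass without rotation, only to obtain the eigenvalues
fa = FactorAnalyzer(n_factors=ions.shape[1], rotation=None, method="principal")
fa.fit(z)
eigenvalues, _ = fa.get_eigenvalues()
n_keep = int((eigenvalues > 1).sum())

# Re-fit retaining only those components, with varimax rotation
fa = FactorAnalyzer(n_factors=n_keep, rotation="varimax", method="principal")
fa.fit(z)
loadings = pd.DataFrame(fa.loadings_, index=ions.columns,
                        columns=[f"PC{i+1}" for i in range(n_keep)])
print(loadings.round(3))   # compare with the rotated loadings in Table 7
```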
<ns0:div><ns0:head n='3.5'>Relationship between water characteristics and dissolved ions</ns0:head><ns0:p>In order to establish a relationship between the recorded physicochemical properties and the ionic composition of River Nworie, linear regression analysis was conducted. Linear regression is a statistical method used to assess a possible linear association between two continuous variables. The analysis gives a coefficient of determination (R 2 value) <ns0:ref type='bibr' target='#b13'>(Enyoh et. al., 2018)</ns0:ref>. The R 2 value reveals the extent of influence of the river's physicochemical properties on the distribution of ions in the river, with values closer to 1 indicating a stronger linear relationship. The computed relationships for cations, anions and heavy metals are presented in Figures <ns0:ref type='figure'>5-7</ns0:ref>. All ions showed positive relationships, at varying levels, with the water physicochemical characteristics.</ns0:p></ns0:div>
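A minimal sketch of this regression step, under the same assumed data layout as above: each ion concentration is regressed on one physicochemical property at a time and the R² value is recorded, mirroring the values plotted in Figures 5-7. The file and column names are hypothetical.

```python
# Sketch: simple linear regressions of each ion on each physicochemical property,
# collecting R^2 values (as plotted in Figures 5-7). Data layout is assumed.
import pandas as pd
from scipy import stats

data = pd.read_csv("nworie_dry_season.csv", index_col="site")   # hypothetical file
properties = ["Temp", "pH", "EC", "DO", "TDS", "TSS"]
ions = ["Na", "K", "Mg", "Ca", "Cu", "Cd", "Mn", "Zn", "Fe", "Pb",
        "NO3", "SO4", "PO4", "Cl"]

r2 = pd.DataFrame(index=ions, columns=properties, dtype=float)
for prop in properties:
    for ion in ions:
        res = stats.linregress(data[prop], data[ion])
        r2.loc[ion, prop] = res.rvalue ** 2    # coefficient of determination
print(r2.round(3))
```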
<ns0:div><ns0:head n='3.6.'>Hierarchical cluster analysis</ns0:head><ns0:p>The result of the hierarchical cluster analysis (HCA) is presented in Figure <ns0:ref type='figure'>8</ns0:ref>; most of the ions showed similarities with the measured physicochemical parameters. Only TDS and EC showed some dissimilarity with iron and sulphate.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.0'>Discussion</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.1.'>Surface water characteristics</ns0:head><ns0:p>Only the mean values of temperature and total dissolved solids (TDS) exceeded the recommended limits (Table <ns0:ref type='table' target='#tab_2'>4</ns0:ref>). The characteristics of the river are a reflection of the sampling period. The high temperatures obtained were not surprising given the sampling season (dry season). Low values of DO have been reported earlier in other studies <ns0:ref type='bibr' target='#b27'>(Manila and Frank, 2009;</ns0:ref><ns0:ref type='bibr' target='#b8'>Duru and Nwanekwu, 2012;</ns0:ref><ns0:ref type='bibr' target='#b28'>Okoro et. al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b36'>Verla et. al., 2019;</ns0:ref><ns0:ref type='bibr'>Verla et. al., 2019a)</ns0:ref>. These studies related the low DO to human activity, which causes enrichment of the surface water with organic matter. The electrical conductivity recorded in this study fell below the acceptable limit (100 µS/cm) set by <ns0:ref type='bibr'>WHO (2007)</ns0:ref>. Conductivity depends on water temperature and is a measure of the water's capability to conduct an electric current; high temperature may therefore cause high EC. The conductive ions may come from dissolved salts and inorganic materials in the river.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Ionic composition and distribution</ns0:head><ns0:p>Dissolved salts often exist as ions in solution. The ions can be positive or negative: ions with a positive charge are called 'cations' while ions with a negative charge are called 'anions'. Going by this definition, heavy metals are cations because they are positively charged. However, in this study we classified the ions as major cations, heavy metals (often categorized as secondary constituents) and anions. The major cations studied were sodium, potassium, magnesium and calcium; the heavy metals were copper, cadmium, manganese, zinc, iron and lead; while the anions were nitrate, sulphate, phosphate and chloride. The results for the ionic composition are presented in Table 5. The major cationic and anionic concentrations were generally low when compared with the WHO (2007) permissible limits, except for magnesium. The magnesium concentration ranged from 1.13 mg/L to 2.78 mg/L, as opposed to the WHO (2007) permissible limit of 0.5 mg/L. Calcium (Ca 2+ ) and magnesium (Mg 2+ ) ions are both common in natural waters and both are essential elements for all organisms <ns0:ref type='bibr'>(Hydrology Project Training Module, 1999)</ns0:ref>. They are responsible for the hardness of natural waters when combined with dissolved materials in the water. WHO reported that hard water has no known adverse health effect (Sengupta, 2013) and could provide an important supplementary contribution to total calcium and magnesium intake <ns0:ref type='bibr'>(Garlan et. al., 2002)</ns0:ref>. However, prolonged consumption of water with high magnesium can cause hypermagnesemia if the consumer has a significantly decreased ability to excrete magnesium <ns0:ref type='bibr'>(Chandra et. al., 2013)</ns0:ref>.</ns0:p><ns0:p>All heavy metals except Fe had mean values exceeding the permissible limits set by WHO (Table <ns0:ref type='table' target='#tab_3'>5</ns0:ref>), indicating that the water is polluted by these metals. The concentration of Pb ranged from 0.127 to 0.521 mg/l, Fe from 0.091 to 0.191 mg/l, Cd from 0.002 to 0.180 mg/l, Cu from 0.13 to 0.79 mg/l, manganese from 0.08 to 1.02 mg/l and zinc from 1.2 to 2.63 mg/l, with mean values of 0.22, 0.13, 0.05, 0.35, 0.41 and 2.25 mg/l, respectively. The solubility of trace metals in surface waters is predominantly controlled by the water pH, the type and concentration of ligands on which the metal could adsorb, the oxidation state of the mineral components and the redox environment of the system. Ingestion of metals such as lead (Pb) and cadmium (Cd) may pose great risks to human health by interfering with essential nutrients in the body, possibly causing small increases in blood pressure and damaging the kidneys. In addition, they can equally affect aquatic fauna and flora <ns0:ref type='bibr' target='#b20'>(Isiuku and Enyoh, 2019)</ns0:ref>.</ns0:p><ns0:p>The percentages of the ions are presented in Figure <ns0:ref type='figure'>3</ns0:ref>. This distribution shows the abundance of the major ions in River Nworie during the dry season. The most highly distributed metal ions and anions were calcium (26.07%), sulphate (26.90%) and chloride (34.88%), while sodium, potassium, nitrate, phosphate and zinc were within the range of 1-3%. The other heavy metal ions showed distributions of approximately 0%.
This suggests that heavy metals make up a low proportion of the total ionic composition of River Nworie during the dry season. The low distribution could probably be due to the metals forming complexes with organic materials in the water, or the metals could be abundant in other forms which can be assessed through chemical speciation <ns0:ref type='bibr'>(Verla et. al., 2019c)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>Ion balancing of the river</ns0:head><ns0:p>The ion balance error of the river is high (mean balance error of 35.62%), indicating unequal dissolution of cations and anions in the river (Figure <ns0:ref type='figure'>4</ns0:ref>). The most likely reason is pollution by anthropogenic means, through the dumping of waste and the river being at the receiving end of effluent discharges from domestic and industrial sources.</ns0:p><ns0:p>Fertilizers from farmlands along River Nworie find their way into the river through surface run-off and increase the concentrations of anions such as PO 4 3- and NO 3 - in the river, while the major cations might undergo complexometric reactions that reduce their concentrations in the water. Anthropogenic activities have been reported to significantly alter ionic concentrations in water <ns0:ref type='bibr' target='#b27'>(Manila and Frank, 2009;</ns0:ref><ns0:ref type='bibr' target='#b13'>Enyoh et. al., 2018;</ns0:ref><ns0:ref type='bibr'>Verla et. al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b20'>Isiuku and Enyoh, 2019)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.4'>Correlation analysis and principal component analysis of various ions</ns0:head><ns0:p>The Pearson's correlation analysis of the various ions in River Nworie showed that many ions have strong relationships, suggesting that their presence in the water is from similar anthropogenic source(s). Some of the associations exhibited by the ions have been observed in other water studies <ns0:ref type='bibr' target='#b13'>(Enyoh et. al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b7'>Duru et. al., 2017;</ns0:ref><ns0:ref type='bibr'>Verla et. al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b18'>Ibe et. al., 2017)</ns0:ref>. The precise contamination sources of the ions were determined by PCA. Three components were extracted based on eigenvalues greater than 1. The three components cumulatively explained 93.324% of the total variance and generally indicated anthropogenic source(s) of the studied ions in the river water (Table <ns0:ref type='table' target='#tab_5'>7</ns0:ref>).</ns0:p><ns0:p>According to <ns0:ref type='bibr' target='#b23'>Liu et. al., (2003)</ns0:ref>, component loading values of > 0.75, 0.75-0.50, and 0.50-0.30 are classified as 'strong', 'moderate', and 'weak', respectively. PC1 explained 62.424% of the total variance and was found to be strongly and positively correlated with Cd 2+ , Cu 2+ , Ca 2+ , Mn 2+ , Na + , Fe 2+ , PO 4 3- and Mg 2+ (0.71-0.99). This relates to the artisanal activities, metal processing works and agricultural activities in the area. PC2 explained 16.088% of the total variance and showed moderate to strong positive factor loadings for SO 4 2- and Pb (0.65-0.90), which also indicate industrial sources, the activities of scrap metal dealers, recycling of metals as well as lead-acid accumulators, vehicle emissions in urban areas and agricultural activities in the study area. PC3 explained 14.812% of the total variance and showed strong positive factor loadings for NO 3 - and K + (0.72-0.81), which could be attributed to atmospheric deposition and agricultural activities involving the use of fertilizers.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.5'>Relationship between water characteristics and dissolved ions</ns0:head><ns0:p>The computed relationships for cations, anions and heavy metals are presented in Figures <ns0:ref type='figure'>5-7</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.5.1'>Temperature</ns0:head><ns0:p>Different studies have reported different results for the effect of temperature on metal distribution in water; most have been conducted under laboratory conditions. <ns0:ref type='bibr' target='#b25'>Lourino-Cabana et al., (2014)</ns0:ref> and <ns0:ref type='bibr'>Green-Ruiz et al. (2008)</ns0:ref> reported that the distribution of metals changed with temperature variation, while other research reported that the influence of temperature on metal distribution was not evident <ns0:ref type='bibr' target='#b1'>(Aston et al. 2010</ns0:ref><ns0:ref type='bibr' target='#b2'>, Biesuz et al. 1998</ns0:ref><ns0:ref type='bibr'>, Zhang et al. 2013)</ns0:ref>. Other studies by <ns0:ref type='bibr' target='#b10'>Echeverría et al. (2003)</ns0:ref> and <ns0:ref type='bibr' target='#b11'>Echeverría et al. (2005)</ns0:ref> found that increased temperature resulted in a higher maximum sorption of metals by minerals. In the current study, the temperature relationship was measured under ambient field/natural conditions. Our results showed that the temperature relationship was weakly positive, especially for heavy metal distribution, with R 2 values < 0.1 (Figure <ns0:ref type='figure'>5</ns0:ref>). We also observed that strong positive relationships existed with anions such as NO 3 - (0.922) and Cl - (0.749) (Figure <ns0:ref type='figure'>6</ns0:ref>), while only K + (0.379) was notable among the major cations (Figure <ns0:ref type='figure'>5</ns0:ref>). <ns0:ref type='bibr'>Haiyan et. al., (2013)</ns0:ref> reported that in the temperature range of 4-25°C there is a weak temperature dependence of metal distribution. However, in this study high temperatures of 31-32.4°C were recorded (Table <ns0:ref type='table' target='#tab_2'>4</ns0:ref>), which fall within the temperature range of 15-35 °C reported by <ns0:ref type='bibr'>Yuanxing et. al., (2017)</ns0:ref>, who observed that the distribution rates of ions such as Zn, Cu, Pb, Cr, and Cd were greater at high temperatures than at low temperatures.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.5.2'>pH</ns0:head><ns0:p>Different studies have shown that heavy metals are generally associated with contaminated water <ns0:ref type='bibr'>(Bouma et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b5'>Lors et al., 2004</ns0:ref>). <ns0:ref type='bibr'>Yuanxing et. al., (2017)</ns0:ref> explains that when pH decreases (< 5), the competition between H + and metal ions for binding sites increases, which enhances metal solubility in the water.</ns0:p><ns0:p>The effects of TDS and TSS on the ionic composition are presented in Figures <ns0:ref type='figure'>5-7</ns0:ref>. The TDS dependence followed the order heavy metals (0.327) > anions (0.057) > cations (0.057), while the TSS dependence followed the order anions (0.186) > cations (0.133) > heavy metals (0.100). This could be related to the weight of the ions: heavy metals are heavier (with atomic weights between 63.546 and 200.590) and less readily dissolved in water than the other cations and anions.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.6'>Hierarchical cluster analysis</ns0:head><ns0:p>Hierarchical cluster analysis (HCA) based on the squared Euclidean distance (SED) was carried out to show the similarities and/or dissimilarities between the physicochemical characteristics and the ionic concentrations in the river water. The SED was adopted because it is the square of the Euclidean distance between two observations (the sum of the squared differences), which increases the importance of large distances while weakening the importance of small distances. The results obtained from this analysis are presented as a dendrogram in Figure <ns0:ref type='figure'>8</ns0:ref>. A dendrogram is a branch diagram that represents the level of relationship or similarity among parameters, arranged like the branches of a tree <ns0:ref type='bibr' target='#b19'>(Ibe et. al., 2019)</ns0:ref>. From Figure <ns0:ref type='figure'>8</ns0:ref>, based on the rescaled distance of the combined clusters, it can be seen that most of the ions showed similarities with the measured physicochemical parameters.</ns0:p><ns0:p>Only TDS and EC showed some dissimilarity with iron and sulphate. The similarities exhibited by most parameters further highlight the relationships existing between the physicochemical and ionic parameters.</ns0:p></ns0:div>
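The clustering step can be sketched as follows, assuming the standardised parameters (columns) are clustered with average linkage on squared Euclidean distances. SciPy/matplotlib and the data layout are assumptions here; the original dendrogram in Figure 8 was produced in SPSS.

```python
# Sketch of hierarchical cluster analysis of the measured parameters using
# squared Euclidean distance (SED), giving a dendrogram comparable to Figure 8.
import pandas as pd
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import pdist
import matplotlib.pyplot as plt

data = pd.read_csv("nworie_dry_season.csv", index_col="site")   # hypothetical file
z = (data - data.mean()) / data.std()            # standardise each parameter

# Cluster the parameters (columns), not the sites, so transpose first.
sed = pdist(z.T.values, metric="sqeuclidean")    # squared Euclidean distances
tree = linkage(sed, method="average")

dendrogram(tree, labels=z.columns.tolist(), orientation="right")
plt.xlabel("Distance (SED)")
plt.tight_layout()
plt.show()
```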
<ns0:div><ns0:head n='5.'>Conclusion</ns0:head><ns0:p>The study has shown that the physicochemical properties of the water of Nworie River influence its ionic composition under ambient natural/field conditions. The presence of the ions dissolved in the river water, as revealed by the PCA and HCA, was from anthropogenic activities including atmospheric deposition, the application of fertilizer on nearby farms, and artisanal and auto-mechanic activities. The ions showed varying relationships with the water characteristics. The water pH correlated weakly with the dissolved cations and anions and only moderately with the heavy metals, owing to the low pH level (5.2-6.2). The cations and anions were more influenced by the water temperature than the heavy metals; therefore, high temperatures in the range of 31-32.4°C will favour greater dissolution of cations and anions in natural water. Cations showed a stronger relationship with EC, while only the heavy metals showed no relationship with DO. The relationship of dissolved oxygen with the cations followed the order K + > Mg 2+ > Ca 2+ > Na + , while for the anions it was SO 4 2- > NO 3 - > Cl - > PO 4 3- . The TDS and TSS had more influence on the heavy metals and anions, respectively, than on the major cations.</ns0:p></ns0:div>
<ns0:div><ns0:note type='other'>Figure 2</ns0:note><ns0:p>Photographs of some points along River Nworie</ns0:p></ns0:div>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='35,42.52,178.87,525.00,455.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='36,42.52,178.87,525.00,445.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='37,42.52,178.87,525.00,408.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='41,42.52,178.87,525.00,431.25' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>The description and activities carried out within the study area ***** means Very severe, **** means Severe.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Sampling</ns0:cell><ns0:cell>Major</ns0:cell><ns0:cell>Human activity</ns0:cell><ns0:cell>Vegetation</ns0:cell><ns0:cell>Extent of Human</ns0:cell></ns0:row><ns0:row><ns0:cell>points</ns0:cell><ns0:cell>Landmark</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>destruction of</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Natural</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>ecosystem</ns0:cell></ns0:row><ns0:row><ns0:cell>NDW1</ns0:cell><ns0:cell cols='2'>Fly-over bridge Road</ns0:cell><ns0:cell>Grassland and patches of</ns0:cell><ns0:cell>*****</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>expansion/construction,</ns0:cell><ns0:cell>dead grass.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>waste dumpsites,</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>automechanic activities</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>NDW2</ns0:cell><ns0:cell cols='2'>Fly-over bridge Road</ns0:cell><ns0:cell>Grassland and patches of</ns0:cell><ns0:cell>*****</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>expansion/construction,</ns0:cell><ns0:cell>dead grass.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>waste dumpsites and</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>automechanic activities</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>NDW3</ns0:cell><ns0:cell>Bridge</ns0:cell><ns0:cell>Road</ns0:cell><ns0:cell>Shrubs</ns0:cell><ns0:cell>****</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>expansion/construction,</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>waste</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>dumpsites and</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>automechanic activities</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>NDW4</ns0:cell><ns0:cell>Bridge, Mbari</ns0:cell><ns0:cell>Road</ns0:cell><ns0:cell>Shrubs</ns0:cell><ns0:cell>*****</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Kitchen</ns0:cell><ns0:cell>expansion/construction</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>restaurant and</ns0:cell><ns0:cell>waste</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Umezurike</ns0:cell><ns0:cell>dumpsites, Dredging,</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Hospital</ns0:cell><ns0:cell>sand</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>mining and</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>automechanic activities</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>NDW5</ns0:cell><ns0:cell>Bridge and</ns0:cell><ns0:cell>Road</ns0:cell><ns0:cell>Shrubs</ns0:cell><ns0:cell>*****</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Emmanuel</ns0:cell><ns0:cell>expansion/construction</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>college</ns0:cell><ns0:cell>waste</ns0:cell><ns0:cell /><ns0:cell 
/></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>dumpsites and</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>automechanic activities</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Optimal Instrumental parameters for AAS determination of the metals</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Metal</ns0:cell><ns0:cell>Wavelength</ns0:cell><ns0:cell>Spectral</ns0:cell><ns0:cell>Flame gases</ns0:cell><ns0:cell>Time of</ns0:cell><ns0:cell>Atomization</ns0:cell></ns0:row><ns0:row><ns0:cell>symbols</ns0:cell><ns0:cell>(nm)</ns0:cell><ns0:cell>Band Width</ns0:cell><ns0:cell /><ns0:cell>measurement</ns0:cell><ns0:cell>flow rate</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>(nm)</ns0:cell><ns0:cell /><ns0:cell>(secs)</ns0:cell><ns0:cell>(L/min)</ns0:cell></ns0:row><ns0:row><ns0:cell>Ca</ns0:cell><ns0:cell>422.7</ns0:cell><ns0:cell>0.7</ns0:cell><ns0:cell>Air-Acetylene</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>1.2</ns0:cell></ns0:row><ns0:row><ns0:cell>K</ns0:cell><ns0:cell>766.5</ns0:cell><ns0:cell>0.7</ns0:cell><ns0:cell>Air-Acetylene</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>1.2</ns0:cell></ns0:row><ns0:row><ns0:cell>Mg</ns0:cell><ns0:cell>285.2</ns0:cell><ns0:cell>0.7</ns0:cell><ns0:cell>Air-Acetylene</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>1.1</ns0:cell></ns0:row><ns0:row><ns0:cell>Na</ns0:cell><ns0:cell>589.0</ns0:cell><ns0:cell>0.2</ns0:cell><ns0:cell>Air-Acetylene</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>1.2</ns0:cell></ns0:row><ns0:row><ns0:cell>Pb</ns0:cell><ns0:cell>283.3</ns0:cell><ns0:cell>0.7</ns0:cell><ns0:cell>Air-Acetylene</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0.9</ns0:cell></ns0:row><ns0:row><ns0:cell>Cu</ns0:cell><ns0:cell>324.8</ns0:cell><ns0:cell>0.7</ns0:cell><ns0:cell>Air-Acetylene</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0.9</ns0:cell></ns0:row><ns0:row><ns0:cell>Fe</ns0:cell><ns0:cell>248.3</ns0:cell><ns0:cell>0.2</ns0:cell><ns0:cell>Air-Acetylene</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0.9</ns0:cell></ns0:row><ns0:row><ns0:cell>Mn</ns0:cell><ns0:cell>279.5</ns0:cell><ns0:cell>0.2</ns0:cell><ns0:cell>Air-Acetylene</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0.9</ns0:cell></ns0:row><ns0:row><ns0:cell>Zn</ns0:cell><ns0:cell>213.9</ns0:cell><ns0:cell>0.7</ns0:cell><ns0:cell>Air-Acetylene</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0.9</ns0:cell></ns0:row><ns0:row><ns0:cell>Cd</ns0:cell><ns0:cell>228.8</ns0:cell><ns0:cell>0.7</ns0:cell><ns0:cell>Air-Acetylene</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0.9</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ An. Chem. reviewing PDF | (ACHEM-2019:11:42759:1:2:NEW 20 Dec 2019)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Characteristics of River Nworie</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameters</ns0:cell><ns0:cell>NDW1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='2'>NDW4 NDW5</ns0:cell><ns0:cell>WHO</ns0:cell><ns0:cell>Max</ns0:cell><ns0:cell>Min</ns0:cell><ns0:cell>Mean</ns0:cell><ns0:cell>SDV</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>NDW2 NDW3</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>(2007)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Temp ( o C)</ns0:cell><ns0:cell>31</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>32.4</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>20-30</ns0:cell><ns0:cell>32.4</ns0:cell><ns0:cell>31</ns0:cell><ns0:cell>31.88</ns0:cell><ns0:cell>0.55</ns0:cell></ns0:row><ns0:row><ns0:cell>pH</ns0:cell><ns0:cell>5.61</ns0:cell><ns0:cell>5.48</ns0:cell><ns0:cell>6.2</ns0:cell><ns0:cell>5.59</ns0:cell><ns0:cell>5.2</ns0:cell><ns0:cell>6.5-9.0</ns0:cell><ns0:cell>6.2</ns0:cell><ns0:cell>5.2</ns0:cell><ns0:cell>5.62</ns0:cell><ns0:cell>0.37</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>EC (µS/cm) 90</ns0:cell><ns0:cell>91</ns0:cell><ns0:cell>92</ns0:cell><ns0:cell>92</ns0:cell><ns0:cell>97</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>97</ns0:cell><ns0:cell>90</ns0:cell><ns0:cell>92.40</ns0:cell><ns0:cell>2.70</ns0:cell></ns0:row><ns0:row><ns0:cell>DO (mg/L)</ns0:cell><ns0:cell>2.78</ns0:cell><ns0:cell>2.98</ns0:cell><ns0:cell>2.85</ns0:cell><ns0:cell>3.89</ns0:cell><ns0:cell>2.27</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>3.89</ns0:cell><ns0:cell>2.27</ns0:cell><ns0:cell>2.95</ns0:cell><ns0:cell>0.59</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>TDS (mg/L) 123.94 123.84 143.78 122</ns0:cell><ns0:cell>123.98</ns0:cell><ns0:cell>250</ns0:cell><ns0:cell>143.78</ns0:cell><ns0:cell>122</ns0:cell><ns0:cell cols='2'>127.51 9.13</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>TSS (mg/L) 87.74</ns0:cell><ns0:cell>86.64</ns0:cell><ns0:cell>96.74</ns0:cell><ns0:cell>96.43</ns0:cell><ns0:cell>87.44</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>96.74</ns0:cell><ns0:cell>86.64</ns0:cell><ns0:cell>90.99</ns0:cell><ns0:cell>5.12</ns0:cell></ns0:row><ns0:row><ns0:cell cols='11'>*EC-Electrical conductivity, DO-Dissolved Oxygen, TDS-Total Dissolved Solids, TSS-Total Suspended</ns0:cell></ns0:row><ns0:row><ns0:cell>solids</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Ionic composition of water from Nworie in the dry season</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Ions</ns0:cell><ns0:cell cols='5'>NDW1 NDW2 NDW3 NDW4 NDW5</ns0:cell><ns0:cell>WHO</ns0:cell><ns0:cell>Max</ns0:cell><ns0:cell>Min</ns0:cell><ns0:cell>Mean</ns0:cell><ns0:cell>SDV</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>(2007)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Major cations</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Na + (mg/L)</ns0:cell><ns0:cell>1.33</ns0:cell><ns0:cell>1.673</ns0:cell><ns0:cell>1.37</ns0:cell><ns0:cell>1.30</ns0:cell><ns0:cell>1.37</ns0:cell><ns0:cell>N/A</ns0:cell><ns0:cell>1.673</ns0:cell><ns0:cell>1.3</ns0:cell><ns0:cell>1.41</ns0:cell><ns0:cell>0.15</ns0:cell></ns0:row><ns0:row><ns0:cell>K + (mg/L)</ns0:cell><ns0:cell>0.89</ns0:cell><ns0:cell>0.791</ns0:cell><ns0:cell>0.898</ns0:cell><ns0:cell>0.189</ns0:cell><ns0:cell>0.819</ns0:cell><ns0:cell>N/A</ns0:cell><ns0:cell>0.898</ns0:cell><ns0:cell>0.189</ns0:cell><ns0:cell>0.72</ns0:cell><ns0:cell>0.30</ns0:cell></ns0:row><ns0:row><ns0:cell>Mg 2+ (mg/L)</ns0:cell><ns0:cell>2.23</ns0:cell><ns0:cell>2.78</ns0:cell><ns0:cell>2.27</ns0:cell><ns0:cell>2.22</ns0:cell><ns0:cell>1.13</ns0:cell><ns0:cell>0.5</ns0:cell><ns0:cell>2.78</ns0:cell><ns0:cell>1.13</ns0:cell><ns0:cell>2.13</ns0:cell><ns0:cell>0.60</ns0:cell></ns0:row><ns0:row><ns0:cell>Ca 2+ (mg/L)</ns0:cell><ns0:cell>23.2</ns0:cell><ns0:cell>24.6</ns0:cell><ns0:cell>23.8</ns0:cell><ns0:cell>23.5</ns0:cell><ns0:cell>22.92</ns0:cell><ns0:cell>70</ns0:cell><ns0:cell>24.6</ns0:cell><ns0:cell>22.92</ns0:cell><ns0:cell>23.60</ns0:cell><ns0:cell>0.67</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Heavy metals</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Cu (mg/L)</ns0:cell><ns0:cell>0.13</ns0:cell><ns0:cell>0.79</ns0:cell><ns0:cell>0.53</ns0:cell><ns0:cell>0.15</ns0:cell><ns0:cell>0.13</ns0:cell><ns0:cell>0.3</ns0:cell><ns0:cell>0.79</ns0:cell><ns0:cell>0.13</ns0:cell><ns0:cell>0.35</ns0:cell><ns0:cell>0.30</ns0:cell></ns0:row><ns0:row><ns0:cell>Cd (mg/L)</ns0:cell><ns0:cell>0.002</ns0:cell><ns0:cell>0.180</ns0:cell><ns0:cell>0.072</ns0:cell><ns0:cell>0.006</ns0:cell><ns0:cell>0.012</ns0:cell><ns0:cell>0.003</ns0:cell><ns0:cell>0.18</ns0:cell><ns0:cell>0.002</ns0:cell><ns0:cell>0.05</ns0:cell><ns0:cell>0.08</ns0:cell></ns0:row><ns0:row><ns0:cell>Mn (mg/L)</ns0:cell><ns0:cell>0.08</ns0:cell><ns0:cell>1.02</ns0:cell><ns0:cell>0.78</ns0:cell><ns0:cell>0.085</ns0:cell><ns0:cell>0.089</ns0:cell><ns0:cell>0.4</ns0:cell><ns0:cell>1.02</ns0:cell><ns0:cell>0.08</ns0:cell><ns0:cell>0.41</ns0:cell><ns0:cell>0.45</ns0:cell></ns0:row><ns0:row><ns0:cell>Zn (mg/L)</ns0:cell><ns0:cell>2.6</ns0:cell><ns0:cell>1.2</ns0:cell><ns0:cell>2.21</ns0:cell><ns0:cell>2.61</ns0:cell><ns0:cell>2.63</ns0:cell><ns0:cell><0.1</ns0:cell><ns0:cell>2.63</ns0:cell><ns0:cell>1.2</ns0:cell><ns0:cell>2.25</ns0:cell><ns0:cell>0.61</ns0:cell></ns0:row><ns0:row><ns0:cell>Fe 
(mg/L)</ns0:cell><ns0:cell>0.097</ns0:cell><ns0:cell>0.19</ns0:cell><ns0:cell>0.191</ns0:cell><ns0:cell>0.097</ns0:cell><ns0:cell>0.091</ns0:cell><ns0:cell>0.3</ns0:cell><ns0:cell>0.191</ns0:cell><ns0:cell>0.091</ns0:cell><ns0:cell>0.13</ns0:cell><ns0:cell>0.05</ns0:cell></ns0:row><ns0:row><ns0:cell>Pb (mg/L)</ns0:cell><ns0:cell>0.127</ns0:cell><ns0:cell>0.178</ns0:cell><ns0:cell>0.521</ns0:cell><ns0:cell>0.123</ns0:cell><ns0:cell>0.127</ns0:cell><ns0:cell>0.01</ns0:cell><ns0:cell>0.521</ns0:cell><ns0:cell>0.123</ns0:cell><ns0:cell>0.22</ns0:cell><ns0:cell>0.17</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Major anion</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>NO 3 -(mg/L)</ns0:cell><ns0:cell>1.96</ns0:cell><ns0:cell>1.91</ns0:cell><ns0:cell>1.92</ns0:cell><ns0:cell>1.91</ns0:cell><ns0:cell>1.916</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>1.96</ns0:cell><ns0:cell>1.91</ns0:cell><ns0:cell>1.92</ns0:cell><ns0:cell>0.02</ns0:cell></ns0:row><ns0:row><ns0:cell>SO 4 2-(mg/L)</ns0:cell><ns0:cell>24.8</ns0:cell><ns0:cell>23.4</ns0:cell><ns0:cell>24.8</ns0:cell><ns0:cell>23.98</ns0:cell><ns0:cell>24.8</ns0:cell><ns0:cell>250</ns0:cell><ns0:cell>24.8</ns0:cell><ns0:cell>23.4</ns0:cell><ns0:cell>24.36</ns0:cell><ns0:cell>0.64</ns0:cell></ns0:row><ns0:row><ns0:cell>PO 4 3-(mg/L)</ns0:cell><ns0:cell>1.37</ns0:cell><ns0:cell>1.65</ns0:cell><ns0:cell>1.36</ns0:cell><ns0:cell>1.34</ns0:cell><ns0:cell>1.37</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>1.65</ns0:cell><ns0:cell>1.34</ns0:cell><ns0:cell>1.42</ns0:cell><ns0:cell>0.13</ns0:cell></ns0:row><ns0:row><ns0:cell>Cl -(mg/L)</ns0:cell><ns0:cell>24.15</ns0:cell><ns0:cell>32.24</ns0:cell><ns0:cell>36.19</ns0:cell><ns0:cell cols='2'>33.16 32.18</ns0:cell><ns0:cell>250</ns0:cell><ns0:cell>36.19</ns0:cell><ns0:cell>24.15</ns0:cell><ns0:cell>31.58</ns0:cell><ns0:cell>4.47</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>*WHO=World Health Organization</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 6 . Correlation matrix of various ions in River Nworie</ns0:head><ns0:label>6</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell /><ns0:cell>Na +</ns0:cell><ns0:cell>K +</ns0:cell><ns0:cell>Mg 2+</ns0:cell><ns0:cell>Ca 2+</ns0:cell><ns0:cell>Cu</ns0:cell><ns0:cell>Cd</ns0:cell><ns0:cell>Mn</ns0:cell><ns0:cell>Zn</ns0:cell><ns0:cell>Fe</ns0:cell><ns0:cell>Pb</ns0:cell><ns0:cell>NO 3 -</ns0:cell><ns0:cell>SO 4 2-</ns0:cell><ns0:cell>PO 4 3-</ns0:cell><ns0:cell>Cl -</ns0:cell></ns0:row><ns0:row><ns0:cell>Na +</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>K +</ns0:cell><ns0:cell>0.291</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Mg 2+</ns0:cell><ns0:cell>0.515</ns0:cell><ns0:cell>-0.081</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Ca 2+</ns0:cell><ns0:cell>0.834</ns0:cell><ns0:cell>0.032</ns0:cell><ns0:cell>0.838</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Cu</ns0:cell><ns0:cell>0.864</ns0:cell><ns0:cell>0.306</ns0:cell><ns0:cell>0.675</ns0:cell><ns0:cell>0.942</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Cd</ns0:cell><ns0:cell>0.953</ns0:cell><ns0:cell>0.272</ns0:cell><ns0:cell>0.638</ns0:cell><ns0:cell>0.938</ns0:cell><ns0:cell>0.975</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Mn</ns0:cell><ns0:cell>0.805</ns0:cell><ns0:cell>0.357</ns0:cell><ns0:cell>0.643</ns0:cell><ns0:cell>0.907</ns0:cell><ns0:cell>0.992</ns0:cell><ns0:cell>0.941</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Zn</ns0:cell><ns0:cell>-0.968</ns0:cell><ns0:cell>-0.240</ns0:cell><ns0:cell>-0.676</ns0:cell><ns0:cell>-0.942</ns0:cell><ns0:cell cols='2'>-0.951 -0.993</ns0:cell><ns0:cell cols='2'>-0.907 1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Fe</ns0:cell><ns0:cell>0.671</ns0:cell><ns0:cell>0.371</ns0:cell><ns0:cell>0.635</ns0:cell><ns0:cell>0.850</ns0:cell><ns0:cell>0.949</ns0:cell><ns0:cell>0.856</ns0:cell><ns0:cell>0.979</ns0:cell><ns0:cell>-0.808</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Pb</ns0:cell><ns0:cell>-0.012</ns0:cell><ns0:cell>0.374</ns0:cell><ns0:cell>0.215</ns0:cell><ns0:cell>0.289</ns0:cell><ns0:cell>0.462</ns0:cell><ns0:cell>0.259</ns0:cell><ns0:cell>0.567</ns0:cell><ns0:cell>-0.167</ns0:cell><ns0:cell>0.715</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>NO 3 -</ns0:cell><ns0:cell>-0.359</ns0:cell><ns0:cell>0.439</ns0:cell><ns0:cell>0.013</ns0:cell><ns0:cell>-0.411</ns0:cell><ns0:cell cols='2'>-0.405 -0.424</ns0:cell><ns0:cell cols='2'>-0.386 0.377</ns0:cell><ns0:cell cols='3'>-0.338 -0.130 1</ns0:cell><ns0:cell /><ns0:cell 
/></ns0:row><ns0:row><ns0:cell>SO 4 2-</ns0:cell><ns0:cell>-0.726</ns0:cell><ns0:cell>0.431</ns0:cell><ns0:cell>-0.640</ns0:cell><ns0:cell>-0.789</ns0:cell><ns0:cell cols='2'>-0.596 -0.700</ns0:cell><ns0:cell cols='2'>-0.502 0.747</ns0:cell><ns0:cell cols='2'>-0.371 0.288</ns0:cell><ns0:cell>0.544</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>PO 4 3-</ns0:cell><ns0:cell>0.989</ns0:cell><ns0:cell>0.223</ns0:cell><ns0:cell>0.568</ns0:cell><ns0:cell>0.831</ns0:cell><ns0:cell>0.818</ns0:cell><ns0:cell>0.923</ns0:cell><ns0:cell>0.745</ns0:cell><ns0:cell>-0.954</ns0:cell><ns0:cell>0.602</ns0:cell><ns0:cell cols='2'>-0.119 -0.298</ns0:cell><ns0:cell cols='2'>-0.780 1</ns0:cell></ns0:row><ns0:row><ns0:cell>Cl -</ns0:cell><ns0:cell>0.143</ns0:cell><ns0:cell>-0.220</ns0:cell><ns0:cell>-0.012</ns0:cell><ns0:cell>0.345</ns0:cell><ns0:cell>0.429</ns0:cell><ns0:cell>0.334</ns0:cell><ns0:cell>0.475</ns0:cell><ns0:cell>-0.245</ns0:cell><ns0:cell>0.524</ns0:cell><ns0:cell>0.598</ns0:cell><ns0:cell>-0.858</ns0:cell><ns0:cell cols='2'>-0.193 0.039</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'>*Bold numbers are significant at 5 %.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ An. Chem. reviewing PDF | (ACHEM-2019:11:42759:1:2:NEW 20 Dec 2019)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 7 . Principal Component Matrix using varimax rotation of ions in the river water Principal Component Parameter PC1 PC2 PC3</ns0:head><ns0:label>7</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Cd 2+</ns0:cell><ns0:cell>.990</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Cu 2+</ns0:cell><ns0:cell>.985</ns0:cell><ns0:cell>.166</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Zn 2+</ns0:cell><ns0:cell>-.978</ns0:cell><ns0:cell>.119</ns0:cell><ns0:cell>-.147</ns0:cell></ns0:row><ns0:row><ns0:cell>Ca 2+</ns0:cell><ns0:cell>.975</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Mn 2+</ns0:cell><ns0:cell>.958</ns0:cell><ns0:cell>.285</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Na +</ns0:cell><ns0:cell>.901</ns0:cell><ns0:cell>-.223</ns0:cell><ns0:cell>.223</ns0:cell></ns0:row><ns0:row><ns0:cell>Fe 2+</ns0:cell><ns0:cell>.894</ns0:cell><ns0:cell>.427</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>PO 4 2-</ns0:cell><ns0:cell>.873</ns0:cell><ns0:cell>-.343</ns0:cell><ns0:cell>.264</ns0:cell></ns0:row><ns0:row><ns0:cell>SO 4 2-</ns0:cell><ns0:cell>-.726</ns0:cell><ns0:cell>.647</ns0:cell><ns0:cell>.215</ns0:cell></ns0:row><ns0:row><ns0:cell>Mg 2+</ns0:cell><ns0:cell>.711</ns0:cell><ns0:cell>-.158</ns0:cell><ns0:cell>.161</ns0:cell></ns0:row><ns0:row><ns0:cell>Pb 2+</ns0:cell><ns0:cell>.329</ns0:cell><ns0:cell>.904</ns0:cell><ns0:cell>-.160</ns0:cell></ns0:row><ns0:row><ns0:cell>NO 3 -</ns0:cell><ns0:cell>-.506</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>.814</ns0:cell></ns0:row><ns0:row><ns0:cell>Cl -</ns0:cell><ns0:cell>.406</ns0:cell><ns0:cell>.404</ns0:cell><ns0:cell>-.804</ns0:cell></ns0:row><ns0:row><ns0:cell>K +</ns0:cell><ns0:cell>.173</ns0:cell><ns0:cell>.579</ns0:cell><ns0:cell>.718</ns0:cell></ns0:row><ns0:row><ns0:cell>Eigen value</ns0:cell><ns0:cell>8.739</ns0:cell><ns0:cell>2.252</ns0:cell><ns0:cell>2.074</ns0:cell></ns0:row><ns0:row><ns0:cell>% Variance</ns0:cell><ns0:cell>62.424</ns0:cell><ns0:cell>16.088</ns0:cell><ns0:cell>14.812</ns0:cell></ns0:row><ns0:row><ns0:cell>% Cumulative</ns0:cell><ns0:cell>62.424</ns0:cell><ns0:cell>78.512</ns0:cell><ns0:cell>93.324</ns0:cell></ns0:row><ns0:row><ns0:cell>.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ An. Chem. reviewing PDF | (ACHEM-2019:11:42759:1:2:NEW 20 Dec 2019)</ns0:note></ns0:figure>
</ns0:body>
" | "December 19, 2019
Sreeprasad Sreenivasan
Academic Editor, PeerJ Analytical Chemistry
RE: (#ACHEM-2019:11:42759:0:2:REVIEW)
Dear Editor,
We appreciate the insightful comments from the two reviewers. We have carefully addressed their concerns and made the corresponding revisions according to their advice. Both reviewers raised concerns about the English language used, so the manuscript was proof-read by an English speaker and the corrections have been incorporated into the revised manuscript. The following are our answers in response to the reviewers' comments:
Reviewer 1
The title of manuscript in not really consistent with the text. In my opinion the paper is only a simple monitoring study of water quality in the River Nworie, Nigeria. It is a classical study analyzing specific quality indicators of water quality by standard methods. What is the novelty of the work?
We agree in part with this comment in so far as it is a monitoring study. We think the title is in line with the text (we could suggest another title if the reviewer thinks otherwise). The novelty of the work is that studies focusing on the relationship between physicochemical parameters and the ionic composition of natural water are scarce, whereas laboratory studies have been well conducted. Additionally, studies relating physicochemical properties and dissolved toxic ions in water could develop a body of knowledge enabling the detection and quantification of the potential risk of ions such as heavy metals in natural water to aquatic ecosystems, animals and human health without actually involving aquatic organisms, animals or humans.
The paper has many gaps and I cannot recommended for publication in its present form. The manuscript is difficult to read.
We have carefully proof-read the manuscript to minimize English errors.
- The abstract has to be reconsidered, it has to highlight the main findings regarding the relationship between physicochemical characteristics and ionic composition of the Nworie River waters, as is stated in the title.
The abstract has been critically reviewed and carefully updated following the reviewer's advice.
- The study was performed during the dry season period only on 5 samples. Why only 5 samples were considered? Normally, a more comprehensive study has to be done to draw real conclusions. Therefore, the experimental plan has to be reconsidered.
The 5 samples collected were composite samples. A total of 30 samples (six per point) were collected overall.
- A detailed description of the anthropogenic activities along the river has to be performed.
The description is now included in the revised manuscript following the advice.
- Data on materials and methods used is very poor.
The materials and methods section has been carefully reviewed and corrected. We believe it is now better.
- Figures has low resolution.
Figures have been re-plotted with better resolution using the IBM SPSS version 20.
Reviewer 2 (Anonymous)
Basic reporting
I strongly recommend the authors to edit the language errors in the current version of the manuscript. This will help the readers to follow and understand the observations and discussions presented in this article.
We have carefully proof-read the manuscript to minimize English errors.
Please double-check the figure numbers in the main body, figure captions as well as in the actual figure are consistent and are accurate. For instance, in line#146 it is indicated that the “percentage of ions are presented in Figure 2” which is not correct.
Corrected
Experimental design
Although the authors had emphasized the importance of studying the relationship between physicochemical properties and chemical properties, the focus of the current study is not stated explicitly in the introduction of the manuscript.
The introduction has been reviewed and corrected. The focus of the study is now explicitly stated (to the authors' best knowledge).
Validity of the findings
In the manuscript, I don’t find any data for UV-Vis absorbance as well as other measurements based on which the relationship analysis is done. It is very important to show such experimental results that allow the readers to understand the measurement and estimation procedures followed by the authors. Due to the lack of any such experimental data in the manuscript, I find it difficult to follow the discussion and verify the conclusions derived through this study.
We didn’t use a UV-Vis. The experimental data are presented in the supplementary files.
Comments for the Author
In tables 2 and 3, I would suggest using “WHO permissible limit (2007)” instead of just “WHO (2007)”
Corrected
All the changes are marked red in a highlighted manuscript.
We thank you and reviewer’s time spent on this manuscript, and their suggestions have improved the paper, and we hope the manuscript is now in a form acceptable for publication in PeerJ.
Sincerely,
Enyoh Christian Ebere
" | Here is a paper. Please give your review comments after reading it. |
682 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The study tested the efficiency and reproducibility of a method for optimal separation of low and high abundant proteins in blood plasma. Firstly, three methods for the separation and concentration of eluted (E: low abundance) or bound (B: high abundance) proteins were investigated: TCA protein precipitation, the ReadyPrep TM 2-D cleanup Kit and Vivaspin Turbo 4, 5 kDa ultrafiltration units. Secondly, the efficiency and reproducibility of a Seppro column or a ProteoExtract Albumin/IgG column were assessed by quantification of E and B proteins. Thirdly, the efficiency of two elution buffers, containing either 25% or 10% glycerol for elution of the bound protein, was assessed by measuring the remaining eluted volume and the final protein concentration. Compared to the samples treated with TCA protein precipitation and the ReadyPrep TM 2-D cleanup Kit, the E and B proteins concentrated by the Vivaspin Turbo 4, 5 kDa ultrafiltration unit were separated well in both 1-D and 2-D gels.</ns0:p><ns0:p>The depletion efficiency of abundant protein in the Seppro column was reduced after 15 cycles of sample processing and regeneration, and the average ratio of E/(B+E) × 100% was 37 ± 11% with poor sample reproducibility, as shown by a high coefficient of variation (CV = 30%). However, when the ProteoExtract Albumin/IgG column was used, the ratio of E/(B+E) × 100% was 43 ± 3.1% (n=6) and its CV was 7.1%, showing good reproducibility.</ns0:p><ns0:p>Furthermore, the elution buffer containing 10% (W/V) glycerol increased the rate of B protein elution from the ProteoExtract Albumin/IgG column, and an appropriate protein concentration (3.5 µg/µl) for a 2-D gel assay could also be obtained when it was concentrated with a Vivaspin Turbo 4, 5 kDa ultrafiltration unit. In conclusion, the ProteoExtract Albumin/IgG column shows good reproducibility of preparation of low and high abundance blood plasma proteins when using the elution buffer containing 10% (W/V) glycerol. The optimized method of preparation of low/high abundance plasma proteins was when plasma was eluted through a ProteoExtract Albumin/IgG removal column, the column</ns0:p></ns0:div>
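As a quick check of the reproducibility figures quoted above, the coefficient of variation is simply the standard deviation of the E/(B+E) × 100% ratio divided by its mean. The snippet below is an illustrative calculation using only the summary values reported in the abstract, not a re-analysis of the raw data.

```python
# Illustrative check of the reported reproducibility: CV (%) = 100 * SD / mean
# of the E/(B+E) x 100 % ratio, using the summary values quoted in the abstract.
def cv(mean, sd):
    return 100 * sd / mean

print(f"Seppro column:        CV = {cv(37, 11):.0f} %")   # ~30 %
print(f"ProteoExtract column: CV = {cv(43, 3.1):.1f} %")  # ~7.2 %, close to the reported 7.1 %
```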
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>Blood plasma is an easily available bio-fluid and is therefore routinely used for monitoring changes in protein levels which may be actively secreted or leak from cells throughout the body. The highest abundance proteins in blood plasma are albumin, globulins and fibrinogen which comprise about 60%, 30%, and 4% of whole plasma proteins (7-8 g/dL), respectively <ns0:ref type='bibr' target='#b6'>[Farrugia, 2010;</ns0:ref><ns0:ref type='bibr' target='#b0'>Anderson and Anderson, 2002</ns0:ref>]. Among the remaining proteins, about 1%, are regulatory and they are comprised of thousands of low abundance proteins, such as enzymes, proenzymes and hormones <ns0:ref type='bibr' target='#b0'>[Anderson and Anderson, 2002]</ns0:ref>. Plasma protein biomarkers of disease progression is currently a very active research area <ns0:ref type='bibr' target='#b26'>[Zhang, et al., 2013]</ns0:ref>.</ns0:p><ns0:p>As a common metabolic pool, plasma has been a very important material for biomarker discovery <ns0:ref type='bibr' target='#b14'>[O'Connell et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b26'>Zhang, et al., 2013]</ns0:ref>. Biomarker targets of disease progression are typically found at low concentrations <ns0:ref type='bibr' target='#b0'>[Anderson and Anderson, 2002]</ns0:ref> and so identification and quantification of these proteins is challenging. Two-dimensional gel electrophoresis (2-DE) with IPGs were developed in the 1970s <ns0:ref type='bibr'>[Görg et al., 2002]</ns0:ref> but it still has many challenges <ns0:ref type='bibr' target='#b27'>[Zhou et al., 2005]</ns0:ref>, while LC-MS based proteomics has been a more recent development [van den <ns0:ref type='bibr'>Broek, et al., 2013]</ns0:ref>. They have both proved useful in plasma biomarker discovery. However, the plasma proteome has a large dynamic range of individual protein concentrations (10 orders of magnitude).</ns0:p><ns0:p>Therefore, there are several barriers to overcome for identification and quantification of low abundance proteins of interest using 2-DE and LC-MS. One of them is the visualisation and measurement of lower-abundance proteins which are typically masked by the highly-abundant proteins in a standard measurement [Kovàcs and Guttman, 2013; <ns0:ref type='bibr' target='#b2'>Boschetti and Righetti, 2009;</ns0:ref><ns0:ref type='bibr' target='#b16'>Pernemalm et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b0'>Anderson and Anderson, 2002]</ns0:ref>. In order to tackle this problem, several commercial columns have been developed to deplete the higher abundance proteins. These columns have helped to further the research process, even though there are several significant challenges to overcome, e.g. sample reproducibility.</ns0:p><ns0:p>Trichloroacetic acid (TCA) can be added to too diluted protein samples in order to precipitate and concentrate proteins or remove salts and detergents to clean the samples. TCA precipitation was frequently used to prepare samples for SDS-PAGE or 2D-gels <ns0:ref type='bibr' target='#b27'>(Zhou et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b10'>Koontz, 2014)</ns0:ref>.</ns0:p><ns0:p>The ReadyPrep 2-D cleanup kit was developed by Bio-rad. It has similar mechanisms to TCA precipitation, using a modified traditional TCA-like protein precipitation to remove ionic contaminants, e.g. 
detergents, lipids, and phenolic compounds, from protein samples to improve the 2-D resolution and reproducibility <ns0:ref type='bibr' target='#b18'>(Posch, Paulus and Brubacher, 2005)</ns0:ref>. Vivaspin ® Turbo 4 can handle up to 4 ml sample and ensures maximum process ultra-fast speed down to the last few micro liters after > 100 fold concentration with high retentive recovery > 95%. In addition, it has universal rotor compatibility and easy recovery due to a unique, angular and pipette-friendly deadstop pocket <ns0:ref type='bibr' target='#b5'>(Capriotti et al., 2012)</ns0:ref>.</ns0:p><ns0:p>The Seppro Column (SEP130-1KT, Sigma-Aldrich Lt.d), specified for rat plasma, is designed to remove seven highly abundant proteins: albumin, IgG, fibrinogen, transferrin, IgM, haptoglobin, and alpha1-antitrypsin. The column contains an antibody-coated resin, and this depletion technology uses a mixture of small single-chained recombinant antibody ligands along with conventional affinity purified polyclonal antibodies. The efficiency of high-abundance protein depletion is 90%. Following depletion of these high abundance proteins, the remaining lower abundance proteins were then loaded at a 20-50 times higher concentration in a 2-DE or LC separation. Seppro Columns have also been used for human plasma <ns0:ref type='bibr' target='#b4'>(Corrigan et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b17'>Polaskova et al., 2010)</ns0:ref> and plant proteins <ns0:ref type='bibr' target='#b3'>(Cellar et al., 2008)</ns0:ref>. Most studies have focused on their binding efficiency, which is reported to be high for the targeted abundant proteins, however, there is no report on the reproducibility of abundant proteins depletion with repeated use of the columns, even though it has been claimed by the manufacturer that columns can be used up to 100 times.</ns0:p><ns0:p>High efficiency and good reproducibility are important in maintaining a reproducible protein profile. Some animal or human nutritional interventions, such as zinc depletion, can affect hundreds of plasma proteins which may be found in high or low concentrations. Thus, related biomarker discovery has to focus on both abundant and low abundance proteins separately.</ns0:p><ns0:p>Another frequently used column, ProteoExtract™ Abundant Protein Removal Kit, is a disposable column, which was developed in 2004 for the purpose of enhancing low abundance protein resolution. It only removes 2 abundant proteins, albumin and IgG. However, it is highly specific, exhibits little to zero non-specific binding and uses a combination of an albumin-specific resin and Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:p>This is the first systematic study to investigate: 1) several methods for sample preparation, e.g. appropriate conditions for depletion, desalting and concentration of protein samples; 2) the reproducibility and depletion efficiency of abundant protein removal from plasma samples using a Seppro column repeatedly or using the individual ProteoExtract™ Abundant Protein Removal Kit; and 3) the development of an efficient elution buffer to wash out the bound protein from the ProteoExtract Albumin/IgG column.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>Method and materials</ns0:head></ns0:div>
<ns0:div><ns0:head n='2.1'>Chemicals:</ns0:head><ns0:p>Seppro rat spin column (Seppro ® Rat. SIGMA/SEP130), Laemmli buffer, SDS, glycerol, Tris base, BCA protein assay (All from Sigma-Aldrich Ltd, UK); ProteoExtract Albumin/IgG removal kit (Cat: 122642, Merck, USA); the bicinchoninic acid (BCA) protein assay kit (The Thermo Scientific Pierce); rat plasma</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Animals</ns0:head><ns0:p>Male Hooded Lister rats [n=50, body weight was 200 ± 25 (g)] were given semi-synthetic egg white-based diets, containing zinc from <1, up to 35 mg Zn/kg. The rats were handled and studied in compliance with the UK Animals (Scientific Procedures) Act 1986 with appropriate licensing.</ns0:p><ns0:p>Rats were individually housed in polypropylene cages with a 12h:12h light:dark cycle and a room temperature of 22-24 o C. Blood was collected and plasma was isolated using our established protocol <ns0:ref type='bibr' target='#b18'>[18]</ns0:ref>. The study was approved by the Small Animal Ethics Committee of University of Aberdeen (the reference was 604012) and monitored by qualified university-based veterinary surgeons.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3'>Depletion of abundant plasma proteins using a Seppro column</ns0:head><ns0:p>The methods, mainly based on the protocols provided by the company (Sigma aldrich, UK), as well as a modified procedure, is described in the Supporting information 1.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.4'>Depletion of abundant plasma proteins using ProteoExtract Albumin/IgG column</ns0:head><ns0:p>The detailed procedure of column equilibration and sample treatment using ProteoExtract Albumin/IgG removal kit is shown in Supporting information 2. The procedure was briefly as follows: <ns0:ref type='bibr' target='#b0'>(1)</ns0:ref>. Collection of the low abundant proteins: The column was inverted on tissue paper for 5 minutes; 850 µL of the binding buffer was added into the column and allowed to pass through the resin bed by gravity flow; a 40 µl sample of rat plasma was then diluted with 360 µl of binding buffer and applied onto the column. The diluted sample was then allowed to pass through the resin bed by gravity-flow; 600 µl of binding buffer was added to wash the column by gravity-flow and the eluent, which contained the low abundance proteins, was collected; (2) Collection of the high abundance proteins: 1 ml of Laemmli buffer [50 ml Laemmli buffer containing 0.3785 g Tris base, (62.4 mM); 1.0279 g SDS (2%); buffer 1 containing glycerol 9.92 ml (25% W/V) or buffer 2 containing glycerol 3.97 ml (10% W/V), pH 6.8] in a tube was left in a boiling water bath for 5 minutes, then cooled to room temperature. 850 µl of above Laemmli buffer was added onto the column and allowed to pass through, by gravity-flow. This step was repeated again for further washing to elute the bound protein. The eluent contained the abundant proteins.</ns0:p></ns0:div>
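As a quick check of the glycerol volumes quoted above, the stated % (W/V) values can be reproduced from the buffer volume, assuming a glycerol density of about 1.26 g/mL (an assumed value; the density is not given in the protocol). A minimal sketch in Python:

```python
# Sketch: reproduce the % (w/v) glycerol of the two 50 mL Laemmli-based elution buffers.
# The glycerol density of ~1.26 g/mL is an assumption, not stated in the protocol.
GLYCEROL_DENSITY_G_PER_ML = 1.26
BUFFER_VOLUME_ML = 50.0

def percent_w_v(glycerol_ml):
    """% (w/v) = grams of glycerol per 100 mL of buffer."""
    return glycerol_ml * GLYCEROL_DENSITY_G_PER_ML / BUFFER_VOLUME_ML * 100

print(round(percent_w_v(9.92), 1))  # ~25.0 -> buffer 1, 25% (w/v)
print(round(percent_w_v(3.97), 1))  # ~10.0 -> buffer 2, 10% (w/v)
```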
<ns0:div><ns0:head n='2.5'>Concentration, desalting, clean up and quantification of eluted or bound proteins</ns0:head><ns0:p>Three methods were compared in order to concentrate the eluted plasma fraction and the bound proteins to at least 5 μg/μl for the generation of quality 2-D gels. They were the Vivaspin Turbo 4, 5 kDa ultrafiltration unit (VS04T11, Sartorius, UK), trichloroacetic acid (TCA) precipitation <ns0:ref type='bibr' target='#b27'>(Zhou et al., 2005)</ns0:ref> or the ReadyPrep TM 2-D cleanup Kit (Catalog #163-2130, Bio-Rad). A detailed protocol for each is provided in Supporting information 3. Samples were desalted using pH 7.4 50 mM Tris buffer. Protein quantification was achieved using the bicinchoninic acid (BCA) method, or the RC-DC protein assay (Thermo Scientific Pierce) (Supporting information 4). The impact of each concentration method on the quality of the protein separation profile was analysed by loading the samples onto either 1-D or 2-D gels. V was applied after the gel run to prevent diffusion.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.8'>Coomassie blue stain</ns0:head><ns0:p>The gels were fixed in 200 ml solution [50% (v/v) ethanol, 2% (v/v) ortho-phosphoric acid, 48% H 2 O] for 3 hours and washed with H 2 O for at least 1 hour with a couple of changes for rehydration.</ns0:p><ns0:p>They were then stained with 200 ml Coomassie blue [34% methanol, 2% ortho-phosphoric acid, 64% H 2 O, containing 1 mg/1ml Coomassie blue] for three days, according to the manufacturer's instructions. The gels were then scanned with the GS-800 Calibrated Densitometer and analysed using Progenesis SameSpots (Nonlinear, UK).</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.9'>Statistics</ns0:head><ns0:p>After normalisation and spot matching on Progenesis SameSpots, the normalized volume (densities) of all matched spots were statistically analysed using Genstat (VSN International, Hemel Hempstead, UK).</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>Results</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.1'>Protein recovery rate of three concentration methods and their protein profiles in 1-D and 2-D gels</ns0:head><ns0:p>The protein recovery rate of three methods using the TCA precipitation, the ReadyPrep TM with a bigger CV which was 30%. There was a variety change during the whole procedure, e.g. initially relatively stable for the first fifteen depletions at around 28% but then increased dramatically up to 53% at 16 th depletion which may be caused by transferring the resin to a new column which is required after several depletions. Then it reduced to average of 41% in the following six extra depletions, then further increased to 53% in the following five depletions. The average ratio of the eluted protein compared to total protein for 27 samples was 63% ± 11 (%, x ± SD, n = 27) and with a CV of 18%. Using the ProteoExtract Albumin/IgG removal column, the average recovery rate of eluted proteins (amount of protein in the eluted solution /total protein loaded onto the column) was 43% (n=6), while the recovery rate of abundant proteins (amount of bound protein on the column /total protein loaded onto the column) was 33% (n=6). Their CVs for the recovery rate of eluted and abundant proteins were 7.1% and 5.4% respectively (Table <ns0:ref type='table'>1</ns0:ref>).</ns0:p></ns0:div>
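The ratios and coefficients of variation reported in this section follow from two simple calculations: the eluted-protein fraction E/(E+B) × 100% for each depletion, and CV% = SD/mean × 100 across depletions (or across columns). A minimal sketch with hypothetical protein amounts in µg, not the measured data of this study:

```python
import statistics

# Hypothetical eluted (E) and bound (B) protein amounts (ug) from repeated depletions;
# illustrative values only, not the actual measurements.
eluted = [120.0, 135.0, 160.0, 150.0]
bound = [210.0, 190.0, 150.0, 170.0]

ratios = [e / (e + b) * 100 for e, b in zip(eluted, bound)]  # E/(E+B) x 100 (%)
mean_ratio = statistics.mean(ratios)
sd_ratio = statistics.stdev(ratios)
cv_percent = sd_ratio / mean_ratio * 100                     # CV% = SD / mean x 100

print(f"E/(E+B) = {mean_ratio:.1f} +/- {sd_ratio:.1f} %, CV = {cv_percent:.1f} %")
```

A low CV of this ratio across repeated depletions (as found for the ProteoExtract column) indicates that the split between the low and high abundance fractions is stable from sample to sample.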
<ns0:div><ns0:head n='3.3'>The efficiency of two elution buffers for removing bound proteins from the column</ns0:head><ns0:p>Laemmli buffer containing 25% or 10% glycerol was used to wash bound proteins out of the column. The collected solutions were then concentrated using Vivaspin Turbo 4, 5 kDa ultrafiltration units. The total volume of solution remaining during the concentration procedure is shown in Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>Discussion</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.1'>Concentration methods and protein recovery</ns0:head><ns0:p>The quality of the 2-D gels of samples prepared by both TCA precipitation and Vivaspin Turbo 4 was similar in resolution and clarity (Figure <ns0:ref type='figure' target='#fig_10'>1C</ns0:ref> & E for eluted proteins, D & F for bound proteins). However, samples treated with TCA precipitation did not generate higher spot numbers for the bound protein samples than the samples prepared with Vivaspin Turbo 4 (495 vs 659), even though the number of eluted protein spots from the TCA-precipitated samples was 39 spots higher than that from the Vivaspin Turbo 4 prepared eluted protein sample (604 vs 565) (Section 3.1). This might be caused by a lower separation efficiency of the TCA-prepared bound sample. Based on the above results, the Vivaspin Turbo 4, 5 kDa ultrafiltration unit was selected to concentrate protein samples in this study. In Figure <ns0:ref type='figure' target='#fig_10'>1A</ns0:ref>, clear and dense bands at about 66 kDa were observed in the bound proteins collected from both columns. However, several bands above 75 kDa were observed in the samples from the Seppro spin column. Clearly, eluted proteins prepared by the Seppro spin column generated more bands in the gel. Also, some bands around 140 kDa and 190 kDa were only lightly stained compared to the sample prepared by the ProteoExtract Albumin/IgG removal kit. Comparison of the untreated plasma sample (lane P) with the sample after extraction of abundant proteins showed that the eluted protein from the ProteoExtract Albumin/IgG removal kit resulted in a higher loading of proteins, showing more bands (lane PD).</ns0:p><ns0:p>This may be caused by the amount of eluted proteins being lower and also the avoidance of masking by co-existing abundant proteins.</ns0:p></ns0:div>
<ns0:div><ns0:p>the method of concentrating protein samples by Vivaspin Turbo 4, 5 kDa ultrafiltration unit was selected in this study. Samples obtained from the ProteoExtract Albumin/IgG removal kit were run on 2-DE gels (Figure <ns0:ref type='figure' target='#fig_4'>2</ns0:ref>). Samples from crude plasma without treatment generated an average of 457 spots in a 2-D gel. After removal of the abundant proteins, 553 eluted protein spots were observed in a 2-D gel. About 582 abundant protein spots were observed in the gel. This work clearly showed the benefit of abundant protein removal in enhancing the separation of low abundance proteins. About 60 spots were matched in the 2-D gels of both eluted and abundant proteins.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3.'>Good reproducibility of depleting abundant proteins using a ProteoExtract Albumin/IgG removal column</ns0:head><ns0:p>The reproducibility of the Seppro spin column (Figure <ns0:ref type='figure' target='#fig_12'>3</ns0:ref>) in depleting abundant proteins was unstable. The average ratio of E/(E+B) × 100% was 37 ± 11 (%, x ± SD, n = 27) with a coefficient of variation (CV) of 30%. The ratio increased with an approximately linear trend (R 2 = 0.57; see the relationship data in the supplementary material), which means that the depletion efficiency of the column decreased. This could be caused by decreased binding affinity due to increased irreversible specific or non-specific binding of antibody epitopes by protein from previously processed samples. This is also reflected in the ratio of B/(E+B) × 100%, which decreased during the 27 depletion cycles. Four quality control samples were studied as a parallel control, one for every six sample preparations. The detailed information is supplied in the supplementary material, in which the eluted protein rate of QC1D to QC4D increased gradually from 26.32%, 27.37% and 42.53% to 56.61%, respectively, and the bound protein rate of QC1B to QC4B decreased from 88.21%, 58.55% and 57.47% to 43.39%, respectively. This study demonstrated the limitations of this column for repeated depletion and regeneration. The results did not agree with the manufacturer's claim that the column can be used up to 100 times.</ns0:p><ns0:p>Comparing the CV of 30% for the recovery rate of eluted proteins using the Seppro spin column with the CV of only 7.1% using the ProteoExtract Albumin/IgG removal kit also shows that the reproducibility of each depletion process with the second kit was very good. Furthermore, after the proteins were concentrated, the recovery rate, calculated as the amount (µg) in the eluted or bound fractions after concentration compared to the amount of protein before concentration, was 81 ± 6% for eluted protein and 82 ± 8% for bound protein. There was also good inter-column</ns0:p></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science reproducibility. The ProteoExtract™ removal kit provides a binding capacity of 0.7 mg IgG and/or 2 mg albumin per column, indicating a limitation of 30 µl plasma based on an albumin concentration of 7 g/dL. Depletion of albumin and IgG from human serum samples was consistently higher than 70% without binding significant amounts of other serum proteins, and so a sample loaded onto a 2-D gel may be 3-4 times more concentrated. The manufacturer states that the 'remarkable selectivity provided by the resins and the optimized design of the columns result in background binding of less than 10% to other plasma proteins'. Another advantage of the product is the pre-filled disposable gravity-flow columns, which allow the parallel processing of multiple samples. The whole procedure takes about 30 minutes, in comparison to the long procedure of the Suppro column, which takes up to 12 hours. 4.4 Elution buffer containing 10% glycerol was efficient to elute bound protein from the ProteoExtract Albumin/IgG removal column Another consideration of our method evaluation was to find an appropriate elution buffer to elute the proteins bound to the extraction column resin, because a biomarker of interest may be in the eluted or bound fraction. Thus efficient and complete removal of bound protein was of importance.</ns0:p><ns0:p>The elution buffer was not provided in the kit. Because glycerol is often used as a cosolvent to inhibit protein aggregation during protein refolding <ns0:ref type='bibr' target='#b28'>(Vagenende et al., 2009 )</ns0:ref>, 25% or 10% glycerol was added into Laemmli buffer (62.4mM Tris base, 2% SDS) in order to elute the bound protein. The efficiency of the two elution buffers was further assessed. A protein concentration of 3.5 µg/µl is important for loading onto 1-D and 2-D gels. When Laemmli buffer with 25% glycerol was used, the concentration of the eluted bound fraction was very slow and moreover, the total volume of the concentrated eluted protein fraction could not be reduced to 120 µl, a required volume in order to reach the ideal protein concentration of 3.5 µg/µl. This problem may have been caused by the high percentage of glycerol in the fraction. When Laemmli buffer containing 10% glycerol was chosen as elution buffer, a final concentration volume of 115 μl could be achieved.</ns0:p><ns0:p>In summary, the present study explored the optimization of a method for the preparation of low abundance and abundant plasma proteins. The efficiency, selectivity and reproducibility of Seppro Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science column showed efficient separation of abundant and low abundance proteins, repeated re-use of the high cost antibody-based column was limited to 27 depletion-regeneration cycles before binding capacity of abundant proteins was gradually reduced. In our experience therefore, the column failed to achieve the specification of the manufacturer (100 re-use cycles with good reproducibility). Even though ProteoExtract™ removal kits removed only two abundant proteins, it could achieve a three times concentration of low abundance proteins loaded onto a gel. This improved the separation of lower abundant proteins, which was demonstrated in the separation of both 1-D and 2-D gels. Furthermore, the depletions using this column showed good reproducibility between individual columns, the CV being less than 10% in both protein fractions. Using a 10% glycerol in Lamili buffer clearly improved the elution speed during the depletion process by the ProteoExtract Albumin/IgG column and also improved the efficiency of the evaporation of the concentrated samples. The optimized method of preparation of low/high abundant plasma proteins was: plasma was eluted through a ProteoExtract Albumin/IgG removal column, the elution contains the low abundant proteins; and the column was then further washed with elution buffer containing 10% glycerol, the elution contains the high abundant proteins. All elutions were further concentrated using Vivaspin ® Turbo 4 5kDa ultrafiltration units for 1 or 2-D gel electrophoresis. </ns0:p></ns0:div>
<ns0:div><ns0:head>Legends</ns0:head></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>a</ns0:head><ns0:label /><ns0:figDesc>unique immobilized protein A polymeric resin. It has been tested for different purposes (Oliver et al., 2010; Sawhney et al., 2009; Murray et al., 2009; Liang et al., 2007 and Björhall, 2005). Even though several studies have cited its use, there is no report on reproducibility between individual columns, especially for the recovery of bound protein from the column. PeerJ An. Chem. reviewing PDF | (ACHEM-2019:11:42746:1:1:NEW 9 Jan 2020)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>2. 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Reproducibility of the Seppro spin column and the ProteoExtract Albumin/IgG removal kit for the depletion of abundant proteins Twenty seven plasma samples were depleted of abundant proteins sequentially using a Seppro spin column. Every other six samples, one quality control sample was used to perform the same depletion procedure. The eluted (E: low abundance) and bound (B: abundant) protein was quantified. The ratio of either E or B protein to total (E+B) recovered proteins during 27 sample as well as the four quality control sample depletions was used to assess the stability and reproducibility of the Seppro spin column. Plasma (40 µL, n=6) samples were diluted with 360 µL of binding buffer and were loaded onto a ProteoExtract Albumin/IgG removal column separately. The amount of E and B protein were analysed. The ratio of their amount to the total protein PeerJ An. Chem. reviewing PDF | (ACHEM-2019:11:42746:1:1:NEW 9 Jan 2020) Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science previously depleted was calculated and their CV% was used to assess the reproducibility of the Albumin/IgG removal kit. 2.7 One and two-dimensional SDS PAGE electrophoresis 2.7.1 One-dimensional SDS PAGE Protein (15 μg) was loaded in each well of a 4-12% Bis-Tris Criterion TM XT precast gel (Catalog 345-0124, Bio-Rad). The electrophoresis was performed using a Criterion TM cell (Catalog 165-6001, Bio-Rad) with MOPS running buffer (Catalog 161-0788, Bio-Rad) with a 200 volts constant power supply. When the bromophenol blue ran to the bottom of the gel, the power supply was turned off and the gel was removed and stained with Coomassie Blue. 2.7.2 Two-dimensional (2-D) gel electrophoresis Protein samples (200 μg) were diluted with the buffer (7 M urea, 2 M thiourea, 4% CHAPS, 2.0% bio-lyte, 3/10 ampolyte) to a volume of 325 μl, and 15 μl of 3.5% DTT was then loaded onto a 18 cm IPG readystrips (Catalog 163-2007, Bio-Rad) with a linear pH gradient of 3-10 by passive ingel rehydration. Rehydration was performed at 20 O C for 1 hour without applied voltage on an IEF cell (BioRad). Then mineral oil was added onto the strip. The rehydration took an extra 16 hours(50 V/strip). The strips were transferred to a clean tray, with a paper wick wetted with 10 μl of ddH 2 O placed at the anode end and a wick wetted with 15 μl of 3.5% DTT placed at the cathode end. The strip was then overlaid with mineral oil and the initial startup and ramping protocol followed as per the instruction booklet for the IEF cell. After 1 hour, the strip was removed to a tray containing fresh wicks and overlaid with mineral oil. The run proceeded until the preset volthours value had been reached, after which the voltage was maintained at 500V until the strip was ready to be transferred to the second dimension SDS-PAGE. IPG strips were removed from thefocusing tray and reduced by equilibrating the strips side up, in a solution (3 ml) [containing 6M urea, 2% (w/v) SDS, 20% (v/v) glycerol, 375 mM Tris-HCl (pH 8.8), 130 mM DTT] for 13 minutes at room temperature with gentle agitation, before being alkylated in a solution (3 ml) [containing 6M urea, 2% (w/v) SDS, 20% (v/v) glycerol, 375 mM Tris-HCl (pH 8.8), 135 mM lodoacetamide] for 13 minutes at room temperature. 
The strip was trimmed from both the anodic and cathodic ends to 15.5 cm and applied to the top of a 18 x 18 cm gel cassette (8-16% cast gels) with the lower pH end of the strip to the extreme left and then overlayed with molten agarose (2%, w/v) in DALT tank buffer [24 mM Tris base, 200.5 mM glycine and 0.1% (w/v) SDS containing 2 mg/100 PeerJ An. Chem. reviewing PDF | (ACHEM-2019:11:42746:1:1:NEW 9 Jan 2020) Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science ml bromophenol blue]. The second dimension separation was performed in a Hoefer ISO DAKT tank (Bio-Rad) filled with the DALT tank buffer. The gels were typically run at 200 V for 9.5 hours or until the bromophenol blue front reached the bottom of the gel. A holding voltage of 50</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>2-D cleanup Kit and the Vivaspin Turbo 4, 5 kDa ultrafiltration unit, to concentrate the samples, was 67%, 68% and 56% respectively. Samples treated with the ReadyPrep TM 2-D cleanup Kit were too dilute (2.62 µg/µl, n=3) for loading onto 2-D gels, compared to the higher protein concentration (3.27 µg/µl, n=3) obtained after Vivaspin Turbo 4, 5 kDa ultrafiltration unit sample processing. When the crude plasma samples were treated with the Seppro spin column, total protein recovery rate was 55.3% which included 27.7% of eluted protein, 2.4% non-specifically bound protein and 25.2% bound protein. The recovery rate after ProteoExtract Albumin/IgG removal kit treatment was 58% (n=6), containing 33.6% eluted protein and 24.4% abundant protein. Both columns generated similar results for the protein recovery rate. Samples obtained from separation of plasma on ProteoExtract Albumin/IgG removal kits and Seppro spin column were run on a 1-D gel (Figure 1A), to compare the difference between two column separations. The influence of sample PeerJ An. Chem. reviewing PDF | (ACHEM-2019:11:42746:1:1:NEW 9 Jan 2020) Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science concentration by TCA precipitation and Vivaspin Turbo 4, 5 kDa ultrafiltration unit on protein separation was studied by loading two samples from each separation into two 18 cm 2-D SDS-PAGE gels (Figure 1 B-F). Samples obtained from ProteoExtract Albumin/IgG removal kit were run onto 2-DE gels (Figure 2).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>3. 2 . 3 .</ns0:head><ns0:label>23</ns0:label><ns0:figDesc>Reproducibility of the Seppro spin column and the ProteoExtract Albumin/IgG removal kit in depleting abundant proteins Results for protein separation in 1-D and 2-D of both Section 3.1 showed efficient depletion of abundant proteins when using the Seppro column. The ratio of eluted protein (E) or bound protein (B) to total recovered proteins (E+B) during repeated depletion of 27 samples is shown in Figure The trend in ratio is an indication of durability and reproducibility of the column for repeated depletion and regeneration. The average ratio of E/(E+B) ×100% was 37 ± 11 (%, x ± SD, n = 27)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>4. 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Protein profile of samples concentrated from TCA precipitation and Vivaspin Turbo 4, 5 kDa ultrafiltration unit Gels on Figure 1B-F were subsequently analysed by Progenesis SameSpots software. The spots of those normalization volumes less than 35317 were deleted at the filtering step. An extensive manual editing was performed after automated gel alignment and spot detection. The average number of detected protein spots was 600 spots for crude plasma samples precipitated with TCA, 604 spots for eluted protein samples precipitated with TCA, 565 spots for eluted protein concentrated by Vivaspin Turbo 4, 5 kDa ultrafiltration unit, 495 spots for the bound protein precipitated with TCA and 659 spots for the bound protein with Vivaspin Turbo 4, 5 kDa ultrafiltration unit. There were 39 more spots on gels of TCA precipitated samples than the samples concentrated with Vivaspin Turbo 4, 5 kDa ultrafiltration unit, however, there were 164 more spots on the gels of bound protein samples concentrated by Vivaspin Turbo 4, 5 kDa ultrafiltration unit than the samples precipitated with TCA. Further, the resolution of the low-molecular weight spots was better on bound protein concentrated by Vivaspin Turbo 4, 5 kDa ultrafiltration unit. Thus, PeerJ An. Chem. reviewing PDF | (ACHEM-2019:11:42746:1:1:NEW 9 Jan 2020)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>columns and ProteoExtract™ removal kits were evaluated. The Vivaspin Turbo 4, 5 kDa ultrafiltration unit gave the best concentration of eluted sample fractions when compared with TCA precipitation or a ReadyPrep 2-D cleanup Kit. Even though the results of using a Seppro PeerJ An. Chem. reviewing PDF | (ACHEM-2019:11:42746:1:1:NEW 9 Jan 2020)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Gel images of 1 or 2-D gels of proteins from different treatments. A, 1-D gel image of 15 µg of eluted and bound proteins prepared by ProteoExtract Albumin/IgG removal kit and Seppro column kit. The sample concentration was adjusted to 3.5 µg/µl with 50 mM tris-HCl pH 7.4 before treatment with sample buffer. S: precision plus protein dual color standards. PD: ProteoExtract Albumin/IgG removal column eluted protein; PB: ProteoExtract Albumin/IgG removal column bound protein; P: crude plasma; SD: Seppro column eluted protein; SB: Seppro column bound proteins. B-F, 2-D images of protein samples precipitated with TCA or concentrated by Vivaspin Turbo 4 5 kDa ultrafiltration unit. 200 μg protein was loaded onto an 18 cm pH 3-10 IPG strip and an 8-16% gradient SDS PAGE gel in the second dimension. The gels were stained with Coomassie blue. B, crude rat plasma protein; C. eluted protein was precipitated with TCA; D. eluted protein was concentrated by Vivaspin Turbo 4, 5 kDa ultrafiltration unit; E. bound</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 2 .Figure 3 .</ns0:head><ns0:label>23</ns0:label><ns0:figDesc>Figure 2. 2-D gel images of plasma protein samples eluted from a ProteoExtract Albumin/IgG removal kit and concentrated using Vivaspin Turbo 4, 5 kDa ultrafiltration unit. 200 μg protein was loaded onto an 18 cm pH 3-10 IPG strip and an 8-16% gradient SDS PAGE gel was used in the second dimension. The gels were stained with Coomassie blue. A, crude rat plasma protein; B. eluted protein; C. bound protein</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The change in elution volume with time during Vivaspin Turbo 4, 5 kDa ultrafiltration unit concentration of fractions containing 10% and 25% glycerol.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Reproducibility of the separation of eluted (low abundance) and bound (abundant) plasma proteins using the ProteoExtract Albumin/IgG removal kit before and after concentration</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 1 Gel</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 2 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 3 The</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,199.12,525.00,294.75' type='bitmap' /></ns0:figure>
</ns0:body>
" | "Reviewer 1
Basic reporting
Professional article structure, figures, tables. Raw data shared.
Experimental design
Research question well defined, relevant & meaningful. It is stated how research fills an identified knowledge gap.
Validity of the findings
All underlying data have been provided; they are robust, statistically sound, & controlled.
Comments for the author
The manuscript titled “Establishing an optimized method for the preparation of low and high abundance blood plasma proteins” from Henian Yang et al. tested the efficiency and reproducibility of a method for optimal separation of low and high abundance proteins in blood plasma. The authors attempt to describe appropriate conditions to separate low and high abundance plasma proteins from three different aspects, and an optimized method is then presented.
In my opinion, there are some points should be addressed as following.
Questions: (1) Although this paper provided the detailed protocol for three methods to concentrate the eluted plasma fraction and the bound proteins, no analysis of the differences and similarities among them was mentioned. The characteristics of the three methods should be systematically clarified; otherwise it is inconvenient for readers to understand these methods.
Respond
Thanks, the following paragraph was added to the introduction. Lines 73-83
Trichloroacetic acid (TCA) can be added to too diluted protein samples in order to precipitate and concentrate proteins or remove salts and detergents to clean the samples. TCA precipitation was frequently used to prepare samples for SDS-PAGE or 2D-gels (Zhou et al., 2005; Koontz, 2014). The ReadyPrep 2-D cleanup kit was developed by Bio-rad. It has similar mechanisms to TCA precipitation, using a modified traditional TCA-like protein precipitation to remove ionic contaminants, e.g. detergents, lipids, and phenolic compounds, from protein samples to improve the 2-D resolution and reproducibility (Posch, Paulus and Brubacher, 2005). Vivaspin® Turbo 4 can handle up to 4 ml sample and ensures maximum process ultra-fast speed down to the last few micro liters after > 100 fold concentration with high retentive recovery > 95 %. In addition, it has universal rotor compatibility and easy recovery due to a unique, angular and pipette-friendly dead-stop pocket (Capriotti et al., 2012).
Question: And in the part of “Results and Discussion”, the reason of why samples treated with TCA precipitation did not generate good quality 2-D gels should be properly explained.
Respond: Thanks for this point. The quality is the same in terms of gel separation and clarity, but the number of protein spots varied greatly. A further explanation was added in lines 261-268 and highlighted.
The quality of the 2-D gels of samples prepared by both TCA precipitation and Vivaspin Turbo 4 was similar in resolution and clarity (Figure 1C & E for eluted proteins, D & F for bound proteins). However, samples treated with TCA precipitation did not generate higher spot numbers for the bound protein samples than the samples prepared with Vivaspin Turbo 4 (495 vs 659), even though the number of eluted protein spots from the TCA-precipitated samples was 39 spots higher than that from the Vivaspin Turbo 4 prepared eluted protein sample (604 vs 565) (Section 3.1). This might be caused by a lower separation efficiency of the TCA-prepared bound sample.
(2) The result of protein quantification in this study is only analyzed by the method of BCA protein assay. If there is another way to proof the final result, it will be more convinced for the quality of this manuscript.
Respond
Thanks, we strongly agree. In our previous research (Zhou et al., 2005, cited in this paper), radioactive labelling was used to accurately analyse the lost/recovered protein during the 2-D gel process, and it could also be applied in this research. In this study, the BCA protein assay was sufficient for the research purpose, but certainly, for better accuracy, an additional method should be considered for future research.
(3) In the use of the Seppro spin column and the ProteoExtract Albumin/IgG removal kit to clarify the reproducibility of abundant protein depletion, the parallel control should be designed among the same spin columns.
Respond
Thanks for this comment. The following information was inserted into the paper, line 310-314.
Four quality control samples were studied as a parallel control, one for every six sample preparations. The detailed information is supplied in the supplementary material, in which the eluted protein rate of QC1D to QC4D increased gradually from 26.32%, 27.37% and 42.53% to 56.61%, respectively, and the bound protein rate of QC1B to QC4B decreased from 88.21%, 58.55% and 57.47% to 43.39%, respectively.
(4) The figures in this article need to be more organized and accessible. For example, if the lanes in figure1A are divided clearly, it will be easier to understand.
Respond
Thanks, they were rearranged to make it clearer.
Other changes by authors:
According to the structure requirement for the publication, previous section on ‘results and discussion’ was split into two separated sections, but the contents weren’t changed.
Reviewer 2
Basic reporting
The language is clear enough. Can be improved slightly in terms of grammar and flow. Intro and Background are clearly written. There are appropriate references in the literature. Figures are good quality. We failed to detect any issues with the images.
Experimental design
The research work is original to the best of my knowledge. Research question is meaningful and well defined but is not relevant owing to omission of LC-MS. The researchers identify a knowledge gap but failed to fill it completely. The investigation is of good technical and ethical standard. The methods have been sufficiently described.
Respond,
Thanks, we quite agree; good idea. Theoretically, it should also be suitable for preparing samples directly for LC-MS. Certainly, it would be better to test this to cover all the proteomic work required. At least, this work will be able to support the 2-D gel work and build a good foundation for the LC-MS work.
Validity of the findings
All underlying data provided. No speculative statements encountered. Conclusions are well drawn and in line with experimental findings.
Comments for the author
The authors are commended for a very useful article. They have provided detailed coverage of various commercial protein separation strategies. These even include repeated use performance of the same. We think this manuscript will definitely add to the field. However, some crucial concerns remain, as listed below.
After reading the manuscript, we suggest the title be appropriately modified. This reviewer feels preparation be replaced with separation in the title.
Respond
Thanks, it has been changed.
The Abstract is quite confusing. This reviewer suggests adding 1-2 lines stating the best method/combination of methods for optimized preparation of low and high abundance plasma proteins.
Respond. The following sentences were adding to the abstract (see the end of the abstract, line 38-43) and also to the relevant part of the main manuscript (see line 365-371)
Line 38-43:
The optimized method of preparation of low/high abundance plasma proteins was when plasma was eluted through a ProteoExtract Albumin/IgG removal column, the column was further washed with elution buffer containing 10% glycerol. The first and second elution containing the low and high abundance plasma proteins, respectively, were further concentrated using Vivaspin® Turbo 4 5 kDa ultrafiltration units for 1 or 2-D gel electrophoresis.
Line 365-371
The optimized method of preparation of low/high abundant plasma proteins was: plasma was eluted through a ProteoExtract Albumin/IgG removal column, the elution contained the low abundant proteins; and the column was then further washed with elution buffer containing 10% glycerol, the elution contained the high abundant proteins. All elutions were further concentrated using Vivaspin® Turbo 4 5kDa ultrafiltration units for protein sample preparation for 1 or 2-D gel electrophoresis.
Other changes by authors:
According to the structure requirement for the publication, previous section on ‘results and discussion’ was split into two separated sections, but the contents weren’t changed.
Extra references were added into the reference list and highlighted too.
Capriotti AL, Caruso G, Cavaliere C, Piovesana S, Samperi R, Laganà A. Comparison of three different enrichment strategies for serum low molecular weight protein identification using shotgun proteomics approach. Anal Chim Acta. 2012, 31;740:58-65. doi: 10.1016/j.aca.2012.06.033. 2012
Posch A, Paulus A and Brubacher MG, Chapter 8, Tools for sample preparation and prefractionation in 2-d gel electrophoresis, in Separation Methods In Proteomics, Gary B Smejkal, Alexander Lazarev (Ed), CRC Press 2005, pp120)
Koontz L. Chapter One - TCA Precipitation, Methods in Enzymology, 2014, 541, 3-10.
" | Here is a paper. Please give your review comments after reading it. |
683 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>A precise analytical method was established for rapid screening of 49 antibiotic residues in aquatic products by ultra-high performance liquid chromatography-quadrupole time of flight mass spectrometry (UPLC-QToFMS). The quick, easy, cheap, effective, rugged and safe (QuEChERS) process was refined for effective sample preparation. The homogenized samples of aquatic products were extracted with 3% acetic acid in acetonitrile , salted out with anhydrous magnesium sulfate and sodium chloride, and cleaned up by octadecylsilane (C18) and primary-secondary amine (PSA) powder. Then, the purified samples were separated on a BEH C18 column using 0.1% formic acid and methanol as mobile phases by gradient elution, detected by MS under positive Electron Spray Ionization (ESI+) mode. The linear range of matrix-matched calibration curve was 1-100 μg/L for each compound with the correlation coefficients in the range of 0.9851-0.9999. The recoveries of target antibiotics at the different spiked levels ranged from 60. 2% to 117.9% except for lincomycin hydrochloride, whereas relative standard deviations (RSDs) were between 1.6% and 14.0% except for sulfaguanidine in grass Carp, Penaeus vannamei and Scylla serrata matrices. The limits of detection (LODs) (S/N=3) for the analytes were 0.05-2.40 μg/kg, 0.08-2.00 μg/kg and 0.10-2.27 μg/kg and the limits of quantification (LOQs) (S/N=10) were 0.16-8.00 μg/kg, 0.25-6.66 μg/kg and 0.32-7.56 μg/kg in grass Carp, Penaeus vannamei and Scylla serrata, respectively. The method was successfully applied to grass Carp, Penaeus vannamei and Scylla serrata, demonstrating its ability for the determination of multi-categories antibiotic residues in aquatic products.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1.'>Introduction</ns0:head><ns0:p>Antibiotics, as a vital medicine with bactericidal or bacteriostatic effect, are widely used in modern aquaculture to prevent infectious diseases and promote growth for the increase of aquatic production <ns0:ref type='bibr'>(LiuWuZhangLvXu & Yan 2018;</ns0:ref><ns0:ref type='bibr'>LiuSteele & Meng 2017)</ns0:ref>. However, antibiotics would be a dietary risk in cultured aquatic products with abuse of antibiotics happened. Their residues may directly enter the human body and accumulate in human organs. Therefore, they could lead to a series of adverse reactions and toxicological effects, such as allergic reactions, toxic reactions, liver damage, kidney damage, nervous system damage, and so on(MoChenLeung & Leung 2017). More seriously, the extensive usage of antibiotics ). An LC method is always equipped with fluorescence detector which has the disadvantage of lower sensitivity and poorer qualitative ability. The major shortcoming of LC-MS/MS is a limited throughput when each compound needs optimization in instrumental parameter of mass spectrometer. With the significant advances in the performance of LC-QToFMS, this platform has the outstanding merits of high resolution, high sensitivity and applicability for high throughput screening analysis in aquatic products <ns0:ref type='bibr'>(GuChengZhenChen & Zhou 2019)</ns0:ref> . Owing to its excellent characteristics, hereby an ultra performance LC-QToFMS (UPLC-QTOFMS) was applied for the system (Millipore, USA) were used for sample preparation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3.2'>LC conditions</ns0:head><ns0:p>The separation of mixed antibiotic standard solutions were achieved on a Waters Acquity UPLC BEH C18 silica column (100 mm×3.0 mm, 1.7 μm). A gradient LC elution method was employed by 0.1% formic acid aqueous solution as mobile phase A and methanol as mobile phase B. The gradient elution was as follows: 10% B at 0-3 min, 10-100% B at 3-15 min, 100% B at 15-18 min, 100-10% B at 18-18.1min and 10% B at 18.1-21min. The injection volume, flow rate, sample manager and column temperature were set at 10 μL, 0.3 mL/min, 10 ℃ and 40 ℃, respectively. All target antibiotics were eluted, and the column was cleaned and equilibrated.</ns0:p></ns0:div>
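Between the breakpoints listed above the gradient is linear in %B (methanol), so the mobile-phase composition at any time point can be read off by linear interpolation. The following small sketch only restates the program given above; the helper function itself is illustrative and not part of the instrument method:

```python
# Gradient breakpoints: (time in min, %B = % methanol), taken from the program above.
GRADIENT = [(0.0, 10), (3.0, 10), (15.0, 100), (18.0, 100), (18.1, 10), (21.0, 10)]

def percent_b(t_min):
    """Linearly interpolate %B at time t_min between adjacent breakpoints."""
    for (t0, b0), (t1, b1) in zip(GRADIENT, GRADIENT[1:]):
        if t0 <= t_min <= t1:
            return b0 + (b1 - b0) * (t_min - t0) / (t1 - t0)
    raise ValueError("time outside the 0-21 min program")

print(percent_b(9.0))   # 55.0 -> 55% B, halfway through the 3-15 min ramp
```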
<ns0:div><ns0:head n='2.3.3'>MS conditions</ns0:head><ns0:p>MS experiments were operated using electrospray ionization (ESI) in the positive mode. The optimum MS parameters were as follows: mass collection range 50-1000 Da; capillary voltage 3.0 kV; ion source temperature 120 ℃; desolvation temperature 450℃; cone gas flow 50 L/h; desolvation gas flow rate 800 L/h and core voltage 40 V. QToFMS screening for 49 antibiotic residues was performed using MS E mode. The simultaneous acquisition of accurate-mass full-spectrum at low and high collision energy are allowed in MS E mode, where the low collision energy (LE) spectrum provides useful information on the parent molecules and the main fragment ions were obtained commonly in the high collision energy (HE) function. In this study,LE was set as 6 V and HE was set from 10 eV to 40 eV. Leucine enkephalin, a commonly used peptide, was employed here as a reference material to tune MS instruments in every 10 s.</ns0:p></ns0:div>
<ns0:div><ns0:head>3. Results and discussion</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.1'>Optimization of LC conditions</ns0:head><ns0:p>Two mobile-phase combinations, 0.1% formic acid water-acetonitrile and 0.1% formic acid water-methanol, were compared for the separation. As shown in Figure <ns0:ref type='figure'>1</ns0:ref>, with 0.1% formic acid water-acetonitrile as the mobile phase it was difficult to separate sulfamonomethoxine and sulfamethoxypyridazine completely. When methanol was used instead, better resolution and a higher overall signal response were obtained. Therefore, 0.1% formic acid water-methanol was selected as the mobile</ns0:p></ns0:div>
<ns0:div><ns0:p>phase in this experiment.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Optimization of the QuEChERS process</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.2.1'>Sample extraction</ns0:head><ns0:p>To optimize extraction of the antibiotic residues from the different aquatic product matrices, including grass Carp, Penaeus vannamei and Scylla serrata, ethyl acetate and acetonitrile mixed with different amounts of acetic acid were compared. As shown in Fig. <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>, when 3% acetic acid-acetonitrile was used as the extractant, the average recoveries of the 49 antibiotics in the three matrices were 75.3%, 76.7% and 81.8%, respectively, which were higher than those obtained with 1% acetic acid-acetonitrile (v:v), 5% acetic acid-acetonitrile (v:v) or ethyl acetate. Intriguingly, the acidity of the extractant had a great effect on the quinolones: their recoveries increased in the order ethyl acetate, acetonitrile, 1% acetic acid-acetonitrile, 3% acetic acid-acetonitrile, 5% acetic acid-acetonitrile when each was used as the extractant. A possible reason is that quinolones, which are amphoteric, are readily soluble in acidic or alkaline solutions such as acetic acid. From these results, 3% acetic acid-acetonitrile was chosen as the optimum extraction solvent.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2.2'>Purification procedure</ns0:head><ns0:p>Five most commonly used sorbents were investigated in this experiment, including PSA, C18, ALU-N, PSA-C18 mixture, PSA-ALU-N mixture. The purification effects on grass Carp, Penaeus vannamei and Scylla serrata were shown in Fig. <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>. It is obvious that ALU-N gets an inferior purification effect probably because ALU-N has a certain adsorption effect on antibiotics especially quinolones. The highest average recoveries of all 49 antibiotics in three matrices were achieved using PSA-C18, overall. Afterwards, the amounts of salting-out agents (anhydrous Na 2 SO 4 and NaCl) and sorbents (PSA and C18)</ns0:p><ns0:p>were optimized using L 9 (3 4 ) orthogonal experimental design at three levels ( Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:p>aquatic products is fast, effective, economical and eco-friendly.</ns0:p></ns0:div>
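The L9(3^4) orthogonal design mentioned in section 3.2.2 arranges the four factors (amounts of anhydrous Na2SO4, NaCl, PSA and C18) over nine runs so that every level of every factor occurs equally often. The standard L9 array is sketched below with levels coded 1-3; the actual amounts corresponding to each level are not reproduced here, since the referenced table is not part of this excerpt.

```python
# Standard Taguchi L9(3^4) orthogonal array: 9 runs x 4 factors, three levels coded 1-3.
# The mapping of levels to concrete amounts of Na2SO4, NaCl, PSA and C18 is not shown
# in this excerpt, so the codes below are placeholders.
L9 = [
    (1, 1, 1, 1),
    (1, 2, 2, 2),
    (1, 3, 3, 3),
    (2, 1, 2, 3),
    (2, 2, 3, 1),
    (2, 3, 1, 2),
    (3, 1, 3, 2),
    (3, 2, 1, 3),
    (3, 3, 2, 1),
]

# Balance check: every level of every factor appears exactly three times.
for col in range(4):
    counts = {lvl: sum(1 for run in L9 if run[col] == lvl) for lvl in (1, 2, 3)}
    assert counts == {1: 3, 2: 3, 3: 3}
```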
<ns0:div><ns0:head n='3.3'>Method validation</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.3.1'>Identification</ns0:head><ns0:p>As listed in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>, each of the 49 target antibiotics was measured in MS E mode by one precursor ion and at least two product ions. Meanwhile, retention time was also required to provide vital information to identify specific antibiotics.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.2'>Linear range, regression equation, limits of detection and limits of quantitation</ns0:head><ns0:p>The series of solvent-based standard solutions were prepared according to section 2.1 and were then determined by UPLC-QToFMS. The calibration curves were obtained from the relationship between the analyte concentration (X, μg/L) and the analyte peak areas/internal standard peak area, providing the linear equation and the correlation coefficient for each analyte. The linear ranges were 1-100 μg/L for each examined analyte with correlation coefficients of greater than 0.9888. The limits of detection (LODs) were evaluated with signal-to-noise ratio (S/N) of 3 and the limits of quantification (LOQs) were evaluated with signal-tonoise ratio (S/N) of 10. LODs and LOQs of solvent-based calibration curves were in the range of 0.01-1.33 μg/L and 0.04-4.42 μg/L, respectively.</ns0:p></ns0:div>
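A common way to translate the S/N criteria above into concentration units is to assume the detector response is linear near the detection limit, so that LOD ≈ 3 · c / (S/N) and LOQ ≈ 10 · c / (S/N) for a low-level standard of known concentration c and observed signal-to-noise ratio S/N. A minimal illustrative sketch (the numbers are hypothetical, not the values reported here, and this is only one of several accepted ways to estimate LOD/LOQ):

```python
def lod_loq(conc_ug_per_l, signal_to_noise):
    """Estimate LOD (S/N = 3) and LOQ (S/N = 10) assuming a linear response."""
    lod = 3.0 * conc_ug_per_l / signal_to_noise
    loq = 10.0 * conc_ug_per_l / signal_to_noise
    return lod, loq

# e.g. a 1 ug/L standard observed at S/N = 60 (hypothetical values)
print(lod_loq(1.0, 60))   # -> (0.05, 0.166...) ug/L
```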
<ns0:div><ns0:head n='3.3.3'>Matrix effects</ns0:head><ns0:p>Aquatic products are rich in proteins and unsaturated fatty acids, and also contain a variety of vitamins, minerals, trace elements and so on. These complex components cause ubiquitous matrix effects (signal suppression and enhancement) during LC-MS/MS analysis, which may strongly affect the quantitative</ns0:p></ns0:div>
<ns0:div><ns0:p>such as lincomycin hydrochloride, clindamycin hydrochloride and tylosin. Therefore, matrix-matched standard curves were applied to mitigate matrix effects for quantification of the 49 antibiotics. The results of the regression analysis showed that the correlation coefficients (R 2 ) of the matrix-matched standard curves of the 49 antibiotics in grass Carp, Penaeus vannamei and Scylla serrata ranged from 0.9900 to 0.9999, 0.9851 to 0.9998 and 0.9908 to 0.9997, respectively, which indicated excellent linearity.</ns0:p><ns0:p>Based on the data obtained from the matrix-matched standard curves of the 49 antibiotics in grass Carp, Penaeus vannamei and Scylla serrata, the LODs were in the range of 0.05-2.40 μg/kg, 0.08-2.00 μg/kg and 0.10-2.27 μg/kg, respectively, and the LOQs were in the range of 0.16-8.00 μg/kg, 0.25-6.66 μg/kg and 0.32-7.56 μg/kg, respectively. Hereby, the LODs and LOQs shown in Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref> in this research were satisfactory as compared with the MRLs.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.4'>Recovery and precision</ns0:head><ns0:p>In order to investigate the accuracy and precision of this method, recovery experiments were conducted at different spiking levels of 10, 50, 100 μg/kg (Table <ns0:ref type='table' target='#tab_5'>4</ns0:ref>). Among the 49 antibiotics, except for lincomycin hydrochloride whose recoveries were less than 60%, the recoveries of other antibiotics in three matrices were generally greater than 70%. These results indicated that this method had a satisfactory stability and could meet the actual detecting requirements of 49 antibiotics in aquatic products.</ns0:p></ns0:div>
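Recovery and precision at each spiking level follow directly from the measured versus spiked concentrations: recovery% = measured/spiked × 100 and RSD% = SD/mean × 100 over the replicates. A brief sketch with hypothetical replicate results, not the data of Table 4:

```python
import statistics

spiked_ug_per_kg = 10.0                       # one of the spiking levels above
measured = [8.9, 9.4, 9.1, 9.6, 9.0, 9.3]     # hypothetical replicate results (ug/kg)

recoveries = [m / spiked_ug_per_kg * 100 for m in measured]
mean_recovery = statistics.mean(recoveries)
rsd_percent = statistics.stdev(recoveries) / mean_recovery * 100

print(f"recovery = {mean_recovery:.1f} %, RSD = {rsd_percent:.1f} %")
```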
<ns0:div><ns0:head n='3.4'>Application to real samples</ns0:head><ns0:p>In this study, 32 samples of aquatic products (including 12 grass Carp, 11 Penaeus vannamei, and 9 Scylla serrata) bought from supermarkets were tested to display the applicability of this method. These samples were dealt with the improved QuEChERS procedure and screened by UPLC-QTOF-MS. All antibiotic residues were quantified using the matrix-matched calibration method, increasing the data accuracy. Results showed that difluoxacin hydrochloride was detected in the samples of Penaeus vannamei whose amounts ranged from 1.5 to 7.0 μg/kg. MRLs of difluoxacin hydrochloride was 300 μg/kg according to GB 31650-2019 announced by MOA, China. Overall, all the concentrations of antibiotic residues in real samples were lower than their MRLs, while other target antibiotics were below their LOQs.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.'>Conclusions</ns0:head><ns0:p>Summing up, in this study, a fast, convenient, effective, economical and eco-friendly strategy based on</ns0:p></ns0:div>
<ns0:div><ns0:head>Chemistry Journals</ns0:head><ns0:p>Analytical, Inorganic, Organic, Physical, Materials Science</ns0:p><ns0:p>QuEChERS process was established to extract the antibiotics in aquatic products including grass Carp, Penaeus vannamei and Scylla serrata. Using UPLC-QTOFMS platform and matrix-matched calibration method to screen and quantity the 49 antibiotic residues, the study achieved satisfactory recoveries, significant linearity and decent stability. Our method also possesses great potential in the analysis of various kinds of antibiotic residues in aquatic products. Manuscript to be reviewed</ns0:p><ns0:note type='other'>Figure 1</ns0:note></ns0:div>
<ns0:div><ns0:note type='other'>Figure 2</ns0:note><ns0:note type='other'>Figure 3</ns0:note></ns0:div>
<ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>& Lautenbach 2017). Based on both major negative effects above, regulatory limits for veterinary medicine residues are issued worldwide by many countries and organizations, such as the Ministry of Agriculture (MOA) of China (No. 235) and the European Union (EU) (No. 37/2010) (DelatourRacaultBessaire & Desmarchelier 2018). To protect consumers, the overall situation of antibiotic residues in aquatic products, which serve as a main food source in coastal areas of China, has gained increasing attention from governments. At present, the analytical methods for antibiotics in animal food mainly include liquid chromatography (LC) (ZhouWangZhu & Tang 2015), liquid chromatography tandem triple quadrupole mass spectrometry (LC-MS/MS) (GuidiSantosRibeiroFernandesSilva & Gloria 2018) and liquid chromatography hybrid quadrupole time-of-flight mass spectrometry (LC-QToFMS) (KiHurKimKimMoonOh & Hong 2019</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>accuracy and reproducibility in this study (GuoWangXiaoHuaiWangPanLiao & Liu 2016). Here, the matrix effects of the three matrices were evaluated by comparing the calibration curves of the target antibiotics prepared in solvent and in the matrix (HernandoFerrerUlaszewskaGarcía-ReyesMolina-Díaz & Fernández-Alba 2007), calculated as: Matrix effect (%) = (Slope of matrix-matched standard curve / Slope of solvent-based standard curve - 1) × 100. Three sets of blank matrix samples were spiked with the mixed standard solution at different concentrations (1, 5, 10, 25, 50 and 100 μg/L). As listed in Table 3, among the three matrices of grass Carp, Penaeus vannamei and Scylla serrata, matrix effects were still encountered when determining several antibiotics</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Fig. 1a</ns0:head><ns0:label>1a</ns0:label><ns0:figDesc>Fig. 1a Chromatogram of the three isomers of sulfamonomethoxine, sulfamethoxypyridazine and sulfameter with 0.1% formic acid water-acetonitrile as the mobile phase</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Fig. 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Fig.2 Effects of different extracting solvents on the recoveries of the 49 antibiotics</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Fig. 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Fig. 3 Effects of 5 different sorbents on the average recoveries of the 49 antibiotics in grass Carp, Penaeus vannamei and Scylla serrata</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='13,42.52,70.87,321.38,672.95' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>). The results indicated that</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ An. Chem. reviewing PDF | (ACHEM-2020:09:52574:1:1:NEW 30 Nov 2020)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 (on next page)</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Table1 CAS number, molecular formula, molecular weight, RT, characteristic ions and structural formula of 49 antibiotics</ns0:figDesc><ns0:table /><ns0:note>PeerJ An. Chem. reviewing PDF | (ACHEM-2020:09:52574:1:1:NEW 30 Nov 2020) Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science 1 Table1</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>CAS number, molecular formula, molecular weight, RT, characteristic ions and structural 2 formula of 49 antibiotics Antibiotic CAS Molecular formula Molecular weight RT (min) Precursor ion (m/z) Product ions (m/z) Structural formula</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>1</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Spiramycin Nalidixic acid Sulfamethoxypy ridazine Sulfamethazine</ns0:cell><ns0:cell>8025-81-8 389-08-2 80-35-3 57-68-1</ns0:cell><ns0:cell>C 43 H 74 N 2 O 14 C 12 H 12 N 2 O 3 C 11 H 12 N 4 O 3 S C 12 H 14 N 4 O 2 S</ns0:cell><ns0:cell>843.06 232.24 280.30 278.33</ns0:cell><ns0:cell>10.46 12.02 8.54 8.30</ns0:cell><ns0:cell>843.5208 233.0928 281.0703 279.0917</ns0:cell><ns0:cell>174.1128,540.3170 187.0508,215.0816 92.0496, 126.0662, 156.0114 124.0828, 156.0119, 186.0330</ns0:cell></ns0:row><ns0:row><ns0:cell>Lincomycin hydrochloride Virginiamycin M1 Oxolinic acid Sulfamethoxazol e Sulfadiazine</ns0:cell><ns0:cell>859-18-7 21411-53-0 14698-29-4 723-46-6 68-35-9</ns0:cell><ns0:cell>C 18 H 35 ClN 2 O 6 S C 28 H 35 N 3 O 7 C 13 H 11 NO 5 C 10 H 11 N 3 O 3 S C 10 H 10 N 4 O 2 S</ns0:cell><ns0:cell>443.00 525.59 283.21 253.28 250.28</ns0:cell><ns0:cell>8.17 13.34 10.79 9.05 5.23</ns0:cell><ns0:cell>407.2213 526.2552 262.0717 254.0603 251.0596</ns0:cell><ns0:cell>126.1281,359.2176 337.1193,508.2453 244.0619 92.0497,156.0113 92.0496,156.0112</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>hydrochloride Clindamycin Enrofloxacin Flumequine Sulfadoxine Sulfaquinoxaline 59-40-5 21462-39-5 93106-60-6 42835-25-6 2447-57-6</ns0:cell><ns0:cell>C 18 H 33 ClN 2 O 5 S C 19 H 22 FN 3 O 3 C 14 H 12 FNO 3 C 12 H 14 N 4 O 4 S C 14 H 12 N 4 O 2 S</ns0:cell><ns0:cell>461.44 359.39 261.25 310.33 300.34</ns0:cell><ns0:cell>11.76 8.79 12.32 9.39 10.81</ns0:cell><ns0:cell>425.1877 360.1717 262.0882 311.0817 301.076</ns0:cell><ns0:cell>158.1179,590.3893 245.1090,316.1823 202.0298,244.0764 92.0496,156.0115 146.0713,156.0114</ns0:cell></ns0:row><ns0:row><ns0:cell>Azithromycin Norfloxacin Danofloxacin Sulfathiazole Sulfachlorpyrida zine</ns0:cell><ns0:cell>83905-01-5 70458-96-7 112398-08-0 72-14-0 80-32-0</ns0:cell><ns0:cell>C 38 H 72 N 2 O 12 C 16 H 18 FN 3 O 3 C 19 H 20 FN 3 O 3 C 9 H 9 N 3 O 2 S 2 C 10 H 9 ClN 4 O 2 S</ns0:cell><ns0:cell>748.99 319.33 357.38 255.32 284.72</ns0:cell><ns0:cell>10.86 8.54 8.82 6.48 8.87</ns0:cell><ns0:cell>749.5153 320.1406 358.1561 256.0212 285.0206</ns0:cell><ns0:cell>158.1180,591.4227 233.1084,276.1505 245.1083,340.1449 92.0495,156.0111 92.0497,156.0115</ns0:cell></ns0:row><ns0:row><ns0:cell>Leucomycin Pefloxacin Difluoxacin hydrochloride sulfamethizole Sulfameter</ns0:cell><ns0:cell>1392-21-8 70458-92-3 91296-86-5 144-82-1 651-06-9</ns0:cell><ns0:cell cols='2'>C 40 H 67 NO 14 C 17 H 20 FN 3 O 3 C 21 H 20 ClF 2 N 3 O 3 435.85 785.96 333.35 C 9 H 10 N 4 O 2 S 2 270.33 C 11 H 12 N 4 O 3 S 280.30</ns0:cell><ns0:cell>13.19 8.37 9.11 8.20 9.16</ns0:cell><ns0:cell>786.4618 334.156 400.1471 271.0321 281.0701</ns0:cell><ns0:cell>558.3282 109.0657,174.1132, 233.1091,290.1666 299.0991, 358.1569, 382.1362 92.0495,156.0113 92.0493, 126.0657, 156.0107</ns0:cell></ns0:row><ns0:row><ns0:cell>Clarithromycin Ciprofloxacin Orbifloxacin Trimethoprim Sulfisomidine</ns0:cell><ns0:cell>81103-11-9 85721-33-1 113617-63-3 738-70-5 515-64-0</ns0:cell><ns0:cell>C 38 H 69 NO 13 C 17 H 18 FN 3 O 3 C 19 H 20 F 3 N 3 O 3 C 14 H 18 N 4 O 3 C 12 H 14 N 4 O 2 
S</ns0:cell><ns0:cell>747.96 331.34 395.38 290.32 278.33</ns0:cell><ns0:cell>13.65 8.70 9.06 8.16 5.82</ns0:cell><ns0:cell>748.4853 332.1404 396.1537 291.1467 279.0917</ns0:cell><ns0:cell>158.1180,590.3899 314.1305, 231.0571, 288.1509 295.1054,352.1635 123.0655, 261.0979, 275.1135 124.0867,186.0328</ns0:cell></ns0:row><ns0:row><ns0:cell>Roxithromycin Ofloxacin Sparfloxacin Sulfisoxazole Sulfamonometh oxine</ns0:cell><ns0:cell>80214-83-1 82419-36-1 110871-86-8 127-69-5 1220-83-3</ns0:cell><ns0:cell>C 41 H 76 N 2 O 15 C 18 H 20 FN 3 O 4 C 19 H 22 F 2 N 4 O 3 C 11 H 13 N 3 O 3 S C 11 H 12 N 4 O 3 S</ns0:cell><ns0:cell>837.05 361.37 392.40 267.30 280.30</ns0:cell><ns0:cell>13.77 8.36 9.83 8.09 8.05</ns0:cell><ns0:cell>837.5327 362.1516 393.1739 268.0757 281.0706</ns0:cell><ns0:cell>158.1185,679.4380 261.1043,318.1618 292.1250,349.1827 92.0495,156.0112 126.0660,156.0111</ns0:cell></ns0:row><ns0:row><ns0:cell>Tylosin Sarafloxacin Fleroxacin Sulfamoxole Sulfadimethoxin e</ns0:cell><ns0:cell>1401-69-0 98105-99-8 79660-72-3 729-99-7 122-11-2</ns0:cell><ns0:cell>C 46 H 77 NO 17 C 20 H 17 F 2 N 3 O 3 C 17 H 18 F 3 N 3 O 3 C 11 H 13 N 3 O 3 S C 12 H 14 N 4 O 4 S</ns0:cell><ns0:cell>916.10 385.36 369.34 267.30 310.33</ns0:cell><ns0:cell>12.62 9.31 8.10 9.41 10.54</ns0:cell><ns0:cell>916.527 386.1315 370.1374 268.0756 311.0817</ns0:cell><ns0:cell>174.1131,772.4469 299.0995, 342.1414, 368.1210 269.0893,326.1469 92.0500, 113.0710, 156.0113 92.0494,156.0764</ns0:cell></ns0:row><ns0:row><ns0:cell>Enoxacin Sulfamerazine Sulfabenzamide Sulfaguanidine</ns0:cell><ns0:cell>74011-58-8 127-79-7 127-71-9 57-67-0</ns0:cell><ns0:cell>C 15 H 17 FN 4 O 3 C 11 H 12 N 4 O 2 S C 13 H 12 N 2 O 3 S C 7 H 10 N 4 O 2 S</ns0:cell><ns0:cell>320.32 264.30 276.31 214.24</ns0:cell><ns0:cell>8.39 7.30 9.80 1.89</ns0:cell><ns0:cell>321.1377 265.0754 277.0643 215.0601</ns0:cell><ns0:cell>232.0522,303.1255 92.0496,156.0111 92.0496,156.0113 92.0494,156.0112</ns0:cell></ns0:row><ns0:row><ns0:cell>Erythromycin</ns0:cell><ns0:cell>114-07-8</ns0:cell><ns0:cell>C 37 H 67 NO 13</ns0:cell><ns0:cell>733.93</ns0:cell><ns0:cell>12.83</ns0:cell><ns0:cell>734.4663</ns0:cell><ns0:cell>158.1181,576.3743</ns0:cell></ns0:row><ns0:row><ns0:cell>Tilmicosin Lomefloxacin Sulfapyridine Sulfaphenazole Sulfapyrazole</ns0:cell><ns0:cell>0 108050-54-98079-51-7 144-83-2 526-08-9 852-19-7</ns0:cell><ns0:cell>C 46 H 80 N 2 O 13 C 17 H 19 F 2 N 3 O 3 C 11 H 11 N 3 O 2 S C 15 H 14 N 4 O 2 S C 16 H 16 N 4 O 2 S</ns0:cell><ns0:cell>869.15 351.35 249.29 314.36 328.39</ns0:cell><ns0:cell>11.43 8.99 6.90 10.13 10.73</ns0:cell><ns0:cell>869.5726 352.1487 250.0652 315.0914 329.107</ns0:cell><ns0:cell>174.1134,696.4655 265.1143,308.1574 92.0495,156.0111 156.0111,158.0710 156.0121,172.0870</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ An. Chem. reviewing PDF | (ACHEM-2020:09:52574:1:1:NEW 30 Nov 2020) Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science PeerJ An. Chem. reviewing PDF | (ACHEM-2020:09:52574:1:1:NEW 30 Nov 2020) Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science PeerJ An. Chem. reviewing PDF | (ACHEM-2020:09:52574:1:1:NEW 30 Nov 2020) Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science PeerJ An. Chem. reviewing PDF | (ACHEM-2020:09:52574:1:1:NEW 30 Nov 2020)Manuscript to be reviewed Chemistry JournalsAnalytical, Inorganic, Organic, Physical, Materials Science </ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 Orthogonal design for sorbents and salting agents</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>levels</ns0:cell><ns0:cell /><ns0:cell>Factors</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>PSA(mg)</ns0:cell><ns0:cell>C18(mg)</ns0:cell><ns0:cell>Na 2 SO 4 : NaCl(g:g)</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>4:1</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>200</ns0:cell><ns0:cell>3:1</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>150</ns0:cell><ns0:cell>300</ns0:cell><ns0:cell>2:1</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 Matrix effects, LODs and LOQs for all matrices tested</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Antibiotic</ns0:cell><ns0:cell cols='2'>grass Carp</ns0:cell><ns0:cell cols='2'>Penaeus vannamei</ns0:cell><ns0:cell cols='2'>Scylla serrata</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='6'>Matrix effect LOD/LOQ Matrix effect LOD/LOQ Matrix effect LOD/LOQ</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(%)</ns0:cell><ns0:cell>(μg/kg)</ns0:cell><ns0:cell>(%)</ns0:cell><ns0:cell>(μg/kg)</ns0:cell><ns0:cell>(%)</ns0:cell><ns0:cell>(μg/kg)</ns0:cell></ns0:row><ns0:row><ns0:cell>Lincomycin hydrochloride</ns0:cell><ns0:cell>31.79</ns0:cell><ns0:cell>0.21/0.71</ns0:cell><ns0:cell>41.39</ns0:cell><ns0:cell>0.83/2.77</ns0:cell><ns0:cell>43.54</ns0:cell><ns0:cell>0.77/2.55</ns0:cell></ns0:row><ns0:row><ns0:cell>Clindamycin hydrochloride</ns0:cell><ns0:cell>-25.94</ns0:cell><ns0:cell>0.31/1.04</ns0:cell><ns0:cell>-1.62</ns0:cell><ns0:cell>0.28/0.94</ns0:cell><ns0:cell>-24.43</ns0:cell><ns0:cell>0.31/1.03</ns0:cell></ns0:row><ns0:row><ns0:cell>Azithromycin</ns0:cell><ns0:cell>-11.18</ns0:cell><ns0:cell>0.12/0.41</ns0:cell><ns0:cell>-7.69</ns0:cell><ns0:cell>0.26/0.86</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.17/0.58</ns0:cell></ns0:row><ns0:row><ns0:cell>Clarithromycin</ns0:cell><ns0:cell>17.16</ns0:cell><ns0:cell>0.05/0.16</ns0:cell><ns0:cell>32.76</ns0:cell><ns0:cell>0.08/0.25</ns0:cell><ns0:cell>29.59</ns0:cell><ns0:cell>0.17/0.56</ns0:cell></ns0:row><ns0:row><ns0:cell>Roxithromycin</ns0:cell><ns0:cell>-9.19</ns0:cell><ns0:cell>0.07/0.23</ns0:cell><ns0:cell>-18.25</ns0:cell><ns0:cell>0.09/0.30</ns0:cell><ns0:cell>-32.34</ns0:cell><ns0:cell>0.14/0.45</ns0:cell></ns0:row><ns0:row><ns0:cell>Tylosin</ns0:cell><ns0:cell>51.20</ns0:cell><ns0:cell>0.18/0.61</ns0:cell><ns0:cell>50.99</ns0:cell><ns0:cell>0.26/0.86</ns0:cell><ns0:cell>47.37</ns0:cell><ns0:cell>0.42/1.39</ns0:cell></ns0:row><ns0:row><ns0:cell>Erythromycin</ns0:cell><ns0:cell>-8.76</ns0:cell><ns0:cell>2.40/8.00</ns0:cell><ns0:cell>-1.99</ns0:cell><ns0:cell>1.12/3.73</ns0:cell><ns0:cell>7.00</ns0:cell><ns0:cell>1.78/5.93</ns0:cell></ns0:row><ns0:row><ns0:cell>Tilmicosin</ns0:cell><ns0:cell>27.82</ns0:cell><ns0:cell>0.36/1.18</ns0:cell><ns0:cell>15.82</ns0:cell><ns0:cell>0.48/1.59</ns0:cell><ns0:cell>36.00</ns0:cell><ns0:cell>0.66/2.20</ns0:cell></ns0:row><ns0:row><ns0:cell>Spiramycin</ns0:cell><ns0:cell>-0.50</ns0:cell><ns0:cell>1.32/4.40</ns0:cell><ns0:cell>-7.63</ns0:cell><ns0:cell>1.65/5.50</ns0:cell><ns0:cell>-2.03</ns0:cell><ns0:cell>1.38/4.60</ns0:cell></ns0:row><ns0:row><ns0:cell>Virginiamycin 
M1</ns0:cell><ns0:cell>28.51</ns0:cell><ns0:cell>0.48/1.60</ns0:cell><ns0:cell>31.29</ns0:cell><ns0:cell>0.39/1.29</ns0:cell><ns0:cell>42.49</ns0:cell><ns0:cell>0.24/0.81</ns0:cell></ns0:row><ns0:row><ns0:cell>Enrofloxacin</ns0:cell><ns0:cell>6.17</ns0:cell><ns0:cell>0.33/1.09</ns0:cell><ns0:cell>4.81</ns0:cell><ns0:cell>0.41/1.35</ns0:cell><ns0:cell>5.34</ns0:cell><ns0:cell>0.40/1.34</ns0:cell></ns0:row><ns0:row><ns0:cell>Norfloxacin</ns0:cell><ns0:cell>4.56</ns0:cell><ns0:cell>0.56/1.86</ns0:cell><ns0:cell>12.74</ns0:cell><ns0:cell>0.74/2.47</ns0:cell><ns0:cell>15.72</ns0:cell><ns0:cell>1.35/4.51</ns0:cell></ns0:row><ns0:row><ns0:cell>Pefloxacin</ns0:cell><ns0:cell>26.43</ns0:cell><ns0:cell>0.60/1.99</ns0:cell><ns0:cell>27.39</ns0:cell><ns0:cell>0.55/1.85</ns0:cell><ns0:cell>7.24</ns0:cell><ns0:cell>1.14/3.81</ns0:cell></ns0:row><ns0:row><ns0:cell>Ciprofloxacin</ns0:cell><ns0:cell>-14.44</ns0:cell><ns0:cell>0.20/0.65</ns0:cell><ns0:cell>-8.25</ns0:cell><ns0:cell>0.33/1.11</ns0:cell><ns0:cell>-12.74</ns0:cell><ns0:cell>0.49/1.63</ns0:cell></ns0:row><ns0:row><ns0:cell>Ofloxacin</ns0:cell><ns0:cell>-29.83</ns0:cell><ns0:cell>0.65/2.18</ns0:cell><ns0:cell>-25.24</ns0:cell><ns0:cell>0.25/0.84</ns0:cell><ns0:cell>-43.51</ns0:cell><ns0:cell>0.51/1.69</ns0:cell></ns0:row><ns0:row><ns0:cell>Sarafloxacin</ns0:cell><ns0:cell>-5.64</ns0:cell><ns0:cell>0.38/1.27</ns0:cell><ns0:cell>5.55</ns0:cell><ns0:cell>0.15/0.49</ns0:cell><ns0:cell>7.29</ns0:cell><ns0:cell>0.42/1.40</ns0:cell></ns0:row><ns0:row><ns0:cell>Enoxacin</ns0:cell><ns0:cell>12.29</ns0:cell><ns0:cell>1.44/4.80</ns0:cell><ns0:cell>5.83</ns0:cell><ns0:cell>1.54/5.15</ns0:cell><ns0:cell>12.33</ns0:cell><ns0:cell>2.09/6.98</ns0:cell></ns0:row><ns0:row><ns0:cell>Lomefloxacin</ns0:cell><ns0:cell>3.40</ns0:cell><ns0:cell>0.29/0.98</ns0:cell><ns0:cell>5.78</ns0:cell><ns0:cell>0.26/0.85</ns0:cell><ns0:cell>15.48</ns0:cell><ns0:cell>0.61/2.04</ns0:cell></ns0:row><ns0:row><ns0:cell>Nalidixic acid</ns0:cell><ns0:cell>-3.27</ns0:cell><ns0:cell>0.26/0.88</ns0:cell><ns0:cell>8.22</ns0:cell><ns0:cell>0.22/0.75</ns0:cell><ns0:cell>0.68</ns0:cell><ns0:cell>0.19/0.62</ns0:cell></ns0:row><ns0:row><ns0:cell>Oxolinic acid</ns0:cell><ns0:cell>-10.55</ns0:cell><ns0:cell>0.18/0.60</ns0:cell><ns0:cell>2.98</ns0:cell><ns0:cell>0.38/1.26</ns0:cell><ns0:cell>-4.17</ns0:cell><ns0:cell>0.56/1.88</ns0:cell></ns0:row><ns0:row><ns0:cell>Flumequine</ns0:cell><ns0:cell>-15.30</ns0:cell><ns0:cell>0.22/0.74</ns0:cell><ns0:cell>5.03</ns0:cell><ns0:cell>0.15/0.51</ns0:cell><ns0:cell>-24.12</ns0:cell><ns0:cell>0.33/1.09</ns0:cell></ns0:row><ns0:row><ns0:cell>Danofloxacin</ns0:cell><ns0:cell>-5.79</ns0:cell><ns0:cell>0.20/0.68</ns0:cell><ns0:cell>-7.85</ns0:cell><ns0:cell>0.66/2.20</ns0:cell><ns0:cell>-0.75</ns0:cell><ns0:cell>0.65/2.15</ns0:cell></ns0:row><ns0:row><ns0:cell>Difluoxacin 
hydrochloride</ns0:cell><ns0:cell>-17.38</ns0:cell><ns0:cell>0.16/0.53</ns0:cell><ns0:cell>-5.70</ns0:cell><ns0:cell>0.08/0.28</ns0:cell><ns0:cell>-3.54</ns0:cell><ns0:cell>0.13/0.45</ns0:cell></ns0:row><ns0:row><ns0:cell>Orbifloxacin</ns0:cell><ns0:cell>4.17</ns0:cell><ns0:cell>0.13/0.43</ns0:cell><ns0:cell>4.59</ns0:cell><ns0:cell>0.11/0.36</ns0:cell><ns0:cell>-0.25</ns0:cell><ns0:cell>0.16/0.53</ns0:cell></ns0:row><ns0:row><ns0:cell>Sparfloxacin</ns0:cell><ns0:cell>-5.40</ns0:cell><ns0:cell>0.23/0.77</ns0:cell><ns0:cell>-21.83</ns0:cell><ns0:cell>0.20/0.65</ns0:cell><ns0:cell>-35.49</ns0:cell><ns0:cell>0.34/1.13</ns0:cell></ns0:row><ns0:row><ns0:cell>Fleroxacin</ns0:cell><ns0:cell>3.25</ns0:cell><ns0:cell>0.31/1.03</ns0:cell><ns0:cell>-14.68</ns0:cell><ns0:cell>0.80/2.65</ns0:cell><ns0:cell>-29.59</ns0:cell><ns0:cell>0.69/2.31</ns0:cell></ns0:row><ns0:row><ns0:cell>Sulfamerazine</ns0:cell><ns0:cell>2.09</ns0:cell><ns0:cell>0.29/0.98</ns0:cell><ns0:cell>30.66</ns0:cell><ns0:cell>0.17/0.57</ns0:cell><ns0:cell>18.46</ns0:cell><ns0:cell>0.23/0.78</ns0:cell></ns0:row><ns0:row><ns0:cell>Sulfapyridine</ns0:cell><ns0:cell>1.84</ns0:cell><ns0:cell>0.23/0.77</ns0:cell><ns0:cell>12.30</ns0:cell><ns0:cell>0.30/0.99</ns0:cell><ns0:cell>-10.27</ns0:cell><ns0:cell>0.24/0.80</ns0:cell></ns0:row><ns0:row><ns0:cell>Sulfamethoxypyridazine</ns0:cell><ns0:cell>-12.69</ns0:cell><ns0:cell>0.55/1.83</ns0:cell><ns0:cell>0.76</ns0:cell><ns0:cell>0.58/1.95</ns0:cell><ns0:cell>28.26</ns0:cell><ns0:cell>0.10/0.34</ns0:cell></ns0:row><ns0:row><ns0:cell>Sulfamethoxazole</ns0:cell><ns0:cell>1.90</ns0:cell><ns0:cell>0.12/0.41</ns0:cell><ns0:cell>10.38</ns0:cell><ns0:cell>0.27/0.89</ns0:cell><ns0:cell>4.80</ns0:cell><ns0:cell>0.45/1.50</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ An. Chem. reviewing PDF | (ACHEM-2020:09:52574:1:1:NEW 30 Nov 2020)Manuscript to be reviewed Chemistry JournalsAnalytical, Inorganic, Organic, Physical, Materials Science </ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 Recoveries and repeatability (expressed as %RSD) results for all matrices tested</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell cols='3'>Chemistry Journals</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='3'>Manuscript to be reviewed</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Spiked</ns0:cell><ns0:cell cols='2'>grass Carp</ns0:cell><ns0:cell cols='2'>Penaeus vannamei</ns0:cell><ns0:cell cols='2'>Scylla serrata</ns0:cell></ns0:row><ns0:row><ns0:cell>Antibiotic</ns0:cell><ns0:cell>levels</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>(μg/kg)</ns0:cell><ns0:cell>Recovery/%</ns0:cell><ns0:cell>RSD/%</ns0:cell><ns0:cell cols='2'>Recovery /% RSD/%</ns0:cell><ns0:cell>Recovery/%</ns0:cell><ns0:cell>RSD/%</ns0:cell></ns0:row><ns0:row><ns0:cell>Lincomycin</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>54.1</ns0:cell><ns0:cell>5.4</ns0:cell><ns0:cell>37.9</ns0:cell><ns0:cell>6.7</ns0:cell><ns0:cell>44.0</ns0:cell><ns0:cell>10.4</ns0:cell></ns0:row><ns0:row><ns0:cell>hydrochloride</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>55.5</ns0:cell><ns0:cell>2.7</ns0:cell><ns0:cell>37.4</ns0:cell><ns0:cell>4.8</ns0:cell><ns0:cell>32.2</ns0:cell><ns0:cell>11.6</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>100</ns0:cell><ns0:cell>50.7</ns0:cell><ns0:cell>3.1</ns0:cell><ns0:cell>39.6</ns0:cell><ns0:cell>5.4</ns0:cell><ns0:cell>39.3</ns0:cell><ns0:cell>15.7</ns0:cell></ns0:row><ns0:row><ns0:cell>Clindamycin</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>76.4</ns0:cell><ns0:cell>10.9</ns0:cell><ns0:cell>76.6</ns0:cell><ns0:cell>5.5</ns0:cell><ns0:cell>81.4</ns0:cell><ns0:cell>4.2</ns0:cell></ns0:row><ns0:row><ns0:cell>hydrochloride</ns0:cell><ns0:cell>50</ns0:cell><ns0:cell>73.8</ns0:cell><ns0:cell>5.7</ns0:cell><ns0:cell>76.0</ns0:cell><ns0:cell>5.3</ns0:cell><ns0:cell>73.0</ns0:cell><ns0:cell>11.5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>100</ns0:cell><ns0:cell>74.9</ns0:cell><ns0:cell>4.1</ns0:cell><ns0:cell>100.5</ns0:cell><ns0:cell>3.5</ns0:cell><ns0:cell>82.8</ns0:cell><ns0:cell>6.4</ns0:cell></ns0:row><ns0:row><ns0:cell>Azithromycin</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>100.0</ns0:cell><ns0:cell>10.4</ns0:cell><ns0:cell>104.8</ns0:cell><ns0:cell>6.0</ns0:cell><ns0:cell>111.2</ns0:cell><ns0:cell>3.7</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>50</ns0:cell><ns0:cell>81.8</ns0:cell><ns0:cell>5.4</ns0:cell><ns0:cell>101.6</ns0:cell><ns0:cell>4.6</ns0:cell><ns0:cell>95.6</ns0:cell><ns0:cell>5.5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>100</ns0:cell><ns0:cell>100.2</ns0:cell><ns0:cell>7.7</ns0:cell><ns0:cell>116.0</ns0:cell><ns0:cell>2.0</ns0:cell><ns0:cell>104.3</ns0:cell><ns0:cell>3.1</ns0:cell></ns0:row><ns0:row><ns0:cell>Leucomycin</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>81.2</ns0:cell><ns0:cell>7.0</ns0:cell><ns0:cell>86.8</ns0:cell><ns0:cell>4.9</ns0:cell><ns0:cell>63.8</ns0:cell><ns0:cell>3.7</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>50</ns0:cell><ns0:cell>82.8</ns0:cell><ns0:cell>7.4</ns0:cell><ns0:cell>88.7</ns0:cell><ns0:cell>5.7</ns0:cell><ns0:cell>69.4</ns0:cell><ns0:cell>3.5</ns0:cell></ns0:row><ns0:row><ns0:cell 
/><ns0:cell>100</ns0:cell><ns0:cell>73.4</ns0:cell><ns0:cell>8.0</ns0:cell><ns0:cell>93.4</ns0:cell><ns0:cell>7.8</ns0:cell><ns0:cell>77.9</ns0:cell><ns0:cell>5.6</ns0:cell></ns0:row><ns0:row><ns0:cell>Clarithromycin</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>89.8</ns0:cell><ns0:cell>7.3</ns0:cell><ns0:cell>95.8</ns0:cell><ns0:cell>5.7</ns0:cell><ns0:cell>98.6</ns0:cell><ns0:cell>5.3</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>50</ns0:cell><ns0:cell>96.6</ns0:cell><ns0:cell>4.4</ns0:cell><ns0:cell>102.0</ns0:cell><ns0:cell>2.4</ns0:cell><ns0:cell>95.9</ns0:cell><ns0:cell>6.4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>100</ns0:cell><ns0:cell>88.7</ns0:cell><ns0:cell>3.8</ns0:cell><ns0:cell>100.4</ns0:cell><ns0:cell>4.7</ns0:cell><ns0:cell>105.1</ns0:cell><ns0:cell>2.2</ns0:cell></ns0:row><ns0:row><ns0:cell>Roxithromycin</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>91.0</ns0:cell><ns0:cell>2.2</ns0:cell><ns0:cell>94.8</ns0:cell><ns0:cell>2.2</ns0:cell><ns0:cell>89.1</ns0:cell><ns0:cell>5.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>50</ns0:cell><ns0:cell>79.8</ns0:cell><ns0:cell>3.6</ns0:cell><ns0:cell>87.4</ns0:cell><ns0:cell>5.0</ns0:cell><ns0:cell>73.5</ns0:cell><ns0:cell>6.3</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>100</ns0:cell><ns0:cell>84.6</ns0:cell><ns0:cell>3.8</ns0:cell><ns0:cell>90.3</ns0:cell><ns0:cell>2.6</ns0:cell><ns0:cell>83.1</ns0:cell><ns0:cell>6.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Tylosin</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>77.3</ns0:cell><ns0:cell>7.7</ns0:cell><ns0:cell>87.6</ns0:cell><ns0:cell>6.1</ns0:cell><ns0:cell>104.7</ns0:cell><ns0:cell>5.4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>50</ns0:cell><ns0:cell>76.8</ns0:cell><ns0:cell>4.6</ns0:cell><ns0:cell>91.4</ns0:cell><ns0:cell>5.1</ns0:cell><ns0:cell>99.2</ns0:cell><ns0:cell>3.7</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>100</ns0:cell><ns0:cell>74.1</ns0:cell><ns0:cell>3.6</ns0:cell><ns0:cell>103.3</ns0:cell><ns0:cell>3.2</ns0:cell><ns0:cell>101.3</ns0:cell><ns0:cell>6.4</ns0:cell></ns0:row><ns0:row><ns0:cell>Erythromycin</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>88.3</ns0:cell><ns0:cell>9.1</ns0:cell><ns0:cell>97.8</ns0:cell><ns0:cell>14.0</ns0:cell><ns0:cell>93.1</ns0:cell><ns0:cell>5.5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>50</ns0:cell><ns0:cell>76.1</ns0:cell><ns0:cell>4.7</ns0:cell><ns0:cell>78.0</ns0:cell><ns0:cell>8.3</ns0:cell><ns0:cell>75.7</ns0:cell><ns0:cell>5.1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>100</ns0:cell><ns0:cell>78.0</ns0:cell><ns0:cell>3.1</ns0:cell><ns0:cell>66.6</ns0:cell><ns0:cell>5.5</ns0:cell><ns0:cell>64.8</ns0:cell><ns0:cell>5.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Tilmicosin</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>93.9</ns0:cell><ns0:cell>7.3</ns0:cell><ns0:cell>97.9</ns0:cell><ns0:cell>6.9</ns0:cell><ns0:cell>89.1</ns0:cell><ns0:cell>6.5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>50</ns0:cell><ns0:cell>80.8</ns0:cell><ns0:cell>3.4</ns0:cell><ns0:cell>95.3</ns0:cell><ns0:cell>4.8</ns0:cell><ns0:cell>100.7</ns0:cell><ns0:cell>3.1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>100</ns0:cell><ns0:cell>97.2</ns0:cell><ns0:cell>7.2</ns0:cell><ns0:cell>101.4</ns0:cell><ns0:cell>3.5</ns0:cell><ns0:cell>106.4</ns0:cell><ns0:cell>2.8</ns0:cell></ns0:row><ns0:row><ns0:cell>Spiramycin</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>74.7</ns0:cell><ns0:cell>11.3</ns0:cell><ns0:cell>91.7</ns0:cell><ns0:cell>8.6</ns0:cell><ns0:cell>100.7</ns0:cell><ns0:cell>4.8</ns0:cell></ns0:row><ns0:row><ns0:cell 
/><ns0:cell>50</ns0:cell><ns0:cell>60.2</ns0:cell><ns0:cell>10.8</ns0:cell><ns0:cell>74.7</ns0:cell><ns0:cell>5.1</ns0:cell><ns0:cell>73.1</ns0:cell><ns0:cell>3.4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>100</ns0:cell><ns0:cell>64.6</ns0:cell><ns0:cell>4.5</ns0:cell><ns0:cell>85.9</ns0:cell><ns0:cell>11.1</ns0:cell><ns0:cell>71.9</ns0:cell><ns0:cell>5.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Virginiamycin M1</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>73.0</ns0:cell><ns0:cell>12.7</ns0:cell><ns0:cell>102.4</ns0:cell><ns0:cell>4.1</ns0:cell><ns0:cell>103.4</ns0:cell><ns0:cell>4.9</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>50</ns0:cell><ns0:cell>75.7</ns0:cell><ns0:cell>6.6</ns0:cell><ns0:cell>98.2</ns0:cell><ns0:cell>3.4</ns0:cell><ns0:cell>88.7</ns0:cell><ns0:cell>8.9</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>100</ns0:cell><ns0:cell>68.1</ns0:cell><ns0:cell>6.0</ns0:cell><ns0:cell>107.4</ns0:cell><ns0:cell>4.8</ns0:cell><ns0:cell>91.0</ns0:cell><ns0:cell>4.6</ns0:cell></ns0:row><ns0:row><ns0:cell>Enrofloxacin</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>99.2</ns0:cell><ns0:cell>4.9</ns0:cell><ns0:cell>109.1</ns0:cell><ns0:cell>2.6</ns0:cell><ns0:cell>101.4</ns0:cell><ns0:cell>4.4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>50</ns0:cell><ns0:cell>90.4</ns0:cell><ns0:cell>2.4</ns0:cell><ns0:cell>107.1</ns0:cell><ns0:cell>3.0</ns0:cell><ns0:cell>100.4</ns0:cell><ns0:cell>2.8</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>100</ns0:cell><ns0:cell>95.6</ns0:cell><ns0:cell>4.7</ns0:cell><ns0:cell>104.5</ns0:cell><ns0:cell>3.1</ns0:cell><ns0:cell>101.8</ns0:cell><ns0:cell>1.7</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ An. Chem. reviewing PDF | (ACHEM-2020:09:52574:1:1:NEW 30 Nov 2020) PeerJ An. Chem. reviewing PDF | (ACHEM-2020:09:52574:1:1:NEW 30 Nov 2020) Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science PeerJ An. Chem. reviewing PDF | (ACHEM-2020:09:52574:1:1:NEW 30 Nov 2020) Manuscript to be reviewed Chemistry Journals Analytical, Inorganic, Organic, Physical, Materials Science PeerJ An. Chem. reviewing PDF | (ACHEM-2020:09:52574:1:1:NEW 30 Nov 2020)</ns0:note></ns0:figure>
</ns0:body>
" | "Dear Editors,
We thank the reviewers for their generous comments and have edited the manuscript.
We hope that the manuscript will now be suitable for publication in PeerJ.
Dr. Yao Gao
Fujian Medical University, China
Responses to reviewers
Reviewer 1
Basic reporting
1. The English language should be improved, such as line 49, lines 120-121, lines 129-130;
Thanks for the reviewer’s suggestions. The English language has been improved in Lines 53-56, 132-133 and 142-144 of the revised version.
2. The format of references should meet the requirements of the Journal;
The format of references has been edited in the revised manuscript.
Experimental design
No comment
Validity of the findings
1. Line 20, RSDs were between 1.6% and 14.0% in different samples were inconsistent with the datas given in Table 4, should check it;
According to the reviewer’s advice, this part has been revised as “relative standard deviations (RSDs) were between 1.6% and 14.0% except for sulfaguanidine in grass Carp, Penaeus vannamei and Scylla serrata matrices” as shown in Line 21-23 of the revised manuscript.
2. Figure 1a-1c, the title of the Y-axis were unclear;
We thank the reviewer for the reminder; these figures have been redrawn.
3, form Figure 3, the effects of C18 and PSA-C18 nearly the same, even in grass Carp and Scylla serrata higher average recoveries were achieved using C18, why select PSA-C18 as the sorbents?
As shown in Figure 3, C18 and PSA-C18 indeed had similar effects on recoveries, and in grass Carp and Scylla serrata higher average recoveries were achieved using C18. In Penaeus vannamei, however, C18 performed noticeably worse than PSA-C18. Overall, the highest average recoveries of all 49 antibiotics across these aquatic product samples were achieved using PSA-C18.
4, In Table 1, the structure of target compounds was unclear.
We thank the reviewer’s advice, Table 1 has been reedited.
Reviewer 2
Basic reporting
The figure quality should be improved, the reference format should be consistent with the journal.
Thanks for the reviewer’s suggestions. These figures have been redrawn, and the reference format has been edited in the revised manuscript.
Experimental design
1. May be modified as “Screening of 49 antibiotic residues in aquatic products using modified QuEChERS sample preparation procedure and UPLC-QToFMS analysis
We thank the reviewer’s advice, the title of this manuscript has been revised in Line 1-2.
2. Abstract, Line 19 to 20, The recoveries of target antibiotics at the different spiked levels ranged from 60.6% to 117.9% …, 60.6% should be changed to 60.2% from table
Agreed. This error has been corrected in Line 21.
3. Introduction, a small description of regulatory limits of pesticide residues in China/EU/Codex/USA may be given to justify the LOQ and LOD requirement of the method.
The description of regulatory limits has been added in Lines 41-42 and 55-56 of the introduction in the revised version. The general maximum residue limits (MRLs) (2-200 μg/kg) were newly set by the Ministry of Agriculture (MOA) (GB 31650-2019) and are much higher than the LODs/LOQs in our experiment.
4. Materials and methods, a separate section may be given for a brief description on different treatments used with respect to each optimization parameters may be given for the clarity of readers.
“2.2 Sample extraction and clean-up optimization” has been divided into two parts, “2.2 Sample treatment” and “2.3 Antibiotic extraction and clean-up optimization”. A relevant description was also added in Lines 93-94 of the revised manuscript.
Validity of the findings
1. The spiked concentration selected is 10 μg/L, 50 μg/L, and 100 μg/L, however, as the LOQ of the most of the pesticides are 1.0 μg/L, first level recovery should be of 1.0 μg/L. Further ensure that, the highest calibration level should be 200 μg/L or more.
Thanks for the reviewer’s suggestions. But in this manuscript, LOQs of the targeted antibiotics were 0.16-8.00 μg/kg. So we set the first level for recovery test as 10 μg/kg to make sure spiked samples could be detected. If 1.0 μg/kg was set as the first level, about half of the antibiotics could be undetectable. Also, the linear ranges here were 1.0-100 μg/kg for each targeted analyte, and the antibiotic concentrations in all of the aquatic samples were much lower than 100 μg/kg. Therefore, the highest calibration level was set as 100 μg/kg.
2. In a multi-residue method, the crux of the method is the optimization of sample preparation, especially in a matrix like aquatic products. Hence, more emphasis should be given for sample preparation optimization, matrix effect minimization, clean-up optimization, sample size optimization, sample preprocessing optimization and further validation.
The sampling procedure was executed according to the Practice of sampling plans for aquatic products (GB/T 30891-2014) announced by China’s State Administration for Market Regulation, as described in Lines 86-87. Clean-up optimization has already been described in section 3.2.2 of the original manuscript, and matrix effect minimization was achieved with matrix-matched standard curves, as described in section 3.3.3.
" | Here is a paper. Please give your review comments after reading it. |
684 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. This study examined the impact of heterozygous HbE on HbA1c measurements by six commonly used commercial methods. The results were compared with those from a modified isotope-dilution mass spectrometry (IDMS) reference laboratory method on a liquid chromatograph coupled with a tandem mass spectrometer (LC-MS/MS).</ns0:p><ns0:p>Methods. Twenty-three leftover samples from patients with heterozygous HbE (HbA1c range: 5.4%-11.6%) and nineteen samples with normal hemoglobin (HbA1c range: 5.0%-13.7%) were included. The selected commercial methods included the Tina-quant HbA1c Gen. 3 (Roche Diagnostics), Cobas B 101 (Roche Diagnostics), D100 (Bio-Rad Laboratories), Variant II Turbo HbA1c 2.0 (Bio-Rad Laboratories), DCA Vantage (Siemens Healthcare) and HbA1c Advanced (Beckman Coulter Inc.).</ns0:p></ns0:div>
<ns0:div><ns0:head>Results.</ns0:head><ns0:p>With the exception of Cobas B 101 and the Variant II Turbo 2.0, the 95% confidence intervals of the Passing-Bablok regression lines between the results from the six commercial methods and the IDMS method overlapped. This overlap suggested no statistically significant difference in results and hence no impact on HbA1c results despite the presence of heterozygous HbE. The method of Cobas B 101 gave positive bias across the range of concentrations examined (5.4%-11.6%), while that of Variant II Turbo 2.0 gave positive bias at concentrations up to approximately 9.5%. The finding of significant positive bias in the methods of Cobas B 101 and Variant II Turbo 2.0 agrees with the observations of some previous studies, but is contrary to the manufacturers' claims indicating the absence of interference by heterozygous HbE. Our results also clearly showed the impact of heterozygous HbE across a fairly broad measurement range using a laboratory method (the Variant II Turbo 2.0). Laboratory practitioners and clinicians should familiarize themselves with the prevailing hemoglobin variants in the population they serve and select the appropriate methods for HbA1c measurement.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Hemoglobin E (HbE) is a variant hemoglobin caused by a single point mutation, resulting in a glutamic acid to lysine substitution at position 26 of the beta chain of the hemoglobin (β 26 Glu→Lys). The amino acid substitution makes the overall molecular charge more basic, which can be detected by separative methods, such as electrophoresis and liquid chromatography. Globally, HbE is the second most common hemoglobin variant with a prevalence of up to 40% in certain populations in South and South-East Asia <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Hemoglobin A1c (HbA1c) measurement is recommended for monitoring of long-term glycemic control and treatment titration in patients with diabetes <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>. The measurement of HbA1c can be affected by the presence of hemoglobin variants, leading to spurious measurements that can adversely affect clinical decision making. This study examined the impact of heterozygous HbE on HbA1c measurements using six commonly used commercial methods and a modified isotope-dilution mass spectrometry (IDMS) reference method. The IDMS reference method has demonstrated comparability to the IFCC network <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head><ns0:p>Study subjects. Twenty-three de-identified leftover whole blood samples from patients who had heterozygous HbE (AE) identified by capillary zone electrophoresis (Capillarys Hemoglobin, Sebia, Cedex, France) and gel electrophoresis (Hydragel Hemoglobin, Sebia, Cedex, France) were included in this study (HbA1c range: 5.4%-11.6%, average 7.5%, determined by IDMS). Another twenty de-identified whole blood samples belonging to patients with normal hemoglobin were included as controls (HbA1c range: 5.0%-13.7%, average 8.1%, determined by IDMS). The study subjects included patients with diabetes and individuals undergoing wellness screening, to allow inclusion of HbA1c samples spanning the analytical measurement range for both the HbE and normal hemoglobin groups.</ns0:p><ns0:p>Ethics declaration. This study was performed as part of the laboratory quality assurance system. The study protocol complies with local regulatory requirements and the Declaration of Helsinki. It has been approved by the institutional ethics review board (National Healthcare Group Domain Specific Review Board, Ref: 2017/00257) with an exemption from written consent for the use of de-identified leftover samples in this study.</ns0:p><ns0:p>Laboratory analysis. The blood samples were subjected to HbA1c measurement using the Tina-quant HbA1c Gen. 3 (immunoassay adapted on the Cobas 501 analyser, Roche Diagnostics, Basel, Switzerland), Cobas B 101 (point-of-care immunoassay, Roche Diagnostics), D100 (cation-exchange high performance liquid chromatography (CE-HPLC), Bio-Rad Laboratories, Hercules, CA, USA), Variant II Turbo HbA1c 2.0 (CE-HPLC, Bio-Rad Laboratories), DCA Vantage (point-of-care immunoassay, Siemens Healthcare GmbH, Erlangen, Germany) and HbA1c Advanced (immunoassay adapted on the DxC 700 AU, Beckman Coulter Inc., Miami, FL, USA). All the commercial methods had been certified by the National Glycohemoglobin Standardization Program and their details are summarised in Table <ns0:ref type='table'>1</ns0:ref>. The blood samples were also subjected to a previously described IDMS reference method <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref> using an Agilent 1290 Infinity liquid chromatograph coupled with an AB SCIEX 6500+ tandem mass spectrometer.</ns0:p><ns0:p>It has previously been suggested that the glycation rate of HbE is the same as that of normal hemoglobin (HbA0), since the modification on HbE is far from the site of glycation <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>. The IDMS HbA1c method involved measuring enzymatically digested N-terminal hexapeptides of the β-chain of the hemoglobin using LC-MS/MS and does not distinguish between HbE and HbA0. Hence, the IDMS method is unaffected by HbE, and the HbA1c results from this method were considered the reference values in our study. HbA1c was also separately measured with the six commercial methods. The blood samples were initially analysed by the Variant II Turbo 2.0 or Cobas B 101 to ensure that the samples covered a wide concentration range for this study, then kept at 4°C for less than 7 days before testing on the other commercial methods. The hemolyzed blood samples were stored at -20°C for less than two months before testing on the LC-MS/MS.</ns0:p></ns0:div>
<ns0:div><ns0:head>Statistical analysis.</ns0:head><ns0:p>The results for samples with HbE and normal hemoglobin obtained from the six commercial methods were plotted against those of the IDMS method using Passing-Bablok analysis, with 95% confidence intervals obtained by bootstrapping. HbA1c levels of 6% and 9% are clinically important decision values <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>. Hence, the relative differences in HbA1c results between samples with HbE and normal hemoglobin were compared. A relative difference of ±6% was considered significant, as recommended by the College of American Pathologists <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref>. The statistical analysis was performed using Analyze-It (Microsoft, Redmond, WA, USA).</ns0:p></ns0:div>
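For readers unfamiliar with the Passing-Bablok estimator used here, the sketch below shows a simplified version of the fit (slope from the shifted median of pairwise slopes, intercept from the median residual). It is only illustrative: it omits tie handling and the bootstrap confidence intervals used in the study, and the HbA1c values are hypothetical, not the study data.

```python
# Simplified Passing-Bablok sketch: method results (y) against IDMS reference (x).
import numpy as np

x = np.array([5.0, 5.8, 6.4, 7.2, 8.1, 9.0, 10.2, 11.5, 12.6, 13.7])  # IDMS HbA1c, % (hypothetical)
y = np.array([5.1, 5.9, 6.6, 7.3, 8.3, 9.1, 10.3, 11.7, 12.8, 13.9])  # commercial method HbA1c, % (hypothetical)

slopes = []
n = len(x)
for i in range(n):
    for j in range(i + 1, n):
        if x[i] != x[j]:
            s = (y[j] - y[i]) / (x[j] - x[i])
            if s != -1:              # slopes of exactly -1 are excluded by the estimator
                slopes.append(s)
slopes = np.sort(np.array(slopes))

k = np.sum(slopes < -1)              # offset for strongly negative slopes
m = len(slopes)
# shifted median (assumes the shifted index stays within range)
slope = slopes[int((m - 1) / 2 + k)] if m % 2 else 0.5 * (slopes[int(m / 2 - 1 + k)] + slopes[int(m / 2 + k)])
intercept = np.median(y - slope * x)
print(f"y = {slope:.3f} x + {intercept:.3f}")
```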
<ns0:div><ns0:head>Results</ns0:head><ns0:p>The Passing-Bablok regressions between the results from the six selected commercial methods and the IDMS method are shown in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. The regression equations are summarized in Table <ns0:ref type='table'>2</ns0:ref>. Except for results from the Cobas B 101 and Variant II Turbo 2.0 methods, all the 95% confidence intervals of the regression lines overlapped, suggesting no statistically significant difference in results for the other four commercial methods. In the presence of heterozygous HbE, the Cobas B 101 method had positive bias across the range of concentrations examined (5.4%-11.6%), while the Variant II Turbo 2.0 method had positive bias at concentrations up to approximately 9.5%.</ns0:p><ns0:p>The relative differences between heterozygous HbE and normal hemoglobin at HbA1c values of 6% and 9% are summarized in Table <ns0:ref type='table'>3</ns0:ref>. Results from the Cobas B 101 method (difference = +6.4% for HbA1c at 6%; difference = +7.4% for HbA1c at 9%) and the Variant II Turbo 2.0 method (difference = +9.9% for HbA1c at 6%) exceeded the a priori criterion for significance. The relative difference at 9% for the Variant II Turbo 2.0 method was +4.4%, which was within the a priori criterion for significance.</ns0:p></ns0:div>
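The percentage differences quoted above are derived from the fitted regression lines at the 6% and 9% decision values. One plausible way to reproduce this derivation is sketched below; the slope/intercept pairs are hypothetical placeholders rather than the values in Table 2, and the ±6% threshold is the College of American Pathologists criterion cited in the paper.

```python
# Minimal sketch: relative difference between values predicted for normal
# hemoglobin and heterozygous HbE samples at the clinical decision points.
def predicted(idms_value, slope, intercept):
    return slope * idms_value + intercept

normal_fit = (0.98, -0.10)   # hypothetical (slope, intercept) for normal hemoglobin
hbe_fit = (0.90, 0.05)       # hypothetical (slope, intercept) for heterozygous HbE

for decision in (6.0, 9.0):   # clinically important HbA1c decision values, %
    normal = predicted(decision, *normal_fit)
    hbe = predicted(decision, *hbe_fit)
    rel_diff = (normal - hbe) / normal * 100
    flag = "significant" if abs(rel_diff) > 6 else "not significant"   # +/-6% criterion
    print(f"HbA1c {decision}%: normal={normal:.2f}, HbE={hbe:.2f}, diff={rel_diff:+.1f}% ({flag})")
```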
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The six commercial methods examined in this study included commonly used mainframe analyzers and point-of-care systems. Together, they represented the methods used by 36% of the College of American Pathologists proficiency testing survey participants <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref>. This study examined the effect of HbE on HbA1c measurement across the clinically important measurement range by comparing against metrologically traceable reference values determined by a chemical metrology laboratory using IDMS measurements. Using this statistical approach, the effect of HbE on the methods can be examined relative to normal hemoglobin across a range of concentrations, which ameliorates the effects of inter-method calibrator bias.</ns0:p><ns0:p>The Cobas B 101 method showed significant positive bias across the measurement range investigated in the presence of heterozygous HbE. This finding extended the observation of significant positive bias in the Cobas B 101 method in a previous study that examined only a single patient sample <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref>. This finding is contrary to the manufacturer's previously published claim, which indicated no significant analytical interference by heterozygous HbE <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref>. The discrepancy may be explained by the use of an acceptability criterion of ±10% relative difference <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref>, which is significantly wider than the clinical standard of ±6% required by the laboratory profession <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref>. Laboratory practitioners should carefully communicate the significant positive interference in the Cobas B 101 method due to heterozygous HbE to the clinical user. The significant positive bias observed in the Cobas B 101 method is analytically unexpected. The Cobas B 101 method is a point-of-care assay that applies the principle of turbidimetric immunoinhibition. In general, immunoassays are thought to be more resilient against heterozygous HbE since the amino acid substitution is far from the N-terminus of the β chain where HbA1c glycation and antibody binding occur <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>. The cause of this discrepancy remains unclear at present. On the other hand, the Variant II Turbo 2.0 method showed proportional positive bias in the presence of HbE. This finding is consistent with a previous report by Sthaneshwar et al. <ns0:ref type='bibr' target='#b8'>[8]</ns0:ref>. At lower HbA1c concentrations, the positive relative bias is large (approximately 10%) and decreases at higher HbA1c concentrations. Interestingly, a previous publication reported no significant interference with HbE, despite having 1 out of 11 samples exceeding the acceptability criterion of ±10% relative difference in their internal evaluation <ns0:ref type='bibr' target='#b9'>[9]</ns0:ref>. The positive interference from heterozygous HbE on the Variant II Turbo 2.0 method may be related to incomplete cation-exchange chromatographic separation. This potentially leads to overlapping HbE and HbA peaks that are sub-optimally resolved by the integration function of the algorithm.</ns0:p><ns0:p>These findings are of clinical concern for two reasons. Firstly, the acceptability criteria used by regulatory bodies are wider than the clinical requirements of routine laboratories. A 10% acceptability criterion implies that a relative difference of such magnitude is clinically acceptable. However, a 10% relative difference in HbA1c value can significantly alter the clinical interpretation of the results of individual patients. At a population level, the mean HbA1c lies close to the clinical diagnostic threshold of 5.6% for pre-diabetes <ns0:ref type='bibr' target='#b10'>[10,</ns0:ref><ns0:ref type='bibr' target='#b1'>2]</ns0:ref>. The significant bias will lead to over-diagnosis of diabetes and the consequent unnecessary treatment and erroneous allocation of healthcare resources <ns0:ref type='bibr' target='#b10'>[10]</ns0:ref>. As such, regulatory bodies should consider aligning their acceptability criteria with those of professional bodies to maintain the quality chain from manufacturer to laboratory and clinical end-users.</ns0:p><ns0:p>Secondly, the finding of significant bias means that individual laboratories must exercise care when selecting a method. Additionally, in regions where a particular hemoglobin variant is prevalent, it may be prudent for the laboratory to verify the prior data, especially when its clinical requirements differ significantly from the manufacturer's acceptability criteria. However, such a verification exercise is often beyond the means of individual routine laboratories. This can be overcome by forming a collaborative network of laboratories to jointly evaluate the methods. When such an exercise is performed, it is desirable to employ a reference laboratory method that is not influenced by the hemoglobin variant in question.</ns0:p><ns0:p>Nevertheless, access to such reference laboratory methods is limited. Increased reporting of hemoglobin variant interference in the literature will help laboratory practitioners make more informed decisions and assist manufacturers in improving their analytical methods. It may be necessary to periodically re-evaluate hemoglobin variant interference as a manufacturer may reformulate their reagents or modify their analytical methods, particularly for CE-HPLC methods <ns0:ref type='bibr' target='#b11'>[11]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>There is currently no consensus on how laboratories should manage hemoglobin variants that are incidentally detected during HbA1c measurements <ns0:ref type='bibr' target='#b12'>[12]</ns0:ref>. Some hemoglobin variants can interfere with HbA1c measurements, leading to spurious results. Others may alter the lifespan of the red blood cells, leading to an altered glycation rate of the hemoglobin. An example of this is homozygous hemoglobin S, which causes sickle cell disease. It is associated with shortened red blood cell lifespan, which causes lower HbA1c values relative to the ambient glucose. As such, it is advantageous to communicate the presence of hemoglobin variants to the clinicians to help them optimally interpret the HbA1c values. Additionally, such information also provides an opportunity for work-up of significant hemoglobinopathy.</ns0:p><ns0:p>In general, it is desirable to avoid the use of methods that may be significantly interfered with by the hemoglobin variant that is prevalent in the population served. This is particularly true for laboratory methods such as CE-HPLC, where the presence of hemoglobin variants may be inferred only by careful manual reading of the chromatograms by skilled technologists. It is desirable to avoid reporting an HbA1c result with known hemoglobin variant interference. When detected, an alternative method not affected by the hemoglobin variant should be used to report the HbA1c instead. An alternative biomarker such as fructosamine may also be considered. However, such workflows require access to alternative instruments or esoteric biomarkers, which may not be readily available to most laboratories.</ns0:p><ns0:p>The use of non-separative laboratory methods such as immunoassays for HbA1c measurement may mask an undetected interfering hemoglobin variant. At the same time, immunoassays are also susceptible to interfering antibodies <ns0:ref type='bibr' target='#b13'>[13]</ns0:ref>. Both of these may lead to erroneous reporting that can evade laboratory detection. The use of devices with such test principles in the point-of-care setting is considered an increased risk as laboratory supervision may be limited. As such, clinicians should remain highly vigilant against clinically discordant HbA1c results.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 Passing-</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
</ns0:body>
" | "3 December 2020
Dear Reviewers and Editor,
Thank you very much for reviewing our manuscript entitled “Impact of heterozygous hemoglobin E on six commercial methods for hemoglobin A1c measurement.”
We have made revisions to the manuscript based on your valuable comments/suggestions. The explanations/descriptions in response to your comments/suggestions are given below:
Reviewers Comments:
Reviewer 1 (Randie Little):
I enjoyed reviewing the manuscript. The authors examine interference from HbAE using six commercial HbA1c methods. They compared results with an LC-MS/MS. There were no statistically difference results for HbAE for 4 of these methods. However, for two methods (VII Turbo 2.0 and Cobas b101) there were statistically significant differences despite manufacturer’s claim s to the contrary. This is an important paper that adds to the information we have on variant interference with HbA1c methods. This is especially important since there are discordant data in the literature for the VII T2.0 on this interference.
Response:
Thank you for your positive comments.
Specific comments:
1. The authors should clarify up front whether or not the LC-MS/MS method they use as a reference method is part of the IFCC network. If not, they need to show some data to indicate comparability with the IFCC network, which is monitored on a regular basis.
Response:
A statement has been included in the “Introduction” that the IDMS reference method has demonstrated comparability to the IFCC network.
The results from participation in RELA 2013 and 2014 for HbA1c were included in our previous publication “Achieving comparability with IFCC reference method for the measurement of hemoglobin A1c by use of an improved isotope-dilution mass spectrometry method. Anal Bioanal Chem. 407, 7579-7587 (2015)”. Additionally, results from participation in RELA 2018 for HbA1c can be viewed on the RELA homepage (http://dgkl-rfb.de:81/, Lab 109).
2. Line 126-127: 36% seems a bit high. I calculate ~31% of participants.
Response:
Please refer to our calculations in the tables below.
TABLE 1: 2019 GH5-C (Instrument: no. labs)
Abbott Alinity ci series: 18
Abbott Architect c System: 234
Alere Afinion 2: 22
Alere Afinion AS100: 136
ARKRAY Adams HA-8180 series: 22
Beckman AU HbA1c Advanced: 10
Beckman AU Systems - Beckman reagent: 78
Beckman UniCel DxC Synchron Systems: 81
Bio-Rad D-10: 135
Bio-Rad D-100: 119
Bio-Rad Variant II: 21
Bio-Rad Variant II Turbo: 29
Bio-Rad Variant II Turbo 2.0: 131
Roche cobas c311: 21
Roche cobas c500 series: 406
Roche cobas c513: 59
Roche COBAS Integra 400: 41
Sebia Capillarys 2 Flex Piercing: 67
Siemens DCA Vantage: 395
Siemens Dimension ExL: 197
Siemens Dimension Vista: 279
Siemens Dimension Xpand: 14
Tosoh G8 Automated HPLC: 337
Tosoh G11 Automated HPLC: 11
Trinity Biotech Premier Hb9210 HPLC: 86
Vitros 5,1 FS/4600/5600 Chemistry Systems: 194
Total labs in 2019 GH5-C: 3143
Instruments investigated (Instrument: no. labs)
Beckman UniCel DxC Synchron Systems: 81
Bio-Rad D-100: 119
Bio-Rad Variant II Turbo 2.0: 131
Roche cobas c500 series: 406
Siemens DCA Vantage: 395
Sub-total: 1132
% of total labs in 2019 GH5-C: 36.0%
3. Table3: it would be helpful to highlight those over 6%.
Response:
We have bolded the values that are clinically significant (set at 6% relative difference).
Table 3
Difference in HbA1c values. The values are derived using the regression equations between the commercial methods and the isotope dilution mass spectrometry reference method for samples with heterozygous HbE and normal haemoglobin at 6% and 9% concentrations. All HbA1c are reported in National Glycohemoglobin Standardization Program units. An a priori clinical significance was set at 6% relative difference (values in bold).
Each row lists, for HbA1c = 6% and then 9% measured by IDMS: Normal hemoglobin / Heterozygous HbE / % Difference.
Tina Quant HbA1c Gen 3.0 (Roche Diagnostics): 6%: 5.6 / 5.7 / -0.9; 9%: 8.6 / 8.5 / 1.7
Cobas B 101 (Roche Diagnostics): 6%: 6.2 / 5.8 / 6.4*; 9%: 9.2 / 8.6 / 7.4*
D100 (Bio-Rad Laboratories): 6%: 5.8 / 5.7 / 0.3; 9%: 8.4 / 8.4 / 0.4
Variant II Turbo HbA1c 2.0 (Bio-Rad Laboratories): 6%: 6.2 / 5.6 / 9.9*; 9%: 8.9 / 8.6 / 4.4
DCA Vantage (Siemens Healthcare): 6%: 5.7 / 5.7 / 0.0; 9%: 8.7 / 8.4 / 2.6
HbA1c Advance (Beckman Coulter Inc.): 6%: 5.5 / 5.6 / -1.9; 9%: 8.3 / 8.2 / 0.9
(* clinically significant value, shown in bold in the revised manuscript)
4. Figure 1: label x-axes
Response:
The x-axes have been specified in the revised figure legend.
Figure 1
Passing-Bablok regression analysis of HbA1c measurements on samples with heterozygous HbE (solid lines) and normal hemoglobin (dashed lines). The dotted lines are the 95% confidence intervals of the regression. The x-axes reflect HbA1c values obtained using the IDMS reference method, while the y-axes reflect HbA1c values obtained using the commercial methods. All HbA1c are reported in National Glycohemoglobin Standardization Program (%) units.
5. Figure 1: would it be possible to include the data points so that the distribution of points can be seen.
Response:
Thank you for the suggestion. We have refrained from including the individual data points in Figure 1 as it would obscure the regression lines along with the 95% confidence intervals, which we felt are the main features we wish to highlight in this illustrative item. We have provided supplemental figure to show the distribution of the data points.
Reviewer 2:
The authors of the manuscript report the impact of Hb E heterozygote on 6 commercial methods for Hb A1c measurement and IDMS was used as reference laboratory method. EA and A2A Hb types with different levels of Hb A1c were used to determine Hb A1c levels by 6 selected commercial methods. The manuscript is well written. However, there are some points that should be addressed before publication.
Response:
Thank you for your positive comments.
Specific comments:
1. (Basic reporting) The authors wrote a clear manuscript with acceptable English and well-organized structure. However, the title of x-axis is required in figure 1.
Response:
The x-axes have been specified in the revised figure legend.
2. (Experimental design) If possible, the average blood sugar levels from both EA and A2A groups should be provided in the manuscript. The authors should specify the type of patients, such as diabetic patients, to replace 'patients' in the study subjects.
Response:
The subjects include diabetic patients and individuals who are undergoing wellness screening. This is to allow us to obtain the samples with HbA1c values spanning the measurement range. We have included our rationale in the sub-section “Study subjects”.
" | Here is a paper. Please give your review comments after reading it. |
685 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. This study examined the impact of heterozygous HbE on HbA1c measurements by six commonly used commercial methods. The results were compared with those from a modified isotope-dilution mass spectrometry (IDMS) reference laboratory method on a liquid chromatograph coupled with a tandem mass spectrometer (LC-MS/MS).</ns0:p><ns0:p>Methods. Twenty-three leftover samples of patients with heterozygous HbE (HbA1c range: 5.4%-11.6%) and nineteen samples with normal hemoglobin (HbA1c range: 5.0%-13.7%) were included. The selected commercial methods included the Tina-quant HbA1c Gen. 3 (Roche Diagnostics), Cobas B 101 (Roche Diagnostics), D100 (Bio-Rad Laboratories), Variant II Turbo HbA1c 2.0 (Bio-Rad Laboratories), DCA Vantage (Siemens Healthcare) and HbA1c Advanced (Beckman Coulter Inc.).</ns0:p></ns0:div>
<ns0:div><ns0:head>Results.</ns0:head><ns0:p>With the exception of Cobas B 101 and the Variant II Turbo 2.0, the 95% confidence intervals of the Passing-Bablok regression lines between the results from the six commercial methods and the IDMS method overlapped. The latter suggested no statistically significant difference in results and hence no impact on HbA1c result despite the presence of heterozygous HbE. The method of Cobas B 101 gave positive bias at the range of concentrations examined (5.4%-11.6%), while that of Variant II Turbo 2.0 gave positive bias at concentrations up to approximately 9.5%. The finding of significant positive bias in the methods of Cobas B 101 and Variant II Turbo 2.0 agrees with the observations of some previous studies, but is contrary to manufacturer's claim indicating the absence of interference by heterozygous HbE. Our results also clearly showed the impact of heterozygous HbE across a fairly broad measurement range using a laboratory method (the Variant II Turbo 2.0). Laboratory practitioners and clinicians should familiarize themselves with prevailing hemoglobin variants in the population they serve and select the appropriate methods for HbA1c measurement.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Hemoglobin E (HbE) is a variant hemoglobin caused by a single point mutation, resulting in glutamic acid to lysine substitution at position 26 of the beta chain of the hemoglobin (β 26 Glu→Lys). The amino acid substitution makes the overall molecular charge more basic, a change that can be detected by separative methods such as electrophoresis and liquid chromatography. Globally, HbE is the second most common hemoglobin variant with a prevalence of up to 40% in certain populations in South and South-East Asia <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. Hemoglobin A1c (HbA1c) measurement is recommended for monitoring of long-term glycemic control and treatment titration in patients with diabetes <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>. The measurement of HbA1c can be affected by the presence of hemoglobin variants, leading to spurious measurements that can adversely affect clinical decision making. This study examined the impact of heterozygous HbE on HbA1c measurements using six commonly used commercial methods and a modified isotope-dilution mass spectrometry (IDMS) reference method. The IDMS reference method has demonstrated comparability to the IFCC network <ns0:ref type='bibr' target='#b4'>[3]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head><ns0:p>Study subjects. Twenty-three de-identified leftover whole blood samples from patients who had heterozygous HbE (AE) identified by capillary zone electrophoresis (Capillarys Hemoglobin, Sebia, Cedex, France) and gel electrophoresis (Hydragel Hemoglobin, Sebia, Cedex, France) were included in this study (HbA1c range: 5.4% -11.6%, average 7.5%, determined by IDMS). Another twenty de-identified whole blood samples belonging to patients with normal hemoglobin were included as controls (HbA1c range: 5.0% -13.7%, average 8.1%, determined by IDMS). The study subjects included patients with diabetes and individuals undergoing wellness screening to allow inclusion of HbA1c samples spanning across the analytical measurement range from both HbE and the normal hemoglobin subjects.</ns0:p><ns0:p>Ethics declaration. This study was performed as part of the laboratory quality assurance system. The study protocol complies with local regulatory requirements and the Declaration of Helsinki. It has been approved by the institutional ethics review board (National Healthcare Group Domain Specific Review Board, Ref: 2017/00257) with an exemption for written consent for the use of de-identified leftover samples for this study.</ns0:p><ns0:p>Laboratory analysis. The blood samples were subjected to HbA1c measurement using the Tinaquant HbA1c Gen. 3 (immunoassay adapted on Cobas 501 analyser, Roche Diagnostics, Basel, Switzerland), Cobas B 101 (point-of-care immunoassay, Roche Diagnostics), D100 (cation-exchange high performance liquid chromatography (CE-HPLC), Bio-Rad Laboratories, Hercules, CA, USA), Variant II Turbo HbA1c 2.0 (CE-HPLC, Bio-Rad Laboratories), DCA Vantage (point-of-care immunoassay, Siemens Healthcare GmbH, Erlangen, Germany) and HbA1c Advanced (immunoassay adapted on DxC 700 AU, Beckman Coulter Inc., Miami, FL, USA). All the commercial methods had been certified by the National Glycohemoglobin Standardization Program and their details are summarised in Table <ns0:ref type='table'>1</ns0:ref>. The blood samples were also subjected to a previously described IDMS reference method <ns0:ref type='bibr' target='#b4'>[3]</ns0:ref> using an Agilent 1290 Infinity liquid chromatograph coupled with AB SCIEX 6500+ tandem mass spectrometer.</ns0:p><ns0:p>It has previously been suggested that the glycation rate of HbE is the same as normal hemoglobin (HbA0) since the modification on HbE is far from the site of glycation <ns0:ref type='bibr' target='#b5'>[4]</ns0:ref>. The IDMS HbA1c method involved measuring enzymatically digested N-terminal hexapeptides of the β-chain of the hemoglobin using LC-MS/MS and does not distinguish between HbE and HbA0. Hence, the IDMS method is unaffected by HbE, and the HbA1c results from this method were considered the reference values in our study. The HbA1c were also separately measured on the six commercial methods. The blood samples were initially analysed by Variant II Turbo 2.0 or Cobas B 101 to ensure that the samples could cover a wide concentration range for this study, then kept at 4°C for less than 7 days before testing on the other commercial methods. The hemolyzed blood samples were stored at -20°C for less than two months before testing on the LC-MS/MS.</ns0:p></ns0:div>
<ns0:div><ns0:head>Statistical analysis.</ns0:head><ns0:p>The results for samples with HbE and normal hemoglobin obtained from the methods of the six commercial methods were plotted against those of the IDMS method using Passing-Bablok analysis with 95% confidence intervals obtained by bootstrapping. HbA1c levels of 6% and 9% are clinically important decision values <ns0:ref type='bibr' target='#b5'>[4]</ns0:ref>. Hence, the relative difference of HbA1c results between samples with HbE and normal hemoglobin were compared. A relative difference of ±6% were considered significant, as recommended by the College of American Pathologists <ns0:ref type='bibr' target='#b6'>[5]</ns0:ref>. The statistical analysis was performed using Analyze-It (Microsoft, Redmount, WA, USA).</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>The Passing-Bablok regression between the results from the routine methods of six selected commercial methods and IDMS method are shown in Figure <ns0:ref type='figure'>1</ns0:ref>. The regression equations are summarized in Table <ns0:ref type='table'>2</ns0:ref>. Except for results from Cobas B 101 and the Variant II Turbo 2.0 methods, all the 95% confidence intervals of the regression lines overlapped, suggesting no statistically significant difference in results for the methods of the other four commercial methods. In the presence of heterozygous HbE, the Cobas B 101 method had positive bias at the range of concentrations examined (5.4%-11.6%), while the Variant II Turbo 2.0 method had positive bias at concentrations up to approximately 9.5%.</ns0:p><ns0:p>The relative difference between heterozygous HbE and normal hemoglobin at HbA1c of 6% and 9% are summarized in Table <ns0:ref type='table'>3</ns0:ref>. Results from the Cobas B 101 method (difference = +6.4% for HbA1c at 6%; difference = +7.4% for HbA1c at 9%) and the Variant II Turbo 2.0 method (difference = +9.9% for HbA1c at 6%) exceeded the a priori criteria for significance. The relative difference at 9% for the Variant II Turbo 2.0 method was +4.4%, which was within the a priori criteria for significance.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The six commercial methods examined in this study included commonly used mainframe analyzer and point-of-care systems. Together, they represented the methods used by 36% of the College of American Pathologists proficiency testing survey participants <ns0:ref type='bibr' target='#b6'>[5]</ns0:ref>. This study examined the effect of HbE on HbA1c measurement across the clinically important measurement range by comparing against metrologically traceable reference values determined by a chemical metrology laboratory using IDMS measurements. Using this statistical approach, the effect of HbE on the methods can be examined relative to the normal hemoglobin across a range of concentration, which ameliorates the effects of inter-method calibrator bias.</ns0:p><ns0:p>The Cobas B 101 method showed significant positive bias across the measurement range investigated in the presence of heterozygous HbE. This finding extended the observation of significant positive bias in the Cobas B 101 method in a previous study that examined only a single patient sample <ns0:ref type='bibr' target='#b7'>[6]</ns0:ref>. This finding is contrary to prior published manufacturer's claim, which indicated no significant analytical interference by heterozygous HbE <ns0:ref type='bibr' target='#b8'>[7]</ns0:ref>. The discrepancy may be explained by the use of an acceptability criteria of ±10% relative difference <ns0:ref type='bibr' target='#b8'>[7]</ns0:ref>, which is significantly wider than the clinical standards of ± 6% required by the laboratory profession <ns0:ref type='bibr' target='#b6'>[5]</ns0:ref>. Laboratory practitioners should carefully communicate the significant positive interference in the Cobas B 101 method due to heterozygous HbE to the clinical user. The significant positive bias observed in the Cobas B101 method is analytically unexpected. The Cobas B 101 method is a point-of-care assay that applies the principle of turbidimetric immunoinhibition. In general, immunoassays are thought to be more resilient against heterozygous HbE since the amino acid substitution is far from the N-terminus of the β chain where HbA1c glycation and antibody binding occur <ns0:ref type='bibr' target='#b5'>[4]</ns0:ref>. The cause of this discrepancy remains unclear at present. On the other hand, the Variant II Turbo 2.0 method showed proportional positive bias in the presence of HbE. This finding is consistent with a previous report by Sthaneshwar et al. <ns0:ref type='bibr' target='#b9'>[8]</ns0:ref>. At lower HbA1c concentration, the positive relative bias is large (10%) and reduces at higher HbA1c concentrations. Interestingly, a previous publication reported no significant interference with HbE, despite having 1 out of 11 samples exceeding the acceptability criteria of ±10% relative difference in their internal evaluation <ns0:ref type='bibr' target='#b10'>[9]</ns0:ref>. The positive interference from heterozygous HbE on the Variant II Turbo 2.0 method may be related to the incomplete cation-exchange chromatography separation. This potentially leads to overlapping HbE and HbA peaks that are sub-optimally resolved by the integration function of the algorithm.</ns0:p><ns0:p>These findings are of clinical concern for two reasons. Firstly, the acceptability criteria used by regulatory body is wider compared to the clinical requirements of routine laboratories. A 10% acceptability criteria implies that a relative difference of such magnitude is clinically acceptable. 
However, a 10% relative difference in HbA1c value can significantly alter the clinical interpretation of the results of individual patients. At a population level, the mean of HbA1c lies close to the clinical diagnostic threshold of 5.6% for pre-diabetes <ns0:ref type='bibr' target='#b11'>[10,</ns0:ref><ns0:ref type='bibr' target='#b1'>2]</ns0:ref>. The significant bias will lead to over-diagnosis of diabetes, and the consequent unnecessary treatment and erroneous allocation of healthcare resources <ns0:ref type='bibr' target='#b11'>[10]</ns0:ref>. As such, regulatory bodies should consider aligning their acceptability criteria with professional bodies to maintain the quality chain from manufacturer to laboratory and clinical end-users.</ns0:p><ns0:p>Secondly, the finding of significant bias meant that individual laboratory must exercise care when selecting a method. Additionally, in regions where a particular hemoglobin variant is prevalent, it may be prudent for the laboratory to verify the prior data, especially when its clinical requirements differ significantly from the manufacturer's acceptability criteria. However, such verification exercise is often beyond the means of individual routine laboratories. This can be overcome by forming a collaborative network of laboratories to jointly evaluate the methods. When such exercise is performed, it is desirable to employ a reference laboratory method that is not influenced by the presence of hemoglobin variant in question.</ns0:p><ns0:p>Nevertheless, access to such reference laboratory method is limited. Increased reporting of hemoglobin variant interference in the literature will help laboratory practitioners make more informed decisions and assist manufacturers in improving their analytical methods. It may be necessary to periodically re-evaluate hemoglobin variant interference as a manufacturer may reformulate their reagents or modify their analytical methods, particularly for the CE-HPLC <ns0:ref type='bibr' target='#b12'>[11]</ns0:ref>.</ns0:p><ns0:p>There is currently no consensus on how laboratory should manage hemoglobin variants that are incidentally detected during HbA1c measurements <ns0:ref type='bibr' target='#b13'>[12]</ns0:ref>. Some hemoglobin variants can interfere with HbA1c measurements leading to spurious results. Others may alter the lifespan of the red blood cells, leading to an altered glycation rate of the hemoglobin. An example of this is homozygous hemoglobin S that causes sickle cell disease. It is associated with shortened red blood cell lifespan, which causes lower HbA1c values relative to the ambience glucose. As such, it is advantageous to communicate the presence of hemoglobin variants to the clinicians to help them optimally interpret the HbA1c values. Additionally, such information also provides an opportunity for work up of significant hemoglobinopathy.</ns0:p><ns0:p>In general, it is desirable to avoid the use of methods that may be significantly interfered by the hemoglobin variant that is prevalent in the population served. This is particularly true for laboratory methods such as CE-HPLC, where the presence of hemoglobin variants may be inferred only by careful manual reading of the chromatograms by skilled technologists. It is desirable to avoid reporting an HbA1c result with known hemoglobin variant interference. When detected, alternative method not affected by the hemoglobin variant should be used to report the HbA1c instead. Alternate biomarker such as fructosamine may also be considered. 
However, such workflows require access to alternate instruments or esoteric biomarkers, which may not be readily available to most laboratories.</ns0:p><ns0:p>The use of non-separative laboratory methods such as immunoassays for HbA1c measurement may mask an undetected interfering hemoglobin variant. At the same time, immunoassays are also liable to interfering antibodies <ns0:ref type='bibr' target='#b14'>[13]</ns0:ref>. Both of these may lead to erroneous reporting that can evade laboratory detection. The use of devices with such test principles in the point-of-care setting carries an increased risk, as laboratory supervision may be limited. As such, clinicians should remain highly vigilant against clinically discordant HbA1c results.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>Two of the six commercial HbA1c methods examined in this study (Cobas B 101 and Variant II Turbo 2.0) showed significant bias in measurement in the presence of heterozygous HbE when compared to an IDMS method. The impact of heterozygous HbE on HbA1c measurement may not be predictable from the assay principle, nor adequately disclosed by the manufacturer. It is important for clinical laboratories to understand the impact of the prevailing hemoglobin variants in their population during method selection and evaluation.</ns0:p></ns0:div><ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='16,42.52,250.12,525.00,309.00' type='bitmap' /></ns0:figure>
</ns0:body>
" | "15 Jan. 2021
Dear Reviewers and Editor,
Thank you very much for reviewing our manuscript entitled “Impact of heterozygous hemoglobin E on six commercial methods for hemoglobin A1c measurement.”
We have revised the manuscript based on your valuable comments/suggestions. The descriptions in response to your comments/suggestions are given below:
Reviewers Comments:
Reviewer 1 (Randie Little):
Comments:
The authors have addressed all of my concerns except that I still think that the X-axes in figure 1 should be labeled within the figure itself. I will leave this up to the editors.
Response:
The x-axes have been added and the legend has been revised in figure 1.
Figure 1
Figure 1. Passing-Bablok regression analysis of HbA1c measurements on samples with heterozygous HbE (solid line) and normal hemoglobin (dashed lines). The dotted lines are the 95% confidence intervals of the regression. All HbA1c values are reported in National Glycohemoglobin Standardization Program (%) units.
Reviewer 2:
Comments:
According to the objective of this study, which examined the impact of heterozygous HbE on HbA1c measurements using six commonly used commercial methods and a modified isotope dilution mass spectrometry (IDMS) reference method, the authors need to revise the conclusions to link them to the original research question. There are minor revisions of the manuscript to be considered before further publication.
Response:
We have revised the conclusion to link up the original research question.
" | Here is a paper. Please give your review comments after reading it. |
687 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Reproducibility and reusability of research results is an important concern in scientific communication and science policy. A foundational element of reproducibility and reusability is the open and persistently available presentation of research data. However, many common approaches for primary data publication in use today do not achieve sufficient long-term robustness, openness, accessibility or uniformity. Nor do they permit comprehensive exploitation by modern Web technologies. This has led to several authoritative studies recommending uniform direct citation of data archived in persistent repositories. Data are to be considered as first-class scholarly objects, and treated similarly in many ways to cited and archived scientific and scholarly literature. Here we briefly review the most current and widely agreed set of principle-based recommendations for scholarly data citation, the Joint Declaration of Data Citation Principles (JDDCP). We then present a framework for operationalizing the JDDCP; and a set of initial recommendations on identifier schemes, identifier resolution behavior, required metadata elements, and best practices for realizing programmatic machine actionability of cited data. The main target audience for the common implementation guidelines in this article consists of publishers, scholarly organizations, and persistent data repositories, including technical staff members in these organizations. But ordinary researchers can also benefit from these recommendations. The guidance provided here is intended to help achieve widespread, uniform human and machine accessibility of deposited data, in support of significantly improved verification, validation, reproducibility and re-use of scholarly/scientific data.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION Background</ns0:head><ns0:p>An underlying requirement for verification, reproducibility, and reusability of scholarship is the accurate, open, robust, and uniform presentation of research data. This should be an integral part of the scholarly publication process 1 . However, Alsheikh-Ali et al. found that a large proportion of research articles in high-impact journals either weren't subject to or didn't adhere to any data availability policies at all <ns0:ref type='bibr' target='#b1'>(Alsheikh-Ali et al. (2011)</ns0:ref>). We note as well that such policies are not currently standardized across journals, nor are they typically optimized for data reuse. This finding reinforces significant concerns recently expressed in the scientific literature about reproducibility and whether many false positives are being reported as fact <ns0:ref type='bibr' target='#b12'>(Colquhoun (2014)</ns0:ref>; <ns0:ref type='bibr' target='#b55'>Rekdal (2014)</ns0:ref>; Begley and Ellis (2012); <ns0:ref type='bibr' target='#b53'>Prinz et al. (2011)</ns0:ref>; <ns0:ref type='bibr' target='#b27'>Greenberg (2009)</ns0:ref>; <ns0:ref type='bibr' target='#b34'>Ioannidis (2005)</ns0:ref>).</ns0:p><ns0:p>Data transparency and open presentation, while central notions of the scientific method along with their complement, reproducibility, have met increasing challenges as dataset sizes grow far beyond the capacity of printed tables in articles. An extreme example is the case of DNA sequencing data. This was one of the first classes of data, along with crystallographic data, for which academic publishers began to require database accession numbers as a condition of publishing, as early as the 1990's. At that time sequence data could actually still be published as text in journal articles. The Atlas of Protein Sequence and Structure, published from 1965-78, was the original form in which protein sequence data was compiled: a book, which could be cited <ns0:ref type='bibr' target='#b61'>(Strasser (2010)</ns0:ref>). Today the data volumes involved are absurdly large <ns0:ref type='bibr' target='#b57'>(Salzberg and Pop (2008)</ns0:ref>; <ns0:ref type='bibr' target='#b58'>Shendure and Ji (2008)</ns0:ref>; <ns0:ref type='bibr' target='#b60'>Stein (2010)</ns0:ref>). Similar transitions from printed tabular data to digitized data on the web have taken place across disciplines.</ns0:p><ns0:p>Reports from leading scholarly organizations have now recommended a uniform approach to treating research data as first-class research objects, similarly to the way textual publications are archived, indexed, and cited <ns0:ref type='bibr'>(Altman et al. (2013)</ns0:ref>; <ns0:ref type='bibr' target='#b4'>Altman and King (2006)</ns0:ref>; <ns0:ref type='bibr' target='#b62'>Uhlir (2012)</ns0:ref>; <ns0:ref type='bibr' target='#b5'>Ball and Duke (2012)</ns0:ref>). Uniform citation of robustly archived, described, and identified data in persistent digital repositories is proposed as an important step towards significantly improving the discoverability, documentation, validation, reproducibility, and reuse of scholarly data <ns0:ref type='bibr'>(Altman et al. (2013)</ns0:ref>; <ns0:ref type='bibr' target='#b4'>Altman and King (2006)</ns0:ref>; <ns0:ref type='bibr' target='#b62'>Uhlir (2012)</ns0:ref>; <ns0:ref type='bibr' target='#b5'>Ball and Duke (2012)</ns0:ref>; <ns0:ref type='bibr' target='#b25'>Goodman et al. 
(2014)</ns0:ref>; <ns0:ref type='bibr' target='#b8'>Borgman (2012)</ns0:ref>; <ns0:ref type='bibr' target='#b50'>Parsons et al. (2010)</ns0:ref>).</ns0:p><ns0:p>The Joint Declaration of Data Citation Principles (JDDCP) (Data Citation Synthesis Group (2014)) is a set of top-level guidelines developed by several stakeholder organizations as a formal synthesis of current best-practice recommendations for common approaches to data citation. It is based on significant study by participating groups and independent scholars 2 . The work of this group was hosted by the FORCE11 (http://force11.org) community, an open forum for discussion and action on important issues related to the future of research communication and e-Scholarship.</ns0:p><ns0:p>The JDDCP is the latest development in a collective process, reaching back to at least 1977, to raise the importance of data as an independent scholarly product and to make data transparently available for verification and reproducibility <ns0:ref type='bibr' target='#b3'>(Altman and Crosas (2013)</ns0:ref>).</ns0:p><ns0:p>The purpose of this document is to outline a set of common guidelines to operationalize JDDCPcompliant data citation, archiving, and programmatic machine accessibility in a way that is as uniform as possible across conforming repositories and associated data citations. The recommendations out-</ns0:p><ns0:p>• Principle 4 -Unique Identification: 'A data citation should include a persistent method for identification that is machine actionable, globally unique, and widely used by a community.'</ns0:p><ns0:p>• Principle 5 -Access: 'Data citations should facilitate access to the data themselves and to such associated metadata, documentation, code, and other materials, as are necessary for both humans and machines to make informed use of the referenced data.'</ns0:p><ns0:p>• Principle 6 -Persistence: 'Unique identifiers, and metadata describing the data, and its disposition, should persist -even beyond the lifespan of the data they describe.'</ns0:p><ns0:p>• Principle 7 -Specificity and Verifiability: 'Data citations should facilitate identification of, access to, and verification of the specific data that support a claim. Citations or citation metadata should include information about provenance and fixity sufficient to facilitate verifying that the specific time slice, version and/or granular portion of data retrieved subsequently is the same as was originally cited.'</ns0:p><ns0:p>• Principle 8 -Interoperability and Flexibility: 'Citation methods should be sufficiently flexible to accommodate the variant practices among communities, but should not differ so much that they compromise interoperability of data citation practices across communities.'</ns0:p><ns0:p>These Principles are meant to be adopted at an institutional or discipline-wide scale. The main target audience for the common implementation guidelines in this article consists of publishers, scholarly organizations, and persistent data repositories. Individual researchers are not meant to set up their own data archives. In fact this is contrary to one goal of data citation as we see it -which is to get away from inherently unstable citations via researcher footnotes indicating data availability at some intermittently supported laboratory website. However individual researchers can contribute to and benefit from adoption of these Principles by ensuring that primary research data is prepared for archival deposition at or before publication. 
We also note that often a researcher will want to go back to earlier primary data from their own lab; robust archival ensures it will remain available for their own use in the future, whatever the vicissitudes of local storage and lab personnel turnover.</ns0:p></ns0:div>
<ns0:div><ns0:head>Implementation questions arising from the JDDCP</ns0:head><ns0:p>The JDDCP were presented by their authors as Principles. Implementation questions were left unaddressed. This was meant to keep the focus on harmonizing top-level and basically goal-oriented recommendations without incurring implementation-level distractions. Therefore we organized a follow-on activity to produce a set of implementation guidelines intended to promote rapid, successful, and uniform JDDCP adoption. We began by seeking to understand just what questions would arise naturally to an organization that wished to implement the JDDCP. We then grouped the questions into four topic areas, to be addressed by individuals with special expertise in each area. increasingly used by publishers, and is the archival form for biomedical publications in PubMed Central 4 . This group therefore developed a proposal for revision of the NISO Journal Article Tag Suite to support direct data citation. NISO-JATS version 1.1d2 (National Center for Biotechnology Information (2014)), a revision based on this proposal, was released on December 29, 2014, by the JATS Standing Committee, and is considered a stable release, although it is not yet an official revision of the NISO Z39.96-2012 standard.</ns0:p><ns0:p>The Publishing Workflows group met jointly with the Research Data Alliance's Publishing Data Workflows Working Group to collect and document exemplar publishing workflows. An article on this topic is in preparation, reviewing basic requirements and exemplar workflows from Nature Scientific Data, GigaScience (Biomed Central), F1000Research, and Geoscience Data Journal (Wiley).</ns0:p><ns0:p>The Common Repository APIs group is currently planning a pilot activity for a common API model for data repositories. Recommendations will be published at the conclusion of the pilot. This work is being undertaken jointly with the ELIXIR (http://www.elixir-europe.org/) Fairport working group.</ns0:p><ns0:p>The Identifiers, Metadata, and Machine Accessibility group's recommendations are presented in the remainder of this article. These recommendations cover:</ns0:p><ns0:p>• definition of machine accessibility;</ns0:p><ns0:p>• identifiers and identifier schemes;</ns0:p><ns0:p>• landing pages;</ns0:p><ns0:p>• minimum acceptable information on landing pages;</ns0:p><ns0:p>• best practices for dataset description; and</ns0:p><ns0:p>• recommended data access methods.</ns0:p></ns0:div>
<ns0:div><ns0:head>RECOMMENDATIONS FOR ACHIEVING MACHINE ACCESSIBILITY</ns0:head><ns0:p>What is machine accessibility? Machine accessibility of cited data, in the context of this document and the JDDCP, means access by well-documented Web services <ns0:ref type='bibr' target='#b7'>(Booth et al. (2004)</ns0:ref>) -preferably RESTful Web services <ns0:ref type='bibr' target='#b21'>(Fielding (2000)</ns0:ref>; <ns0:ref type='bibr' target='#b22'>Fielding and Taylor (2002)</ns0:ref>; <ns0:ref type='bibr' target='#b56'>Richardson and Ruby (2011)</ns0:ref>) to data and metadata stored in a robust repository, independently of integrated browser access by humans.</ns0:p><ns0:p>Web services are methods of program-to-program communication using Web protocols. The World Wide Web Consortium (W3C, http://www.w3.org) defines them as 'software system[s] designed to support interoperable machine-to-machine interaction over a network' <ns0:ref type='bibr' target='#b29'>(Haas and Brown (2004)</ns0:ref>).</ns0:p><ns0:p>Web services are always 'on' and function essentially as utilities, providing services such as computation and data lookup, at web service endpoints. These are well-known Web addresses, or Uniform Resource Identifiers (URIs) (Berners-Lee et al. (1998); Jacobs and Walsh ( <ns0:ref type='formula'>2004</ns0:ref>)) 5 .</ns0:p><ns0:p>RESTful Web services follow the REST (Representational State Transfer) architecture developed by Fielding and others <ns0:ref type='bibr' target='#b21'>(Fielding (2000)</ns0:ref>). They support a standard set of operations such as 'get' (retrieve), 'post' (create), and 'put' (create or update) and are highly useful in building hypermedia applications by combining services from many programs distributed on various Web servers.</ns0:p><ns0:p>Machine accessibility and particularly RESTful Web service accessibility is highly desirable because it enables construction of 'Lego block' style programs built up from various service calls distributed across the Web, which need not be replicated locally. RESTful Web services are recommended over the other major Web service approach, SOAP interfaces <ns0:ref type='bibr' target='#b28'>(Gudgin et al. (2007)</ns0:ref>), due to our focus on the documents being served and their content. REST also allows multiple data formats such as JSON (JavaScript Object Notation) (ECMA (2013)), and provides better support for mobile applications (e.g., caching, reduced bandwidth, etc.).</ns0:p><ns0:p>4 NISO Z39.96-2012 is derived from the former 'NLM-DTD' model originally developed by the U.S. National Library of Medicine.</ns0:p><ns0:p>5 URIs are very similar in concept to the more widely understood Uniform Resource Locators (URL, or 'Web address'), but URIs do not specify the location of an object or service -they only identify it. URIs specify abstract resources on the Web. The associated server is responsible for resolving a URI to a specific physical resource -if the resource is resolvable. (URIs may also be used to identify physical things such as books in a library, which are not directly resolvable resources on the Web.)</ns0:p></ns0:div>
<ns0:div><ns0:p>Clearly, 'machine accessibility' is also an underlying prerequisite to human accessibility, as browser (client) access to remote data is always mediated by machine-to-machine communication. But for flexibility in construction of new programs and services, it needs to be independently available apart from access to data generated from the direct browser calls.</ns0:p></ns0:div>
<ns0:div><ns0:head>Unique Identification</ns0:head><ns0:p>Unique identification in a manner that is machine-resolvable on the Web and demonstrates a long-term commitment to persistence is fundamental to providing access to cited data and its associated metadata. There are several identifier schemes on the Web that meet these two criteria. The best identifiers for data citation in a particular community of practice will be those that meet these criteria and are widely used in that community.</ns0:p><ns0:p>Our general recommendation, based on the JDDCP, is to use any currently available identifier scheme that is machine actionable, globally unique, and widely (and currently) used by a community, and that has demonstrated a long-term commitment to persistence. Best practice, given the preceding, is to choose a scheme that is also cross-discipline. Machine actionable in this context means resolvable on the Web by Web services.</ns0:p><ns0:p>There are basically two kinds of identifier schemes available: (a) the native HTTP and HTTP(s) schemes where URIs are the identifiers and address resolution occurs natively; and (b) schemes requiring a resolving authority, like Digital Object Identifiers (DOIs).</ns0:p><ns0:p>Resolving authorities reside at well-known web addresses. They issue and keep track of identifiers in their scheme and resolve them by translating them to URIs which are then natively resolved by the Web. For example, the DOI resolver at http://doi.org resolves the DOI 10.1098/rsos.140216 to the URI http://rsos.royalsocietypublishing.org/content/1/3/140216. And the identifiers.org resolution service, at http://identifiers.org, resolves the PubMed identifier 16333295 to http://www.ncbi.nlm.nih.gov/pubmed/16333295. However resolved, a cited identifier should continue to resolve to an intermediary landing page (see below) even if the underlying data has been de-accessioned or is otherwise unavailable.</ns0:p><ns0:p>By a commitment to persistence, we mean that (a) if a resolving authority is required that authority has demonstrated a reasonable chance to be present and functional in the future; (b) the owner of the domain or the resolving authority has made a credible commitment to ensure that its identifiers will always resolve. A useful survey of persistent identifier schemes appears in <ns0:ref type='bibr' target='#b31'>(Hilse and Kothe (2006)</ns0:ref>).</ns0:p><ns0:p>Examples of identifier schemes meeting JDDCP criteria for robustly accessible data citation are shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> and described below. This is not a comprehensive list and the criteria above should govern. Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref> summarizes the approaches to achieving and enforcing persistence, and actions on object (data) removal from the archive, of each of the schemes.</ns0:p><ns0:p>The subsections below briefly describe the exemplar identifier schemes shown in Tables <ns0:ref type='table' target='#tab_1'>1 and 2</ns0:ref>.</ns0:p></ns0:div>
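As a concrete illustration of machine-actionable resolution, the sketch below retrieves both the landing URL and machine-readable metadata for the example DOI cited above. It assumes the public doi.org resolver, its HTTP content-negotiation support, and the CSL JSON media type, and it requires the third-party requests package; it is illustrative rather than a prescribed interface.

```python
import requests

doi = "10.1098/rsos.140216"   # example DOI mentioned in the text above

# Plain resolution: the doi.org resolver redirects to the current landing page.
landing = requests.get(f"https://doi.org/{doi}", allow_redirects=True, timeout=30)
print("Resolves to:", landing.url)

# Machine-readable access: the same identifier, requested with an Accept header
# naming a structured media type (assumed to be supported by the resolver).
response = requests.get(
    f"https://doi.org/{doi}",
    headers={"Accept": "application/vnd.citationstyles.csl+json"},
    timeout=30,
)
response.raise_for_status()
metadata = response.json()
print(metadata.get("title"), "/", metadata.get("publisher"))
```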
<ns0:div><ns0:head>Digital Object Identifiers (DOIs)</ns0:head><ns0:p>Digital Object Identifiers (DOIs) are an identification system originally developed by trade associations in the publishing industry for digital content over the Internet. They were developed in partnership with the Corporation for National Research Initiatives (CNRI), and built upon CNRI's Handle System as an underlying network component. However, DOIs may identify digital objects of any type -certainly including data (International DOI Foundation ( <ns0:ref type='formula'>2014</ns0:ref>)). DOI syntax is defined as a U.S. National Information Standards Organization standard, ANSI/NISO Z39.84-2010. DOIs may be expressed as URIs by prefixing the DOI with a resolution address: http://dx.doi.org/<doi>. DOI Registration Agencies provide services for registering DOIs along with descriptive metadata on the object being identified. The DOI system Proxy Server allows programmatic access to DOI name resolution using HTTP (International DOI Foundation (2014)).</ns0:p><ns0:p>DataCite and CrossRef are the two DOI Registration Agencies of special relevance to data citation. They provide services for registering and resolving identifiers for cited data. Both require persistence commitments of their registrants and take active steps to monitor compliance. DataCite is specifically designed -as its name would indicate -to support data citation.</ns0:p><ns0:p>A recent collaboration between the software archive GitHub, the Zenodo repository system at CERN, FigShare, and Mozilla Science Lab, now makes it possible to cite software, giving DOIs to GitHubcommitted code (GitHub Guides (2014)).</ns0:p></ns0:div>
<ns0:div><ns0:head>Handle System (HDLs)</ns0:head><ns0:p>Handles (HDLs) are identifiers in a general-purpose global name service designed for securely resolving names over the Internet, compatible with but not requiring the Domain Name Service. Handles are location independent and persistent. The system was developed by Bob Kahn at the Corporation for National Research Initiatives, and currently supports, on average, 68 million resolution requests per month, the largest single user being the Digital Object Identifier (DOI) system. Handles can be expressed as URIs (CNRI (2014); <ns0:ref type='bibr' target='#b19'>Dyson (2003)</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Identifiers.org Uniform Resource Identifiers (URIs)</ns0:head><ns0:p>Many common identifiers used in the life sciences, such as PubMed or Protein Data Bank IDs, are not natively Web-resolvable. Identifiers.org associates such database-dependent identifiers with persistent URIs and resolvable physical URLs. Identifiers.org was developed and is maintained at the European Bioinformatics Institute, and was built on top of the MIRIAM registry <ns0:ref type='bibr' target='#b39'>(Juty et al. (2012)</ns0:ref>).</ns0:p><ns0:p>Identifiers.org URIs are constructed using the syntax http://identifiers.org/<data resource name>/<native identifier>, where <data resource name> designates a particular database, and <native identifier> is the ID used within that database to retrieve the record. The Identifiers.org resolver supports multiple alternative locations (which may or may not be mirrors) for data it identifies. It supports programmatic access to data.</ns0:p></ns0:div>
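A minimal sketch of the URI construction syntax just described; the 'pubmed' collection prefix and the example identifier come from the text above, while prefixes for other databases are assumptions that should be checked against the Identifiers.org registry.

```python
def identifiers_org_uri(data_resource: str, native_id: str) -> str:
    """Build an Identifiers.org URI from a database name and its native identifier."""
    return f"http://identifiers.org/{data_resource}/{native_id}"

# The Identifiers.org resolver forwards this URI to the corresponding PubMed record.
print(identifiers_org_uri("pubmed", "16333295"))
```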
<ns0:div><ns0:head>PURLs</ns0:head><ns0:p>PURLs are 'Persistent Uniform Resource Locators', a system originally developed by the Online Computer Library Center (OCLC). They act as intermediaries between potentially changing locations of digital resources, to which the PURL name resolves. PURLs are registered and resolved at http://purl.org, http://purl.access.gpo.gov, purl.bioontology.org and various other resolvers. PURLs are implemented as an HTTP redirection service and depend on the survival of their host domain name (OCLC (2015); Library of Congress (1997)). PURLs fail to resolve upon object removal. Handling this behavior through a metadata landing page (see below) is the responsibility of the owner of the cited object.</ns0:p></ns0:div>
<ns0:div><ns0:head>HTTP URIs</ns0:head><ns0:p>URIs (Uniform Resource Identifiers) are strings of characters used to identify resources. They are the identifier system for the Web. URIs begin with a scheme name, such as http or ftp or mailto, followed by a colon, and then a scheme-specific part. HTTP URIs will be quite familiar as they are typed every day into browser address bars, and begin with http:. Their scheme-specific part is next, beginning with '//', followed by an identifier, which often but not always is resolvable to a specific resource on the Web. URIs by themselves have no mechanism for storing metadata about any objects to which they are supposed to resolve, nor do they have any particular associated persistence policy. However, other identifier schemes with such properties, such as DOIs, are often represented as URIs for convenience (Berners-Lee et al. (1998); <ns0:ref type='bibr' target='#b37'>Jacobs and Walsh (2004)</ns0:ref>).</ns0:p><ns0:p>Like PURLs, native HTTP URIs fail to resolve upon object removal. Handling this behavior through a metadata landing page (see below) is the responsibility of the owner of the cited object.</ns0:p></ns0:div>
<ns0:div><ns0:head>Archival Resource Key (ARKs)</ns0:head><ns0:p>Archival Resource Keys (ARKs) are unique identifiers designed to support long-term persistence of information objects. An ARK is essentially a URL (Uniform Resource Locator) with some additional rules. For example, hostnames are excluded when comparing ARKs in order to prevent current hosting arrangements from affecting identity. The maintenance agency is the California Digital Library, which offers a hosted service for ARKs and DOIs <ns0:ref type='bibr' target='#b44'>(Kunze and Starr (2006)</ns0:ref>; <ns0:ref type='bibr' target='#b41'>Kunze (2003)</ns0:ref>; <ns0:ref type='bibr' target='#b43'>Kunze and Rodgers (2001)</ns0:ref>; <ns0:ref type='bibr' target='#b38'>Janée et al. (2009)</ns0:ref>).</ns0:p><ns0:p>ARKs provide access to three things -an information object; related metadata; and the provider's persistence commitment. ARKs propose inflections (changing the end of an identifier) as a way to retrieve machine-readable metadata without requiring (or prohibiting) content negotiation for linked data applications. Unlike, for example, DOIs, there are no fees to assign ARKs, which can be hosted on an organization's own web server if desired. They are globally resolvable via the identifier-scheme-agnostic N2T (Name-To-Thing, http://n2t.net) resolver. The ARK registry is replicated at the California Digital Library, the Bibliothèque Nationale de France, and the U.S. National Library of Medicine <ns0:ref type='bibr' target='#b44'>(Kunze and Starr (2006)</ns0:ref>; <ns0:ref type='bibr' target='#b52'>Peyrard et al. (2014)</ns0:ref>; <ns0:ref type='bibr' target='#b42'>Kunze (2012)</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:p>a Registries maintained at California Digital Library, Bibliothèque Nationale de France and National Library of Medicine</ns0:p></ns0:div>
<ns0:div><ns0:head>National Bibliography Number (NBNs)</ns0:head><ns0:p>National Bibliography Numbers (NBNs) are a set of related publication identifier systems with countryspecific formats and resolvers, utilized by national library systems in some countries. They are used by, for example, Germany, Sweden, Finland and Italy, for publications in national archives without publisher-assigned identifiers such as ISBNs. There is a URN namespace for NBNs that includes the country code; expressed as a URN, NBNs become globally unique <ns0:ref type='bibr' target='#b30'>(Hakala (2001)</ns0:ref>; Moats (1997)).</ns0:p></ns0:div>
<ns0:div><ns0:head>Landing pages</ns0:head><ns0:p>The identifier included in a citation should point to a landing page or set of pages rather than to the data itself <ns0:ref type='bibr' target='#b33'>(Hourclé et al. (2012)</ns0:ref>; <ns0:ref type='bibr' target='#b54'>Rans et al. (2013)</ns0:ref>; <ns0:ref type='bibr' target='#b10'>Clark et al. (2014)</ns0:ref>). And the landing page should persist even if the data is no longer accessible. By 'landing page(s)' we mean a set of information about the data via both structured metadata and unstructured text and other information. Landing pages should combine human-readable and machine-readable information on a selection of the following items.</ns0:p><ns0:p>There are three main reasons to resolve identifiers to landing pages rather than directly to data. First, as proposed in the JDDCP, the metadata and the data may have different lifespans, the metadata potentially surviving the data. This is true because data storage imposes costs on the hosting organization. Just as printed volumes in a library may be de-accessioned from time to time, based on considerations of their value and timeliness, so will datasets. The JDDCP proposes that metadata, essentially cataloging information on the data, should still remain a citable part of the scholarly record even when the dataset may no longer be available.</ns0:p><ns0:p>Second, the cited data may not be legally available to all, even when initially accessioned, for reasons of licensing or confidentiality (e.g. Protected Health Information). The landing page provides a method to host metadata even if the data is no longer present. And it also provides a convenient place where access credentials can be validated.</ns0:p><ns0:p>Third, resolution to a landing page allows for an access point that is independent from any multiple encodings of the data that may be available.</ns0:p><ns0:p>Landing pages should contain the following information. Items marked 'conditional' are recommended if the conditions described are present, e.g., access controls are required to be implemented if required by licensing or PHI considerations; multiple versions are required to be described if they are available; etc.</ns0:p></ns0:div>
<ns0:div><ns0:head>8/17</ns0:head><ns0:p>PeerJ reviewing PDF | (v2014:12:3509:2:0:NEW 4 Feb 2015)</ns0:p><ns0:p>Reviewing Manuscript The DataCite persistence contract language reads: 'Objects assigned DOIs are stored and managed such that persistent access to them can be provided as appropriate and maintain all URLs associated with the DOI.'</ns0:p><ns0:p>b The CrossRef persistence contract language reads in part: 'Member must maintain each Digital Identifier assigned to it or for which it is otherwise responsible such that said Digital Identifier continuously resolves to a response page. . . containing no less than complete bibliographic information about the corresponding Original Work (including without limitation the Digital Identifier), visible on the initial page, with reasonably sufficient information detailing how the Original Work can be acquired and/or a hyperlink leading to the Original Works itself . . . '</ns0:p><ns0:p>c CrossRef identifier policy reads: 'The ... Member shall use the Digital Identifier as the permanent URL link to the Response Page. The... Member shall register the URL for the Response Page with CrossRef, shall keep it up-to-date and active, and shall promptly correct any errors or variances noted by CrossRef.'</ns0:p><ns0:p>d For example, the French National Library has rigorous internal checks for the 20 million ARKs that it manages via its own resolver.</ns0:p></ns0:div>
<ns0:div><ns0:head>9/17</ns0:head><ns0:p>PeerJ reviewing PDF | (v2014:12:3509:2:0:NEW 4 Feb 2015)</ns0:p><ns0:p>Reviewing Manuscript</ns0:p><ns0:p>• (recommended) Dataset descriptions: The landing page must provide descriptions of the datasets available, and information on how to programmatically retrieve data where a user or device is so authorized. (See Dataset description for formats);</ns0:p><ns0:p>• (conditional) Versions: What versions of the data are available, if there is more than one version that may be accessed.</ns0:p><ns0:p>• (optional) Explanatory or contextual information: Provide explanations, contextual guidance, caveats, and/or documentation for data use, as appropriate.</ns0:p><ns0:p>• (conditional) Access controls: Access controls based on content licensing, Protected Health Information (PHI) status, Institutional Review Board (IRB) authorization, embargo, or other restrictions, should be implemented here if they are required.</ns0:p><ns0:p>• (recommended) Persistence statement. Reference to a statement describing the data and metadata persistence policies of the repository should be provided at the landing page. Data persistence policies will vary by repository but should be clearly described. (See Persistence guarantee for recommended language).</ns0:p><ns0:p>• (recommended) Licensing information: Information regarding licensing should be provided, with links to the relevant licensing or waiver documents as required (e.g., Creative Commons CC0 waiver description (https://creativecommons.org/publicdomain/zero/1.0/), or other relevant material).</ns0:p><ns0:p>• (conditional) Data availability and disposition: The landing page should provide information on the availability of the data if it is restricted, or has been de-accessioned (i.e. removed from the archive). As stated in the JDDCP, metadata should persist beyond de-accessioning.</ns0:p><ns0:p>• (optional) Tools/software: What tools and software may be associated or useful with the datasets, and how to obtain them (certain datasets are not readily usable without specific software).</ns0:p></ns0:div>
<ns0:div><ns0:head>Content encoding on landing pages</ns0:head><ns0:p>Landing pages should provide both human-readable and machine-readable content.</ns0:p><ns0:p>• HTML; that is, the native browser-interpretable format used to generate a graphical and/or languagebased display in a browser window, for human reading and understanding.</ns0:p><ns0:p>• At least one non-proprietary machine-readable format; that is, a content format with a fully specified syntax capable of being parsed by software without ambiguity, at a data element level. Options: XML, JSON/JSON-LD, RDF (Turtle, RDF-XML, N-Triples, N-Quads), microformats, microdata, RDFa.</ns0:p></ns0:div>
<ns0:div><ns0:head>Best practices for dataset description</ns0:head><ns0:p>Minimally the following metadata elements should be present in dataset descriptions:</ns0:p><ns0:p>• Dataset Identifier: A machine-actionable identifier resolvable on the Web to the dataset</ns0:p><ns0:p>• Title: The title of the dataset.</ns0:p><ns0:p>• Description: A description of the dataset, with more information than the title.</ns0:p><ns0:p>• Creator: The person(s) and/or organizations who generated the dataset and are responsible for its integrity.</ns0:p><ns0:p>• Publisher/Contact: The organization and/or contact who published the dataset and is responsible for its persistence.</ns0:p><ns0:p>• PublicationDate/Year/ReleaseDate -ISO 8601 standard dates are preferred <ns0:ref type='bibr' target='#b40'>(Klyne and Newman (2002)</ns0:ref>).</ns0:p><ns0:p>• Version: The dataset version identifier (if applicable).</ns0:p></ns0:div>
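To make the minimal element set above concrete, the following sketch shows one way a landing page might expose those elements in a machine-readable serialization (here JSON produced from Python). The field names and all values are illustrative placeholders, not a prescribed schema.

```python
import json

# Illustrative machine-readable dataset description covering the minimal
# elements listed above; every value here is a made-up placeholder.
dataset_description = {
    "identifier": "http://dx.doi.org/10.1234/example.dataset",  # hypothetical DOI expressed as a URI
    "title": "Example survey dataset",
    "description": "De-identified responses collected for an example study.",
    "creator": ["A. Researcher", "Example University Data Group"],
    "publisher": "Example Data Repository",
    "publicationDate": "2015-02-04",                             # ISO 8601 date
    "version": "1.0",
}

# Serialize for delivery alongside the human-readable HTML landing page.
print(json.dumps(dataset_description, indent=2))
```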
<ns0:div><ns0:p>As noted in the Landing pages section, when data is de-accessioned, the landing page should remain online, continuing to provide persistent metadata and other information including a notation on data de-accessioning. Authors and scholarly article publishers will decide on which repositories meet their persistence and stewardship requirements based on the guarantees provided and their overall experience in using various repositories. Guarantees need to be supported by operational practice.</ns0:p></ns0:div>
<ns0:div><ns0:head>IMPLEMENTATION: STAKEHOLDER RESPONSIBILITIES</ns0:head><ns0:p>Research communications are made possible by an ecosystem of stakeholders who prepare, edit, publish, archive, fund, and consume them. Each stakeholder group endorsing the JDDCP has, we believe, certain responsibilities regarding implementation of these recommendations. They will not all be implemented at once, or homogeneously. But careful adherence to these guidelines and responsibilities will provide a basis for achieving the goals of uniform scholarly data citation. 2. Registries: Registries of data repositories such as databib (http://databib.org) and r3data (http://www.re3data.org) should document repository conformance to these recommendations as part of their registration process, and should make this information readily available to researchers and the public. This also applies to lists of 'recommended' repositories maintained by publishers, such as those maintained by Nature Scientific Data 7 and F1000Research 8 .</ns0:p><ns0:p>3. Researchers: Researchers should treat their original data as first-class research objects. They should ensure it is deposited in an archive that adheres to the practices described here. We also encourage authors to publish preferentially with journals which implement these practices.</ns0:p><ns0:p>4. Funding agencies: Agencies and philanthropies funding research should require that recipients of funding follow the guidelines applicable to them.</ns0:p><ns0:p>5. Scholarly societies: Scholarly societies should strongly encourage adoption of these practices by their members and by publications that they oversee.</ns0:p><ns0:p>6. Academic institutions: Academic institutions should strongly encourage adoption of these practices by researchers appointed to them and should ensure that any institutional repositories they support also apply the practices relevant to them.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>These guidelines, together with the NISO JATS 1.1d2 XML schema for article publishing (National Center for Biotechnology Information ( <ns0:ref type='formula'>2014</ns0:ref>)), provide a working technical basis for implementing the Joint Data Citation Principles. They were developed by a cross-disciplinary group hosted by the Force11.org digital scholarship community 9 Data Citation Implementation Group 10 , during 2014, as a follow-on project to the successfully concluded Joint Data Citation Principles effort.</ns0:p><ns0:p>Registries of data repositories such as r3data (http://r3data.org) and publishers' lists of 'recommended' repositories for cited data, such as those maintained by Nature Publications (http://www.nature.com/sdata/ data-policies/repositories), should take ongoing note of repository compliance to these guidelines, and provide compliance checklists.</ns0:p><ns0:p>We are aware that some journals are already citing data in persistent public repositories, and yet not all of these repositories currently meet the guidelines we present here. Compliance will be an incremental improvement task.</ns0:p><ns0:p>Other deliverables from the DCIG are planned for release in early 2015, including a review of selected data-citation workflows from early-adopter publishers (Nature, Biomed Central, Wiley and Faculty of 1000). The NISO-JATS version 1.1d2 revision is now considered a stable release by the JATS Standing Committee, and is under final review by the National Information Standards Organization (NISO) for approval as the updated ANSI/NISO Z39.96-2012 standard. We believe it is safe for publishers to use the 1.1d2 revision for data citation now. A forthcoming article in this series will describe the JATS revisions in detail.</ns0:p><ns0:p>We hope that publishing this document and others in the series will accelerate the adoption of data citation on a wide scale in the scholarly literature, to support open validation and reuse of results.</ns0:p><ns0:p>Integrity of scholarly data is not a private matter, but is fundamental to the validity of published research. If data are not robustly preserved and accessible, the foundations of published research claims based upon them are not verifiable. As these practices and guidelines are increasingly adopted, it will no longer be acceptable to credibly assert any claims whatsoever that are not based upon robustly archived, identified, searchable and accessible data.</ns0:p><ns0:p>We welcome comments and questions which should be addressed to the [email protected] open discussion forum.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Archives and repositories: (a) Identifiers, (b) resolution behavior, (c) landing page metadata elements, (d) dataset description and (e) data access methods, should all conform to the technical recommendations in this article.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Examples of identifier schemes meeting JDDCP criteria.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Identifier scheme</ns0:cell><ns0:cell>Full name</ns0:cell><ns0:cell>Authority</ns0:cell><ns0:cell>Resolution URI</ns0:cell></ns0:row><ns0:row><ns0:cell>DataCite DOI (as URI)</ns0:cell><ns0:cell>DataCite-assigned Digital Object Identifier</ns0:cell><ns0:cell>DataCite</ns0:cell><ns0:cell>http://dx.doi.org</ns0:cell></ns0:row><ns0:row><ns0:cell>CrossRef DOI (as URI)</ns0:cell><ns0:cell>CrossRef-assigned Digital Object Identifier</ns0:cell><ns0:cell>CrossRef</ns0:cell><ns0:cell>http://dx.doi.org</ns0:cell></ns0:row><ns0:row><ns0:cell>Identifiers.org URI</ns0:cell><ns0:cell>Identifiers.org-assigned Uniform Resource Identifier</ns0:cell><ns0:cell>Identifiers.org</ns0:cell><ns0:cell>http://identifiers.org</ns0:cell></ns0:row><ns0:row><ns0:cell>HTTP(s) URI</ns0:cell><ns0:cell>HTTP or HTTP(s) Uniform Resource Identifier</ns0:cell><ns0:cell>Domain name owner</ns0:cell><ns0:cell>n/a</ns0:cell></ns0:row><ns0:row><ns0:cell>PURL</ns0:cell><ns0:cell>Persistent Uniform Resource Locator</ns0:cell><ns0:cell>Online Computer Library Center (OCLC)</ns0:cell><ns0:cell>http://purl.org</ns0:cell></ns0:row><ns0:row><ns0:cell>Handle (HDL)</ns0:cell><ns0:cell>Handle System HDL</ns0:cell><ns0:cell>Corporation for National Research Initiatives (CNRI)</ns0:cell><ns0:cell>http://handle.net</ns0:cell></ns0:row><ns0:row><ns0:cell>ARK</ns0:cell><ns0:cell>Archival Resource Key</ns0:cell><ns0:cell>Name Assigning or Mapping Authorities (various) a</ns0:cell><ns0:cell>http://n2t.net; Name Mapping Authorities</ns0:cell></ns0:row><ns0:row><ns0:cell>NBN</ns0:cell><ns0:cell>National Bibliographic Number</ns0:cell><ns0:cell>various</ns0:cell><ns0:cell>various</ns0:cell></ns0:row></ns0:table></ns0:figure>
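To make the resolution behavior summarized in Table 1 concrete, the following sketch shows how a machine agent might dereference a cited identifier through its resolution URI and request machine-readable metadata via HTTP content negotiation. This is an illustrative sketch only: the DOI is a placeholder rather than a real cited dataset, and the Accept media type shown (CSL JSON) is one commonly offered by DOI resolvers, not something mandated by these guidelines.

```python
# Illustrative sketch: dereferencing a cited dataset identifier (see Table 1).
# The DOI below is a placeholder, not a real cited dataset.
import urllib.request

def resolve_identifier(identifier_uri, accept="application/vnd.citationstyles.csl+json"):
    """Follow the identifier's resolution URI and request machine-readable
    metadata via HTTP content negotiation."""
    request = urllib.request.Request(identifier_uri, headers={"Accept": accept})
    with urllib.request.urlopen(request) as response:
        # The resolver redirects toward a landing page or metadata document;
        # geturl() reports the final location reached after redirects.
        return response.geturl(), response.read().decode("utf-8")

if __name__ == "__main__":
    final_url, metadata = resolve_identifier("http://dx.doi.org/10.1234/example-dataset")
    print(final_url)
    print(metadata[:200])
```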
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Identifier scheme persistence and object removal behavior</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Identifier scheme</ns0:cell><ns0:cell>Achieving persistence</ns0:cell><ns0:cell>Enforcing persistence</ns0:cell><ns0:cell>Action on object removal</ns0:cell></ns0:row><ns0:row><ns0:cell>DataCite DOI</ns0:cell><ns0:cell>registration with contract a</ns0:cell><ns0:cell>link checking</ns0:cell><ns0:cell>DataCite contacts owners; metadata should persist</ns0:cell></ns0:row><ns0:row><ns0:cell>CrossRef DOI</ns0:cell><ns0:cell>registration with contract b</ns0:cell><ns0:cell>link checking</ns0:cell><ns0:cell>CrossRef contacts owners per policy c ; metadata should persist</ns0:cell></ns0:row><ns0:row><ns0:cell>Identifiers.org URI</ns0:cell><ns0:cell>registration</ns0:cell><ns0:cell>link checking</ns0:cell><ns0:cell>metadata should persist</ns0:cell></ns0:row><ns0:row><ns0:cell>HTTP(s) URI</ns0:cell><ns0:cell>domain owner responsibility</ns0:cell><ns0:cell>none</ns0:cell><ns0:cell>domain owner responsibility</ns0:cell></ns0:row><ns0:row><ns0:cell>PURL URI</ns0:cell><ns0:cell>registration</ns0:cell><ns0:cell>none</ns0:cell><ns0:cell>domain owner responsibility</ns0:cell></ns0:row><ns0:row><ns0:cell>Handle (HDL)</ns0:cell><ns0:cell>registration</ns0:cell><ns0:cell>none</ns0:cell><ns0:cell>identifier should persist</ns0:cell></ns0:row><ns0:row><ns0:cell>ARK</ns0:cell><ns0:cell>user-defined policies</ns0:cell><ns0:cell>hosting server</ns0:cell><ns0:cell>host-dependent; metadata should persist d</ns0:cell></ns0:row><ns0:row><ns0:cell>NBN</ns0:cell><ns0:cell>IETF RFC3188</ns0:cell><ns0:cell>domain resolver</ns0:cell><ns0:cell>metadata should persist</ns0:cell></ns0:row></ns0:table></ns0:figure>
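Table 2 lists 'link checking' as one mechanism for enforcing persistence. A minimal sketch of such a check appears below: it only verifies that each identifier still resolves over HTTP and records the outcome. A production registry or repository checker would additionally need retries, rate limiting, and detection of tombstone or metadata-only pages; the identifiers listed here are illustrative assumptions, not part of the recommendations.

```python
# Minimal sketch of a link check over a list of cited identifiers (see Table 2).
# The identifiers below are illustrative placeholders.
import urllib.request
import urllib.error

def check_identifier(identifier_uri, timeout=10):
    """Return (identifier_uri, status), where status is the final HTTP code
    after redirects, or an error label if resolution failed."""
    try:
        with urllib.request.urlopen(identifier_uri, timeout=timeout) as response:
            return identifier_uri, response.getcode()
    except urllib.error.HTTPError as err:
        # e.g. 404 (object removed) or 410 (tombstoned); metadata should still persist.
        return identifier_uri, err.code
    except (urllib.error.URLError, OSError):
        return identifier_uri, "unreachable"

if __name__ == "__main__":
    cited = [
        "http://dx.doi.org/10.1234/example-dataset",  # placeholder DOI as URI
        "http://identifiers.org/taxonomy/9606",       # Identifiers.org-style URI
    ]
    for uri, status in map(check_identifier, cited):
        print(uri, status)
```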
<ns0:note place='foot' n='1'>Robust citation of archived methods and materials (particularly highly variable materials such as cell lines, engineered animal models, etc.) and of software raises important questions not dealt with here. See <ns0:ref type='bibr' target='#b64'>Vasilevsky et al. (2013)</ns0:ref> for an excellent discussion of this topic for biological reagents.</ns0:note>
<ns0:note place='foot' n='1'>1. Document Data Model - How should publishers adapt their document data models to support direct citation of data? 2. Publishing Workflows - How should publishers change their editorial workflows to support data citation? What do publisher data deposition and citation workflows look like where data is being cited today, such as in Nature Scientific Data or GigaScience? 3. Common Repository Application Program Interfaces (APIs) - Are there any approaches that can provide standard programmatic access to data repositories for data deposition, search and retrieval? 4. Identifiers, Metadata, and Machine Accessibility - What identifier schemes, identifier resolution patterns, standard metadata, and recommended machine programmatic accessibility patterns are recommended for directly cited data? The Document Data Model group noted that publishers use a variety of XML schemas (<ns0:ref type='bibr' target='#b9'>Bray et al. (2008)</ns0:ref>; <ns0:ref type='bibr' target='#b23'>Gao et al. (2012)</ns0:ref>; <ns0:ref type='bibr' target='#b51'>Peterson et al. (2012)</ns0:ref>) to model scholarly articles. However, there is a relevant National Information Standards Organization (NISO) specification, NISO Z39.96-2012, which is</ns0:note>
<ns0:note place='foot' n='6'>ORCiD IDs are numbers identifying individual researchers issued by a consortium of prominent academic publishers and others (Editors (2010); <ns0:ref type='bibr' target='#b48'>Maunsell (2014)</ns0:ref>).</ns0:note>
<ns0:note place='foot' n='7'>http://www.nature.com/sdata/data-policies/repositories 8 http://f1000research.com/for-authors/data-guidelines 9 Force11.org (http://force11.org) is a community of scholars, librarians, archivists, publishers and research funders that has arisen organically to help facilitate the change toward improved knowledge creation and sharing. It is incorporated as a US 501(c)3 not-for-profit organization in California. 10 (DCIG, https://www.force11.org/datacitationimplementation)</ns0:note>
</ns0:body>
" | "MASSGENERAL INSTITUTE FOR NEURODEGENERATIVE DISEASE
Harry Hochheiser, Ph.D.
PeerJ Academic Editor
Assistant Professor of Biomedical Informatics
University of Pittsburgh
5607 Baum Boulevard BAUM 423
Pittsburgh, PA 15206-370
February 4, 2015
RE: Minor revisions to “Achieving human and machine accessibility of cited data in
scholarly publications”
Dear Dr. Hochheiser,
Thank you and the various reviewers once again for your careful review of our
manuscript, “Achieving human and machine accessibility of cited data in scholarly
publications”, resubmitted to PeerJ for publication. We have made the minor corrections
you suggested, while correcting a few additional errors detected in our own final review.
We also introduced a short bit of clarifying text into one section.
A summary of our revisions follows.
A. Changes Requested by the Academic Editor
1. At the end of the paragraph starting with 'The publishing workflow...', you have a
sentence with nested open parens and only one close paren.
2. The last sentence on page 5, describing REST applications, ends without a period.
Furthermore, the final claim seems a bit funny - REST is not particularly tied to mobile
apps.
3. The first paragraph on page 11 mentions an ORE resource map, without any citation or
description of what that might refer to.
We made each of the above corrections as requested.
For point 2, we also briefly noted why REST is considered more suitable for mobile apps –
basically because it supports HTTP GET caching and requires less bandwidth. For point
3 we added an ORE reference.
B. Additional Changes
1. Corrected pervasive substitution of right quote marks in place of left quote marks.
2. Expanded W3C/CWI affiliation on the title page to full institutional names.
3. Searched for and corrected all additional instances of unbalanced left and right
parentheses.
4. John Kunze, one of the authors of the ARK identifier scheme, at our request kindly
reviewed the document. He substituted more authoritative versions of some references for
ARK; added a few new ones; and added a short piece of text better clarifying how ARK
works.
5. We also corrected some references to “Document Object Identifiers” to read properly
as “Digital Object Identifiers”.
Thank you again for your impressively rapid and thorough review.
Sincerely yours,
Timothy W. Clark, Ph.D.
Director of Informatics, MassGeneral Institute for Neurodegenerative Disease
Assistant Professor of Neurology, Harvard Medical School
" | Here is a paper. Please give your review comments after reading it. |
690 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Edge-cloud computing has attracted increasing attention recently, due to its efficiency on providing services for not only delay-sensitive applications but also resource-intensive requests, by combining low-latency edge resources and abundant cloud resources.</ns0:p><ns0:p>Carefully designed strategy of service caching and task offloading helps to improve the user satisfaction and the resource efficiency. Thus, in this paper, we focus on joint service caching and task offloading problem in edge-cloud computing environments, to improve the cooperation between edge and cloud resources. First, we formulate the problem into a mix-integer nonLinear programming, which is proofed as NP-hard. Then, we propose a three-stage heuristic method for solving the problem in polynomial time. In the first stages, our method tries to make full use of abundant cloud resources by pre-offloading as many tasks as possible to the cloud. Our method aims at making full use of low-latency edge resources by offloading remaining tasks and caching corresponding services on edge resources. In the last stage, our method focuses on improving the performance of tasks offloaded to the cloud, by re-offloading some tasks from cloud resources to edge resources. The performance of our method is evaluated by extensive simulated experiments. The results show that our method has up-to 155%, 56.1%, and 155% better performance in user satisfaction, resource efficiency, and processing efficiency, respectively, compared with several classical and state-of-the-art task scheduling methods.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>As the development of computer and network technologies, mobile and Internet-of-Thing (IoT) devices have become more and more popular. As shown in the latest Cisco Annual Internet Report <ns0:ref type='bibr' target='#b3'>(Cisco, 2020)</ns0:ref>, global mobile and IoT devices will reach 13.1 billion and 14.7 billion by 2023, respectively. But a mobile or IoT device have very limited computing and energy capacities, constrained by the limited size <ns0:ref type='bibr' target='#b24'>(Wu et al., 2019)</ns0:ref>. Thus, user requirements usually cannot be satisfied by only using device resources, as mobile and IoT applications are the rapid growing in all of number, category and complexity. 'Globally, 299.1 billion mobile applications will be downloaded by 2023' <ns0:ref type='bibr' target='#b3'>(Cisco, 2020)</ns0:ref>.</ns0:p><ns0:p>To address the above issue, mobile cloud computing is proposed to expand the computing capacity by 'infinite' cloud resources <ns0:ref type='bibr' target='#b15'>(Rahmani et al., 2021)</ns0:ref>. But cloud computing generally has a poor network performance, because it provides services over Internet. This leads to dissatisfactions of demands for PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:72005:1:2:NEW 24 May 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science latency-sensitive applications. Therefore, by putting a few computing resources close to user devices, edge computing is an effective approach to complement the cloud computing <ns0:ref type='bibr' target='#b10'>(Huda and Moh, 2022)</ns0:ref>.</ns0:p><ns0:p>Edge computing can provide services with a small network delay for users, because it usually has local area network (LAN) connections with user devices, over such as Wifi, micro base stations.</ns0:p><ns0:p>Unfortunately, an edge computing center (edge for short) generally is equipped with only a few servers due to the limited space <ns0:ref type='bibr'>(Wang et al., 2020)</ns0:ref>. Thus, edges cannot provide all services at the same time, due to their insufficient storage resources. In an edge-cloud computing, a request can be processed by an edge only when its service is cached in the edge. To provide services efficiently by edge-cloud computing, the service provider must design the service caching and the task offloading strategies carefully <ns0:ref type='bibr' target='#b13'>(Luo et al., 2021)</ns0:ref>. The service caching decides which services are cached on each edge, and the task offloading decides where is each request task processed. There are several works focusing on both service caching and task offloading problems. But these works have some issues must be addressed before their practical usages. Such as, several works assume there is an infinite number of communication channels for each edge, which ignored the allocation of edge network resources <ns0:ref type='bibr' target='#b32'>(Zhang et al., 2021b;</ns0:ref><ns0:ref type='bibr'>Xia et al., 2021c,a)</ns0:ref>. Some works focus on the homogeneous requests <ns0:ref type='bibr' target='#b5'>(Farhadi et al., 2021</ns0:ref><ns0:ref type='bibr' target='#b7'>(Farhadi et al., , 2019))</ns0:ref>, which have limited application scope. Besides, All of existing related works use edge resources first for a better data transfer performance, and employ cloud resources only when edge resources are exhausted. This can lead to an inadequate usage of abundant cloud resources. 
By using these existing service caching and task offloading strategies, some requests that can tolerate a certain latency are offloaded to edges at first. This can result in insufficient edge resources for meeting low-latency requirements of some subsequent requests.</ns0:p><ns0:p>To address issues of existing works, in this paper, we focus on the joint service caching and task offloading problem for improving the cooperation between edge and cloud computing. We first formulate this problem into a Mix-Integer Non-Linear Programming (MINLP) for optimizing the user satisfaction and the resource utilization. And then, to solve the problem with polynomial time, we propose a multistage heuristic method. The aim of our method is to make full use of both the low-latency of edge resources and the abundance of cloud resources. In brief, the contributions of this paper are as followings.</ns0:p><ns0:p>• We formulate the joint service caching and task offloading problem into a MINLP for edge-cloud computing with two optimization objectives. The major one is to maximize the user satisfaction in terms of the number of tasks whose requirements are satisfied. The minor one is maximizing the overall resource utilization.</ns0:p><ns0:p>• We propose a heuristic method with three stages to address the joint service caching and task offloading problem in polynomial time. In the first stage, the method pre-offloads latency-insensitive request tasks to the cloud for exploiting the abundance of cloud resources. At the second stage, the proposed method processes latency-sensitive requests in the edge by caching their requested services on edge servers. In the last stage, our method re-offloads some requests from the cloud to the edge for improving their performance, when there are available edge resources at the end of the second stage.</ns0:p><ns0:p>• We conduct extensive simulated experiments to evaluate the performance of our proposed heuristic method. Experiment results show that our method can achieve better performance in user satisfaction, resource efficiency, and processing efficiency, compared with five of classical and state-of-the-art methods.</ns0:p><ns0:p>In the rest of this paper, The next section formulates the joint service caching and task offloading problem we concerned. The third section presents the proposed multi-stage heuristic method. Fourth section evaluates the proposed heuristic approach by simulated experiments. The subsequent section illustrates related works and the last section concludes this paper.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>PROBLEM FORMULATION</ns0:head><ns0:p>In this paper, as shown in Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>, we consider the edge-cloud computing system consisting of various user devices, multiple edges and one cloud. Each device has a wireless connection with an edge over various access points, and has a Wide Area Network (WAN) connection with the cloud. Each edge is equipped with one or more edge servers. For each request task launched by a device, it can be offloaded to an edge server (ES) or the cloud. When a task is offloaded to an ES, the ES must have a network connection with its device 1 , and its requested service must be cached on the ES. If the task is offloaded to the cloud, there must be a cloud server (CS) that can meet all of its requirements. Next, we present the formulation for the joint service caching and task offloading problem in detail. The notations used in our formulation is shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>System Model</ns0:head><ns0:p>In the edge-cloud computing system, there are E ESs respectively represented by e j , 1 ≤ j ≤ E. The storage and computing capacities of edge server e j respectively are b j and g j . We assume the communications between user devices and ESs employ the orthogonal frequency multi-access technology, as done by many published works, e.g., <ns0:ref type='bibr' target='#b17'>(Tian et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b25'>Wu et al., 2021)</ns0:ref>. For e j , there are N j communication channels for the data transmission of offloaded tasks. As the result data is much less than the input data <ns0:ref type='bibr' target='#b14'>(Peng et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b9'>Gu and Zhang, 2021;</ns0:ref><ns0:ref type='bibr' target='#b33'>Zhao et al., 2021)</ns0:ref>, we ignore the communication latency caused by the output data transmission. The communication channel capacity can be easily achieved by the transmission power, the channel bandwidth and the white Gaussian noise, according to Shannons theorem <ns0:ref type='bibr' target='#b25'>(Wu et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b30'>Xu et al., 2019)</ns0:ref>. We use w j to represent the capacity of each communication channel in edge server e j .</ns0:p><ns0:p>For satisfying requirements of users, V CSs (v k , 1 ≤ k ≤ V ) need to be rented from the cloud when edge resources are not enough. For CS v k , the computing capacity is g v k . The storage capacity of the cloud is assumed to be infinity, so the cloud can provide all services that users request. In general, a CS is equipped with one network interface card (NIC) for the network connection. We use w C to represent the network capacity of each CS for the data transmission of tasks that are offloaded to the cloud.</ns0:p><ns0:p>There are T tasks requested by all of users in the system, represented by t i , 1 ≤ i ≤ T . For task t i , it has a i input data amount should be processed by its requested service, and requires f i computing resources for its completion. Then, if task t i is offloaded to e j at m th channel, the data transmission latency is a i /w j , and the computing latency is f i /g j . The deadline of t i is d i , which means t i must be finished before d i . In this paper, we focus on the hard deadline requirements of tasks, and leave soft deadline constraints as our future work. For each task, it can be only offloaded to the ES that has connection with its device. We use binary constants o i, j , 1 ≤ i ≤ T, 1 ≤ j ≤ E, to represent these connectivities, where o i, j = 1 if t i can be offloaded to e j , and otherwise, o i, j = 0.</ns0:p><ns0:p>In total, the system provides S kinds of services (s l , 1 ≤ l ≤ S) for its users. For s l , it requires h l storage space. We use binary constants r i,l (1 ≤ i ≤ T, 1 ≤ l ≤ S) to identify the service requested by each task. When task t i requests service s l , r i,l = 1.</ns0:p><ns0:p>For the formulation of the joint service caching and task offloading problem, we define binary variables Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_0'>x i, j,m (1 ≤ i ≤ T, 1 ≤ j ≤ E, 1 ≤ m ≤ N j )</ns0:formula><ns0:p>Computer Science The i th task. f i</ns0:p><ns0:p>The size of the computing resources required by t i . a i</ns0:p><ns0:p>The amount of the input data of t i . d i</ns0:p><ns0:p>The deadline of t i . p i</ns0:p><ns0:p>The start time of the input data transfer for t i that offloaded to an edge server. 
q i</ns0:p><ns0:p>The finish time of the input data transfer for t i that offloaded to an edge server. c i</ns0:p><ns0:p>The start time of the computing for t i that offloaded to an edge server. z i</ns0:p><ns0:p>The finish time of the computing for t i that offloaded to an edge server.</ns0:p><ns0:formula xml:id='formula_1'>p C i</ns0:formula><ns0:p>The start time of the input data transfer for t i that offloaded to a cloud server.</ns0:p><ns0:formula xml:id='formula_2'>q C i</ns0:formula><ns0:p>The finish time of the input data transfer for t i that offloaded to a cloud server.</ns0:p><ns0:formula xml:id='formula_3'>c C i</ns0:formula><ns0:p>The start time of the computing for t i that offloaded to a cloud server.</ns0:p><ns0:formula xml:id='formula_4'>z C i</ns0:formula><ns0:p>The finish time of the computing for t i that offloaded to a cloud server.</ns0:p></ns0:div>
<ns0:div><ns0:head>E</ns0:head><ns0:p>The number of edge servers. e j</ns0:p><ns0:p>The j th edge server b j</ns0:p><ns0:p>The storage capacity of e j . g j</ns0:p><ns0:p>The computing capacity of e j .</ns0:p></ns0:div>
<ns0:div><ns0:head>N j</ns0:head><ns0:p>The number of communication channels provided by e j . w j</ns0:p><ns0:p>The communication capacity each channel in e j .</ns0:p></ns0:div>
<ns0:div><ns0:head>V</ns0:head><ns0:p>The number of cloud servers. v k</ns0:p><ns0:p>The k th cloud server.</ns0:p><ns0:formula xml:id='formula_5'>g v k The computing capacity of v k . w C</ns0:formula><ns0:p>The network capacity of each cloud server.</ns0:p></ns0:div>
<ns0:div><ns0:head>S</ns0:head><ns0:p>The number of services. s l</ns0:p><ns0:p>The l th service. h l</ns0:p><ns0:p>The storage space required by s l . o i, j</ns0:p><ns0:p>The binary constant indicating whether t i can be offloaded to e j . r i,l</ns0:p><ns0:p>The binary constant indicating whether t i requests s l . x i, j,m</ns0:p><ns0:p>The binary variable indicating whether t i is offloaded to e j and m th channel of e j is allocated to t i for the data transmission.</ns0:p><ns0:formula xml:id='formula_6'>x e i, j</ns0:formula><ns0:p>The binary variable indicating whether t i is offloaded to e j . x e i, j = ∑ N j m=1 x i, j,m . y i,k</ns0:p><ns0:p>The binary variable indicating whether t i is offloaded to v k .</ns0:p></ns0:div>
<ns0:div><ns0:head>N f in</ns0:head><ns0:p>The number of tasks with deadline satisfactions.</ns0:p></ns0:div>
<ns0:div><ns0:head>U</ns0:head><ns0:p>The overall computing resource utilization.</ns0:p><ns0:p>allocations in ESs, as shown in Eq. (1), and use accesses only one channel when it is offloaded to an ES, thus, inequations (3) hold.</ns0:p><ns0:formula xml:id='formula_7'>y i,k (1 ≤ i ≤ T, 1 ≤ k ≤ V ) to</ns0:formula><ns0:formula xml:id='formula_8'>132 x i, j,m =    1, if t i is offloaded to e j</ns0:formula><ns0:p>and the m th channel is allocated for the task's data transmission 0, else .</ns0:p><ns0:p>(1)</ns0:p><ns0:formula xml:id='formula_9'>y i,k = 1, if t i is offloaded to v k 0, else . (<ns0:label>2</ns0:label></ns0:formula><ns0:formula xml:id='formula_10'>) E ∑ j=1 N ∑ m=1 x i, j,m + V ∑ k=1 y i,k ≤ 1, i = 1, ..., T.<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>To make following formulations more concise, we define binary variables x e i, j (1 ≤ i ≤ T, 1 ≤ j ≤ E) as the offloading decisions of tasks to ESs, where x e i, j = 1 if t i is offloaded to e j , and otherwise x e i, j = 0. Then x e i, j = 1 if and only if there is a channel allocated to t i on e j , i.e., ∑ N j m=1 x i, j,m = 1. Thus, Eq. ( <ns0:ref type='formula' target='#formula_11'>4</ns0:ref>) are Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>established. A task can be offloaded to an ES only if the ES has connection with its device, which can be formulated as Eq. ( <ns0:ref type='formula'>5</ns0:ref>).</ns0:p><ns0:p>x e i, j = N j ∑ m=1</ns0:p><ns0:p>x i, j,m , i = 1, ..., T, j = 1, ..., E.</ns0:p><ns0:p>x e i, j ≤ o i, j , i = 1, ..., T, j = 1, ..., E.</ns0:p><ns0:p>(5)</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Task Processing Model in Edge</ns0:head><ns0:p>When a task is offloaded to an ES for its processing, its requested service has been cached on the ES.</ns0:p><ns0:p>In such situation, the task transmits its input data to the ES on the channel allocated to the task (the m th channel such that x i, j,m = 1) for its computing. After the input data delivery is completed, the service will process these input data by computing resources of the ES.</ns0:p><ns0:p>For t i offloaded to e j for processing, its finish time of the input data transmission is the start time of the transmission plus the transmission time. This can be formulated as Eq. ( <ns0:ref type='formula' target='#formula_12'>6</ns0:ref>), where q i and p i respectively represent the finish time and start time of the input data transmission for t i offloaded to an ES.</ns0:p><ns0:formula xml:id='formula_12'>E ∑ j=1 N ∑ m=1 (x i, j,m • q i ) = E ∑ j=1 N ∑ m=1 (x i, j,m • (p i + a i /w j )), i = 1, ..., T.<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>For each channel on an ES, it is usually allocated to multiple tasks for their data transmission. To avoid the interference, we exploit the sequential data transmission model for these tasks on a channel. For any two tasks, say t i1 and t i2 , the input data transmission of one task can be only started after finishing another's, when they both use one channel on one ES. If the transmission of t i1 is started before that of t i2 , we have p i1 ≤ q i1 ≤ p i2 ≤ q i2 . Otherwise, p i2 ≤ q i2 ≤ p i1 ≤ q i1 . Thus, we have constraints (7) that must be satisfied.</ns0:p><ns0:formula xml:id='formula_13'>E ∑ j=1 N ∑ m=1 (x i1, j,m • x i2, j,m • (q i1 − p i2 ) • (q i2 − p i1 )) ≤ 0, i1, i2 = 1, ..., T.<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>Similar to the data transmission constraints, the finish computing time of a task is its start computing time plus its computing latency. Thus, Eq. (8) hold, where cst i and z i are respectively the start time and the finish time of computing task t i offloaded to an ES. As the computing of a task can be started only after the finish of its input data transmission, constraints (9) must be met.</ns0:p><ns0:formula xml:id='formula_14'>E ∑ j=1 (x e i, j • z i ) = E ∑ j=1 (x e i, j • (c i + f i /g j )), i = 1, ..., T.<ns0:label>(8)</ns0:label></ns0:formula><ns0:formula xml:id='formula_15'>E ∑ j=1 N ∑ m=1 (x i, j,m • z i ) ≤ E ∑ j=1 (x e i, j • c i ), i = 1, ..., T.<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>Also, to avoid the interference, we assume tasks are computed sequentially, and thus, similar to constraints (7), we have constraints (10).</ns0:p><ns0:formula xml:id='formula_16'>E ∑ j=1 (x e i1, j • x e i2, j • (z i1 − c i2 ) • (z i2 − c i1 )) ≤ 0, i1, i2 = 1, ..., T.<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>When a task offloaded to an ES, its requested service must be cached in the ES. The storage capacity of an ES limits the number of services. This can be formulated into Eq. ( <ns0:ref type='formula' target='#formula_17'>11</ns0:ref>). Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_17'>T ∑ i=1 (x e i, j • r i,l • h l ) ≤ b j , j = 1, ..., E.<ns0:label>(11</ns0:label></ns0:formula><ns0:p>Computer Science</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3'>Task Processing Model in Cloud</ns0:head><ns0:p>When a task is offloaded to the cloud, there are mainly two ways for using CSs. One is that each cloud server is monopolized by a task. Another is that a CS can be multiplexed by multiple tasks. By using the first way, there is no data transmission wait time for tasks. But this will increase the cost of using CSs, because the CS is charged on a per-unit-time basis. For example, when a CS is used only 1.1 hours, it is charged two hours. Thus, we exploit the second way for multiplexing CSs to improve the resource and cost efficiencies. In such way, the formulation of the task processing in the cloud is similar to that in edges.</ns0:p><ns0:p>When a task is offloaded to a CS, its finish time of the input data transmission and the computing can be calculated by Eq. ( <ns0:ref type='formula' target='#formula_18'>12</ns0:ref>) and Eq. ( <ns0:ref type='formula' target='#formula_19'>13</ns0:ref>), respectively. Where q C i (p C i ) and z C i (c C i ) are respectively the finish time (the start time) of the input data transmission and the computing of t i offloaded to a CS, respectively.</ns0:p><ns0:formula xml:id='formula_18'>V ∑ k=1 (y i,k • q C i ) = V ∑ k=1 (y i,k • (p C i + a i /w C )), i = 1, ..., T. (<ns0:label>12</ns0:label></ns0:formula><ns0:formula xml:id='formula_19'>) V ∑ k=1 (y i,k • z C i ) = E ∑ j=1 (y i,k • (c C i + f i /g v k )), i = 1, ..., T.<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>To avoid interferences, when multiple tasks are offloaded to one CS, the CS processes the data transmission and the computing sequentially. Thus, constraints ( <ns0:ref type='formula' target='#formula_20'>14</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_21'>15</ns0:ref>) must be satisfied, similar to constraints ( <ns0:ref type='formula' target='#formula_13'>7</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_16'>10</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_20'>V ∑ k=1 (y i1,k • y i2,k • (q C i1 − p C i2 ) • (q C i2 − p C i1 )) ≤ 0, i1, i2 = 1, ..., T.<ns0:label>(14)</ns0:label></ns0:formula><ns0:formula xml:id='formula_21'>V ∑ k=1 (y i1,k • y i2,k • (z C i1 − c C i2 ) • (z C i2 − c C i1 )) ≤ 0, i1, i2 = 1, ..., T.<ns0:label>(15)</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:head n='2.4'>Joint Service Caching and Task Offloading Problem Model</ns0:head><ns0:p>Based on the edge-cloud system and the task processing models, we can formulate the joint service caching and task offloading problem as the following Mix-Integer Non-Linear Programming (MINLP).</ns0:p><ns0:p>Maximizing N f in +U.</ns0:p><ns0:p>(16)</ns0:p><ns0:p>Subject to, Eq.( <ns0:ref type='formula' target='#formula_11'>4</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_22'>− (15),<ns0:label>(17)</ns0:label></ns0:formula><ns0:formula xml:id='formula_23'>N f in = T ∑ i=1 ( E ∑ j=1 x e i, j + V ∑ k=1 y i,k ),<ns0:label>(18)</ns0:label></ns0:formula><ns0:formula xml:id='formula_24'>U = ∑ T i=1 ((∑ E j=1 x e i, j + ∑ V k=1 y i,k ) * f i ) ∑ E j=1 (max T i=1 (x e i, j * z i ) * g j ) + ∑ V k=1 ( max T i=1 (y i,k * z C i ) 3600 * 3600 * g v k ) , (<ns0:label>19</ns0:label></ns0:formula><ns0:formula xml:id='formula_25'>)</ns0:formula><ns0:formula xml:id='formula_26'>z i ≤ d i , i = 1, ..., T,<ns0:label>(20)</ns0:label></ns0:formula><ns0:formula xml:id='formula_27'>z C i ≤ d i , i = 1, ..., T,<ns0:label>(21)</ns0:label></ns0:formula><ns0:formula xml:id='formula_28'>x i, j,m ∈ {0, 1}, i = 1, ..., T, j = 1, ..., E, m = 1, ..., N j ,<ns0:label>(22)</ns0:label></ns0:formula><ns0:formula xml:id='formula_29'>y i,k ∈ {0, 1}, i = 1, ..., T, k = 1, ...,V.<ns0:label>(23)</ns0:label></ns0:formula><ns0:p>There are two optimization objectives in our problem formulation, as shown in Eq. ( <ns0:ref type='formula'>16</ns0:ref>), which are maximizing the number of tasks whose requirements are satisfied (N f in ) and the computing resource utilization (U). N f in is the accumulated number of tasks finished in the edge-cloud computing system, which is one of popular metrics quantifying the user satisfaction and can be calculated by Eq. ( <ns0:ref type='formula' target='#formula_23'>18</ns0:ref>). U is the ratio between the amount of computing resources used by tasks and that of occupied resources.</ns0:p><ns0:p>The amount of computing resources used by tasks is the accumulated amount of computing resources required by finished tasks, which is</ns0:p><ns0:formula xml:id='formula_30'>∑ T i=1 ((∑ E j=1 x e i, j + ∑ V k=1 y i,k ) * f i ).</ns0:formula><ns0:p>For each ES/CS, its occupied time is the latest finish time of tasks offloaded to it. Thus, for an ES e j , the occupied time is max T i=1 (x e i, j * z i ),</ns0:p></ns0:div>
<ns0:div><ns0:head>6/19</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:72005:1:2:NEW 24 May 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Algorithm 1 Multi-stage heuristic joint service caching and task offloading method (MSHCO)</ns0:p><ns0:p>Input: The information of tasks and resources in the edge-cloud computing system. Output: A joint service caching and task offloading strategy. 1: Pre-offloading tasks whose requirements can be satisfied by cloud resources to the cloud, using Algorithm 2; 2: For tasks that are not pre-offloaded to the cloud, caching their requested services on and offloading them to edges by algorithm 3; 3: Re-offloading tasks that have pre-offloaded to the cloud from the cloud to edges by algorithm 4; 4: return The joint service caching and task offloading strategy; and its occupied computing resource amount is max T i=1 (x e i, j * z i ) * g j . For a CS, its occupied time is the ceiling time of the latest finish time of tasks offloaded to it, because CSs are charged on a per-unit-time basis. In this paper, we assume CSs are charged on hour, as done by Amazon ES2, AliCloud, etc. Thus, for CS v k , the occupied time is max T i=1 (y i,k * z C i ) 3600 * 3600 and its occupied computing resource amount is</ns0:p><ns0:formula xml:id='formula_31'>max T i=1 (y i,k * z C i ) 3600 * 3600 * g v k .</ns0:formula><ns0:p>And therefore, the overall computing resource utilization can be calculated by Eq. ( <ns0:ref type='formula' target='#formula_24'>19</ns0:ref>). Noticing that U is no more than 1, the user satisfaction maximization is the major optimization objective, and the computing resource utilization maximization is the minor one. Constraints (17) mainly include the calculation of tasks' finish time and the deadline requirements. Constraints ( <ns0:ref type='formula' target='#formula_26'>20</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_27'>21</ns0:ref>) restrict the deadline of each task. Constraints ( <ns0:ref type='formula' target='#formula_28'>22</ns0:ref>) and ( <ns0:ref type='formula' target='#formula_29'>23</ns0:ref>) represent that each task can be only offloaded to only one ES/CS for its processing.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.5'>Hardness of the Optimization Problem</ns0:head><ns0:p>For an instance of our optimization problem, there are limitless commutation channels and storage capacity for each ES. The problem instance is identical to the task scheduling on multiple heterogeneous cores by seeing each ES/CS as a computing core, which has been proofed as NP-hard problem <ns0:ref type='bibr' target='#b8'>(Gary and Johnson, 1979)</ns0:ref>. Thus, our optimization problem is NP-hard. The optimization problem is MINLP due to the non-liner constraints (e.g., Eq. ( <ns0:ref type='formula' target='#formula_24'>19</ns0:ref>)). Existing tools, e.g., lp solve <ns0:ref type='bibr' target='#b1'>(Berkelaar et al., 2021)</ns0:ref>, can be used for solving the problem based on simple branch and bound. But these tools need exponential time, and thus is not applicable to large-scale system. Therefore, we propose a heuristic method in the following section to achieve an approximate solution.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>MULTI-STAGE HEURISTIC METHOD</ns0:head><ns0:p>In this section, we propose a polynomial time heuristic method with three stages, outlined in Algorithm 1, for solving the joint service caching and task offloading problem presented in the previous section. We abbreviate the proposed method to MSHCO. In the first stage, MSHCO employs the abundance of cloud resources, by pre-offloading all tasks to the cloud considering deadline constraints. For tasks that their deadline constraints can not ensured by the cloud, MSHCO caches their requested services on edges and offloads them to edges. This is the second stage of MSHCO, which exploits limited edge resources for satisfying requirements of latency-sensitive tasks. At the last stage, to take full advantage of edge resources providing low latency network, MSHCO re-offloads pre-offloaded tasks from the cloud to edges, to improve the overall performance of task processing. In the following, we illustrate the details of MSHCO. Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref> gives some symbols used in following illustrations.</ns0:p><ns0:p>As shown in Algorithm 2, in the first stage, MSHCO makes a decision for pre-offloading tasks to the cloud. For each task to be offloaded, MSHCO examines whether the task can be finished before its deadline by a CS that has been rented from the cloud (lines 3-4). If there is one such CS, MSHCO pre-offloads the task to the CS (lines 5-6), and repeats these steps (of lines 3-6) for the next task (line 7).</ns0:p><ns0:p>Otherwise, MSHCO tries to rent a new CS for the task (lines 8-12). MSHCO finds a CS type with the resource configuration that satisfies the task's requirements, and rents a new CS with the type (line 11).</ns0:p><ns0:p>Then, MSHCO pre-offloads the task to the new CS (line 12). If no rented CS or CS type can be found for finishing the task within its deadline, the task can not be offloaded to the cloud, and MSHCO examines the next task with previous procedures. After executing Algorithm 2, we achieve the CSs needed to be rented from the cloud, and the tasks pre-offloaded to each CS, stored in V (line 13).</ns0:p><ns0:p>The second stage of MSHCO is illustrated in Algorithm 3. For each task that is not pre-offloaded to the cloud (line 1), MSHCO finds an ES that can receive the request and has enough resources for Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Algorithm 2 Stage 1: pre-offloading tasks to the cloud</ns0:p><ns0:p>Input: tasks to be offloaded, T; CS types provided by the cloud, VT; Output: rented CSs for satisfying requirements of pre-offloaded tasks, and a task pre-offloading strategy in the cloud, V; tasks whose requirements cannot satisfied by the cloud, T; 1: for each t ∈ T do 2:</ns0:p><ns0:formula xml:id='formula_32'>for each v ∈ V do 3:</ns0:formula><ns0:p>calculating the finish time f t of t if it is offloaded to v based on Eq. ( <ns0:ref type='formula' target='#formula_18'>12</ns0:ref>)-( <ns0:ref type='formula' target='#formula_21'>15</ns0:ref> calculating the finish time f t of t if it is offloaded to a CS with type vt based on Eq. ( <ns0:ref type='formula' target='#formula_18'>12</ns0:ref>)-( <ns0:ref type='formula' target='#formula_21'>15</ns0:ref> same to lines 5 -7; 13: return V, T; processing the task. There are two situations when offloading a task to an ES, the requested service has or not cached on the ES. 
When the requested service has cached on the ES, it is only needed to examine whether the task can be finished within its deadline using one of communication channels (lines 3-8). If so, the task is offloaded to the ES (lines 6-8), and otherwise, the task's requirements cannot be satisfied by edge resources. When the requested service has not cached for the current examined task, MSHCO sees whether the ES has enough storage space for caching the requested service (line 9). If the ES has enough storage space and the task's deadline constraint can be met by the ES (line 12), the requested service will be cached on the ES (line 13), and the task is offloaded to the ES (line 14). After execution, Algorithm 3 provides a task offloading solution and a service caching solution in edges.</ns0:p><ns0:p>Benefiting from Algorithms 2 and 3, we have a strategy for jointly task offloading and service caching on the edge-cloud computing system. Because we first exploit cloud resources for task offloading, some tasks pre-offloaded to the cloud can be processed by edge resources. Usually edge resources provide a better network performance than cloud resources. Thus, MSHCO re-offloads these tasks from the cloud to edges in the last stage to improve the overall performance of task processing, as shown in Algorithm 4. In this stage, MSHCO examines whether each task having been pre-offloaded to the cloud can be processed by ESs (lines 1-2). The examining procedures are identical to Algorithm 2, except that when a task can be offloaded to an ES, the task will be removed from the CS that the task is pre-offloaded to (lines 9 and 16).</ns0:p><ns0:p>The main advantage of our method is that all tasks whose requirements can be satisfied by the cloud are pre-offloaded to the cloud. This can result in more edge resources for finishing more delay-sensitive tasks, compared with other methods that offload tasks to the cloud only when edge resources are exhausted.</ns0:p><ns0:p>Here, we give an example to illustrate the advantage of our method. Considering a simple edge-cloud for each e ∈ t.E do 3: if t.s ∈ e.S then 4:</ns0:p><ns0:p>for each channel of e, m =1 to e.N do 5:</ns0:p><ns0:p>calculating the finish time f t of t if it is offloaded to e at m th channel based on Eq. ( <ns0:ref type='formula' target='#formula_12'>6</ns0:ref>)-( <ns0:ref type='formula' target='#formula_16'>10</ns0:ref> Algorithm 4 Stage 3: re-offloading tasks from the cloud to edges Input: the service caching and task (pre-)offloading strategy in the cloud and edges, V and E; Output: An improved joint service caching and task offloading strategy, updated V and E.</ns0:p><ns0:p>1: for each v ∈ V do 2:</ns0:p><ns0:p>for each t ∈ v.T do 3:</ns0:p><ns0:p>for each e ∈ t.E do 4: if t.s ∈ e.S then 5:</ns0:p><ns0:p>for each channel of e, m =1 to e.N do 6:</ns0:p><ns0:p>calculating the finish time f t of t if it is offloaded to e at m th channel based on Eq. ( <ns0:ref type='formula' target='#formula_12'>6</ns0:ref>)-( <ns0:ref type='formula' target='#formula_16'>10</ns0:ref> same to lines 8 -10; 17: return E, V; environment, there are 4 tasks (t 1 , t 2 , t 3 , and t 4 ), 1 ES, and 1 CS type. The input data amount and the required computing size of t 1 and t 2 are 1 MB and 20 MHz, respectively. The input data amount and the required computing size of t 3 and t 4 are 2 MB and 40 MHz, respectively. The deadline of each task is set as 110 ms. All of services requested by these tasks are all deployed on the ES. 
The communication and computing capacities of the ES are 100MB/s and 2GHz, respectively. The network and computing capacities configured by the CS type are 10MB/s and 2GHz, respectively. The time consumed by t 1 or t 2 (t 3 or t 4 ) is 20 ms (40 ms) when it is offloaded to the ES, and 110 ms (220 ms) when offloaded to the cloud. In this case, if we offload tasks to the ES first, t 1 , t 2 , and t 3 are offloaded to ES with finish time of 80 ms, and t 4 is rejected. But by MSHCO, t 1 and t 2 are pre-offloaded to two CS rented from the cloud, at the first stage. t 3 and t 4 are offloaded to the ES at the second stage, and thus all tasks can be finished within their respective deadlines. We can see that MSHCO can satisfy requirements of more tasks. The third stage only changes the locations where some tasks are offloaded, and has no effect on which tasks are offloaded. Therefore, the user satisfaction is not changed by the third stage.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Algorithmic Complexity Analysis</ns0:head><ns0:p>In the first stage, MSHCO visits each rented CSs and CS types for each task. Thus, this stage has Manuscript to be reviewed Computer Science is very limited, and can be considered as a small constant. Thus, the time complexity of the first stage is O(T * V ). In the second stage, for a task, MSHCO searches all channels of ESs having connection with its device, and examines whether its requested service has been cached. In general, there are only a few edges connecting its device for each task. Therefore, the time complexity of the second stage is O(T * S). Similar to the second stage, the third stage has a time complexity of O(T * S) , too. Thus, in overall, the time complexity of MSHCO is (T * (V + S)), which is increased linearly with the numbers of tasks, rented CSs, and services in the edge-cloud computing system.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>PERFORMANCE EVALUATION</ns0:head><ns0:p>In this section, we evaluate the performance of MSHCO by conducting extensive simulated experiments.</ns0:p><ns0:p>The simulated system is established referring to recent related works <ns0:ref type='bibr' target='#b31'>(Zhang et al., 2021a;</ns0:ref><ns0:ref type='bibr' target='#b4'>Dai et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b28'>Xia et al., 2022)</ns0:ref> and CS types provided by Amazon EC2 <ns0:ref type='bibr' target='#b0'>(Amazon.com, 2022)</ns0:ref>. Specifically, in the simulated system, there are 10 ESs and 1 CS type. Each ES has 20 GHz computing capacity, 10 communication channels, and 300 GB storage capacity. The communication capacity of each communication channel is set as 60 Mbps. The CS type has the configurations of 5 GHz computing capacity and 15 Mbps network transfer capacity, with the price of $ 0.1 per hour. There are 1000 tasks in the simulated system.</ns0:p><ns0:p>Each task randomly requests one of 100 services. The input data amount and required computing resource size of a task is randomly set in the ranges of <ns0:ref type='bibr'>[1.5, 6]</ns0:ref> MB and [0.5, 1.2] GHz. The deadlines of tasks are randomly set in range [1, 5] seconds. For each task, its device has a connection with randomly selected one ES. The storage space required by a service is randomly set in the range of <ns0:ref type='bibr'>[40,</ns0:ref><ns0:ref type='bibr'>80]</ns0:ref> GB.</ns0:p><ns0:p>To show the advantage of our method, we select the following classical and recently published works for performance comparison.</ns0:p><ns0:p>• First Fit (FF) is one of the classical and most widely used methods in various computing systems.</ns0:p><ns0:p>FF iteratively offloads each task to the ES/CS that can satisfy all requirements of the task.</ns0:p><ns0:p>• First Fit Decreasing (FFD) is identical to FF except that FFD makes offloading decision for the task with maximal computing resource size in each iteration.</ns0:p><ns0:p>• Earliest Deadline First (EDF) is one of deadline-aware method. EDF iteratively makes offloading decision for the task with earliest deadline.</ns0:p><ns0:p>• Popularity-based Service Caching (PSC) is the basis idea exploited by <ns0:ref type='bibr' target='#b23'>Wei et al. (2021)</ns0:ref>. PSC caches the service with maximal requests on ESs. For offloading decisions, it uses FF strategy.</ns0:p><ns0:p>• Approximation algorithm for Constrained Edge Data Caching (CEDC-A) <ns0:ref type='bibr' target='#b27'>(Xia et al., 2021b)</ns0:ref> caches a service on an ES, such that the caching solution provides maximum benefit. The benefit is quantified based on the number of network hops. For our comparison, we set the benefit value as the accumulated slack time for each caching solution, where the slack time is the different between the deadline and the finish time for a task.</ns0:p><ns0:p>We evaluate the performance of these methods in the following aspects. For following each metric, a higher value is better.</ns0:p><ns0:p>• User satisfaction strongly affects the income and the reputation of service providers. We use three metrics for its quantification, the number of finished tasks 2 , the computing resource size of finished tasks, and the processed input data amount of finished tasks.</ns0:p><ns0:p>• Resource efficiency is closely related to the cost of service provision. 
We use the computing resource utilization for quantifying the resource efficiency.</ns0:p><ns0:p>• Processing efficiency is representing the workload processing rate, which can determine the user satisfaction and the resource efficiency at a large extent. We use two metrics for the quantification, the computing rate and the data processing rate, which are the size of computing finished per unit time and the amount of input data processed per unit time, respectively. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>For conducting the experiment, we randomly generate 100 edge-cloud system instances. For each instance, we achieve a set of performance values for each method, and scale each performance value of a method by that of FF for each metric. For example, Table <ns0:ref type='table' target='#tab_8'>3</ns0:ref> shows a part of the experiment data in the metric of finished task number. In the following, we report the average relative value for each metric. The data format: the number of finished tasks (the value scaled by that of FF)</ns0:p><ns0:p>In addition, we compare the solution solved by our method with the optimal solution in simulated mini edge-cloud systems, which is illustrated in the end of this section.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Comparison in User Satisfaction</ns0:head><ns0:p>Fig. <ns0:ref type='figure' target='#fig_8'>2</ns0:ref> shows the overall user satisfaction achieved by various methods in various performance metrics. As shown in the figure, we can see that MSHCO has 9.94%-95.6%, 9.85%-97.8%, and 7.97%-155% better overall user satisfaction, compared with other methods, respectively in the number, the computing size, and the input data amount of finished tasks. This result verifies that our proposed method has good effect on optimizing the user satisfaction. The main reason is that our method prioritizes the use of abundant cloud resources for processing tasks. This can avoid the situation that tasks whose requirements can be satisfied by the cloud occupy limited edge resources at first, and thus leaves more edge resources for latency-sensitive tasks. While other methods employ cloud resources only after exhausting all edge resources resources. Thus, in most of time, the improvement of our method in edges is greater than that in the cloud in optimizing user satisfaction. For example, compared with PSC, MSHCO has 14.8%, 15.1%, and 10.3% greater values in three user satisfaction metrics in edges, respectively, as shown in Fig. <ns0:ref type='figure' target='#fig_9'>3</ns0:ref>. But MSHCO has 32.3%, 32.4%, and 35.0% greater values in these three metrics in the cloud, respectively, as shown in Fig. <ns0:ref type='figure' target='#fig_11'>4</ns0:ref>. A similar result can be seen from Fig. <ns0:ref type='figure' target='#fig_9'>3</ns0:ref> and Fig. <ns0:ref type='figure' target='#fig_11'>4</ns0:ref>, by comparing MSHCO with FF, FFD, or EDF. While, there is an opposite behaviour for MSHCO vs. CEDC-A. MSHCO has about 1370% greater value in edges but only about 0.974% greater value in the cloud, in each user satisfaction metric, as shown in Fig. <ns0:ref type='figure' target='#fig_9'>3</ns0:ref> and Fig. <ns0:ref type='figure' target='#fig_11'>4</ns0:ref>. This is because CEDC-A offloads too less tasks to edges, compared with other methods. This can lead to more remaining tasks with requirements that can be satisfied by the cloud after conducting task offloading on edges. Meantime, MSHCO re-offloads some tasks from the cloud to edges for improving the overall performance of task processing, which results in more tasks offloaded to edge but less tasks offloaded to the cloud. Tables <ns0:ref type='table' target='#tab_10'>4, 5</ns0:ref>, and 6 show the statistical data of user satisfactions achieved by various methods in the three metrics, respectively. From these tables, we can see that MSHCO achieves the greatest minimal value for every metric. This means that MSHCO achieves the best user satisfaction any time. This further confirms the superior performance of our method in optimizing user satisfaction. The main benefit of MSHCO is the idea of making offloading decisions by three stages, which can be applied to any task scheduling method to improve its performance for edge-cloud computing. For a task scheduling method, the use of three-stage idea ensures a performance no worse than the scheduling method used alone. This is mainly because the first stage guarantees all tasks can be finished if their requirements can be satisfied by the cloud, without any edge resources. This enables the limited edge resources to finish tasks that 327 cannot be satisfied by the cloud at first, in the second stage. This can lead to more finished tasks that can </ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Comparison in Resource Efficiency</ns0:head><ns0:p>Fig. <ns0:ref type='figure'>5</ns0:ref> shows the relative overall resource utilizations in the edge-cloud system when employing various methods for task offloading and service caching. From this figure, we can see that our method achieves 0.21%,0.21%, 0.522%, 16.9%, and 56.1% higher resource utilizations than FF, FFD, EDF, PSC, and CEDC-A, respectively. Thus, in overall, MSHCO can achieve a better resource efficiency. More finished tasks is helpful for improving the resource utilization, because the data transfer can be performed in parallel, which is benefit for reducing the idle computing time caused by waiting the input data. Thus, as our method achieves more finished tasks than other methods, our method has a high probability in achieving better resource utilization.</ns0:p><ns0:p>In fact, the computing resource utilization is decided by the idle computing time of ESs and CSs, which is related to the relative time of the input data transfer and the task computing. Due to much poor network performance of the cloud, the resource utilization achieved in edges is usually much better than that in the cloud. In our experiment results, edge resources has more than twice utilization than cloud resources in most of time. This results in a better opportunity for improving resource utilization for the cloud than for edges. Thus, compared with FF, FFD, and EDF, MSHCO achieves negligibly lower resource utilization in edges, but about 4.8% higher in the cloud, as shown in Fig. <ns0:ref type='figure'>6</ns0:ref> why CEDC-A has much lower resource utilization in edges is mainly because it offloads too less tasks to ESs. This can give rise to that not all of communication channels of ESs are used for offloaded tasks most of time, which results in a great percentage of time that is spent on waiting input data transfer for task processing.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>Comparison in Processing Efficiency</ns0:head><ns0:p>Fig. <ns0:ref type='figure' target='#fig_13'>8</ns0:ref> gives the relative overall task processing efficiency achieved by the various methods. From the figure, we can see that MSHCO achieves 9.49%-97.7% and 7.06%-155% better processing efficiency in computing and data processing, respectively, compared with the other methods. The rate of computing (or data processing) is the ratio between the accumulated computing size (or processed data amount) and the latest finish time of finished tasks. Thus, the processing efficiency is mainly determined by the speedup of parallel task executions. In general, the resource utilization reflects the speedup to a large extent. Thus, we can deduce that MSHCO has a greater speedup because it has a higher resource utilization in the edge-cloud system than the other methods. Therefore, MSHCO has better overall processing efficiency.</ns0:p><ns0:p>In edges, an ES provides 10 communication channels for receiving the input data transfers of offloaded tasks. Thus, even though an increased number of offloaded tasks helps to improve the speedup of parallel executions in edges, it aggravates the scarcity of computing resources, especially when the computing resources are the bottleneck. In this case, the latest finish time of offloaded tasks can be postponed, as there is more waiting time for input data transfers. In contrast, each CS receives input data through only one NIC, so this issue is less serious in the cloud than in an edge. Therefore, in general, the improvement of processing efficiency in edges is smaller than that in the cloud, as shown in Fig. <ns0:ref type='figure' target='#fig_14'>9 and 10</ns0:ref>. For example, compared with EDF, MSHCO achieves a 3.32% faster computing rate in edges but 16.3% in the cloud.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.4'>Comparison with Optimal Solution</ns0:head><ns0:p>To compare our method with the exhaustive method that provides the optimal solution, we generate a mini-scale edge-cloud system. The mini-scale system consists of one ES and one CS type, and provides a varied number of tasks. The number of tasks in the u-th group of experiments is 8 + 2 * u. The other parameters of the system are the same as in the previous experiment. Each group of experiments is repeated 11 times, and we report the average relative performance of our method to the exhaustive method in the number of finished tasks. The result is shown in Fig. <ns0:ref type='figure' target='#fig_0'>11</ns0:ref>.</ns0:p><ns0:p>As shown in Fig. <ns0:ref type='figure' target='#fig_0'>11</ns0:ref>, our method finishes 14.0%-23.2% fewer tasks than the exhaustive method, and the overall trend is that the performance gap between MSHCO and the exhaustive method decreases as the scale of the system increases. Thus, we could argue that our method can achieve a nearly optimal solution in a large-scale edge-cloud computing system.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>RELATED WORKS</ns0:head><ns0:p>Edge-cloud computing is an efficient way to address the scarcity of edge and device resources and the high latency of cloud resources. The resource efficiency of edge-cloud computing can be improved by a well-designed task offloading strategy. Thus, in recent years, several works have focused on the design of task offloading methods. For example, Liu et al. <ns0:ref type='bibr' target='#b12'>(Liu et al., 2021)</ns0:ref> modelled the task offloading problem in edge-cloud computing as a Markov Decision Process (MDP), and applied deep reinforcement learning to achieve an offloading solution. <ns0:ref type='bibr' target='#b18'>Wang et al. (Wang et al., 2022)</ns0:ref> presented an integer particle swarm optimization-based task offloading method for optimizing Service-Level Agreement (SLA) satisfaction.</ns0:p><ns0:p>Sang et al. <ns0:ref type='bibr' target='#b16'>(Sang et al., 2022)</ns0:ref> proposed a heuristic algorithm for the task offloading problem in device-edge-cloud cooperative computing environments with the same objective as our work. These works assumed that all services can be provided by an edge, which does not match reality. In the real world, each edge has very limited storage capacity, and thus only a few services can be deployed on an edge. Therefore, an efficient task offloading algorithm must also consider the service caching problem, i.e., deciding which services are deployed on each edge when making offloading decisions.</ns0:p><ns0:p>Focusing on the improvement of task processing performance in edge-cloud computing, several</ns0:p></ns0:div>
<ns0:div><ns0:p>works have studied the service caching problem. Wei et al. <ns0:ref type='bibr' target='#b23'>(Wei et al., 2021</ns0:ref><ns0:ref type='bibr' target='#b22'>; Wei et al., 2020)</ns0:ref> proposed a popularity-based caching method based on content similarity. They first computed the popularity of contents based on the historical information of user requests and content similarity, and then replaced the most unpopular contents with the most popular ones when there is no available storage space in edges.</ns0:p><ns0:p>To simplify the edge-cloud system, they assumed that all dynamic parameters follow a Poisson distribution in both their model and their simulated experiments. This can lead to unguaranteed performance in practice and unreliable experiment results. <ns0:ref type='bibr'>Wang et al. (Wang et al., 2020)</ns0:ref> exploited a Markov chain to predict the cached data to improve the hit ratio. Xia et al. <ns0:ref type='bibr' target='#b29'>(Xia et al., 2021c)</ns0:ref> studied the data caching problem for optimizing the overall service overhead. They modelled the problem as a time-averaged optimization. To solve it, they first transformed the optimization model into a linear convex optimization, and then applied Lyapunov optimization technology. Xia et al. <ns0:ref type='bibr' target='#b27'>(Xia et al., 2021b)</ns0:ref> proposed an approximation algorithm for edge data caching (CEDC-A). CEDC-A first chooses the solution of caching a data item on an ES with the maximum benefit, and then iteratively caches the data with the maximum benefit on that ES. In these two works, the network latency is quantified by the number of hops, while in practice the real latency is not positively associated with the hop number. All of the above works aimed at optimizing the data access latency by designing an edge caching strategy. However, the task processing performance is determined not only by the data access latency, but also by the computing efficiency. Thus, task offloading must be considered, complementary to edge caching.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>CONCLUSION</ns0:head><ns0:p>In this work, we study the joint service caching and task offloading problem for edge-cloud computing.</ns0:p><ns0:p>We first formulate the problem as a MINLP with the objectives of maximizing user satisfaction and resource efficiency, which is proven to be NP-hard. Then, we propose a polynomial-time algorithm with three stages to solve the problem. The basic idea of our approach is to first exploit abundant cloud resources and low-latency edge resources to satisfy as many requirements as possible, in the first two stages, respectively.</ns0:p><ns0:p>Then, edge resources are fully used for improving the overall performance in the last stage by re-offloading tasks from the cloud to edges. Simulated experiments are conducted, and the results verify that our method has better performance in user satisfaction, resource efficiency and processing efficiency, compared with five classical and up-to-date methods.</ns0:p><ns0:p>In this paper, we focus on the decisions of both task offloading and service caching, without considering caching replacement. In practice, our method can be applied in collaboration with existing caching replacement approaches. However, the collaboration efficiency should be studied, which is one of our future works. In addition, we will also try to design an efficient caching replacement strategy based on predicting user preferences and user similarity, to improve the resource efficiency and processing performance of edge-cloud computing systems.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>to represent offloading decisions and communication channel 1 The device of a task means the device that launches the request task 3/19 PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:72005:1:2:NEW 24 May 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2022:03:72005:1:2:NEW 24 May 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Symbol Description TThe set of tasks {t i |i = 1, ..., T }. Each task, say t, has following attributes: the required computing resource size (t. f ), the input data amount (t.a), the deadline (t.d), the requested service (t.s), the set of edge servers having network connections with its device (t.E).SThe set of services {s l |l = 1, ..., S}. Each service has one attribute, which is its required storage space, s.b.E The set of ESs {e j | j = 1, ..., E}. Each ES has following attributes: the computing capacity (e.g), the number of communication channels (e.N), the communication channel capacity (e.w), the set of tasks offloaded to the ES with each channel (e.T m , m = 1, ..., e.N), the set of services cached on the ES (e.S). VT The set of CS types, {vt o |o = 1, ...,V T }, provided by the cloud. Each type has following configuration parameter: the computing capacity (vt.g), the network transmission capacity (vt.w), the price per unit time (vt.p). V The set of rented CSs {v k |k = 1, ...,V }. Each CS has following attributes: the configuration type (v.vt), the set of tasks offloaded to the CS (v.T).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>← v.T ∪ {t}; //pre-offloading t to the rented CS 6:T ← T − {t}; //removing t from un-offloaded task</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2022:03:72005:1:2:NEW 24 May 2022) Manuscript to be reviewed Computer Science Algorithm 3 Stage 2: offloading tasks to edges Input: tasks that are not pre-offloaded to the cloud, T; services, S; edge servers, E; Output: A joint service caching and task offloading strategy in ESs, {e.T m , e.S | e ∈ E, m = 1, ..., e.N}. 1: for each t ∈ T do 2:</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>O(T * (V +V T )) time complexity at worst. In real word, the number of CS types provided by the cloud 9/19 PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:72005:1:2:NEW 24 May 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>2 A finished task means the task's requirements are met in the computing system.10/19PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:72005:1:2:NEW 24 May 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. The overall user satisfaction achieved by various methods.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The user satisfaction achieved by various methods in edges.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>(a) Relative number of finished tasks (b) Relative computing size of finished tasks (c) Relative amount of processed input data</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The overall user satisfaction achieved by various methods in the cloud.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 5 .Figure 6 .Figure 7 .</ns0:head><ns0:label>567</ns0:label><ns0:figDesc>Figure 5. The relative overall resource utilization achieved by various methods.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. The relative overall processing efficiency achieved by various methods.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. The relative processing efficiency achieved by various methods in edges.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 10 .Figure 11 .</ns0:head><ns0:label>1011</ns0:label><ns0:figDesc>Figure 10. The relative processing efficiency achieved by various methods in the cloud.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b2'>Bi et al. Bi et al. (2020)</ns0:ref> studied on the joint service caching and computation offloading for a mobile user allocated to an edge server. They modelled the problem into a mixed integer non-linear programming, and transformed it into a binary integer linear programming. To solve the problem with a reduced complexity, they designed an alternating minimization technique by exploiting underlying structures of caching causality and task dependency. To optimize the overall utility, Ko et al.<ns0:ref type='bibr' target='#b11'>(Ko et al., 2022)</ns0:ref> modelled the processing of user requests as a stochastic process, and proposed to apply water filling and channel inversion for the network resource allocation for users with same service preference. Then they applied Lagrange Multiplier to iteratively solve computation offloading and service caching problem for users with heterogeneous preference. In this work, an edge is assumed to have only one time division multiplexing communication channel. Tain et al.<ns0:ref type='bibr' target='#b17'>(Tian et al., 2021)</ns0:ref> modelled the service caching problem as an MDP, and solved the MDP by combining double deep Q-network (DQN) and Dueling-DQN. Xia et al.<ns0:ref type='bibr' target='#b28'>(Xia et al., 2022)</ns0:ref> proposed a two-phase game-theoretic algorithm to provide a strategy of data caching and task offloading. In the first phase, to serve the most users, they iteratively solved the data caching strategy with corresponded user and channel power allocation strategy. In the second phase, they tried to allocate more channel power for each user, for optimizing the overall data transfer rate. . These above works didn't concern the allocation of computing resources. Zhang et al.<ns0:ref type='bibr' target='#b31'>(Zhang et al., 2021a)</ns0:ref> focused on the joint problem of service caching, computation Offloading and resource allocation, for optimizing the weight cost of computing and network latency. They formulated the problem as a quadratically constrained quadratic program, and used semidefinite relaxation for addressing the problem. All of these above existing works consider the use of cloud resources only when edge resources are exhausted for offloading tasks. This can lead to an underutilized abundance of cloud resources. Thus, our work tries to exploit the heterogeneity of edges and clouds to improve the cooperation between them, and aiming at providing a joint strategy of service caching, computing offloading, computing resource allocation, and communication channel assignment, for user satisfaction and resource efficiency optimizations.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Notation Description</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Notation Description</ns0:cell></ns0:row><ns0:row><ns0:cell>T</ns0:cell><ns0:cell>The number of tasks.</ns0:cell></ns0:row><ns0:row><ns0:cell>t i</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>4/19 PeerJ</ns0:head><ns0:label /><ns0:figDesc /><ns0:table /><ns0:note>Comput. Sci. reviewing PDF | (CS-2022:03:72005:1:2:NEW 24 May 2022)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The symbols used in algorithm illustrations</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The finished task number achieved by various methods in several system instances</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Time</ns0:cell><ns0:cell>FF</ns0:cell><ns0:cell>FFD</ns0:cell><ns0:cell>EDF</ns0:cell><ns0:cell>PSC</ns0:cell><ns0:cell>CEDC-A</ns0:cell><ns0:cell>MSHCO</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell cols='6'>192 (1.000) 192 (1.000) 191 (0.995) 173 (0.901) 119 (0.620) 206 (1.073)</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell cols='6'>174 (1.000) 174 (1.000) 174 (1.000) 154 (0.885) 85 (0.489) 186 (1.069)</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell cols='6'>186 (1.000) 186 (1.000) 191 (1.027) 182 (0.978) 108 (0.581) 210 (1.129)</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell cols='6'>178 (1.000) 178 (1.000) 183 (1.028) 164 (0.921) 110 (0.618) 208 (1.169)</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell cols='6'>176 (1.000) 176 (1.000) 179 (1.017) 155 (0.881) 98 (0.557) 190 (1.080)</ns0:cell></ns0:row><ns0:row><ns0:cell>6</ns0:cell><ns0:cell cols='6'>193 (1.000) 193 (1.000) 194 (1.005) 172 (0.891) 111 (0.575) 225 (1.166)</ns0:cell></ns0:row><ns0:row><ns0:cell>7</ns0:cell><ns0:cell cols='6'>185 (1.000) 185 (1.000) 183 (0.990) 176 (0.951) 106 (0.573) 202 (1.092)</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell cols='6'>198 (1.000) 198 (1.000) 201 (1.015) 178 (0.899) 115 (0.581) 231 (1.167)</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell cols='6'>172 (1.000) 172 (1.000) 175 (1.017) 155 (0.901) 96 (0.558) 187 (1.087)</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell cols='6'>185 (1.000) 185 (1.000) 184 (0.995) 174 (0.941) 107 (0.578) 199 (1.076)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>...</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='7'>100 175 (1.000) 175 (1.000) 174 (0.994) 168 (0.960) 98 (0.560) 193 (1.103)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The statistical data of the relative finished task number</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='2'>FF FFD</ns0:cell><ns0:cell>EDF</ns0:cell><ns0:cell>PSC</ns0:cell><ns0:cell cols='2'>CEDC A MSHCO</ns0:cell></ns0:row><ns0:row><ns0:cell>MAXIMUM</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell cols='2'>1.0514 0.9821</ns0:cell><ns0:cell>0.6344</ns0:cell><ns0:cell>1.2216</ns0:cell></ns0:row><ns0:row><ns0:cell>AVERAGE</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell cols='2'>1.0049 0.9009</ns0:cell><ns0:cell>0.5647</ns0:cell><ns0:cell>1.1048</ns0:cell></ns0:row><ns0:row><ns0:cell>MINIMUM</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell cols='2'>0.9677 0.8305</ns0:cell><ns0:cell>0.4859</ns0:cell><ns0:cell>1.0452</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>The statistical data of the relative computing size of finished tasks</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='2'>FF FFD</ns0:cell><ns0:cell>EDF</ns0:cell><ns0:cell>PSC</ns0:cell><ns0:cell cols='2'>CEDC A MSHCO</ns0:cell></ns0:row><ns0:row><ns0:cell>MAXIMUM</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1.0544</ns0:cell><ns0:cell>0.992</ns0:cell><ns0:cell>0.6477</ns0:cell><ns0:cell>1.2314</ns0:cell></ns0:row><ns0:row><ns0:cell>AVERAGE</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell cols='2'>1.0042 0.8983</ns0:cell><ns0:cell>0.5576</ns0:cell><ns0:cell>1.1031</ns0:cell></ns0:row><ns0:row><ns0:cell>MINIMUM</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell cols='2'>0.9661 0.8227</ns0:cell><ns0:cell>0.4771</ns0:cell><ns0:cell>1.0363</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>The statistical data of the relative amount of processed data</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='2'>FF FFD</ns0:cell><ns0:cell>EDF</ns0:cell><ns0:cell>PSC</ns0:cell><ns0:cell cols='2'>CEDC A MSHCO</ns0:cell></ns0:row><ns0:row><ns0:cell>MAXIMUM</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1.048</ns0:cell><ns0:cell>1.0776</ns0:cell><ns0:cell>0.5149</ns0:cell><ns0:cell>1.1628</ns0:cell></ns0:row><ns0:row><ns0:cell>AVERAGE</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell cols='2'>0.9957 0.9116</ns0:cell><ns0:cell>0.4219</ns0:cell><ns0:cell>1.0751</ns0:cell></ns0:row><ns0:row><ns0:cell>MINIMUM</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell cols='2'>0.9141 0.8084</ns0:cell><ns0:cell>0.3446</ns0:cell><ns0:cell>1.0333</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Response to Reviewers’ Comments
We thank the editor and reviewers for their comments concerning our manuscript entitled “A Multi-Stage Heuristic Method for Service Caching and Task Offloading to Improve the Cooperation between Edge and Cloud Computing”. These comments are all valuable and very helpful for revising and improving our paper, and they provide important guidance for our research. We have studied the comments carefully and made corrections that we hope will meet with approval. Our responses to the editor’s and reviewers’ comments are as follows.
Editor
[Q1] The paper is very good and there is contribution in this article, however many things must
be clarified; different variables with multiple letters must be explained very well, ...
[A1] Thanks for the suggestion. We have improved the definition of variables with multiple letters,
and each of them is now defined with a single letter in the revised manuscript.
[Q2] ... and tables, figures, and the analysis on the structure of MSHCO should also be better
explained...
[A2] Thanks for the comment. We have added more explanations of our tables and figures and of the structural analysis of MSHCO. We hope these meet with approval.
[Q3] The language must also be improved.
[A3] Thank you for the comment. We have carefully proofread the paper and improved its presentation.
Reviewer 1
[Q4] The authors defined different variables with multiple letters, i.e., in_i, map_{i,j}, bw_j, nft_i,
etc, which are confusing. Commonly, ‘in_i’ in an equation means ‘i’ * ‘n_i’. Therefore, try to define
a variable with a single letter.
[A4] Thanks for the suggestion. We have improved the definition of variables with multiple letters,
and each of them is now defined with a single letter in the revised manuscript.
[Q5] There are multiple variables used in this manuscript. Adding a notation table to list all
symbols/notations could improve its readability.
[A5] Thanks for the suggestion. We have added a notation table describing all symbols/notations, as
shown in Table 1.
[Q6] There is a highly related work [R1] on service caching and task offloading in MEC, which
should be well cited.
[R1] 'Joint Optimization of Service Caching Placement and Computation Offloading in Mobile
Edge Computing Systems,' IEEE Transactions on Wireless Communications, vol. 19, no. 7, pp.
4947-4963, July 2020.
[A6] Thanks for your suggestion. We have included this related work in the RELATED WORKS
section.
[Q7] With respect to the contribution of this work, since the proposed algorithm MSHCO is a
heuristic algorithm, what is the performance gap between MSHCO and the global optimal algorithm?
The numerical studies only show that MSHCO outperforms some existing benchmarks. However,
we have no idea how good enough is the algorithm, which is essential for potential readers who
want to continue studying this topic. Better provide an upper bound or the exhaustive search optimal.
[A7] Thanks for this suggestion. We have conducted additional experiments in a mini-scale edge-cloud environment to compare the performance of MSHCO with that of the exhaustive search method. The details are presented at the end of the PERFORMANCE EVALUATION section.
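To sketch what the exhaustive baseline does on such a mini-scale instance, a small Python brute-force enumeration is shown below. The feasibility model used here (tasks on the single ES run back to back, and each cloud task gets its own rented CS) is a simplifying assumption for illustration only, not our exact simulator.

from itertools import product

def exhaustive_best(edge_ms, cloud_ms, deadline_ms):
    """Enumerate {edge, cloud, reject} for every task and return the most tasks finishable."""
    best = 0
    for choice in product(("edge", "cloud", "reject"), repeat=len(edge_ms)):
        edge_clock, finished, feasible = 0.0, 0, True
        for i, c in enumerate(choice):
            if c == "edge":
                edge_clock += edge_ms[i]                # single ES, sequential execution
                feasible = edge_clock <= deadline_ms
            elif c == "cloud":
                feasible = cloud_ms[i] <= deadline_ms   # one rented CS per cloud task
            if not feasible:
                break
            if c != "reject":
                finished += 1
        if feasible:
            best = max(best, finished)
    return best

print(exhaustive_best([20, 20, 40, 40], [110, 110, 220, 220], 110))   # 4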
[Q8] This work considers head deadline requirements of tasks, where each task is associated with
a deadline d_i. How do you handle the tasks if the deadline cannot be met? Will these unfinished
tasks resources for the proposed MSHCO? How about other benchmark algorithms evaluated in the
simulations?
[A8] Sorry for the confusion. If the deadline of a task cannot be met, the task is rejected, and thus it does not consume any resources in the edge-cloud system. All algorithms handle tasks with hard deadlines in this way, mainly because processing a task provides no income for the service provider when the task cannot be finished before its hard deadline. A different situation arises when tasks have soft deadline requirements: there is still an income when the soft deadline of a task is violated, but a punishment cost is incurred for that task. Optimizing the revenue of task processing for a service provider, where the revenue is the total income less the total cost (including the resource cost, the punishment cost of soft deadline violations, and so on), is a promising research topic and is one of our future works.
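For illustration, a minimal Python sketch of this rejection rule is given below. The task and server attributes and the finish-time estimate (a single transfer plus computation, with no queuing) are simplified assumptions for illustration, not the exact logic of our simulator.

from dataclasses import dataclass, field

@dataclass
class Task:
    data_mb: float      # input data amount (MB)
    comp_mhz: float     # required computing size (MHz)
    deadline_ms: float  # hard deadline (ms)

@dataclass
class Server:
    bw_mbps: float                           # transfer capacity (MB/s)
    cpu_ghz: float                           # computing capacity (GHz)
    accepted: list = field(default_factory=list)

def finish_time_ms(task: Task, server: Server) -> float:
    # transfer time plus computing time, both converted to milliseconds
    return (task.data_mb / server.bw_mbps + task.comp_mhz / (server.cpu_ghz * 1000.0)) * 1000.0

def try_offload(task: Task, server: Server):
    """Accept the task only if its hard deadline can be met; otherwise reject it."""
    t = finish_time_ms(task, server)
    if t > task.deadline_ms:
        return None                          # rejected: no resources consumed, no income earned
    server.accepted.append(task)             # resources are committed only on acceptance
    return t

# Example: a 2 MB / 40 MHz task with a 110 ms deadline is rejected by a slow CS.
print(try_offload(Task(2, 40, 110), Server(10, 2)))    # None (220 ms > 110 ms)
print(try_offload(Task(2, 40, 110), Server(100, 2)))   # 40.0 ms on a fast ES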
[Q9] In the performance evaluation, besides the comparisons with other benchmarks, the analysis
on the structure of MSHCO is necessary. For example, how do these three stages of the algorithm
affect its performance?
[A9] Thanks for the comments. Based on the experiment results for user satisfaction, we now analyze the main benefit of our algorithm, i.e., the three-stage idea, and explain how this idea allows our method to perform better than the other methods.
Reviewer 2
[Q10] Correct the punctuation error in line 37, used comma two times
[A10] Thank you for pointing out the error. We have corrected the error in our revised manuscript.
[Q11] On line 56, grammar error “But these works have some issues must be address”
[A11] Thank you for pointing out this error. We have corrected it in our revised manuscript.
[Q12] The sentence on line 69, 70, 71, there is single sentence which should be avoided.
[A12] Thank you for pointing this out. We have rewritten it as two shorter and simpler sentences.
[Q13] Line 96 have also grammar problem, should be rectified
[A13] Thanks for pointing it out. We have rectified the problem in the revised manuscript.
[Q14] Please define the sentence on line 199, “and exams the next task”
[A14] Thanks for the comment. We have revised this sentence into “and repeats these steps for
the next task”, to improve the readability.
[Q15] Another grammar mistake on line 199-200-201 “Otherwise, MSHCO tries to rented a new
CS”. “MSHCO rents a new CS with found type”
[A15] Thanks for pointing these mistakes out. We have corrected them.
[Q16] Please also revise sentence on line 217-218.
[A16] Thanks for pointing it out. We have revised this sentence.
[Q17] There are no snapshots regarding experimental work, moreover authors should compare
their results in tabular format.
[A17] Thanks for the comment. For the performance evaluation, we conducted 100 experiments to achieve reliable results. We therefore have 100 sets of values for each metric, and it is unrealistic to present all of the experiment data in the manuscript. Thus, we have added the statistical performance data of the various methods in tabular format, as shown in Tables 4-6. In addition, we have added a portion of the experiment data for the finished-task-number metric, which makes it easier for readers to understand our experiment results.
[Q18] Please also elaborate the comparison of user satisfaction in more detail.
[A18] Thanks for the suggestion. We have added more experiment data and analysis to the comparison of user satisfaction, as shown in “4.1 Comparison in User Satisfaction”.
[Q19] It seems to be a stand alone study without the comparison with any state of the art, please
try to justify your method with a concrete comparison with some similar kind of latest study. The
same should also be discussed in the abstract.
[A19] Thank you for this comment. We have compared our method with Popularity-based Service Caching (PSC) [R2] and with an Approximation algorithm for Constrained Edge Data Caching (CEDC-A) [R3]. Both of these baselines were published in 2021. In addition, to justify our method further, we have compared it with the exhaustive search method in a mini-scale edge-cloud environment.
[R2] X. Wei, J. Liu, Y. Wang, C. Tang, and Y. Hu, 'Wireless edge caching based on content
similarity in dynamic environments,' Journal of Systems Architecture, Volume 115, 2021, 102000.
[R3] X. Xia, F. Chen, J. Grundy, M. Abdelrazek, H. Jin and Q. He, 'Constrained App Data
Caching over Edge Server Graphs in Edge Computing Environment,' IEEE Transactions on
Services Computing, 2021. Doi: 10.1109/TSC.2021.3062017. (Early Access)
[Q20] Please briefly justify the potential benefits of the proposed system.
[A20] Thanks for the suggestion. We have highlighted the main advantage of our method and illustrated it with a simple case in our revised paper, as shown in the following.
“The main advantage of our method is that all tasks whose requirements can be satisfied by the cloud are pre-offloaded to the cloud. This can result in more edge resources for finishing more delay-sensitive tasks, compared with other methods that offload tasks to the cloud only when edge resources are exhausted. Here, we give an example to illustrate the advantage of our method. Consider a simple edge-cloud environment with 4 tasks (t_1, t_2, t_3, and t_4), 1 ES, and 1 CS type. The input data amount and the required computing size of t_1 and t_2 are 1 MB and 20 MHz, respectively. The input data amount and the required computing size of t_3 and t_4 are 2 MB and 40 MHz, respectively. The deadline of each task is set as 110 ms. All services requested by these tasks are deployed on the ES. The communication and computing capacities of the ES are 100 MB/s and 2 GHz, respectively. The network and computing capacities configured by the CS type are 10 MB/s and 2 GHz, respectively. The time consumed by t_1 or t_2 (t_3 or t_4) is 20 ms (40 ms) when it is offloaded to the ES, and 110 ms (220 ms) when offloaded to the cloud. In this case, if we offload tasks to the ES first, t_1, t_2, and t_3 are offloaded to the ES with a finish time of 80 ms, and t_4 is rejected. But with MSHCO, t_1 and t_2 are pre-offloaded to two CSs rented from the cloud at the first stage, and t_3 and t_4 are offloaded to the ES at the second stage, so all tasks can be finished within their respective deadlines. We can see that MSHCO can satisfy the requirements of more tasks. The third stage only changes the locations where some tasks are offloaded, and has no effect on which tasks are offloaded. Therefore, the user satisfaction is not changed by the third stage.”
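To make the arithmetic of this example easy to check, the small Python sketch below reproduces it under the two policies: offloading to the ES first versus pre-offloading cloud-feasible tasks first, as in the first two stages of MSHCO. The sequential execution of tasks on the single ES and the one-rented-CS-per-pre-offloaded-task assumption are simplifications for illustration, not our exact simulator.

ES_BW, ES_CPU = 100.0, 2000.0      # ES: 100 MB/s, 2 GHz (= 2000 MHz)
CS_BW, CS_CPU = 10.0, 2000.0       # CS type: 10 MB/s, 2 GHz
DEADLINE = 110.0                   # ms
TASKS = {"t1": (1, 20), "t2": (1, 20), "t3": (2, 40), "t4": (2, 40)}  # (MB, MHz)

def duration_ms(data_mb, comp_mhz, bw, cpu):
    return (data_mb / bw + comp_mhz / cpu) * 1000.0

def edge_first():
    clock, finished = 0.0, {}
    for name, spec in TASKS.items():
        d_es = duration_ms(*spec, ES_BW, ES_CPU)
        d_cs = duration_ms(*spec, CS_BW, CS_CPU)
        if clock + d_es <= DEADLINE:          # edge resources are used up first
            clock += d_es
            finished[name] = ("ES", clock)
        elif d_cs <= DEADLINE:                # cloud only when the ES cannot help
            finished[name] = ("CS", d_cs)
    return finished                           # t4 is missing: rejected

def cloud_first():
    finished, clock = {}, 0.0
    for name, spec in TASKS.items():          # stage 1: pre-offload cloud-feasible tasks
        d_cs = duration_ms(*spec, CS_BW, CS_CPU)
        if d_cs <= DEADLINE:
            finished[name] = ("CS", d_cs)
    for name, spec in TASKS.items():          # stage 2: the ES serves the remaining tasks
        if name not in finished:
            clock += duration_ms(*spec, ES_BW, ES_CPU)
            if clock <= DEADLINE:
                finished[name] = ("ES", clock)
    return finished                           # all four tasks finish in time

print(edge_first())    # {'t1': ('ES', 20.0), 't2': ('ES', 40.0), 't3': ('ES', 80.0)}
print(cloud_first())   # t1, t2 on CSs at 110 ms; t3, t4 on the ES at 40 ms and 80 ms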
[Q21] Please make sure to cite the references for the equations if they are not owned by you
[A21] Thanks for the suggestion. All of the equations were formulated by ourselves to model the joint service caching and task offloading problem; they are not taken from other works.
[Q22] What are the formation basis and correctness justifications of Algorithm 2 & 3?
[A22] Thanks for the comment. The main contribution of this paper is the three-stage idea for task offloading; we do not propose a novel task scheduling algorithm within the edge or the cloud, but instead use a first-fit scheme. Our method is complementary to the task scheduling method used inside a computing center (the edge or the cloud), and it ensures a performance no worse than that scheduling method used alone. Algorithms 2 and 3 schedule tasks in the cloud and in the edge, respectively, using the first-fit scheme. Certainly, we will study task scheduling within the edge and the cloud in the future, in order to design new methods with better performance.
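As an illustration of the first-fit scheme referred to above, a minimal Python sketch is given below. The single additive "load" per server is an assumed, simplified capacity model; Algorithms 2 and 3 additionally check deadlines, cached services, and communication channels.

def first_fit(task_loads, server_capacities):
    """Assign each task to the first server whose remaining capacity can hold it."""
    remaining = list(server_capacities)
    assignment = {}                              # task index -> server index, or None if unplaced
    for i, load in enumerate(task_loads):
        assignment[i] = None
        for j, capacity in enumerate(remaining):
            if load <= capacity:                 # the first server that fits wins
                remaining[j] = capacity - load
                assignment[i] = j
                break
    return assignment

print(first_fit([4, 7, 3, 5], [10, 8]))          # {0: 0, 1: 1, 2: 0, 3: None}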
[Q23] Because the cloud & edge computing is the hot area of research these days and there are a
lots of test beds available, therefore, authors are strongly recommended to perform a real study
instead of the simulations of proposed model. This way, they will have a better chance to prove the
authenticity of their work.
[A23] Thanks for this suggestion. We are improving our experiment environment and will establish an edge-cloud test bed composed of microcontrollers and public clouds. However, this environment is not yet available for running performance evaluation experiments, mainly because of a lack of funds at present. In this paper, we therefore conduct simulated experiments, as is done in many studies, and the parameters of the simulated environments are set with reference to recent related works. We will evaluate the performance of our proposed methods in real environments in the future to show their efficiency and effectiveness in practical applications.
[Q24] please make sure to remove the language and grammatical errors through out the
manuscript.
[A24] Thank you for the comment. We have carefully proofread the paper and corrected the language and grammatical errors that we found.
" | Here is a paper. Please give your review comments after reading it. |
691 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. Decision-making is an important part of most human activities regardless of their daily activities, profession, or political inclination. Some decisions are relatively simple specifically when the consequences are insignificant while others can be very complex and have significant effects. Real-life decision problems generally involve several conflicting points of view (criteria) needed to be considered and this is the reason recent decision-making processes are usually supported by data as indicated by different data mining techniques. Data mining is the process of extracting data to obtain useful information and a promising and widely applied method is association rule mining which has the ability to identify interesting relationships between sets of items in a dataset and predict the associative behavior for new data. However, the number of rules generated in association rules can be very large, thereby making the exploitation process difficult. This means it is necessary to prioritize the selection of more valuable and relevant rules.</ns0:p><ns0:p>Methods. Therefore, this study proposes a method to rank rules based on the lift ratio value calculated from the frequency and utility of the item. The three main functions in proposed method are mining of association rules from different databases (in terms of sources, characteristics, and attributes), automatic threshold value determination process, and prioritization of the rules produced.</ns0:p><ns0:p>Results. Experiments conducted on 6 datasets showed that the number of rules generated by the adaptive rule model is higher and sorted from the largest lift ratio value compared to the apriori algorithm.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Decision-making is an important part of most human activities regardless of their daily activities, profession, or political inclination. Some decisions are relatively simple specifically when the consequences are insignificant while others can be very complex and have significant effects. Real-life decision problems generally involve several conflicting points of view (criteria) which are needed to be considered in making appropriate decisions <ns0:ref type='bibr' target='#b15'>(Govindan & Jepsen, 2016)</ns0:ref>. Recent decision-making processes can be supported with data analysis because decision-makers are required to make the right strategic choices in this current volatile, uncertain, complex, and ambiguous period which is also known as the VUCA period <ns0:ref type='bibr' target='#b12'>(Giones et al., 2019)</ns0:ref>. Data mining is a method of extracting information and patterns stored in data <ns0:ref type='bibr' target='#b27'>(Luna et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b29'>Pan et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b30'>Prajapati et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b32'>Ryang & Yun, 2015;</ns0:ref><ns0:ref type='bibr' target='#b35'>Selvi & Tamilarasi, 2009</ns0:ref>; C. <ns0:ref type='bibr' target='#b42'>Zhang & Zhang, 2002)</ns0:ref> and its output can be used to support the decision-making process. It can be applied to both internal and external data due to the possibility of accessing data from anywhere, at any time, and from different sources <ns0:ref type='bibr' target='#b27'>(Luna et al., 2019)</ns0:ref>. The most basic and widely applied concept in data mining is the association rule <ns0:ref type='bibr' target='#b7'>(Dahbi et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b9'>Duong et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b25'>Lin et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b27'>Luna et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b30'>Prajapati et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b32'>Ryang & Yun, 2015;</ns0:ref><ns0:ref type='bibr' target='#b35'>Selvi & Tamilarasi, 2009;</ns0:ref><ns0:ref type='bibr' target='#b41'>Weng & Chen, 2010</ns0:ref>; C. <ns0:ref type='bibr' target='#b42'>Zhang & Zhang, 2002;</ns0:ref><ns0:ref type='bibr' target='#b43'>S. Zhang & Wu, 2011)</ns0:ref> which has been discovered to be very important in determining and identifying interesting relationships between sets of items in a dataset and also to predict the association relationships for new data <ns0:ref type='bibr' target='#b40'>(Vu & Alaghband, 2014;</ns0:ref><ns0:ref type='bibr' target='#b41'>Weng & Chen, 2010</ns0:ref>; S. <ns0:ref type='bibr' target='#b43'>Zhang & Wu, 2011)</ns0:ref>. The basic concept in association rules is to generate rules based on items occurring frequently in transactions and this normally involves two main processes which include the determination of frequent itemset and the process of forming the rule itself. A frequent itemset is a collection of items occurring more frequently than the threshold value or minimum support specified in the transaction. The association rule looks very simple but has several challenges in its practical application ranging from the usage of very large, multiple, and heterogeneous data sources to the difficulty in the process of determining the minimum support (S. <ns0:ref type='bibr' target='#b43'>Zhang & Wu, 2011)</ns0:ref>. 
Moreover, the number of rules generated can be very large, thereby making the exploitation process to be difficult <ns0:ref type='bibr' target='#b11'>(El Mazouri et al., 2019)</ns0:ref>. This means it is necessary to prioritize the selection of more valuable and relevant rules to be used in the process <ns0:ref type='bibr' target='#b5'>(Choi et al., 2005)</ns0:ref>. Previous studies discussed different methods of prioritizing rules and ELECTRE II was discovered to have the ability of sorting the rules from the best to the worst <ns0:ref type='bibr' target='#b11'>(El Mazouri et al., 2019)</ns0:ref>. It is also a multi-criteria decision-making method which is based on the concept of outranking using pairwise comparisons of alternatives related to each criterion. Moreover, ranking results are usually obtained by considering different sizes of association rules with those at the top rank representing the most relevant and interesting <ns0:ref type='bibr' target='#b11'>(El Mazouri et al., 2019)</ns0:ref>. The rules generated from the association rule mining process with several other criteria related to business values are normally presented to the managers involved in the business. <ns0:ref type='bibr' target='#b5'>Choi et al. (2005)</ns0:ref> proposed a method that can create a synergy between decision analysis techniques and data mining for managers in order to determine the quality and quantity of rules based on the criteria determined by the decision-makers <ns0:ref type='bibr' target='#b5'>(Choi et al., 2005)</ns0:ref>. This prioritization association rule method has been used to analyze cases of road accidents which are considered the major public health problems in the world to identify the main factors contributing to the severity of the accidents. It is important to note that the study was not conducted to optimize transportation safety, but to generate sufficient insight and knowledge needed to enable logistics managers to make informed decisions in order to optimize the processes, avoid dangerous routes, and improve road safety. A large-scale data mining technique known as association rule mining was used to predict future accidents and enable drivers to avoid hazards but it was observed to have generated a very large number of decision rules, thereby making it difficult for the decision-makers to select the most relevant rules. This means a multi-criteria decision analysis approach needs to be integrated for decision-makers affected by the redundancy of extracted rules <ns0:ref type='bibr' target='#b0'>(Ait-Mlouk et al., 2017)</ns0:ref>. The current rule ranking method, which uses the electree method <ns0:ref type='bibr' target='#b5'>(Choi et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b11'>El Mazouri et al., 2019)</ns0:ref> and the AHP method <ns0:ref type='bibr' target='#b5'>(Choi et al., 2005)</ns0:ref>, is a separate process from the rule formation process. So, a separate process is needed to rank the rules that have been generated. And this process is not easy, but it requires the determination of alternatives and criteria from the decision-making team. Determination of alternatives and criteria takes a long time and requires room for discussion. The need for data support and some limitations observed also show the need to transform the association rule more adaptively to user needs. 
This is indicated by the fact that decision-makers have different criteria to determine the information they need which are needed to be considered in the rule formation process in order to ensure adaptive rules are produced <ns0:ref type='bibr'>(Hikmawati et al., 2020)</ns0:ref>. The adaptive rule method proposed by Hikmawati <ns0:ref type='bibr'>(Hikmawati et al., 2020</ns0:ref><ns0:ref type='bibr' target='#b17'>(Hikmawati et al., , 2021a</ns0:ref><ns0:ref type='bibr' target='#b18'>(Hikmawati et al., , 2021b;;</ns0:ref><ns0:ref type='bibr'>Hikmawati & Surendro, 2020)</ns0:ref> has the ability to determine the frequent itemsets based on the occurrence frequency of an item and another criterion called item utility <ns0:ref type='bibr' target='#b22'>(Krishnamoorthy, 2018;</ns0:ref><ns0:ref type='bibr' target='#b25'>Lin et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b26'>Liu & Qu, 2012;</ns0:ref><ns0:ref type='bibr' target='#b28'>Nguyen et al., 2019)</ns0:ref> to produce adaptive rule according to the criteria desired by the user. There is also a function in this model usually used to determine the minimum value of support based on the characteristics of the dataset as well as other criteria added to serve as an assessment tool in the rule formation process which is called adaptive support <ns0:ref type='bibr' target='#b18'>(Hikmawati et al., 2021b)</ns0:ref>. The adaptive rule model also has the ability to sort the rules generated using the lift ratio which considers the frequency and utility of the item to ensure the rules produced are sorted from those with the highest lift ratio value which are the most relevant. Therefore, the main contributions of the study are highlighted as follows: 1.</ns0:p><ns0:p>The lift ratio was not calculated based only on the frequency of the item but also its utility.</ns0:p></ns0:div>
<ns0:div><ns0:head>2.</ns0:head><ns0:p>The rule ranking method was based on the frequency and utility of the item to ensure the rules produced are sequentially arranged based on the highest lift ratio value.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head><ns0:p>This present study proposes the adaptive rule model as presented in Figure <ns0:ref type='figure'>1</ns0:ref>. Figure <ns0:ref type='figure'>1</ns0:ref>. Model Adaptive Rule <ns0:ref type='bibr'>(Hikmawati et al., 2020)</ns0:ref> Figure <ns0:ref type='figure'>1</ns0:ref> shows that the model has several functions which have been discussed in previous studies <ns0:ref type='bibr'>(Hikmawati et al., 2020</ns0:ref><ns0:ref type='bibr' target='#b17'>(Hikmawati et al., , 2021a</ns0:ref><ns0:ref type='bibr' target='#b18'>(Hikmawati et al., , 2021b;;</ns0:ref><ns0:ref type='bibr'>Hikmawati & Surendro, 2020)</ns0:ref> and the three main functions are explained as follows:</ns0:p><ns0:p>1. Mining of association rules from different databases in terms of sources, characteristics, and attributes. It is important to note that the database can either be internal or external <ns0:ref type='bibr' target='#b17'>(Hikmawati et al., 2021a)</ns0:ref>.</ns0:p><ns0:p>2. Automatic threshold value determination process.</ns0:p><ns0:p>The system can automatically calculate the threshold value according to the characteristics of the database and the criteria desired by the users and this means there is no need to determine the minimum support and minimum confidence values at the beginning <ns0:ref type='bibr' target='#b18'>(Hikmawati et al., 2021b)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>Prioritization of the rules produced</ns0:head><ns0:p>The rules produced are ranked by the lift ratio value based on certain criteria determined by the user. This lift ratio is defined as the ratio between the support value of the rule with the antecedent and consequent support value and can be calculated using the following formula <ns0:ref type='bibr' target='#b2'>(Alam et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b11'>El Mazouri et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b20'>Kim & Yum, 2011;</ns0:ref><ns0:ref type='bibr' target='#b25'>Lin et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b36'>Telikani et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b38'>Tseng & Lin, 2007)</ns0:ref>:</ns0:p><ns0:formula xml:id='formula_0'>[1] sup( ) ( ) ( ) sup( ) sup( ) sup( ) A B conf A B Lift A B A B B     </ns0:formula><ns0:p>Where:</ns0:p><ns0:p>-Lift is the Lift Ratio Value -A is the antecedent of the rule in the form of item-set -B is the consequent of the rule in the form of item-set sup is the support value conf is the confidence value There is a slight difference in the adaptive rule model which is associated with the fact that the lift ratio value is not calculated based on only the frequency of the item but also its utility which is a criterion defined by the user. This means the formula to calculate the lift ratio is the same but the support and confidence values generated are not based on only the item frequency. Moreover, the minimum support value is determined automatically using the characteristics of the dataset and the criteria specified by the user. The algorithm in the adaptive rule model is, therefore, presented in algorithm 1 while the complete stages of the model are indicated in Figure <ns0:ref type='figure' target='#fig_4'>2</ns0:ref>. <ns0:ref type='figure'>-----------------------RULES:'</ns0:ref>) for rule, confidence,lift in (res):</ns0:p><ns0:p>pre, post = rule print('Rule: %s ==> %s , %.3f, %.3f' % (str(pre), str(post), confidence, lift)) end for End Algorithm 1. Adaptive Rule algorithm</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 2. Model Flow Adaptive Rule</ns0:head><ns0:p>Based on the flow of the adaptive model in Figure <ns0:ref type='figure' target='#fig_4'>2</ns0:ref>, the adaptive rule algorithm can be explained as follows: 1. This algorithm starts by preparing input data. There are two types of input obtainable from several different databases in the model and these include: a. Transaction Dataset This dataset contains a list of transactions and items in each transaction and is normally used as the basis to form the rule. In the case of sales, this dataset is a collection of sales transactions containing items purchased in the same basket or a collection of purchase receipts. Some of its attributes include the transaction ID and the items purchased for each transaction. b. Specific criteria (utility for each item) This utility data can be obtained from external or internal databases and normally applied as the factor to determine the frequent itemset. This means it is possible to have items that rarely appear in the transactions but possess high utility value in the rule formation process. In real cases, the utility can be determined from the price, profit, customer reviews, and availability of goods. It is also important to note that each item can have a different utility value and this makes the rule formation process to be more adaptive to the needs of the users. Moreover, the user can determine the utilities to be considered in the rule formation process apart from the occurrence frequency of items. 2. From the two types of inputs, an iteration is carried out for the process of calculating the minimum threshold value with the aim of determining the frequent itemset. So that in the adaptive rule model the user does not need to determine the minimum support value at the beginning. The iteration is done as many as the number of items in the dataset. The minimum threshold calculation process follows the following steps: a. Calculating the support value for each item in the dataset using the following formula: Manuscript to be reviewed</ns0:p><ns0:p>Computer Science <ns0:ref type='bibr'>[3]</ns0:ref> support ut Utility  </ns0:p><ns0:p>3. If all the items have calculated their utility values, then the iteration process is stopped. The next step is to calculate the average utility for all transactions with the following formula:</ns0:p><ns0:p>[4]</ns0:p><ns0:p>  aveutil sum len itemset  4. The output for this stage is the minimum threshold value used for the rule formation process.</ns0:p><ns0:p>The minimum threshold value is obtained from the average utility value divided by the total existing transactions, and this is represented mathematically as follows:</ns0:p><ns0:p>[5]</ns0:p></ns0:div>
<ns0:div><ns0:head>  minsup= avesup len transactionList</ns0:head><ns0:p>Where: support = support value for an item count= number of occurrences for an item len(transactionList)= transaction amount ut= utility value for an item Utility= utility and support value for an item Aveutil= Average utility of items Sum=sum of utility all item Len(itemset)=number of items Minsup= minimum threshold value (item density level) 5. After obtaining the minimum threshold value, the items that will be involved in the rule formation process are determined which are called frequent itemset. Items that have utility more than equal to the minimum threshold value are included in the frequent itemset.</ns0:p><ns0:p>6. The next process is the formation of rules in the adaptive rule model based on the a priori algorithm and this was conducted through the following steps:</ns0:p><ns0:p>a. The process in the apriori algorithm is in the form of iterations to form a combination of n-itemsets, starting with a 2-itemset combination. The loop is stopped when the item can no longer be combined.</ns0:p><ns0:p>b. The process of forming rules for the combination c. Calculation of the utility value for the combination such that when the utility value is below the minimum threshold, the rule is eliminated d. Calculation of the confidence value for each rule such that when the confidence value is below the minimum confidence, the rule is eliminated e. If more itemset combinations can be made, then repeat from Step a.</ns0:p><ns0:p>7. The next step after rule formation is sorting the rules based on the lift ratio value from the biggest to the smallest which was determined using both the frequency and utility of the item. It is, however, important to note that the support value is in the average of the utility when the antecedent or consequent consists of several items.</ns0:p><ns0:p>8. The output of this model is a set of rules that have been sorted based on the lift ratio value.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:71862:1:1:NEW 18 May 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>An example of the implementation of the proposed method can be explained as follows:</ns0:p><ns0:p>1. Input data in the form of a transaction dataset consisting of four transactions can be seen in table <ns0:ref type='table'>1</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>. Transaction List 2. The second input is in the form of utility data items seen in table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>. Utility Items 3. The next process is to calculate the minimum threshold by calculating the support for each item and multiplying by its utility. The results can be seen in table <ns0:ref type='table' target='#tab_5'>3</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref>. Calculate Utility Items 4. From the results of calculating the utility of each item, the average utility value and the minimum threshold value are calculated.</ns0:p><ns0:p>Aveutil = 1.65</ns0:p><ns0:p>Min threshold = 0.4125 5. The frequent itemset is 1,2,3,4 because the utility value is > 0.4125 6. From the frequent itemset, a 2-itemset combination is drawn up which can be seen in table 4. In addition, the confidence value and lift ratio value are calculated for each rule.</ns0:p><ns0:p>Rules that have a utility value < 0.4125 will be eliminated and not included in the next itemset combination process.</ns0:p><ns0:p>Table <ns0:ref type='table'>4</ns0:ref>. 2-itemset combination 7. In table 4 it can be seen that for items 24 and 42 do not meet the min threshold value so that a 3-itemset combination is prepared which can be seen in table <ns0:ref type='table' target='#tab_7'>5</ns0:ref>.</ns0:p><ns0:p>8. There are several rules that have a utility value < 0.4125 so they are not included in the list of rules. And from the existing results, it is not possible to arrange a 4-itemset combination.</ns0:p><ns0:p>9. The existing rules are sorted by the value of the lift ratio so that the final result can be seen in table <ns0:ref type='table'>6</ns0:ref>. The dataset used for this experiment, as previously described in Erna <ns0:ref type='bibr' target='#b18'>(Hikmawati et al., 2021b)</ns0:ref>, is a special dataset for the rule association case. This dataset is obtained from SPMF <ns0:ref type='bibr' target='#b9'>(Fournier-Viger et al., 2016)</ns0:ref>, UCI Dataset <ns0:ref type='bibr' target='#b4'>(Casey & Dua, 2019)</ns0:ref> and real transaction data. SPMF is an open-source data mining library in which there are 224 mining algorithms. In addition, various data sets are provided that can be used for data mining. Description of the dataset can be seen in table <ns0:ref type='table' target='#tab_1'>7</ns0:ref> and Characteristics of dataset can be seen in table 8. </ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>The experimental instrument used was a laptop with an Intel Core i7-8550U CPU @ 1.80 GHz 1.99GHz, 16 GB Installed memory (RAM), and a 500GB SSD Hard drive. The proposed PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:71862:1:1:NEW 18 May 2022)</ns0:p></ns0:div>
<ns0:div><ns0:head>Manuscript to be reviewed</ns0:head><ns0:p>Computer Science adaptive rule model was applied to 6 datasets according to Table <ns0:ref type='table'>1</ns0:ref>. It is important to note that price item was another criterion used to determine the minimum threshold in this experiment and the results of the adaptive rule formation process are presented in Table <ns0:ref type='table'>9</ns0:ref>. Table <ns0:ref type='table'>9</ns0:ref>. Results of Formation of Adaptive Rule The results of the adaptive rule test were compared through a basic association rule model trial which involved using the a priori algorithm with the same 6 datasets. The process was conducted with a level-wise approach such that candidate items were generated for each level <ns0:ref type='bibr'>(Agrawal, 1994)</ns0:ref> and the same minimum support values were used. The findings of this apriori algorithm method are presented in Table <ns0:ref type='table'>10</ns0:ref>. Table <ns0:ref type='table'>10</ns0:ref>. Results of Rule Formation with apriori Algorithm The comparison of the number of frequent itemsets for the adaptive rule method and the a priori algorithm can be seen in Figure <ns0:ref type='figure' target='#fig_1'>3</ns0:ref> and Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>. The comparison of the runtime and memory consumption for the adaptive rule method and the apriori algorithm can be seen in Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>, Figure <ns0:ref type='figure'>6</ns0:ref> and Figure <ns0:ref type='figure'>7</ns0:ref>. Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>. Memory Consumption Figure <ns0:ref type='figure'>6</ns0:ref>. Runtime for dataset foodmart, sco, test and zoo Figure <ns0:ref type='figure'>7</ns0:ref>. Runtime for dataset connect and retail In addition, we have conducted an experimental design with a T-test between the number of rules, the number of frequent itemset, runtime and memory. The results of the T test can be seen in table <ns0:ref type='table' target='#tab_2'>11</ns0:ref>. Table <ns0:ref type='table' target='#tab_2'>11</ns0:ref>. Result from T-Test When viewed from the P-value > alpha, the alpha here is 0.05, which means that there is no significant difference between the adaptive rule method and the a priori algorithm. In terms of the number of rules, the number of frequent itemset, runtime and memory, there is no significant difference, but this method aims to produce the most relevant rules at the top. So it cannot be measured from the number of rules, the number of frequent itemset, runtime and memory. One of the limitations of the current adaptive rule method is that it has only been tested on a dataset with a relatively small number of transactions, namely 100 transactions, for further research, pruning and performance improvement will be carried out so that it can be implemented on large datasets. And then a suitable evaluation method will be carried out to measure the success of this adaptive rule method.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Tables 9 and 10 show that the numbers of frequent itemsets and rules generated by the adaptive rule model are larger than those produced by the Apriori algorithm, which means it provides more recommendation options for users. Moreover, the model still allows users to select appropriate choices despite the larger number of options, because it sorts the rules automatically based on the lift ratio value.</ns0:p><ns0:p>The adaptive rule model generates more frequent itemsets because frequent itemsets are determined not only by frequency but also by the utility of the item, in this case its price. This allows items that rarely appear in transactions but have a high utility value to be categorized as frequent, so that they can participate in the rule formation process. The more varied set of frequent itemsets in turn increases the number of rules generated and provides more diverse choices for users according to the specified utility criteria. If this is implemented in a recommender system, the recommended items will not only be items that appear frequently in transactions but also items with high utility, which provide higher value to users.</ns0:p><ns0:p>Several previous studies have focused on ranking rules:</ns0:p><ns0:p>1. El Mazouri (El Mazouri et al., 2019) used the ELECTRE II method (Govindan & Jepsen, 2016) to sort the rules produced by the Apriori algorithm, applying multicriteria decision analysis to identify the most frequent conditions in accidents reported in France. ELECTRE II was used to sort the large number of generated association rules from best to worst based on their measures, with the top-ranked rules representing the most relevant and interesting association rules.</ns0:p><ns0:p>2. Choi (Choi et al., 2005) sorted the rules generated by association rule mining using the ELECTRE and AHP methods. The study focused on prioritizing the association rules produced from data mining by explicitly including conflicting business value criteria as well as the preferences of managers concerning trade-off conditions. A decision analysis method, the Analytic Hierarchy Process (AHP), was used to collect the opinions of group decision-makers on the criteria relevant to evaluating the business value of the rules and on the relative importance of these criteria. Association rule mining was then used to capture the competing set of rules with different business values, which served as input for rule prioritization; the final rules were selected with a decision method such as ELECTRE, combining machine learning with human judgment.</ns0:p><ns0:p>3. Ait-Mlouk (Ait-Mlouk et al., 2017) conducted a study to generate insight and knowledge that allows logistics managers to make informed decisions towards optimizing processes, avoiding hazardous routes, and improving road safety.
A large-scale data mining technique, association rule mining, was used to predict future accidents and enable drivers to avoid hazards, but it generated a very large number of decision rules, making it difficult for decision-makers to select the most relevant ones; a multi-criteria decision analysis approach therefore had to be integrated to help decision-makers cope with the redundancy of the extracted rules. Compared with this previous research, the adaptive rule model has the advantage of ranking rules by a lift ratio that is calculated not only from frequency but also from item utilities such as price. In addition, rule ranking is not a separate post-processing step: the rules are sorted automatically as they are formed; a small sketch of this lift-based ranking is given after this section. The comparison between the previous methods and the proposed method can be seen in Table 12. Table 12. Comparison between previous method and proposed method.</ns0:p></ns0:div>
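<ns0:p>As a concrete illustration of the lift-based ranking discussed above, the sketch below computes confidence and lift for a few candidate rules and sorts them best-first. The helper functions and the numeric values are illustrative only and are not taken from the paper's implementation.</ns0:p>
# Rank candidate rules by lift ratio so the most relevant rules appear first.
def confidence(sup_xy, sup_x):
    return sup_xy / sup_x

def lift(sup_xy, sup_x, sup_y):
    return (sup_xy / sup_x) / sup_y

# (antecedent, consequent, support_xy, support_x, support_y) -- illustrative values
candidates = [
    ((1,), (3,), 0.75, 1.00, 0.75),
    ((2,), (3,), 0.50, 0.50, 0.75),
    ((3,), (2,), 0.50, 0.75, 0.50),
]

rules = [(x, y, confidence(sxy, sx), lift(sxy, sx, sy))
         for x, y, sxy, sx, sy in candidates]
rules.sort(key=lambda r: r[3], reverse=True)   # best rules first, as in Table 6
for x, y, conf, lft in rules:
    print(x, '->', y, 'confidence=%.2f' % conf, 'lift=%.2f' % lft)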
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The experiments conducted on the adaptive rule model using six datasets lead to the following conclusions: 1.</ns0:p><ns0:p>The number of rules generated on the six datasets was higher than with the Apriori algorithm at the same minimum support value.</ns0:p></ns0:div>
<ns0:div><ns0:head>2.</ns0:head><ns0:p>Frequent itemsets were selected not only by frequency but also by the specified utility, which yields a larger and more varied set of frequent itemsets.</ns0:p></ns0:div>
<ns0:div><ns0:head>3.</ns0:head><ns0:p>The minimum support value was determined from the frequency and utility values while also taking into account the dataset characteristics, namely the number of items, the number of transactions, and the average number of items per transaction. 4.</ns0:p><ns0:p>The rules were produced and ranked by a lift ratio calculated using the utility of the items, allowing users to see the most relevant rules easily. It is recommended that future studies focus on pruning and evaluating the adaptive rule model in order to determine its performance, as well as on implementing the model in an actual recommendation system.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Calculating the utility for each item in the dataset</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Number of Frequent Itemsets Figure 4. Number of Rules</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Model Flow Adaptive Rule</ns0:figDesc><ns0:graphic coords='19,42.52,178.87,525.00,462.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Number of Frequent Itemsets</ns0:figDesc><ns0:graphic coords='32,42.52,178.87,525.00,224.25' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Number of Rules</ns0:figDesc><ns0:graphic coords='35,42.52,178.87,525.00,224.25' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Memory Consumption</ns0:figDesc><ns0:graphic coords='36,42.52,178.87,525.00,224.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='41,42.52,178.87,525.00,224.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='44,42.52,178.87,525.00,224.25' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 7.</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Dataset Description. Table 8. Characteristics of the Dataset</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 (on next page).</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Transaction List</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 (on next page)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Utility Items (item numbers reconstructed from Table 3)</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Item</ns0:cell><ns0:cell>Utility</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>0.5</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 (on next page)</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Calculate Utility Items</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Item</ns0:cell><ns0:cell>Support</ns0:cell><ns0:cell>Criteria</ns0:cell><ns0:cell>Utility</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>1.00</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>0.50</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>2.5</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>0.75</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>2.25</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>1.25</ns0:cell></ns0:row><ns0:row><ns0:cell>5</ns0:cell><ns0:cell>0.50</ns0:cell><ns0:cell>0.5</ns0:cell><ns0:cell>0.25</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 5 (on next page).</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>3-itemset combination</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 8.</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Characteristics of the Dataset</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Number of transactions</ns0:cell><ns0:cell>Number of items</ns0:cell><ns0:cell>Average number of items per transaction</ns0:cell><ns0:cell>Data Source</ns0:cell></ns0:row><ns0:row><ns0:cell>Zoo</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>https://archive.ics.uci.edu/ml/datasets/Zoo</ns0:cell></ns0:row><ns0:row><ns0:cell>Connect</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>130</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>https://www.philippe-fournier-viger.com/spmf/datasets/quantitative/connect.txt</ns0:cell></ns0:row><ns0:row><ns0:cell>Retail</ns0:cell><ns0:cell>72</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>Retail Transaction Data from a Store in Nganjuk, East Java</ns0:cell></ns0:row><ns0:row><ns0:cell>Test</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>Test Data</ns0:cell></ns0:row><ns0:row><ns0:cell>Foodmart</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>216</ns0:cell><ns0:cell>3.59</ns0:cell><ns0:cell>https://www.philippe-fournier-viger.com/spmf/datasets/quantitative/foodmart.txt</ns0:cell></ns0:row><ns0:row><ns0:cell>Sco</ns0:cell><ns0:cell>51</ns0:cell><ns0:cell>22</ns0:cell><ns0:cell>2.52</ns0:cell><ns0:cell>Transaction data from a cafe in Bandung, West Java</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "
Bandung Institute of Technology
School of Electrical Engineering and Informatics
Ged. Achmad Bakrie, Lt. 2
Jl. Ganesha No 10, Bandung 40132, Indonesia
Tel : +6281321106858
Fax : +62-22-2534222
https://www.itb.ac.id/
Email : [email protected] May 11th, 2022
Dear Editors
Thank you for allowing our manuscript to be considered for publication and for the opportunity to address the reviewers' comments; we have prepared the requested major revisions accordingly.
We thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns.
We believe that the manuscript is now suitable for publication in PeerJ.
Erna Hikmawati
Doctoral Student of Electrical and Informatics Engineering
On behalf of all authors.
Reviewer: Kannimuthu Subramanian
Basic reporting
Author proposed novel method to find interesting patterns using lift ratio which incorporates both utility and frequency of itemsets in a database. The major highlight of this article is that authors proposed rule ranking method to extract utility based association rules. The article is structured and written well. Though this work addresses the issues of the existing work in utility based data mining, few corrections needs to be done.
1.1. Introduction should clearly addresses the issues of existing works.
1.2. The comprehensive review on utility based data mining should be done. The following recent references should be investigated and cited.
i) https://www.inderscienceonline.com/doi/abs/10.1504/IJITM.2015.066056
ii) https://www.tandfonline.com/doi/full/10.1080/08839514.2014.891839
iii) http://www.cai2.sk/ojs/index.php/cai/article/view/1333
iv) https://ieeexplore.ieee.org/abstract/document/6416812/
v) https://link.springer.com/article/10.1007/s11036-019-01385-6
vi) https://link.springer.com/article/10.1007/s12652-020-02187-5
vii) https://publications.waset.org/9997900/a-distributed-approach-to-extract-high-utility-itemsets-from-xml-data
viii) https://link.springer.com/article/10.1007/s11063-022-10793-x
1.3. Equations should be represented by using equation numbers.
1.4. The flow of algorithm should be clearly explained.
1.5. The algorithm needs to be written clearly with necessary indentations.
Author response: Thank you for reviewing our paper. Your comments are very valuable to us.
Author action:
1.1 We have added a discussion of the issues that appear in previous ranking methods
1.2 We have added some references about utility mining
1.3 We have added numbers to the equations
1.4 We have added an explanation of the algorithm flow
1.5 We have improved the writing of the algorithm pseudocode by adding the appropriate indentation
Experimental design
2.1 Experimental results needs to be explained elaborated manner.
2.2 Dataset description is not available.
2.3 Experimental results should be supported with necessary graphs by considering various factors such as number of rules generated, support, utility, confidence etc.
Author response: Thank you for reviewing our paper. Your comments are very valuable to us.
Author action:
2.1 We have added an explanation of the experimental results
2.2 We have added a description of the dataset in table 1
2.3 We have added graphs comparing the number of frequent itemsets, number of rules, memory consumption and runtime for the Apriori algorithm and the adaptive rule model.
Validity of the findings
3.1 The validity and the findings is not clearly explained.
3.2 Statistical analysis can be performed.
Author response: Thank you for reviewing our paper. Your comments are very valuable to us.
Author action: -
3.1 We have added the benefits of the experimental findings
3.2 We have performed a statistical analysis with the ANOVA test, and we have added the results.
Additional comments
The proposed work can be explained with suitable example. Author can use small dataset for this explanation.
Author response: Thank you for reviewing our paper. Your comments are very valuable to us.
Author action:
We have added an example of applying the method to a dataset with 4 transactions and 5 items.
Reviewer: Elif Varol Altay
Basic reporting
No comment
Experimental design
No comment
Validity of the findings
No comment
Additional comments
The authors proposed a method for ranking the rules based on the lift ratio value, which was derived using the item's frequency and utility. I would ask the authors for clarification of some issues before acceptance of this paper. Therefore, I recommend a revision.
• Please, you should add a comparative study. A comparison between your algorithm and others should be added.
• The limitation(s) of the association rules mining methodologies proposed in this work should be extensively discussed.
• When the pseudo-codes of the proposed method are examined, it is seen that concepts such as D, S, X, and U are written in a different language. Please write them all in the same language and give pseudocodes as 'appendix'.
• Complexity analysis should be done.
• In Figure 1 'eksternal' should be fixed as 'external'
• You are using a specific database in Table 1. Please indicate your reasons for choosing these datasets.
• Number of transactions is small in all datasets. A larger data set should be added and analysis should be performed on large data sets.
• I'd like to see a more detailed analysis of the proposed algorithm's scalability. What are the main theoretical and practical benefits of the proposed algorithm? What about memory consumption and problem dimensions (especially big data)?
• 'II.2 and II.3' used in ' Tables II.2 and II.3 show that the number… ' expression should be corrected.
• Is the support value given as a percentage in the test results? Not mentioned in the article?
• The authors mentioned some studies in the literature (such as ELECTRE, ELECTRE II, AHP). They should also compare the proposed method with the existing methods in the literature.
• The properties of the datasets used are already given in Table 1. It is given again in Table 2. If it will be given in Table 1 separately, it should be removed from Table 2. Table 1 is not needed if Table 2 will also be given.
Author response: Thank you for reviewing our paper. Your comments are very valuable to us.
Author action:
• We have added a comparison table between the previous method and the proposed method
• We have added the limitations of the proposed method and plans for further research in the experimental results and discussion section
•	We have made the formula terms consistent with the pseudocode notation
• We have added an explanation of the experimental results and the results of the analysis using the T - test
•	We have changed 'eksternal' to 'external' in Figure 1.
•	We have added the reasons for choosing these 6 datasets for the experiments
• For experiments using large datasets that will be carried out in future research, we have explained this in the limitations
• We have added memory consumption analysis to the algorithm
• We have corrected the table numbers
•	The minimum support value in this model is the number of item occurrences, not a percentage
• We have added a comparison table for the proposed method with the previous method
• We have fixed table 1 and table 2
" | Here is a paper. Please give your review comments after reading it. |
692 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Proteins are central to all functions of living things. They consist of an extended amino acid chain that folds into a three-dimensional shape dictating their behavior. Convolutional Neural Networks (CNNs) have been pivotal in predicting protein functions from protein sequences, but their computational cost and translational invariance make it difficult for them to capture spatial hierarchies between complex and simpler objects. Therefore, this research utilizes Capsule Networks, which capture spatial information and focus on hierarchical relationships, giving them considerable potential for structural biology challenges. In comparison to standard CNNs, our results exhibit improved accuracy: GOCAPGAN achieved an F1 score of 82.6%, a precision of 90.4% and a recall of 76.1%.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Proteins play an integral role in a number of biological processes, performing many cellular functions <ns0:ref type='bibr' target='#b8'>(Ashtiani et al., 2018)</ns0:ref>. Despite protein data being produced at an extremely high rate by different complex sequencing techniques, its functional understanding is yet to be discovered <ns0:ref type='bibr' target='#b51'>(Rekapalli et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b38'>Li et al., 2017)</ns0:ref>. Only about 1% of proteins have been explored and worked on experimentally and are manually annotated in the UniProt database <ns0:ref type='bibr' target='#b9'>(Boutet et al., 2016)</ns0:ref>. In-vitro and in-vivo investigations can clarify and explain protein functions, but these methods have shown to be time-consuming, expensive, and unable to keep up with the growing volume of protein data. This encourages the development of a precise, efficient, and time-effective computational technique that can directly calculate protein functions from data. In this regard, a variety of approaches have been offered. In general, researchers build a pipeline that determines protein functions given protein sequences by performing the following steps: Selection of a useful trait to encode input proteins, constructing datasets for experimenting and training purpose, selecting an appropriate algorithm, and evaluation of performance.</ns0:p><ns0:p>BLAST <ns0:ref type='bibr' target='#b6'>(Altschul et al., 1990</ns0:ref>) is a well-known computational technique which manually annotates the input sequences using the same functional sequences. As popular as it is, it has its shortcomings: 1) for a lot of input sequences, similar and functionally annotated sequences are hard to find; and 2) while some proteins have the same functions, they don't have sequence similarity. Hence, the results taken from methods like these that are based on homology are not always accurate and precise <ns0:ref type='bibr'>(Pandey et al., 2006)</ns0:ref>.</ns0:p><ns0:p>One option to overcome the drawbacks of other strategies is the extraction of relevant information from preserved subregions or input protein chain residues. <ns0:ref type='bibr' target='#b14'>Das et al. (Das et al., 2015)</ns0:ref>, proposed a domainbased technique for predicting protein functions, while Wang and colleagues presented <ns0:ref type='bibr' target='#b62'>(Wang et al., 2003)</ns0:ref>, a motif-based function classifier for proteins. Finally, numerous approaches depend significantly on Protein-Protein Interaction(PPI) information derived to properly compute and predict protein functions <ns0:ref type='bibr' target='#b27'>(Jiang and McQuay, 2011;</ns0:ref><ns0:ref type='bibr'>Peng et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b11'>Chatterji et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b24'>Hou, 2017;</ns0:ref><ns0:ref type='bibr' target='#b47'>Nguyen et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b50'>Rahmani et al., 2009)</ns0:ref>. The key concept backing these techniques is the idea that proteins with similar topological properties in PPI networks could also have similar functions <ns0:ref type='bibr' target='#b18'>(Gligorijević et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Moreover, quite a few protein function predictors requires and uses other types of data like make use of genomic context, <ns0:ref type='bibr' target='#b33'>Konc et al. (2013)</ns0:ref>; <ns0:ref type='bibr' target='#b56'>Stawiski et al. 
(2000)</ns0:ref>; <ns0:ref type='bibr' target='#b64'>Zhang et al. (2017)</ns0:ref> Manuscript to be reviewed Computer Science (2014) exploits protein structure, and <ns0:ref type='bibr' target='#b39'>Li et al. (2006)</ns0:ref> consumes the knowledge of gene expression. We are now focusing on two types of predictors: sequence-based techniques <ns0:ref type='bibr' target='#b10'>(Cai et al., 2003;</ns0:ref><ns0:ref type='bibr'>Peng et al., 2014)</ns0:ref> and PPI techniques <ns0:ref type='bibr' target='#b36'>(Kulmanov et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b50'>Rahmani et al., 2009)</ns0:ref>. PPI techniques that depend on data collected from these networks <ns0:ref type='bibr' target='#b36'>(Kulmanov et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b50'>Rahmani et al., 2009)</ns0:ref>, and sequence-based techniques that include using motifs, protein domains and residue-level information <ns0:ref type='bibr' target='#b10'>(Cai et al., 2003;</ns0:ref><ns0:ref type='bibr'>Peng et al., 2014)</ns0:ref>. Complementary data is often used by these strategies.</ns0:p><ns0:p>The proposed Gene Ontology Capsule GAN (GOCAPGAN) model is built on improving standard GANs to handle two essential issues: predicting functions based on sequence and annotating protein functions based on constrained categorized data. Therefore, different GAN variants were developed, primarily for picture synthesis challenges; whereas, just a few GAN variants are accessible for text generation problems. We used the current Wasserstein GAN (WGAN) model <ns0:ref type='bibr' target='#b7'>(Arjovsky et al., 2017)</ns0:ref> for our proposed version. We chose the WGAN model because of its high learning stability, ability to avoid mode collapse, and standard applicability for textual inputs, such as protein sequences in our situation. The use of GAN to tackle the issue of protein function prediction, as well as the originality of GOCAPGAN, are the study's main conceptual innovations. GAN is based on the use of unlabeled data, which is plentiful. Features are extracted from massive unlabeled datasets, which are utilized in the case of protein characterization. In order to generate protein sequences, the GAN is modified in the early phases. After generating sequences, the parameters of the GAN are tweaked in the second stage to predict protein functions based on the information gathered during sequence generation phase. Separate from the Uniprot database, the suggested prototype is tested on a dataset of proteins from Homo sapiens. In compared to previous techniques, the results of the GOCAPGAN model show significant improvements in several evaluation measures.</ns0:p><ns0:p>It is evident that GANs are a fascinating and a field with rapid development and that delivers on the promise of generative models providing realistic samples in a variety of domains. GANs are an intelligent method of preparing a generating model by putting together a direct learning problem which has two associate models: generator, which is trained to produce new examples, and discriminator, which attempts to predict examples as fake (from outside the domain) or real (from the domain). In an adversarial zero-sum game, the two models are trained until the discriminator model is dodged almost half of the time, conveying that the generator model is producing proper samples.</ns0:p><ns0:p>GOCAPGAN framework utlizes anewly developed capsule network at the discriminator level, which sets it part from previous models. 
As CNN is translational invariant, they fail to capture relationship among features, whereas recently introduced Capsule network <ns0:ref type='bibr' target='#b52'>(Sabour et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b23'>Hinton et al., 2018</ns0:ref><ns0:ref type='bibr' target='#b22'>Hinton et al., , 2011) )</ns0:ref> consist of capsules that are a group of neurons that encodes three-dimensional information of an object in addition to the probability of it being present. Capsule Network is a new building element for deep learning that may be used to model hierarchical relationships within a neural network's internal knowledge representation. Contrary to CNN, information is encoded in vector form in capsule for storage of spatial data as well. For GOCAPGAN, the properties and features generated by the internal capsule layer explores the internal data distribution related to biological significance for enhanced outcome.</ns0:p><ns0:p>Capsule networks in current years have been used widely for object detection <ns0:ref type='bibr' target='#b40'>(Lin et al., 2022)</ns0:ref>, automated email spam detection <ns0:ref type='bibr' target='#b53'>(Samarthrao and Rohokale, 2022</ns0:ref>), text classification <ns0:ref type='bibr' target='#b66'>(Zhao et al., 2018</ns0:ref><ns0:ref type='bibr' target='#b65'>(Zhao et al., , 2019;;</ns0:ref><ns0:ref type='bibr' target='#b32'>Kim et al., 2020)</ns0:ref>, web blog content curation <ns0:ref type='bibr' target='#b31'>(Khatter and Ahlawat, 2022)</ns0:ref>, fault diagnosis of rotating machinery <ns0:ref type='bibr' target='#b37'>(Li et al., 2022)</ns0:ref>, identifying aggression and toxicity in comments <ns0:ref type='bibr' target='#b55'>(Srivastava et al., 2018)</ns0:ref>, sentiment classification (Chen and Qian, 2019), biometric recognition system <ns0:ref type='bibr' target='#b25'>(Jacob, 2019)</ns0:ref> and simple classification hassles <ns0:ref type='bibr' target='#b41'>(Lukic et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b21'>Hilton et al., 2019)</ns0:ref>.</ns0:p><ns0:p>In the field of AI in biology and medicine, capsule network has also been explored to inspect Munro's microabscess <ns0:ref type='bibr' target='#b48'>(Pal et al., 2022)</ns0:ref>, brain tumor classification <ns0:ref type='bibr' target='#b3'>(Afshar et al., 2018</ns0:ref><ns0:ref type='bibr' target='#b5'>(Afshar et al., , 2019))</ns0:ref>, Pneumonia detection, especially coronavirus disease 2019 <ns0:ref type='bibr' target='#b63'>(Yang et al., 2022;</ns0:ref><ns0:ref type='bibr' target='#b1'>Afshar et al., 2020)</ns0:ref>.</ns0:p><ns0:p>The current study is laid out as follows: Section two looks at some studies that have been done on the problem of GO term prediction. Section III goes over the implementation specifics in great detail. The important findings of this study are highlighted in part IV, which is followed by a discussion in section V.</ns0:p><ns0:p>Finally, in section VI, the research is concluded with some future recommendations. Manuscript to be reviewed Computer Science the experimental methods is Yeast two-hybrid (Y2H) used for recognizing protein functions. Y2H can examine an organism's entire genetic makeup for protein DNA interactions. Interactions in the worm, fly, and human <ns0:ref type='bibr' target='#b17'>(Ghavidel et al., 2005)</ns0:ref> were recently discovered using Y2H. The disadvantage of this method is that it works on experiments, which necessitate adequate resources and laboratories. 
Another drawback of experimental approaches is that the time needed to characterize proteins cannot be predicted.</ns0:p></ns0:div>
<ns0:div><ns0:head>LITERATURE REVIEW</ns0:head><ns0:p>Mass spectroscopy (MS) is a dynamic technique for examining protein interactions and predicting protein function. This method generates ions that may be detected using the mass to charge ratio, allowing for the identification of protein sequences <ns0:ref type='bibr' target='#b0'>(Aebersold and Mann, 2003)</ns0:ref>. Like conventional methods, this procedure also has several drawbacks and limitations: it necessitates the use of qualified staff and appropriate equipment, and it is time-consuming. MS is very expensive, and protein complex purification limits protein characterization <ns0:ref type='bibr' target='#b54'>(Shoemaker and Panchenko, 2007)</ns0:ref>. To predict protein activities, computational approaches use various protein information such as sequencing, structure, and other data available <ns0:ref type='bibr' target='#b42'>(Lv et al., 2019)</ns0:ref>. Although these techniques may have drawbacks, but with reference to time and resource management these techniques are quite reasonable. Several methods, including machine learning algorithms and methodologies based on genomic context, homology and protein network, have proved successful in automatically predicting protein function. Machine learning advance models, such as deep learning, have demonstrated to be more advanced than traditional machine learning models. Its superior performance is due to its capacity to assess incoming data automatically and more effectively represent non-linear patterns.</ns0:p><ns0:p>Protein function prediction and other bioinformatics applications have lately been done using deeplearning methods. <ns0:ref type='bibr' target='#b16'>(Deng et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b46'>Nauman et al., 2019)</ns0:ref>. <ns0:ref type='bibr'>Kulmanov et al. used</ns0:ref> DeepGO for function prediction. Deep learning was used to extract characteristics from protein interaction networks and sequences. One significant disadvantage of this method is that it necessitates a big amount of training data in order to make accurate predictions. It is also a computationally complicated model that consumes many resources <ns0:ref type='bibr' target='#b35'>(Kulmanov et al., 2017)</ns0:ref>. <ns0:ref type='bibr'>Kulmanov et al.</ns0:ref> DeepNF is made up of multimodal deep auto encoders that extract proteins' important properties from a variety of networks with diverse interactions. To integrate STRING networks, DeepNF utilises high-level protein characteristics constrained in a shared low-dimensional representation. For yeast/human STRING networks <ns0:ref type='bibr' target='#b18'>(Gligorijević et al., 2018)</ns0:ref>, the results indicated that prior approaches had outperformed deepNF. DeepNF's main flaw is that it only uses the STRING network. This causes issues since functions expressed by a single protein are not taken into account. This is problematic because capabilities expressed by a single protein are not taken into account. DeepNF was found to be the best option for a few STRING networks. Deep learning methods have a major drawback in that they require a large amount of labelled data, whereas protein functions have a finite amount of labelled data. Between protein sequences and function annotations, there is a large gap. Furthermore, many GO keywords have only a few protein sequences, making deep learning algorithms difficult to forecast. 
GANs are used to isolate and extract patterns from unlabeled data, so they can perform well in the function prediction situation. Researchers have started utilizing GANs for producing biological data <ns0:ref type='bibr' target='#b20'>(Gupta and Zou, 2018)</ns0:ref>. In our previous work, we also utilized GAN for protein function prediction <ns0:ref type='bibr' target='#b44'>(Mansoor et al., 2022)</ns0:ref> Because of their affinity for hierarchical relationships, Capsule Networks have a lot of potential for solving structural biology challenges. DeepCap-Kcr <ns0:ref type='bibr' target='#b30'>(Khanal et al., 2022)</ns0:ref>, a capsule network (CapsNet) based on a convolutional neural network (CNN) and long short-term memory (LSTM), was proposed as a deep learning model for robust prediction of Kcr sites on histone and nonhistone proteins (mammals). Manuscript to be reviewed Computer Science studies, however, no biological data is synthesized for further study.</ns0:p><ns0:p>Given the success of the capsule network, <ns0:ref type='bibr' target='#b59'>(Upadhyay and Schrater, 2018;</ns0:ref><ns0:ref type='bibr' target='#b26'>Jaiswal et al., 2018)</ns0:ref> have investigated the capsule network with Generative Adversarial Networks (GANs) and found promising results, but they have not investigated the capsule network with GANs for protein function prediction.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODOLOGY</ns0:head><ns0:p>The GOCAPGAN model is based on the idea of modifying GOGAN <ns0:ref type='bibr' target='#b44'>(Mansoor et al., 2022)</ns0:ref> to solve the problem of predicting protein function from sparsely labelled data. The proposed paradigm can be divided into two stages. The first set of designs includes of the Generator and Discriminator architectures, which have been improved using residual blocks, with the last convolutional layer of a discriminator being replaced by a new and superior Capsule network to record data in vector form. This modified model in this phase is prepared to produce protein sequences. In the second stage, after generating sequences, the altered GAN's parameters are utilized to forecast protein functions based on the knowledge that GAN gained during the sequence generation stage. We first present an introduction to classical GAN, for a more comprehensive insight, in the following subsections. Later, the proposed GOCAPGAN model's first stage is discussed, highlighting the important components of the GOCAPGAN model, namely the GOCAPGAN Generator and GOCAPGAN Discriminator. As capsule network plays the crucial role in the discriminator architecture, it has been discussed in detail prior to the explanation of discriminator architecture. Finally, the second stage is to be discussed, where the GOCAPGAN model parameters are utilized to forecast protein functions with the help of a multi-label classifier and transfer learning.</ns0:p></ns0:div>
<ns0:div><ns0:head>Architecture of Basic GAN</ns0:head><ns0:p>A novel framework had been given by Ian Good Fellow that consisted of a system containing two major modules, namely Generator and Discriminator. Figure <ns0:ref type='figure' target='#fig_3'>1</ns0:ref> depicts the basic idea of GAN where Generator utilizes noise vector z as input creating novel data points and Discriminator functions as a classifier of the newly generated data points into a category of fake or real <ns0:ref type='bibr' target='#b19'>(Goodfellow et al., 2014)</ns0:ref>. showing the likelihood. The goal is to increase the likelihood of correctly detecting real data points as opposed to created data points. Cross-entropy is used to calculate the loss: plog(q) is a mathematical expression. The correct label for real data points is one, whereas the label for created data points is inverted. The main function of Discriminator is given in Eq (1):</ns0:p><ns0:formula xml:id='formula_0'>max D V (D) = [E x∼p data(x) [log D(x)] + E z∼p z (z) log(1 − D(G(z))] (1)</ns0:formula><ns0:p>On the Generator side, the Generator's primary function is to create data points with the highest value of D(x) in order to mislead the Discriminator. Eq (2) provides the main function for the Generator.</ns0:p></ns0:div>
<ns0:div><ns0:head>4/14</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2022:04:72463:1:0:NEW 17 May 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_1'>max G V (G) = E z∼p z (z) [log (1 − D (G (z)))] (2)</ns0:formula><ns0:p>The goal functions of the generator and discriminator are learned simultaneously through interchanging gradient descent once they have been specified. The Generator model parameters are fixed, and the Discriminator undergoes a gradient descent iteration using both original and produced data points, after which the sides are swapped. The generator has been programmed for another cycle, and the discriminator has been repaired. In alternate periods, both networks are trained until the Generator delivers high-quality data points. Eq (3) depicts the GAN loss function: Bottou, 2017) advised using Wasserstein-1 or Earth-Mover distance W (q, p) to resolve this issue. The Wasserstein-1 or Earth-Mover distance is the amount of effort required to convert a q-distribution to a p-distribution with the least amount of effort. The Kantorovich-Rubinstein duality <ns0:ref type='bibr' target='#b60'>(Villani, 2008)</ns0:ref> is used by the WGAN objective function, which is provided by: min</ns0:p><ns0:formula xml:id='formula_2'>min G max D V (D, G) = [E x∼p data(x) [log D(x)] + E z∼p z (z) log(1 − D(G(z))] (3) V (D, G) in Eq (3),</ns0:formula><ns0:formula xml:id='formula_3'>G max Dε ⃗ D E x∼P r [D(x)] − E ⃗ x∼P g [D(⃗ x)]<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>We propose a new model called GOCAPGAN that is based on the notions of the classical GAN model discussed above. The proposed GOCAPGAN model is made up of two primary parts: the GOCAPGAN Generator and the GOCAPGAN Discriminator. The Generator for the proposed model consists of multiple residual blocks in which each residual block contains dual convolutaional layers pursued by LeakuReLU.</ns0:p><ns0:p>Whereas, for discriminator, the internal structure of residual blocks is the same. However, instead of the last residual block, Capsule Network is utilized. The GOCAPGAN model's general architecture is seen in Figure <ns0:ref type='figure' target='#fig_5'>2</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>GOCAPGAN Generator</ns0:head><ns0:p>The GOCAPGAN Generator network produces protein sequences after the training. The GOCAPGAN Generator's input size is specified as (Ψ,128), and Ψ represents batch size. There were four distinct batch sizes tested: 16, 32, 48, and 64. Smaller batch sizes led to faster training, but provided lower accuracy. Due to restricted computing resources, the batch size for the suggested trials was set at 32.</ns0:p><ns0:p>Once inputs are fed into the Generator, it generates features or representations. The input latent vector is converted to low-level features by the generator using linear transformation. The generator network is built up of residual blocks rather than a traditional feed forward neural network. The GOCAPGAN Generator is made up of six residual blocks. Each residual blocks uses two 1-D convolutional layers to learn information from given data. The activation function used is LeakyReLU. Gumbel Softmax outperforms softmax in terms of discrete text production <ns0:ref type='bibr' target='#b29'>(Joo et al., 2020)</ns0:ref>. After experimenting with various sequence lengths, it was discovered that sequence length 160 produced the best results. The total number of trainable parameters in the GOCAPGAN Generator architecture is 18,447,894.</ns0:p></ns0:div>
<ns0:div><ns0:head>Capsule Network</ns0:head><ns0:p>Capsule Network (CN) is an advanced neural network architecture conceptualized by Geoffrey E Hinton. <ns0:ref type='bibr' target='#b52'>(Sabour et al., 2017)</ns0:ref>. The goal of CN is to remedy some of CNN's shortcomings. In the past, CNN has been extensively studied in the fields of computer vision and other computer-assisted devices. They do, however, have several fundamental flaws and limits. In the following subsection, some limitations of CNN are discussed that motivates us to move towards CN: Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Latent Vector Fake Sequences MAVKKKK.. MPKPKKK.. MTTTVKK..</ns0:p></ns0:div>
<ns0:div><ns0:head>Real Sequences</ns0:head><ns0:p>Real or Fake?</ns0:p><ns0:p>MAVKKKK.. MPKPKKK.. MTTVKKK..</ns0:p></ns0:div>
<ns0:div><ns0:head>Fine Tuning</ns0:head></ns0:div>
<ns0:div><ns0:head>C</ns0:head><ns0:p>Residual Blocks</ns0:p><ns0:p>Discriminator Network Sequences are generated from the generator and passed to the discriminator. The second step is classification of generated sequence into real or bogus. The discriminator and generator model are readjusted so that the discriminator is unable to identify the generated sequences into original or fake.</ns0:p><ns0:formula xml:id='formula_4'>L I N E A R C O N V 1 D Residual Blocks Generator Network L I N E A R C O N V 1 D 1 2 ReLU Conv1</ns0:formula></ns0:div>
<ns0:div><ns0:head>Translational Invariant:</ns0:head><ns0:p>Translational invariance is a property of CNNs. Consider an example to clarify what translational invariant implies, imagine that we have trained a model that can predict a presence of boat in a picture. Even if the identical image is translated to the right, CNN will still recognize it as a boat. However, because there is no method for CNN to predict translational property, this prediction ignores the extra information that the boat is moved to the right. Translational equivariance is required, indicating that the position of the object in the image should not be fixed in order for the CNN to detect it, but the CNN cannot identify the presence or position of one object related to others. Moreover, this results in difficulty identifying objects that hold special spatial relationship between features. In order to explain that, consider an example of a dissembled boat as depicted in Figure <ns0:ref type='figure' target='#fig_6'>3</ns0:ref>. As CNN is looking for key features only, it will identify both as boat as spatial relationship between features is missing in case of CNN. However, for capsule network 3a is a boat whereas 3b is not considered to be a boat.</ns0:p></ns0:div>
<ns0:div><ns0:head>Large Data:</ns0:head><ns0:p>In order to learn the features, CNN requires a lot of data to generalize the results.</ns0:p><ns0:p>To overcome these constraints, the usage of CN is employed. Capsules are a group of neurons, where each neuron in a capsule represents various properties of a particular group of information. For example, if we consider 4 neurons each will be responsible for its own information like color, width, angle and height of particular information and the combination of all these four neurons is called as capsule. capsule dictates existence property, which means that there's a capsule in correspondence to each entity, which gives: Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>1. What is the likelihood that the information or entity exists.</ns0:p><ns0:p>2. Instantiation parameters of said entity.</ns0:p><ns0:p>The following are the main operations carried out within capsules:</ns0:p><ns0:p>Multiplication of the matrix of the input vectors with the weight matrix is calculated to encode the essential spatial link between low and high level features.</ns0:p><ns0:formula xml:id='formula_5'>xk| j = W jk x j + B k (5)</ns0:formula><ns0:p>The total of the weighted input vectors is used to select which higher-level capsule will receive the current capsule's output.</ns0:p><ns0:formula xml:id='formula_6'>s k = ∑ j c jk xk| j (6)</ns0:formula><ns0:p>After that, the squash function is used to apply non-linearity. The squashing function reduces a vector's length to a maximum of one and a minimum of zero, while retaining its orientation.</ns0:p><ns0:formula xml:id='formula_7'>v k = squash(s k ) (7)</ns0:formula></ns0:div>
<ns0:div><ns0:head>GOCAPGAN Discriminator</ns0:head><ns0:p>The GOCAPGAN Discriminator is divided into three sections to aid in the learning of how to discern between actual and fake proteins. To begin, low level features are obtained using a 1-D convolutional The input data from a convolutional layer if fed to a capsule network that learns internal representations' essential properties. This layer's output is transmitted to the principal capsule layer, which creates a combination of the observed features.</ns0:p><ns0:p>The input is fed into a convolutional sub-layer in this layer, after which it is passed to a reshaped sub-layer, which prepares the data for the squash operation before being passed to the capsule layer. The dynamic routing operation takes three rounds in the capsule layer. The data is then transmitted to a length layer. Finally, to verify whether the input sequence is real or bogus, a linear transformation is applied.</ns0:p><ns0:p>The GOCAPGAN Discriminator architecture has a total of 4,029,697 trainable parameters. RMSprop <ns0:ref type='bibr' target='#b58'>(Tieleman and Hinton, 2012)</ns0:ref> was used as an optimizer, with alpha set to 0.99 and eps set to 1e-08. The rate of learning was set at 0.0001. Various optimization algorithms are available and have been tested;</ns0:p><ns0:p>RMSprop delivered the best performance and accuracy on the stated dataset. Finally, the second stage is described in the following subsection, in which the GOCAPGAN model's parameters are used to predict protein functions using transfer learning and multi-label classifiers as shown in Figure <ns0:ref type='figure' target='#fig_9'>4</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Transfer Learning</ns0:head><ns0:p>Transfer learning, also known as extracted features transferability, is a critical property of applying deep learning models to any problem. Transfer learning works by detaching the trained model's final layer, saving the weights of the previous layers, and then attaching a new final layer at the end. The features learned can be applied to a range of challenges if this update is useful for other sorts of classification.</ns0:p><ns0:p>Transfer learning was carried out in the GOCAPGAN model architecture in the following way:</ns0:p><ns0:p>Discriminator contains all the traits that can be used to distinguish among the actual and forged protein sequences. As a result, only the GOCAPGAN Discriminator was taken into account for transfer learning.</ns0:p><ns0:p>This GOCAPGAN Discriminator is now given genuine protein sequences without the last layer, and it generates features for them. These characteristics are then saved. The multi-label classifier receives the features obtained from this GOCAPGAN Discriminator minus the last layer, as well as their functions or classifications.</ns0:p></ns0:div>
<ns0:div><ns0:head>Multi-label Classifier</ns0:head><ns0:p>The extracted features and classes are the two inputs to the multi-label classifier. The features extracted originate from putting genuine proteins through the upgraded GOCAPGAN Discriminator, which is missing the last linear layer. The Gene Ontology (GO) class represents protein functions. The input is subsequently passed to the multi-label classifier's only dense layer. The dense layer outputs indicates the number of function projection. The dense layer uses a sigmoid activation algorithm. The binary crossentropy loss <ns0:ref type='bibr' target='#b61'>(Vincent et al., 2010)</ns0:ref> is used to calculate error, and it is given as:</ns0:p><ns0:formula xml:id='formula_8'>J(θ ) = − 1 m [ m ∑ i=1 y (i) logh θ (x (i) ) + (1 − y (i) )log(1 − h θ (x (i) ))]<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>On the proposed model, many optimizers were tested but Adam provided the best performance and accuracy. Adam was used to train the GOCAPGAN multi-label classifier.</ns0:p></ns0:div>
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>A protein is associated in many processes whether they be biological, molecular or simple phenotypic, this essential piece of information is acquired from its function. It also clarifies how various molecules interact with one another. Several approaches for standardizing protein function concepts have been presented, and we choose the most frequently used, the Gene Ontology (GO). This model proposes the preparation and training of only all three elements of gene ontology. One of the best strategy for mass computational studies in GO since it has wide range of general adoption and consistency across species. The data and code files for the proposed GOCAPGAN model is available at: https://github.com/musadaqmansoor/gocapgan.</ns0:p></ns0:div>
<ns0:div><ns0:head>Details of Dataset</ns0:head><ns0:p>Proteins from Homo sapiens were used in the experiment. The Uniprot <ns0:ref type='bibr' target='#b13'>(Consortium, 2015)</ns0:ref> database was used to acquire proteins. The system yielded a total of 72,945 proteins. Tremble (tr) entries that had not been evaluated and swiss-port entries that had been reviewed were among the proteins (sp). Protein length is governed by the number of residues, which varies amongst proteins. In Homo sapiens species, the length ranges up to 34,358 residues. A few sequences exceed 2000 residues in length, which is the highest residue length considered for computation. As a result, 70,956 proteins were employed in total. It's worth noting that our concept can be applied to different species without any changes. Longer sequences can be trained as well, however more computational resources and longer training time might be required. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Target Classes</ns0:head><ns0:p>Homo sapiens proteins were utilized in the suggested system.A conventional archive containing n all Homo sapiens protein sequences was used as the ground truth for the previously mentioned dataset. The suggested method is applicable to all conceivable Homo sapiens gene ontology classes. Because the GOCAPGAN model requires classes to have at least 16 protein sequences, there are 421 classes that are eligible to run the model. Twenty-five of these classes have been chosen for multi-label classification.</ns0:p><ns0:p>The detail of these classes are given in Table <ns0:ref type='table' target='#tab_4'>1</ns0:ref>. These classes were chosen based on the fact that they occur frequently. GTPase activity</ns0:p></ns0:div>
<ns0:div><ns0:head>Setup for Experiment</ns0:head><ns0:p>The suggested model is trained, tested, and validated using Google Colab as the standard system. CuDNN, Keras, Pytorch and Tensorflow libraries are used to implement the GOCAPGAN model in software.</ns0:p></ns0:div>
<ns0:div><ns0:head>Preparation</ns0:head></ns0:div>
<ns0:div><ns0:head>GOCAPGAN Training</ns0:head><ns0:p>RMSProp is utilised as an optimizer for GOCAPGAN model training, while Wasserstein loss is employed as an evaluation metric. </ns0:p></ns0:div>
<ns0:div><ns0:head>GOCAPGAN Classifier</ns0:head><ns0:p>Table <ns0:ref type='table' target='#tab_8'>3</ns0:ref> shows the various parameters and their values for multi-label classifier testing and training.</ns0:p></ns0:div>
<ns0:div><ns0:head>Quantitative Analysis</ns0:head><ns0:p>The system's performance was assessed via repeated k-fold cross validation. The number of splits (k) is three, and the number of repeats is five. The suggested model was evaluated using many performance indicators, including F1 score, recall, precision and hamming loss, which were stated as: Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p><ns0:formula xml:id='formula_9'>HammingLoss = 1 NL L ∑ l=1 N ∑ i=1 Y i,l X l,i<ns0:label>(12)</ns0:label></ns0:formula><ns0:p>On 421 gene ontology classes, the performance of the GOCAPGAN model was calculated and reported in Table <ns0:ref type='table' target='#tab_9'>4</ns0:ref>. The classes belonged to Homo sapiens only. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science observed that each function is treated separately and independently in our method. In general, a protein's ability to perform one function does not exclude it from doing others. As a result, our approach predicts each protein's function without prejudice. Despite this, there are links between functions. Let's imagine you have highly related functions X and Y, and having function X increases your chances of getting function Y. Our method assumes that annotated proteins have detailed functional annotations and uses this information to predict functions for proteins that aren't annotated. These annotated proteins could, in fact, have other activities that have yet to be found. With the passage of time and experimental research on protein function prediction, annotation of protein function may be on the path of further completion.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>In computational biology and bioinformatics, one of the major concerns is determining the functions of newly discovered proteins. Many conventional methods are still used to bridge the gap between protein structure and function annotations; however, these methods have low accuracy. The current study proposes a novel deep learning model for protein categorization built on the fusion of a Capsule Network and a GAN architecture, and shows how Capsule Networks can be applied to structural biology problems. To our knowledge, our team is the first in the field to use Capsule Networks in conjunction with GANs to generate protein sequences while also learning their internal information. The results reveal that Capsule Networks outperform long-established convolutional networks in terms of accuracy.</ns0:p><ns0:p>We intend to investigate further Capsule Network variants in the future, such as the Convolutional Fully-Connected Capsule Network (CFC-CapsNet) and the Prediction-Tuning Capsule Network (PT-CapsNet).</ns0:p><ns0:p>These architectures are novel, fast Capsule Networks, and they may make it possible to capture additional qualities that could lead to higher assessment scores.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Kulmanov et al. also extended their work to DeepGOPlus. They created a unique technique for function prediction based solely on sequence, combining a deep CNN model with sequence similarity predictions. Their CNN approach analyses the sequence for motifs that predict protein activities and combines them with related protein functionalities (if available). The limitation of this technique was that it worked better for similar sequences (Kulmanov and Hoehndorf, 2021). Rifaioglu et al. used DEEPred for solving the function prediction problem. DEEPred was tuned and benchmarked utilizing three types of protein descriptors, training datasets of various sizes, and GO terms from various levels. Electronically created GO annotations were also included in the training procedure to see how training with bigger but noisier data would affect performance (Sureyya Rifaioglu et al., 2019). Gligorijevic et al. used deep network fusion (deepNF) for the solution of the function prediction problem.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>de Jesus et al., 2018) describes the implementation and application of a Capsule Network architecture to the classification of RAS protein family structures. HRAS and KRAS structures were successfully classified using a suggested Capsule Network trained on 2D and 3D structural encoding. In both of these</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Working of a GAN. Random noise z drawn from p(z) is supplied to the Generator, which produces data points; z denotes a sample whereas p(z) denotes its probability distribution. The Discriminator receives both real data points and the data produced by the Generator, assigns a value to each input, and thereby determines whether it is genuine or generated.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>denotes the entropy term for real data points supplied to the Discriminator, with the goal of pushing its output towards one, while the second component of Eq (3) is the entropy term for generated data points sent to the Discriminator, with the goal of pushing its output towards zero. Overall, the Generator tries to decrease the objective function whereas the Discriminator tries to maximize it. GANs are frequently trained to reduce such divergences, although these are not always well behaved with respect to the generator parameters, which poses problems for GAN training. Arjovsky et al. (Martin Arjovsky and</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. GOCAPGAN GAN working. The first step passes the latent vector to the generator; sequences are generated and passed to the discriminator. The second step classifies each generated sequence as real or fake. The discriminator and generator models are readjusted until the discriminator can no longer distinguish the generated sequences from original ones.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. For a CNN, both 3a and 3b are boats, as the mere presence of the parts indicates the object's existence. For a capsule network, however, 3a is a boat whereas 3b is not considered to be a boat.</ns0:figDesc><ns0:graphic coords='7,215.73,453.85,124.07,111.13' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>layer. The second part is a set of residual blocks that transform the data into a distinguishable representation. The GOCAPGAN Discriminator's third component is a linear layer that reports the probability of the input sequence being real or fake and can be used to evaluate the Discriminator's accuracy. The GOCAPGAN Discriminator is given both the protein sequences produced by the GOCAPGAN Generator and genuine protein sequences from the dataset; by traversing its six residual blocks, it learns to distinguish between synthesised and actual protein sequences.</ns0:figDesc></ns0:figure>
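A hedged PyTorch sketch of the three-part Discriminator just described (an input layer, six residual blocks, and a final linear real/fake output); the channel sizes, the 1-D convolution details, and the one-hot vocabulary size are illustrative assumptions, and the capsule layer that GOCAPGAN adds to the Discriminator is not shown here.

import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.ReLU(), nn.Conv1d(channels, channels, kernel_size=5, padding=2),
            nn.ReLU(), nn.Conv1d(channels, channels, kernel_size=5, padding=2),
        )
    def forward(self, x):
        return x + 0.3 * self.block(x)          # residual connection

class Discriminator(nn.Module):
    def __init__(self, vocab=26, seq_len=160, channels=64):    # 160 = sequence length in Table 2
        super().__init__()
        self.first = nn.Conv1d(vocab, channels, kernel_size=1)              # part 1: input layer
        self.res = nn.Sequential(*[ResBlock(channels) for _ in range(6)])   # part 2: six residual blocks
        self.out = nn.Linear(channels * seq_len, 1)                         # part 3: real/fake score
    def forward(self, x):                       # x: (batch, vocab, seq_len) one-hot encoded sequences
        h = self.res(self.first(x))
        return self.out(h.flatten(1))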
<ns0:figure xml:id='fig_8'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 illustrates the proposed Capsule Network design incorporated in the Discriminator of the GAN.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. GOCAPGAN transfer learning mechanism. The first step deletes the last layer of the Discriminator and saves its weights. The truncated Discriminator then outputs features for the real protein sequences, and these features are saved. Step 2 passes the stored features to a multi-label classifier for protein function prediction.</ns0:figDesc></ns0:figure>
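A minimal PyTorch sketch of the two steps in Figure 4: drop the Discriminator's final layer, reuse the remaining trained weights as a feature extractor on real sequences, and pass the stored features to the multi-label classifier; trained_disc, real_sequences, clf and go_labels are assumed to exist, and stripping the last child layer in this way presumes a sequential-style model.

import torch
import torch.nn as nn

# Step 1: remove the last (real/fake) layer and keep the trained weights.
feature_extractor = nn.Sequential(*list(trained_disc.children())[:-1])
feature_extractor.eval()

with torch.no_grad():                            # features extracted from real protein sequences
    feats = feature_extractor(real_sequences).flatten(1)

# Step 2: the saved features become the input of the multi-label classifier.
# clf.fit(feats.numpy(), go_labels)              # clf: the Table 3 classifier, go_labels: Table 1 targets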
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>Computational and Experimental methods are two main ways of calculating protein functions. Experimental methods make use of biological experiments to confirm and authenticate protein functions. One of</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Classes for Multi Label Classification.</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Gene Ontology Description</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0046872</ns0:cell><ns0:cell>Metal ion binding</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0005524</ns0:cell><ns0:cell>ATP binding</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0003677</ns0:cell><ns0:cell>DNA binding</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0008270</ns0:cell><ns0:cell>Zinc ion binding</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0044822</ns0:cell><ns0:cell>RNA Binding</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0003700</ns0:cell><ns0:cell>DNA-binding transcription factor activity</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0004930</ns0:cell><ns0:cell>G protein-coupled receptor activity</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0042803</ns0:cell><ns0:cell>protein homodimerization activity</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0005509</ns0:cell><ns0:cell>Calcium ion binding</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0004984</ns0:cell><ns0:cell>Olfactory receptor activity</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0003723</ns0:cell><ns0:cell>RNA Binding</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0003682</ns0:cell><ns0:cell>chromatin binding</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0004674</ns0:cell><ns0:cell>protein serine/threonine kinase activity</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0043565</ns0:cell><ns0:cell>sequence-specific DNA binding</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0000166</ns0:cell><ns0:cell>nucleotide binding</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0005525</ns0:cell><ns0:cell>GTP binding</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0000978</ns0:cell><ns0:cell>RNA polymerase II cis-regulatory region sequence-specific DNA binding</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0042802</ns0:cell><ns0:cell>identical protein binding</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0019899</ns0:cell><ns0:cell>enzyme binding</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0019901</ns0:cell><ns0:cell>protein kinase binding</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0005102</ns0:cell><ns0:cell>signaling receptor binding</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0098641</ns0:cell><ns0:cell>cadherin binding involved in cell-cell adhesion</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0008134</ns0:cell><ns0:cell>transcription factor binding</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0031625</ns0:cell><ns0:cell>ubiquitin protein ligase binding</ns0:cell></ns0:row><ns0:row><ns0:cell>GO:0003924</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head /><ns0:label /><ns0:figDesc>Table 2 indicates the hyperparameter values for GOCAPGAN model training.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Parameters Set for GOCAPGAN GAN Training</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameter</ns0:cell><ns0:cell>Value</ns0:cell></ns0:row><ns0:row><ns0:cell>Batch Size</ns0:cell><ns0:cell>32</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Length of Sequence 160</ns0:cell></ns0:row><ns0:row><ns0:cell>Epochs</ns0:cell><ns0:cell>12</ns0:cell></ns0:row><ns0:row><ns0:cell>Lambda</ns0:cell><ns0:cell>10</ns0:cell></ns0:row><ns0:row><ns0:cell>Noise</ns0:cell><ns0:cell>128</ns0:cell></ns0:row><ns0:row><ns0:cell>Rate of Learning</ns0:cell><ns0:cell>0.0001</ns0:cell></ns0:row><ns0:row><ns0:cell>Optimizer</ns0:cell><ns0:cell>RMSprop</ns0:cell></ns0:row><ns0:row><ns0:cell>Loss Function</ns0:cell><ns0:cell>Wasserstein loss</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Parameters Set for GOCAPGAN Multi-Label Classifier Training</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameter</ns0:cell><ns0:cell>Value</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of Folds</ns0:cell><ns0:cell>3</ns0:cell></ns0:row><ns0:row><ns0:cell>Loss</ns0:cell><ns0:cell>Binary Cross Entropy</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of Repeats</ns0:cell><ns0:cell>5</ns0:cell></ns0:row><ns0:row><ns0:cell>Epochs</ns0:cell><ns0:cell>40</ns0:cell></ns0:row><ns0:row><ns0:cell>Optimizer</ns0:cell><ns0:cell>Adam</ns0:cell></ns0:row><ns0:row><ns0:cell>precision = tp / (tp + fp)</ns0:cell><ns0:cell>(9)</ns0:cell></ns0:row><ns0:row><ns0:cell>recall = tp / (tp + fn)</ns0:cell><ns0:cell>(10)</ns0:cell></ns0:row><ns0:row><ns0:cell>F1 score = 2 × (precision × recall) / (precision + recall)</ns0:cell><ns0:cell>(11)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>421 Gene Ontology Classes Results</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Precision</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>F1 Score</ns0:cell></ns0:row><ns0:row><ns0:cell>GOCAPGAN</ns0:cell><ns0:cell>84.1</ns0:cell><ns0:cell>77.4</ns0:cell><ns0:cell>93.2</ns0:cell><ns0:cell>84.5</ns0:cell></ns0:row></ns0:table><ns0:note>Comparison with other Techniques. The GOCAPGAN model, which was developed in this study, is compared to DeepSeq, BLAST and GOGAN. BLAST uses homology-based annotation transfer to predict protein function based only on sequence information; it is categorised as a local alignment algorithm and searches for hits among protein sequences based on local region similarity. For proteins from the Homo sapiens species, the precision, hamming loss, recall and F1 score of the GOCAPGAN, BLAST, GOGAN and DeepSeq models are shown in Table 5. As seen in Table 5, the GOCAPGAN model covers more than twice as many targeted classes as GOGAN. In terms of precision and F1 score, the GOCAPGAN model outperforms DeepSeq, GOGAN and BLAST.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Evaluation metrics of the GOCAPGAN model compared to GOGAN <ns0:ref type='bibr' target='#b44'>(Mansoor et al., 2022)</ns0:ref>, DeepSeq <ns0:ref type='bibr' target='#b46'>(Nauman et al., 2019)</ns0:ref>, and the BLAST <ns0:ref type='bibr' target='#b6'>(Altschul et al., 1990)</ns0:ref> method. The functional annotation of proteins is critical now that the genomes of various model species have been sequenced. In the current research, a deep learning based model, GOCAPGAN, is suggested that exploits Generative Adversarial Networks along with Capsule Networks for synthesising protein sequences. As this synthesis process enables our model to learn optimal features, these features are then utilized in predicting protein functions directly from protein sequences. Unlike some currently available methods of function prediction, this model does not need custom-built attributes; instead, the architecture extracts information automatically from the sequences presented to the model. GOCAPGAN uses a convolutional layer in conjunction with a capsule layer to capture more features. Capsules outperform a standard CNN since a capsule combines the learned features and the associations among the various elements in a single output vector. Experimental results clearly indicate the usefulness of the suggested GOCAPGAN paradigm. For the time being, the model has only been tested and verified on the UniProt dataset, which is freely available.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>Classes</ns0:cell><ns0:cell>Precision</ns0:cell><ns0:cell>Recall</ns0:cell><ns0:cell>F1 Score</ns0:cell><ns0:cell>Hamming Loss</ns0:cell></ns0:row><ns0:row><ns0:cell>GOCAPGAN</ns0:cell><ns0:cell>25</ns0:cell><ns0:cell>0.904</ns0:cell><ns0:cell>0.761</ns0:cell><ns0:cell>0.826</ns0:cell><ns0:cell>0.085</ns0:cell></ns0:row><ns0:row><ns0:cell>GOGAN</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>0.852</ns0:cell><ns0:cell>0.625</ns0:cell><ns0:cell>0.721</ns0:cell><ns0:cell>0.095</ns0:cell></ns0:row><ns0:row><ns0:cell>DeepSeq</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0.76</ns0:cell><ns0:cell>0.66</ns0:cell><ns0:cell>0.71</ns0:cell><ns0:cell>0.133</ns0:cell></ns0:row><ns0:row><ns0:cell>BLAST</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0.46</ns0:cell><ns0:cell>0.64</ns0:cell><ns0:cell>0.53</ns0:cell><ns0:cell>0.387</ns0:cell></ns0:row><ns0:row><ns0:cell>DISCUSSION</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Original Article Title:
Gene Ontology Capsule GAN: an improved architecture for protein function prediction
To:
Academic Editor, PeerJ Computer Science
Re: Response to Editor Comments
Dear Editor,
We want to thank the reviewers for their valuable comments on the submitted research article. We
have edited the article to address concerns of all the reviewers. We believe that the manuscript
presentation is now more sound and convincing.
We are uploading the following
(a) Our point-by-point response to the comments (response to Editor & Reviewers)
(b) An updated manuscript with yellow highlighting indicating changes
(c) A clean updated manuscript without highlights (PDF main document).
Best Regards,
Musadaq Mansoor, Mohammad Nauman, Hafeez Ur Rehman and Maryam Omar
REVIEWER 1:
Concern # 1. In Figure 2, the author mentions a 9*9 input sequence in the detail design of the capsule
network; nevertheless, the protein sequence is just one dimension. Please clarify this uncertainty.
Author Response: Thank you for the proposed correction. We have updated Figure 2 in the manuscript.
Concern # 2. Why did the authors use a capsule network at the GOCAPGAN discriminator level but not
in the generator?
Author Response: Thank you for the insightful question. For the classification of protein functions, the real work
is done in the discriminator part, as the discriminator learns the features that distinguish between real
and fake proteins. That is why incorporating the capsule network at the discriminator level was more appropriate for
our proposed solution.
Concern # 3. Figure 3 was not cited in the paper by the authors.
Author Response: We have updated the manuscript. The reference has been added at the end of
subsection GOCAPGAN Discriminator.
Concern # 4. The authors mention in line 232 that they would consider an example of a disassembled
boat, yet no visual picture of the boat is offered.
Author Response: Thank you for the suggestion. We have included the visual representation of the boat
in the manuscript as Figure 3.
REVIEWER 2:
Concern # 1- As mentioned in the paper GOCAPGAN is an extension of GOGAN, however there are
many similarities between GOCAPGAN and GOGAN, so it is suggested to update some content of
GOCAPGAN.
Author Action: Thank you for your suggestion. We have updated our manuscript. All changes are
reflected in tracked and clean manuscript.
Concern # 2- How the values of Table 2 and Table 3 were achieved?
Author Response: Thank you for your thoughtful inquiry. The values in Tables 2 and 3 were chosen empirically: we
compared numerous candidate settings and reported the ones that gave the best results.
Concern # 3- In results, what is the difference between Table 4 and Table 5?
Author Response: Thank you for raising an important point. Regarding this, please note that Table 4
represents weighted results over the 421 classes (those having at least 16 protein sequences), whereas Table 5
represents multi-label results for 25 classes.
REVIEWER 3:
Concern # 1. What is the difference between GAN (with CNN) and GAN (with Capsule), and how does
it solves your prediction problem?
Author Response: Thank you for the question.
GAN (with CNN): In a CNN, information is stored in scalar form.
GAN (with Capsule): In a Capsule Network, information is stored in vector form, as depicted in Equation 5 of
the manuscript.
How this helps the prediction problem: because more information is preserved in vector form, the Capsule Network
extracts richer features, which gives better classification results.
Concern # 2. The writers conducted a study to compile background information and utilized current
research articles to support the problem indicated in the introduction, however, include one or two
recent literature reviews to complete it.
Author Response: Thank you for your valuable suggestion. We have included the following in our
Literature Review section.
Kulmanov et al. also extended his work to DeepGOPlus. They created a unique technique for function
prediction based solely on sequence. They combined a deep CNN model with sequence similarity
predictions. Their CNN approach analyses the sequence for themes that predict protein activities and
combines them with related protein functionalities (if available). The problem with this technique was
that it worked better for similar sequences (Kulmanov and Hoehndorf, 2021). Rifaioglu et al used
DEEPred for solving the function prediction problem. DEEPred was tuned and benchmarked utilizing
three types of protein descriptors, training datasets of various sizes, and GO keywords from various
levels. Electronically created GO annotations were also included in the training procedure to see how
training with bigger but noisy data would affect performance (Sureyya Rifaioglu et al., 2019).
Concern # 3. However, the authors did not cite Figure 3 in the article.
Author Response: We have updated the manuscript. The reference has been added at the end of
subsection GOCAPGAN Discriminator.
Concern # 4. The authors need to explain why they did not include a hamming loss in Table 4.
Author Response: Thank you for the insightful question. Hamming Loss is an evaluation metric for multi-label
classification. Since Table 4 reports weighted results over the classes rather than multi-label predictions, Hamming
Loss is not needed there.
" | Here is a paper. Please give your review comments after reading it. |
693 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Cancer's genomic complexity is gradually increasing as we learn more about it. Genomic classification of various cancers is crucial in providing oncologists with vital information for targeted therapy. Thus, issues of patient genomic classification become more pertinent to address. Prostate cancer is a cancer subtype that exhibits extreme heterogeneity. Prostate cancer contributes to 7.3% of new cancer cases worldwide, with a high prevalence in males. Breast cancer is the most common type of cancer in women and the second most significant cause of death from cancer in women. Breast cancer is caused by abnormal cell growth in the breast tissue, generally referred to as a tumour. Tumours are not synonymous with cancer; they can be benign (noncancerous), pre-malignant (precancerous), or malignant (cancerous). Fine Needle Aspiration (FNA) tests are used to biopsy the breast to diagnose breast cancer. Artificial Intelligence (AI) and Machine Learning (ML) models are used to diagnose with varying accuracy. In light of this, we used the Genetic Folding (GF) algorithm in predicting prostate cancer status in a given dataset.</ns0:p><ns0:p>An accuracy of 96% was obtained, thus being the current highest accuracy in prostate cancer diagnosis. The model was also used in breast cancer classification with a proposed pipeline that used Exploratory Data Analysis (EDA), label encoding, feature standardization, feature decomposition, log transformation, detect and remove the outliers with Z-score, and the BAGGINGSVM approach attained a 95.96% accuracy. The accuracy of this model was then assessed using the rate of change of PSA, age, BMI, and filtration by race. We discovered that integrating the rate of change of PSA and age in our model raised the model's area under the curve (AUC) by 6.8%, whereas BMI and race had no effect. As for breast cancer classification, no features were removed.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Cancer is, at times, severe disease or set of diseases that have historically been the most prevalent and challenging to treat <ns0:ref type='bibr' target='#b0'>(Adjiri, 2016)</ns0:ref>. It is commonly defined as the abnormal proliferation of various human cells, thus resulting in its etiological heterogeneity <ns0:ref type='bibr' target='#b13'>(Cooper, 2000)</ns0:ref>. This abnormal growth can be classified into two subsets (a) Malignant and (b) Benign. Benign tumours stay localized to their original site, whereas malignancies can invade and spread throughout the body (metastasize) <ns0:ref type='bibr' target='#b13'>(Cooper, 2000)</ns0:ref>. Breast cancer recently eclipsed lung cancer as the most prominent cancer subtype worldwide, equalling 11.7% (2.3 million) of the new cancer cases <ns0:ref type='bibr' target='#b34'>(Sung et al., 2021)</ns0:ref>.</ns0:p><ns0:p>Prostate cancer is one of the most prevalent male malignancies globally, contributing 7.3% of the estimated incidence in 2020 <ns0:ref type='bibr' target='#b34'>(Sung et al., 2021)</ns0:ref>. This amounted to 1,414,259 new cases and 375,304 deaths from this disease <ns0:ref type='bibr' target='#b34'>(Sung et al., 2021)</ns0:ref>. The prostate is a dense fibromuscular gland shaped like an upside-down cone. It is located around the neck of the urinary bladder, external to the urethral sphincter and functions in a supportive role in the male reproductive system. The alkaline fluid it secretes into semen protects sperm cells from the acidic vaginal environment <ns0:ref type='bibr' target='#b33'>(Singh & Bolla, 2021)</ns0:ref>.</ns0:p><ns0:p>Although early prostate cancer is typically asymptomatic, it may manifest itself in the form of excessive urination, nocturia, haematuria, or dysuria <ns0:ref type='bibr' target='#b21'>(Leslie et al., 2021)</ns0:ref>. Classically, prostate cancer is detected with a Digital Rectal Examination (DRE) and a blood test for prostate-specific antigen (PSA) <ns0:ref type='bibr' target='#b15'>(Descotes, 2019)</ns0:ref>. TRUS-guided biopsy continues to be the gold standard for confirming diagnoses, despite its 15-46% false-negative rate and up to 38% tumour under grading rate compared to the Gleason score <ns0:ref type='bibr' target='#b15'>(Descotes, 2019;</ns0:ref><ns0:ref type='bibr' target='#b20'>Kvåle et al., 2009)</ns0:ref>.</ns0:p><ns0:p>However, the etiology and mechanisms underlying prostate cancer development are still determined <ns0:ref type='bibr' target='#b18'>(Howard et al., 2019)</ns0:ref>. The different mechanisms that develop prostate cancer ultimately affect the therapy proposed. Hence, patient stratification and tumour classification are PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69883:1:0:NEW 5 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science vital. Especially true considering that prostate cancer is heterogeneous in a clinical, spatial and morphological aspect <ns0:ref type='bibr' target='#b39'>(Tolkach & Kristiansen, 2018)</ns0:ref>. The following prostate cancer stages of development are currently posited: intraepithelial-neoplasia, androgen-dependent adenocarcinoma, and androgen-independent or Castration Resistant Cancer (CRC) <ns0:ref type='bibr' target='#b18'>(Howard et al., 2019)</ns0:ref>. 
Prostate cancer heterogeneity is further highlighted in a recent study shedding light on mRNA expressional variations from a normal prostate to full metastatic disease <ns0:ref type='bibr' target='#b25'>(Marzec et al., 2021)</ns0:ref>. This means that Metastatic CRC (mCRC) tumours are more complicated than primary prostate tumours. An association that is further aggravated when genomics come into play.</ns0:p><ns0:p>Prostate cancer development and progression have been heavily linked to Androgen Receptor (AR) signalling pathway; thus, Androgen Deprivation Therapy (ADT) has been used for patients who have advanced prostate cancer <ns0:ref type='bibr' target='#b17'>(Hatano & Nonomura, 2021)</ns0:ref>. Although primarily effective, a large proportion of patients develop androgen-independent or CRC. Thus, pharmacological therapies are considered, including abiraterone, enzalutamide, docetaxel and radium-223 <ns0:ref type='bibr' target='#b18'>(Howard et al., 2019)</ns0:ref>.</ns0:p><ns0:p>As mentioned above, breast cancer has become the most common cancer diagnosed, eclipsing lung cancer. The etiology of breast cancer is multi-faceted; many risk factors play a role in the likelihood of a diagnosis; these risk factors can be sub-divided into seven groups. (1) age (2) gender</ns0:p><ns0:p>(3) previous diagnosis of breast cancer (4) histology (5) family history of breast cancer, (6) reproduction-related risk factors and (7) exogenous hormone use <ns0:ref type='bibr' target='#b2'>(Alkabban & Ferguson, 2022)</ns0:ref>.</ns0:p><ns0:p>Race may also indicate a higher prevalence in non-Hispanic white individuals than African Americans, Hispanics, Native Americans, and Asian Americans <ns0:ref type='bibr' target='#b2'>(Alkabban & Ferguson, 2022)</ns0:ref>.</ns0:p><ns0:p>Many modalities of screening exist with varying specificities and sensitivities to breast cancer.</ns0:p><ns0:p>Mammography is one of the front-line tests to screen for breast cancer, with its sensitivity ranging from 75-90% and its specificity ranging from 90-95% <ns0:ref type='bibr' target='#b8'>(Bhushan et al., 2021)</ns0:ref>. Screening is done at a macro level to determine whether the growth is malignant or benign. However, classification at the micro-level is required to identify the molecular basis behind breast cancer progression to target specific therapies. Luminal-A tumours (58.5%) are the most prevalent subtype of breast cancer tumours, then triple-negative (16%), luminal-B (14%) and HER-2 positive (11.5%) being the least prevalent <ns0:ref type='bibr' target='#b4'>(Al-thoubaity, 2019)</ns0:ref>.</ns0:p><ns0:p>Breast cancer tumours are classified using the TNM classification system in which the primary tumour is denoted as T, the regional lymph nodes as N, and distant metastases as M. Breast cancer Manuscript to be reviewed</ns0:p><ns0:p>Computer Science can also be classified about its invasiveness, with lobular carcinoma in situ (LCIS) and ductal carcinoma in situ (DCIS) being non-invasive. Invasive ductal cancer accounts for 50-70% of invasive cancers, while invasive ductal cancer accounts for 10% <ns0:ref type='bibr' target='#b2'>(Alkabban & Ferguson, 2022)</ns0:ref>.</ns0:p><ns0:p>Once the molecular basis of breast cancer is identified, treatment can begin. 
Many chemotherapeutic agents are used in the treatment of breast cancer, including Tamoxifen (ERpositive breast cancer), Pertuzumab (HER2-overexpressing breast cancer) and Voxtalisib (HR+/HER-advanced breast cancer) <ns0:ref type='bibr' target='#b8'>(Bhushan et al., 2021)</ns0:ref>.</ns0:p><ns0:p>This paper aims to develop a proper kernel function using GF to separate benign prostate and breast cells from tumour-risk genetic cells using SVM, compared to six machine learning algorithms. In addition, the proposed GF-SVM classifier can classify breast and prostate cancer cells better than the six different classifiers. We have also included all features found in the datasets to test the ability of the proposed modelling to predict the best accuracy. The proposed GF-SVM implementation is reliable, scalable, and portable and can be an effective alternative to other existing evolutionary algorithms <ns0:ref type='bibr' target='#b26'>(Mezher, 2022)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Literature Review</ns0:head><ns0:p>Artificial intelligence (AI) is revolutionizing healthcare, incredibly patient stratification and tumour classification, which are pivotal components of targeted oncology. Training a Machine</ns0:p><ns0:p>Learning (ML) model to analyze large datasets means more accurate prostate cancer diagnoses <ns0:ref type='bibr' target='#b38'>(Tătaru et al., 2021)</ns0:ref>. Artificial Neural Networks (ANN) is a tool that has been used in advanced prognostic models for prostate cancer <ns0:ref type='bibr' target='#b19'>(Jović et al., 2017)</ns0:ref>. Other models employed in the classification of cancers based on gene expression include K-Nearest Neighbor (KNN) and Support Vector Machines (SVM). The accuracy of these models varies between 70% and 95%, depending on the number of genes analyzed. Previous studies have used these and other ML models to classify prostate cancer <ns0:ref type='bibr' target='#b11'>(Bouazza et al., 2015)</ns0:ref> thus, we aim to use the Genetic Folding (GF) algorithm in classifying patients with prostate cancer. <ns0:ref type='bibr' target='#b1'>(Alba et al., 2007)</ns0:ref> used a hybrid technique for gene selection and classification. They presented data of high dimensional DNA Microarray. They found it initiates a set of suitable solutions early in their development phase. Similarly <ns0:ref type='bibr' target='#b35'>(Tahir & Bouridane, 2006)</ns0:ref>, found that using a hybrid algorithm significantly improved their results in classification accuracy. However, they concluded their algorithm was generic and fit for diagnosing other diseases such as lung or breast cancer.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69883:1:0:NEW 5 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science <ns0:ref type='bibr' target='#b9'>(Bouatmane et al., 2011)</ns0:ref> used an RR-SFS method to find a 99.9% accuracy. This was alongside other classification methods such as bagging/boosting with a decision tree. <ns0:ref type='bibr' target='#b22'>(Lorenz et al., 1997)</ns0:ref> compared the results from two commonly used classifiers, KNN and Bayes classifier. They found 78% and 79% respectfully. They had the opportunity to improve these results for ultrasonic tissue characterisation significantly.</ns0:p><ns0:p>Developing a quantitative CADx system that can detect and stratify the extent of breast cancer (BC) histopathology images was the primary clinical objective for <ns0:ref type='bibr' target='#b6'>(Basavanhally et al., 2010)</ns0:ref>, as they demonstrated in their paper the ability to detect the extent of BC using architectural features automatically. BC samples with a progressive increase in Lymphocytic Infiltration (LI) were arranged in a continuum. The region-growing algorithm and subsequent MRF-based refinement allow LI to isolate from the surrounding BC nuclei, stroma, and baseline level of lymphocytes <ns0:ref type='bibr' target='#b6'>(Basavanhally et al., 2010)</ns0:ref>. The ability for this image analysis to classify the extent of LI into low, medium and high categories can show promising translation into prognostic testing.</ns0:p><ns0:p>A metanalysis identified the number of trends concerning the different types of machine learning methods in predicting cancer susceptibility and outcomes. 
It was found that a growing number of machine learning methods usually improve the performance or prediction accuracy of the prognosis, particularly when compared to conventional statistical and expert-based systems <ns0:ref type='bibr' target='#b14'>(Cruz & Wishart, 2006)</ns0:ref>. There is no doubt that improvements in experimental design alongside biological validation would enhance many machine-based classifiers' overall quality and reproducibility <ns0:ref type='bibr' target='#b14'>(Cruz & Wishart, 2006)</ns0:ref>. As the quality of studies into machine learning classifiers improves, there is no doubt that they will become more routine use in clinics and hospitals.</ns0:p></ns0:div>
<ns0:div><ns0:head>The Proposed Model</ns0:head></ns0:div>
<ns0:div><ns0:head>A. Dataset</ns0:head><ns0:p>The prostate dataset of 100 patients (Shown in Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>features that have been computed from digitized images of the cell nuclei. Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref> shows the first seven columns found in the breast dataset <ns0:ref type='bibr'>(Dua et. al, 2019)</ns0:ref>.</ns0:p><ns0:p>All the prostate cancer dataset observations included in this experiment have been collected from <ns0:ref type='bibr'>Kaggle.com (Sajid, 2021)</ns0:ref>. Prostate and Breast cancer patients were labeled with M, whereas those without the cancer were labeled with B, as seen in the following tables. </ns0:p></ns0:div>
<ns0:div><ns0:head>B. Support Vector Machine</ns0:head><ns0:p>This section will go through the basic SVM concepts in the context of two-class classification problems, either linearly or non-linearly separable. SVM is based on the Vapnik-Chervonenkis (VC) theory and the Structural Risk Minimization (SRM) principle <ns0:ref type='bibr'>(Boser et al., 1992)</ns0:ref> and <ns0:ref type='bibr'>(Corinna et al., 1995)</ns0:ref>. The goal is to identify the optimum trade-off between lowering training set error and increasing the margin to get the highest generalization ability while resisting overfitting. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Another significant benefit of SVM is convex quadratic programming, which generates only global minima, preventing the algorithm from being trapped in local minima. <ns0:ref type='bibr'>(Corinna et al., 1995) and</ns0:ref><ns0:ref type='bibr'>(N. Cristianini, 2000)</ns0:ref> provide in-depth explanations of SVM theory. The remaining part of this section will go through the fundamental SVM ideas and apply them to classic linear and nonlinear binary classification tasks.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.'>Linear Margin Kernel Classifier</ns0:head><ns0:p>Suppose a binary classification problem is given as {x_i, y_i}, where x_i ∈ ℜ^d denotes a vector in a d-dimensional feature space and the corresponding labels are y_i ∈ {-1, 1} for i = 1, 2, …, N. A hyperplane provided in SVM separates the data points using equation (1):</ns0:p><ns0:formula xml:id='formula_0'>f(x_i) = w^T • x_i + b = 0 (1)</ns0:formula><ns0:p>Where w is a d-dimensional coefficient vector that is normal to the hyperplane, b is the offset from the origin, and x_i is the feature vector of a sample.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>Nonlinear Margin Kernel Classifier</ns0:head><ns0:p>When the training set cannot be separated in the original input space, the input vectors can be mapped into a new, higher-dimensional feature space, indicated as F: ℜ^N → H^K with N < K, to produce the SVM model:</ns0:p><ns0:formula xml:id='formula_1'>F(x) = w^T • K(x_i, x_j) + b (2)</ns0:formula><ns0:p>The mapping function φ(x) is implicitly determined by the kernel trick K(x_i, x_j) = φ(x_i) • φ(x_j), which is the dot product of two feature vectors in the extended feature space. In SVMs, an</ns0:p></ns0:div>
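A small sketch of the conventional kernels listed in Table 2, written as plain Python functions; gamma and p are the user-defined parameters mentioned in the text, and the default values used here are arbitrary illustrations.

import numpy as np

def linear_kernel(xi, xj):
    return np.dot(xi, xj)                              # k(xi, xj) = xi · xj

def polynomial_kernel(xi, xj, p=3):
    return (np.dot(xi, xj) + 1) ** p                   # k(xi, xj) = (xi · xj + 1)^p

def rbf_kernel(xi, xj, gamma=0.1):
    return np.exp(-gamma * np.sum((xi - xj) ** 2))     # k(xi, xj) = exp(-gamma * ||xi - xj||^2)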
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>#Initialization # Set the operators and operands needed along with the size of GF chromosome input Op k = {plus_v, plus_s, minus_v, minus_s, multi_s,}, Operand n = {1,.., n-1}, Len = GF chromosome length # Initialize the pool of GF pairs set S = (indx 0 , operand 1 , Op 1 , operand 2 ), …, (indx n-1 , operand n , Op k , operand n ) # GF chromosome/population generation Algorithm GFs 1. For pop= 1 to number_of_generation 2.</ns0:p><ns0:p>For i=1 to len 3.</ns0:p><ns0:p>M= Select (Op i , Operands) from S 4.</ns0:p><ns0:p>GF_chromosome = concatenate (GF_chromosome, M)</ns0:p></ns0:div>
<ns0:div><ns0:head>5.</ns0:head><ns0:p>End for 6.</ns0:p><ns0:p>Construct pop = pop + GF_chromosome 7. </ns0:p></ns0:div>
<ns0:div><ns0:head>Genetic Folding Algorithm</ns0:head><ns0:p>The GF algorithm was created by <ns0:ref type='bibr' target='#b27'>(Mezher & Abbod, 2011)</ns0:ref>. The GF algorithm's fundamental principle is to group math equations using floating numbers. Floating numbers are created by taking random samples of operands and operators. In GF, the linear chromosomes serve as the genotype, while the parse trees serve as the phenotype. This genotype/phenotype technique works well for encoding a lengthy parse tree in each chromosome. The GF approach has shown to be effective in various computer problems, including binary, multi-classification, and regression datasets. For example, GF has been proven to outperform other members of the evolutionary algorithm family in binary classification, multi-classification (M. <ns0:ref type='bibr' target='#b28'>Mezher & Abbod, 2014)</ns0:ref>, and regression (M. <ns0:ref type='bibr' target='#b30'>Mezher & Abbod, 2012)</ns0:ref>. On the other hand, GF may yield kernels that can be employed in SVMs. Kernels are created by combining operands and operators to generate appropriate pairings, such as (3 + 4). At each pair, the pair is indexed at random in a GF array cell (known as GF kernel). Correlation between pairings is boosted by picking pairs at random that boost the strength of the generated GF kernels. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Each GF kernel chromosome was divided into a head segment that only carries functions and a tail segment containing terminals. However, the size of the head segment must be determined ahead of time, but the size of the tails segment does not need to be determined since the GF algorithm predicts the number of genes needed based on the pairs required for the drawn functions (M. <ns0:ref type='bibr' target='#b29'>Mezher & Abbod, 2017)</ns0:ref>. Furthermore, the GF algorithm predicts the number of operands (terminals) necessary each time the GF algorithm generates different operators (function) at random. The proposed modified GF algorithm has the pseudo-code that can be seen in Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>:</ns0:p><ns0:p>The following operands and operators were used to forecast malignant and benign cells in the prostate dataset, as shown in Table <ns0:ref type='table' target='#tab_5'>3</ns0:ref>: The GF genome comprises a continuous, symbolic string or chromosome of equal length. Table <ns0:ref type='table' target='#tab_6'>4</ns0:ref> shows an example of a chromosome with varying pairs that may be used to create a valid GF kernel. </ns0:p></ns0:div>
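A minimal Python sketch, following the pseudo-code of Figure 1 and the encoding of Table 4, of how a GF chromosome can be built by pairing randomly chosen operators with the indices of their child genes; the exact growth strategy and the decoding of the chromosome into an SVM kernel (as done by the GFLibPy package) are simplified assumptions here.

import random

# Operators and terminals from Table 3.
OPERATORS = ["Plus_s", "Minus_s", "Multi_s", "Plus_v", "Minus_v"]
TERMINALS = ["x", "y"]

def random_gf_chromosome(max_len=50):
    """Sketch of the GF genotype: parallel lists of genes and 'folding' indices.
    An operator gene records the positions of its two children (e.g. '1.2'),
    while a terminal gene folds onto itself (e.g. '0.4')."""
    genes, folds = ["Plus_s"], ["1.2"]       # the root is always an operator
    next_free = 3                             # positions 1 and 2 are reserved for the root's children
    i = 1
    while i < next_free:                      # only grow genes that are actually referenced
        if next_free + 2 <= max_len and random.random() < 0.5:
            genes.append(random.choice(OPERATORS))
            folds.append(f"{next_free}.{next_free + 1}")
            next_free += 2
        else:
            genes.append(random.choice(TERMINALS))
            folds.append(f"0.{i}")
        i += 1
    return genes, folds

genes, folds = random_gf_chromosome()
print(genes[:5], folds[:5])                   # e.g. ['Plus_s', 'Multi_s', 'y', ...] ['1.2', '3.4', '0.2', ...]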
<ns0:div><ns0:head>D. Model Evaluations</ns0:head><ns0:p>The classification performance of each model is evaluated using statistical classification accuracy.</ns0:p><ns0:p>The equation ( <ns0:ref type='formula'>3</ns0:ref> The True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) define this accuracy measurement. A TP is made when the algorithm predicts malignant (positive), and the actual result is malignant (positive). A TN is made when the algorithm predicts benign (negative), and the actual result is benign (negative). FP occurs when the algorithm predicts a benign (negative) instance as malignant (positive). Finally, when the GF algorithm classifies a malignant (positive) instance as benign (negative), the result is FN. The accuracy performance metrics are compared in Table <ns0:ref type='table' target='#tab_7'>5</ns0:ref>.</ns0:p><ns0:p>We have also included another evaluating estimator to measure the quality of the proposed mode to choose the best. Some of the estimators tolerate the small samples better than others, while other measures the quality of the estimator with large samples. Mean Square Error (MSE) <ns0:ref type='bibr' target='#b31'>(Schluchter, 2014)</ns0:ref> has been used as a standard metric to measure model performance in medical, engineering, and educational studies. Assume E' is the predicted classes E'={(pred(y1), pred(y2), pred(y3), … } of the observed classes E={y1, y2, y3,..}, then the MSE is defined as the expectation of the squared deviation of E' from E:</ns0:p><ns0:p>(4)</ns0:p><ns0:formula xml:id='formula_3'>𝑀𝑆𝐸(𝐸 ' ) = (𝐸 ' -𝐸) 2</ns0:formula><ns0:p>MSE measures both the estimator's bias (accuracy), which shows how much its predicted value deviates consistently from the actual value and the estimator's variance (precision), which indicates how much its expected value fluctuates owing to sampling variability. Squaring the errors magnifies the effect of more significant errors. These calculations disproportionately penalize larger errors more than more minor errors. This attribute is critical if we want the model to be as accurate as possible.</ns0:p></ns0:div>
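A small sketch of the two evaluation measures, Eq. (3) and Eq. (4), computed from predicted and observed labels; encoding the diagnosis as M = 1 and B = 0 is an assumption made only for this illustration.

import numpy as np

def accuracy(y_true, y_pred):
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return (tp + tn) / (tp + tn + fp + fn) * 100          # Eq. (3)

def mse(y_true, y_pred):
    return np.mean((np.asarray(y_pred) - np.asarray(y_true)) ** 2)   # Eq. (4)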
<ns0:div><ns0:head>Results</ns0:head><ns0:p>We are preprocessing the dataset using min-max standardization, without removing any features that prevented incorrect relevance assignments. The elimination of the features showed either weakness or strength correlation and reduced the accuracy values, as shown in Table <ns0:ref type='table' target='#tab_7'>5</ns0:ref>. In statistics, correlation is any connection between two variables but does not indicate causation, implying that the variables are dependent on one another. The closer a variable is to 1, the more significant the positive correlation; the closer a variable is to -1, the stronger the negative correlation; and the closer a variable is to 0, the weaker the negative correlation. Each malignant PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69883:1:0:NEW 5 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>datapoint was subjected to analysis. We implemented the proposed method using Visual Studio Code by Python.</ns0:p><ns0:p>This paper conducted our experiments on 100 patients of 8 features collected from kaggle.com (Sajid, 2021) for prostate cancer and 596 patients of 30-features for the breast cancer dataset. The instances were classified as malignant or benign in the experiments. The following are the steps involved in conducting the GF algorithm to produce a valid GF kernel. GF starts by generating the initial GF genes (operators, operands) correctly (operand, operator, operand). Then, the GF algorithm generates a valid GF chromosome containing 50 genes. Based on a 5-fold crossvalidation approach, the dataset was divided into a training set and a testing set. The detailed GF algorithm is explained in Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref> <ns0:ref type='bibr'>(Mohammad et al., 2016)</ns0:ref>. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div>
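A hedged sketch of the experimental protocol described above: min-max scaling of all features, a 5-fold split, and an SVC trained on a precomputed Gram matrix built from the evolved kernel; gf_kernel stands in for the kernel produced by the GF algorithm (GFLibPy) and is an assumption in this illustration.

import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC

# X: the prostate features, y: diagnosis (1 = M, 0 = B); gf_kernel(A, B) -> Gram matrix (assumed)
X = MinMaxScaler().fit_transform(X)
scores = []
for tr, te in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = SVC(kernel="precomputed")
    clf.fit(gf_kernel(X[tr], X[tr]), y[tr])          # Gram matrix between training samples
    pred = clf.predict(gf_kernel(X[te], X[tr]))      # rows: test samples, columns: training samples
    scores.append(np.mean(pred == y[te]))
print(f"mean 5-fold accuracy: {np.mean(scores):.3f}")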
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The proposed GF was deployed for the first time in the prostate/breast cancer detection dataset, and it demonstrated a significant performance improvement over the existing models in this domain. Experiments were carried out with a broad set of features from the conducted datasets. The folding indicis of the best GF chromosome found for the produced kernel were: <ns0:ref type='bibr'>['1.2', '3.4', '5.6', '7.8', '9.10', '0.5', '0.6', '11.12', '13.14', '0.9', '15.16', '17.18', '19.20', '21.22', '0.14', '23.24', '25.26', '0.17', '0.18', '0.19', '0.20', '0.21', '0.22', '0.23', '0.24', '0.25', '0.26']</ns0:ref> In the same side, the best kernel string found for breast cancer dataset was shownin Figure <ns0:ref type='figure'>3</ns0:ref>(f):</ns0:p><ns0:p>['Plus_s <ns0:ref type='bibr'>', 'Plus_s', 'x', 'Plus_s', 'Multi_s', 'Multi_s', 'Minus_v', 'Plus_s', 'Plus_s', 'x', 'Minus_v', 'x', 'x', 'x', 'Minus_v']</ns0:ref> Where the folding indicis of the best GF chromosome found for breast cancer dataset were:</ns0:p><ns0:p>['1.2', '3.4', '0.2', '5.6', '7.8', '9.10', '0.6', '11.12', '13.14', '0.9', '0.10', '0.11', '0.12', '0.13', '0. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science <ns0:ref type='figure'>4</ns0:ref>, the proposed GF model performed the best for prostate cancer classification.</ns0:p><ns0:p>After using 5-Fold, the absolute average accuracy on the prostate cancer dataset was 96.0%.</ns0:p><ns0:p>Additionally, when evaluated on a prostate cancer dataset, dimensionality reduction had an enhancing impact. </ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This paper demonstrated that using a GF algorithm to classify patients with prostate cancer provides better accuracy than KNN, SVM, DT, and LR models. The GF algorithm achieved an average accuracy of 96% without eliminating any features from the dataset. This enables healthcare professionals to classify prostate cancers more precisely and provide more targeted therapies. Further improvements can be introduced to the model's accuracy based on the results we achieved. Using multidimensional data whilst choosing a range of feature selection/classification algorithms can prove to be a promising tool for early-onset prostate cancer classification. We plan to work on the concept further and apply it to other types of cancer. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69883:1:0:NEW 5 Mar 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Fitness</ns0:head><ns0:label /><ns0:figDesc>kernels are generated and the most fit are selected. 11. Result: the optimum GF kernel with the highest accuracy. RBF kernel: k(x_i, x_j) = exp(-γ (x_i - x_j)^2). C.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Pseudo-code of GF Algorithm</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69883:1:0:NEW 5 Mar 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>) is used to determine the accuracy of the GF algorithm's correctly classified instances: Accuracy = (TP + TN) / (TP + TN + FP + FN) * 100%</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figures 2 Figure 2 .</ns0:head><ns0:label>22</ns0:label><ns0:figDesc>Figures 2 and 3 (a-e) depict the complexity, diversity, ROC curve, accuracy, and best GF chromosome results for the prostate and breast datasets. Figures 2 and 3 (a) show the GF algorithm's performance with respect to the complexity of the generic kernel. Figures 2 and 3 (b) show the population's diversity in each generation, where GF maintains the best fitness values while allowing a chance of accepting weak kernels. Figures 2 and 3 (c) show that the GF generic kernel achieved the best Area Under the Curve (AUC) value compared with the conventional SVM classifiers using linear, RBF, and polynomial kernel functions. Based on the mean square error (MSE), the proposed model shows (Figures 2 and 3 (e)) only minor differences between the observed values and the predicted classes. In Figure 2 (e), the proposed model beats the other kernels: its average error is lower than even the minimum of the box plot of the best competing model (the RBF kernel). Figure 3 (e) shows an almost perfect model, with nearly no error and close to zero variance. To show the performance of the GF algorithm for each generic kernel, the median, average, and standard deviation are reported in Figures 2 and 3 (d). The best GF kernel, represented as a tree structure, is shown in Figures 2 and 3 (f) for the scaled prostate dataset with the chosen hyperparameter settings.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>14'] PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69883:1:0:NEW 5 Mar 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .Figure 5 .</ns0:head><ns0:label>45</ns0:label><ns0:figDesc>Figure 4. Accuracies comparisons for prostate cancer dataset</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>) was used to test the proposed GF algorithm and analyze the outcomes. The dataset consists of 100 observations and nine variables; Radius (Rad.), Texture (Text.), Perimeter (Perim.), Area, Smoothness (Smooth), Compactness (Compact), Symmetry, Fractal_dimension (Fractal_dim.), and Diagnosis. The Breast Cancer dataset is a repository maintained by the University of California. The dataset contains 569 samples of malignant and benign tumor cells. Columns 1-30 contain real-value PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69883:1:0:NEW 5 Mar 2022)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Samples of Prostate Cancer Dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Rad.</ns0:cell><ns0:cell cols='2'>Text. Perim.</ns0:cell><ns0:cell>Area</ns0:cell><ns0:cell cols='4'>Smooth Compact Symmetry Fractal_dim.</ns0:cell><ns0:cell>Diagnosis</ns0:cell></ns0:row><ns0:row><ns0:cell>23</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>151</ns0:cell><ns0:cell>954</ns0:cell><ns0:cell>0.143</ns0:cell><ns0:cell cols='2'>0.278 0.242</ns0:cell><ns0:cell>0.079</ns0:cell><ns0:cell>M</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>133</ns0:cell><ns0:cell>1326</ns0:cell><ns0:cell>0.143</ns0:cell><ns0:cell cols='2'>0.079 0.181</ns0:cell><ns0:cell>0.057</ns0:cell><ns0:cell>B</ns0:cell></ns0:row><ns0:row><ns0:cell>21</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>130</ns0:cell><ns0:cell>1203</ns0:cell><ns0:cell>0.125</ns0:cell><ns0:cell>0.16</ns0:cell><ns0:cell>0.207</ns0:cell><ns0:cell>0.06</ns0:cell><ns0:cell>M</ns0:cell></ns0:row><ns0:row><ns0:cell>14</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>78</ns0:cell><ns0:cell>386</ns0:cell><ns0:cell>0.07</ns0:cell><ns0:cell cols='2'>0.284 0.26</ns0:cell><ns0:cell>0.097</ns0:cell><ns0:cell>M</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell>135</ns0:cell><ns0:cell>1297</ns0:cell><ns0:cell>0.141</ns0:cell><ns0:cell cols='2'>0.133 0.181</ns0:cell><ns0:cell>0.059</ns0:cell><ns0:cell>M</ns0:cell></ns0:row><ns0:row><ns0:cell>25</ns0:cell><ns0:cell>25</ns0:cell><ns0:cell>83</ns0:cell><ns0:cell>477</ns0:cell><ns0:cell>0.128</ns0:cell><ns0:cell>0.17</ns0:cell><ns0:cell>0.209</ns0:cell><ns0:cell>0.076</ns0:cell><ns0:cell>B</ns0:cell></ns0:row><ns0:row><ns0:cell>16</ns0:cell><ns0:cell>26</ns0:cell><ns0:cell>120</ns0:cell><ns0:cell>1040</ns0:cell><ns0:cell>0.095</ns0:cell><ns0:cell cols='2'>0.109 0.179</ns0:cell><ns0:cell>0.057</ns0:cell><ns0:cell>M</ns0:cell></ns0:row><ns0:row><ns0:cell>15</ns0:cell><ns0:cell>18</ns0:cell><ns0:cell>90</ns0:cell><ns0:cell>578</ns0:cell><ns0:cell>0.119</ns0:cell><ns0:cell cols='2'>0.165 0.22</ns0:cell><ns0:cell>0.075</ns0:cell><ns0:cell>M</ns0:cell></ns0:row><ns0:row><ns0:cell>19</ns0:cell><ns0:cell>24</ns0:cell><ns0:cell>88</ns0:cell><ns0:cell>520</ns0:cell><ns0:cell>0.127</ns0:cell><ns0:cell cols='2'>0.193 0.235</ns0:cell><ns0:cell>0.074</ns0:cell><ns0:cell>M</ns0:cell></ns0:row><ns0:row><ns0:cell>25</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>84</ns0:cell><ns0:cell>476</ns0:cell><ns0:cell>0.119</ns0:cell><ns0:cell>0.24</ns0:cell><ns0:cell>0.203</ns0:cell><ns0:cell>0.082</ns0:cell><ns0:cell>M</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Samples of Breast Cancer Dataset.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>radius_mea</ns0:cell><ns0:cell>texture_mea</ns0:cell><ns0:cell>perimeter_me</ns0:cell><ns0:cell>area_mea</ns0:cell><ns0:cell>smoothness_me</ns0:cell><ns0:cell>compactness_me</ns0:cell><ns0:cell>concavity_me</ns0:cell></ns0:row><ns0:row><ns0:cell>n</ns0:cell><ns0:cell>n</ns0:cell><ns0:cell>an</ns0:cell><ns0:cell>n</ns0:cell><ns0:cell>an</ns0:cell><ns0:cell>an</ns0:cell><ns0:cell>an</ns0:cell></ns0:row><ns0:row><ns0:cell>20.57</ns0:cell><ns0:cell>17.77</ns0:cell><ns0:cell>132.90</ns0:cell><ns0:cell>1326.00</ns0:cell><ns0:cell>0.08</ns0:cell><ns0:cell>0.08</ns0:cell><ns0:cell>0.09</ns0:cell></ns0:row><ns0:row><ns0:cell>19.69</ns0:cell><ns0:cell>21.25</ns0:cell><ns0:cell>130.00</ns0:cell><ns0:cell>1203.00</ns0:cell><ns0:cell>0.11</ns0:cell><ns0:cell>0.16</ns0:cell><ns0:cell>0.20</ns0:cell></ns0:row><ns0:row><ns0:cell>20.29</ns0:cell><ns0:cell>14.34</ns0:cell><ns0:cell>135.10</ns0:cell><ns0:cell>1297.00</ns0:cell><ns0:cell>0.10</ns0:cell><ns0:cell>0.13</ns0:cell><ns0:cell>0.20</ns0:cell></ns0:row><ns0:row><ns0:cell>12.45</ns0:cell><ns0:cell>15.70</ns0:cell><ns0:cell>82.57</ns0:cell><ns0:cell>477.10</ns0:cell><ns0:cell>0.13</ns0:cell><ns0:cell>0.17</ns0:cell><ns0:cell>0.16</ns0:cell></ns0:row><ns0:row><ns0:cell>18.25</ns0:cell><ns0:cell>19.98</ns0:cell><ns0:cell>119.60</ns0:cell><ns0:cell>1040.00</ns0:cell><ns0:cell>0.09</ns0:cell><ns0:cell>0.11</ns0:cell><ns0:cell>0.11</ns0:cell></ns0:row><ns0:row><ns0:cell>13.71</ns0:cell><ns0:cell>20.83</ns0:cell><ns0:cell>90.20</ns0:cell><ns0:cell>577.90</ns0:cell><ns0:cell>0.12</ns0:cell><ns0:cell>0.16</ns0:cell><ns0:cell>0.09</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>extensive range of kernel tricks is accessible for application. The most common SVM kernels are shown in Table 2. In the table, γ and p are predefined user parameters. The traditional kernel functions</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Name</ns0:cell><ns0:cell>Kernel Function</ns0:cell></ns0:row><ns0:row><ns0:cell>Linear Kernel</ns0:cell><ns0:cell>k(x_i, x_j) = x_i • x_j</ns0:cell></ns0:row><ns0:row><ns0:cell>Polynomial</ns0:cell><ns0:cell>k(x_i, x_j) = (x_i • x_j + 1)^p</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>The symbols used to formulate Kernel functions</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Type of symbols</ns0:cell><ns0:cell>Name</ns0:cell><ns0:cell>No. of Arity</ns0:cell></ns0:row><ns0:row><ns0:cell>Operators</ns0:cell><ns0:cell>'Plus_s'</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>'Minus_s'</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>'Multi_s'</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>'Plus_v'</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>'Minus_v'</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>Operands</ns0:cell><ns0:cell>'x'</ns0:cell><ns0:cell>1</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>'y'</ns0:cell><ns0:cell>1</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The encoding/decoding used to formulate Kernel functions</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Index</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>4</ns0:cell></ns0:row><ns0:row><ns0:cell>GF Decoding</ns0:cell><ns0:cell>'Minus_s',</ns0:cell><ns0:cell>'Plus_s',</ns0:cell><ns0:cell>'y'</ns0:cell><ns0:cell>'y'</ns0:cell><ns0:cell>'x'</ns0:cell></ns0:row><ns0:row><ns0:cell>GF Encoding</ns0:cell><ns0:cell>'1.2'</ns0:cell><ns0:cell>'3.4'</ns0:cell><ns0:cell>'0.2'</ns0:cell><ns0:cell>'0.3'</ns0:cell><ns0:cell>'0.4'</ns0:cell></ns0:row></ns0:table></ns0:figure>
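The following is a simplified, assumption-laden reconstruction of how a chromosome of symbols (Table 3) together with 'left.right' folding indices (Table 4) can be decoded into a readable kernel expression. The exact decoding rules are those of GFLibPy, which this sketch does not reproduce; it only illustrates the genotype-to-expression idea.

```python
# Simplified reconstruction of decoding a GF chromosome into an infix expression.
OPS = {"Plus_s": "+", "Minus_s": "-", "Multi_s": "*", "Plus_v": "+", "Minus_v": "-"}

def decode(symbols, foldings, i=0):
    """Recursively expand gene i; 'a.b' indices are read here as child gene positions."""
    name = symbols[i]
    left, right = (int(k) for k in foldings[i].split("."))
    if name not in OPS or left == 0:          # terminal gene ('x' or 'y')
        return name
    return f"({decode(symbols, foldings, left)} {OPS[name]} {decode(symbols, foldings, right)})"

symbols  = ["Minus_s", "Plus_s", "y", "y", "x"]   # example genes from Table 4
foldings = ["1.2", "3.4", "0.2", "0.3", "0.4"]
print(decode(symbols, foldings))                   # -> ((y + x) - y), one possible reading
```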
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>gives a comparative description of several ML algorithms based on the same preprocessing conditions. The accuracy performance of the proposed GF model was superior compared to the six ML approaches in the prostate cancer dataset. Furthermore, we expanded this comparison with the suggested hybrid model by using the SVM classifier with several conventional kernels such as linear, polynomial, and RBF kernels. The proposed model achieved 96.0% accuracy in the prostate cancer dataset, which is better than the ANN by 16% and the LR by 6%. The proposed GF beat the best preset kernel by 8%, 20%, and 16% compared to linear, RBF, and polynomial kernels, respectively.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>The best kernel found for the prostate cancer dataset (shown in Figure 2(f)) is:</ns0:cell></ns0:row><ns0:row><ns0:cell>['Plus_s', 'Multi_s', 'Multi_s', 'Plus_s', 'Minus_s', 'Minus_v', 'Minus_v', 'Plus_s', 'Plus_s', 'x',</ns0:cell></ns0:row></ns0:table><ns0:note>'Minus_s', 'Plus_v', 'Plus_v', 'Plus_v', 'x', 'Plus_s', 'Minus_s', 'Minus_v', 'Minus_v', 'Minus_v', 'x', 'x', 'x', 'x', 'x', 'x', 'x']</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Comparisons between different Kernel functions and AI models Prostate cancer</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>Reference</ns0:cell><ns0:cell>Eliminated Features</ns0:cell><ns0:cell>Accuracy (%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Adaboost</ns0:cell><ns0:cell>(Alin, 2021)</ns0:cell><ns0:cell>None</ns0:cell><ns0:cell>75.8%</ns0:cell></ns0:row><ns0:row><ns0:cell>KNeighbors</ns0:cell><ns0:cell>(Smogomes, 2021)</ns0:cell><ns0:cell>['fractal_dimension',</ns0:cell><ns0:cell>71.0%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>'texture', 'perimeter']</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>SVM (Linear)</ns0:cell><ns0:cell>GFLibPy</ns0:cell><ns0:cell>None</ns0:cell><ns0:cell>88.0%</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM (RBF)</ns0:cell><ns0:cell>GFLibPy</ns0:cell><ns0:cell>None</ns0:cell><ns0:cell>76.0%</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM (Polynomial)</ns0:cell><ns0:cell>GFLibPy</ns0:cell><ns0:cell>None</ns0:cell><ns0:cell>80.0%</ns0:cell></ns0:row><ns0:row><ns0:cell>Random Forest (RF)</ns0:cell><ns0:cell>(Smogomes, 2021)</ns0:cell><ns0:cell>['fractal_dimension',</ns0:cell><ns0:cell>86.0%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>'texture', 'perimeter']</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Decision Tree (DT)</ns0:cell><ns0:cell>(Smogomes, 2021)</ns0:cell><ns0:cell>['fractal_dimension',</ns0:cell><ns0:cell>86.9%</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>'texture', 'perimeter']</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Logistic Regression (LR)</ns0:cell><ns0:cell>(Will, 2021)</ns0:cell><ns0:cell>['Area']</ns0:cell><ns0:cell>90.0%</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed GF</ns0:cell><ns0:cell>Our paper</ns0:cell><ns0:cell>None</ns0:cell><ns0:cell>96.0%</ns0:cell></ns0:row><ns0:row><ns0:cell>Artificial Neural Network (ANN)</ns0:cell><ns0:cell>(Cemhans, 2020)</ns0:cell><ns0:cell>None</ns0:cell><ns0:cell>80.0%</ns0:cell></ns0:row><ns0:row><ns0:cell>As seen in Figure</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
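A minimal sketch, in the spirit of Table 5, of evaluating several baseline classifiers under one shared preprocessing step; it uses the public WDBC data as a stand-in and will not reproduce the table's exact numbers.

```python
# Illustrative baseline comparison (not the exact experimental code of the paper).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)   # stand-in dataset; the paper also uses a prostate CSV

models = {
    "AdaBoost": AdaBoostClassifier(),
    "KNeighbors": KNeighborsClassifier(),
    "SVM (linear)": SVC(kernel="linear"),
    "SVM (RBF)": SVC(kernel="rbf"),
    "SVM (poly)": SVC(kernel="poly"),
    "Random Forest": RandomForestClassifier(),
    "Decision Tree": DecisionTreeClassifier(),
    "Logistic Regression": LogisticRegression(max_iter=5000),
}
for name, clf in models.items():
    pipe = make_pipeline(MinMaxScaler(), clf)          # min-max scaling, as in the paper
    print(f"{name:20s} {cross_val_score(pipe, X, y, cv=5).mean():.3f}")
```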
</ns0:body>
" | "Dear Editor,
Thank you for allowing us to submit a revised draft of our manuscript titled 'A modified genetic folding approach for prostate cancer classification'. We appreciate the time and effort you and the reviewers have dedicated to providing your valuable feedback on our manuscript. We are grateful to the reviewers for their insightful comments on our paper. We have been able to incorporate changes to reflect most of the suggestions provided by the reviewers. We have highlighted the changes within the manuscript. Here is a point-by-point response to the reviewers' comments and concerns.
Comments from Reviewer 1
Basic reporting
1. How is the Genetic Folding algorithm developed in terms of its novelty, targeted issues in the dataset?
Response:
We added a paragraph to show the importance of our proposed model, and lines 101-107 now state the paper's primary goal and significant contribution. This paragraph gives the reader a broader overview of the problem we are trying to solve with the proposed model.
2. The dataset appears to be small rather than comprehensive to reflect the effectiveness of the algorithm. One another better dataset example can be:'Using deep learning to enhance cancer diagnosis and classification'
Response:
Agreed. Accordingly, we have added a breast cancer dataset to address this point and to illustrate the stability of the proposed model on its 569 samples of breast cancer patients.
3. The figures could not exclusively identify the details for the paper.
Response:
We have incorporated more experiments and comparison figures to emphasise the paper's contribution. Also, more discussions have been added to the results and discussion sections.
4. The algorithms could not be translated properly with its current form.
Response:
The reviewer has raised an important point here. However, we believe that the new version of the proposed algorithm would be more appropriate because it contains the necessary pseudo-code components of the proposed GF model.
5. Please consider to make the reference clickable. For some reasons, I could not find the reference for Table 5.
Response:
We concur and have accordingly revised the referencing throughout the manuscript. In addition, we have added two missing references to the bibliography and corrected the authors' names, starting from line 198.
Experimental design
Overall, it appears to be a work representing the quality of student project report, which has sufficiently reflected the terms and knowledge for building classifiers for particular dataset.
However, it fails to reflect the studied research questions, particularly it is difficult to understand the challenge to build a qualified and SOTA classifier for the given dataset. Meanwhile, it lacks sufficient details concerning the novelty and completeness of the utilised method. It appears to be an application work.
Response:
Thank you for mentioning this. However, we have included in the results section the following statistical analysis:
1. Average accuracy
2. Mean square error
3. Area under the curve
Then, we included comparisons with different references. However, many papers, such as [1] and [2], use the accuracy measurement in the same way we do: to indicate classification accuracy.
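For reference, the three quantities can be obtained along the following lines with scikit-learn (an illustrative sketch, not the exact code used in the study):

```python
# Minimal computation of accuracy, mean squared error and ROC AUC with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score, mean_squared_error, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)

clf = SVC(kernel="linear", probability=True).fit(X_tr, y_tr)
pred = clf.predict(X_te)
proba = clf.predict_proba(X_te)[:, 1]

print("accuracy:", accuracy_score(y_te, pred))
print("MSE     :", mean_squared_error(y_te, pred))
print("AUC     :", roc_auc_score(y_te, proba))
```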
Validity of the findings
It is not strong enough to support the conclusion as SOTA performance for the given dataset.
Response:
Again, we have included in the results section the following statistical analysis and measurement figures:
1. Average accuracy
2. Mean square error
3. Area under the curve
Then, we included comparisons with different references.
Comments from Reviewer 2
Basic reporting
1. In this MS the authors apply an evolutionary Genetic Folding (GF)
algorithm to the binary classification of prostate cancer malignancy from clinical tumour features. The development and improvement of tumour stratification algorithms is an active area of research where advances might strongly benefit cancer diagnosis and treatment.
The authors provide a clear and concise overview of prostate cancer diagnosis and grading approaches, both clinical and computational. The authors then propose and implement a SVM with GF-kernel based prostate cancer classifier (GF-SVM), arguing that GF has been shown to outperform other evolutionary algorithms.
Response:
We agreed, and we would like to express our gratitude to the reviewer for his or her comments.
2. A few passages should be edited using more appropriate terminology (e.g. line 174 should read 'Instances were classified as...').
Response:
We agree and thus updated lines 271-272 to incorporate the suggestion.
3. There are a few typos I would recommend addressing before publication (e.g. repetition at lines 78-78, line 134 should reference Mezher et. al 2010).
Response:
We agree and have therefore eliminated the replicated sentences at lines 116-117, and line 198 now includes the corrected references and author names.
Validity of the findings
The authors state (lines 205,206) that GF-SVM 'demonstrated a significant performance improvement over the existing models in this domain.' However, the authors have failed to perform any sort of statistical analysis to demonstrate that their method is significantly better, or even different from, the established methods (e.g. see [1]). I would recommend the authors included a pairwise analysis of all alternative models in Fig. 2C versus GF-SVM (e.g. via a binomial McNemar test [2] over at least 10 holdout shuffle replicates). Without this I do not believe this manuscript meets the 'statistically sound' criterion and I cannot recommend it for publication.
Response:
Thank you for bringing this to our attention. We have, however, included the following statistical analyses in the findings part of our paper:
1. Average accuracy
2. Mean square error
3. Area under the curve
Then, we included comparisons with different references.
Additional comments
The authors collect and made available a tabular dataset of 8 clinical features across 100 prostate cancer patients, as well as a repository for the Genetic Folding Open source library. Unfortunately, they do not provide the reader with any way of rapidly and conveniently reproducing their models and results (e.g. via a colab or jupyter notebook). I believe this would greatly help their model's chances of being re-used by others in the community.
Response:
Thank you for your recommendation, which we will consider in our future work. Presently, the toolbox includes more than 15 Python files that would need to be restructured before the project can run entirely in Jupyter or a similar online environment for convenient replication.
References:
[1] S. Komarudin, D. Anggraeni, A. Riski, and A. Hadi, 'Classification of genetic expression in prostate cancer using support vector machine method', Journal of Physics: Conference Series, vol. 1613, p. 012032, Aug. 2020, doi: 10.1088/1742-6596/1613/1/012032.
[2] A. Gumaei, R. Sammouda, M. Al-Rakhami, H. AlSalman, and A. El-Zaart, 'Feature selection with ensemble learning for prostate cancer diagnosis from microarray gene expression', Health Informatics J, vol. 27, no. 1, p. 1460458221989402, Jan. 2021, doi: 10.1177/1460458221989402.
Best Regards,
Authors
" | Here is a paper. Please give your review comments after reading it. |
694 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Cancer's genomic complexity is gradually increasing as we learn more about it. Genomic classification of various cancers is crucial in providing oncologists with vital information for targeted therapy. Thus, issues of patient genomic classification become more pertinent to address. Prostate cancer is a cancer subtype that exhibits extreme heterogeneity. Prostate cancer contributes to 7.3% of new cancer cases worldwide, with a high prevalence in males. Breast cancer is the most common type of cancer in women and the second most significant cause of death from cancer in women. Breast cancer is caused by abnormal cell growth in the breast tissue, generally referred to as a tumour. Tumours are not synonymous with cancer; they can be benign (noncancerous), pre-malignant (precancerous), or malignant (cancerous). Fine Needle Aspiration (FNA) tests are used to biopsy the breast to diagnose breast cancer. Artificial Intelligence (AI) and Machine Learning (ML) models are used to diagnose with varying accuracy. In light of this, we used the Genetic Folding (GF) algorithm to predict prostate cancer status in a given dataset. An accuracy of 96% was obtained, thus being the current highest accuracy in prostate cancer diagnosis. The model was also used in breast cancer classification with a proposed pipeline that used Exploratory Data Analysis (EDA), label encoding, feature standardization, feature decomposition, log transformation, detect and remove the outliers with Z-score, and the BAGGINGSVM approach attained a 95.96% accuracy. The accuracy of this model was then assessed using the rate of change of PSA, age, BMI, and filtration by race. We discovered that integrating the rate of change of PSA and age in our model raised the model's area under the curve (AUC) by 6.8%, whereas BMI and race had no effect. As for breast cancer classification, no features were removed.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Cancer is, at times, a severe disease or set of diseases that have historically been the most prevalent and challenging to treat <ns0:ref type='bibr' target='#b0'>(Adjiri, 2016)</ns0:ref>. It is commonly defined as the abnormal proliferation of various human cells, thus resulting in its etiological heterogeneity <ns0:ref type='bibr' target='#b15'>(Cooper, 2000)</ns0:ref>. This abnormal growth can be classified into two subsets (a) Malignant and (b) Benign. Benign tumours stay localized to their original site, whereas malignancies can invade and spread throughout the body (metastasize) <ns0:ref type='bibr' target='#b15'>(Cooper, 2000)</ns0:ref>. Breast cancer recently eclipsed lung cancer as the most prominent cancer subtype worldwide, equalling 11.7% (2.3 million) of the new cancer cases <ns0:ref type='bibr' target='#b48'>(Sung et al., 2021)</ns0:ref>.</ns0:p><ns0:p>Prostate cancer is one of the most prevalent male malignancies globally, contributing 7.3% of the estimated incidence in 2020 <ns0:ref type='bibr' target='#b48'>(Sung et al., 2021)</ns0:ref>. This amounted to 1,414,259 new cases and 375,304 deaths from this disease <ns0:ref type='bibr' target='#b48'>(Sung et al., 2021)</ns0:ref>. The prostate is a dense fibromuscular gland shaped like an upside-down cone. It is located around the neck of the urinary bladder, external to the urethral sphincter and functions in a supportive role in the male reproductive system. The alkaline fluid it secretes into semen protects sperm cells from the acidic vaginal environment <ns0:ref type='bibr' target='#b46'>(Singh & Bolla, 2021)</ns0:ref>.</ns0:p><ns0:p>Although early prostate cancer is typically asymptomatic, it may manifest itself in the form of excessive urination, nocturia, haematuria, or dysuria <ns0:ref type='bibr' target='#b28'>(Leslie et al., 2021)</ns0:ref>. Classically, prostate cancer is detected with a Digital Rectal Examination (DRE) and a blood test for prostate-specific antigen (PSA) <ns0:ref type='bibr' target='#b21'>(Descotes, 2019)</ns0:ref>. TRUS-guided biopsy continues to be the gold standard for confirming diagnoses, despite its 15-46% false-negative rate and up to 38% tumour under grading rate compared to the Gleason score <ns0:ref type='bibr' target='#b21'>(Descotes, 2019;</ns0:ref><ns0:ref type='bibr' target='#b27'>Kvåle et al., 2009)</ns0:ref>.</ns0:p><ns0:p>However, prostate cancer development's aetiology and mechanisms are still determined <ns0:ref type='bibr' target='#b25'>(Howard et al., 2019)</ns0:ref>. The different mechanisms that develop prostate cancer ultimately affect the therapy considering that prostate cancer is heterogeneous in a clinical, spatial and morphological aspect <ns0:ref type='bibr' target='#b52'>(Tolkach & Kristiansen, 2018)</ns0:ref>. The following prostate cancer stages of development are currently posited: intraepithelial-neoplasia, androgen-dependent adenocarcinoma, and androgenindependent or Castration Resistant Cancer (CRC) <ns0:ref type='bibr' target='#b25'>(Howard et al., 2019)</ns0:ref>. Prostate cancer heterogeneity is further highlighted in a recent study shedding light on mRNA expressional variations from a normal prostate to full metastatic disease <ns0:ref type='bibr' target='#b32'>(Marzec et al., 2021)</ns0:ref>. This means that Metastatic CRC (mCRC) tumours are more complicated than primary prostate tumours. 
An association that is further aggravated when genomics come into play.</ns0:p><ns0:p>Prostate cancer development and progression have been heavily linked to Androgen Receptor (AR) signalling pathway; thus, Androgen Deprivation Therapy (ADT) has been used for patients who have advanced prostate cancer <ns0:ref type='bibr' target='#b23'>(Hatano & Nonomura, 2021)</ns0:ref>. Although primarily effective, a large proportion of patients develop androgen-independent or CRC. Thus, pharmacological therapies are considered, including abiraterone, enzalutamide, docetaxel and radium-223 <ns0:ref type='bibr' target='#b25'>(Howard et al., 2019)</ns0:ref>.</ns0:p><ns0:p>As mentioned above, breast cancer has become the most common cancer diagnosed, eclipsing lung cancer. The etiology of breast cancer is multi-faceted; many risk factors play a role in the likelihood of a diagnosis; these risk factors can be sub-divided into seven groups. (1) age (2) gender</ns0:p><ns0:p>(3) previous diagnosis of breast cancer (4) histology (5) family history of breast cancer, (6) reproduction-related risk factors and (7) exogenous hormone use <ns0:ref type='bibr' target='#b3'>(Alkabban & Ferguson, 2022)</ns0:ref>.</ns0:p><ns0:p>Race may also indicate a higher prevalence in non-Hispanic white individuals than African Americans, Hispanics, Native Americans, and Asian Americans <ns0:ref type='bibr' target='#b3'>(Alkabban & Ferguson, 2022)</ns0:ref>.</ns0:p><ns0:p>Many modalities of screening exist with varying specificities and sensitivities to breast cancer.</ns0:p><ns0:p>Mammography is one of the front-line tests to screen for breast cancer, with its sensitivity ranging from 75-90% and its specificity ranging from 90-95% <ns0:ref type='bibr' target='#b9'>(Bhushan et al., 2021)</ns0:ref>. Screening is done at a macro level to determine whether the growth is malignant or benign. However, classification at the micro-level is required to identify the molecular basis behind breast cancer progression to target specific therapies. Luminal-A tumours (58.5%) are the most prevalent subtype of breast cancer tumours, then triple-negative (16%), luminal-B (14%) and HER-2 positive (11.5%) being the least prevalent <ns0:ref type='bibr' target='#b4'>(Al-thoubaity, 2019)</ns0:ref>.</ns0:p><ns0:p>Breast cancer tumours are classified using the TNM classification system in which the primary tumour is denoted as T, the regional lymph nodes as N, and distant metastases as M. Breast cancer Manuscript to be reviewed</ns0:p><ns0:p>Computer Science can also be classified about its invasiveness, with lobular carcinoma in situ (LCIS) and ductal carcinoma in situ (DCIS) being non-invasive. Invasive ductal cancer accounts for 50-70% of invasive cancers, while invasive ductal cancer accounts for 10% <ns0:ref type='bibr' target='#b3'>(Alkabban & Ferguson, 2022)</ns0:ref>.</ns0:p><ns0:p>Once the molecular basis of breast cancer is identified, treatment can begin. Many chemotherapeutic agents are used in the treatment of breast cancer, including Tamoxifen (ERpositive breast cancer), Pertuzumab (HER2-overexpressing breast cancer) and Voxtalisib (HR+/HER-advanced breast cancer) <ns0:ref type='bibr' target='#b9'>(Bhushan et al., 2021)</ns0:ref>. This paper aims to develop a proper kernel function using GF to separate benign prostate and breast cells from tumour-risk genetic cells using SVM, compared to six machine learning algorithms. 
In addition, the proposed GF-SVM classifier can classify breast and prostate cancer cells better than the six different classifiers. We have also included all features found in the datasets to test the ability of the proposed modelling to predict the best accuracy. The proposed GF-SVM implementation is reliable, scalable, and portable and can be an effective alternative to other existing evolutionary algorithms (M. A. <ns0:ref type='bibr' target='#b33'>Mezher, 2022)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Literature Review</ns0:head><ns0:p>Artificial intelligence (AI) is revolutionizing healthcare, incredibly patient stratification and tumour classification, which are pivotal components of targeted oncology. Training a Machine Learning (ML) model to analyze large datasets means more accurate prostate cancer diagnoses <ns0:ref type='bibr' target='#b51'>(Tătaru et al., 2021)</ns0:ref>. Artificial Neural Networks (ANN) is a tool that has been used in advanced prognostic models for prostate cancer <ns0:ref type='bibr' target='#b26'>(Jović et al., 2017)</ns0:ref>. Other models employed in the classification of cancers based on gene expression include K-Nearest Neighbor (KNN) and Support Vector Machines (SVM). The accuracy of these models varies between 70% and 95%, depending on the number of genes analyzed. Previous studies have used these and other ML models to classify prostate cancer <ns0:ref type='bibr' target='#b12'>(Bouazza et al., 2015)</ns0:ref> thus, we aim to use the Genetic Folding (GF) algorithm in classifying patients with prostate cancer. <ns0:ref type='bibr' target='#b2'>(Alba et al., 2007)</ns0:ref> used a hybrid technique for gene selection and classification. They presented data of high dimensional DNA Microarray. They found it initiates a set of suitable solutions early in their development phase. Similarly <ns0:ref type='bibr' target='#b49'>(Tahir & Bouridane, 2006)</ns0:ref>, found that using a hybrid algorithm significantly improved their results in classification accuracy. However, they concluded their algorithm was generic and fit for diagnosing other diseases such as lung or breast cancer.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_3'>2022:01:69883:2:1:NEW 22 May 2022)</ns0:ref> Manuscript to be reviewed Computer Science <ns0:ref type='bibr' target='#b10'>(Bouatmane et al., 2011)</ns0:ref> used an RR-SFS method to find a 99.9% accuracy. This was alongside other classification methods such as bagging/boosting with a decision tree. <ns0:ref type='bibr' target='#b29'>(Lorenz et al., 1997)</ns0:ref> compared the results from two commonly used classifiers, KNN and Bayes classifier. They found 78% and 79%, respectively. They had the opportunity to improve these results for ultrasonic tissue characterisation significantly.</ns0:p><ns0:p>Developing a quantitative CADx system that can detect and stratify the extent of breast cancer (BC) histopathology images was the primary clinical objective for <ns0:ref type='bibr' target='#b7'>(Basavanhally et al., 2010)</ns0:ref>, as they demonstrated in their paper the ability to detect the extent of BC using architectural features automatically. BC samples with a progressive increase in Lymphocytic Infiltration (LI) were arranged in a continuum. The region-growing algorithm and subsequent MRF-based refinement allow LI to isolate from the surrounding BC nuclei, stroma, and baseline level of lymphocytes <ns0:ref type='bibr' target='#b7'>(Basavanhally et al., 2010)</ns0:ref>. The ability of this image analysis to classify the extent of LI into low, medium and high categories can show promising translation into prognostic testing.</ns0:p><ns0:p>A meta-analysis identified several trends concerning the different types of machine learning methods in predicting cancer susceptibility and outcomes. 
It was found that a growing number of machine learning methods usually improve the performance or prediction accuracy of the prognosis, particularly when compared to conventional statistical or expert-based systems <ns0:ref type='bibr' target='#b19'>(Cruz & Wishart, 2006)</ns0:ref>. There is no doubt that improvements in experimental design alongside biological validation would enhance many machine-based classifiers' overall quality and reproducibility <ns0:ref type='bibr' target='#b19'>(Cruz & Wishart, 2006)</ns0:ref>. As the quality of studies into machine learning classifiers improves, there is no doubt that they will become more routine use in clinics and hospitals.</ns0:p></ns0:div>
<ns0:div><ns0:head>The Proposed Model</ns0:head><ns0:p>A. Dataset</ns0:p><ns0:p>The prostate dataset of 100 patients (Shown in Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>features that have been computed from digitized images of the cell nuclei. Table <ns0:ref type='table'>2</ns0:ref> shows the first seven columns found in the breast dataset (D. & C., 2019).</ns0:p><ns0:p>All the prostate cancer dataset observations included in this experiment have been collected from Kaggle.com <ns0:ref type='bibr' target='#b40'>(Saifi, 2018)</ns0:ref>. Prostate and Breast cancer patients were labeled with M, whereas those without the cancer were labeled with B, as seen in the following tables.</ns0:p></ns0:div>
<ns0:div><ns0:head>B. Support Vector Machine</ns0:head><ns0:p>This section will go through the basic SVM concepts in the context of two-class classification problems, either linearly or non-linearly separable. SVM is based on the Vapnik-Chervonenkis (VC) theory and the Structural Risk Minimization (SRM) principle <ns0:ref type='bibr' target='#b44'>(Shawe-taylor et al., 1996)</ns0:ref>.</ns0:p><ns0:p>The goal is to identify the optimum trade-off between lowering training set error and increasing the margin to get the highest generalization ability while resisting overfitting. Another significant benefit of SVM is convex quadratic programming, which generates only global minima, preventing the algorithm from being trapped in local minima. <ns0:ref type='bibr' target='#b16'>(Cristianini & Shawe-Taylor, 2000)</ns0:ref> and <ns0:ref type='bibr' target='#b18'>(Cristianini et al., 2001)</ns0:ref> provide in-depth explanations of SVM theory. The remaining part of this section will go through the fundamental SVM ideas and apply them to classic linear and nonlinear binary classification tasks.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.'>Linear Margin Kernel Classifier</ns0:head><ns0:p>Suppose a binary classification problem is given as { , }, ∈ ℜN and a set of corresponding labels are notated ∈ {-1,1} for i=1, 2, …, , where ℜN denotes vectors in a d-dimensional feature space. A hyperplane provided in SVM separates the data points using equation ( <ns0:ref type='formula'>1</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_0'>( ) = = 0 (1) • +</ns0:formula><ns0:p>Where is an n-dimensional coefficient vector that is normal to the hyperplane and is the offset from the origin and X is the features in a sample.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>Nonlinear Margin Kernel Classifier</ns0:head><ns0:p>It has been shown that the input space can be mapped onto a higher-dimensional feature space where the training set cannot be separated. The input vectors are mapped into a new, higherdimensional feature space, indicated as , where N < K, to produce the SVM model: Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>(2)</ns0:p><ns0:formula xml:id='formula_1'>( ) = • ( , ) +</ns0:formula><ns0:p>The mapping's functional , is implicitly determined by the kernel trick ( ) ( , ) = ( ) • ( , which is the dot product of two feature vectors in extended feature space. In SVMs, an ) extensive range of kernel tricks are accessible for application. The most common SVM kernels are shown in Table <ns0:ref type='table'>3</ns0:ref>. In the table, and are predefined user parameters.</ns0:p></ns0:div>
<ns0:div><ns0:head>C. Genetic Folding Algorithm</ns0:head><ns0:p>The GF algorithm was created by (M. A. <ns0:ref type='bibr' target='#b35'>Mezher & Abbod, 2011)</ns0:ref>. The GF algorithm's fundamental principle is to group math equations using floating numbers. Floating numbers are created by taking random samples of operands and operators. In GF, the linear chromosomes serve as the genotype, while the parse trees serve as the phenotype. This genotype/phenotype technique works well for encoding a lengthy parse tree in each chromosome. The GF approach has shown to be effective in various computer problems, including binary, multi-classification, and regression datasets. For example, GF has been proven to outperform other members of the evolutionary algorithm family in binary classification, multi-classification (M. <ns0:ref type='bibr' target='#b36'>Mezher & Abbod, 2014)</ns0:ref>, and regression (M. <ns0:ref type='bibr' target='#b38'>Mezher & Abbod, 2012)</ns0:ref>.</ns0:p><ns0:p>On the other hand, GF may yield kernels employed in SVMs. Kernels are created by combining operands and operators to generate appropriate pairings, such as (3 + 4). At each pair, the pair is indexed at random in a GF array cell (known as GF kernel). Correlation between pairings is boosted by picking pairs at random that boost the strength of the generated GF kernels. Each GF kernel chromosome was divided into a head segment that only carries functions and a tail segment containing terminals. However, the size of the head segment must be determined ahead of time, but the size of the tails segment does not need to be determined since the GF algorithm predicts the number of genes needed based on the pairs required for the drawn functions (M. <ns0:ref type='bibr' target='#b37'>Mezher & Abbod, 2017)</ns0:ref>. Furthermore, the GF algorithm predicts the number of operands (terminals) necessary each time the GF algorithm generates different operators (function) at random. The proposed modified GF algorithm has the pseudo-code that can be seen in Figure <ns0:ref type='figure'>1</ns0:ref>:</ns0:p><ns0:p>The following operands and operators were used to forecast malignant and benign cells in the prostate dataset, as shown in Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The GF genome comprises a continuous, symbolic string or chromosome of equal length. Table <ns0:ref type='table'>5</ns0:ref> shows an example of a chromosome with varying pairs that may be used to create a valid GF kernel.</ns0:p></ns0:div>
<ns0:div><ns0:head>D. Model Evaluations</ns0:head><ns0:p>The classification performance of each model is evaluated using statistical classification accuracy.</ns0:p><ns0:p>The equation ( <ns0:ref type='formula'>3</ns0:ref>) is used to determine the accuracy of the GF algorithm's correctly classified instances:</ns0:p><ns0:p>(3)</ns0:p><ns0:formula xml:id='formula_2'>= + + + + * 100%</ns0:formula><ns0:p>The True Positive (TP), True Negative (TN), False Positive (FP), and False Negative (FN) define this accuracy measurement. A TP is made when the algorithm predicts malignant (positive), and the actual result is malignant (positive). A TN is made when the algorithm predicts benign (negative), and the actual result is benign (negative). FP occurs when the algorithm predicts a benign (negative) instance as malignant (positive). Finally, when the GF algorithm classifies a malignant (positive) instance as benign (negative), the result is FN. The accuracy performance metrics are compared in Table <ns0:ref type='table'>5</ns0:ref>.</ns0:p><ns0:p>We have also included another evaluating estimator to measure the quality of the proposed mode to choose the best. Some of the estimators tolerate the small samples better than others, while other measures the quality of the estimator with large samples. Mean Square Error (MSE) <ns0:ref type='bibr' target='#b41'>(Schluchter, 2014)</ns0:ref> has been used as a standard metric to measure model performance in medical, engineering, and educational studies. Assume E' is the predicted classes E'={(pred(y1), pred(y2), pred(y3), … } of the observed classes E={y1, y2, y3,..}, then the MSE is defined as the expectation of the squared deviation of E' from E: (4)</ns0:p><ns0:formula xml:id='formula_3'>( ' ) = ∑ = 1 ( ' -) 2</ns0:formula><ns0:p>MSE measures the estimator's bias (accuracy), which shows how much its predicted value deviates consistently from the actual value and the estimator's variance (precision), which indicates how much its expected value fluctuates due to sampling variability. Squaring the errors magnifies the effect of more significant errors. These calculations disproportionately penalize larger errors more than more minor errors. This attribute is critical if we want the model to be as accurate as possible.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69883:2:1:NEW 22 May 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>We are preprocessing the dataset using min-max standardization, without removing any features that prevented incorrect relevance assignments. The elimination of the features showed either weakness or strength correlation and reduced the accuracy values, as shown in Table <ns0:ref type='table'>5</ns0:ref>. In statistics, correlation is any connection between two variables but does not indicate causation, implying that the variables are dependent on one another. The closer a variable is to 1, the more significant the positive correlation; the closer a variable is to -1, the stronger the negative correlation; and the closer a variable is to 0, the weaker the negative correlation. Each malignant datapoint was subjected to analysis. We implemented the proposed method using Visual Studio Code by Python.</ns0:p><ns0:p>This paper conducted our experiments on 100 patients of 8 features collected from kaggle.com <ns0:ref type='bibr' target='#b40'>(Saifi, 2018)</ns0:ref>for prostate cancer and 596 patients of 30-features for the breast cancer dataset. The instances were classified as malignant or benign in the experiments. The following are the steps involved in conducting the GF algorithm to produce a valid GF kernel. GF starts by correctly generating the initial GF genes (operators, operands) (operand, operator, operand). Then, the GF algorithm generates a valid GF chromosome containing 50 genes. Based on a 5-fold crossvalidation approach, the dataset was divided into a training set and a testing set. The detailed GF algorithm is explained in Figure <ns0:ref type='figure'>1</ns0:ref> <ns0:ref type='bibr'>(Mohammad et al., 2016)</ns0:ref>. </ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The proposed GF was deployed for the first time in the prostate/breast cancer detection dataset, and it demonstrated a significant performance improvement over the existing models in this domain. Experiments were carried out with a broad set of features from the conducted datasets. The folding indicis of the best GF chromosome found for the produced kernel were: <ns0:ref type='bibr'>['1.2', '3.4', '5.6', '7.8', '9.10', '0.5', '0.6', '11.12', '13.14', '0.9', '15.16', '17.18', '19.20', '21.22', '0.14', '23.24', '25.26', '0.17', '0.18', '0.19', '0.20', '0.21', '0.22', '0.23', '0.24', '0.25', '0.26']</ns0:ref> In the same side, the best kernel string found for breast cancer dataset was shownin Figure <ns0:ref type='figure'>3</ns0:ref>(f):</ns0:p><ns0:p>['Plus_s', 'Plus_s', 'x', 'Plus_s', 'Multi_s', 'Multi_s', 'Minus_v', 'Plus_s', 'Plus_s', 'x', 'Minus_v', 'x', 'x', 'x', 'Minus_v']</ns0:p><ns0:p>Where the folding indicis of the best GF chromosome found for breast cancer dataset were: <ns0:ref type='bibr'>['1.2', '3.4', '0.2', '5.6', '7.8', '9.10', '0.6', '11.12', '13.14', '0.9', '0.10', '0.11', '0.12', '0.13', '0.14']</ns0:ref> PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69883:2:1:NEW 22 May 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>As seen in Figure <ns0:ref type='figure'>4</ns0:ref>, the proposed GF model performed the best for prostate cancer classification.</ns0:p><ns0:p>After using 5-Fold, the absolute average accuracy on the prostate cancer dataset was 96.0%.</ns0:p><ns0:p>Additionally, when evaluated on a prostate cancer dataset, dimensionality reduction had an enhancing impact.</ns0:p><ns0:p>Consequently, the RF model achieved an accuracy of 96.0 %, more significant than the proposed model by 0.09%, which was the best result. The accuracy comparison for breast cancer is shown in Figure <ns0:ref type='figure'>5</ns0:ref>, where all features are included. The proposed model was validated using just 5-folds, while the RF model-averaged accuracies ranged from 0-fold to 20-folds.</ns0:p><ns0:p>Although our system does not get the highest accuracy results in the breast cancer dataset, it is a close second. To emphasize the relevance of our findings, we compare the P-values of the hypothesis tests for both predefined SVM kernels (Figure <ns0:ref type='figure'>6</ns0:ref> (a and b)) and ML models (Figure <ns0:ref type='figure'>6</ns0:ref> (c and d)) to the reported P-value results for the proposed models in Figures <ns0:ref type='figure'>4 and 5</ns0:ref>. Except for the prostate cancer dataset, all results are significant compared to a standard P = 0.05 criterion.</ns0:p><ns0:p>Except for RBF, the SVM kernels in both datasets have an insignificant structure to GF.</ns0:p><ns0:p>Furthermore, in both SVM kernels and ML models, the dataset size played the most critical role in GF's ability to forecast all negative samples and a representative sample for each positive sample. Figure <ns0:ref type='figure'>6</ns0:ref> displays the P-value results for both the prostate and cancer datasets compared to the predefined kernel-SVM and ML models.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This paper demonstrated that using a GF algorithm to classify patients with prostate cancer provides better accuracy than KNN, SVM, DT, and LR models. The GF algorithm achieved an average accuracy of 96% without eliminating any features from the dataset. This enables healthcare professionals to classify prostate cancers more precisely and provide more targeted therapies. Further improvements can be introduced to the model's accuracy based on our results.</ns0:p><ns0:p>Using multidimensional data whilst choosing a range of feature selection/classification algorithms can be a promising tool for early-onset prostate cancer classification. We plan to work on the concept further and apply it to other types of cancer. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>proposed. Hence, patient stratification and tumour classification are vital. Especially true PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69883:2:1:NEW 22 May 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2022:01:69883:2:1:NEW 22 May 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figures 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figures 2 and 3 (a-e) depict the complexity, diversity, ROC curve, accuracy, and best GF chromosome results for prostate and breast datasets. Figures 2 and 3 (a) show the GF algorithm's performance concerning the complexity of the generic kernel. Figures 2 and 3 (b) show the population's diversity in each generation, where GF maintains the best fitness values with a chance of accepting weak kernels. Figures 2 and 3 (c) shows that GF generic kernel was the best Area Under Curve (AUC) value compared with conventional SVM classifiers; linear, RBF, and polynomial kernel functions. Based on the MSE, the proposed model shows (Figure 2, 3 (e)) minor differences between the observed values and the predicted classes. In Figure 2 (e), the proposed model beats the other kernels with at least the average of errors was better than the minimum box plot of the best model (RBF kernel).Figure 3 (e) shows a perfect model with no error produced of almost zero variance. In order to show the performance of the GF algorithm in each generic kernel, median, average, and standard deviation are calculated in Figure 2.3 (d). The</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>) was used to test the proposed GF algorithm</ns0:cell></ns0:row></ns0:table><ns0:note>The breast cancer dataset is a repository maintained by the University of California. The dataset contains 569 samples of malignant and benign tumor cells. Columns 1-30 contain real-value PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69883:2:1:NEW 22 May 2022)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>PeerJ</ns0:figDesc><ns0:table /><ns0:note>Comput. Sci. reviewing PDF | (CS-2022:01:69883:2:1:NEW 22 May 2022)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>gives a comparative description of several ML algorithms based on the same preprocessing conditions. The accuracy performance of the proposed GF model was superior compared to the six ML approaches in the prostate cancer dataset. Furthermore, we expanded this comparison with the suggested hybrid model by using the SVM classifier with several conventional kernels such as linear, polynomial, and RBF kernels.The proposed model achieved 96.0% accuracy in the prostate cancer dataset, which is better than the ANN by 16% and the LR by 6%. The proposed GF beat the best-preset kernel by 8%, 20%,</ns0:figDesc><ns0:table /><ns0:note>and 16% compared to linear, RBF, and polynomial kernels, respectively. The best kernel found for prostate cancer dataset (shown Figure2(f)) is: ['Plus_s', 'Multi_s', 'Multi_s', 'Plus_s', 'Minus_s', 'Minus_v', 'Minus_v', 'Plus_s', 'Plus_s', 'x', 'Minus_s', 'Plus_v', 'Plus_v', 'Plus_v', 'x', 'Plus_s', 'Minus_s', 'Minus_v', 'Minus_v', 'Minus_v', 'x', 'x', 'x', 'x', 'x', 'x', 'x'] </ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>1 Rad. Text. Perim. Area Smooth Compact Symmetry Fractal_dim. Diagnosis</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>23</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>954</ns0:cell><ns0:cell>0.143</ns0:cell><ns0:cell cols='2'>0.278 0.242</ns0:cell><ns0:cell>0.079</ns0:cell><ns0:cell>M</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell cols='2'>1326 0.143</ns0:cell><ns0:cell cols='2'>0.079 0.181</ns0:cell><ns0:cell>0.057</ns0:cell><ns0:cell>B</ns0:cell></ns0:row><ns0:row><ns0:cell>21</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell cols='2'>1203 0.125</ns0:cell><ns0:cell>0.16</ns0:cell><ns0:cell>0.207</ns0:cell><ns0:cell>0.06</ns0:cell><ns0:cell>M</ns0:cell></ns0:row><ns0:row><ns0:cell>14</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>386</ns0:cell><ns0:cell>0.07</ns0:cell><ns0:cell cols='2'>0.284 0.26</ns0:cell><ns0:cell>0.097</ns0:cell><ns0:cell>M</ns0:cell></ns0:row><ns0:row><ns0:cell>9</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell cols='2'>1297 0.141</ns0:cell><ns0:cell cols='2'>0.133 0.181</ns0:cell><ns0:cell>0.059</ns0:cell><ns0:cell>M</ns0:cell></ns0:row><ns0:row><ns0:cell>25</ns0:cell><ns0:cell>25</ns0:cell><ns0:cell>477</ns0:cell><ns0:cell>0.128</ns0:cell><ns0:cell>0.17</ns0:cell><ns0:cell>0.209</ns0:cell><ns0:cell>0.076</ns0:cell><ns0:cell>B</ns0:cell></ns0:row><ns0:row><ns0:cell>16</ns0:cell><ns0:cell>26</ns0:cell><ns0:cell cols='2'>1040 0.095</ns0:cell><ns0:cell cols='2'>0.109 0.179</ns0:cell><ns0:cell>0.057</ns0:cell><ns0:cell>M</ns0:cell></ns0:row><ns0:row><ns0:cell>15</ns0:cell><ns0:cell>18</ns0:cell><ns0:cell>578</ns0:cell><ns0:cell>0.119</ns0:cell><ns0:cell cols='2'>0.165 0.22</ns0:cell><ns0:cell>0.075</ns0:cell><ns0:cell>M</ns0:cell></ns0:row><ns0:row><ns0:cell>19</ns0:cell><ns0:cell>24</ns0:cell><ns0:cell>520</ns0:cell><ns0:cell>0.127</ns0:cell><ns0:cell cols='2'>0.193 0.235</ns0:cell><ns0:cell>0.074</ns0:cell><ns0:cell>M</ns0:cell></ns0:row><ns0:row><ns0:cell>25</ns0:cell><ns0:cell>11</ns0:cell><ns0:cell>476</ns0:cell><ns0:cell>0.119</ns0:cell><ns0:cell>0.24</ns0:cell><ns0:cell>0.203</ns0:cell><ns0:cell>0.082</ns0:cell><ns0:cell>M</ns0:cell></ns0:row></ns0:table><ns0:note>2 PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:69883:2:1:NEW 22 May 2022)</ns0:note></ns0:figure>
</ns0:body>
" | "Dear Editor,
Thank you for allowing us to submit a revised draft of our manuscript titled 'A Modified Genetic Folding Approach for Prostate and Breast Cancer Classification'. We appreciate the time and effort you and the reviewers have dedicated to providing your valuable feedback on our manuscript. We have been able to incorporate the final changes to reflect most of the suggestions provided by the reviewers. We have highlighted the changes within the manuscript. Here is a point-by-point response to the reviewers' comments and concerns.
Reviewer 1
Comment:
Thanks for the revision efforts. It is found that more details have been added, however, it lacks a section of describing the distinguishable technical contributions.
Reply
More technical details and comparisons were added starting from pages 15-18.
Comment:
Moreover, the reference is very limited. Please consider including more up-to-date reference, such as:
• Weakly supervised prostate tmp classification via graph convolutional networks;
• Integrating genomic data and pathological images to effectively predict breast cancer clinical outcome;
• Supervised machine learning model for high dimensional gene data in colon cancer detection;
• A survey on machine learning approaches in gene expression classification in modelling computational diagnostic system for complex diseases;
• OncoNetExplainer: explainable predictions of cancer types based on gene expression data;
• A Novel Statistical Feature Selection Measure for Decision Tree Models on Microarray Cancer Detection;
• Selecting features subsets based on support vector machine-recursive features elimination and one dimensional-Naïve Bayes classifier using support vector machines for classification of prostate and breast cancer.
Reply:
Thank you for your suggestions on the new references. The most pertinent suggested references have been added to our paper's bibliography.
Experimental design
Comment:
Please refer to the mentioned reference regarding the experimental design. The p-value will be desired to evaluate the effectiveness of the model.
Reply:
P-values were calculated and compared for both the kernel-SVM and the ML models versus the proposed model over three rounds. The proposed model achieves significant values when compared to the other available models.
Validity of the findings
Comment:
It could be validated if the source code and datasets are publicly available.
Reply:
As shown in line 164, the datasets and code are now available on GitHub.
Reviewer 2
Basic reporting
Comment:
The authors have expanded their model performance analysis by including a pairwise model accuracy comparison among a set of alternative SVC kernels and their proposed GF kernel in Fig. 2C,E. This analysis shows that GF outperforms all other evaluated kernels on these classification problems. I believe the manuscript now satisfies this journal's criterion for publication.
Reply:
Thank you for recommending our paper for publication.
Comment:
However, I would personally still recommend the authors included some sort of pairwise statistical test in this analysis (e.g. Mann-Whitney, McNemar) to robustly assess the significance of the observed improvement in performance.
Reply:
For both datasets, p-value tests were added. We also compared the proposed model to the predefined SVM kernels and to the other ML models. Pages 16 and 17 show the comparison.
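For completeness, one common way to obtain such a pairwise p-value is McNemar's test on two models' test-set predictions, as suggested by the reviewer; the snippet below is an illustrative sketch only, not the exact analysis code.

```python
# Pairwise significance test between two classifiers' predictions (McNemar's test).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from statsmodels.stats.contingency_tables import mcnemar

X, y = load_breast_cancer(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)

pred_a = SVC(kernel="rbf").fit(X_tr, y_tr).predict(X_te)
pred_b = RandomForestClassifier(random_state=0).fit(X_tr, y_tr).predict(X_te)

a_ok, b_ok = pred_a == y_te, pred_b == y_te
table = [[np.sum(a_ok & b_ok), np.sum(a_ok & ~b_ok)],
         [np.sum(~a_ok & b_ok), np.sum(~a_ok & ~b_ok)]]
print(mcnemar(table, exact=True).pvalue)   # small p-value -> the two models differ significantly
```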
Best Regards
" | Here is a paper. Please give your review comments after reading it. |
695 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Synthesizing human movement is useful for most applications where the use of avatars is required. These movements should be as realistic as possible and thus must take into account anthropometric characteristics (weight, height, etc.), gender, and the performance of the activity being developed. The aim of this study is to develop a new methodology based on the combination of principal component analysis and a partial least squares regression model that can generate realistic motion from a set of data (gender, anthropometry and performance). Eighteen volunteer runners participated in the study.</ns0:p><ns0:p>The joint angles of the main body joints were recorded in an experimental study using 3D motion tracking technology. A five-step methodology has been employed to develop a model capable of generating a realistic running motion. The described model has been validated for running motion, showing a highly realistic motion which fits properly with the real movements measured. The described methodology could be applied to synthesize any type of motion: walking, going up and down stairs, etc. As future work, we want to integrate the motion into realistic body shapes, generated with a similar methodology and from the same simple original data.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>It is well known that there is a large degree of information contained in the kinematics of a moving body which is influenced by parameters such as: gender, age, anthropometrical features, emotional state, personality traits, etc. <ns0:ref type='bibr' target='#b22'>(Troje 2008)</ns0:ref> A number of studies demonstrate the capability of the human visual system to detect, recognize and interpret the information encoded in the biological motion <ns0:ref type='bibr' target='#b10'>(Johansson 1973)</ns0:ref>. There are also many attempts to analyse this information encrypted in human motion. Some researchers use discrete kinematics parameters such as ranges, speed, etc. <ns0:ref type='bibr' target='#b4'>(Dvorak et al. 1992</ns0:ref>) Others focus their studies on the sequence of movement along time, instead of recording simple parameters. In these cases, they analyse the complete function of time f(t) <ns0:ref type='bibr' target='#b6'>(Feipel et al. 1999)</ns0:ref>. A number of kinematical models are based on frequency domain manipulations <ns0:ref type='bibr' target='#b3'>(Davis, Bobick, and Richards 2000)</ns0:ref> and multiresolution filtering <ns0:ref type='bibr' target='#b2'>(Bruderlin and Williams 1995)</ns0:ref>. Nevertheless, the most common objective of these studies is to model and to classify the movement pattern of the person being measured, rather than creating new motions from the extracted information.</ns0:p><ns0:p>In this regard, motion synthesis is currently attracting a great deal of attention within the computer graphics community as a means of animating three dimensional realistic characters and avatars; and in the robotic field to provide controlled real-time dynamic motion for the locomotion and other activities <ns0:ref type='bibr' target='#b11'>(Kajita et al. 2002)</ns0:ref>. With the computational resources available today, largescale models of the body [i.e. models that have many degrees of freedom and are actuated by many muscles] may be used to perform realistic simulations <ns0:ref type='bibr' target='#b17'>(Pandy 2001</ns0:ref>). Nevertheless, it is necessary to perform lab experiments to track the positions and orientations of body segments executing the task aimed to be synthesized. Recording motion data directly from real actors and mapping them to computer characters is a common technique used to generate high quality motion <ns0:ref type='bibr' target='#b12'>(Li, Wang, and Shum 2002)</ns0:ref>. However, this technique requires a high effort in experimental work. Besides, new measures are needed to include changes in the pattern of movement, such as age, weight, gender or speed. In this sense, it would be useful to create a methodology based on biomechanical models constructed from a database of motions, instead of a single actor, able to generate realistic motions of individuals with different anthropometrical characteristics, with sufficient accuracy and without the need to perform laboratory measurements.</ns0:p><ns0:p>Several authors have addressed the motion modelling and synthesis for biped walking, jumping, pedalling <ns0:ref type='bibr' target='#b21'>(Troje 2002)</ns0:ref>, or even stair-ascending <ns0:ref type='bibr' target='#b24'>(Vasilescu 2002)</ns0:ref>. Classically the mathematical approach of the synthesis of movement has been the dynamic optimization of biomechanical body structures <ns0:ref type='bibr' target='#b17'>(Pandy 2001)</ns0:ref>. 
These models provide detailed information of the functioning of some structures, such as the description of muscle function during normal gait.</ns0:p><ns0:p>However, this approach becomes an unworkable problem when a greater number of body structures are included in the model. A new approach based on Principal Components Analysis (PCA) can facilitate the understanding of the information contained in the kinematics of a moving human body and avoids the inclusion of the dynamics in the model. PCA can extract depth information contained in the mathematical function and its derivatives not normally available through traditional statistical methods <ns0:ref type='bibr' target='#b23'>(Ullah and Finch 2013)</ns0:ref>. In this way PCA can be used on different levels. For instance, <ns0:ref type='bibr' target='#b21'>Troje (2002)</ns0:ref> used PCA in two steps for the purpose of analysing and synthesizing human gait patterns. In the first one, they extracted the main components from the entire database, in order to eliminate redundancy and to reduce the dimensionality. In the second step, PCA was applied particularly for each walker in order to retain the encoded information of each walker-specific variation. In our research, we will also use a model (based on PCA) to extract the most relevant information from the pattern of running. This information will be used to develop </ns0:p></ns0:div>
<ns0:div><ns0:head>Measurement and Protocol</ns0:head><ns0:p>The measurements were performed using commercial equipment based on 17 inertial sensors (MOVEN Studio). The commercial system has been validated by previous studies <ns0:ref type='bibr' target='#b27'>(Zhang et al. 2013;</ns0:ref><ns0:ref type='bibr' target='#b18'>Thies et al. 2007)</ns0:ref>. A sampling frequency of 120 Hz was used. Because this system showed a very high sensitivity to electromagnetic fields, the running trials were measured outdoors in a location free of electromagnetic interference.</ns0:p></ns0:div>
<ns0:div><ns0:head>Experimental procedure</ns0:head><ns0:p>For the purpose of controlling the pace of running, a 20-metre-long corridor delimited with cones every 5 metres was set up. Thus, we obtained four areas: one area of acceleration, two of constant speed and a final deceleration area. Running at constant speed presents a periodic timing in which the period depends on the velocity <ns0:ref type='bibr' target='#b16'>(Novacheck 1998)</ns0:ref>, whereas the acceleration</ns0:p></ns0:div>
<ns0:div><ns0:p>and deceleration periods are out of phase and the duration of cycles is variable. Therefore, the running cycles used to create our model were selected within the area of constant speed.</ns0:p><ns0:p>In the case of running, the pattern of the movement changes with velocity (e.g. stride length, maximum joint angles, etc.). For this reason, each runner completed six running trials at different speeds. Initially, subjects started running at normal speed. In the second measurement subjects ran at their maximum speed. The third and fourth trials were performed at a pace between normal and maximum speed. The fifth trial was performed at the minimum speed at which each runner was able to run, on the edge between walking and running; running was defined in this case as the gait in which there is no phase of bipodal support <ns0:ref type='bibr' target='#b1'>(Biewener et al. 2004</ns0:ref>). The last trial was performed at an average speed between the lowest and the normal speed. This procedure allowed us to obtain six observations representing the whole range of speeds that each subject could execute.</ns0:p></ns0:div>
<ns0:div><ns0:head>Mathematical procedure</ns0:head><ns0:p>The methodology used in our study comprised 5 steps: 1-Reduction of intra-personal variability: joint angles are periodic by nature. We took the most representative single stride for the purpose of reducing variability and dismiss the phases with no consistent speed, such as acceleration and deceleration steps. The selected stride was picked in the middle of the running sequence, guaranteeing constant speed. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science all the measurements to the percentage of the running cycle. The cubic spline was applied to normalize at 50 equispaced time intervals per each variable (see the example in Fig. <ns0:ref type='figure' target='#fig_8'>1</ns0:ref>). The application of the cubic spline to the 64 kinematical variables makes a total amount of 50 x 64 (3200) data per each subject Fig. <ns0:ref type='figure' target='#fig_8'>1</ns0:ref>: Knee angle vs % of the cycle 3-Data cleaning: at this point a detailed checking and cleaning of inaccuracies of the kinematical data was conducted. These type of inaccuracies were caused mostly by the measurement system. The prevention of errors at this point is preferable to their later correction once the model has been created. All the measurements have been manually analysed thoroughly by an experienced examiner. The identified inaccuracies were treated as follows: a) Angular offsets: this common error usually appears during the standing posture and can affect the later joint angles registers during the trial (running) <ns0:ref type='bibr' target='#b15'>(Mills et al. 2007)</ns0:ref>. Offsets have been corrected manually eliminating (adding or subtracting) the difference between the initial angle observed and the expected angle of the body segment at this position. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science b) Positioning error: due to the fact that our measurement system, based on inertial sensors, uses the earth's magnetic field to determine the reference position of each subject, it is quite common to find subjects with slight differences in their initial reference positions.</ns0:p><ns0:p>In this case, we have proceeded by correcting the reference system aligning it with the direction of running forward. Thus, we can guarantee that all measurements are equally oriented. c) Non-physiological angles: some errors in the registration of joint angles were detected in the database. These inaccuracies came from errors of the inertial sensors. In this case, it was not possible to correct the error effectively, thus we proceed by eliminating these observations from the database and repeating the measurement. 4-Dimensionality reduction: the database of all measurements was combined in a single matrix W. The initial number of observations is 108 (18 subjects x 6 velocities = 108). But three measurements fail, therefore W has 105 rows (observations) and 3200 columns (50 equispaced time intervals x 64 kinematical variables). Motion data of each observation is enclosed in the rows of the matrix .</ns0:p><ns0:p>𝑊 = (𝑤 𝑖 ), 𝑖 = 1,…, 105 Before the creation of the bio-motion generator, by means of a regression model, it was needed to reduce the dimensionality of the motion data. 
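To make the time-normalization and matrix-assembly steps concrete, the sketch below (not the authors' original code; the array names, the synthetic input and the use of SciPy are illustrative assumptions) resamples one selected stride of each of the 64 kinematical variables to 50 equispaced points of the cycle and stacks each observation into one 3200-element row of the matrix W:

```python
# Sketch of step 2 (time normalization) and step 4 (assembling W), assuming
# each observation is a list of 64 kinematical signals recorded at 120 Hz.
import numpy as np
from scipy.interpolate import CubicSpline

N_POINTS = 50          # equispaced samples per gait cycle
N_VARS = 64            # kinematical variables per observation

def normalize_stride(signal):
    """Resample one stride (arbitrary length) to N_POINTS %-of-cycle samples."""
    t = np.linspace(0.0, 100.0, len(signal))        # original samples as % of cycle
    spline = CubicSpline(t, signal)
    return spline(np.linspace(0.0, 100.0, N_POINTS))

def observation_row(stride_signals):
    """Concatenate the 64 normalized variables into one 3200-element row."""
    assert len(stride_signals) == N_VARS
    return np.concatenate([normalize_stride(s) for s in stride_signals])

# toy example with the shape of the study: 105 observations -> W of shape (105, 3200)
rng = np.random.default_rng(0)
observations = [[rng.normal(size=rng.integers(80, 140)) for _ in range(N_VARS)]
                for _ in range(105)]
W = np.vstack([observation_row(obs) for obs in observations])
print(W.shape)   # (105, 3200)
```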
Computing a PCA on the running data (contained in matrix W), resulted in a decomposition of the data matrix into an average running vector 𝑊 𝑤 0 and 3200 weighed components, arranged in a 3200 x3200 matrix :</ns0:p><ns0:formula xml:id='formula_0'>𝑉 (1) 𝑊 = 𝑊 0 + 𝜶•𝑉</ns0:formula><ns0:p>where is a 105x3200 matrix with all rows equal to and with is a high speed to low speed, etc. The decision of how many components to retain was a critical issue in the exploratory factor analysis. To perform this decision we used the methodology of Parallel Analysis (PA) <ns0:ref type='bibr' target='#b9'>(Hayton, Allen, and Scarpello 2004)</ns0:ref>. PA is a Monte-Carlo based simulation method that compares the observed eigenvalues (components) with those obtained from randomized normal variables. A component is retained if its explained variance or information is higher than the information provided by the eigenvectors derived from the random data. Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_1'>𝑊 0 𝑤 0 𝜶 = (𝛼 𝑖 ) 𝑖 = 1,</ns0:formula></ns0:div>
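The decomposition W = W0 + α·V and the Parallel Analysis criterion can be illustrated with the following sketch; it is a simplified, hedged illustration (the SVD-based PCA, the number of random simulations and the variable names are assumptions, not the authors' implementation):

```python
# Sketch of the PCA decomposition of W and of a Monte-Carlo Parallel Analysis
# that retains components whose explained variance exceeds that of random data.
import numpy as np

def pca(W):
    w0 = W.mean(axis=0)                      # average running vector
    Wc = W - w0
    U, s, Vt = np.linalg.svd(Wc, full_matrices=False)
    explained = s**2 / np.sum(s**2)          # explained-variance ratios
    scores = U * s                           # alpha: one row of scores per observation
    return w0, Vt, scores, explained

def parallel_analysis(W, n_sim=100, seed=0):
    """Number of components whose information exceeds that of randomized data."""
    rng = np.random.default_rng(seed)
    _, _, _, observed = pca(W)
    random_ev = np.zeros((n_sim, len(observed)))
    for k in range(n_sim):
        _, _, _, random_ev[k] = pca(rng.normal(size=W.shape))
    threshold = random_ev.mean(axis=0)
    return int(np.argmax(observed < threshold))   # first index where random data wins

# n_keep = parallel_analysis(W)
# w0, V, alpha, _ = pca(W)
# W_c = w0 + alpha[:, :n_keep] @ V[:n_keep, :]    # rank-c reconstruction (Eq. 2)
```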
<ns0:div><ns0:p>The correlation was obtained as a regression model, combining a partial least squares (PLS) regression model as a first step and a linear regression model (LRM) as a second step. PLS methodology is explained in <ns0:ref type='bibr' target='#b26'>Wold (2006)</ns0:ref> and <ns0:ref type='bibr' target='#b8'>Geladi (1986)</ns0:ref>. This type of regression model is suitable for the kind of data involved in the bio-motion generator since the input data of the model are strongly correlated (anthropometrical information).</ns0:p><ns0:p>The PLS regression model takes the 1D data -age, height, weight and velocity-as input information and produces a set of PCA scores as output. The LRM model was applied to these output PCA scores to reflect the influence of gender on the PCA scores.</ns0:p><ns0:p>In the first step, we estimated a PLS model considering anthropometrical data and velocity of the movement as independent variables and the PCA scores as dependent variables. The general formula of a PLS model is:</ns0:p><ns0:p>(3) 𝑌 -𝑌 0 = 𝐵•(𝑋 -𝑋 0 ) + 𝐸</ns0:p><ns0:p>where 𝑌 is the matrix of dependent variables (the PCA scores), 𝑋 is the matrix of independent variables, and 𝐵 is the matrix of regression coefficients.</ns0:p></ns0:div>
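A minimal sketch of this first step, using scikit-learn's PLSRegression as a stand-in for the authors' implementation (the predictor list and toy shapes are assumptions), could look as follows:

```python
# Sketch of the PLS step: 1-D data (age, height, weight, velocity) -> retained PCA scores.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def fit_pls(X, Y, n_components=2):
    """Fit Y ~ X with PLS and return the model and its prediction error matrix E."""
    pls = PLSRegression(n_components=n_components)
    pls.fit(X, Y)
    E = Y - pls.predict(X)          # error matrix used in the second (gender) step
    return pls, E

# toy data shaped like the study: 105 observations, 4 predictors, 12 PCA scores
rng = np.random.default_rng(1)
X = rng.normal(size=(105, 4))       # [age, height, weight, velocity] per observation
Y = rng.normal(size=(105, 12))      # retained PCA scores (e.g. alpha[:, :12])
pls, E = fit_pls(X, Y)
print(pls.predict(X).shape, E.shape)   # (105, 12) (105, 12)
```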
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Where is the matrix of mean anthropometrical data and velocity, and is the prediction</ns0:p><ns0:formula xml:id='formula_2'>𝑋 0 𝐸</ns0:formula><ns0:p>error matrix. PLS decomposes the independent and dependent variables in component spaces in order to obtain their correlation. The number of significant PLS components in the model was selected in a leave-one-out procedure and according to the explained variance (R 2 ) criteria.</ns0:p><ns0:p>Secondly, the influence of gender was modelled with a LRM of the prediction error matrix 𝑬 with coefficients and , where is the number of retained PCA</ns0:p><ns0:formula xml:id='formula_3'>𝒂 = (𝑎 1 ,…,𝑎 𝑐 ) 𝒃 = (𝑏 1 ,…,𝑏 𝑐 ) 𝑐 components:</ns0:formula><ns0:p>(5)</ns0:p><ns0:formula xml:id='formula_4'>𝐸 = 𝒂 + 𝒃•𝑔𝑒𝑛𝑑𝑒𝑟</ns0:formula><ns0:p>where for men and for women.</ns0:p><ns0:formula xml:id='formula_5'>𝑔𝑒𝑛𝑑𝑒𝑟 = 0 𝑔𝑒𝑛𝑑𝑒𝑟 = 1</ns0:formula><ns0:p>This way, the motion information related to gender which is part of the PLS error matrix , 𝐸 and uncorrelated with the prediction derived from the PLS regression, was modelled. Notice that and were considered zero whenever their F-value was below a desired level of statistical a j b j significance of 95%.</ns0:p><ns0:p>Once we have obtained , and , whenever we want to synthesize a running motion from</ns0:p></ns0:div>
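The second step, the gender LRM fitted on the PLS error matrix, can be sketched as below; the use of SciPy's linregress (whose slope t-test is equivalent to the F-test for a single binary predictor) and the function names are assumptions made for illustration:

```python
# Sketch of the gender LRM: E[:, j] ~ a_j + b_j * gender, keeping only
# coefficients that are significant at the 95% level (Eq. 5).
import numpy as np
from scipy.stats import linregress

def gender_lrm(E, gender, alpha_level=0.05):
    """Return intercepts a_j and slopes b_j; gender is 0 for men, 1 for women."""
    a = np.zeros(E.shape[1])
    b = np.zeros(E.shape[1])
    for j in range(E.shape[1]):
        res = linregress(gender, E[:, j])
        if res.pvalue < alpha_level:          # non-significant terms stay at zero
            a[j], b[j] = res.intercept, res.slope
    return a, b

# gender = np.array([...], dtype=float)   # one 0/1 value per observation
# a, b = gender_lrm(E, gender)
```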
<ns0:div><ns0:head>𝐵 a b</ns0:head><ns0:p>new anthropometrical data and velocity, we obtain the corresponding scores of the new realistic 𝛼 running motion by the following formula: (6)</ns0:p><ns0:formula xml:id='formula_6'>𝜶 = [𝒂 -𝐵•𝑋 0 ] + [ 𝐵 𝒃 ] • [𝑋 𝑔𝑒𝑛𝑑𝑒𝑟]</ns0:formula><ns0:p>where is the matrix of mean anthropometrical data and velocity. Manuscript to be reviewed Computer Science considered the true angle curve of the running motion. The predicted motion is estimated using the 'leave-one-out' validation technique. That is, not including that observation in the bio-motion generator. We wish to determine if both curves are reproducible and sufficiently similar to consider that they represent the same motion. For this purpose, we use the Intraclass Correlation Coefficient (ICC) as a measurement of the reliability, and the Standard Error of Measurement (SEM) as a direct measurement of the global error between true and predicted angles. Theoretically the ICC is defined as the ratio between the true variance and the predicted variance. The ICC varies between 0 and 1 and can be interpreted as the proportion of variance due to the methodology (true versus predicted data) in the total variance. An ICC greater than 0.8 is generally considered to be good <ns0:ref type='bibr' target='#b7'>(Fleiss 2011</ns0:ref>). The ICC is determined between the measured or true curve (T c ) and the estimated curve (E c ) provided by the bio-motion generator. The ICC is determined from the variance of both curves (T c ) and (E c ) following the next equation: ;</ns0:p><ns0:formula xml:id='formula_7'>(7) 𝐼𝐶𝐶 = 𝜎 2 𝐸 𝑐 (𝜎 2 𝑇 𝑐 + 𝜎 2 𝐸 𝑐</ns0:formula><ns0:p>) On the other hand, SEM represents the existent difference between observed (T c ) and estimated curves (E c ) determined with the bio-motion generator, and provide an indication of the real magnitude of the error. ( <ns0:ref type='formula'>8</ns0:ref>)</ns0:p><ns0:formula xml:id='formula_8'>𝑆𝐸𝑀 = 𝜎 𝑐 • ( 1 -𝐼𝐶𝐶 ) ;</ns0:formula><ns0:p>Where σ c is the combined standard deviation of the true scores (T c ) and observed scores (E c ).</ns0:p><ns0:p>And S E is the combined standard deviation of the true scores and observed scores.</ns0:p><ns0:p>We have obtained the SEM for each pair of true and predicted angles for the three spatial directions in all the joints that form the human model. For that reason, we have represented the SEM by its descriptive statistics (mean, std., 5-percentile and 95-percentile).</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10965:1:1:NEW 10 Oct 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div>
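The synthesis of a new motion and the ICC/SEM validation can be illustrated with the short sketch below. It is not the authors' code: scikit-learn's predict already folds in the centering terms of Eq. (6), the ICC follows the ratio that Eq. (7) appears to define, and the SEM uses the usual definition with a square root, which is assumed here.

```python
# Sketch of motion synthesis from new 1-D data and of the ICC / SEM validation.
import numpy as np

def synthesize(pls, a, b, w0, Vc, x_new, gender_new):
    """Predict the scores for new anthropometrical data, add the gender term, reconstruct."""
    alpha_new = pls.predict(x_new.reshape(1, -1))[0] + a + b * gender_new
    return w0 + alpha_new @ Vc                 # new 3200-element motion vector

def icc(true_curve, est_curve):
    """Proportion of shared variance between true and estimated angle curves (Eq. 7)."""
    var_t, var_e = np.var(true_curve), np.var(est_curve)
    return var_e / (var_t + var_e)

def sem(true_curve, est_curve):
    """Standard error of measurement from the combined standard deviation (Eq. 8)."""
    sigma_c = np.std(np.concatenate([true_curve, est_curve]))
    return sigma_c * np.sqrt(1.0 - icc(true_curve, est_curve))
```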
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Parallel analysis</ns0:head><ns0:p>The results of the PA (Fig. <ns0:ref type='figure' target='#fig_4'>2</ns0:ref>) have been obtained with the explained variance of the main components extracted from the original data and the same obtained from randomized data. The intersection point of both curves indicates the optimal number of components to extract from the PCA. The original number of dimensions was 72 (3 related to the pelvis translation + 69 related to the body segments orientation). The results of the PA recommend retaining the first 12 eigenvalues, which explain 88.16% of the total variance. Thus the PCA allowed a data reduction of 83%, from 72 variables to 12 weighted components.</ns0:p></ns0:div>
<ns0:div><ns0:head>Regression model</ns0:head><ns0:p>As it has been explained in the methodology, the regression model consists of two parts, the first including the anthropometrical data (PLS) and the second the gender (LRM). The dependent variables of the PLS are the scores of the first 12 principal components (PC) of the kinematical running motion. Therefore, they are uncorrelated and the optimal number of PLS components are separately determined for each PC score (PC 1... PC 12) according to its adjusted R 2 plot (Fig. <ns0:ref type='figure'>3</ns0:ref>).</ns0:p><ns0:p>PLS components are retained until their R 2 curve exhibits a decrease or a non-significant increase.</ns0:p><ns0:p>Thus, for instance, two PLS components are retained for PC 1, whereas no components are considered for PC 7 and PC 9. Notice that for those PC with 0 retained components, the PLS model provides their mean value as output. This way, the motion information associated to those PC which is provided by the PLS model is the average motion. <ns0:ref type='table' target='#tab_4'>2</ns0:ref>). The prediction obtained in the first step of the model is improved by the influence of gender on these PC. PC 7 and PC 9 are only affected by gender, since their number of retained PLS components was 0. Manuscript to be reviewed</ns0:p></ns0:div>
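The leave-one-out R^2 criterion used to decide how many PLS components to keep for each PC score can be sketched as follows (a hedged illustration; the cap of four components and the helper names are assumptions):

```python
# Sketch of the leave-one-out R^2 curve for one PCA score; if no number of
# components beats the mean prediction (R^2 <= 0), zero components are retained,
# as described for PC 7 and PC 9.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict
from sklearn.metrics import r2_score

def loo_r2_curve(X, y, max_components=4):
    """LOO R^2 of PLS models with 1..max_components components for one score y."""
    r2 = []
    for k in range(1, max_components + 1):
        pred = cross_val_predict(PLSRegression(n_components=k), X,
                                 y.reshape(-1, 1), cv=LeaveOneOut())
        r2.append(r2_score(y, pred.ravel()))
    return np.array(r2)

# components are added only while the R^2 curve keeps showing a clear increase
```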
<ns0:div><ns0:head>Validation of the bio-motion generator</ns0:head><ns0:p>The results of the reliability study, computed from the 90 observations and the same calculated by means of the leave-one-out technique, showed that the mean and standard deviation of the ICC were 0.91 (0.04), with a 5th percentile of 0.829 and a 95th percentile of 0.971. Only one subject exhibited an ICC lower than 0.8, in 2 observations (Fig. <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>In this paper we have demonstrated that the five-step methodology on which the bio-motion generator is based provides running motion models closely resembling the measurements obtained with real subjects. However, while the SEM study shows that the vast majority of errors detected between actual and predicted data of the bio-motion generator are less than 10°, there is a percentage of observations (8%) in which greater errors are observed. This can be explained because the model has been obtained from a small number of subjects --only 18--and therefore the bio-motion generator is not able to adjust to the running-specific characteristics of each runner.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Future work in this line of research must be done to increase the database of real subjects measured and incorporate greater variability in anthropometric and performance characteristics. The bio-motion generator is based on a methodology which comprises 5 steps. In the fourth step we tackle a dimensionality reduction based on PCA. This step is similar to that performed by <ns0:ref type='bibr' target='#b21'>Troje (2002)</ns0:ref>. However, there are some differences, as he obtained four main components that explain more than 98% of the variance and we have obtained 12 components explaining 88.16% of variance. The greater variability of our study is explained partly by the greater variability of the running against walking and on the other hand by the greater speed range in our study in relation to Troje, in which each subject could select a single comfortable walking speed. On the other hand,</ns0:p><ns0:p>Troje made a second reduction of the dimensionality based on the simplicity of temporal behaviour of the walking components which could be modelled with pure sine functions with a scaled fundamental frequency. This approach was not valid for the motion of running, due to the fact that the 12 PCs of running cannot be modelled with a proportional frequency. This suggests that running is a more complex motion than walking in the sense that there does not exist a proportion between the frequency of oscillation of the different body segments. The fifth step of the methodology consists of a two-step linear regression which correlates a given list of 1D measurements with the PCA scores of movement. A linear regression technique has been used before to approximate motion models from a reduced marker set and estimate the remaining markers <ns0:ref type='bibr' target='#b13'>(Liu et al. 2005)</ns0:ref> or to model the motion-style and the spatio-temporal movement <ns0:ref type='bibr' target='#b19'>(Torresani, Hackney, and Bregler 2006)</ns0:ref>. However, it has not been used before to synthesize new human motion directly from a set of anthropometrical and performance data. In this sense, it can be considered a real breakthrough in the field of synthesis of human motion.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10965:1:1:NEW 10 Oct 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The major contribution of this paper is a novel statistical methodology for modelling human movements. The method described in this article has been developed and validated for running motion, but this same methodology could be used to synthesize other types of motion: walking, going up and down stairs, or even sport movements such as jumping, pedalling, golf swing and putting, etc. Our work aims to provide a realistic motion to body shapes that can be developed with the methodology described in the work of <ns0:ref type='bibr' target='#b0'>Ballester et al. (2014)</ns0:ref>. Those body shapes could include an adjusted skeleton formed by a hierarchical set of interconnected joints and can be used to move the body shape with the required or desired motion provided by our methodology (Fig. <ns0:ref type='figure' target='#fig_7'>6</ns0:ref>). The integration of both methods will allow generating realistic avatars supplied with realistic motion from a set of adjustable and simple anthropometrical and performance data and without the need to perform new measurements.</ns0:p><ns0:p>A limitation of this study is the sample size. Further work needs to be done in order to validate the model with a broader sample of people. Notwithstanding these limitations, the findings suggest that the model is valid.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>…,105 105x3200 matrix of PCA scores. Each observation was thus expressed by a linear combination 𝑤 𝑖 of scores and PCA components (columns of matrix ). Components represented factors related 𝛼 𝑖 𝑉 PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10965:1:1:NEW 10 Oct 2016) Manuscript to be reviewed Computer Science to gender, anthropometrical traits and running speed. And scores represented individual characteristics of each runner and performance of the running trial related to the previous factors. PCA components are arranged in descendent order of explained variance of the original matrix data. Thus the first columns in matrix retained most of the information in the data sample and it 𝑉 was possible to select a reduced number of components . 𝑐 (2) 𝑤 𝑖𝑐 = 𝑤 0 + 𝛼 𝑖𝑐 •𝑉 𝑐 Above denoted the average of all the running samples. The matrix contained the first 𝑤 0 𝑉 𝑐 components. represents the scores of each observation of the database in the reduced α 𝑖𝑐 𝑐 dimension space formed by the selected components. As score values change from negative to positive values, the movement of the runner changes from men to women; high BMI to low BMI;</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>5-Regression model: one of the objectives of our work was to generate a statistical model capable of synthetizing new realistic running motion from a set of desired data: age, gender, height, weight and velocity, also called 1D data. Accordingly, to devise the bio-motion generator, we needed to establish the correlation between the 1D data and PCA scores, which provide the signature of each motion. PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10965:1:1:NEW 10 Oct 2016)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>𝑋 0</ns0:head><ns0:label>0</ns0:label><ns0:figDesc>Validation methodology of the bio-motion generatorTo validate the five-step methodology described to develop the bio-motion generator wepropose a comparison between each recorded observation and the prediction of running motion generated by the model by means of the 'leave-one-out' procedure. The recorded observation is PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10965:1:1:NEW 10 Oct 2016)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Fig. 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Fig. 2: Relation between the explained variance and the number of principal components</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Fig. 3: Leave-one-out R^2 estimation plots for the PLS model.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Fig. 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Fig. 4: Frequency histogram of the ICC.</ns0:figDesc><ns0:graphic coords='20,93.25,220.59,465.80,224.75' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Fig. 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Fig. 6: Reconstructed virtual biomechanical model (skeleton+motion).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Fig - 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Fig -4 Frequency histogram of the SEM.</ns0:figDesc><ns0:graphic coords='33,42.52,204.37,525.00,435.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Fig 5 -</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Fig 5 -Reconstructed virtual biomechanical model (skeleton+motion).</ns0:figDesc><ns0:graphic coords='34,42.52,204.37,525.00,252.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='10,159.13,182.39,314.75,241.95' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='32,42.52,199.12,525.00,294.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='35,42.52,178.87,525.00,343.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='36,42.52,178.87,525.00,276.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 . Description of the anthropometrical parameters</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>In this sense our research has three goals. The first one is to generate a database of running movements of a full human model. The second is to extract the signature of each motion, by means of PCA technique and to correlate the distinctive styles of each runner with their anthropometrical Table1). Ethical approval was obtained from the ethics committee of the Universitat Politècnica València. All participants gave written informed consent.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>and velocity.</ns0:cell></ns0:row><ns0:row><ns0:cell>Nowadays there exists a line of research developed in the field of anthropometry for the</ns0:cell></ns0:row><ns0:row><ns0:cell>purpose of obtaining a model of human body shape from a database of processed raw scans (Vinué</ns0:cell></ns0:row><ns0:row><ns0:cell>et al. 2014). The methodology followed in that line of research provides sufficient resolution to</ns0:cell></ns0:row><ns0:row><ns0:cell>synthesize accurate realistic representations of body shapes from a set of simple anthropometrical</ns0:cell></ns0:row><ns0:row><ns0:cell>parameters. Ballester et al (2014) describe a method based on the harmonization of body scan data</ns0:cell></ns0:row><ns0:row><ns0:cell>followed by a Shape Analysis procedure using Principal Component Analysis. The combination</ns0:cell></ns0:row><ns0:row><ns0:cell>of these techniques allows the generation of human 3D shape models from anthropometric</ns0:cell></ns0:row><ns0:row><ns0:cell>measurement data (age, height, weight, BMI, waist girth, hip girth, bust/chest girth, etc.). Our</ns0:cell></ns0:row><ns0:row><ns0:cell>hypothesis is that the use of a similarly based methodology to generate human motion instead of</ns0:cell></ns0:row><ns0:row><ns0:cell>human body shapes is possible, valid and reliable. The novelty in our approach is the generation</ns0:cell></ns0:row><ns0:row><ns0:cell>of running data from a set of easily measurable anthropometric parameters and a desired value of</ns0:cell></ns0:row><ns0:row><ns0:cell>running speed.</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10965:1:1:NEW 10 Oct 2016)Manuscript to be reviewed Computer Science a bio-motion generator which will solve the opposite problem of synthesizing new realistic movements. In addition, existing literature focused on synthesizing motion does not correlate the generated movement to age, gender, performance parameters such as velocity or anthropometrical features. characteristics, age, gender, and performance parameters such as the velocity of the action. The third is to develop a bio-motion generator based on a statistical model capable of synthesizing new realistic running motion from a set of desired data: age, gender, height, body mass index (BMI)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 . ANOVA table for the linear models</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Df</ns0:cell><ns0:cell>Sum Sq Mean Sq F-value Pr (>F)</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10965:1:1:NEW 10 Oct 2016) PeerJ Comput. Sci. reviewing PDF | (CS-2016:05:10965:1:1:NEW 10 Oct 2016)</ns0:note></ns0:figure>
</ns0:body>
" | "
Dear editors,
We thank the reviewers for their comments on the manuscript that helped us to improve the manuscript. We have edited the manuscript to address their concerns.
We have also corrected the editor’s comments.
We believe that the manuscript is now suitable for publication in PeerJ.
Dr. Juan V. Durá-Gil
On Behalf of all authors.
Resubmission Requirements
Please remove the Keywords from your manuscript and make sure they are included in the metadata here instead
Done
1) Please upload the tables in separate Word documents including the titles and any necessary legends in the text fields using the Edit button to the right of the file name here <https://peerj.com/manuscripts/10965/files>. Tables should not be an image pasted into the Word document.
2) The file should be named using the table number: Table1.doc, Table2.doc.
Done. We have created separate Word documents for the tables.
Reviewer 1
1) The dataset is fine but contains almost no variability on the data the researchers want to measure. The researchers indicate that one of their goals is to 'determine a statistical model capable of synthesizing new realistic running motion from a set of desired data: age, gender, height, body mass index (BMI) and velocity.' However, the limited number of people in their experiments does not allow to do that. Which are the height and BMI of the subjects included? It is not indicated. Also, according to the tables of age, there is little variation in that dimension to obtain reliable models.
The table of the BMI has been replaced by a new table including the data of height, weight and BMI of the study sample. In this table is possible to observe the difference between maximum and minimum values of these variables, which demonstrate the existence of a variability within the group of males and females. For instance, the height in the male group varies between 1.63 to 1.91 m.
The less variation in the parameter of age is due to the reason that this study is considered a first approach to adapt the methodology used by Ballester (Ballester et al., 2014) to the generation of human motion.
We consider this study as first step that proves that our model is feasible. To make that clear, we add this text in the conclusion:
“A limitation of this study is the sample size. Further work needs to be done in order to validate with a broader sample of people. Notwithstanding this limitations, the findings suggest that the model is valid.”
2) Line 170. They indicate 3200 columns. However, it is not clear why. They should clarify on that point. If they are using 20 joints, it makes roughtly 60 variables for the whole model in a given time instant. So, it makes around 50 measures, which at a frame rate of 120Hz, is no more than half second. Is that right? In any case, the authors must indicate much more clearly the number of variables of their model, a picture of it and where the 3200 columns come from.
It has been included a paragraph in the point 2-Time normalization of the Mathematical Procedure to clarify the origin of the 3200 columns.
50 equispaced time intervals x 64 kinematical variables = 3200
3) This links to the Mathematical procedure in line 139. It would be much more technical to have steps 1-3 be somehow graphically shown with some figures.
It has been included graphical information that could help to clarify the mathematical procedure. See the new Figure 2
4) The explanation of the PCA dimensionality reduction is not very technical.
We have extended the explanation. Please see the section Mathematical Procedure, 4-Dimensionality reduction.
5) The term bio-motion generator is randomly introduced in the text. First in the title, but it does not appear again until line 173 (near the middle of the paper).
We have included the term “bio-motion generator” in more places in the text in order to improve the wording and comprehension of the manuscript.
6) The authors apply a PLS approach coupled with a LRM to do the inference from age, gender,etc to movement. However, the authors should provide a more complete explanation of these techniques.
We have extended the explanation. Please see the section Mathematical Procedure, 5-Regression Model. Also, we add 2 references to support the PLS approach: Wood-2006 and Geladi-1986.
7) line 208. The authors should explicitly indicate the domain of x (line 208), i.e., what is the size of a_i. Also, which is the domain of variable 'gender'. In general a more technical and formal description of all variables is required.
line 220, coefficients alpha and beta, have 12 components. Why 12?
Matrix X is built by the concatenation of anthropometrical parameters and speed values as follows
Variable gender is binary and defined as for men and for women.
Coefficients of the LRM have 12 elements as it is the number of retained PCA components. The 12 PCA components retained are the first 12 eigenvalues. And therefore the number of columns of matrix E, and the dimension of the output of the LRM model. The decision for retaining 12 components is explained in the section Results – Parallel analysis. 12 is the intersection point of Original data and Random data (Figure 1).
8) Equations 6,7 are not way of calculating values (line 245), they are representation of the curve values.
Agreed. We have eliminated the equations, because they were more confusing than illustrative.
9) ICC should be an equation, though, instead of just text (line 249)
Agreed. ICC has been added as an equation
10) The value Se in Eq 8, is not clearly indicated how it is computed.
SE represents the combined standard deviation of the both parameters: real data (Tc) and estimated data (Ec). In order to make this information clearer, we have changed the notation. Now we use σc instead of SE.
We expect the change of notation avoids confusion, because the recognized symbol of the standard deviation is σ.
Reviewer 2
1) The section of Materials is named by the authors 'instrumentation' and it include reasonings on the methodology used. This organization can be improved including all expositions on methodology in section Methods that could be named Materials & Methods.
Agreed. We have changed the name of the sections.
2) The authors propose to use PCA to selection of main features on movement of an human to generate new movements according with other variables (age and gender) that have been not considered for this problem. The idea is not very original because the authors propose a similar methodology similar to published by Vinué et al., 2014 (see paper lines 88 -95).
In Vinué et al. ( 2014) the authors propose a new approach for defining optimal prototypes for apparel design. They introduce two classification algorithms based on a clustering algorithm, modified to deal with anthropometric data. The outputs of both algorithms include a set of representative subjects taken from the original data set which constitute their desired prototypes.
Partitioning and selection of prototypes are calculated based on a set of linear measurements obtained from 3D body scans: bust circumference, chest circumference, neck to ground length, waist circumference and hip circumference.
On the other hand, our method aims at synthesizing new movement data from a set of given anthropometrical parameters (age, height, weight, gender) and running speed.
We add the text below to the manuscript:
“Ballester et al. (2014) describe a method based on the harmonization of body scan data followed by a Shape Analysis procedure using Principal Component Analysis. The combination of these techniques allows the generation of human 3D shape models from anthropometric measurement data (age, height, weight, BMI, waist girth, hip girth, bust/chest girth, etc.). Our hypothesis is that the use of a similarly based methodology to generate human motion instead of human body shapes is possible, valid and reliable.. The novelty in our approach is the generation of running data from a set of easily measurable anthropometric parameters and a desired value of running speed.”
3) First problem: The authors use a commercial system based on 17 inertial sensors but there is not any information on the previous validation system. Without this prerequisite all data may not be valid.
The laboratory of the Institute of Biomechanics of Valencia has quality protocols, which includes sensors calibration. However, we understand the concern of the reviewer. For this reason we add two references that validates the commercial system:
• Zhang, J.-T., Novak, A. C., Brouwer, B., & Li, Q. (2013). Concurrent validation of Xsens MVN measurement of lower limb joint angular kinematics. Physiological Measurement, 34(8), N63.
• Thies, S. B., Tresadern, P., Kenney, L., Howard, D., Goulermas, J. Y., Smith, C., & Rigby, J. (2007). Comparison of linear accelerations from three measurement systems during «reach & grasp». Medical Engineering & Physics, 29(9), 967-972.
4) Second problem: Because the main objective of work is to predict movements of humans with similar features but different age and/or gender, where is the real data sample to confirm the results obtanided?
The validation procedure was explained in the section Materials & Methods; at the end of subsection Mathematical procedure. For better understanding, we have created a new subsection: Validation of the bio-motion generator.
We use the ‘leave-one-out’ cross validation technique. For better understanding, we have create a new subsection: Validation of the bio-motion generator.
5) In my opinion the idea is well but to confirm the conclusions of the paper is needed a major effort to obtain a real sample of humans with similar features of body and differents age and gender, and it is not easy
Agreed. We add this text to the conclusions: More work is necessary in order to confirm the results with a broader sample of people.
6) Also, the authors should have a effort to clarify the differences between the proposed methodology and the methodology proposed by Vinué et al., 2014 to judge the originality of the work
As mentioned above (see the answer to the issue 2) Vinué et al. (2014) introduce two classification algorithms based on a clustering algorithm, modified to deal with anthropometric data. We do not use the methodology proposed by Vinué. Our method aims at synthesizing new movement data from a set of given anthropometrical parameters (age, height, weight, gender) and running speed.
Our method is based on the work of Ballester et al. (2014). We expect that the new text will make clear this point.
" | Here is a paper. Please give your review comments after reading it. |
696 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background: COVID-19 has forced many schools and universities worldwide, including Saudi Arabia, to move from traditional face-to-face learning to online learning. Most online learning activities involve the use of video conferencing apps to facilitate synchronous learning sessions. Some faculty were not accustomed to using video conferencing apps, but they had no other choice than to jump on board regardless of their readiness, one of which had something to do with security and privacy awareness. Several threats and vulnerabilities are haunting video conferencing app users. Most require human factors to succeed. Neglecting the security measures of video conferencing apps is a big part of how they will become real.</ns0:p></ns0:div>
<ns0:div><ns0:head>Methods:</ns0:head><ns0:p>We used a survey experiment to determine the level of security and privacy awareness among Saudi Arabian faculty regarding the use of video conferencing apps, as well as the factors associated with it. We analyzed the data using the Knowledge-Attitudes-Behaviors (KAB) model and Partial Least Squares Structural Equation Modeling (PLS-SEM) method on 307 faculty members from 43 Saudi Arabian universities.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results:</ns0:head><ns0:p>We found that the average awareness score of video conferencing apps' security and privacy settings falls into the 'Poor' category. Further analysis showed that perceived security, familiarity with the app, and digital literacy of faculty members are significantly associated with higher awareness. Privacy concerns are significantly associated with higher awareness only for STEM faculty, while attitudes toward ICT for teaching and learning is negatively associated with such awareness among faculty with more than 10 years of experience. This study lays the groundwork for future research and user education on video conferencing app security and privacy.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>During this COVID-19 pandemic, video conferencing apps have been gaining popularity as they are being used as the most viable means to facilitate business processes during work-from-home or learning-from-home approaches. There is an intriguing bidirectional relationship between ICT adoption and education indicators as revealed by a cross-country analysis of research <ns0:ref type='bibr' target='#b70'>(Pratama 2017)</ns0:ref>, and video conferencing apps are no exception. In many educational institutions, they have become the most widely used tool for supporting online learning around the world, including in Saudi Arabia <ns0:ref type='bibr'>(Alshehri et al. 2020)</ns0:ref>. This is due to their unique features, which make them ideal for use in teaching and learning (A. <ns0:ref type='bibr' target='#b2'>Alammary, Carbone, and Sheard 2016)</ns0:ref>. They enable instructors to set up online synchronous classes that can be recorded and accessed later, either for those who want to revisit the class or those who want to catch up any class that they missed. Students can attend these online classes from anywhere by using personal computers, laptops, or even cell phones <ns0:ref type='bibr' target='#b23'>(Camilleri and Camilleri 2021)</ns0:ref>.</ns0:p><ns0:p>Numerous video conferencing applications were available, even prior to the COVID-19 pandemic. While Zoom emerged as the clear winner in the COVID-19 pandemic-induced surge in video conferencing app use, many other video conferencing apps, including Google Meet, Microsoft Teams, and Blackboard Collaborate Ultra, also saw an increase in downloads and usage <ns0:ref type='bibr' target='#b84'>(Trueman 2020)</ns0:ref>. Blackboard Collaborate Ultra, in particular, is the one used by the vast majority of universities in Saudi Arabia because the Ministry of Education (MOE) has designated the Blackboard platform as the official e-learning platform (Iffat Rahmatullah 2021). While the majority of Saudi Arabian faculty were unfamiliar with Blackboard Collaborate Ultra prior to the COVID-19 pandemic, they had no choice but to jump on board regardless of their readiness during the pandemic (Ali <ns0:ref type='bibr' target='#b3'>Alammary, Alshaikh, and Alhogail 2021)</ns0:ref>.</ns0:p><ns0:p>On the other hand, the shift from traditional face-to-face to online learning that occurred during the pandemic has also raised many concerns related to cybersecurity and protection of individuals and organizational information resources <ns0:ref type='bibr' target='#b6'>(Almaiah, Al-Khasawneh, and Althunibat 2020)</ns0:ref>. Several cybersecurity and privacy threats and vulnerabilities are plaguing video conferencing app users, including the exposure of user data, unwanted and disruptive intrusions, the propagation of malware, or the hijacking of host machines through remote control tools <ns0:ref type='bibr' target='#b49'>(Joshi and Singh 2017)</ns0:ref>. According to recent security reports, the number of cybersecurity attacks targetting many organizations, including universities, has significantly increased <ns0:ref type='bibr' target='#b39'>(Hakak et al. 2020;</ns0:ref><ns0:ref type='bibr' target='#b53'>Lallie et al. 2021)</ns0:ref>. Many cybersecurity incidents are caused by employees who do not follow security rules because of their low level of security awareness in the first place <ns0:ref type='bibr' target='#b44'>(Hina and Dominic 2016)</ns0:ref>. 
Faculty members become the primary actors in higher education due to their nature as instructors who often must also administer and host online classes on their own. As such, assessing their awareness of security and privacy settings when using video conferencing apps becomes critical to ensuring that the online learning experience is as secure and private as possible for all parties involved. This is the first study to use a survey experiment research design to conduct a practical assessment of Saudi Arabian faculty members' security and privacy awareness regarding the use of Blackboard Collaborate Ultra as the primary video conferencing app during the COVID-19 pandemic. The study's specific objectives are to 1) comprehend and investigate the factors associated with the Saudi Arabian faculty's level of security and privacy awareness regarding the use of video conferencing apps, particularly Blackboard Collaborate Ultra, which is the most widely used in this country for teaching and research purposes during and possibly beyond the COVID-19 pandemic, and 2) assist universities, particularly in Saudi Arabia, in improving their security policies and practices.</ns0:p></ns0:div>
<ns0:div><ns0:head>Online learning in Saudi Arabian universities during the COVID-19 pandemic</ns0:head><ns0:p>The Saudi National E-learning Center (NELC) was founded in 2005. NELC is responsible for establishing governance frameworks and regulations for e-learning and online learning in Saudi Arabia. NELC plays a significant role in enhancing the online learning experience in universities as well as supporting and promoting effective practices in online learning (A. Y. Alqahtani and Rajkhan 2020). Furthermore, the center develops policies and procedures regarding the provisioning and management of online learning programs. The policies specify the technologies that universities have to implement and the level of support they should provide to their faculty and students. Policies also include standards and practices to design accessible online learning environments (National eLearning Center 2021).</ns0:p><ns0:p>NELC has also provided guidelines to universities that specify e-learning infrastructure including hardware (e.g., servers, storage, and networking), e-learning solutions (e.g., learning management systems and video conferencing apps), establishing dedicated deanships to manage e-learning matters, providing training and awareness programs, and other e-learning and online learning initiatives <ns0:ref type='bibr' target='#b59'>(Malik et al. 2018)</ns0:ref>. As a result of the efforts of NELC and the investment in e-learning, Saudi Arabian universities have an adequate online learning infrastructure <ns0:ref type='bibr' target='#b82'>(Thorpe and Alsuwayed 2019)</ns0:ref>. Other researchers found that the IT infrastructure in Saudi Arabian universities has successfully handled the transformation from face-to-face to online learning during the COVID-19 pandemic (A. Y. Alqahtani and Rajkhan 2020). However, the university's level of maturity played a considerable role in its ability to overcome challenges and utilize elearning solutions before the pandemic.</ns0:p><ns0:p>Moreover, research indicates that faculty and students in Saudi Arabia have considerably favorable attitudes toward e-learning <ns0:ref type='bibr' target='#b45'>(Hoq 2020)</ns0:ref>. Others found that Saudi Arabian students preferred e-learning due to its flexibility and better communication with their teachers and peers. However, the same study also showed that students perceived online learning to be less beneficial than traditional face-to-face instructions (El-Sayed Ebaid 2020). While students' attitudes toward e-learning are influenced partly by their previous experience and readiness for online learning (N. <ns0:ref type='bibr' target='#b9'>Alqahtani, Innab, and Bahari 2021)</ns0:ref>, numerous other factors, such as gender, level of the course, and quality of online learning approaches also play some roles <ns0:ref type='bibr' target='#b85'>(Vadakalu Elumalai et al. 2020)</ns0:ref>.</ns0:p><ns0:p>Amid the COVID-19 pandemic that has presented significant challenges to societies, Saudi Arabia, like many other countries, has attempted to adapt to the looming crisis. Since education is one of the sectors that was greatly affected by the pandemic, Saudi Arabian universities have taken a number of initiatives to accommodate the decision to adopt online learning during the COVID-19 pandemic. 
Saudi Arabia's MOE has provided e-learning solutions licenses to all universities in order to support and facilitate the transition to e-learning and online learning, including Blackboard Leaning Management System (LMS) along with its video conferencing app, Blackboard Collaborate Ultra (Ministry of Education 2021). The MOE also provided free internet access to students around the country as well as increased the bandwidth to accommodate the high demand for the internet connection. In collaboration with charity organizations, the MOE has also supported deserving students with laptops and required training (A. Y. Alqahtani and Rajkhan 2020).</ns0:p><ns0:p>According to the MOE, approximately 1.6 million students has taken more than 4 million tests online in 43 private and public universities by May 2020. Over 58,179 faculty members participated in this transition by delivering their lectures, conducting exams, and having discussions fully online, averaging over 1.5 million online classes per week (Ministry of Education 2021). Saudi Arabia's efforts to pursue successful transitions to online learning to enable the education process for more than six million students in schools and universities during the COVID-19 pandemic has been commended by <ns0:ref type='bibr'>UNESCO (Vadakalu Elumalai et al. 2020)</ns0:ref>. Regardless, for these transitions to be successful, it is critical to ensure that faculty members serving as frontliner instructors are capable of providing students with the best online learning experience possible, which includes confirming that their awareness level is sufficient to keep online learning activities secure and private for all parties involved.</ns0:p></ns0:div>
<ns0:div><ns0:head>Theoretical framework</ns0:head><ns0:p>In 2006, Kruger and Kearney developed the KAB model, a prototype for assessing information security awareness, which consists of three different dimensions (i.e., knowledge, attitudes, and behaviors) <ns0:ref type='bibr' target='#b51'>(Kruger and Kearney 2006)</ns0:ref>. Each dimension is measured through a series of multiple-choice questions with either correct or incorrect answers for all three dimensions, in addition to the third option of 'Don't know' only for the 'knowledge' and 'attitudes' dimensions. Since then, this KAB model has been widely used as an instrument for assessing information security awareness <ns0:ref type='bibr' target='#b61'>(McCormac et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b68'>Onumo, Ullah-Awan, and Cullen 2021)</ns0:ref>.</ns0:p><ns0:p>Additionally, based on our review of the literature, we discovered several factors within individuals that are associated with their security and privacy awareness. Specifically, we identified five factors to include in this study: attitudes toward information and communication Manuscript to be reviewed Computer Science technology (ICT) for teaching and research, digital literacy, privacy concerns, perceived security awareness, and familiarity with the video conferencing app platform..</ns0:p></ns0:div>
<ns0:div><ns0:head>Attitudes toward ICT for Teaching & Research and Digital Literacy</ns0:head><ns0:p>The role of ICT in the improvement of education is undeniable. However, it is no secret that not all faculty share the same attitudes toward the use of ICT for teaching and research, either due to their lack of experience that may translate to lower levels of digital literacy <ns0:ref type='bibr' target='#b24'>(Cavas et al. 2009)</ns0:ref> or simply personal preferences <ns0:ref type='bibr' target='#b17'>(Bauwens et al. 2020)</ns0:ref>. Taking these findings into account, we hypothesize that: H1: Faculty with more positive attitudes toward ICT for teaching and research have a higher level of awareness of video conferencing apps' security and privacy settings. H2: Faculty with a higher level of digital literacy have a higher level of awareness of video conferencing apps' security and privacy settings.</ns0:p></ns0:div>
<ns0:div><ns0:head>Privacy Concerns and Perceived Security Awareness</ns0:head><ns0:p>A lot of literature has been written on people's worries about how their private information is shared when they use information technology goods and services <ns0:ref type='bibr' target='#b69'>(Petronio and Child 2020)</ns0:ref>, as well as on how they see themselves having the security awareness to protect them <ns0:ref type='bibr' target='#b18'>(Bulgurcu, Cavusoglu, and Benbasat 2010;</ns0:ref><ns0:ref type='bibr' target='#b56'>Li et al. 2019)</ns0:ref>. These studies contend that privacy concerns, along with perceived security awareness, lead to more careful judgments about whether and how to utilize information technology-related goods or services, including video conferencing apps in the case of this study. Taking these findings into account, we hypothesize that H3: Faculty with a higher level of privacy concerns have a higher level of awareness of video conferencing apps' security and privacy settings. H4: Faculty with a higher level of perceived security awareness have a higher level of awareness of video conferencing apps' security and privacy settings.</ns0:p></ns0:div>
<ns0:div><ns0:head>Familiarity with the App</ns0:head><ns0:p>There are many different video conferencing apps used by different institutions amid the worldwide adoption of distance learning due to the COVID-19 pandemic. While most video conferencing apps will have the same major features, each app may have its own unique user interface and minor features. For that reason, we hypothesize that: H5: Faculty who are more familiar with the video conferencing app in use have a higher level of awareness of video conferencing apps' security and privacy settings.</ns0:p></ns0:div>
<ns0:div><ns0:p>Based on all the hypotheses above, the conceptual model of security and privacy awareness in video conferencing apps in this study is shown in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>.</ns0:p></ns0:div>
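As a rough illustration of how the hypothesized paths H1-H5 could be screened once composite scores are computed, the sketch below uses ordinary least squares on per-respondent composites; this is a simplified stand-in for the PLS-SEM analysis actually employed in the study, and the column names are assumptions about how the survey data might be organized:

```python
# Simplified screening of the H1-H5 paths with OLS on composite scores
# (illustrative only; the study itself used PLS-SEM).
import pandas as pd
import statsmodels.formula.api as smf

# df: one row per respondent; each predictor is the mean of its Likert items,
# awareness is the composite security/privacy awareness score.
def screen_hypotheses(df: pd.DataFrame):
    model = smf.ols(
        "awareness ~ ict_attitudes + digital_literacy + privacy_concerns"
        " + perceived_security + familiarity",
        data=df,
    ).fit()
    return model.summary()
```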
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Target Population</ns0:head><ns0:p>Saudi Arabia has 29 public universities and 14 private universities (Ministry of Education 2020). As of the middle of the spring 2020 semester, Saudi authorities suspended face-to-face teaching in all these universities as a result of the outspread of the COVID-19 pandemic. MOE requested universities to move all courses online by using the available e-learning solutions. Online learning will continue in the next three semesters. The target population for this study was faculty from any Saudi Arabian university who were teaching during these semesters. This includes teaching assistants, lecturers, assistant professors, associate professors, and full professors. In their latest report, which was published in 2020, the MOE reported that there were approximately 71,000 faculty teaching in Saudi Arabian universities (Ministry of Education 2021). Saudi Arabian universities normally provide their faculty with computer devices, assign them email addresses and require them to check their emails regularly. Therefore, it can be said that the entire target population of this study was theoretically accessible.</ns0:p></ns0:div>
<ns0:div><ns0:head>Measures</ns0:head><ns0:p>In this study, there are five latent exogenous variables (i.e., attitudes toward ICT for teaching and research, digital literacy, perceived privacy concerns, perceived security awareness, and familiarity with Blackboard Collaborate Ultra). As summarized in Table <ns0:ref type='table'>1</ns0:ref>, we developed, adopted, or adapted measurement items from other studies for all five of them. In addition, three observed variables (i.e., knowledge, attitudes, and behaviors) were combined to form a single composite score to answer the first research question and were treated as a latent endogenous variable (i.e., awareness of security and privacy settings on Blackboard Collaborate Ultra) to answer the second research question.</ns0:p></ns0:div>
<ns0:div><ns0:head>Attitudes toward ICT for teaching and research</ns0:head><ns0:p>We adapted the scales from <ns0:ref type='bibr' target='#b67'>(Ng 2012)</ns0:ref> to measure the attitudes toward utilizing ICT for teaching and research purposes among the faculty in this study. Specifically, we combined the teaching and research parts into one item to make it four items total, as opposed to eight items total. The scales were measured on a 5-point Likert scale.</ns0:p></ns0:div>
<ns0:div><ns0:head>Digital Literacy</ns0:head><ns0:p>We adopted the scales from <ns0:ref type='bibr' target='#b67'>(Ng 2012)</ns0:ref> to measure digital literacy among the faculty in this study. Specifically, we omitted three of the six original items for the technical dimension while keeping both items from each of the cognitive and social-emotional dimensions, yielding six items in total instead of the original nine. The scales were measured on a 5-point Likert scale.</ns0:p></ns0:div>
<ns0:div><ns0:head>Privacy Concerns</ns0:head><ns0:p>We adopted the scales from <ns0:ref type='bibr' target='#b58'>(Malhotra, Kim, and Agarwal 2004)</ns0:ref> to measure privacy concerns among the faculty in this study. Specifically, we selected only the two most relevant of the three to four original items for each of the control, awareness of practice, and collection dimensions, yielding six items in total instead of the original ten. The scales were measured on a 5-point Likert scale.</ns0:p></ns0:div>
<ns0:div><ns0:head>Perceived Security Awareness</ns0:head><ns0:p>We adopted the scales from <ns0:ref type='bibr' target='#b18'>(Bulgurcu, Cavusoglu, and Benbasat 2010)</ns0:ref> to measure perceived security awareness among the faculty in this study. We used all six items in the original scales without making any modifications or omissions to any of the items. The scales were measured on a 5-point Likert scale.</ns0:p></ns0:div>
<ns0:div><ns0:head>Familiarity with Blackboard Collaborate Ultra</ns0:head><ns0:p>We developed three items to measure the faculty's familiarity with Blackboard Collaborate Ultra. The first two items asked about their perceived familiarity and their usage frequency, measured on a 5-point Likert scale, while the last one asked whether or not they had read the terms of agreement for Blackboard Collaborate Ultra.</ns0:p></ns0:div>
<ns0:div><ns0:head>Awareness of Security and Privacy Settings on Blackboard Collaborate Ultra</ns0:head><ns0:p>We developed five questions about Blackboard Collaborate Ultra to test our participants' knowledge, attitudes, and behaviors, representing their awareness of security and privacy settings on Blackboard Collaborate Ultra. The first three questions consist of side-by-side pictures of the default and modified security and privacy settings in this app. We asked respondents to identify which one was the default (i.e., knowledge), which one they preferred (i.e., attitudes), and which one they mostly used (i.e., behaviors). For the last two questions, we presented two hypothetical scenarios involving security and privacy incidents and asked respondents which options were available and what course of action they would take in each scenario. For each answer, we gave a score of 0 for a wrong answer in the knowledge dimension or the worst option in the attitudes and behaviors dimensions, a score of 5 if respondents chose 'Don't know' or if their answer was partially correct, and a score of 10 if they picked the correct answer or the best option, security-wise. The complete questions are available in Appendix 1.</ns0:p></ns0:div>
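<ns0:p>To make the scoring rule above concrete, the following minimal Python sketch maps a single answer to the 0/5/10 scale. It is illustrative only: the answer labels and the example responses are hypothetical and do not reproduce the study's actual survey coding.</ns0:p>

```python
# Illustrative sketch of the per-answer scoring rule described above.
# The answer labels ("correct", "partial", "dont_know", "wrong") and the
# example responses are hypothetical, not the study's actual coding.

def score_answer(answer_quality: str) -> int:
    """Map one answer to the 0/5/10 rule used for each dimension."""
    if answer_quality == "correct":                   # correct answer or best option
        return 10
    if answer_quality in ("partial", "dont_know"):    # partially correct or 'Don't know'
        return 5
    return 0                                          # wrong answer or worst option

# One hypothetical participant's answers across the five questions, per dimension
knowledge = [score_answer(a) for a in ["correct", "wrong", "dont_know", "partial", "correct"]]
attitudes = [score_answer(a) for a in ["correct", "correct", "partial", "wrong", "dont_know"]]
behaviors = [score_answer(a) for a in ["wrong", "partial", "partial", "wrong", "correct"]]

print(sum(knowledge), sum(attitudes), sum(behaviors))  # raw per-dimension totals
```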
<ns0:div><ns0:head>Data collection</ns0:head><ns0:p>A research ethics application was submitted to the University of Jeddah ethics committee in preparation for the data collection phase, and an ethics approval number (UJ-REC-021) was granted for this study. We collected the data using a set of questionnaires delivered through Qualtrics, an online survey software program. We provided an explanation of the purpose of the study on the landing page of the survey. Participants were also informed of the approximate time required to complete the survey and were asked to give their consent to participate. They were notified that their participation was voluntary and that they could choose not to participate or withdraw their participation at any time. To avoid potential biases, no gifts or incentives of any kind were promised to the participants.</ns0:p><ns0:p>The questionnaire itself had three parts. The first part collected demographic information about participants, including their gender, age, education level, academic field, academic rank, teaching experience, and university name. The variables in this part were used as control variables in the analysis.</ns0:p><ns0:p>The second part contained five-point Likert-scale questions for participants to indicate their attitudes toward information and communications technology, their digital literacy, their perceived privacy concerns, their perceived security awareness, and their familiarity with Blackboard Collaborate Ultra. This part was used to measure the exogenous variables in the model.</ns0:p><ns0:p>The third part contained the experiment that measured the actual security and privacy awareness score of all participants, representing their awareness of security and privacy settings on Blackboard Collaborate Ultra, which is the endogenous variable in our model. It consisted of four scenarios. Each scenario had a) two screenshots captured from Blackboard Collaborate Ultra either during a running session or from the settings window, and b) several questions designed to assess the knowledge, attitudes, and behaviors of each participant regarding some important security and privacy settings in Blackboard Collaborate Ultra, as well as some potential security and privacy issues that may arise while using the app.</ns0:p><ns0:p>The first scenario tested the participants' awareness of the risks associated with granting guest access. The second scenario was related to enabling critical permissions such as media file sharing and whiteboard access. The third scenario concerned private chat rooms that could be abused to spread malicious and inappropriate content. The final scenario was designed to assess participants' awareness of what to do when malicious links are posted to the chat.</ns0:p><ns0:p>Before starting the data collection, we conducted a pilot study to confirm the content validity of the survey items, assess their difficulty, and obtain rough estimates of the time required to complete the survey. Validity was evaluated in terms of content and face validity. Content validity can help 'establish an instrument's credibility, accuracy, relevance, and breadth of knowledge regarding the domain'. Face validity, on the other hand, is used to examine the survey items in terms of 'ease of use, clarity, and readability' <ns0:ref type='bibr' target='#b20'>(Burton and Mazerolle 2011)</ns0:ref>. Twelve faculty from several Saudi Arabian universities were invited to participate in the pilot study. The pilot survey was delivered through the Qualtrics survey software as an online survey. Participants were provided with an empty text field to comment on the relevance and clarity of each item and were asked to state whether a revision was required; if so, they were encouraged to suggest one. All comments provided by the participants were analyzed, and some survey items were modified to reflect the suggested changes. Some items were reworded to improve their relevance or clarity, others were removed because they were irrelevant or repetitive, and a few items were added to measure missing aspects.</ns0:p><ns0:p>The survey was then administered to faculty members from the 43 public and private universities in Saudi Arabia. To reach a representative sample, invitations were sent by email and the survey link was shared on the most popular social media platforms in Saudi Arabia, i.e., WhatsApp, LinkedIn, Telegram, and Twitter. The aim was to reach all possible academic communities on these platforms. The responses were collected during the months of October and November 2021. Around 470 responses were received, of which 307 were found to be valid. The vast majority of the eliminated responses were incomplete, and in a few cases participants indicated their refusal to participate in the survey.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data analysis</ns0:head><ns0:p>To investigate the actual level of security and privacy awareness of video conferencing apps among Saudi Arabian faculty in this study, we calculated a composite score from the knowledge, attitudes, and behaviors variables with a 3:2:5 weighting, as suggested by Kruger and Kearney, based on the answers to the third part of the survey <ns0:ref type='bibr' target='#b51'>(Kruger and Kearney 2006)</ns0:ref>. We then normalized the score to a range of 0 to 100, which served as the final security and privacy awareness score of each participant. As per Kruger and Kearney's categorization, any score below 60 is categorized as 'Poor', a score of 80 or above is considered 'Good,' and anything in between is considered 'Average' <ns0:ref type='bibr' target='#b51'>(Kruger and Kearney 2006)</ns0:ref>.</ns0:p><ns0:p>To identify factors associated with the awareness of video conferencing apps' security and privacy settings, we used the Partial Least Squares Structural Equation Modeling (PLS-SEM) method with the help of the 'plssem' package in STATA 15 <ns0:ref type='bibr' target='#b86'>(Venturini and Mehmetoglu 2019)</ns0:ref>. PLS-SEM is a widely used structural equation modeling technique that allows for the estimation of complex relationships between latent variables in path models. It is particularly advantageous for the exploration and development of theory, as well as when prediction is the primary objective of a study, and it performs well with small sample sizes <ns0:ref type='bibr' target='#b36'>(Hair, Howard, and Nitzl 2020)</ns0:ref>. Prior to running the path analysis, we checked the standardized loading of each measurement item to identify any item that should be excluded from the model, and we verified that discriminant validity was met using the squared interfactor correlations and the average variance extracted (AVE). To deal with the endogeneity problem, we used the control variable approach by conducting several multigroup comparison analyses across all demographic factors <ns0:ref type='bibr' target='#b46'>(Hult et al. 2018)</ns0:ref>. The full code and dataset are available in our GitHub repository.</ns0:p></ns0:div>
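<ns0:p>A minimal Python sketch of the composite-score calculation described above follows. The per-dimension inputs in the example are hypothetical values assumed to be already scaled to 0-100; the 3:2:5 weighting and the Poor/Average/Good cut-offs follow Kruger and Kearney (2006) as applied in this study.</ns0:p>

```python
# Minimal sketch of the composite awareness score described above.
# Assumes each dimension score has already been scaled to 0-100;
# the example inputs are hypothetical.

def awareness_score(knowledge: float, attitudes: float, behaviors: float) -> float:
    """Combine the three dimension scores with the 3:2:5 (K:A:B) weights."""
    return 0.3 * knowledge + 0.2 * attitudes + 0.5 * behaviors

def categorize(score: float) -> str:
    """Kruger and Kearney's categorization of the 0-100 awareness score."""
    if score < 60:
        return "Poor"
    if score >= 80:
        return "Good"
    return "Average"

score = awareness_score(knowledge=70, attitudes=60, behaviors=25)
print(score, categorize(score))  # 45.5 Poor
```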
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Participant Demographic Information</ns0:head><ns0:p>Table <ns0:ref type='table'>2</ns0:ref> summarizes the demographic information of all participants in this study. As can be seen, the sample is quite balanced in terms of gender (50.81% male vs 49.19% female), academic fields (46.91% STEM vs 53.09% non-STEM), and position (57.33% tenured vs 42.67% non-tenured). Most participants are below 45 years of age (80.78%), hold a PhD degree (60.59%), and have at least ten years of experience in academia (63.84%).</ns0:p></ns0:div>
<ns0:div><ns0:head>Awareness of Video Conferencing Apps' Security and Privacy Settings</ns0:head><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref> illustrates the distribution of the composite score for the awareness of video conferencing apps' security and privacy settings. The overall mean score for all participants in this study (M = 44.27, SD = 16.06) falls into the 'Poor' level of awareness as per Kruger and Kearney's categorization <ns0:ref type='bibr' target='#b51'>(Kruger and Kearney 2006)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Factors Associated with Awareness of Security and Privacy Settings</ns0:head><ns0:p>Out of 26 measurement items for the five exogenous variables and three observed variables for the endogenous variable in our model, we found that seven, all belonging to the exogenous variables, had a loading score of less than 0.700 and thus had to be omitted from the model <ns0:ref type='bibr' target='#b37'>(Hair et al. 2012)</ns0:ref>. The summary statistics of all remaining measurement items along with their loading scores are summarized in Table <ns0:ref type='table' target='#tab_0'>3</ns0:ref>. It is also important to note that, as summarized in Table <ns0:ref type='table' target='#tab_1'>4</ns0:ref>, each latent variable's AVE score is greater than the squared interfactor correlation scores, confirming that the model's discriminant validity has been met <ns0:ref type='bibr' target='#b36'>(Hair, Howard, and Nitzl 2020)</ns0:ref>.</ns0:p><ns0:p>The results from PLS-SEM showed a relative goodness-of-fit (GoF) value of 0.99, which meets the rule of thumb <ns0:ref type='bibr' target='#b42'>(Henseler and Sarstedt 2013;</ns0:ref><ns0:ref type='bibr' target='#b87'>Vinzi, Trinchera, and Amato 2010)</ns0:ref>. Furthermore, the path analysis showed that all five relationships are statistically significant, as we hypothesized. However, one unexpected result is the negative coefficient of attitudes toward ICT for teaching and learning on the awareness of video conferencing apps' security and privacy settings. Our subsequent multigroup comparison analysis showed that the results are quite robust across all demographic factors, with two exceptions. First, the structural effect of privacy concerns on the awareness of video conferencing apps' security and privacy settings is significant only for those with a STEM background and not for others with a non-STEM background. Second, the negative structural effect of attitudes toward ICT for teaching and learning on the awareness of video conferencing apps' security and privacy settings is significant only for those having more than ten years of experience in academia and not for others having less.</ns0:p><ns0:p>Based on the findings, Figure <ns0:ref type='figure'>3</ns0:ref> illustrates the final model of the awareness of video conferencing apps' security and privacy settings among Saudi Arabian faculty in this study, while Table <ns0:ref type='table'>5</ns0:ref> summarizes the model's hypothesis test results.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>This study investigates the actual security and privacy awareness among Saudi Arabian faculty when using video conferencing apps, particularly Blackboard Collaborate Ultra, for teaching during the COVID-19 pandemic. One of the study's key findings is that, in general, faculty in Saudi Arabia still have poor security and privacy awareness of video conferencing apps. This may be understandable considering that most of them only began using this new technology on a daily basis because of the pandemic (Ali Alammary, Alshaikh, and Alhogail 2021). As evidenced by earlier studies, the use of Blackboard in Saudi Arabia was rather low prior to the pandemic (Al Meajel and Sharadgah 2018; Tawalbeh 2017). Furthermore, studies show that general awareness of cybersecurity practices is still not at the desired level (Ali Alammary, Alshaikh, and Alhogail 2021). 
Therefore, more effort is needed, not merely to raise awareness but to build a cybersecurity culture, through several strategies <ns0:ref type='bibr' target='#b11'>(Alshaikh 2020</ns0:ref>). An example of a proposed strategy to build a cybersecurity culture is establishing support groups called 'cyber champions' to raise academic privacy awareness and influence faculty members' behaviors toward adopting cybersecurity practices. Several studies have found that using support groups and peers to change cybersecurity behavior is an effective strategy <ns0:ref type='bibr' target='#b13'>(Alshaikh and Adamson 2021;</ns0:ref><ns0:ref type='bibr' target='#b27'>Cram et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b32'>Guo et al. 2011)</ns0:ref>. This approach could be used by universities to raise awareness among their academic community about the importance of using video conferencing apps in a secure and private manner.</ns0:p><ns0:p>Among the five latent exogenous variables examined in this study, perceived security awareness has the strongest positive effect on awareness of security and privacy settings in video conferencing apps. This particular finding is unsurprising. While this is not always the case <ns0:ref type='bibr' target='#b80'>(Tariq, Brynielsson, and Artman 2014)</ns0:ref>, a higher perceived level of security awareness typically results in improved security and privacy practices <ns0:ref type='bibr' target='#b55'>(Lebek et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b71'>Pratama and Firmansyah 2021;</ns0:ref><ns0:ref type='bibr' target='#b47'>Hwang et al. 2021)</ns0:ref>. Therefore, it is only natural for faculty who have a higher perceived level of security awareness to be more willing to invest time in learning the security and privacy settings of any apps they use, including video conferencing apps.</ns0:p><ns0:p>The second largest positive effect on awareness of video conferencing apps' security and privacy settings was found to be familiarity with the video conferencing app itself. As we predicted when initially developing a new construct for this variable, familiarity with the app in use is critical, given the abundance of available options, each with its own set of features and settings. As a result, those less familiar with the video conferencing app in use may be unaware of all the security and privacy settings available to them. This finding is also consistent with findings from other studies showing that familiarity with concepts, technical terms, or security-related systems can help increase people's security awareness <ns0:ref type='bibr' target='#b74'>(Schmidt et al. 2008;</ns0:ref><ns0:ref type='bibr'>Zwilling et al. 2022)</ns0:ref>. Fortunately, this is a relatively straightforward issue to address, for example with adequate technical support to educate faculty on the subject.</ns0:p><ns0:p>Digital literacy was found to have a moderate positive effect on the awareness of security and privacy settings on video conferencing apps. A simple explanation is that individuals with higher digital literacy possess the necessary skills to independently navigate all security and privacy settings. 
As such, individuals with a higher level of digital literacy tend to have a heightened security and privacy awareness, which is also consistent with the findings from several previous studies <ns0:ref type='bibr' target='#b73'>(Sasvári, Nemeslaki, and Rauch 2015;</ns0:ref><ns0:ref type='bibr' target='#b66'>Nemeslaki and Sasvari 2015;</ns0:ref><ns0:ref type='bibr' target='#b28'>Cranfield et al. 2020)</ns0:ref>.</ns0:p><ns0:p>Privacy concerns were found to be the fourth exogenous variable having a positive effect on the awareness of security and privacy settings on video conferencing apps. This finding is consistent with the extensive discussion in the literature regarding the relationship between privacy concerns and security awareness in general <ns0:ref type='bibr' target='#b25'>(Chung et al. 2021;</ns0:ref><ns0:ref type='bibr' target='#b77'>Siponen 2001)</ns0:ref>. However, unlike the previous three exogenous variables, this one has a significant positive effect only on participants with a STEM background. In other words, the effect is largely absent among faculty members with a background in the social sciences, arts, humanities, or health and medical sciences. This finding could be explained by STEM faculty members' greater familiarity with ICT in general, including its benefits and risks. As another study pointed out, STEM faculty were more aware of cybersecurity threats such as phishing than their non-STEM colleagues <ns0:ref type='bibr' target='#b29'>(Diaz, Sherman, and Joshi 2020)</ns0:ref>. As a result, they have a higher level of privacy concerns than their non-STEM counterparts and are thus more aware of the security and privacy settings of any applications they use, including video conferencing apps.</ns0:p><ns0:p>Unlike the others, attitudes toward ICT for teaching and research were found to have a detrimental effect on awareness of security and privacy settings on video conferencing apps. While this finding is quite surprising, the fact that it is significant only among participants with more than ten years of teaching experience may reveal an intriguing story. Senior faculty appear to be accustomed to whatever ICT solutions they used prior to the COVID-19 pandemic, which almost certainly did not include video conferencing apps. Having to learn something new in order to do something they are already very familiar with may have become a barrier that prevents them from fully utilizing this new technology, particularly in terms of its security and privacy settings. After all, reluctance to change, whether among individuals in general <ns0:ref type='bibr' target='#b16'>(Audia and Brion 2007)</ns0:ref> or among faculty in particular <ns0:ref type='bibr' target='#b63'>(McCrickerd 2012;</ns0:ref><ns0:ref type='bibr' target='#b78'>Tallvid 2016)</ns0:ref>, is not something new. The good news is that, while statistically significant, the negative effect of this variable in this model is the smallest of all the exogenous variables. As such, concentrating on the other variables that have a positive effect may be sufficient to offset it.</ns0:p><ns0:p>Apart from academic field and teaching experience, no statistically significant difference in any other demographic factor was observed, including age, gender, educational attainment, and tenure track status. On the one hand, this implies that the endogeneity issue is largely addressed in this model, which contributes to the robustness of the findings. On the other hand, the relatively similar scores across demographic groups can be seen as good news. It indicates equality in terms of security and privacy awareness among Saudi Arabian faculty, as is also the case in some countries <ns0:ref type='bibr' target='#b34'>(Hadlington, Binder, and Stanulewicz 2020;</ns0:ref><ns0:ref type='bibr' target='#b72'>Pratama, Firmansyah, and Rahma 2022)</ns0:ref>, despite the fact that some other countries continue to demonstrate inequality <ns0:ref type='bibr' target='#b61'>(McCormac et al. 2017;</ns0:ref><ns0:ref type='bibr'>Zwilling et al. 2022)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In this study, we used the Knowledge-Attitude-Behavior (KAB) model to determine the actual security and privacy awareness of faculty in Saudi Arabia regarding video conferencing apps, in particular Blackboard Collaborate Ultra, which is the most commonly used one in the country. We discovered that the average score falls into the 'Poor' category (mean = 44.27, SD = 16.06), which is not surprising given that many faculty only began using this technology on a daily basis as a result of the pandemic. Furthermore, based on the results of the subsequent analysis using the PLS-SEM method, we found that all five latent variables in our model have significant relationships with the actual security and privacy awareness of video conferencing apps among Saudi Arabian faculty. More specifically, perceived security awareness has the strongest effect of them all, followed by familiarity with the video conferencing app platform and digital literacy. Meanwhile, perceived privacy concerns are significant only for those with a STEM background, and, surprisingly, attitudes toward the use of ICT for teaching and learning are negatively related to the actual security and privacy awareness among those having more than ten years of experience in academia.</ns0:p><ns0:p>This study lays the groundwork for future research and interventions aimed at increasing user awareness of security and privacy concerns when using video conferencing apps for teaching and research purposes. Given the rapid adoption of video conferencing apps as a result of distance learning in the face of the COVID-19 pandemic, addressing this issue is becoming increasingly critical. Finally, we suggest that similar studies be conducted in other parts of the world to account for cultural differences that might make people less aware of cybersecurity and privacy, especially when it comes to video conferencing apps.</ns0:p></ns0:div>
<ns0:div><ns0:head>Measurement item wording</ns0:head><ns0:p>Attitudes toward ICT for Teaching & Research: att4: There is a lot of potential in the use of mobile technologies (e.g., smartphones, tablets) for teaching and research <ns0:ref type='bibr' target='#b67'>(Ng, 2012)</ns0:ref>.</ns0:p><ns0:p>Digital Literacy: dl1: I know how to solve my own technical problems. dl2: I can learn new technologies easily. dl3: I know about a lot of different technologies. dl4: I am confident with my search and evaluate skills in regard to obtaining information from the Web. dl5: I am familiar with issues related to web-based activities e.g., cyber safety, search issues, plagiarism. dl6: ICT enables me to collaborate better with their peers on project work and other learning activities. dl7: I frequently obtain help with my university work from my friends over the Internet e.g., through email, social media, or Videoconference <ns0:ref type='bibr' target='#b67'>(Ng, 2012)</ns0:ref>.</ns0:p><ns0:p>Privacy Concerns: pc1: User online privacy is really a matter of users' right to exercise control and autonomy over decisions about how their information is collected, used, and shared. pc2: I believe that online privacy is invaded when control is lost or unwillingly reduced as a result of a marketing transaction. pc3: Companies seeking information online should disclose the way the data are collected, processed, and used. pc4: It is very important to me that I am aware and knowledgeable about how my personal information will be used. pc5: When online companies ask me for personal information, I sometimes think twice before providing it. pc6: I'm concerned that online companies are collecting too much personal information about me <ns0:ref type='bibr' target='#b58'>(Malhotra et al., 2004)</ns0:ref>.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Final model</ns0:head><ns0:label /><ns0:figDesc>Final model of video conferencing security and privacy settings awareness in this study. Attitudes toward ICT measurement items: att1: ICT for teaching or conducting research; att2: ICT makes teaching or conducting research more interesting; att3: I am more motivated to teach or to conduct research with ICT.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,199.12,525.00,345.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Summary statistics of all measurement items</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Variables</ns0:cell><ns0:cell>Measurement Items</ns0:cell><ns0:cell>Mean</ns0:cell><ns0:cell cols='3'>SD Loading Cronbach</ns0:cell><ns0:cell cols='2'>DG rho_A</ns0:cell></ns0:row><ns0:row><ns0:cell>Attitudes toward</ns0:cell><ns0:cell>att1</ns0:cell><ns0:cell>4.42</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell>0.878</ns0:cell><ns0:cell cols='2'>0.884 0.928</ns0:cell><ns0:cell>0.891</ns0:cell></ns0:row><ns0:row><ns0:cell>Using ICT for</ns0:cell><ns0:cell>att2</ns0:cell><ns0:cell>4.26</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.884</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Teaching &</ns0:cell><ns0:cell>att3</ns0:cell><ns0:cell>4.19</ns0:cell><ns0:cell>0.90</ns0:cell><ns0:cell>0.886</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Research</ns0:cell><ns0:cell>att4 *</ns0:cell><ns0:cell>4.18</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.669</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>dl1 *</ns0:cell><ns0:cell>3.85</ns0:cell><ns0:cell>0.92</ns0:cell><ns0:cell>0.672</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>dl2</ns0:cell><ns0:cell>4.21</ns0:cell><ns0:cell>0.77</ns0:cell><ns0:cell>0.783</ns0:cell><ns0:cell cols='2'>0.787 0.875</ns0:cell><ns0:cell>0.796</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>dl3</ns0:cell><ns0:cell>3.90</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.809</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Digital Literacy</ns0:cell><ns0:cell>dl4</ns0:cell><ns0:cell>4.22</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell>0.757</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>dl5</ns0:cell><ns0:cell>3.98</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.771</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>dl6</ns0:cell><ns0:cell>4.26</ns0:cell><ns0:cell>0.79</ns0:cell><ns0:cell>0.765</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>dl7 *</ns0:cell><ns0:cell>3.99</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.438</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>pc1*</ns0:cell><ns0:cell>4.21</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.680</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>pc2</ns0:cell><ns0:cell>4.30</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.728</ns0:cell><ns0:cell cols='2'>0.772 0.866</ns0:cell><ns0:cell>0.790</ns0:cell></ns0:row><ns0:row><ns0:cell>Privacy Concerns</ns0:cell><ns0:cell>pc3 pc4</ns0:cell><ns0:cell>4.53 4.66</ns0:cell><ns0:cell>0.73 0.63</ns0:cell><ns0:cell>0.713 0.767</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>pc5 *</ns0:cell><ns0:cell>4.45</ns0:cell><ns0:cell>0.81</ns0:cell><ns0:cell>0.690</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>pc6 *</ns0:cell><ns0:cell>4.28</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.663</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell 
/><ns0:cell>psa1</ns0:cell><ns0:cell>3.88</ns0:cell><ns0:cell>0.99</ns0:cell><ns0:cell>0.828</ns0:cell><ns0:cell cols='2'>0.947 0.966</ns0:cell><ns0:cell>0.948</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>psa2</ns0:cell><ns0:cell>3.76</ns0:cell><ns0:cell>1.02</ns0:cell><ns0:cell>0.844</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Perceived Security</ns0:cell><ns0:cell>psa3</ns0:cell><ns0:cell>3.97</ns0:cell><ns0:cell>0.92</ns0:cell><ns0:cell>0.804</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Awareness</ns0:cell><ns0:cell>psa4</ns0:cell><ns0:cell>3.48</ns0:cell><ns0:cell>1.07</ns0:cell><ns0:cell>0.891</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>psa5</ns0:cell><ns0:cell>3.47</ns0:cell><ns0:cell>1.07</ns0:cell><ns0:cell>0.887</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>psa6</ns0:cell><ns0:cell>3.61</ns0:cell><ns0:cell>1.06</ns0:cell><ns0:cell>0.848</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Familiarity with</ns0:cell><ns0:cell>fam1</ns0:cell><ns0:cell>3.87</ns0:cell><ns0:cell>1.18</ns0:cell><ns0:cell>0.857</ns0:cell><ns0:cell cols='2'>0.741 0.884</ns0:cell><ns0:cell>0.764</ns0:cell></ns0:row><ns0:row><ns0:cell>Blackboard</ns0:cell><ns0:cell>fam2</ns0:cell><ns0:cell>4.25</ns0:cell><ns0:cell>1.19</ns0:cell><ns0:cell>0.791</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Collaborate Ultra</ns0:cell><ns0:cell>fam3 *</ns0:cell><ns0:cell>2.32</ns0:cell><ns0:cell>1.51</ns0:cell><ns0:cell>0.540</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Awareness of</ns0:cell><ns0:cell>k</ns0:cell><ns0:cell>3.74</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.936</ns0:cell><ns0:cell cols='2'>0.913 0.945</ns0:cell><ns0:cell>0.917</ns0:cell></ns0:row><ns0:row><ns0:cell>Security and</ns0:cell><ns0:cell>a</ns0:cell><ns0:cell>3.82</ns0:cell><ns0:cell>0.77</ns0:cell><ns0:cell>0.943</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Privacy Settings</ns0:cell><ns0:cell>b</ns0:cell><ns0:cell>2.79</ns0:cell><ns0:cell>0.69</ns0:cell><ns0:cell>0.890</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
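<ns0:p>The item-retention rule reported above (dropping measurement items whose standardized loading falls below 0.700) can be expressed as a simple filter. The following Python sketch is illustrative only and uses a subset of the loading values reported in Table 3.</ns0:p>

```python
# Illustrative sketch of the loading-based item retention rule: items with a
# standardized loading below 0.700 are omitted before estimating the model.
# The loadings below are a subset of the values reported in Table 3.
loadings = {
    "att1": 0.878, "att2": 0.884, "att3": 0.886, "att4": 0.669,
    "dl1": 0.672, "dl2": 0.783, "dl7": 0.438,
    "fam1": 0.857, "fam2": 0.791, "fam3": 0.540,
}

THRESHOLD = 0.700
retained = {item: value for item, value in loadings.items() if value >= THRESHOLD}
dropped = sorted(set(loadings) - set(retained))

print("retained:", sorted(retained))
print("dropped:", dropped)  # att4, dl1, dl7, and fam3 fall below the 0.700 cut-off
```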
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Discriminant validityNote: diagonal elements are average variance extracted (AVE), off-diagonal elements are squared interfactor correlation</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Variable</ns0:cell><ns0:cell>Att</ns0:cell><ns0:cell>DL</ns0:cell><ns0:cell>PC</ns0:cell><ns0:cell>PSA</ns0:cell><ns0:cell>Fam</ns0:cell><ns0:cell>Awareness</ns0:cell></ns0:row><ns0:row><ns0:cell>Att</ns0:cell><ns0:cell>0.810</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>DL</ns0:cell><ns0:cell>0.219</ns0:cell><ns0:cell>0.700</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>PC</ns0:cell><ns0:cell>0.077</ns0:cell><ns0:cell>0.077</ns0:cell><ns0:cell>0.683</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>PSA</ns0:cell><ns0:cell>0.065</ns0:cell><ns0:cell>0.218</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0.905</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Fam</ns0:cell><ns0:cell>0.012</ns0:cell><ns0:cell>0.069</ns0:cell><ns0:cell>0.008</ns0:cell><ns0:cell>0.034</ns0:cell><ns0:cell>0.792</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Awareness</ns0:cell><ns0:cell>0.072</ns0:cell><ns0:cell>0.326</ns0:cell><ns0:cell>0.058</ns0:cell><ns0:cell>0.750</ns0:cell><ns0:cell>0.280</ns0:cell><ns0:cell>0.852</ns0:cell></ns0:row></ns0:table></ns0:figure>
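<ns0:p>The discriminant-validity criterion summarized in Table 4 (each construct's AVE on the diagonal must exceed its squared correlation with every other construct) can be checked mechanically. The following Python sketch illustrates the rule on a three-construct excerpt of Table 4 (Att, DL, and Awareness); it is a simplified illustration rather than the full validation procedure.</ns0:p>

```python
import numpy as np

# Sketch of the discriminant-validity check behind Table 4: diagonal entries
# are AVE values, off-diagonal entries are squared interfactor correlations.
# This 3x3 excerpt uses the Att, DL, and Awareness values from Table 4.
labels = ["Att", "DL", "Awareness"]
m = np.array([
    [0.810, 0.219, 0.072],
    [0.219, 0.700, 0.326],
    [0.072, 0.326, 0.852],
])

for i, name in enumerate(labels):
    shared = np.delete(m[i], i)        # squared correlations with the other constructs
    ok = m[i, i] > shared.max()        # AVE must exceed all of them
    print(f"{name}: AVE={m[i, i]:.3f}, max shared variance={shared.max():.3f}, ok={ok}")
```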
</ns0:body>
" | "April 20, 2022
Dear Editor,
We would like to express our gratitude to you and all reviewers for their constructive comments on the manuscript, which we have revised to address their concerns.
Specifically, we provided our response to your and their comments and suggestions directly beneath them in blue.
We believe that this manuscript is now ready for publication in PeerJ Computer Science.
Dr. Ahmad R. Pratama
Assistant Professor of Informatics
On behalf of all authors
Editor's Decision
Please address the reviewer's comments, in particular reviewer 3. You need to justify the proposal and prove its validity of the proposal. Further, the logical flow of the presentation and articulation of the ideas should be coherent. Please note that revision of the manuscript does not guarantee the acceptance until the reviewers agree to the corrections carried out. So please do the corrections carefully and submit the list of corrections carried out against each of the suggestion.
We have addressed each of the reviewer comments with a strong emphasis on reviewer 3’s comments as suggested. We repeated the analysis and found out a mistake in the previous analysis, to which we were deeply grateful to reviewer 3 for his comments and suggestions. We have made significant revisions throughout the paper, as evidenced by the revised manuscript.
Reviewer 1 (Anonymous)
Basic reporting
This study investigated security and privacy awareness among academics in Saudi Arabia
relating to the use of video conferencing apps.
Authors theoretical claim seems to be acceptable but there is no novelty in the work. The authors try to replicate the same story throughout the paper.
As we stated on page 3 line 81, this is the first study to use a survey experiment research design to conduct a practical assessment of Saudi Arabian faculty members' security and privacy awareness regarding the use of Blackboard Collaborate Ultra as the primary video conferencing app during the COVID-19 pandemic.
Experimental design
A survey experiment research design is employed in this paper
-
Validity of the findings
There are few experimental findings in the paper and it is not sufficient. Need more experimental analysis on the data.
We repeated and expanded on the analysis in response to reviewer 3's comments and suggestions, in which we discovered several new findings in the revised manuscript.
Reviewer 2 (Durai Raj Vincent)
Basic reporting
No comment
-
Experimental design
Since video conference apps are common among all kinds of professions and why this survey is targeted at academicians?
While video conferencing apps are widely used in a variety of professions, their use by faculty members, particularly for teaching purposes, is quite different. Combining them in a single analysis requires us to account for confounding variables, which also necessitates the use of additional samples. We believe that it would be an interesting follow-up study.
What is the logic behind the calculation of the security awareness score?
As stated on page 4 line 150, we used Kruger and Kearney’s KAB model to develop measurement items that we then used to calculate the security and privacy awareness score in this study.
Is this study proposes any particular application as a safer one out of this study?
In this study, we were not looking for which particular video conferencing app is safer than the others. Instead, we were specifically focusing on Blackboard Collaborate Ultra, which is the most commonly used video conferencing app in Saudi Arabia as we discussed on page 2 line 58.
Validity of the findings
The considered sample size should have been higher to come to one conclusion.
Our sample size is more than adequate for the PLS-SEM method as we discussed on page 8 line 350.
Possibility of having high bias due to the non respondents and sample size
age group wise findings should have been included
As stated on page 8 line 354, we have addressed any potential bias issue by running multigroup comparison analysis by using demographic factors (i.e., age, gender, teaching experience, tenure status, and academic field) as control variables in our PLS-SEM analysis.
What was the duration of data collection? Collection through WhatsApp, LinkedIn, Telegram, and Twitter is a viable way to reach academicians?
As stated on page 8 line 329, the responses were collected during the months of October and November 2021. Also, as stated on page 6 line 215, Saudi Arabian universities normally provide their faculty with computer devices, assign them email addresses and require them to check their emails regularly. We sent out the survey primarily through emails in addition to social media.
What is being considered as Non-STEM category as Math and Science also considered in STEM category
This information is provided in Table 2. The Non-STEM category includes Arts, Humanities, and Social Sciences, as well as Medical and Health Sciences.
Additional comments
Results based discussion should be much more elaborated based on the findings
The metrics used should be properly explained
Sample questionnaire can be included
We revised the discussion section to provide additional context for the findings. The metrics are explained in greater detail in the 'Measures' subsection on page 6 line 219. The questionnaire is available in the appendix.
Reviewer 3 (Martin Thomas Falk)
Basic reporting
This study examines the factors that influence academics' actual awareness of security and privacy in relation to videoconferencing apps. The data are based on a recent survey and PLS-SEM is used. This is a timeless study and SEM is appropriate. Some results are not plausible. For example, the results that perceived security awareness is negatively related to actual security and privacy awareness are a bit strange. The document needs improvement in all aspects. Below there are a number of comments that must be addressed
We are extremely appreciative of your comments and suggestions, which assisted us in identifying the errors in the previous manuscript, including the manner in which we conducted the PLS-SEM analysis.
Main comments
Introduction: Please try to improve the description of the purpose of the research. Please add one sentence on the data and methods used (PLS-SEM). Please also clearly state the contribution of the research. Is it the first study on privacy and security issues of video conferencing apps?
Conceptual background: Section 2 “e-learning in Saudi Arabia” should be improved. The paragraph on internet speed does not fit. Section: Video conferencing in online learning. Please cite more recent references. Video conferencing apps greatly improved. References from 1976 are not helpful here.
We rewrote and restructured some parts of the paper to improve the description of the research objectives, the contribution of this research, and the PLS-SEM method that we used. To our knowledge, this is the first study to use a survey experiment research design to conduct a practical assessment of Saudi Arabian faculty members' security and privacy awareness regarding the use of Blackboard Collaborate Ultra as the primary video conferencing app during the COVID-19 pandemic. We replaced the out-of-date reference with some more recent ones.
The theoretical background must be improved: Privacy issues cannot be explained by privacy concerns (see H3). There is a correlation by definition. The theoretical framework is inadequate. Think clearly about what is the dependent variable and what is the independent variable. From my point of view there are too many hypotheses.
We made some revisions on the theoretical framework. Specifically, we revised the name of our dependent/endogenous variable to “awareness score of video conferencing apps’ security and privacy settings”. That is to say, general privacy concerns and perceived security awareness are both plausible explanatory factors to it. We eliminated all hypotheses about demographic variables and instead used them in the multigroup comparison to address endogeneity issues.
Experimental design
PLS-SEM cannot estimate causal relationships. What you estimate are correlations. You need to be more careful in the interpretation of results (e.g., not writing ”x impacts y ”, or ”has effect on ”) and generally create awareness that a correlation does not necessarily indicate causation. Apart from acknowledging that endogeneity might be present, there are also ways to deal with endogeneity, such as instrumental variables, control function approaches, propensity score matching, experiments and natural experiments. Hair et al. (2020) note that four conditions must be met to assume causality: (1) Time sequence —the cause must occur prior to the effect; (2) Covariance —changes in the hypothesized independent variable are associated with changes in the dependent variable; (3) Non-spurious associations — the relationship is not due to other variables that may affect cause and effect; and (4) Theoretical support — a logical explanation for the relationship.
Reference
Hair, J. F., Jr , Howard, M. C. , & Nitzl, C. (2020a). Assessing measurement model quality in PLS-SEM using confirmatory composite analysis. Journal of Business Research, 109 , 101–110 .
Pesämaa, O., Zwikael, O., HairJr, J., & Huemann, M. (2021). Publishing quantitative papers with rigor and transparency. International Journal of Project Management, 39(3), 217-222
We believe that following the revisions made in accordance with the suggested references, the four conditions for assuming causality have been met. Regardless, we altered the way our hypotheses were written and the way we interpreted the results.
Validity of the findings
Method: PLS-SEM is used. Yes it is correct that PLS-SEM is estimated by least squares. Please add information on the stata command. Robustness checks are missing. Please report some validity tests (Degree of correlation between constructs, Path significance -Assesses how strongly two constructs relate to each other, Coefficient of determination - Measure of the predictive accuracy of the structural model, Effect size- Measure of the impact of the exogenous construct on the endogenous one, Predictive relevance - Measure of a model's predictive power and Robustness checks).
We added the STATA command information as specified on page 8 line 346, conducted robustness checks as specified on page 9 line 386, and included all necessary results, including those in Tables 3-5 and Figure 3.
The article needs to be thoroughly edited and proofread by a qualified language editor. There is an illogical flow of ideas and a lack of coherence. Sections are disjointed without any link between them.
We revised the paper extensively to address coherence issues and used a premium language editor to ensure it was free of grammatical errors and maintained a consistent, professional writing style.
Additional comments
no comment.
" | Here is a paper. Please give your review comments after reading it. |
697 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background: COVID-19 has forced many schools and universities worldwide, including Saudi Arabia, to move from traditional face-to-face learning to online learning. Most online learning activities involve the use of video conferencing apps to facilitate synchronous learning sessions. Some faculty were not accustomed to using video conferencing apps, but they had no other choice than to jump on board regardless of their readiness, one of which had something to do with security and privacy awareness. Several threats and vulnerabilities are haunting video conferencing app users. Most require human factors to succeed. Neglecting the security measures of video conferencing apps is a big part of how they will become real.</ns0:p></ns0:div>
<ns0:div><ns0:head>Methods:</ns0:head><ns0:p>We used a survey experiment to determine the level of security and privacy awareness among Saudi Arabian faculty regarding the use of video conferencing apps, as well as the factors associated with it. We analyzed the data using the Knowledge-Attitudes-Behaviors (KAB) model and Partial Least Squares Structural Equation Modeling (PLS-SEM) method on 307 faculty members from 43 Saudi Arabian universities.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results:</ns0:head><ns0:p>We found that the average awareness score of video conferencing apps' security and privacy settings falls into the 'Poor' category. Further analysis showed that perceived security, familiarity with the app, and digital literacy of faculty members are significantly associated with higher awareness. Privacy concerns are significantly associated with higher awareness only for STEM faculty, while attitudes toward ICT for teaching and learning is negatively associated with such awareness among faculty with more than 10 years of experience. This study lays the groundwork for future research and user education on video conferencing app security and privacy.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>During the COVID-19 pandemic, video conferencing apps have been gaining popularity as they are being used as the most viable means to facilitate business processes during work-from-home or learning-from-home approaches. There is an intriguing bidirectional relationship between ICT adoption and education indicators as revealed by a cross-country analysis of research <ns0:ref type='bibr' target='#b72'>(Pratama 2017)</ns0:ref>, and video conferencing apps are no exception. In many educational institutions, they have become the most widely used tool for supporting online learning around the world, including in Saudi Arabia <ns0:ref type='bibr'>(Alshehri et al. 2020)</ns0:ref>. This is due to their unique features, which make them ideal for use in teaching and learning <ns0:ref type='bibr' target='#b4'>(Alammary, Carbone, and Sheard 2016)</ns0:ref>. They enable instructors to set up online synchronous classes that can be recorded and accessed later, either for those who want to revisit the class or for those who want to catch up on any class that they missed. Students can attend these online classes from anywhere by using personal computers, laptops, or even cell phones <ns0:ref type='bibr' target='#b22'>(Camilleri and Camilleri 2021)</ns0:ref>.</ns0:p><ns0:p>Numerous video conferencing applications were available even prior to the COVID-19 pandemic. While Zoom emerged as the clear winner in the COVID-19 pandemic-induced surge in video conferencing app use, many other video conferencing apps, including Google Meet, Microsoft Teams, and Blackboard Collaborate Ultra, also saw an increase in downloads and usage <ns0:ref type='bibr' target='#b86'>(Trueman 2020)</ns0:ref>. Blackboard Collaborate Ultra, in particular, is the one used by the vast majority of universities in Saudi Arabia because the Ministry of Education (MOE) has designated the Blackboard platform as the official e-learning platform (Iffat Rahmatullah 2021). While the majority of Saudi Arabian faculty were unfamiliar with Blackboard Collaborate Ultra prior to the COVID-19 pandemic, they had no choice but to jump on board regardless of their readiness during the pandemic (Ali <ns0:ref type='bibr' target='#b1'>Alammary, Alshaikh, and Alhogail 2021)</ns0:ref>.</ns0:p><ns0:p>On the other hand, the shift from traditional face-to-face to online learning that occurred during the pandemic has also raised many concerns related to cybersecurity and the protection of individual and organizational information resources <ns0:ref type='bibr' target='#b5'>(Almaiah, Al-Khasawneh, and Althunibat 2020)</ns0:ref>. Several cybersecurity and privacy threats and vulnerabilities are plaguing video conferencing app users, including the exposure of user data, unwanted and disruptive intrusions, the propagation of malware, and the hijacking of host machines through remote control tools <ns0:ref type='bibr' target='#b50'>(Joshi and Singh 2017)</ns0:ref>. According to recent security reports, the number of cybersecurity attacks targeting many organizations, including universities, has significantly increased <ns0:ref type='bibr' target='#b41'>(Hakak et al. 2020;</ns0:ref><ns0:ref type='bibr' target='#b54'>Lallie et al. 2021)</ns0:ref>. Many cybersecurity incidents are caused by employees who do not follow security rules because of their low level of security awareness in the first place <ns0:ref type='bibr' target='#b45'>(Hina and Dominic 2016)</ns0:ref>. 
Faculty members become the primary actors in higher education due to their nature as instructors who often must also administer and host online classes on their own. As such, assessing their awareness of security and privacy settings when using video conferencing apps becomes critical to ensuring that the online learning experience is as secure and private as possible for all parties involved. This is the first study to use a nationwide survey to conduct a practical assessment of Saudi Arabian faculty members' security and privacy awareness regarding the use of Blackboard Collaborate Ultra as the primary video conferencing app during the COVID-19 pandemic. The study's specific objectives are to 1) comprehend and investigate the factors associated with the Saudi Arabian faculty's level of security and privacy awareness regarding the use of video conferencing apps, particularly Blackboard Collaborate Ultra, which is the most widely used in this country for teaching and research purposes during and possibly beyond the COVID-19 pandemic, and 2) assist universities, particularly in Saudi Arabia, in improving their security policies and practices.</ns0:p></ns0:div>
<ns0:div><ns0:head>Online learning in Saudi Arabian universities during the COVID-19 pandemic</ns0:head><ns0:p>The Saudi National E-learning Center (NELC) was founded in 2005. NELC is responsible for establishing governance frameworks and regulations for e-learning and online learning in Saudi Arabia. NELC plays a significant role in enhancing the online learning experience in universities as well as supporting and promoting effective practices in online learning (A. Y. Alqahtani and Rajkhan 2020). Furthermore, the center develops policies and procedures regarding the provisioning and management of online learning programs. The policies specify the technologies that universities have to implement and the level of support they should provide to their faculty and students. Policies also include standards and practices to design accessible online learning environments (National eLearning Center 2021).</ns0:p><ns0:p>NELC has also provided guidelines to universities that specify e-learning infrastructure including hardware (e.g., servers, storage, and networking), e-learning solutions (e.g., learning management systems and video conferencing apps), establishing dedicated deanships to manage e-learning matters, providing training and awareness programs, and other e-learning and online learning initiatives <ns0:ref type='bibr' target='#b61'>(Malik et al. 2018)</ns0:ref>. As a result of the efforts of NELC and the investment in e-learning, Saudi Arabian universities have an adequate online learning infrastructure <ns0:ref type='bibr' target='#b84'>(Thorpe and Alsuwayed 2019)</ns0:ref>. Other researchers found that the IT infrastructure in Saudi Arabian universities has successfully handled the transformation from face-to-face to online learning during the COVID-19 pandemic (A. Y. Alqahtani and Rajkhan 2020). However, the university's level of maturity played a considerable role in its ability to overcome challenges and utilize elearning solutions before the pandemic.</ns0:p><ns0:p>Moreover, research indicates that faculty and students in Saudi Arabia have considerably favorable attitudes toward e-learning <ns0:ref type='bibr' target='#b46'>(Hoq 2020)</ns0:ref>. Others found that Saudi Arabian students preferred e-learning due to its flexibility and better communication with their teachers and peers. However, the same study also showed that students perceived online learning to be less beneficial than traditional face-to-face instructions (El-Sayed Ebaid 2020). While students' attitudes toward e-learning are influenced partly by their previous experience and readiness for online learning (N. <ns0:ref type='bibr' target='#b8'>Alqahtani, Innab, and Bahari 2021)</ns0:ref>, numerous other factors, such as gender, level of the course, and quality of online learning approaches also play some roles <ns0:ref type='bibr' target='#b87'>(Vadakalu Elumalai et al. 2020)</ns0:ref>.</ns0:p><ns0:p>Amid the COVID-19 pandemic that has presented significant challenges to societies, Saudi Arabia, like many other countries, has attempted to adapt to the looming crisis. Since education is one of the sectors that was greatly affected by the pandemic, Saudi Arabian universities have taken a number of initiatives to accommodate the decision to adopt online learning during the COVID-19 pandemic. 
Saudi Arabia's MOE has provided e-learning solution licenses to all universities in order to support and facilitate the transition to e-learning and online learning, including the Blackboard Learning Management System (LMS) along with its video conferencing app, Blackboard Collaborate Ultra (Ministry of Education 2021). The MOE also provided free internet access to students around the country and increased bandwidth to accommodate the high demand for internet connectivity. In collaboration with charity organizations, the MOE has also supported deserving students with laptops and required training (A. Y. Alqahtani and Rajkhan 2020).</ns0:p><ns0:p>According to the MOE, approximately 1.6 million students had taken more than 4 million tests online in 43 private and public universities by May 2020. Over 58,179 faculty members participated in this transition by delivering their lectures, conducting exams, and holding discussions fully online, averaging over 1.5 million online classes per week (Ministry of Education 2021). Saudi Arabia's efforts to pursue successful transitions to online learning and to enable the education process for more than six million students in schools and universities during the COVID-19 pandemic have been commended by <ns0:ref type='bibr'>UNESCO (Vadakalu Elumalai et al. 2020)</ns0:ref>. Regardless, for these transitions to be successful, it is critical to ensure that faculty members serving as frontline instructors are capable of providing students with the best online learning experience possible, which includes confirming that their awareness level is sufficient to keep online learning activities secure and private for all parties involved.</ns0:p></ns0:div>
<ns0:div><ns0:head>Theoretical framework</ns0:head><ns0:p>In 2006, Kruger and Kearney developed the KAB model, a prototype for assessing information security awareness, which consists of three different dimensions (i.e., knowledge, attitudes, and behaviors) <ns0:ref type='bibr' target='#b52'>(Kruger and Kearney 2006)</ns0:ref>. Each dimension is measured through a series of multiple-choice questions with either correct or incorrect answers, in addition to a third option of 'Don't know' for the 'knowledge' and 'attitudes' dimensions only. Since then, this KAB model has been widely used as an instrument for assessing information security awareness <ns0:ref type='bibr' target='#b63'>(McCormac et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b70'>Onumo, Ullah-Awan, and Cullen 2021)</ns0:ref>.</ns0:p><ns0:p>Additionally, based on our review of the literature, we identified several individual-level factors that are associated with security and privacy awareness. Specifically, we included five factors in this study: attitudes toward information and communication technology (ICT) for teaching and research, digital literacy, privacy concerns, perceived security awareness, and familiarity with the video conferencing app platform.</ns0:p></ns0:div>
<ns0:div><ns0:head>Perceived Security Policy Awareness and Privacy Concerns</ns0:head><ns0:p>A substantial body of literature addresses people's concerns about how their private information is shared when they use information technology goods and services <ns0:ref type='bibr' target='#b71'>(Petronio and Child 2020)</ns0:ref>, as well as how they perceive their own security awareness to protect that information <ns0:ref type='bibr' target='#b17'>(Bulgurcu, Cavusoglu, and Benbasat 2010;</ns0:ref><ns0:ref type='bibr' target='#b57'>Li et al. 2019)</ns0:ref>. These studies contend that perceived security policy awareness, along with privacy concerns, leads to more careful judgments about whether and how to use information technology-related goods or services. Consequently, we expect the same to be true for the video conferencing applications examined in this study. Therefore, the first two hypotheses in this study are: H1: Faculty with a higher level of perceived security policy awareness have a higher level of awareness of video conferencing apps' security and privacy settings. H2: Faculty with a higher level of privacy concerns have a higher level of awareness of video conferencing apps' security and privacy settings.</ns0:p></ns0:div>
<ns0:div><ns0:head>Attitudes toward ICT for Teaching & Research and Digital Literacy</ns0:head><ns0:p>The role of ICT in the improvement of education is undeniable. However, it is no secret that not all faculty share the same attitudes toward the use of ICT for teaching and research, either due to their lack of experience that may translate to lower levels of digital literacy <ns0:ref type='bibr' target='#b23'>(Cavas et al. 2009)</ns0:ref> or simply personal preferences <ns0:ref type='bibr' target='#b16'>(Bauwens et al. 2020)</ns0:ref>. Taking these findings into account, we hypothesize that: H3: Faculty with more positive attitudes toward ICT for teaching and research have a higher level of awareness of video conferencing apps' security and privacy settings. H4: Faculty with a higher level of digital literacy have a higher level of awareness of video conferencing apps' security and privacy settings.</ns0:p></ns0:div>
<ns0:div><ns0:head>Familiarity with the App</ns0:head><ns0:p>Many different video conferencing apps are used by different institutions amid the worldwide adoption of distance learning due to the COVID-19 pandemic. While most video conferencing apps share the same major features, each app may have its own unique user interface and minor features. For that reason, we hypothesize that: H5: Faculty who are more familiar with the video conferencing app in use have a higher level of awareness of video conferencing apps' security and privacy settings.</ns0:p></ns0:div>
<ns0:div><ns0:p>Based on all the hypotheses above, the conceptual model of security and privacy awareness in video conferencing apps in this study is shown in Figure <ns0:ref type='figure' target='#fig_4'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Target Population</ns0:head><ns0:p>Saudi Arabia has 29 public universities and 14 private universities (Ministry of Education 2020). As of the middle of the spring 2020 semester, Saudi authorities suspended face-to-face teaching in all these universities as a result of the spread of the COVID-19 pandemic. The MOE requested that universities move all courses online using the available e-learning solutions, and online learning continued for the following three semesters. The target population for this study was faculty from any Saudi Arabian university who were teaching during these semesters. This included teaching assistants, lecturers, assistant professors, associate professors, and full professors. In its latest report, published in 2020, the MOE stated that there were approximately 71,000 faculty teaching in Saudi Arabian universities (Ministry of Education 2021). Saudi Arabian universities normally provide their faculty with computer devices, assign them email addresses, and require them to check their emails regularly. Therefore, it can be said that the entire target population of this study was theoretically accessible.</ns0:p></ns0:div>
<ns0:div><ns0:head>Measures</ns0:head><ns0:p>In this study, there are five latent exogenous variables (i.e., attitudes toward ICT for teaching and research, digital literacy, privacy concerns, perceived security policy awareness, and familiarity with Blackboard Collaborate Ultra). As summarized in Table <ns0:ref type='table'>1</ns0:ref>, we either developed, adopted, or adapted measurement items from other studies for all five of them. In addition, three observed variables (i.e., knowledge, attitudes, and behaviors) were combined to form a single composite score to answer the first research question and treated as indicators of a latent endogenous variable (i.e., awareness of security and privacy settings on Blackboard Collaborate Ultra) to answer the second research question.</ns0:p></ns0:div>
<ns0:div><ns0:head>Perceived Security Policy Awareness</ns0:head><ns0:p>We adopted the scales from <ns0:ref type='bibr' target='#b17'>(Bulgurcu, Cavusoglu, and Benbasat 2010)</ns0:ref> to measure perceived security policy awareness among the faculty in this study. We omitted the first three out of six items in the original scales because they measure security awareness in general, as opposed to awareness of security policy within an organization, which is the focus of the last three items. The scales were measured on a 5-point Likert scale.</ns0:p></ns0:div>
<ns0:div><ns0:head>Privacy Concerns</ns0:head><ns0:p>We adopted the scales from <ns0:ref type='bibr' target='#b60'>(Malhotra, Kim, and Agarwal 2004)</ns0:ref> to measure privacy concerns among the faculty in this study. Specifically, we selected only the two most relevant of the three to four original items for each of the control, awareness of practice, and collection dimensions, yielding six items in total, as opposed to ten. The scales were measured on a 5-point Likert scale.</ns0:p></ns0:div>
<ns0:div><ns0:head>Attitudes toward ICT for teaching and research</ns0:head><ns0:p>We adapted the scales from <ns0:ref type='bibr' target='#b69'>(Ng 2012)</ns0:ref> to measure attitudes toward utilizing ICT for teaching and research purposes among the faculty in this study. Specifically, we combined the teaching and research parts into single items, yielding four items in total instead of eight. The scales were measured on a 5-point Likert scale.</ns0:p></ns0:div>
<ns0:div><ns0:head>Digital Literacy</ns0:head><ns0:p>We adopted the scales from <ns0:ref type='bibr' target='#b69'>(Ng 2012)</ns0:ref> to measure digital literacy among the faculty in this study. Specifically, we omitted three of the six original items for the technical dimension while keeping both items from each of the cognitive and social-emotional dimensions, for six items in total as opposed to nine. The scales were measured on a 5-point Likert scale.</ns0:p></ns0:div>
<ns0:div><ns0:head>Familiarity with Blackboard Collaborate Ultra</ns0:head><ns0:p>We developed three items to measure the faculty members' familiarity with Blackboard Collaborate Ultra. The first two items concerned their perceived familiarity and their usage frequency, measured on a 5-point Likert scale, while the last one asked whether or not they had read the terms of agreement for Blackboard Collaborate Ultra.</ns0:p></ns0:div>
<ns0:div><ns0:head>Awareness of Security and Privacy Settings on Blackboard Collaborate Ultra</ns0:head><ns0:p>We developed five questions about Blackboard Collaborate Ultra to test our participants' knowledge, attitudes, and behaviors, representing their awareness of security and privacy settings on Blackboard Collaborate Ultra. The first three questions consist of side-by-side pictures of the default and modified security and privacy settings on this app. We asked respondents to identify which one was the default (i.e., knowledge), which one they preferred (i.e., attitudes), and which one they mostly used (i.e., behaviors). For the last two questions, we provided two hypothetical scenarios involving security and privacy incidents and asked respondents which options were available and what course of action they would take in each scenario. For each answer, we gave a score of 0 for a wrong answer on the knowledge dimension or the worst option on the attitudes and behavior dimensions, a score of 5 if respondents chose 'Don't know' or if their answer was partially correct, and a score of 10 if they picked the correct answer or the best option, security-wise. The complete questions are available in Appendix 1.</ns0:p></ns0:div>
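To make the per-answer scoring rule above concrete, the following is a minimal illustrative sketch in Stata (the software later used for the analysis); it is not the authors' published code, and the raw-response variable names (q1_k, q1_a, q1_b) and their 1/2/3 coding are assumptions made purely for illustration.

* Illustrative sketch only: recode raw answers into the 0/5/10 item scores described above.
* Assumed coding of raw responses: 1 = correct/best option, 2 = "Don't know" or partially correct, 3 = wrong/worst option.
foreach v of varlist q1_k q1_a q1_b {
    generate `v'_score = cond(`v' == 1, 10, cond(`v' == 2, 5, 0))
}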
<ns0:div><ns0:head>Data collection</ns0:head><ns0:p>A research ethics application was submitted to the University of Jeddah ethics committee in preparation for the data collection phase, in which an ethics approval number (UJ-REC-021) was granted for this study. We collected the data using a set of questionnaires that was delivered using Qualtrics, an online survey software program. We provided an explanation of the purpose of the study on the landing page of the survey. Participants were also informed of the approximate time required to complete the survey and were asked to give their consent to participate. They were notified that their participation is voluntary and that they may choose not to participate or withdraw participation at any time. To avoid potential biases, no gifts or incentives of any kind were promised to the participants.</ns0:p><ns0:p>The questionnaire itself had three parts. The first one was to collect demographic information about participants, including their gender, age, education level, academic fields, academic rank, teaching experience, and university name. This part was used as control variables in the analysis.</ns0:p><ns0:p>The second part contained a five-level Likert scales type of questions for participants to indicate their attitudes toward information and communications technology, their digital literacy, their perceived privacy concerns, their perceived privacy awareness, and their familiarity with Blackboard Collaborate Ultra. This part was used to measure the exogenous variables in the model.</ns0:p><ns0:p>The third part was where we measured the actual security and privacy awareness score of all participants representing their awareness of security and privacy settings on Blackboard Collaborate Ultra, which is the endogenous variable in our model. It consisted of four scenarios. Each scenario had a) two screenshots captured from Blackboard Collaborate Ultra either during a running session or from the settings window, and b) several questions designed to assess the knowledge, attitudes, and behaviors of each participant regarding some important security and privacy settings in Blackboard Collaborate Ultra, as well as some potential security and privacy issues that may arise while using the app.</ns0:p><ns0:p>The first scenario was to test the participants' awareness of the risks associated with granting guest access. The second scenario was related to enabling critical permissions such as media file sharing and whiteboard access. The third scenario concerned private chat rooms that could be abused to spread malicious and inappropriate content. The final scenario was designed to assess participants' awareness of what to do when malicious links are posted to the chat.</ns0:p><ns0:p>Before starting the data collection, we conducted a pilot study to confirm the content validity of the survey items, assess their difficulty, and get rough estimates of the time required to complete the survey. Validity was evaluated in terms of content and face validity. Content validity can help 'establish an instrument's credibility, accuracy, relevance, and breadth of knowledge regarding the domain'. Face validity, on the other hand, is used to examine the survey items in terms of 'ease of use, clarity, and readability' <ns0:ref type='bibr' target='#b19'>(Burton and Mazerolle 2011)</ns0:ref>. Twelve faculty from several Saudi Arabian universities were invited to participate in the pilot study. 
The pilot survey was developed by using Qualtrics survey software as an online survey. Participants were provided with an empty text field to comment on the relevance and clarity of that item. They were also requested to state whether a revision was required for that item. If a revision was suggested, participants were encouraged to provide a revision for the item. All comments that were provided by the participants were analyzed, and some survey items were modified to reflect the suggested changes. Some items were reworded to increase their relativity or clarity. Others were removed for irrelevance or to eliminate repetitions. There were also items that were added to measure some missing aspects.</ns0:p><ns0:p>The survey was then administered to faculty members from the 43 public and private universities in Saudi Arabia. To reach a representative sample, invitations were sent using emails and the survey link was shared on the most popular social media platforms in Saudi Arabia, i.e., WhatsApp, LinkedIn, Telegram, and Twitter. The aim was to reach all possible academic communities on these platforms. The responses were collected during the months of October and November 2021. Around 470 responses were received, of which 307 were found to be valid responses. The vast majority of the eliminated responses were incomplete ones, but there were also very few cases where participants indicated their refusal to participate in the survey.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data analysis</ns0:head><ns0:p>To investigate the actual level of security and privacy awareness of video conferencing apps among Saudi Arabian faculty in this study, we calculated a composite score based on the answers to the third part of the survey, weighting the knowledge, attitudes, and behaviors variables 3:2:5 as suggested by Kruger and Kearney <ns0:ref type='bibr' target='#b52'>(Kruger and Kearney 2006)</ns0:ref>. We then normalized the score to a range of 0 to 100, which served as the final security and privacy awareness score of each participant. As per Kruger and Kearney's categorization, any score below 60 is categorized as 'Poor', a score of 80 or above is considered 'Good', and anything in between is considered 'Average' <ns0:ref type='bibr' target='#b52'>(Kruger and Kearney 2006)</ns0:ref>.</ns0:p><ns0:p>To identify factors associated with the awareness of video conferencing apps' security and privacy settings, we used the Partial Least Squares Structural Equation Modeling (PLS-SEM) method with the help of the 'plssem' package in STATA 15 <ns0:ref type='bibr' target='#b88'>(Venturini and Mehmetoglu 2019)</ns0:ref>. PLS-SEM is a widely used structural equation modeling technique that allows for the estimation of complex relationships between latent variables in path models. It is particularly advantageous for the exploration and development of theory, as well as when prediction is the primary objective of a study, and it performs well with a small sample size <ns0:ref type='bibr' target='#b35'>(Hair, Howard, and Nitzl 2020)</ns0:ref>. Prior to running the path analysis, we checked the standardized loading of each measurement item to identify any item that should be excluded from the model, and we verified that discriminant validity was met using the squared interfactor correlations and average variance extracted (AVE). To deal with the endogeneity problem, we used the control variable approach by conducting several multigroup comparison analyses across all demographic factors <ns0:ref type='bibr' target='#b47'>(Hult et al. 2018)</ns0:ref>. The full code and dataset are available at our GitHub repository.</ns0:p></ns0:div>
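As a concrete illustration of the two analysis steps just described, a minimal Stata sketch is given below. The composite-score lines are an assumption-laden sketch (k, a, and b are taken to be each participant's mean knowledge, attitude, and behavior item scores on a 0-10 scale), while the plssem call reproduces the final measurement and structural specification whose retained indicators are reported in Table 3.

* Illustrative sketch: KAB composite with 3:2:5 weights, rescaled to 0-100 and categorized.
* Assumption: k, a, and b hold mean item scores on a 0-10 scale.
generate awareness = (3*k + 2*a + 5*b) / (3 + 2 + 5) * 10
generate awareness_cat = cond(awareness < 60, "Poor", cond(awareness < 80, "Average", "Good"))

* PLS-SEM estimation with the plssem package (final model specification).
plssem (PSPA > pspa1 pspa2 pspa3) (PC > pc2 pc3 pc4) (Att > att1 att2 att3) ///
       (DL > dl2 dl3 dl5) (Fam > fam1 fam2) (AoSPS > k a b), ///
       structural(AoSPS PSPA PC Att DL Fam) stats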
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Participant Demographic Information</ns0:head><ns0:p>Table <ns0:ref type='table'>2</ns0:ref> summarizes the demographic information of all participants in this study. As can be seen, the sample is quite balanced in terms of gender (50.81% male vs 49.19% female), academic fields (46.91% STEM vs 53.09% non-STEM), and position (57.33% tenured vs 42.67% non-tenured). Most participants are below 45 years of age (80.78%), hold a PhD degree (60.59%), and have at least ten years of experience in academia (63.84%).</ns0:p></ns0:div>
<ns0:div><ns0:head>Awareness of Video Conferencing Apps' Security and Privacy Settings</ns0:head><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref> illustrates the distribution of the composite score for the awareness of video conferencing apps' security and privacy settings. As it turns out, the overall score for all participants in this study (M = 44.27, SD = 16.06) falls into the 'Poor' level of awareness as per Kruger and Kearney's categorization <ns0:ref type='bibr' target='#b52'>(Kruger and Kearney 2006)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Factors Associated with Awareness of Security and Privacy Settings</ns0:head><ns0:p>Out of 23 measurement items for the five exogenous variables and three observed variables for the endogenous variable in our model, we found that several items, all belonging to the exogenous variables, had low standardized loading scores and thus had to be omitted from the model <ns0:ref type='bibr' target='#b38'>(Hair et al. 2012)</ns0:ref>. After multiple iterations of this validity test for the measurement model, 17 items were retained, each with a standardized loading score greater than 0.800. The summary statistics of all remaining measurement items along with their validity test results are summarized in Table <ns0:ref type='table'>3</ns0:ref>. It is also important to note that, as summarized in Table <ns0:ref type='table' target='#tab_1'>4</ns0:ref>, each latent variable's AVE score is greater than the squared interfactor correlation scores, confirming that the model's discriminant validity has been met <ns0:ref type='bibr' target='#b35'>(Hair, Howard, and Nitzl 2020)</ns0:ref>.</ns0:p><ns0:p>The results from PLS-SEM showed a very high average R² value of 0.91, indicating a substantial coefficient of determination <ns0:ref type='bibr' target='#b36'>(Hair, Ringle, and Sarstedt 2011)</ns0:ref>, in addition to a relative goodness-of-fit (GoF) value of 0.99, which meets the rule of thumb <ns0:ref type='bibr' target='#b43'>(Henseler and Sarstedt 2013;</ns0:ref><ns0:ref type='bibr' target='#b89'>Vinzi, Trinchera, and Amato 2010)</ns0:ref>. Furthermore, the path analysis showed that all five relationships are statistically significant, as we hypothesized. However, one unexpected result is the negative coefficient of attitudes toward ICT for teaching and research on the awareness of video conferencing apps' security and privacy settings. Our subsequent multigroup comparison analysis showed that the results are quite robust across all demographic factors, with two exceptions. First, the structural effect of privacy concerns on the awareness of video conferencing apps' security and privacy settings is significant only for those with a STEM background and not for others with a non-STEM background. Second, the negative structural effect of attitudes toward ICT for teaching and research on the awareness of video conferencing apps' security and privacy settings is significant only for those having more than ten years of experience in academia and not for others having less.</ns0:p></ns0:div>
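For readers who want to reproduce the discriminant validity check just described (each construct's AVE should exceed its squared correlations with every other construct, as in Table 4), the short Stata sketch below illustrates the comparison; it assumes, for illustration only, that the latent variable scores are available in the dataset under the construct names used in this paper.

* Illustrative sketch: squared interfactor correlations for the Fornell-Larcker comparison.
* Assumption: latent scores exist as variables named PSPA PC Att DL Fam AoSPS.
correlate PSPA PC Att DL Fam AoSPS
matrix C = r(C)
* Element-wise squared correlations, computed in Mata and returned as a Stata matrix.
mata: st_matrix("Csq", st_matrix("C") :^ 2)
matrix list Csq
* Each off-diagonal entry of Csq should be smaller than the AVE values on the diagonal of Table 4.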
<ns0:div><ns0:p>Based on the findings, Figure <ns0:ref type='figure'>3</ns0:ref> illustrates the final model of the awareness of video conferencing apps' security and privacy settings among Saudi Arabian faculty in this study, while Table <ns0:ref type='table'>5</ns0:ref> summarizes the model's hypothesis test results.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>This study investigates the actual security and privacy awareness of Saudi Arabian faculty when using video conferencing apps, particularly Blackboard Collaborate Ultra, for teaching during the COVID-19 pandemic. One of the study's key findings is that, in general, faculty in Saudi Arabia still have poor security and privacy awareness of video conferencing apps. This may be understandable considering that most of them only began using this technology on a daily basis because of the pandemic (Ali Alammary, Alshaikh, and Alhogail 2021). As evidenced by earlier studies, the use of Blackboard in Saudi Arabia was rather low prior to the pandemic (Al <ns0:ref type='bibr' target='#b0'>Meajel and Sharadgah 2018;</ns0:ref><ns0:ref type='bibr' target='#b83'>Tawalbeh 2017)</ns0:ref>. Furthermore, studies show that general awareness of cybersecurity practices is still not at the desired level (Ali Alammary, Alshaikh, and Alhogail 2021). Therefore, more effort is needed, through a range of strategies, not only to raise awareness but also to build a cybersecurity culture <ns0:ref type='bibr' target='#b10'>(Alshaikh 2020</ns0:ref>). An example of a proposed strategy to build a cybersecurity culture is establishing support groups called 'cyber champions' to raise academic privacy awareness and influence faculty members' behaviors toward adopting cybersecurity practices. Several studies have found that using support groups and peers to change cybersecurity behavior is an effective strategy <ns0:ref type='bibr' target='#b12'>(Alshaikh and Adamson 2021;</ns0:ref><ns0:ref type='bibr' target='#b26'>Cram et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b31'>Guo et al. 2011)</ns0:ref>. This approach could be used by universities to raise awareness among their academic community about the importance of using video conferencing apps in a secure and private manner.</ns0:p><ns0:p>Among the five latent exogenous variables examined in this study, perceived security policy awareness has the strongest positive effect on awareness of security and privacy settings in video conferencing apps. This particular finding is not surprising. While this is not always the case <ns0:ref type='bibr' target='#b82'>(Tariq, Brynielsson, and Artman 2014)</ns0:ref>, a higher perceived level of security awareness typically results in improved security and privacy practices <ns0:ref type='bibr' target='#b56'>(Lebek et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b73'>Pratama and Firmansyah 2021;</ns0:ref><ns0:ref type='bibr' target='#b48'>Hwang et al. 2021)</ns0:ref>. Therefore, it is only natural for faculty who have a higher awareness level of security policies within their organization to be more willing to invest time in learning the security and privacy settings for any app they use, including video conferencing apps.</ns0:p><ns0:p>The second largest positive effect on awareness of video conferencing app security and privacy settings was found to be familiarity with the video conferencing app itself. As we predicted when initially developing a new construct for this variable, familiarity with the app in use is critical, given the abundance of available options, each with its own set of features and settings. As a result, those less familiar with the video conferencing app in use may be unaware of all security and privacy settings available to them. 
This finding is also consistent with findings from other studies showing that familiarity with concepts, technical terms, or security-related systems can aid in increasing people's security awareness <ns0:ref type='bibr' target='#b77'>(Schmidt et al. 2008;</ns0:ref><ns0:ref type='bibr'>Zwilling et al. 2022)</ns0:ref>. Fortunately, this is a relatively straightforward issue to address, for example with adequate technical support to educate faculty on the subject.</ns0:p><ns0:p>Digital literacy was found to have a moderate positive effect on the awareness of security and privacy settings on video conferencing apps. A simple explanation is that individuals with higher digital literacy possess the necessary skills to independently navigate all security and privacy settings. As such, individuals with a higher level of digital literacy tend to have a heightened security and privacy awareness, which is also consistent with the findings from several previous studies <ns0:ref type='bibr' target='#b75'>(Sasvári, Nemeslaki, and Rauch 2015;</ns0:ref><ns0:ref type='bibr' target='#b68'>Nemeslaki and Sasvari 2015;</ns0:ref><ns0:ref type='bibr' target='#b27'>Cranfield et al. 2020)</ns0:ref>.</ns0:p><ns0:p>Privacy concerns were found to be the fourth exogenous variable having a positive effect on the awareness of security and privacy settings on video conferencing apps. This finding is consistent with the extensive discussion in the literature regarding the relationship between privacy concerns and security awareness in general <ns0:ref type='bibr' target='#b24'>(Chung et al. 2021;</ns0:ref><ns0:ref type='bibr' target='#b79'>Siponen 2001)</ns0:ref>. However, unlike the previous three exogenous variables, this one has a significant positive effect only on participants with a STEM background. In other words, the effect is essentially nonexistent among faculty members with a social sciences, arts, humanities, health, or medical sciences background. This finding could be explained by STEM faculty members' greater familiarity with ICT in general, including its benefits and risks. As another study pointed out, STEM faculty were more aware of cybersecurity threats such as phishing than their non-STEM colleagues <ns0:ref type='bibr' target='#b28'>(Diaz, Sherman, and Joshi 2020)</ns0:ref>. As a result, they have a higher level of privacy concerns than their non-STEM counterparts and are thus more aware of the security and privacy settings of any applications they use, including video conferencing apps.</ns0:p><ns0:p>Unlike the others, attitudes toward ICT for teaching and research were found to have a detrimental effect on awareness of security and privacy settings on video conferencing apps. While this finding is quite surprising, the fact that it is significant only among participants with more than ten years of teaching experience may reveal an intriguing story. Senior faculty appear to be accustomed to whatever ICT solutions they used prior to the COVID-19 pandemic, which almost certainly did not include video conferencing apps. Having to learn something new in order to do something they are already very familiar with may have become a barrier that prevents them from fully utilizing this new technology, particularly in terms of its security and privacy settings. 
After all, reluctance to change, whether among individuals in general <ns0:ref type='bibr' target='#b15'>(Audia and Brion 2007)</ns0:ref> or among faculty in particular <ns0:ref type='bibr' target='#b65'>(McCrickerd 2012;</ns0:ref><ns0:ref type='bibr' target='#b80'>Tallvid 2016)</ns0:ref>, is not something new. The good news is that, while statistically significant, the negative effect of this variable in this model is the smallest of all exogenous variables. As such, concentrating on the other variables that have a positive effect may be sufficient to offset it.</ns0:p><ns0:p>Apart from academic field and teaching experience, no statistically significant difference in any other demographic factor was observed, including age, gender, educational attainment, and tenure track status. On the one hand, this implies that the endogeneity issue is largely addressed in this model, which contributes to the robustness of the findings. On the other hand, the relatively similar scores across demographic groups can be seen as good news. It indicates equality in terms of security and privacy awareness among Saudi Arabian faculty, as is also the case in some countries <ns0:ref type='bibr' target='#b33'>(Hadlington, Binder, and Stanulewicz 2020;</ns0:ref><ns0:ref type='bibr' target='#b74'>Pratama, Firmansyah, and Rahma 2022)</ns0:ref>, despite the fact that some other countries continue to demonstrate inequality <ns0:ref type='bibr' target='#b63'>(McCormac et al. 2017;</ns0:ref><ns0:ref type='bibr'>Zwilling et al. 2022)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In this study, we used the Knowledge-Attitude-Behavior (KAB) model to determine the actual security and privacy awareness of faculty in Saudi Arabia regarding video conferencing apps, in particular Blackboard Collaborate Ultra, which is the most commonly used one in the country. We discovered that the average score falls into the 'Poor' category (mean = 44.27, SD = 16.06), which is not surprising given that many faculty members only began using this new technology on a daily basis as a result of the pandemic. Furthermore, based on the results of the subsequent analysis using the PLS-SEM method, we discovered that all five latent variables in our model have significant relationships with the actual security and privacy awareness of video conferencing apps among Saudi Arabian faculty. More specifically, perceived security policy awareness has the strongest effect of them all, followed by familiarity with the video conferencing app platform and digital literacy. Meanwhile, privacy concerns are only significant for those with a STEM background, and surprisingly, attitudes toward the use of ICT for teaching and research are negatively related to the actual security and privacy awareness among those having more than ten years of experience in academia.</ns0:p><ns0:p>This study lays the groundwork for future research and interventions aimed at increasing user awareness of security and privacy concerns when using video conferencing apps for teaching and research purposes. Given the rapid adoption of video conferencing apps as a result of distance learning in the face of the COVID-19 pandemic, addressing this issue is becoming increasingly critical. Finally, we suggest that similar studies be done in other parts of the world to account for cultural differences that might make people less aware of cybersecurity and privacy, especially when it comes to video conferencing apps.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,199.12,525.00,345.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>Manuscript to be reviewed</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='3'>Computer Science</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Collaborate Ultra</ns0:cell><ns0:cell>fam3 *</ns0:cell><ns0:cell>2.32</ns0:cell><ns0:cell>1.51</ns0:cell><ns0:cell>0.542</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Awareness of</ns0:cell><ns0:cell>k</ns0:cell><ns0:cell>3.74</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.937</ns0:cell><ns0:cell>0.937</ns0:cell><ns0:cell cols='2'>0.913 0.945</ns0:cell><ns0:cell>0.918</ns0:cell></ns0:row><ns0:row><ns0:cell>Variables Security and Privacy Settings</ns0:cell><ns0:cell>a Measurement Items b</ns0:cell><ns0:cell>3.82 Mean 2.79</ns0:cell><ns0:cell>0.77 SD 0.69</ns0:cell><ns0:cell>Standardized 0.942 (Initial) Loading 0.889</ns0:cell><ns0:cell>Standardized 0.942 (Final) Loading 0.889</ns0:cell><ns0:cell>Cronbach</ns0:cell><ns0:cell cols='2'>DG rho_A</ns0:cell></ns0:row><ns0:row><ns0:cell>Policy Awareness Perceived Security</ns0:cell><ns0:cell>pspa1 pspa2 pspa3</ns0:cell><ns0:cell>3.48 3.47 3.61</ns0:cell><ns0:cell>1.07 1.07 1.06</ns0:cell><ns0:cell>0.949 0.970 0.934</ns0:cell><ns0:cell>0.949 0.934 0.970</ns0:cell><ns0:cell cols='2'>0.947 0.966</ns0:cell><ns0:cell>0.948</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>pc1*</ns0:cell><ns0:cell>4.21</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.680</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>pc2</ns0:cell><ns0:cell>4.30</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.728</ns0:cell><ns0:cell>0.846</ns0:cell><ns0:cell cols='2'>0.772 0.866</ns0:cell><ns0:cell>0.790</ns0:cell></ns0:row><ns0:row><ns0:cell>Privacy Concerns</ns0:cell><ns0:cell>pc3 pc4</ns0:cell><ns0:cell>4.53 4.66</ns0:cell><ns0:cell>0.73 0.63</ns0:cell><ns0:cell>0.714 0.767</ns0:cell><ns0:cell>0.802 0.831</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>pc5 *</ns0:cell><ns0:cell>4.45</ns0:cell><ns0:cell>0.81</ns0:cell><ns0:cell>0.690</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>pc6 *</ns0:cell><ns0:cell>4.28</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.663</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Attitudes toward</ns0:cell><ns0:cell>att1</ns0:cell><ns0:cell>4.42</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell>0.878</ns0:cell><ns0:cell>0.892</ns0:cell><ns0:cell cols='2'>0.884 0.928</ns0:cell><ns0:cell>0.891</ns0:cell></ns0:row><ns0:row><ns0:cell>Using ICT for</ns0:cell><ns0:cell>att2</ns0:cell><ns0:cell>4.26</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.884</ns0:cell><ns0:cell>0.902</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Teaching &</ns0:cell><ns0:cell>att3</ns0:cell><ns0:cell>4.19</ns0:cell><ns0:cell>0.90</ns0:cell><ns0:cell>0.886</ns0:cell><ns0:cell>0.907</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Research</ns0:cell><ns0:cell>att4 *</ns0:cell><ns0:cell>4.18</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.699</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>dl1 
*</ns0:cell><ns0:cell>3.85</ns0:cell><ns0:cell>0.92</ns0:cell><ns0:cell>0.671</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>dl2</ns0:cell><ns0:cell>4.21</ns0:cell><ns0:cell>0.77</ns0:cell><ns0:cell>0.783</ns0:cell><ns0:cell>0.823</ns0:cell><ns0:cell cols='2'>0.787 0.875</ns0:cell><ns0:cell>0.796</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>dl3</ns0:cell><ns0:cell>3.90</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.809</ns0:cell><ns0:cell>0.863</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Digital Literacy</ns0:cell><ns0:cell>dl4 *</ns0:cell><ns0:cell>4.22</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell>0.757</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>dl5</ns0:cell><ns0:cell>3.98</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.771</ns0:cell><ns0:cell>0.823</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>dl6 *</ns0:cell><ns0:cell>4.26</ns0:cell><ns0:cell>0.79</ns0:cell><ns0:cell>0.765</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>dl7 *</ns0:cell><ns0:cell>3.99</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.438</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell>Familiarity with</ns0:cell><ns0:cell>fam1</ns0:cell><ns0:cell>3.87</ns0:cell><ns0:cell>1.18</ns0:cell><ns0:cell>0.857</ns0:cell><ns0:cell>0.914</ns0:cell><ns0:cell cols='2'>0.741 0.884</ns0:cell><ns0:cell>0.764</ns0:cell></ns0:row><ns0:row><ns0:cell>Blackboard</ns0:cell><ns0:cell>fam2</ns0:cell><ns0:cell>4.25</ns0:cell><ns0:cell>1.19</ns0:cell><ns0:cell>0.790</ns0:cell><ns0:cell>0.865</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:71735:2:0:NEW 8 May 2022) Manuscript to be reviewed Computer Science 3 PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:71735:2:0:NEW 8 May 2022)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 4 (on next page)</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Discriminant validity</ns0:cell></ns0:row><ns0:row><ns0:cell>Note: diagonal elements are average variance extracted (AVE), off-diagonal elements are</ns0:cell></ns0:row><ns0:row><ns0:cell>squared interfactor correlation</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:71735:2:0:NEW 8 May 2022)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Discriminant validityNote: diagonal elements are average variance extracted (AVE), off-diagonal elements are squared interfactor correlation</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Variable</ns0:cell><ns0:cell>PSPA</ns0:cell><ns0:cell>PC</ns0:cell><ns0:cell>Att</ns0:cell><ns0:cell>DL</ns0:cell><ns0:cell>Fam</ns0:cell><ns0:cell>AoSPS</ns0:cell></ns0:row><ns0:row><ns0:cell>PSPA</ns0:cell><ns0:cell>0.905</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>PC</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0.683</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Att</ns0:cell><ns0:cell>0.065</ns0:cell><ns0:cell>0.077</ns0:cell><ns0:cell>0.810</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>DL</ns0:cell><ns0:cell>0.218</ns0:cell><ns0:cell>0.077</ns0:cell><ns0:cell>0.219</ns0:cell><ns0:cell>0.700</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Fam</ns0:cell><ns0:cell>0.034</ns0:cell><ns0:cell>0.008</ns0:cell><ns0:cell>0.012</ns0:cell><ns0:cell>0.069</ns0:cell><ns0:cell>0.792</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>AoSPS</ns0:cell><ns0:cell>0.750</ns0:cell><ns0:cell>0.058</ns0:cell><ns0:cell>0.072</ns0:cell><ns0:cell>0.326</ns0:cell><ns0:cell>0.280</ns0:cell><ns0:cell>0.852</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "May 7, 2022
Dear Editor,
We would like to thank you and all reviewers for their follow-up comments on the manuscript, which we have incorporated into the revised version.
Specifically, we responded to your and their comments and suggestions in blue directly below them.
We believe this manuscript is now ready to be published in PeerJ Computer Science.
Dr. Ahmad R. Pratama
Assistant Professor of Informatics
On behalf of all authors
Editor's Decision
Reviewers have commented about the hypothesis and also about the validity of the recommendations statistically. In particular include the statistical measures that are considered during the experimental analysis.
We have made some adjustments on the way we presented our hypotheses. All statistical measures that are provided by STATA have been presented, either within the text body, tables, or figures. We did add some more information that was not included in the previous version, such as the R2 value, representing coefficient of determination.
Reviewer 1 (Anonymous)
Basic reporting
no comment
-
Experimental design
no comment
-
Validity of the findings
no comment
-
Additional comments
no comment
-
Reviewer Martin Falk
Basic reporting
Some of the hypotheses are trivial and self-explanatory: check them out: Perceived security awareness has a higher awareness of the security and privacy settings of video conferencing apps. I suggest to de-emphasise this.
We made some adjustments and renamed the variable from 'perceived security awareness' to 'perceived security policy awareness' to clarify that this latent variable does not refer to user awareness of cybersecurity in general, but rather to their awareness of security policy within their organization (page 6 line 230–235). This latent variable is crucial, as evidenced by the significant decrease in R2 (from 0.91 to 0.5) when it is removed from the model.
However, we agreed to de-emphasize this hypothesis by moving it to the first position, as it is the one supported by the literature the most. In addition, we mentioned in the discussion section that “This particular finding is not surprising” (page 11 line 426) and “it is only natural for faculty who have a higher awareness level of security policies within their organization to be more willing to invest time in learning the security and privacy settings for any app they use, including video conferencing apps” (page 11 line 429–432). The remainder of the discussion (page 11–13) focuses on the other hypotheses, particularly those in which we discovered a moderating effect of academic field and teaching experience.
Experimental design
'We used a survey experiment to determine '
Please rewrite. This paper applied PLS-SEM to a stand alone survey. This is not an experimental design.
We have removed all the words “experiment” within the manuscript.
Validity of the findings
PLS-SEM
Please report all the needed validity tests for the measurement and structural model. This part is not complete.
We followed the guidelines from the journal paper written by the authors of plssem package in STATA in reporting the PLS-SEM results in this paper.
Venturini, Sergio, and Mehmet Mehmetoglu. 'plssem: A stata package for structural equation modeling with partial least squares.' Journal of Statistical Software 88 (2019): 1-35.
Below are the complete PLS-SEM results for the final iteration (excluding the previous iterations, in which items to be omitted were identified, and the subsequent multigroup analysis, in which moderation effects were identified) as provided by STATA:
. plssem (PSPA > pspa1 pspa2 pspa3) (PC > pc2 pc3 pc4) (Att > att1 att2 att3) (DL > dl2 dl3 dl5) (Fam > fam1 fam2) (AoSPS > k a b)
> , structural(AoSPS PSPA PC Att DL Fam) stats
Iteration 1: outer weights rel. diff. = 3.05e-01
Iteration 2: outer weights rel. diff. = 6.34e-03
Iteration 3: outer weights rel. diff. = 7.93e-04
Iteration 4: outer weights rel. diff. = 4.12e-05
Iteration 5: outer weights rel. diff. = 5.78e-06
Iteration 6: outer weights rel. diff. = 2.76e-07
Iteration 7: outer weights rel. diff. = 4.25e-08
Partial least squares SEM Number of obs = 306
Average R-squared = 0.90799
Average communality = 0.79021
Weighting scheme: path Absolute GoF = 0.84706
Tolerance: 1.00e-07 Relative GoF = 0.99039
Initialization: indsum Average redundancy = 0.77355
Table of summary statistics for indicator variables
---------------------------------------------------------------------------------------------------------
Indicator | mean sd median min max N missing
--------------+------------------------------------------------------------------------------------------
pspa1 | 3.476 1.070 4.000 1.000 5.000 307 0
pspa2 | 3.466 1.067 4.000 1.000 5.000 307 0
pspa3 | 3.612 1.059 4.000 1.000 5.000 307 0
pc2 | 4.296 0.800 4.000 1.000 5.000 307 0
pc3 | 4.531 0.729 5.000 1.000 5.000 307 0
pc4 | 4.658 0.634 5.000 1.000 5.000 307 0
att1 | 4.417 0.822 5.000 1.000 5.000 307 0
att2 | 4.257 0.872 4.000 1.000 5.000 307 0
att3 | 4.189 0.899 4.000 1.000 5.000 307 0
dl2 | 4.208 0.773 4.000 1.000 5.000 307 0
dl3 | 3.902 0.869 4.000 1.000 5.000 307 0
dl5 | 3.984 0.865 4.000 1.000 5.000 307 0
fam1 | 3.870 1.178 4.000 1.000 5.000 307 0
fam2 | 4.245 1.188 5.000 1.000 5.000 306 1
k | 3.741 0.799 3.670 1.000 5.000 307 0
a | 3.820 0.771 4.000 1.330 5.000 307 0
b | 2.794 0.695 3.000 0.670 4.000 307 0
---------------------------------------------------------------------------------------------------------
Measurement model - Standardized loadings
--------------------------------------------------------------------------------------------------------
| Reflective: Reflective: Reflective: Reflective: Reflective: Reflective:
| PSPA PC Att DL Fam AoSPS
--------------+-----------------------------------------------------------------------------------------
pspa1 | 0.949
pspa2 | 0.970
pspa3 | 0.934
pc2 | 0.846
pc3 | 0.802
pc4 | 0.831
att1 | 0.892
att2 | 0.902
att3 | 0.907
dl2 | 0.823
dl3 | 0.863
dl5 | 0.823
fam1 | 0.914
fam2 | 0.865
k | 0.936
a | 0.943
b | 0.890
--------------+-----------------------------------------------------------------------------------------
Cronbach | 0.947 0.772 0.884 0.787 0.741 0.913
DG | 0.966 0.866 0.928 0.875 0.884 0.945
rho_A | 0.948 0.790 0.891 0.796 0.764 0.917
--------------------------------------------------------------------------------------------------------
Discriminant validity - Squared interfactor correlation vs. Average variance extracted (AVE)
--------------------------------------------------------------------------------------------------------
| PSPA PC Att DL Fam AoSPS
--------------+-----------------------------------------------------------------------------------------
PSPA | 1.000 0.026 0.065 0.218 0.034 0.750
PC | 0.026 1.000 0.077 0.077 0.008 0.058
Att | 0.065 0.077 1.000 0.219 0.012 0.072
DL | 0.218 0.077 0.219 1.000 0.069 0.326
Fam | 0.034 0.008 0.012 0.069 1.000 0.280
AoSPS | 0.750 0.058 0.072 0.326 0.280 1.000
--------------+-----------------------------------------------------------------------------------------
AVE | 0.905 0.683 0.810 0.700 0.792 0.852
--------------------------------------------------------------------------------------------------------
Structural model - Standardized path coefficients
-----------------------------
Variable | AoSPS
--------------+--------------
PSPA | 0.737
| (0.000)
PC | 0.065
| (0.001)
Att | -0.040
| (0.049)
DL | 0.134
| (0.000)
Fam | 0.356
| (0.000)
--------------+--------------
r2_a | 0.906
-----------------------------
p-values in parentheses
We have presented all information above in the revised manuscript. The results are presented in Table 3 for the measurement model validity test (along with the summary statistics), in Table 4 for the discriminant validity test, and Figure 3 for the structural model. In addition, we reported the R2 value and the goodness of fit in page 10 line 385–386. There is no other validity test result in STATA that we did not report in the manuscript.
Additional comments
Please follow the style guidelines
Alammary, A., A. Carbone, and J. Sheard. 2016. “Blended Learning in Higher Education: Delivery Methods Selection.” In Twenty-Fourth European Conference on Information Systems (ECIS 2016). İstanbul,Turkey.
Alammary, Ali, Moneer Alshaikh, and Areej Alhogail. 2021. “The Impact of the COVID-19 Pandemic on the Adoption of e-Learning among Academics in Saudi Arabia.” Behaviour & Information Technology, September, 1–23. https://doi.org/10.1080/0144929x.2021.1973106
Thank you for calling attention to the inconsistent citation style. Evidently, our citation manager misidentified the authors’ names. We have manually overwritten it to ensure that everything is accurate in the most recent manuscript.
Reviewer Durai Raj Vincent P M
Basic reporting
The comments are given in the previous review are addressed by the authors
Thank you.
Experimental design
Ok
-
Validity of the findings
Ok
-
Additional comments
Nil
-
" | Here is a paper. Please give your review comments after reading it. |
698 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>activities involve the use of video conferencing apps to facilitate synchronous learning sessions. While some faculty members were not accustomed to using video conferencing apps, they had no other choice than to jump on board regardless of their readiness, one of which involved security and privacy awareness. On the other hand, video conferencing apps users face a number of security and privacy threats and vulnerabilities, many of which rely on human factors to be exploited. In this study, we used survey data from 307 faculty members at 43 Saudi Arabian universities to determine the level of awareness among Saudi Arabian faculty regarding security and privacy settings of video conferencing apps and to investigate the factors associated with it. We analyzed the data using the Knowledge-Attitudes-Behaviors (KAB) model and the Partial Least Squares Structural Equation Modeling (PLS-SEM) method. We found that the average awareness score of video conferencing apps' security and privacy settings falls into the 'Poor' category, which is not surprising considering that many faculty members only started using this new technology on a daily basis because of the pandemic. Further analysis showed that perceived security, familiarity with the app, and digital literacy of faculty members are significantly associated with higher awareness. Privacy concerns are significantly associated with higher awareness only among STEM faculty members, while attitudes toward ICT for teaching and research are negatively associated with such awareness among senior faculty members with more than ten years of experience. This study lays the foundation for future research and user education on the security and privacy settings of video conferencing applications.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>During the COVID-19 pandemic, video conferencing apps have gained popularity as the most viable way to facilitate business processes under work-from-home or learning-from-home strategies. According to a cross-country study <ns0:ref type='bibr' target='#b71'>(Pratama 2017)</ns0:ref>, there is an intriguing bidirectional relationship between ICT adoption and education indicators, and video conferencing apps are no exception. In many educational institutions, including those in Saudi Arabia, they have become the most popular tool for supporting online learning <ns0:ref type='bibr'>(Alshehri et al. 2020</ns0:ref>). Due to their unique characteristics, they are ideal for use in teaching and learning <ns0:ref type='bibr' target='#b4'>(Alammary, Carbone, and Sheard 2016)</ns0:ref>. They enable instructors to set up online synchronous classes that can be recorded and accessed at a later time, either for those who want to review the class again or for those who missed a class. Using personal computers, laptops, or even mobile phones, students can attend these online classes from anywhere <ns0:ref type='bibr' target='#b21'>(Camilleri and Camilleri 2021)</ns0:ref>.</ns0:p><ns0:p>Even prior to the COVID-19 pandemic, a variety of video conferencing applications were available. Many video conferencing apps, such as Google Meet, Microsoft Teams, and Blackboard Collaborate Ultra, saw an increase in downloads and usage as a result of the COVID-19 pandemic <ns0:ref type='bibr' target='#b85'>(Trueman 2020)</ns0:ref>. The vast majority of universities in Saudi Arabia use Blackboard Collaborate Ultra because the Ministry of Education (MOE) has designated the Blackboard platform as the official e-learning platform <ns0:ref type='bibr' target='#b48'>(Iffat Rahmatullah 2021)</ns0:ref>. Prior to the COVID-19 pandemic, the majority of Saudi Arabian faculty were unfamiliar with Blackboard Collaborate Ultra; however, they had no choice but to adopt it regardless of their level of preparedness (Ali <ns0:ref type='bibr' target='#b1'>Alammary, Alshaikh, and Alhogail 2021)</ns0:ref>.</ns0:p><ns0:p>On the other hand, the shift from traditional face-to-face to online learning that occurred during the pandemic has raised numerous concerns about cybersecurity and the protection of individual and organizational information resources <ns0:ref type='bibr' target='#b5'>(Almaiah, Al-Khasawneh, and Althunibat 2020)</ns0:ref>. Several cybersecurity and privacy threats and vulnerabilities plague users of video conferencing apps, including the exposure of user data, unwanted and disruptive intrusions, the spread of malware, and the hijacking of host machines through remote control tools <ns0:ref type='bibr' target='#b49'>(Joshi and Singh 2017)</ns0:ref>. Recent security reports indicate that the number of cyberattacks against numerous organizations, including universities, has increased significantly <ns0:ref type='bibr' target='#b40'>(Hakak et al. 2020;</ns0:ref><ns0:ref type='bibr' target='#b53'>Lallie et al. 2021)</ns0:ref>. Many cybersecurity incidents are caused by employees who disregard security policies due to their initial lack of security awareness <ns0:ref type='bibr' target='#b44'>(Hina and Dominic 2016)</ns0:ref>. Due to their role as instructors, who must frequently administer and host online classes on their own, faculty members become the primary actors in higher education. 
As a result, assessing their knowledge of security and privacy settings when using video conferencing apps is crucial for ensuring that everyone's online learning experience is as secure and private as possible. This is the first study to use a national survey to assess the security and privacy awareness of Saudi Arabian faculty members regarding the use of Blackboard Collaborate Ultra as the primary video conferencing app during the COVID-19 pandemic. The specific objectives of this study are to 1) comprehend and investigate the factors associated with the Saudi Arabian faculty's level of security and privacy awareness regarding the use of video conferencing apps, particularly Blackboard Collaborate Ultra, which is the most widely used video conferencing app in this country for teaching and research purposes during and possibly beyond the COVID-19 pandemic, and 2) assist universities, particularly in Saudi Arabia, in improving their security policies.</ns0:p></ns0:div>
<ns0:div><ns0:head>Online learning in Saudi Arabian universities during the COVID-19 pandemic</ns0:head><ns0:p>In 2005, the Saudi National E-learning Center (NELC) was established. In Saudi Arabia, NELC is in charge of establishing governance frameworks and regulations for e-learning and online learning. NELC plays a significant role in enhancing the online learning experience in universities and promoting effective online learning practices (A. Y. Alqahtani and Rajkhan 2020). In addition, the center develops policies and procedures for the delivery and administration of online learning programs. The policies stipulate the technologies that universities must implement and the level of support they must provide for their faculty and students. Policies also include standards and practices for creating online learning environments that are accessible (National eLearning Center 2021).</ns0:p><ns0:p>NELC has provided universities with guidelines that specify e-learning infrastructure including hardware (e.g., servers, storage, and networking), e-learning solutions (e.g., learning management systems and video conferencing apps), establishing dedicated deanships to manage e-learning matters, providing training and awareness programs, and other e-learning and online learning initiatives <ns0:ref type='bibr' target='#b60'>(Malik et al. 2018)</ns0:ref>. As a result of NELC's efforts and investment in elearning, universities in Saudi Arabia have a sufficient online learning infrastructure <ns0:ref type='bibr' target='#b83'>(Thorpe and Alsuwayed 2019)</ns0:ref>. During the COVID-19 pandemic, other researchers discovered that the IT infrastructure in Saudi Arabian universities successfully supported the transition from face-toface to online learning (A. Y. Alqahtani and Rajkhan 2020). However, the maturity level of the university played a significant role in its ability to overcome obstacles and implement e-learning solutions before the pandemic.</ns0:p><ns0:p>Research indicates that faculty and students in Saudi Arabia have overwhelmingly positive attitudes towards e-learning <ns0:ref type='bibr' target='#b45'>(Hoq 2020)</ns0:ref>. Others have discovered that Saudi Arabian students prefer e-learning due to its adaptability and enhanced communication with their teachers and peers. However, the same study revealed that students viewed online instruction as less advantageous than traditional face-to-face instruction (El-Sayed Ebaid 2020). While students' attitudes toward e-learning are partially influenced by their prior experience and readiness for online learning (N. <ns0:ref type='bibr' target='#b8'>Alqahtani, Innab, and Bahari 2021)</ns0:ref>, numerous other factors, including gender, course level, and quality of online learning approaches, also play a role <ns0:ref type='bibr' target='#b86'>(Vadakalu Elumalai et al. 2020</ns0:ref>).</ns0:p><ns0:p>In the midst of the COVID-19 pandemic, which has presented significant challenges to societies, Saudi Arabia, along with numerous other nations, has attempted to adapt to the impending crisis. During the COVID-19 pandemic, Saudi Arabian universities took a number of steps to accommodate the decision to adopt online learning, as education was one of the sectors most severely impacted by the pandemic. 
To support and facilitate the transition to e-learning and online learning, the Ministry of Education (MOE) of Saudi Arabia has provided all universities with e-learning solutions licenses, including Blackboard Learning Management System (LMS) and its video conferencing app, Blackboard Collaborate Ultra (Ministry of Education 2021). In addition to providing free internet access to students across the nation, the Ministry of Education increased bandwidth to accommodate the high demand for internet connections. In collaboration with charitable organizations, the Ministry of Education has also provided laptops and training to deserving students (A. Y. Alqahtani and Rajkhan 2020).</ns0:p><ns0:p>As of May 2020, the MOE reports that approximately 1.6 million students have taken more than 4 million online exams at 43 private and public universities. Over 58,179 faculty members participated in this transition by delivering lectures, administering exams, and holding online discussions, averaging 1.5 million online classes per week (Ministry of Education 2021). UNESCO has lauded Saudi Arabia's efforts to pursue successful transitions to online learning to facilitate the education of over six million students in schools and universities during the COVID-19 pandemic <ns0:ref type='bibr' target='#b86'>(Vadakalu Elumalai et al. 2020)</ns0:ref>. For these transitions to be successful, it is essential that faculty members serving as frontline instructors are able to provide students with the best online learning experience possible. This includes ensuring that their awareness level is sufficient to keep online learning activities secure and private for all parties.</ns0:p></ns0:div>
<ns0:div><ns0:head>Theoretical framework</ns0:head><ns0:p>In 2006, Kruger and Kearney developed the KAB model, a prototype for assessing information security awareness that consists of three dimensions (i.e., knowledge, attitudes, and behaviors) <ns0:ref type='bibr' target='#b51'>(Kruger and Kearney 2006)</ns0:ref>. Each dimension is measured by a series of multiple-choice questions with either correct or incorrect answers; a 'Don't know' option is available for the 'knowledge' and 'attitudes' dimensions only. Since then, the KAB model has been widely adopted as a tool for evaluating information security awareness <ns0:ref type='bibr' target='#b62'>(McCormac et al. 2017;</ns0:ref><ns0:ref type='bibr' target='#b69'>Onumo, Ullah-Awan, and Cullen 2021)</ns0:ref>.</ns0:p><ns0:p>In addition, based on our literature review, we identified a number of individual factors that are associated with security and privacy awareness. We included five specific factors in this study: attitudes toward information and communication technology (ICT) for teaching and research, digital literacy, privacy concerns, perceived security awareness, and familiarity with the video conferencing app platform.</ns0:p></ns0:div>
<ns0:div><ns0:head>Perceived Security Policy Awareness and Privacy Concerns</ns0:head><ns0:p>There is a large body of literature on people's concerns about how their private information is shared when they use information technology products and services <ns0:ref type='bibr' target='#b70'>(Petronio and Child 2020)</ns0:ref>, as well as on how they perceive their own security awareness to protect that information <ns0:ref type='bibr' target='#b17'>(Bulgurcu, Cavusoglu, and Benbasat 2010;</ns0:ref><ns0:ref type='bibr' target='#b57'>Li et al. 2019)</ns0:ref>. According to these studies, perceived security policy awareness and privacy concerns result in more cautious decisions regarding the use of information technology-related goods and services. Consequently, we anticipate the same for the video conferencing applications investigated in this study. The first two hypotheses in this study are therefore: H1: Faculty with a higher level of perceived security policy awareness have a higher level of awareness of video conferencing apps' security and privacy settings. H2: Faculty with a higher level of privacy concerns have a higher level of awareness of video conferencing apps' security and privacy settings.</ns0:p></ns0:div>
<ns0:div><ns0:head>Attitudes toward ICT for Teaching & Research and Digital Literacy</ns0:head><ns0:p>Indisputable is the contribution of ICT to the improvement of education. However, it is no secret that not all faculty share the same attitudes toward the use of ICT for teaching and research, either because of their lack of experience, which may translate into lower levels of digital literacy <ns0:ref type='bibr' target='#b22'>(Cavas et al. 2009)</ns0:ref>, or because of their personal preferences <ns0:ref type='bibr' target='#b16'>(Bauwens et al. 2020)</ns0:ref>. Taking these findings into account, we hypothesize that: H3: Faculty with more positive attitudes toward ICT for teaching and research have a higher level of awareness of video conferencing apps' security and privacy settings. H4: Faculty with a higher level of digital literacy have a higher level of awareness of video conferencing apps' security and privacy settings.</ns0:p></ns0:div>
<ns0:div><ns0:head>Familiarity with the App</ns0:head><ns0:p>In the wake of the global adoption of distance learning in response to the COVID-19 pandemic, numerous institutions use a variety of video conferencing apps. Although most video conferencing apps share the same major features, each app may have its own user interface and minor features. Consequently, we hypothesize that: H5: Faculty who are more familiar with the video conferencing app in use have a higher level of awareness of video conferencing apps' security and privacy settings.</ns0:p><ns0:p>Based on all the aforementioned hypotheses, the conceptual model of awareness of video conferencing apps' security and privacy settings in this study is depicted in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Target Population</ns0:head><ns0:p>There are 29 public universities and 14 private universities in Saudi Arabia (Ministry of Education 2020). Due to the spread of the COVID-19 pandemic, the Saudi government suspended face-to-face instruction in all of these universities as of the middle of the spring 2020 semester. The Ministry of Education required universities to move all courses online using the available e-learning solutions, and online education continued for the following three semesters. This study's target population consisted of faculty members at any Saudi Arabian university who were teaching during the study period. This category includes teaching assistants, lecturers, assistant professors, associate professors, and full professors. According to the Ministry of Education's most recent report, published in 2020, there were approximately 71,000 faculty teaching in Saudi Arabian universities (Ministry of Education 2021). Universities in Saudi Arabia typically provide their faculty with computers, assign them email addresses, and require them to regularly check their emails. Consequently, it can be stated that the entire population of interest to this study was theoretically accessible.</ns0:p></ns0:div>
<ns0:div><ns0:head>Measures</ns0:head><ns0:p>There are five latent exogenous variables in this study (i.e., attitudes toward ICT for teaching and research, digital literacy, perceived privacy concerns, perceived security awareness, and familiarity with Blackboard Collaborate Ultra). As summarized in Table <ns0:ref type='table'>1</ns0:ref>, we created, adopted, or adapted items from other studies for all five. In addition, three other observed variables (i.e., knowledge, attitudes, and behaviors) will be merged into a single composite score to answer the first research question and will be treated as a latent endogenous variable (i.e., awareness of security and privacy settings on Blackboard Collaborate Ultra) to answer the second research question.</ns0:p></ns0:div>
<ns0:div><ns0:head>Perceived Security Policy Awareness</ns0:head><ns0:p>We adopted the scales from <ns0:ref type='bibr' target='#b17'>(Bulgurcu, Cavusoglu, and Benbasat 2010)</ns0:ref> to measure perceived security policy awareness among the faculty in this study. We omitted the first three out of six items in the original scales because they measured security awareness in general, as opposed to awareness of security policy within an organization, which was the focus of the last three items that we kept. The scales were measured on a 5-point Likert scale.</ns0:p></ns0:div>
<ns0:div><ns0:head>Privacy Concerns</ns0:head><ns0:p>We adopted the scales from <ns0:ref type='bibr' target='#b59'>(Malhotra, Kim, and Agarwal 2004)</ns0:ref> to measure privacy concerns among the faculty in this study. Specifically, we selected only the two most pertinent items from the three to four original items for each of the control, awareness of practice, and collection dimensions, resulting in six items rather than the original ten in total. The scales were measured on a 5-point Likert scale.</ns0:p></ns0:div>
<ns0:div><ns0:head>Attitudes toward ICT for teaching and research</ns0:head><ns0:p>We adapted the scales from <ns0:ref type='bibr' target='#b68'>(Ng 2012)</ns0:ref> to measure the attitudes of the faculty in this study toward using ICT for teaching and research. Specifically, we merged the teaching and research components into a single item, reducing the total number of items from eight to four. The scales were measured on a 5-point Likert scale.</ns0:p></ns0:div>
<ns0:div><ns0:head>Digital Literacy</ns0:head><ns0:p>We adopted the scales from <ns0:ref type='bibr' target='#b68'>(Ng 2012)</ns0:ref> to measure digital literacy of the faculty in this study. In particular, we eliminated three of the original six items for the technical dimensions while retaining both items for the cognitive and social-emotional dimensions, reducing the total number of items from nine to six. The scales were measured on a 5-point Likert scale.</ns0:p></ns0:div>
<ns0:div><ns0:head>Familiarity with Blackboard Collaborate Ultra</ns0:head><ns0:p>We developed three items to measure the faculty's familiarity with Blackboard Collaborate Ultra. On a 5-point Likert scale, the first two questions assessed respondents' familiarity with and frequency of use of Blackboard Collaborate Ultra, while the last asked whether or not they had read the terms of service.</ns0:p></ns0:div>
<ns0:div><ns0:head>Awareness of Security and Privacy Settings on Blackboard Collaborate Ultra</ns0:head><ns0:p>We developed five questions about Blackboard Collaborate Ultra to assess our participants' knowledge, attitudes, and behaviors regarding Blackboard Collaborate Ultra's security and privacy settings. The first three questions contain side-by-side images of the app's default and altered security and privacy settings. We asked respondents to indicate which one was the default (i.e., knowledge), which one they preferred (i.e., attitudes), and which one they primarily utilized (i.e., behaviors). For the final two questions, we presented them with two hypothetical scenarios involving security and privacy incidents and asked them to describe the available options and the actions they would take in each scenario. For each response, we assigned a score of 0 for any incorrect answer for the knowledge dimension or the worst option for the attitudes and behavior dimensions, a score of 5 for 'Don't know' or a partially correct response, and a score of 10 for the correct answer or the most secure option. The complete questions are available in Appendix 1.</ns0:p></ns0:div>
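To make the scoring rule above concrete, here is a minimal Python sketch of how a single response could be converted into points; the response labels ("correct", "partial_or_dont_know", "incorrect") are hypothetical codes introduced only for illustration and are not part of the survey instrument itself.

```python
# Minimal sketch of the per-item scoring rule described above (0 / 5 / 10 points).
# The response labels are hypothetical codes introduced for illustration.

def score_response(response_quality: str) -> int:
    """Score a single knowledge/attitude/behavior response."""
    scores = {
        "correct": 10,               # correct answer or most secure option
        "partial_or_dont_know": 5,   # "Don't know" or partially correct response
        "incorrect": 0,              # incorrect answer or worst option
    }
    return scores[response_quality]

# Example: three knowledge items answered correct, "don't know", and incorrect
knowledge_items = ["correct", "partial_or_dont_know", "incorrect"]
knowledge_score = sum(score_response(r) for r in knowledge_items) / len(knowledge_items)
print(knowledge_score)  # 5.0 on a 0-10 item scale
```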
<ns0:div><ns0:head>Data collection</ns0:head><ns0:p>In preparation for the data collection phase, a research ethics application was submitted to the University of Jeddah ethics committee, which granted this study an ethics approval number (UJ-REC-021). We collected the data by distributing a series of questionnaires using the online survey software Qualtrics. On the survey's landing page, we provided an explanation of the study's purpose. In addition to being informed of the length of time required to complete the survey, participants were asked for their permission to participate. They were informed that participation is voluntary and that they may opt out or withdraw at any time. To eliminate the possibility of bias, no gifts or incentives of any kind were promised to the participants.</ns0:p><ns0:p>The questionnaire contained three sections. The first objective was to collect participant demographic information, including gender, age, level of education, academic fields, academic rank, teaching experience, and university name. This component served as a control variable for the analysis.</ns0:p><ns0:p>The second section consisted of Likert-type questions with five levels for participants to indicate their attitudes toward ICT, their digital literacy, their perceived privacy concerns, their perceived privacy awareness, and their familiarity with Blackboard Collaborate Ultra. This section was used to measure the model's exogenous variables.</ns0:p><ns0:p>In the third section, we measured the actual security and privacy awareness score of each participant, which represented their awareness of security and privacy settings on Blackboard Collaborate Ultra and served as the endogenous variable in our model. Four scenarios were included. Each scenario included: a) two screenshots captured from Blackboard Collaborate Ultra either during a running session or from the settings window; and b) several questions designed to assess the knowledge, attitudes, and behaviors of each participant regarding some important security and privacy settings in Blackboard Collaborate Ultra, as well as potential security and privacy issues that may arise while using the application.</ns0:p><ns0:p>The objective of the first scenario was to assess participants' awareness of the risks associated with guest access. The second scenario involved the activation of vital permissions, such as media file sharing and whiteboard access. The third scenario involved private chat rooms that could be abused to disseminate harmful and inappropriate content. The final scenario was intended to assess participants' knowledge of what to do when malicious links are posted in chat.</ns0:p><ns0:p>Before we began collecting data, we conducted a pilot study to confirm the content validity of the survey items, evaluate their difficulty, and obtain rough estimates of the time required to complete the survey. Validity was evaluated based on its content and appearance. Validity of content can help 'establish an instrument's credibility, accuracy, relevance, and breadth of domain knowledge.' Face validity, on the other hand, examines survey items for 'ease of use, clarity, and readability' <ns0:ref type='bibr' target='#b19'>(Burton and Mazerolle 2011)</ns0:ref>. Twelve faculty members from multiple universities in Saudi Arabia were invited to participate in the pilot study. The pilot survey was created as an online survey using Qualtrics survey software. Participants were given a blank text field to comment on the item's relevance and clarity. 
In addition, they were required to indicate whether a revision was necessary for the item. Participants were encouraged to provide a revision for the item if one was suggested. All participant feedback was analyzed, and a few survey questions were modified to reflect the suggested alterations. Several items were rewritten to improve their relevance or clarity. Others were eliminated due to irrelevance or duplication. In addition, items were added to measure certain missing aspects.</ns0:p><ns0:p>The survey was then distributed to faculty members at Saudi Arabia's 43 public and private universities. To reach a representative sample, invitations were emailed, and the survey link was shared on the four most popular social media platforms in Saudi Arabia: WhatsApp, LinkedIn, Telegram, and Twitter. On these platforms, the objective was to reach as many academic communities as possible. The responses were gathered between October and November of 2021. Around 470 responses were received, and 307 were determined to be complete and valid. The vast majority of responses that were eliminated were incomplete, but there were also a small number of participants who refused to participate in the survey.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data analysis</ns0:head><ns0:p>To determine the actual level of security and privacy awareness of video conferencing apps among Saudi Arabian faculty in this study, we calculated a composite score from the responses to the third part of the survey, weighting the knowledge, attitudes, and behaviors variables 3:2:5, as suggested by Kruger and Kearney <ns0:ref type='bibr' target='#b51'>(Kruger and Kearney 2006)</ns0:ref>. The final security and privacy awareness score for each participant was then normalized to a range between 0 and 100. According to Kruger and Kearney's classification, a score below 60 is categorized as 'Poor,' a score of 80 or higher is categorized as 'Good,' and a score in between is categorized as 'Average' <ns0:ref type='bibr' target='#b51'>(Kruger and Kearney 2006)</ns0:ref>.</ns0:p><ns0:p>We employed the Partial Least Squares Structural Equation Modeling (PLS-SEM) technique with the 'plssem' package in STATA 15 to identify factors associated with the awareness of video conferencing apps' security and privacy settings <ns0:ref type='bibr' target='#b87'>(Venturini and Mehmetoglu 2019)</ns0:ref>. PLS-SEM is a widely employed structural equation modeling approach that permits the estimation of complex relationships between latent variables in path models. It is especially useful for the exploration and development of theory, as well as when prediction is the primary purpose of a study, and it performs well with a small sample size <ns0:ref type='bibr' target='#b34'>(Hair, Howard, and Nitzl 2020)</ns0:ref>. Prior to running the path analysis, we examined the standardized loadings of each measurement item to identify any that should be excluded from the model, and we used the squared interfactor correlations and average variance extracted (AVE) from the path analysis to confirm discriminant validity. To address the issue of endogeneity, we employed the control variable strategy by conducting multiple multigroup comparison analyses across all demographic variables <ns0:ref type='bibr' target='#b46'>(Hult et al. 2018)</ns0:ref>. The complete code and data set can be found in our GitHub repository.</ns0:p></ns0:div>
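As an illustration of the weighting and classification steps described above, the following is a minimal Python sketch that assumes each KAB dimension has already been scored on a 0-10 scale (as in the item-scoring sketch earlier); it is not the authors' actual analysis code, which was written in STATA.

```python
# Minimal sketch of the KAB composite described above: knowledge, attitudes, and
# behaviors are combined with 3:2:5 weights and the result is scaled to 0-100.
# The 0-10 input scale and the example values are assumptions for illustration.

def kab_composite(knowledge: float, attitudes: float, behaviors: float) -> float:
    """Weighted KAB score, with each dimension given on a 0-10 scale."""
    weights = {"knowledge": 3, "attitudes": 2, "behaviors": 5}
    total_weight = sum(weights.values())
    weighted = (weights["knowledge"] * knowledge
                + weights["attitudes"] * attitudes
                + weights["behaviors"] * behaviors) / total_weight
    return weighted * 10  # rescale 0-10 -> 0-100

def kruger_kearney_category(score: float) -> str:
    """Classify a 0-100 awareness score using the Kruger and Kearney cut-offs."""
    if score >= 80:
        return "Good"
    if score >= 60:
        return "Average"
    return "Poor"

score = kab_composite(knowledge=6.0, attitudes=5.0, behaviors=3.0)
print(round(score, 1), kruger_kearney_category(score))  # 43.0 Poor
```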
<ns0:div><ns0:head>Results</ns0:head></ns0:div>
<ns0:div><ns0:head>Participant Demographic Information</ns0:head><ns0:p>The demographic information of all participants in this study is summarized in Table <ns0:ref type='table' target='#tab_0'>2</ns0:ref>. In terms of gender (50.81% male vs. 49.19% female), academic fields (46.91% STEM vs. 53.09% non-STEM), and position, the sample is quite balanced (57.33% tenured vs 42.67% non-tenured). The majority of participants are under 45 years old (80.7%), hold a PhD (60.5%), and have at least ten years of academic experience (63.84%).</ns0:p></ns0:div>
<ns0:div><ns0:head>Awareness of Video Conferencing Apps' Security and Privacy Settings</ns0:head><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref> depicts the distribution of the composite score for awareness of the security and privacy settings of video conferencing applications. According to Kruger and Kearney's classification, the overall score for all participants in this study (M = 44.27, SD = 16.06) falls into the 'Poor' level of awareness <ns0:ref type='bibr' target='#b51'>(Kruger and Kearney 2006)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Factors Associated with Awareness of Security and Privacy Settings</ns0:head><ns0:p>Several measurement items in our model, all from the exogenous variables, were found to have a low standardized loading score and thus had to be eliminated <ns0:ref type='bibr' target='#b38'>(Hair et al. 2012)</ns0:ref>. After multiple iterations of this validity test for the measurement model, 17 items with standardized loading scores exceeding 0.800 were retained. Table <ns0:ref type='table'>3</ns0:ref> presents the summary statistics for all remaining measurement items and the results of their validity tests. Importantly, as summarized in Table <ns0:ref type='table' target='#tab_2'>4</ns0:ref>, the AVE scores for each latent variable are greater than the squared interfactor correlation scores, confirming that the model's discriminant validity has been met <ns0:ref type='bibr' target='#b34'>(Hair, Howard, and Nitzl 2020;</ns0:ref><ns0:ref type='bibr' target='#b87'>Venturini and Mehmetoglu 2019)</ns0:ref>.</ns0:p><ns0:p>The results from PLS-SEM demonstrated a very high average R² value of 0.91, indicating a substantial coefficient of determination <ns0:ref type='bibr' target='#b35'>(Hair, Ringle, and Sarstedt 2011)</ns0:ref>, as well as a relative goodness-of-fit (GoF) value of 0.99, which meets the rule of thumb <ns0:ref type='bibr' target='#b42'>(Henseler and Sarstedt 2013;</ns0:ref><ns0:ref type='bibr' target='#b88'>Vinzi, Trinchera, and Amato 2010)</ns0:ref>. In addition, the path analysis demonstrated that all five hypothesized relationships are statistically significant. One unexpected result is the negative correlation between attitudes toward ICT for teaching and research and awareness of the security and privacy settings of video conferencing applications. Our subsequent multigroup comparison analysis revealed that, with the exception of two demographic factors, the results are quite consistent across the board. First, the structural effect of privacy concerns on the awareness of the security and privacy settings of video conferencing apps is significant only for STEM faculty and not for non-STEM faculty. Second, the negative structural effect of attitudes toward ICT for teaching and research on awareness of the security and privacy settings of video conferencing apps is significant only for those with more than ten years of academic experience and not for those with less.</ns0:p><ns0:p>Based on the findings, Figure <ns0:ref type='figure'>3</ns0:ref> illustrates the final model of the awareness of video conferencing apps' security and privacy settings among Saudi Arabian faculty in this study, while Table <ns0:ref type='table'>5</ns0:ref> summarizes the model's hypothesis test results.</ns0:p></ns0:div>
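For readers unfamiliar with the discriminant validity check mentioned above (each construct's AVE must exceed its squared correlations with every other construct, i.e., the Fornell-Larcker criterion), here is a minimal Python sketch; the loadings and the correlation matrix below are made-up placeholders rather than the study's estimates.

```python
import numpy as np

# Minimal sketch of the discriminant validity check: the average variance extracted
# (AVE) of each latent variable, computed here as the mean of its squared
# standardized loadings, must exceed its squared correlations with the other
# latent variables. All values below are illustrative placeholders.

loadings = {
    "PSPA": [0.95, 0.93, 0.97],   # placeholder standardized loadings of retained items
    "Fam":  [0.91, 0.87],
}
latent_corr = np.array([[1.00, 0.18],
                        [0.18, 1.00]])  # placeholder interfactor correlations
names = list(loadings)

ave = {k: float(np.mean(np.square(v))) for k, v in loadings.items()}
sq_corr = np.square(latent_corr)

for i, a in enumerate(names):
    for j, b in enumerate(names):
        if i != j and ave[a] <= sq_corr[i, j]:
            print(f"Discriminant validity issue: AVE({a}) <= r^2({a},{b})")

print({k: round(v, 3) for k, v in ave.items()})
```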
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>This study investigates Saudi Arabian faculty's awareness of the security and privacy settings of video conferencing apps, especially Blackboard Collaborate Ultra, which they used to teach during the COVID-19 pandemic. One of the study's key findings is that, in general, faculty in Saudi Arabia still have poor security and privacy awareness when using video conferencing apps. This may be understandable considering that most of them only began using this new technology on a daily basis because of the pandemic (Ali Alammary, Alshaikh, and Alhogail 2021). As evidenced by earlier studies, the use of Blackboard and subsequently Blackboard Collaborate Ultra in Saudi Arabia was relatively low prior to the pandemic (Al <ns0:ref type='bibr' target='#b0'>Meajel and Sharadgah 2018;</ns0:ref><ns0:ref type='bibr' target='#b82'>Tawalbeh 2017</ns0:ref>). This finding is consistent with a previous study's conclusion that general awareness of cybersecurity practices in Saudi Arabia is still below the desired level (Ali Alammary, Alshaikh, and Alhogail 2021). Therefore, additional efforts must be made through multiple strategies, not only to raise awareness but also to establish a cybersecurity culture <ns0:ref type='bibr' target='#b10'>(Alshaikh 2020)</ns0:ref>. Establishing support groups called 'cyber champions' to raise academic privacy awareness and influence faculty behavior toward adopting cybersecurity practices is an example of a proposed strategy to build a cybersecurity culture. Multiple studies have found that using support groups and peers to modify cybersecurity behavior is an effective method <ns0:ref type='bibr' target='#b12'>(Alshaikh and Adamson 2021;</ns0:ref><ns0:ref type='bibr' target='#b25'>Cram et al. 2019;</ns0:ref><ns0:ref type='bibr' target='#b30'>Guo et al. 2011)</ns0:ref>. Universities could employ this strategy to educate their academic community about the significance of maximizing the security and privacy settings within video conferencing apps.</ns0:p><ns0:p>The effect of perceived security policy awareness on awareness of security and privacy settings in video conferencing apps is the strongest among the five latent exogenous variables examined in this study. This particular finding is not unexpected. Although this is not always the case <ns0:ref type='bibr' target='#b81'>(Tariq, Brynielsson, and Artman 2014)</ns0:ref>, a higher perceived level of security awareness typically results in enhanced security and privacy practices <ns0:ref type='bibr' target='#b55'>(Lebek et al. 2014;</ns0:ref><ns0:ref type='bibr' target='#b72'>Pratama and Firmansyah 2021;</ns0:ref><ns0:ref type='bibr' target='#b47'>Hwang et al. 2021)</ns0:ref>. Therefore, it is only natural that faculty with a greater understanding of their organization's security policies will be more willing to invest time in learning the security and privacy settings for any app they use, including video conferencing apps.</ns0:p><ns0:p>Familiarity with the video conferencing app itself was found to have the second largest positive effect on awareness of the security and privacy settings of the app. Given the abundance of available applications, each with its own features and configurations, familiarity with the app in use is crucial, as predicted when developing a new construct for this variable. Consequently, those unfamiliar with the video conferencing app in use may be unaware of all available security and privacy settings. 
This finding is also consistent with findings from other studies regarding familiarity with concepts, technical terms, or security-related systems that can aid in increasing individuals' security awareness <ns0:ref type='bibr' target='#b76'>(Schmidt et al. 2008;</ns0:ref><ns0:ref type='bibr'>Zwilling et al. 2022)</ns0:ref>. Fortunately, this is a relatively straightforward issue to address, for example with adequate technical support to educate faculty on the subject.</ns0:p><ns0:p>Digital literacy was found to have a moderately positive impact on the awareness of security and privacy settings in video conferencing applications. A straightforward explanation is that individuals with a higher level of digital literacy are able to navigate all security and privacy settings independently. As a result, individuals with a higher level of digital literacy tend to have a heightened security and privacy awareness, which is consistent with the findings of a number of previous studies <ns0:ref type='bibr' target='#b75'>(Sasvári, Nemeslaki, and Rauch 2015;</ns0:ref><ns0:ref type='bibr' target='#b67'>Nemeslaki and Sasvari 2015;</ns0:ref><ns0:ref type='bibr' target='#b26'>Cranfield et al. 2020)</ns0:ref>.</ns0:p><ns0:p>Privacy concerns were identified as the fourth exogenous variable positively influencing the awareness of security and privacy settings in video conferencing applications. This result is consistent with the extensive literature on the relationship between privacy concerns and security awareness in general <ns0:ref type='bibr' target='#b23'>(Chung et al. 2021;</ns0:ref><ns0:ref type='bibr' target='#b78'>Siponen 2001)</ns0:ref>. Unlike the previous three exogenous variables, however, this variable has a significant positive effect only on STEM faculty. In other words, the effect is essentially nonexistent among faculty members in the social sciences, arts, humanities, health, and medical sciences. This result could be explained by the greater familiarity of STEM faculty members with ICT in general, including its benefits and risks. According to a separate study, STEM faculty were more aware of cybersecurity threats like phishing than non-STEM faculty <ns0:ref type='bibr' target='#b27'>(Diaz, Sherman, and Joshi 2020)</ns0:ref>. Consequently, they are more aware of the security and privacy settings of all applications, including video conferencing apps, than their non-STEM counterparts.</ns0:p><ns0:p>Unlike the other factors, attitudes toward ICT for teaching and research were found to have a negative impact on video conferencing app users' awareness of security and privacy settings. The fact that this finding is significant only among participants with more than ten years of teaching experience may point to an intriguing explanation. Senior faculty appear to be accustomed to whatever ICT solutions they utilized prior to the COVID-19 pandemic, which were probably not video conferencing applications. Having to learn something new in order to perform a task they are already very familiar with may be preventing them from fully utilizing this new technology, particularly in terms of its security and privacy settings. After all, resistance to change, whether among individuals in general <ns0:ref type='bibr' target='#b15'>(Audia and Brion 2007)</ns0:ref> or among faculty in particular <ns0:ref type='bibr' target='#b64'>(McCrickerd 2012;</ns0:ref><ns0:ref type='bibr' target='#b79'>Tallvid 2016)</ns0:ref>, is not something new. The good news is that despite being statistically significant, this variable has the smallest effect of all exogenous variables in this model. 
As a result, focusing on the other variables that have a positive effect may be enough to compensate for it.</ns0:p><ns0:p>Other than academic field and teaching experience, no statistically significant differences in age, gender, educational attainment, or tenure-track status were observed. On the one hand, this suggests that the endogeneity issue is largely addressed in this model, which strengthens the reliability of the findings. On the other hand, the relatively similar scores across demographic groups can be viewed as a positive development. It indicates equality in terms of security and privacy awareness among Saudi Arabian faculty, as is also the case in some countries <ns0:ref type='bibr' target='#b32'>(Hadlington, Binder, and Stanulewicz 2020;</ns0:ref><ns0:ref type='bibr' target='#b73'>Pratama, Firmansyah, and Rahma 2022)</ns0:ref>, whereas some other countries continue to demonstrate inequality <ns0:ref type='bibr' target='#b62'>(McCormac et al. 2017;</ns0:ref><ns0:ref type='bibr'>Zwilling et al. 2022)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In this study, we used the Knowledge-Attitudes-Behaviors (KAB) model to determine the actual awareness of video conferencing apps' security and privacy settings among faculty in Saudi Arabia, focusing on Blackboard Collaborate Ultra, the most widely used video conferencing app in the country. We discovered that the average score falls into the 'Poor' category (mean = 44.27, SD = 16.06), which is not surprising considering that many of them only use this new technology on a daily basis because of the pandemic.</ns0:p><ns0:p>In addition, based on the results of the subsequent PLS-SEM analysis, we found that all five latent variables in our model have significant relationships with Saudi Arabian faculty members' awareness of the security and privacy settings of video conferencing apps. In particular, perceived security policy awareness has the greatest impact, followed by familiarity with the video conferencing app's platform and digital literacy. Moreover, perceived privacy concerns are only significant among STEM faculty, and surprisingly, attitudes toward the use of ICT for teaching have a significant, albeit small, negative impact among senior faculty with more than ten years of teaching experience.</ns0:p><ns0:p>This study lays the groundwork for future research and interventions that aim to increase user awareness of security and privacy concerns when using video conferencing apps for educational and research purposes. Given the rapid adoption of video conferencing apps as a result of distance learning in response to the COVID-19 pandemic, it is becoming increasingly important to address this issue. Blackboard Collaborate Ultra, despite being the most applicable in the Saudi Arabian context, is not one of the most widely used video conferencing applications in the world. There are alternative applications such as Zoom, Google Meet, Microsoft Teams, Skype, and VooV Meeting, among others. There may be some technical differences in their security and privacy settings, so some of the findings of this study may not necessarily be applicable to the other apps, despite the fact that their primary functions are typically identical. Therefore, we recommend that similar research be conducted in other regions of the world to account for cultural and technical differences that may make users of video conferencing apps less aware of their security and privacy settings.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,199.12,525.00,345.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,393.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Participant demographic information.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Summary statistics and validity tests of the measurement items.</ns0:figDesc><ns0:table>
<ns0:row><ns0:cell>Variable</ns0:cell><ns0:cell>Item</ns0:cell><ns0:cell>Mean</ns0:cell><ns0:cell>SD</ns0:cell><ns0:cell>Standardized Loading (Initial)</ns0:cell><ns0:cell>Standardized Loading (Final)</ns0:cell><ns0:cell>Cronbach's α</ns0:cell><ns0:cell>DG rho</ns0:cell><ns0:cell>rho_A</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Awareness of Security and Privacy Settings</ns0:cell><ns0:cell>k</ns0:cell><ns0:cell>3.74</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.937</ns0:cell><ns0:cell>0.937</ns0:cell><ns0:cell>0.913</ns0:cell><ns0:cell>0.945</ns0:cell><ns0:cell>0.918</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell>a</ns0:cell><ns0:cell>3.82</ns0:cell><ns0:cell>0.77</ns0:cell><ns0:cell>0.942</ns0:cell><ns0:cell>0.942</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row>
<ns0:row><ns0:cell /><ns0:cell>b</ns0:cell><ns0:cell>2.79</ns0:cell><ns0:cell>0.69</ns0:cell><ns0:cell>0.889</ns0:cell><ns0:cell>0.889</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row>
<ns0:row><ns0:cell>Perceived Security Policy Awareness</ns0:cell><ns0:cell>pspa1</ns0:cell><ns0:cell>3.48</ns0:cell><ns0:cell>1.07</ns0:cell><ns0:cell>0.949</ns0:cell><ns0:cell>0.949</ns0:cell><ns0:cell>0.947</ns0:cell><ns0:cell>0.966</ns0:cell><ns0:cell>0.948</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell>pspa2</ns0:cell><ns0:cell>3.47</ns0:cell><ns0:cell>1.07</ns0:cell><ns0:cell>0.970</ns0:cell><ns0:cell>0.934</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row>
<ns0:row><ns0:cell /><ns0:cell>pspa3</ns0:cell><ns0:cell>3.61</ns0:cell><ns0:cell>1.06</ns0:cell><ns0:cell>0.934</ns0:cell><ns0:cell>0.970</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row>
<ns0:row><ns0:cell>Privacy Concerns</ns0:cell><ns0:cell>pc1 p</ns0:cell><ns0:cell>4.21</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.680</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell>pc2</ns0:cell><ns0:cell>4.30</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.728</ns0:cell><ns0:cell>0.846</ns0:cell><ns0:cell>0.772</ns0:cell><ns0:cell>0.866</ns0:cell><ns0:cell>0.790</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell>pc3</ns0:cell><ns0:cell>4.53</ns0:cell><ns0:cell>0.73</ns0:cell><ns0:cell>0.714</ns0:cell><ns0:cell>0.802</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row>
<ns0:row><ns0:cell /><ns0:cell>pc4</ns0:cell><ns0:cell>4.66</ns0:cell><ns0:cell>0.63</ns0:cell><ns0:cell>0.767</ns0:cell><ns0:cell>0.831</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row>
<ns0:row><ns0:cell /><ns0:cell>pc5 p</ns0:cell><ns0:cell>4.45</ns0:cell><ns0:cell>0.81</ns0:cell><ns0:cell>0.690</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell>pc6 p</ns0:cell><ns0:cell>4.28</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.663</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Attitudes toward ICT for Teaching & Research</ns0:cell><ns0:cell>att1</ns0:cell><ns0:cell>4.42</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell>0.878</ns0:cell><ns0:cell>0.892</ns0:cell><ns0:cell>0.884</ns0:cell><ns0:cell>0.928</ns0:cell><ns0:cell>0.891</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell>att2</ns0:cell><ns0:cell>4.26</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.884</ns0:cell><ns0:cell>0.902</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row>
<ns0:row><ns0:cell /><ns0:cell>att3</ns0:cell><ns0:cell>4.19</ns0:cell><ns0:cell>0.90</ns0:cell><ns0:cell>0.886</ns0:cell><ns0:cell>0.907</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row>
<ns0:row><ns0:cell /><ns0:cell>att4 p</ns0:cell><ns0:cell>4.18</ns0:cell><ns0:cell>0.93</ns0:cell><ns0:cell>0.699</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Digital Literacy</ns0:cell><ns0:cell>dl1 p</ns0:cell><ns0:cell>3.85</ns0:cell><ns0:cell>0.92</ns0:cell><ns0:cell>0.671</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell>dl2</ns0:cell><ns0:cell>4.21</ns0:cell><ns0:cell>0.77</ns0:cell><ns0:cell>0.783</ns0:cell><ns0:cell>0.823</ns0:cell><ns0:cell>0.787</ns0:cell><ns0:cell>0.875</ns0:cell><ns0:cell>0.796</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell>dl3</ns0:cell><ns0:cell>3.90</ns0:cell><ns0:cell>0.87</ns0:cell><ns0:cell>0.809</ns0:cell><ns0:cell>0.863</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row>
<ns0:row><ns0:cell /><ns0:cell>dl4 p</ns0:cell><ns0:cell>4.22</ns0:cell><ns0:cell>0.82</ns0:cell><ns0:cell>0.757</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell>dl5</ns0:cell><ns0:cell>3.98</ns0:cell><ns0:cell>0.86</ns0:cell><ns0:cell>0.771</ns0:cell><ns0:cell>0.823</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row>
<ns0:row><ns0:cell /><ns0:cell>dl6 p</ns0:cell><ns0:cell>4.26</ns0:cell><ns0:cell>0.79</ns0:cell><ns0:cell>0.765</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell>dl7 p</ns0:cell><ns0:cell>3.99</ns0:cell><ns0:cell>0.96</ns0:cell><ns0:cell>0.438</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row>
<ns0:row><ns0:cell>Familiarity with Blackboard Collaborate Ultra</ns0:cell><ns0:cell>fam1</ns0:cell><ns0:cell>3.87</ns0:cell><ns0:cell>1.18</ns0:cell><ns0:cell>0.857</ns0:cell><ns0:cell>0.914</ns0:cell><ns0:cell>0.741</ns0:cell><ns0:cell>0.884</ns0:cell><ns0:cell>0.764</ns0:cell></ns0:row>
<ns0:row><ns0:cell /><ns0:cell>fam2</ns0:cell><ns0:cell>4.25</ns0:cell><ns0:cell>1.19</ns0:cell><ns0:cell>0.790</ns0:cell><ns0:cell>0.865</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row>
<ns0:row><ns0:cell /><ns0:cell>fam3 p</ns0:cell><ns0:cell>2.32</ns0:cell><ns0:cell>1.51</ns0:cell><ns0:cell>0.542</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell></ns0:row>
</ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 4 (on next page)</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Discriminant validity</ns0:cell></ns0:row><ns0:row><ns0:cell>Note: diagonal elements are average variance extracted (AVE), off-diagonal elements are</ns0:cell></ns0:row><ns0:row><ns0:cell>squared interfactor correlation</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:71735:3:0:NEW 5 Jun 2022)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Discriminant validity</ns0:figDesc><ns0:table>
<ns0:row><ns0:cell>Variable</ns0:cell><ns0:cell>PSPA</ns0:cell><ns0:cell>PC</ns0:cell><ns0:cell>Att</ns0:cell><ns0:cell>DL</ns0:cell><ns0:cell>Fam</ns0:cell><ns0:cell>AoSPS</ns0:cell></ns0:row>
<ns0:row><ns0:cell>PSPA</ns0:cell><ns0:cell>0.905</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row>
<ns0:row><ns0:cell>PC</ns0:cell><ns0:cell>0.026</ns0:cell><ns0:cell>0.683</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row>
<ns0:row><ns0:cell>Att</ns0:cell><ns0:cell>0.065</ns0:cell><ns0:cell>0.077</ns0:cell><ns0:cell>0.810</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row>
<ns0:row><ns0:cell>DL</ns0:cell><ns0:cell>0.218</ns0:cell><ns0:cell>0.077</ns0:cell><ns0:cell>0.219</ns0:cell><ns0:cell>0.700</ns0:cell><ns0:cell /><ns0:cell /></ns0:row>
<ns0:row><ns0:cell>Fam</ns0:cell><ns0:cell>0.034</ns0:cell><ns0:cell>0.008</ns0:cell><ns0:cell>0.012</ns0:cell><ns0:cell>0.069</ns0:cell><ns0:cell>0.792</ns0:cell><ns0:cell /></ns0:row>
<ns0:row><ns0:cell>AoSPS</ns0:cell><ns0:cell>0.750</ns0:cell><ns0:cell>0.058</ns0:cell><ns0:cell>0.072</ns0:cell><ns0:cell>0.326</ns0:cell><ns0:cell>0.280</ns0:cell><ns0:cell>0.852</ns0:cell></ns0:row>
<ns0:row><ns0:cell cols='7'>Note: diagonal elements are average variance extracted (AVE), off-diagonal elements are squared interfactor correlation</ns0:cell></ns0:row>
</ns0:table></ns0:figure>
</ns0:body>
" | "June 6, 2022
Dear Editor,
We would like to thank you and the reviewer for further follow-up comments on the manuscript, which we have incorporated into the revised version.
Specifically, we responded to your and their comments and suggestions in blue directly below them.
We believe this manuscript is now ready to be published in PeerJ Computer Science.
Dr. Ahmad R. Pratama
Assistant Professor of Informatics
On behalf of all authors
Editor's Decision
I am glad to inform you that your manuscript is nearly ready for acceptance. However, as indicated by the reviewer I suggest you to add the future scope and limitations in the manuscript. Also, thoroughly check the language and try to improve the presentation of the article.
We added the future scope and limitations in the final paragraph of the manuscript (lines 507-515).
The most recent version of the manuscript has been copy edited by a colleague from the United States of America, resulting in some modifications, including to the paper's title.
Reviewer Martin Falk
Basic reporting
The revised version looks good to me
Thank you.
Conclusions
Please provide information on the limitations
The limitations have been added to the most recent version of the manuscript (line 507-512).
The text is not always smooth. The language can be improved.
See an example:
Privacy concerns are significantly associated with higher awareness only for STEM faculty, while attitudes toward ICT for teaching and learning is negatively associated with such awareness among faculty with more than 10 years of experience.
->
Privacy concerns are significantly associated with higher awareness only among teachers of STEM subjects, while attitudes towards ICT for teaching and learning are negatively associated with such awareness among teachers with more than 10 years of experience.
The current manuscript was copy edited by our colleague from the United States of America, resulting in some modifications to the entire paper, including the title.
Since we are using American English in this paper, we use the term “faculty” to refer to all teaching staff of a university or college, as opposed to the more common definition in British English, which refers to a group of related departments in some universities. Also, we avoided using the term “teacher” throughout the paper so as not to confuse it with elementary and secondary education. Nevertheless, here is the example sentence before and after revision:
Before:
Privacy concerns are significantly associated with higher awareness only for STEM faculty, while attitudes toward ICT for teaching and learning is negatively associated with such awareness among faculty with more than 10 years of experience.
After:
Privacy concerns are significantly associated with higher awareness only among STEM faculty members, while attitudes toward ICT for teaching and research are negatively associated with such awareness among senior faculty members with more than ten years of experience.
Experimental design
PLS-SEM is standard. All tests are reported
Thank you.
Validity of the findings
OK
Thank you.
" | Here is a paper. Please give your review comments after reading it. |
699 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Scientific software registries and repositories improve software findability and research transparency, provide information for software citations, and foster preservation of computational methods in a wide range of disciplines. Registries and repositories play a critical role by supporting research reproducibility and replicability, but developing them takes effort and few guidelines are available to help prospective creators of these resources. To address this need, the FORCE11 Software Citation Implementation Working Group convened a Task Force to distill the experiences of the managers of existing resources in setting expectations for all stakeholders. In this paper, we describe the resultant best practices which include defining the scope, policies, and rules that govern individual registries and repositories, along with the background, examples, and collaborative work that went into their development. We believe that establishing specific policies such as those presented here will help other scientific software registries and repositories better serve their users and their disciplines.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>management of its contents and allowed usages as well as clarifying positions on sensitive issues such as attribution.</ns0:p><ns0:p>In this paper, we expand on our pre-print 'Nine Best Practices for Research Software Registries and Repositories: A Concise Guide' (Task Force on Best Practices for Software <ns0:ref type='bibr'>Registries et al., 2020)</ns0:ref> to describe our best practices and their development.</ns0:p><ns0:p>Our guidelines are actionable, have a general purpose, and reflect the discussion of a community of more than 30 experts who handle over 15 resources (registries or repositories) across different scientific domains. Each guideline provides a rationale, suggestions, and examples based on existing repositories or registries. To reduce repetition, we refer to registries and repositories collectively as 'resources.'</ns0:p><ns0:p>The remainder of the paper is structured as follows. We first describe background and related efforts in Section 2, followed by the methodology we followed when structuring the discussion for creating the guidelines (Section 3). We then describe the nine best practices in Section 4, followed by a discussion (Section 5). Section 6 concludes the paper by summarizing current efforts to continue the adoption of the proposed practices. Those who contributed to the development of this paper are listed in Appendix A, and links to example policies are given in Appendix B. Appendix C provides supplemental information on our methods and an overview of main attributes of participating resources</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>BACKGROUND</ns0:head><ns0:p>In the last decade, much was written about a reproducibility crisis in science <ns0:ref type='bibr' target='#b7'>(Baker, 2016)</ns0:ref> stemming in large part from the lack of training in programming skills and the unavailability of computational resources used in publications <ns0:ref type='bibr' target='#b53'>(Merali, 2010;</ns0:ref><ns0:ref type='bibr' target='#b61'>Peng, 2011;</ns0:ref><ns0:ref type='bibr' target='#b57'>Morin et al., 2012)</ns0:ref>. On these grounds, national and international governments have increased their interest in releasing artifacts of publicly-funded research to the public (Office of Science and Technology Policy, 2016; Directorate-General for Research and Innovation (European Commission), 2018; Australian <ns0:ref type='bibr' target='#b6'>Research Council, 2018;</ns0:ref><ns0:ref type='bibr' target='#b14'>Chen et al., 2019</ns0:ref>; Ministère de l'Enseignement supérieur, de la Recherche et de l'Innovation, 2021), and scientists have appealed to colleagues in their field to release software to improve research transparency <ns0:ref type='bibr' target='#b73'>(Weiner et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b8'>Barnes, 2010;</ns0:ref><ns0:ref type='bibr' target='#b40'>Ince et al., 2012)</ns0:ref> and efficiency <ns0:ref type='bibr' target='#b33'>(Grosbol and Tody, 2010)</ns0:ref>. Open Science initiatives such as RDA and FORCE11 have emerged as a response to these calls for greater transparency and reproducibility. Journals introduced policies encouraging (or even requiring) that data and software be openly available to others <ns0:ref type='bibr'>(Editorial staff, 2019;</ns0:ref><ns0:ref type='bibr' target='#b24'>Fox et al., 2021)</ns0:ref>. New tools have been developed to facilitate depositing research data and software in a repository <ns0:ref type='bibr' target='#b9'>(Baruch, 2007;</ns0:ref><ns0:ref type='bibr' target='#b13'>CERN and OpenAIRE, 2013;</ns0:ref><ns0:ref type='bibr' target='#b18'>Di Cosmo and Zacchiroli, 2017;</ns0:ref><ns0:ref type='bibr' target='#b16'>Clyburne-Sherin et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b12'>Brinckman et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b69'>Trisovic et al., 2020)</ns0:ref> and consequently, make them citable so authors and other contributors gain recognition and credit for their work <ns0:ref type='bibr' target='#b64'>(Soito and Hwang, 2017;</ns0:ref><ns0:ref type='bibr' target='#b21'>Du et al., 2021)</ns0:ref>.</ns0:p><ns0:p>Support for disseminating research outputs has been proposed with <ns0:ref type='bibr'>FAIR and FAIR4RS</ns0:ref> principles that state shared digital artifacts, such as data and software, should be Findable, Accessible, Interoperable, and Reusable <ns0:ref type='bibr'>(Wilkinson et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b51'>Lamprecht et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b47'>Katz et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b15'>Chue Hong et al., 2021)</ns0:ref>. Conforming with the FAIR</ns0:p></ns0:div>
<ns0:div><ns0:head>3/29</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66090:1:1:NEW 28 Dec 2021)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science principles for published software <ns0:ref type='bibr' target='#b51'>(Lamprecht et al., 2020)</ns0:ref> requires facilitating its discoverability, preferably in domain-specific resources <ns0:ref type='bibr'>(Jiménez et al., 2017)</ns0:ref>. These resources should contain machine-readable metadata to improve the discoverability (Findable) and accessibility (Accessible) of research software through search engines or from within the resource itself. Furthering interoperability in FAIR is aided through the adoption of community standards e.g., schema.org <ns0:ref type='bibr' target='#b34'>(Guha et al., 2016)</ns0:ref> or the ability to translate from one resource to another. The CodeMeta initiative <ns0:ref type='bibr' target='#b46'>(Jones et al., 2017)</ns0:ref> achieves this translation by creating a 'Rosetta Stone' which maps the metadata terms used by each resource to a common schema. The CodeMeta schema 6 is an extension of schema.org which adds ten new fields to represent software specific metadata. To date, CodeMeta has been adopted for representing software metadata by many repositories. 7,8 As the usage of computational methods continues to grow, recommendations for improving research software have been proposed <ns0:ref type='bibr' target='#b66'>(Stodden et al., 2016)</ns0:ref> in many areas of science and software, as can be seen by the series of 'Ten Simple Rules' articles offered by PLOS <ns0:ref type='bibr' target='#b17'>(Dashnow et al., 2014)</ns0:ref>, sites such as AstroBetter, 9 courses to improve skills such as those offered by The Carpentries, 10 and attempts to measure the adoption of recognized best practices <ns0:ref type='bibr' target='#b62'>(Serban et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b70'>Trisovic et al., 2021)</ns0:ref>. Our quest for best practices complements these efforts by providing guides to the specific needs of research software registries and repositories.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>METHODOLOGY</ns0:head><ns0:p>The best practices presented here were developed by an international Task Force of the FORCE11 Software Citation Implementation Working Group (SCIWG). The Task Force was proposed in June 2018 by author Alice Allen, with the goal of developing a list of best practices for software registries and repositories.</ns0:p><ns0:p>Working Group members and a broader group of managers of domain-specific software resources formed the inaugural group. The resulting Task Force members were primarily managers and editors of resources from Europe, the United States, and Australia. Due to the range in time zones, the Task Force held two meetings seven hours apart, with the expectation that, except for the meeting chair, participants would attend one of the two meetings. We generally refer to two meetings on the same day with the singular 'meeting' in the discussions to follow.</ns0:p><ns0:p>Eighteen people attended the inaugural Task Force meeting in February 2019. Participants introduced themselves and their resources by providing some basic information (see Appendix C). The chair laid out the goal of the Task Force, and the group was invited to brainstorm to identify commonalities for building a list of best practices.</ns0:p><ns0:p>Participants also shared challenges they had faced in running their resources and policies they had enacted to manage these resources. The result of the brainstorming and discussion was a list of ideas collected in a common document.</ns0:p></ns0:div>
<ns0:div><ns0:p>The possibility of holding a workshop in the future was also raised at this meeting and was met with enthusiasm.</ns0:p><ns0:p>Starting in May 2019 and continuing through the rest of 2019, the Task Force met on the third Thursday of each month and followed an iterative process to discuss, add to, and group ideas; refine and clarify the ideas into different practices; and define the practices more precisely. It was clear from the outset that, though our resources have goals in common, they are also very diverse and would be best served by best practices that were descriptive rather than prescriptive. We reached consensus on whether a practice should be a best practice through discussion and informal voting.</ns0:p><ns0:p>Each best practice was given a title and a list of questions or needs that it addressed.</ns0:p><ns0:p>While the initial plan for holding two Task Force meetings on the same day each month was to follow a common agenda with independent discussions built upon the previous month's meeting, the later meeting instead benefited from the earlier discussion. For instance, if the early meeting developed a list of examples for one of the guidelines, the late meeting then refined and added to the list. Hence, discussions were only duplicated when needed, e.g., where there was no consensus in the early group, and often proceeded in different directions according to the group's expertise and interest. Though we had not anticipated this, we found that holding two meetings each month on the same day accelerated the work, as work done in the second meeting of the day generally continued rather than repeated work done in the first meeting.</ns0:p><ns0:p>The resulting consensus from the meetings produced a list of the most broadly applicable practices, which became the initial list of best practices participants drew from during a two-day workshop, funded by the Sloan Foundation and held at the University of Maryland, College Park, in November 2019 (Scientific Software Registry Collaboration Workshop). A goal of the workshop was to develop the final recommendations on best practices for repositories and registries to the FORCE11 SCIWG.</ns0:p><ns0:p>The workshop included participants from outside the Task Force, resulting in a broader set of contributions to the final list. In 2020, the Task Force made additional refinements to the best practices during virtual meetings and through online collaborative writing, producing the guidelines described in the next section. Appendix A lists the people who participated in these efforts.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>BEST PRACTICES FOR REPOSITORIES AND REGISTRIES</ns0:head></ns0:div>
<ns0:div><ns0:p>Adopting the practices described in this section will help document, guide, and preserve these resources, and put them in a stronger position to serve their disciplines, users, and communities. 11</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Provide a public scope statement</ns0:head><ns0:p>The landscape of research software is diverse and complex due to the overlap between scientific domains, the variety of technical properties and environments, and the additional considerations resulting from funding, authors' affiliation, or intellectual property. A scope statement clarifies the type of software contained in the repository or indexed in the registry. Precisely defining a scope, therefore, helps those users of the resource who are looking for software to better understand the results they obtained.</ns0:p><ns0:p>Moreover, given that many of these resources accept submission of software packages, providing a precise and accessible definition will help researchers determine whether they should register or deposit software, and curators by making clear what is out of scope for the resource. Overall, a public scope manages the expectations of the potential depositor as well as the software seeker. It informs both what the resource does and does not contain.</ns0:p><ns0:p>The scope statement should describe:</ns0:p><ns0:p>• What is accepted, and acceptable, based on criteria covering scientific discipline, technical characteristics, and administrative properties</ns0:p><ns0:p>• What is not accepted, i.e., characteristics that preclude their incorporation in the resource</ns0:p><ns0:p>• Notable exceptions to these rules, if any</ns0:p><ns0:p>Particular criteria of relevance include the scientific community being served and the types of software listed in the registry or stored in the repository, such as source code, compiled executables, or software containers. The scope statement may also include criteria that must be satisfied by accepted software, such as whether certain software quality metrics must be fulfilled or whether the software must be used in published research. Availability criteria can be considered, such as whether the code has to be publicly available, be in the public domain and/or have a license from a predefined set, or whether software registered in another registry or repository will be accepted.</ns0:p><ns0:p>An illustrative example of such a scope statement is the editorial policy 12 published by the Astrophysics Source Code Library (ASCL) <ns0:ref type='bibr' target='#b0'>(Allen et al., 2013)</ns0:ref>, which states that it includes only software source code used in published astronomy and astrophysics research articles, and specifically excludes software available only as a binary or web service. Though the ASCL's focus is on research documented in peer-reviewed journals, its policy also explicitly states that it accepts source code used in successful theses. Other examples of scope statements can be found in Appendix B.</ns0:p><ns0:p>11 Please note that the information provided in this paper does not constitute legal advice.</ns0:p><ns0:p>12 https://ascl.net/wordpress/submissions/editiorial-policy/</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Provide guidance for users</ns0:head><ns0:p>Users accessing a resource to search for entries and browse or retrieve the description(s) of one or more software entries have to understand how to perform such actions. Although this guideline potentially applies to many public online resources, especially research databases, the potential complexity of the stored metadata and the curation mechanisms can seriously impede the understandability and usage of software registries and repositories.</ns0:p><ns0:p>User guidance material may include:</ns0:p><ns0:p>• How to perform common user tasks, such as searching the resource, or accessing the details of an entry</ns0:p><ns0:p>• Answers to questions that are often asked or can be anticipated, e.g., with Frequently Asked Questions or tips and tricks pages</ns0:p><ns0:p>• Who to contact for questions or help A separate section in these guidelines on the Conditions of use policy covers terms of use of the resource and how best to cite records in a resource and the resource itself.</ns0:p><ns0:p>Guidance for users who wish to contribute software is covered in the next section, Provide guidance to software contributors. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>• Curation process, if any</ns0:p><ns0:p>• Procedures for updates (e.g., who can do it, when it is done, how is it done)</ns0:p><ns0:p>Topics to consider when writing a contributor policy include whether the author(s) of a software entry will be contacted if the contributor is not also an author and whether contact is a condition or side-effect of the submission. Additionally, a contributor policy should specify how persistent identifiers are assigned (if used) and should state that depositors must comply with all applicable laws and not be intentionally malicious.</ns0:p><ns0:p>Such material is provided in resources such as the Computational Infrastructure for Geodynamics <ns0:ref type='bibr' target='#b38'>(Hwang and Kellogg, 2017)</ns0:ref> software contribution checklist 15 and the CoMSES Net Computational Model Library <ns0:ref type='bibr' target='#b43'>(Janssen et al., 2008</ns0:ref>) model archival tutorial. 16 Additional examples of guidance for software contributors can be found in Appendix B.</ns0:p></ns0:div>
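Guidance for users often covers programmatic access as well as the web interface (the bio.tools API user guide cited in Table 1 is one such example). As a sketch of what such guidance might document, the snippet below queries a hypothetical registry search endpoint and prints the matching entries; the base URL, query parameters, and response fields are assumptions made for illustration and would be replaced by the resource's own, documented API.

```python
import json
import urllib.parse
import urllib.request

# Hypothetical search endpoint; a real resource would document its own API.
BASE_URL = "https://registry.example.org/api/entries"

def search_registry(query: str, page: int = 1) -> dict:
    """Return one page of search results for `query` (assumes a JSON response)."""
    params = urllib.parse.urlencode({"q": query, "page": page})
    with urllib.request.urlopen(f"{BASE_URL}?{params}") as response:
        return json.load(response)

if __name__ == "__main__":
    results = search_registry("sequence alignment")
    # Assumed response shape: {"entries": [{"identifier": ..., "name": ...}, ...]}
    for entry in results.get("entries", []):
        print(entry.get("identifier"), "-", entry.get("name"))
```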
<ns0:div><ns0:head n='4.4'>Establish an authorship policy</ns0:head><ns0:p>Because research software is often a research product, it is important to report authorship accurately, as it allows for proper scholarly credit and other types of attributions <ns0:ref type='bibr' target='#b63'>(Smith et al., 2016)</ns0:ref>. However, even though authorship should be defined at the level of a given project, it can prove complicated to determine <ns0:ref type='bibr' target='#b5'>(Alliez et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Roles in software development can widely vary as contributors change with time and versions, and contributions are difficult to gauge beyond the 'commit,' giving rise to complex situations. In this context, establishing a dedicated policy ensures that people are given due credit for their work. The policy also serves as a document that administrators can turn to in case disputes arise and allows proactive problem mitigation, rather than having to resort to reactive interpretation. Furthermore, having an authorship policy mirrors similar policies by journals and publishers and thus is part of a larger trend. Note that the authorship policy will be communicated at least partially to users through guidance provided to software contributors. Nevertheless, because the citation requirements for each piece of research software is under the authority of its owners, particular care should be taken to maintain the consistency of this policy with the citation policies for the registry or repository.</ns0:p><ns0:p>The authorship policy should specify:</ns0:p><ns0:p>• How authorship is determined e.g., a stated criteria by the contributors and/or the resource</ns0:p><ns0:p>• Policies around making changes to authorship</ns0:p><ns0:p>• The conflict resolution processes adopted to handle authorship disputes</ns0:p><ns0:p>When defining an authorship policy, resource maintainers should take into consideration whether those who are not coders, such as software testers or documentation maintainers, will be identified or credited as authors, as well as criteria for ordering Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>the list of authors in cases of multiple authors, and how the resource handles large numbers of authors and group or consortium authorship. Resources may also include guidelines about how changes to authorship will be handled so each author receives proper credit for their contribution. Guidelines can help facilitate determining every contributors' role. In particular, the use of a credit vocabulary, such as the Contributor Roles Taxonomy <ns0:ref type='bibr' target='#b4'>(Allen et al., 2019)</ns0:ref>, to describe authors' contributions should be considered for this purpose. 17 An example of authorship policy is provided in the Ethics Guidelines 18 and the submission guide authorship section 19 of the Journal of Open Source Software <ns0:ref type='bibr' target='#b48'>(Katz et al., 2018)</ns0:ref>, which provides rules for inclusion in the authors list. Additional examples of authorship policies can be found in Appendix B.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.5'>Share your metadata schema</ns0:head><ns0:p>The structure and semantics of the information stored in registries and repositories is sometimes complex, which can hinder the clarity, discovery, and reuse of the entries included in these resources. Publicly posting the metadata schema used for the entries helps individual and organizational users interested in a resource's information understand the structure and properties of the deposited information. The metadata structure helps to inform users how to interact with or ingest records in the resource.</ns0:p><ns0:p>A metadata schema mapped to other schemas and an API specification can improve the interoperability between registries and repositories. This practice should specify:</ns0:p><ns0:p>• The schema used and its version number. If a standard or community schema, such as CodeMeta <ns0:ref type='bibr' target='#b46'>(Jones et al., 2017)</ns0:ref> or schema.org <ns0:ref type='bibr' target='#b34'>(Guha et al., 2016)</ns0:ref> is used, the resource should reference its documentation or official website. If a custom schema is used, formal documentation such as a description of the schema and/or a data dictionary should be provided.</ns0:p><ns0:p>• Expected metadata when submitting software, including which fields are required and which are optional, and the format of the content in each field.</ns0:p><ns0:p>To improve the readability of the metadata schema and facilitate its translation to other standards, resources may provide a mapping (from the metadata schema used in the resource) to published standard schemas, through the form of a 'cross-walk' (e.g., the CodeMeta cross-walk 20 ) and include an example entry from the repository that illustrates all the fields of the metadata schema. For instance, extensive documentation 21 is available for the biotoolsSchema <ns0:ref type='bibr' target='#b42'>(Ison et al., 2021)</ns0:ref> format, which is used in the bio.tools registry. Another example is the OntoSoft vocabulary, 22 used by the OntoSoft registry <ns0:ref type='bibr' target='#b30'>(Gil et al., 2015</ns0:ref><ns0:ref type='bibr' target='#b29'>(Gil et al., , 2016) )</ns0:ref> </ns0:p></ns0:div>
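The 'cross-walk' mentioned above can be illustrated with a small mapping from a resource's internal field names to CodeMeta/schema.org terms. The internal field names below are invented for the example, and the mapping is deliberately incomplete; a real crosswalk, such as the CodeMeta crosswalk referenced earlier, covers the full schema and documents any fields that cannot be mapped.

```python
# Illustrative crosswalk from invented internal metadata fields to
# CodeMeta / schema.org terms; not a complete or authoritative mapping.
CROSSWALK = {
    "title": "name",
    "summary": "description",
    "repo_url": "codeRepository",
    "licence": "license",
    "language": "programmingLanguage",
}

def to_codemeta(internal_record: dict) -> dict:
    """Translate an internal record into a CodeMeta-style dictionary.

    Unmapped fields are kept under a separate key (our own convention here)
    so that no information is silently dropped during the conversion.
    """
    translated = {
        "@context": "https://doi.org/10.5063/schema/codemeta-2.0",
        "@type": "SoftwareSourceCode",
    }
    unmapped = {}
    for field, value in internal_record.items():
        target = CROSSWALK.get(field)
        if target is not None:
            translated[target] = value
        else:
            unmapped[field] = value
    if unmapped:
        translated["x-unmapped"] = unmapped
    return translated

example = {
    "title": "ExampleTool",
    "summary": "Toy internal record.",
    "repo_url": "https://example.org/example-tool",
    "curator": "A. Person",   # no CodeMeta mapping in this toy crosswalk
}
print(to_codemeta(example))
```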
<ns0:div><ns0:head n='4.6'>Stipulate conditions of use</ns0:head><ns0:p>The conditions of use document the terms under which users may use the contents provided by a website. In the case of software registries and repositories, these conditions should specifically state how the metadata regarding the entities of a resource can be used, attributed, and/or cited, and provide information about licensing. This policy can forestall potential liabilities and difficulties that may arise, such as claims of damage for misinterpretation or misapplication of metadata. In addition, the conditions of use should clearly state how the metadata can and cannot be used, including for commercial purposes and in aggregate form. This document should include:</ns0:p><ns0:p>• Legal disclaimers about the responsibility and liability borne by the registry or repository • License and copyright information, both for individual entries and for the registry or repository as a whole</ns0:p><ns0:p>• Conditions for the use of the metadata, including prohibitions, if any • Preferred format for citing software entries • Preferred format for attributing or citing the resource itself When writing conditions of use, resource maintainers might consider what license governs the metadata, if licensing requirements apply for findings and/or derivatives of the resource, and whether there are differences in the terms and license for commercial versus noncommercial use. Restrictions on the use of the metadata may also be included, as well as a statement to the effect that the registry or repository makes no guarantees about completeness and is not liable for any damages that could arise from the use of the information. Technical restrictions, such as conditions of use of the API (if one is available), may also be mentioned.</ns0:p><ns0:p>Conditions of use can be found for instance for DOE CODE <ns0:ref type='bibr' target='#b23'>(Ensor et al., 2017)</ns0:ref>, which in addition to the general conditions of use 23 specifies that the rules for usage of the hosted code 24 are defined by their respective licenses. Additional examples of conditions of use policies can be found in Appendix B.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.7'>State a privacy policy</ns0:head><ns0:p>Privacy policies define how personal data about users are stored, processed, exchanged or removed. Having a privacy policy demonstrates a strong commitment to the privacy of users of the registry or repository and allows the resource to comply with the legal requirement of many countries in addition to those a home institution and/or funding agencies may impose. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The privacy policy of a resource should describe:</ns0:p><ns0:p>• What information is collected and how long it is retained • How the information, especially any personal data, is used • Whether tracking is done, what is tracked, and how (e.g., Google Analytics)</ns0:p><ns0:p>• Whether cookies are used When writing a privacy policy, the specific personal data which are collected should be detailed, as well as the justification for their resource, and whether these data are sold and shared. Additionally, one should list explicitly the third-party tools used to collect analytic information and potentially reference their privacy policies. If users can receive emails as a result of visiting or downloading content, such potential solicitations or notifications should be announced. Measures taken to protect users' privacy and whether the resource complies with the European Union Directive on General Data Protection Regulation 25 (GDPR) or other local laws, if applicable, should be explained. 26 As a precaution, the statement can reserve the right to make changes to this privacy policy. Finally, a mechanism by which users can request the removal of such information should be described.</ns0:p><ns0:p>For example, the SciCrunch's <ns0:ref type='bibr' target='#b31'>(Grethe et al., 2014)</ns0:ref> privacy policy 27 details what kind of personal information is collected, how it is collected, and how it may be reused, including by third-party websites through the use of cookies. Additional examples of privacy policies can be found in Appendix B.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.8'>Provide a retention policy</ns0:head><ns0:p>Many software registries and repositories aim to facilitate the discovery and accessibility of the objects they describe, e.g., enabling search and citation, by making the corresponding records permanently accessible. However, for various reasons, even in such cases maintainers and curators may have to remove records. Common examples include removing entries that are outdated, no longer meet the scope of the registry, or are found to be in violation of policies. The resource should therefore document retention goals and procedures so that users and depositors are aware of them.</ns0:p><ns0:p>The retention policy should describe:</ns0:p><ns0:p>• The length of time metadata and/or files are expected to be retained</ns0:p><ns0:p>• Under what conditions metadata and/or files are removed • Who has the responsibility and ability to remove information • Procedures to request that metadata and/or files be removed</ns0:p><ns0:p>The policy should take into account whether best practices for persistent identifiers are followed, including resolvability, retention, and non-reuse of those identifiers. The 25 https://gdpr-info.eu/ 26 In the case of GDPR, the regulation applies to all European user personal data, even if the resource is not located in Europe. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science retention time provided by the resource should not be too prescriptive (e.g., 'for the next 10 years'), but rather it should fit within the context of the underlying organization(s) and its funding. This policy should also state who is allowed to edit metadata, delete records, or delete files, and how these changes are performed to preserve the broader consistency of the registry. Finally, the process by which data may be taken offline and archived as well as the process for its possible retrieval should be thoroughly documented.</ns0:p><ns0:p>As an example, Bioconductor <ns0:ref type='bibr' target='#b28'>(Gentleman et al., 2004</ns0:ref>) has a deprecation process through which software packages are removed if they cannot be successfully built or tested, or upon specific request from the package maintainer. Their policy 28 specifies who initiates this process and under which circumstances, as well as the successive steps that lead to the removal of the package. Additional examples of retention policies can be found in Appendix B.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.9'>Disclose your end-of-life policy</ns0:head><ns0:p>Despite their usefulness, the long-term maintenance, sustainability, and persistence of online scientific resources remains a challenge, and published web services or databases can disappear after a few years <ns0:ref type='bibr' target='#b71'>(Veretnik et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b50'>Kern et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Sharing a clear end-of-life policy increases trust in the community served by a registry or repository. It demonstrates a thoughtful commitment to users by informing them that provisions for the resource have been considered should the resource close or otherwise end its services for its described artifacts. Such a policy sets expectations and provides reassurance as to how long the records within the registry will be findable and accessible in the future.</ns0:p><ns0:p>This policy should describe:</ns0:p><ns0:p>• Under what circumstances the resource might end its services</ns0:p><ns0:p>• What consequences would result from closure</ns0:p><ns0:p>• What will happen to the metadata and/or the software artifacts contained in the resource in the event of closure</ns0:p><ns0:p>• If long-term preservation is expected, where metadata and/or software artifacts will be migrated for preservation</ns0:p><ns0:p>• How a migration will be funded</ns0:p><ns0:p>Publishing an end-of-life policy is an opportunity to consider, in the event a resource is closed, whether the records will remain available, and if so, how and for whom, and under which conditions, such as archived status or 'read-only.' The restrictions applicable to this policy, if any, should be considered and detailed. Establishing a formal agreement or memorandum of understanding with another registry, repository, or institution to receive and preserve the data or project, if applicable, might help to prepare for such a liability. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Examples of such policies include the Zenodo end-of-life policy, 29 which states that if Zenodo ceases its services, the data hosted in the resource will be migrated and the DOIs provided would be updated to resolve to the new location (currently unspecified). Additional examples of end-of-life policies can be found in Appendix B.</ns0:p><ns0:p>A summary of the practices presented in this section can be found in Table <ns0:ref type='table' target='#tab_5'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>DISCUSSION</ns0:head><ns0:p>The best practices described above serve as a guide for repositories and registries to provide better service to their users, ranging from software developers and researchers to publishers and search engines, and enable greater transparency about the operation of their described resources. Implementing our practices provides users with significant information about how different resources operate, while preserving important institutional knowledge, standardizing expectations, and guiding user interactions.</ns0:p><ns0:p>For instance, a public scope statement and guidance for users may directly impact the ease of use and, thus, the popularity of the repository. Usability is enhanced when resources have tools with a simple design and unambiguous commands as well as infographic guides or video tutorials to further ease the learning curve for new users.</ns0:p><ns0:p>The guidance for software contributions, conditions of use, and sharing the metadata schema used may help eager users contribute new functionality or tools, which could also help in creating a community around a resource. A privacy policy has become a requirement across geographic boundaries and legal jurisdictions. An authorship policy is critical in facilitating collaborative work among researchers and minimizing the chances for disputes. Finally, retention and end-of-life policies increase the trust and integrity of the repository service. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>9. Disclose end-of-life policy Informs both users and depositors of how long the records within the resource will be findable and accessible in the future.</ns0:p><ns0:p>Example: Zenodo end-of-life policy.</ns0:p><ns0:p>• Circumstances under which the resource might end its services • What consequences would result from closure • What will happen to the metadata and/or the software artifacts contained in the resource in the event of closure • If long-term preservation is expected, where metadata and/or software artifacts will be migrated for preservation; how a migration will be funded</ns0:p><ns0:p>Policies affecting a single community or domain were deliberately omitted when developing the best practices. First, an exhaustive list would have been a barrier to adoption and not applicable to every repository since each has a different perspective, audience, and motivation that drives policy development for their organization. Second, best practices that regulate the content of a resource are typically domain-specific to the artifact and left to resources to stipulate based on their needs. Participants in the 2019 Scientific Software Registry Collaboration Workshop were surprised to find that only four metadata elements were shared by all represented resources. 30 The diversity of our resources precludes prescriptive requirements, such as requiring specific metadata for records, so these were also deliberately omitted in the proposed best practices.</ns0:p><ns0:p>Hence, we focused on broadly applicable practices considered important by various resources. For example, amongst the participating registries and repositories, very few had codes of conduct that govern the behavior of community members. Codes of conduct are warranted if resources are run as part of a community, especially if comments and reviews are solicited for deposits. In contrast, a code of conduct would be less useful for resources whose primary purpose is to make software and software metadata available for reuse. However, this does not negate their importance and their inclusion as best practices in other arenas concerning software.</ns0:p><ns0:p>As noted by the FAIR4RS movement, software is different than data, motivating the need for a separate effort to address software resources <ns0:ref type='bibr' target='#b51'>(Lamprecht et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b49'>Katz et al., 2016)</ns0:ref>. Even so, there are some similarities, and our effort complements and aligns well with recent guidelines developed in parallel to increase the transparency, responsibility, user focus, sustainability, and technology of data repositories. For example, both the TRUST Principles <ns0:ref type='bibr' target='#b52'>(Lin et al., 2020)</ns0:ref> Manuscript to be reviewed the TRUST principles suggest. Inward-facing policies, such as documenting internal workflows and practices, are generally good in reducing operational risks, but internal management practices were considered out of scope of our guidelines.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Figure <ns0:ref type='figure' target='#fig_4'>1</ns0:ref> shows the number of resources that support (partially or in their totality) each best practice. Though we see the proposed best practices as critical, many of the repositories that have actively participated in the discussions (14 resources in total) have yet to implement every one of them. 
We have observed that the first three practices (providing a public scope statement and guidance for users and for software contributors) have the widest adoption, while the retention, end-of-life, and authorship policies have the least. Understanding the lag in the implementation across all of the best practices requires further engagement with the community.</ns0:p><ns0:p>Improving the adoption of our guidelines is one of the goals of SciCodes, 31 a recent consortium of scientific software registries and repositories. SciCodes aims to build a community to continue the dialogue and share information between domains, including sharing of tools and ideas. SciCodes has also prioritized improving software citation (complementary to the efforts of the FORCE11 SCIWG) and tracking the impact of metadata and interoperability. In addition, SciCodes aims to understand barriers to implementing policies, ensure consistency between various best practices, and continue advocacy for software support by continuing dialogue between registries, repositories, researchers, and other stakeholders.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>CONCLUSIONS</ns0:head><ns0:p>The dissemination and preservation of research material, where repositories and registries play a key role, lies at the heart of scientific advancement. This paper introduces nine best practices for research software registries and repositories. These practices are an outcome of a Task Force of the FORCE11 Software Citation Implementation Working Group and reflect the discussion, collaborative experiences, and consensus of over 30 experts and 15 resources. The best practices are non-prescriptive, broadly applicable, and include examples and guidelines for their adoption by a community. They specify establishing the working domain (scope) and guidance for both users and software contributors, address legal concerns with privacy, use, and authorship policies, enhance usability by encouraging metadata sharing, and set expectations with retention and end-of-life policies. However, we believe additional work is needed to raise awareness and adoption across resources from different scientific disciplines. Through the SciCodes consortium, our goal is to continue implementing these practices more uniformly in our own registries and repositories and reduce the burdens of adoption. In addition to completing the adoption of these best practices, SciCodes will address topics such as tracking the impact of good metadata, improving interoperability between registries, and making our metadata discoverable by search engines and services such as Google Scholar, ORCID, and discipline indexers.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>23 https://www.osti.gov/disclaim 24 https://www.osti.gov/doecode/faq#are-there-restrictions</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>27 https://scicrunch.org/page/privacy</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>28 https://bioconductor.org/developers/package-end-of-life/</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Number of resources supporting each best practice, out of 14 resources.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>31 http://scicodes.net</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>and available in both machine-readable and 17 http://credit.niso.org/ 18 https://joss.theoj.org/about#ethics 19 https://joss.readthedocs.io/en/latest/submitting.html#authorship 20 https://codemeta.github.io/crosswalk/ 21 https://biotoolsschema.readthedocs.io/en/latest/ 22 http://ontosoft.org/software</ns0:figDesc><ns0:table><ns0:row><ns0:cell>9/29</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66090:1:1:NEW 28 Dec 2021) Manuscript to be reviewed Computer Science human readable formats. Additional examples of metadata schemas can be found in Appendix B.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Summary of the best practices with recommendations and examples. PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66090:1:1:NEW 28 Dec 2021)</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Practice, description and examples</ns0:cell><ns0:cell>Recommendations</ns0:cell></ns0:row><ns0:row><ns0:cell>1. Provide a public scope statement</ns0:cell><ns0:cell>• What is accepted, and acceptable, based on criteria</ns0:cell></ns0:row><ns0:row><ns0:cell>Informs both software depositor and</ns0:cell><ns0:cell>covering scientific discipline, technical</ns0:cell></ns0:row><ns0:row><ns0:cell>resource seeker what the collection does</ns0:cell><ns0:cell>characteristics, and administrative properties</ns0:cell></ns0:row><ns0:row><ns0:cell>and does not contain.</ns0:cell><ns0:cell>• What is not accepted, i.e. characteristics that</ns0:cell></ns0:row><ns0:row><ns0:cell>Example: ASCL editorial policy.</ns0:cell><ns0:cell>preclude their incorporation in the resource</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>• Notable exceptions to these rules, if any</ns0:cell></ns0:row><ns0:row><ns0:cell>2. Provide guidance for users</ns0:cell><ns0:cell>• How to perform common user tasks, like searching</ns0:cell></ns0:row><ns0:row><ns0:cell>Helps users accessing a resource</ns0:cell><ns0:cell>for collection, or accessing the details of an entry</ns0:cell></ns0:row><ns0:row><ns0:cell>understand how to perform tasks like</ns0:cell><ns0:cell>• Answers to questions that are often asked or can be</ns0:cell></ns0:row><ns0:row><ns0:cell>searching, browsing, and retrieving</ns0:cell><ns0:cell>anticipated</ns0:cell></ns0:row><ns0:row><ns0:cell>software entries.</ns0:cell><ns0:cell>• Point of contact for help and questions</ns0:cell></ns0:row><ns0:row><ns0:cell>Example: bio.tools registry API user</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>guide.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>29 https://help.zenodo.org/</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>13/29</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66090:1:1:NEW 28 Dec 2021)</ns0:note></ns0:figure>
</ns0:body>
" | "Response to reviewers
We would like to thank the reviewers for their constructive feedback. We have created a
revised version of our manuscript addressing all the comments raised by both reviews.
Below you will find our answers to your comments. We have included the original comments
in yellow background and italics, while our answer is available in plain text.
We also attach two versions of the paper: a revision incorporating the new changes and a
version highlighting the differences with the original revision, in red.
Please note that in the version highlighting the differences some of the footnotes are
not properly linked. This is fixed in the revised manuscript.
Reviewer 1 (Anonymous)
In terms of literature references, some relevant software archives are not cited, for example,
swMATH at https://swmath.org/ , main reference:
Greuel, Gert-Martin, and Wolfram Sperber. 'swMATH–an information service for
mathematical software.' International Congress on Mathematical Software. Springer, Berlin,
Heidelberg, 2014
and some other archives are cited but only with URLs rather than also with the relevant
literature references, e.g, Software Heritage is only cited via URL instead of referencing:
Di Cosmo, Roberto, and Stefano Zacchiroli. 'Software heritage: Why and how to preserve
software source code.' iPRES 2017-14th International Conference on Digital Preservation.
2017 similarly, figshare is only mentioned with URL instead of also referencing, among
others:
Thelwall, Mike, and Kayvan Kousha. 'Figshare: a universal repository for academic resource
sharing?.' Online Information Review (2016).
Both swMATH and Software Heritage should also be mentioned in the introduction, together
with Zenodo (that does have a proper citation) and FigShare.
We thank the reviewer for pointing us to these references. We added them in the manuscript
(introduction section). Additionally, we modified the text to emphasize the large number of
existing software resources: this paper doesn't seek to provide an exhaustive list, but rather
a few illustrative examples.
In the introduction, I recommend to anticipate the introduction of the terminology of
'resources and collections', because it is really tiresome to mention 'registries and
repositories', up to that point in the paper. Also, the authors should probably decide between
'resources' and 'collections' and be consistent about that.
We have made our terminology consistent by using “resources” only.
In section 2, Background, 'RDA' and 'FORCE11' needs references, as they might not be
familiar to all readers.
We added these references to lines 42-43, where these organizations are first mentioned.
When mentioning 'tools have been developed to facilitate depositing research data', other
platforms should be mentioned, such as Zenodo, Software Heritage, and other services like
the HAL preprint service that now supports depositing source code as well.
We mentioned Zenodo and Software Heritage in the introduction, with their citations, and
have included citations to them and other similar repositories, including HAL, in the
Background section for readers who are interested in seeking them out.
At the end of the section, I find it incorrect to say that you 'address the needs of domain
software registries and repositories', because you are just providing guidelines. Maybe you
can tone down this claim a little.
We have revised this phrasing to be more specific and emphasize that the best practices are
targeted specifically as a guide for managers and editors of research software repositories.
Regarding the presentation of the best practice, it is hard to cross-reference the concrete
example of each best practice to the end in the appendix.
I recommend including at least 1-2 examples *inline* in the main paper text just after each
best practice, and cross-reference the appendix for *other* examples.
That way the main text of the paper become self-contained (and more interesting!) and the
appendix can be consulted only for those readers who want more.
(This is a comment critique/suggestion that applies to all best practices, as they all have
examples; which is definitely a good thing!)
The last paragraph of each best practice includes 1-2 examples inline, with a crosslink to
Appendix B at the end. We’ve included an explanation of this organization at the beginning
of the best practices in Section 4. Table 1 provides a summary of all best practices with a link
to one example.
As a minor point, the author list is inconsistent between the submission system and the
paper; the latter also includes 'Task Force on Best Practices for Software Registries, and
SciCodes Consortium'.
As these are not proper authors, they should not be listed as such, but rather included in the
acknowledgments.
The submission process did not allow us to provide all authors, and we expect to work with
the journal to improve the author list if the paper is accepted. We are following the example
of papers such as https://peerj.com/articles/cs-86/, where the task force is considered one of
the authors.
Experimental design
Section 3 is a bit disappointing, in the sense that it reads like poorly presented results of a
survey.
You should document what were *all* the questions asked, not just a few of them ('questions
[...] included') and detail what answers you got in the usual way (descriptive statistics, etc.),
otherwise it would be hard to assess the pertinence of the methodology followed to arrive at
the guidelines, and this would appear to be just a position paper by the authors.
We have rewritten Section 3 (Methodology) to rigorously report the creation of the Task
Force and best practices in a linear manner. Appendix A shows the questions answered by
representatives of different resources in the consortium, while a summary (Figure 2, in
Appendix C) describes the distribution of the answers as they currently exist among both
Task Force and SciCodes participating resources.
4.1: the wording is weird, the main point is the first one ('What is accepted'), the other two
points feel just redundant restatement of the same notion ('what is not accepted' ->
complement of the first point; and 'notable exceptions' -> which is still a part of the notion of
'what is accepted').
Maybe you should recommend that resources operators just focus on the properties/criteria
of the artifacts that are acceptable, rather than restating the same notion in different ways.
Some resource editors have found it useful to explicitly declare what their resource does not
include/accept, as unsuitable material is sometimes submitted; it is helpful to have clear and
unambiguous language publicly available to point to as to why a submission may be
unsuitable.
We have edited the example (line 234) to include: “ ...articles, and specifically excludes
software available only as a binary or web service. Though the ASCL's focus is on research
documented in peer-reviewed journals, its policy also explicitly states that it accepts source
code used in successful theses.” This covers what is accepted, what is not accepted, and an
exception (a thesis), thus demonstrating points 2 and 3 of this practice.
4.3: the wording of 'required and optional metadata' is weird, because it centers on the fact
that metadata comes 'from software contributors'. Shouldn't this be worded as 'required and
optional metadata expected for deposited software'? Metadata are about the software, and
the operators should care that they exist, no matter who contribute them.
We have included these suggestions in the paper.
4.4 'Also, particular care should be taken to maintain the consistency of this policy with the
citation policies for the registry or repository.' -> I have no idea what this means, it should be
better explained/clarified in the text.
What we mean to say here is that the authorship policy and the way software authors and
contributors are presented in the repository should not prevent users accessing this
information from citing the software according to the authors' requirements. We have
modified the manuscript to make this point more explicit.
4.5 'share your metadata schema' -> 'share' should be 'publish' or 'document', as it's
really about making it public, not sharing with others.
We prefer to use the word “share” due to its broader meaning. We want to imply 'sharing
metadata by making it findable', 'sharing metadata by crosswalking', 'sharing with indexers'
when requested so that local resource indexing can be improved. However, we reworded the
second sentence of the paragraph to clarify this as: “Publicly posting the metadata schema
used for the entries helps individual and organizational users interested in a resource's
information understand the structure and properties of the deposited information.”
'This practice when implemented should specify:' -> drop 'when implemented', which feels
like a truism that could be said for any of the best practices.
We have dropped “when implemented”, as suggested.
4.6 this best practice is almost entirely worded about metadata, and that seems incorrect to
me.
The conditions of use are relevant for both metadata and the data itself (e.g., actual
software, for software repositories).
The text of this should be generalized to cover both scenarii on equal footing, maybe
adopting the syntactic convention of '(meta)data' when talking about both, as the FAIR
Principles do.
We agree with the reviewer that it is important that users know what the conditions of use
are for both the software itself and the metadata for that software. Our example
demonstrates a case in which the usage of a software component, though stored in a
resource, is governed by the license assigned to the code by its authors.
4.7 include passages like (emphasis mine) 'Additionally, one *can* list explicitly the
third-party tools' and 'a mechanism by which users can request the removal of such
information *may* be described'.
In a guideline document, I have a hard time understanding what those two modal verbs
mean.
Should the resource operators include those information or not?
I recommend that the authors take a stance on this point.
(Or else describe the precise semantics of the various modal verbs use, e.g., in the style of
RFC 2119.)
All this would make the guidelines much more actionable for resource operators.
The paper presents guidelines (not rules) that should be broadly applicable. They are meant
to allow for differences in repositories and registries (and different types of registries and
repositories) as some can implement proposed guidelines, but others don't have to.
However, we do agree that we should use consistent language throughout the paper, and we
have reviewed our use of modal verbs to make sure they reflect our intentions. In particular,
we use “may” as something to be considered but potentially not relevant for every resource.
4.8 When discussing taking offline data and archival, I consider there is an important
omission: the requirement of documenting the backup strategy (which commonly goes under
the notion of 'retention policy') and how the archive reacts to legal takedown notices (e.g.,
due to DMCA in the US or, in Europe, equivalent legislation as well as GDPR).
We consider a backup strategy out of scope for this paper. We view it as an operational
factor and not a policy, and in particular, not what the retention policy is addressing.
Furthermore, while we agree that takedown notices can be an issue, we do not want to
appear to be giving legal advice, and in general, do not talk about legal issues in the paper.
4.9 When mentioning the Zenodo example, it would be nice to mention in the paper to where
data will be migrated, in addition to the fact that they will be migrated.
Unfortunately, after reviewing Zenodo’s website, this information is not clear. Zenodo’s site
states “In the highly unlikely event that Zenodo will have to close operations, we guarantee
that we will migrate all content to other suitable repositories, and since all uploads have
DOIs, all citations and links to Zenodo resources (such as your data) will not be affected.”
(https://help.zenodo.org/) Where Zenodo might migrate its records, and indeed, where any
resource might migrate records when a resource has reached the end of its life, is likely to
change over the coming years.
Reviewer 2 (Anonymous)
- In lines 151–155, the authors mention that the Task Force gathered information from its
members to learn more about each resource and identify overlapping practices. I would
encourage the authors to provide a simple visualization of their survey results as the basis
for developing best practices.
Figure 2 (Appendix C) now provides a summary overview of the results from the different
resources in the consortium.
- Similar to the previous point, the authors mention in lines 483–486 that they observe
different adoption rates for best practices upon internal discussion. It would be great if the
authors could provide some simple statistics (preferably with visualizations) to let the reader
know the status quo.
We have included a new figure (Figure 1) to highlight the adoption of the different practices
by those resources from the consortium that have actively participated in our
internal discussions.
- Lastly, since the authors mention in lines 487–490 that their effort complements and aligns
well with recent guidelines developed for data repositories, it would be interesting to learn
the actual similarities and differences between best practices developed by the two
communities.
We have provided several examples of similarities and differences between our best
practices and guidance provided for data repositories in lines 500-511.
Overall, the nine best practices proposed here are quite comprehensive and tackle many
important issues within the broader research community. I look forward to future impact
tracking and assessments undertaken by SciCodes.
We thank the reviewer for their positive feedback.
" | Here is a paper. Please give your review comments after reading it. |
700 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Scientific software registries and repositories improve software findability and research transparency, provide information for software citations, and foster preservation of computational methods in a wide range of disciplines. Registries and repositories play a critical role by supporting research reproducibility and replicability, but developing them takes effort and few guidelines are available to help prospective creators of these resources. To address this need, the FORCE11 Software Citation Implementation Working Group convened a Task Force to distill the experiences of the managers of existing resources in setting expectations for all stakeholders. In this paper, we describe the resultant best practices which include defining the scope, policies, and rules that govern individual registries and repositories, along with the background, examples, and collaborative work that went into their development. We believe that establishing specific policies such as those presented here will help other scientific software registries and repositories better serve their users and their disciplines.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>management of its contents and allowed usages as well as clarifying positions on sensitive issues such as attribution.</ns0:p><ns0:p>In this paper, we expand on our pre-print 'Nine Best Practices for Research Software Registries and Repositories: A Concise Guide' (Task Force on Best Practices for Software <ns0:ref type='bibr'>Registries et al., 2020)</ns0:ref> to describe our best practices and their development.</ns0:p><ns0:p>Our guidelines are actionable, have a general purpose, and reflect the discussion of a community of more than 30 experts who handle over 15 resources (registries or repositories) across different scientific domains. Each guideline provides a rationale, suggestions, and examples based on existing repositories or registries. To reduce repetition, we refer to registries and repositories collectively as 'resources.'</ns0:p><ns0:p>The remainder of the paper is structured as follows. We first describe background and related efforts in Section 2, followed by the methodology we used when structuring the discussion for creating the guidelines (Section 3). We then describe the nine best practices in Section 4, followed by a discussion (Section 5). Section 6 concludes the paper by summarizing current efforts to continue the adoption of the proposed practices. Those who contributed to the development of this paper are listed in Appendix A, and links to example policies are given in Appendix B. Appendix C provides updated information about resources that have participated in crafting the best practices and an overview of their main attributes.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>BACKGROUND</ns0:head><ns0:p>In the last decade, much was written about a reproducibility crisis in science <ns0:ref type='bibr' target='#b6'>(Baker, 2016)</ns0:ref> stemming in large part from the lack of training in programming skills and the unavailability of computational resources used in publications <ns0:ref type='bibr' target='#b53'>(Merali, 2010;</ns0:ref><ns0:ref type='bibr' target='#b60'>Peng, 2011;</ns0:ref><ns0:ref type='bibr' target='#b56'>Morin et al., 2012)</ns0:ref>. On these grounds, national and international governments have increased their interest in releasing artifacts of publicly-funded research to the public (Office of Science and Technology Policy, 2016; Directorate-General for Research and Innovation (European Commission), 2018; Australian <ns0:ref type='bibr' target='#b5'>Research Council, 2018;</ns0:ref><ns0:ref type='bibr' target='#b13'>Chen et al., 2019</ns0:ref>; Ministère de l'Enseignement supérieur, de la Recherche et de l'Innovation, 2021), and scientists have appealed to colleagues in their field to release software to improve research transparency <ns0:ref type='bibr' target='#b73'>(Weiner et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b7'>Barnes, 2010;</ns0:ref><ns0:ref type='bibr' target='#b39'>Ince et al., 2012)</ns0:ref> and efficiency <ns0:ref type='bibr' target='#b33'>(Grosbol and Tody, 2010)</ns0:ref>. Open Science initiatives such as RDA and FORCE11 have emerged as a response to these calls for greater transparency and reproducibility. Journals introduced policies encouraging (or even requiring) that data and software be openly available to others <ns0:ref type='bibr'>(Editorial staff, 2019;</ns0:ref><ns0:ref type='bibr' target='#b23'>Fox et al., 2021)</ns0:ref>. New tools have been developed to facilitate depositing research data and software in a repository <ns0:ref type='bibr' target='#b8'>(Baruch, 2007;</ns0:ref><ns0:ref type='bibr' target='#b12'>CERN and OpenAIRE, 2013;</ns0:ref><ns0:ref type='bibr' target='#b18'>Di Cosmo and Zacchiroli, 2017;</ns0:ref><ns0:ref type='bibr' target='#b16'>Clyburne-Sherin et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b11'>Brinckman et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b70'>Trisovic et al., 2020)</ns0:ref> and consequently, make them citable so authors and other contributors gain recognition and credit for their work <ns0:ref type='bibr' target='#b63'>(Soito and Hwang, 2017;</ns0:ref><ns0:ref type='bibr' target='#b20'>Du et al., 2021)</ns0:ref>.</ns0:p><ns0:p>Support for disseminating research outputs has been proposed with <ns0:ref type='bibr'>FAIR and FAIR4RS</ns0:ref> principles that state shared digital artifacts, such as data and software, should be Findable, Accessible, Interoperable, and Reusable <ns0:ref type='bibr'>(Wilkinson et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b51'>Lamprecht et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b46'>Katz et al., 2021;</ns0:ref><ns0:ref type='bibr'>Chue Hong et al., 2021)</ns0:ref>. Conforming with the FAIR</ns0:p></ns0:div>
<ns0:div><ns0:head>3/29</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66090:2:1:NEW 28 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science principles for published software <ns0:ref type='bibr' target='#b51'>(Lamprecht et al., 2020)</ns0:ref> requires facilitating its discoverability, preferably in domain-specific resources <ns0:ref type='bibr'>(Jiménez et al., 2017)</ns0:ref>. These resources should contain machine-readable metadata to improve the discoverability (Findable) and accessibility (Accessible) of research software through search engines or from within the resource itself. Furthering interoperability in FAIR is aided through the adoption of community standards e.g., schema.org <ns0:ref type='bibr' target='#b34'>(Guha et al., 2016)</ns0:ref> or the ability to translate from one resource to another. The CodeMeta initiative <ns0:ref type='bibr' target='#b45'>(Jones et al., 2017)</ns0:ref> achieves this translation by creating a 'Rosetta Stone' which maps the metadata terms used by each resource to a common schema. The CodeMeta schema 6 is an extension of schema.org which adds ten new fields to represent software specific metadata. To date, CodeMeta has been adopted for representing software metadata by many repositories. 7,8 As the usage of computational methods continues to grow, recommendations for improving research software have been proposed <ns0:ref type='bibr' target='#b65'>(Stodden et al., 2016)</ns0:ref> in many areas of science and software, as can be seen by the series of 'Ten Simple Rules' articles offered by PLOS <ns0:ref type='bibr' target='#b17'>(Dashnow et al., 2014)</ns0:ref>, sites such as AstroBetter, 9 courses to improve skills such as those offered by The Carpentries, 10 and attempts to measure the adoption of recognized best practices <ns0:ref type='bibr' target='#b61'>(Serban et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b71'>Trisovic et al., 2022)</ns0:ref>. Our quest for best practices complements these efforts by providing guides to the specific needs of research software registries and repositories.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>METHODOLOGY</ns0:head><ns0:p>The best practices presented in this paper were developed by an international Task Force of the FORCE11 Software Citation Implementation Working Group (SCIWG).</ns0:p><ns0:p>The Task Force was proposed in June 2018 by author Alice Allen, with the goal of developing a list of best practices for software registries and repositories. Working Group members and a broader group of managers of domain-specific software resources formed the inaugural group. The resulting Task Force members were primarily managers and editors of resources from Europe, United States, and Australia. Due to the range in time zones, the Task Force held two meetings seven hours apart, with the expectation that, except for the meeting chair, participants would attend one of the two meetings. We generally refer to two meetings on the same day with the singular 'meeting' in the discussions to follow.</ns0:p><ns0:p>The inaugural Task Force meeting (February 2019) was attended by eighteen people representing fourteen different resources. Participants introduced themselves and provided some basic information about their resources, including repository name, starting year, number of records, and scope (discipline-specific or general purpose), as well as services provided by each resource (e.g., support of software citation, software deposits, and DOI minting).</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>. Overview of the information shared by the 14 resources which participated in the first Task Force meeting.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref> presents an overview of the collected responses, which highlight the efforts of the Task Force chairs to bring together both discipline-specific and general purpose resources. The 'Other' category indicates that the answer needed clarifying text (e.g., for the question 'is the repository actively curated?' some repositories are not manually curated, but have validation checks). Appendix C provides additional information on the questions asked of resource managers (Table <ns0:ref type='table'>C</ns0:ref>.1) and their responses (tables C.2, C.3 and C.4).</ns0:p><ns0:p>During the inaugural Task Force meeting, the chair laid out the goal of the Task Force, and the group was invited to brainstorm to identify commonalities for building a list of best practices. Participants also shared challenges they had faced in running their resources and policies they had enacted to manage these resources. The result of the brainstorming and discussion was a list of ideas collected in a common document.</ns0:p><ns0:p>Starting in May 2019 and continuing through the rest of 2019, the Task Force met on the third Thursday of each month and followed an iterative process to discuss, add to, and group ideas; refine and clarify the ideas into different practices; and define the practices more precisely. It was clear from the outset that, though our resources have goals in common, they are also very diverse and would be best served by best practices that were descriptive rather than prescriptive. We reached consensus on whether a practice should be a best practice through discussion and informal voting.</ns0:p><ns0:p>Each best practice was given a title and a list of questions or needs that it addressed.</ns0:p><ns0:p>Our initial plan aimed at holding two Task Force meetings on the same day each month, in order to follow a common agenda with independent discussions built upon the previous month's meeting. However, the later meeting was often advantaged by the earlier discussion. For instance, if the early meeting developed a list of examples for one of the guidelines, the late meeting then refined and added to the list.
Hence, discussions were only duplicated when needed, e.g., where there was no consensus in the early group, and often proceeded in different directions according to the group's expertise and interest. Though we had not anticipated this, we found that holding two meetings each month on the same day accelerated the work, as work done in the second meeting of the day generally continued rather than repeating work done in the first meeting.</ns0:p></ns0:div>
<ns0:div><ns0:p>The resulting consensus from the meetings produced a list of the most broadly applicable practices, which became the initial list of best practices participants drew from during a two-day workshop, funded by the Sloan Foundation and held at the University of Maryland College Park, in November, 2019 (Scientific Software Registry Collaboration Workshop). A goal of the workshop was to develop the final recommendations on best practices for repositories and registries to the FORCE11 SCIWG.</ns0:p><ns0:p>The workshop included participants outside the Task Force, resulting in a broader set of contributions to the final list. In 2020, this group made additional refinements to the best practices during virtual meetings and through online collaborative writing, producing the guidelines described in the next section. The Task Force then transitioned into the SciCodes consortium. 11 SciCodes is a permanent community for research software registries and repositories with a particular focus on these best practices. SciCodes continued to collect information about involved registries and repositories, which are listed in Appendix C. We also include some analysis of the number of entries and date of creation of member resources. Appendix A lists the people who participated in these efforts.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>BEST PRACTICES FOR REPOSITORIES AND REGISTRIES</ns0:head><ns0:p>Our recommendations are provided as nine separate policies or statements, each presented below with an explanation as to why we recommend the practice, what the practice describes, and specific considerations to take into account. The last paragraph of each best practice includes one or two examples and a link to Appendix B, which contains many examples from different registries and repositories.</ns0:p><ns0:p>These nine best practices, though not an exhaustive list, are applicable to the varied resources represented in the Task Force, so are likely to be broadly applicable to other scientific software repositories and registries. We believe that adopting these practices will help document, guide, and preserve these resources, and put them in a stronger position to serve their disciplines, users, and communities. 12</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Provide a public scope statement</ns0:head><ns0:p>The landscape of research software is diverse and complex due to the overlap between scientific domains, the variety of technical properties and environments, and the additional considerations resulting from funding, authors' affiliation, or intellectual property. A scope statement clarifies the type of software contained in the repository or indexed in the registry. Precisely defining a scope, therefore, helps those users of the resource who are looking for software to better understand the results they obtained.</ns0:p><ns0:p>Moreover, given that many of these resources accept submission of software packages, providing a precise and accessible definition will help researchers determine whether they should register or deposit software, and curators by making clear what is out of scope for the resource. Overall, a public scope manages the expectations of the potential depositor as well as the software seeker. It informs both what the resource does and does not contain.</ns0:p><ns0:note place='foot' n='11'>http://scicodes.net</ns0:note><ns0:note place='foot' n='12'>Please note that the information provided in this paper does not constitute legal advice.</ns0:note></ns0:div>
<ns0:div><ns0:p>The scope statement should describe:</ns0:p><ns0:p>• What is accepted, and acceptable, based on criteria covering scientific discipline, technical characteristics, and administrative properties</ns0:p><ns0:p>• What is not accepted, i.e., characteristics that preclude incorporation in the resource</ns0:p><ns0:p>• Notable exceptions to these rules, if any</ns0:p><ns0:p>Particular criteria of relevance include the scientific community being served and the types of software listed in the registry or stored in the repository, such as source code, compiled executables, or software containers. The scope statement may also include criteria that must be satisfied by accepted software, such as whether certain software quality metrics must be fulfilled or whether the software must be used in published research. Availability criteria can be considered, such as whether the code has to be publicly available, be in the public domain and/or have a license from a predefined set, or whether software registered in another registry or repository will be accepted.</ns0:p><ns0:p>An illustrative example of such a scope statement is the editorial policy 13 published by the Astrophysics Source Code Library (ASCL) <ns0:ref type='bibr' target='#b0'>(Allen et al., 2013)</ns0:ref>, which states that it includes only software source code used in published astronomy and astrophysics research articles, and specifically excludes software available only as a binary or web service. Though the ASCL's focus is on research documented in peer-reviewed journals, its policy also explicitly states that it accepts source code used in successful theses. Other examples of scope statements can be found in Appendix B.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Provide guidance for users</ns0:head><ns0:p>Users accessing a resource to search for entries and browse or retrieve the description(s) of one or more software entries have to understand how to perform such actions. Although this guideline potentially applies to many public online resources, especially research databases, the potential complexity of the stored metadata and the curation mechanisms can seriously impede the understandability and usage of software registries and repositories.</ns0:p><ns0:p>User guidance material may include:</ns0:p><ns0:p>• How to perform common user tasks, such as searching the resource, or accessing the details of an entry</ns0:p><ns0:p>• Answers to questions that are often asked or can be anticipated, e.g., with Frequently Asked Questions or tips and tricks pages</ns0:p><ns0:p>• Who to contact for questions or help</ns0:p><ns0:p>A separate section in these guidelines on the Conditions of use policy covers terms of use of the resource and how best to cite records in a resource and the resource itself. Guidance for users who wish to contribute software is covered in the next section, Provide guidance to software contributors.</ns0:p></ns0:div><ns0:div><ns0:head n='4.3'>Provide guidance to software contributors</ns0:head><ns0:p>Topics to consider when writing a contributor policy include whether the author(s) of a software entry will be contacted if the contributor is not also an author and whether contact is a condition or side-effect of the submission. Additionally, a contributor policy should specify how persistent identifiers are assigned (if used) and should state that depositors must comply with all applicable laws and not be intentionally malicious.</ns0:p><ns0:p>Such material is provided in resources such as the Computational Infrastructure for Geodynamics <ns0:ref type='bibr' target='#b38'>(Hwang and Kellogg, 2017)</ns0:ref> software contribution checklist 16 and the CoMSES Net Computational Model Library <ns0:ref type='bibr' target='#b42'>(Janssen et al., 2008)</ns0:ref> model archival tutorial. 17 Additional examples of guidance for software contributors can be found in Appendix B.</ns0:p></ns0:div><ns0:div><ns0:head n='4.4'>Establish an authorship policy</ns0:head><ns0:p>Because research software is often a research product, it is important to report authorship accurately, as it allows for proper scholarly credit and other types of attributions <ns0:ref type='bibr' target='#b62'>(Smith et al., 2016)</ns0:ref>. However, even though authorship should be defined at the level of a given project, it can prove complicated to determine <ns0:ref type='bibr' target='#b4'>(Alliez et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Roles in software development can widely vary as contributors change with time and versions, and contributions are difficult to gauge beyond the 'commit,' giving rise to complex situations. In this context, establishing a dedicated policy ensures that people are given due credit for their work. The policy also serves as a document that administrators can turn to in case disputes arise and allows proactive problem mitigation, rather than having to resort to reactive interpretation. Furthermore, having an authorship policy mirrors similar policies by journals and publishers and thus is part of a larger trend. Note that the authorship policy will be communicated at least partially to users through guidance provided to software contributors.
Nevertheless, because the citation requirements for each piece of research software are under the authority of its owners, particular care should be taken to maintain the consistency of this policy with the citation policies for the registry or repository.</ns0:p><ns0:p>The authorship policy should specify:</ns0:p><ns0:p>• How authorship is determined, e.g., stated criteria from the contributors and/or the resource</ns0:p><ns0:p>• Policies around making changes to authorship</ns0:p><ns0:p>• The conflict resolution processes adopted to handle authorship disputes</ns0:p><ns0:p>When defining an authorship policy, resource maintainers should take into consideration whether those who are not coders, such as software testers or documentation maintainers, will be identified or credited as authors, as well as criteria for ordering the list of authors in cases of multiple authors, and how the resource handles large numbers of authors and group or consortium authorship. Resources may also include guidelines about how changes to authorship will be handled so each author receives proper credit for their contribution. Guidelines can help facilitate determining every contributor's role. In particular, the use of a credit vocabulary, such as the Contributor Roles Taxonomy <ns0:ref type='bibr' target='#b3'>(Allen et al., 2019)</ns0:ref>, to describe authors' contributions should be considered for this purpose. 18 An example of an authorship policy is provided in the Ethics Guidelines 19 and the submission guide authorship section 20 of the Journal of Open Source Software <ns0:ref type='bibr' target='#b47'>(Katz et al., 2018)</ns0:ref>, which provides rules for inclusion in the authors list. Additional examples of authorship policies can be found in Appendix B.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.5'>Share your metadata schema</ns0:head><ns0:p>The structure and semantics of the information stored in registries and repositories are sometimes complex, which can hinder the clarity, discovery, and reuse of the entries included in these resources. Publicly posting the metadata schema used for the entries helps individual and organizational users interested in a resource's information understand the structure and properties of the deposited information. The metadata structure helps to inform users how to interact with or ingest records in the resource.</ns0:p><ns0:p>A metadata schema mapped to other schemas and an API specification can improve the interoperability between registries and repositories. This practice should specify:</ns0:p><ns0:p>• The schema used and its version number. If a standard or community schema, such as CodeMeta <ns0:ref type='bibr' target='#b45'>(Jones et al., 2017)</ns0:ref> or schema.org <ns0:ref type='bibr' target='#b34'>(Guha et al., 2016)</ns0:ref>, is used, the resource should reference its documentation or official website. If a custom schema is used, formal documentation such as a description of the schema and/or a data dictionary should be provided.</ns0:p><ns0:p>• Expected metadata when submitting software, including which fields are required and which are optional, and the format of the content in each field.</ns0:p><ns0:p>To improve the readability of the metadata schema and facilitate its translation to other standards, resources may provide a mapping (from the metadata schema used in the resource) to published standard schemas, in the form of a 'cross-walk' (e.g., the CodeMeta cross-walk 21 ), and include an example entry from the repository that illustrates all the fields of the metadata schema. For instance, extensive documentation 22 is available for the biotoolsSchema <ns0:ref type='bibr' target='#b41'>(Ison et al., 2021)</ns0:ref> format, which is used in the bio.tools registry. Another example is the OntoSoft vocabulary, 23 used by the OntoSoft registry <ns0:ref type='bibr' target='#b29'>(Gil et al., 2015</ns0:ref><ns0:ref type='bibr' target='#b28'>, 2016)</ns0:ref> and available in both machine-readable and human-readable formats. Additional examples of metadata schemas can be found in Appendix B.</ns0:p></ns0:div>
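As a concrete illustration of the cross-walk idea described above, the following Python sketch maps a hypothetical registry's internal field names onto CodeMeta terms. The internal names (title, summary, repository_url, and so on) are assumptions made up for this example rather than the schema of any particular resource.

```python
# Hypothetical internal field names mapped onto CodeMeta terms; a real resource
# would publish its own schema and the corresponding cross-walk.
CROSSWALK = {
    "title": "name",
    "summary": "description",
    "repository_url": "codeRepository",
    "licence": "license",
    "release": "version",
}

def to_codemeta(entry: dict) -> dict:
    """Translate a record in the custom schema into a CodeMeta-style dictionary."""
    record = {
        "@context": "https://doi.org/10.5063/schema/codemeta-2.0",
        "@type": "SoftwareSourceCode",
    }
    for internal_field, codemeta_term in CROSSWALK.items():
        if internal_field in entry:
            record[codemeta_term] = entry[internal_field]
    return record

# Example use with a made-up registry entry:
print(to_codemeta({
    "title": "ExampleTool",
    "summary": "Hypothetical entry used to illustrate the mapping.",
    "repository_url": "https://example.org/example-tool",
    "licence": "MIT",
    "release": "2.3.1",
}))
```

Publishing such a mapping alongside the schema itself makes it easier for other registries and harvesters to ingest or translate records.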
<ns0:div><ns0:head n='4.6'>Stipulate conditions of use</ns0:head><ns0:p>The conditions of use document the terms under which users may use the contents provided by a website. In the case of software registries and repositories, these conditions should specifically state how the metadata regarding the entities of a resource can be used, attributed, and/or cited, and provide information about licensing. This policy can forestall potential liabilities and difficulties that may arise, such as claims of damage for misinterpretation or misapplication of metadata. In addition, the conditions of use should clearly state how the metadata can and cannot be used, including for commercial purposes and in aggregate form. This document should include:</ns0:p><ns0:p>• Legal disclaimers about the responsibility and liability borne by the registry or repository • License and copyright information, both for individual entries and for the registry or repository as a whole Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>• Preferred format for attributing or citing the resource itself When writing conditions of use, resource maintainers might consider what license governs the metadata, if licensing requirements apply for findings and/or derivatives of the resource, and whether there are differences in the terms and license for commercial versus noncommercial use. Restrictions on the use of the metadata may also be included, as well as a statement to the effect that the registry or repository makes no guarantees about completeness and is not liable for any damages that could arise from the use of the information. Technical restrictions, such as conditions of use of the API (if one is available), may also be mentioned.</ns0:p><ns0:p>Conditions of use can be found for instance for DOE CODE <ns0:ref type='bibr' target='#b22'>(Ensor et al., 2017)</ns0:ref>, which in addition to the general conditions of use 24 specifies that the rules for usage of the hosted code 25 are defined by their respective licenses. Additional examples of conditions of use policies can be found in Appendix B.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.7'>State a privacy policy</ns0:head><ns0:p>Privacy policies define how personal data about users are stored, processed, exchanged or removed. Having a privacy policy demonstrates a strong commitment to the privacy of users of the registry or repository and allows the resource to comply with the legal requirement of many countries in addition to those a home institution and/or funding agencies may impose.</ns0:p><ns0:p>The privacy policy of a resource should describe:</ns0:p><ns0:p>• What information is collected and how long it is retained • How the information, especially any personal data, is used • Whether tracking is done, what is tracked, and how (e.g., Google Analytics)</ns0:p><ns0:p>• Whether cookies are used When writing a privacy policy, the specific personal data which are collected should be detailed, as well as the justification for their resource, and whether these data are sold and shared. Additionally, one should list explicitly the third-party tools used to collect analytic information and potentially reference their privacy policies. If users can receive emails as a result of visiting or downloading content, such potential solicitations or notifications should be announced. Measures taken to protect users' privacy and whether the resource complies with the European Union Directive on General Data Protection Regulation 26 (GDPR) or other local laws, if applicable, should be explained. 27 As a precaution, the statement can reserve the right to make changes to this privacy policy. Finally, a mechanism by which users can request the removal of such information should be described.</ns0:p><ns0:p>24 https://www.osti.gov/disclaim 25 https://www.osti.gov/doecode/faq#are-there-restrictions 26 https://gdpr-info.eu/ 27 In the case of GDPR, the regulation applies to all European user personal data, even if the resource is not located in Europe.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>For example, the SciCrunch's <ns0:ref type='bibr' target='#b30'>(Grethe et al., 2014)</ns0:ref> privacy policy 28 details what kind of personal information is collected, how it is collected, and how it may be reused, including by third-party websites through the use of cookies. Additional examples of privacy policies can be found in Appendix B.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.8'>Provide a retention policy</ns0:head><ns0:p>Many software registries and repositories aim to facilitate the discovery and accessibility of the objects they describe, e.g., enabling search and citation, by making the corresponding records permanently accessible. However, for various reasons, even in such cases maintainers and curators may have to remove records. Common examples include removing entries that are outdated, no longer meet the scope of the registry, or are found to be in violation of policies. The resource should therefore document retention goals and procedures so that users and depositors are aware of them.</ns0:p><ns0:p>The retention policy should describe:</ns0:p><ns0:p>• The length of time metadata and/or files are expected to be retained</ns0:p><ns0:p>• Under what conditions metadata and/or files are removed • Who has the responsibility and ability to remove information • Procedures to request that metadata and/or files be removed</ns0:p><ns0:p>The policy should take into account whether best practices for persistent identifiers are followed, including resolvability, retention, and non-reuse of those identifiers. The retention time provided by the resource should not be too prescriptive (e.g., 'for the next 10 years'), but rather it should fit within the context of the underlying organization(s) and its funding. This policy should also state who is allowed to edit metadata, delete records, or delete files, and how these changes are performed to preserve the broader consistency of the registry. Finally, the process by which data may be taken offline and archived as well as the process for its possible retrieval should be thoroughly documented.</ns0:p><ns0:p>As an example, Bioconductor <ns0:ref type='bibr' target='#b27'>(Gentleman et al., 2004</ns0:ref>) has a deprecation process through which software packages are removed if they cannot be successfully built or tested, or upon specific request from the package maintainer. Their policy 29 specifies who initiates this process and under which circumstances, as well as the successive steps that lead to the removal of the package. Additional examples of retention policies can be found in Appendix B.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.9'>Disclose your end-of-life policy</ns0:head><ns0:p>Despite their usefulness, the long-term maintenance, sustainability, and persistence of online scientific resources remains a challenge, and published web services or databases can disappear after a few years <ns0:ref type='bibr' target='#b72'>(Veretnik et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b49'>Kern et al., 2020)</ns0:ref>.</ns0:p><ns0:p>Sharing a clear end-of-life policy increases trust in the community served by a registry or repository. It demonstrates a thoughtful commitment to users by informing them that provisions for the resource have been considered should the resource close or otherwise end its services for its described artifacts. Such a policy sets expectations and provides reassurance as to how long the records within the registry will be findable and accessible in the future. This policy should describe:</ns0:p><ns0:p>• Under what circumstances the resource might end its services • What consequences would result from closure • What will happen to the metadata and/or the software artifacts contained in the resource in the event of closure • If long-term preservation is expected, where metadata and/or software artifacts will be migrated for preservation • How a migration will be funded Publishing an end-of-life policy is an opportunity to consider, in the event a resource is closed, whether the records will remain available, and if so, how and for whom, and under which conditions, such as archived status or 'read-only.' The restrictions applicable to this policy, if any, should be considered and detailed. Establishing a formal agreement or memorandum of understanding with another registry, repository, or institution to receive and preserve the data or project, if applicable, might help to prepare for such a liability.</ns0:p><ns0:p>Examples of such policies include the Zenodo end-of-life policy, 30 which states that if Zenodo ceases its services, the data hosted in the resource will be migrated and the DOIs provided would be updated to resolve to the new location (currently unspecified). Additional examples of end-of-life policies can be found in Appendix B.</ns0:p><ns0:p>A summary of the practices presented in this section can be found in Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>DISCUSSION</ns0:head><ns0:p>The best practices described above serve as a guide for repositories and registries to provide better service to their users, ranging from software developers and researchers to publishers and search engines, and enable greater transparency about the operation of their described resources. Implementing our practices provides users with significant information about how different resources operate, while preserving important institutional knowledge, standardizing expectations, and guiding user interactions.</ns0:p><ns0:p>For instance, a public scope statement and guidance for users may directly impact the ease of use and, thus, the popularity of the repository. Usability is enhanced when resources have tools with a simple design and unambiguous commands as well as infographic guides or video tutorials to further ease the learning curve for new users.</ns0:p><ns0:p>The guidance for software contributions, conditions of use, and sharing the metadata schema used may help eager users contribute new functionality or tools, which could also help in creating a community around a resource. A privacy policy has become Example: Zenodo end-of-life policy.</ns0:p><ns0:p>• Circumstances under which the resource might end its services • What consequences would result from closure • What will happen to the metadata and/or the software artifacts contained in the resource in the event of closure • If long-term preservation is expected, where metadata and/or software artifacts will be migrated for preservation; how a migration will be funded</ns0:p><ns0:p>Policies affecting a single community or domain were deliberately omitted when developing the best practices. First, an exhaustive list would have been a barrier to adoption and not applicable to every repository since each has a different perspective, audience, and motivation that drives policy development for their organization. Second, best practices that regulate the content of a resource are typically domain-specific to the artifact and left to resources to stipulate based on their needs. Participants in the 2019 Scientific Software Registry Collaboration Workshop were surprised to find that only four metadata elements were shared by all represented resources. 31 The diversity of our resources precludes prescriptive requirements, such as requiring specific metadata for records, so these were also deliberately omitted in the proposed best practices.</ns0:p><ns0:p>Hence, we focused on broadly applicable practices considered important by various resources. For example, amongst the participating registries and repositories, very few had codes of conduct that govern the behavior of community members. Codes of conduct are warranted if resources are run as part of a community, especially if comments and reviews are solicited for deposits. In contrast, a code of conduct would be less useful for resources whose primary purpose is to make software and software metadata available for reuse. However, this does not negate their importance and their inclusion as best practices in other arenas concerning software.</ns0:p><ns0:p>As noted by the FAIR4RS movement, software is different than data, motivating the 31 The elements were: software name, description, keywords, and URL.</ns0:p></ns0:div>
<ns0:div><ns0:head>15/29</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66090:2:1:NEW 28 Mar 2022)</ns0:p><ns0:p>Manuscript to be reviewed need for a separate effort to address software resources <ns0:ref type='bibr' target='#b51'>(Lamprecht et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b48'>Katz et al., 2016)</ns0:ref>. Even so, there are some similarities, and our effort complements and aligns well with recent guidelines developed in parallel to increase the transparency, responsibility, user focus, sustainability, and technology of data repositories. For example, both the TRUST Principles <ns0:ref type='bibr' target='#b52'>(Lin et al., 2020)</ns0:ref> and CoreTrustSeal Requirements <ns0:ref type='bibr' target='#b64'>Standards and Board (2019)</ns0:ref>) call for a repository to provide information on its scope and list the terms of use of its metadata to be considered compliant with TRUST or CoreTrustSeal, which aligns with our practices Provide a public scope statement and Stipulate conditions of use. CoreTrustSeal and TRUST also require that a repository consider continuity of access, which we have expressed as the practice to Disclosing your end-of-life policy. Our best practices differ in that they do not address, for example, staffing needs nor professional development for staff, as CoreTrustSeal requires, nor do our practices address protections against cyber or physical security threats, as the TRUST principles suggest. Inward-facing policies, such as documenting internal workflows and practices, are generally good in reducing operational risks, but internal management practices were considered out of scope of our guidelines.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Figure <ns0:ref type='figure' target='#fig_3'>1</ns0:ref> shows the number of resources that support (partially or in their totality) each best practice. Though we see the proposed best practices as critical, many of the repositories that have actively participated in the discussions (14 resources in total) have yet to implement every one of them. We have observed that the first three practices (providing public scope statement, add guidance for users and for software contributors) have the widest adoption, while the retention, end-of-life, and authorship policy the least. Understanding the lag in the implementation across all of the best practices requires further engagement with the community.</ns0:p><ns0:p>Improving the adoption of our guidelines is one of the goals of SciCodes, 32 a recent Manuscript to be reviewed</ns0:p><ns0:p>Computer Science consortium of scientific software registries and repositories. SciCodes evolved from the Task Force as a permanent community to continue the dialogue and share information between domains, including sharing of tools and ideas. SciCodes has also prioritized improving software citation (complementary to the efforts of the FORCE11 SCIWG) and tracking the impact of metadata and interoperability. In addition, Sci-Codes aims to understand barriers to implementing policies, ensure consistency between various best practices, and continue advocacy for software support by continuing dialogue between registries, repositories, researchers, and other stakeholders.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>CONCLUSIONS</ns0:head><ns0:p>The dissemination and preservation of research material, where repositories and registries play a key role, lies at the heart of scientific advancement. This paper introduces nine best practices for research software registries and repositories. The practices are an outcome of a Task Force of the FORCE11 Software Citation Implementation Working Group and reflect the discussion, collaborative experiences, and consensus of over 30 experts and 15 resources. The best practices are non-prescriptive, broadly applicable, and include examples and guidelines for their adoption by a community. They specify establishing the working domain (scope) and guidance for both users and software contributors, address legal concerns with privacy, use, and authorship policies, enhance usability by encouraging metadata sharing, and set expectations with retention and end-of-life policies. However, we believe additional work is needed to raise awareness and adoption across resources from different scientific disciplines. Through the SciCodes consortium, our goal is to continue implementing these practices more uniformly in our own registries and repositories and reduce the burdens of adoption. In addition to completing the adoption of these best practices, SciCodes will address topics such as tracking the impact of good metadata, improving interoperability between registries, and making our metadata discoverable by search engines and services such as Google Scholar, ORCID, and discipline indexers.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>30 https://help.zenodo.org/ a requirement across geographic boundaries and legal jurisdictions. An authorship policy is critical in facilitating collaborative work among researchers and minimizing the chances for disputes. Finally, retention and end-of-life policies increase the trust and integrity of the repository service.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Number of resources supporting each best practice, out of 14 resources.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>32 http://scicodes.net</ns0:figDesc></ns0:figure>
<ns0:note place='foot' n='14'>https://biotools.readthedocs.io/en/latest/api_usage_guide.html 15 https://daac.ornl.gov/submit/ 16 https://geodynamics.org/cig/dev/code-donation/checklist/ 17 https://forum.comses.net/t/archiving-your-model-1-gettingstarted/7377</ns0:note>
<ns0:note place='foot' n='18'>http://credit.niso.org/ 19 https://joss.theoj.org/about#ethics 20 https://joss.readthedocs.io/en/latest/submitting.html#authorship</ns0:note>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Summary of the best practices with recommendations and examples. Manuscript to be reviewed Computer Science 7. State a privacy policy Defines how personal data about users are stored, processed, exchanged, or removed. Example: SciCrunch's privacy policy. • What information is collected and how long it is retained • How the information, especially any personal data, is used • Whether tracking is done, what is tracked, and how; whether cookies are used</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Practice, description and examples</ns0:cell><ns0:cell>Recommendations</ns0:cell></ns0:row><ns0:row><ns0:cell>1. Provide a public scope statement</ns0:cell><ns0:cell>• What is accepted, and acceptable, based on criteria</ns0:cell></ns0:row><ns0:row><ns0:cell>Informs both software depositor and</ns0:cell><ns0:cell>covering scientific discipline, technical</ns0:cell></ns0:row><ns0:row><ns0:cell>resource seeker what the collection does</ns0:cell><ns0:cell>characteristics, and administrative properties</ns0:cell></ns0:row><ns0:row><ns0:cell>and does not contain.</ns0:cell><ns0:cell>• What is not accepted, i.e. characteristics that</ns0:cell></ns0:row><ns0:row><ns0:cell>Example: ASCL editorial policy.</ns0:cell><ns0:cell>preclude their incorporation in the resource</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>• Notable exceptions to these rules, if any</ns0:cell></ns0:row><ns0:row><ns0:cell>2. Provide guidance for users</ns0:cell><ns0:cell>• How to perform common user tasks, like searching</ns0:cell></ns0:row><ns0:row><ns0:cell>Helps users accessing a resource</ns0:cell><ns0:cell>for collection, or accessing the details of an entry</ns0:cell></ns0:row><ns0:row><ns0:cell>understand how to perform tasks like</ns0:cell><ns0:cell>• Answers to questions that are often asked or can be</ns0:cell></ns0:row><ns0:row><ns0:cell>searching, browsing, and retrieving</ns0:cell><ns0:cell>anticipated</ns0:cell></ns0:row><ns0:row><ns0:cell>software entries.</ns0:cell><ns0:cell>• Point of contact for help and questions</ns0:cell></ns0:row><ns0:row><ns0:cell>Example: bio.tools registry API user</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>guide.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>3. Provide guidance to software</ns0:cell><ns0:cell>• Who can or cannot submit entries and/or metadata</ns0:cell></ns0:row><ns0:row><ns0:cell>contributors</ns0:cell><ns0:cell>• Required and optional metadata expected from</ns0:cell></ns0:row><ns0:row><ns0:cell>Specifies who can add or change software</ns0:cell><ns0:cell>software contributors</ns0:cell></ns0:row><ns0:row><ns0:cell>entries and explains the necessary</ns0:cell><ns0:cell>• Procedures for updates, review process, curation</ns0:cell></ns0:row><ns0:row><ns0:cell>processes.</ns0:cell><ns0:cell>process</ns0:cell></ns0:row><ns0:row><ns0:cell>Example: Computational Infrastructure for</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Geodynamics contribution checklist.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>4. 
Establish an authorship policy</ns0:cell><ns0:cell>• How authorship is determined e.g., a stated criteria</ns0:cell></ns0:row><ns0:row><ns0:cell>Ensures that contributors are given due</ns0:cell><ns0:cell>by the contributors and/or the resource</ns0:cell></ns0:row><ns0:row><ns0:cell>credit for their work and to resolve</ns0:cell><ns0:cell>• Policies around making changes to authorship</ns0:cell></ns0:row><ns0:row><ns0:cell>disputes in case of conflict.</ns0:cell><ns0:cell>• Define the conflict resolution processes</ns0:cell></ns0:row><ns0:row><ns0:cell>Example: JOSS authorship policy.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>5. Share your metadata schema</ns0:cell><ns0:cell>• Specify the used schema and its version number.</ns0:cell></ns0:row><ns0:row><ns0:cell>Revealing the metadata schema used helps</ns0:cell><ns0:cell>Add reference to its documentation or official</ns0:cell></ns0:row><ns0:row><ns0:cell>users understand the structure and</ns0:cell><ns0:cell>website. If a custom schema is used, provide</ns0:cell></ns0:row><ns0:row><ns0:cell>properties of the deposited information.</ns0:cell><ns0:cell>documentation.</ns0:cell></ns0:row><ns0:row><ns0:cell>Example: OntoSoft vocabulary from the</ns0:cell><ns0:cell>• Expected metadata when submitting software</ns0:cell></ns0:row><ns0:row><ns0:cell>OntoSoft registry.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>6. Stipulate conditions of use</ns0:cell><ns0:cell>• Legal disclaimers about the responsibility and</ns0:cell></ns0:row><ns0:row><ns0:cell>Documents the terms under which users</ns0:cell><ns0:cell>liability borne by the resource</ns0:cell></ns0:row><ns0:row><ns0:cell>may use the provided resources, including</ns0:cell><ns0:cell>• License and copyright information, both for</ns0:cell></ns0:row><ns0:row><ns0:cell>metadata and software.</ns0:cell><ns0:cell>individual entries and for the resource as a whole</ns0:cell></ns0:row><ns0:row><ns0:cell>Example: DOE CODE acceptable use</ns0:cell><ns0:cell>• Conditions for the use of the metadata, including</ns0:cell></ns0:row><ns0:row><ns0:cell>policy.</ns0:cell><ns0:cell>prohibitions, if any</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>• Preferred format for citing software entries;</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>preferred format for attributing or citing the</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>resource itself</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>14/29</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66090:2:1:NEW 28 Mar 2022)</ns0:note></ns0:figure>
<ns0:note place='foot' n='6'>https://codemeta.github.io/ 7 https://www.library.caltech.edu/caltechdata/news/enhancedsoftware-preservation-now-available-in-caltechdata 8 https://hal.inria.fr/hal-01897934v3/codemeta</ns0:note>
<ns0:note place='foot' n='28'>https://scicrunch.org/page/privacy 29 https://bioconductor.org/developers/package-end-of-life/</ns0:note>
</ns0:body>
" | "Response to reviewers
We would like to thank the reviewer for the last round of comments. Please find the specific
answer to all issues raised inline below. As with the original review, we have included the
original comments in yellow background and italics, while our answer is available in plain
text.
We also made a few additional minor modifications, mainly to better introduce one of the
main outcomes of our work, the creation of the SciCodes consortium, which is a community
of editors and maintainers of software registries and repositories to share, demonstrate and
improve our best practices.
We also attach two versions of the paper: a revision incorporating the new changes and a
version highlighting the differences with the original revision, in red.
Reviewer 2
I would like to first commend the authors' efforts in addressing the reviewers' questions in
detail and revising their manuscript accordingly. While I'm overall satisfied with the revision, if
I had to be picky, I would suggest the authors integrate what is currently in Appendix C into
the Methodology section. Both Reviewer #1 and I urged the authors to provide descriptive
statistics and/or visualization of their initial survey results to substantiate the empirical
grounds of this paper, but the rewritten Methodology section primarily supplies procedural
details rather than survey details we have requested.
For instance, lines 154--156 can be re-written as: Participants introduced themselves and
their resources by providing some basic information, including repository name, starting
year, number of records, target audience (discipline-specific or general), as well as services
provided (e.g., support of software citation, software deposits, and doi minting). Figure/Table
1 presents an overview of the responses.
Question | Yes | No | Other
--- | --- | --- | ---
Q1 | raw count (%) | raw count (%) | raw count (%)
Q2 | raw count (%) | raw count (%) | raw count (%)
We have added Table 1 in the methodology section, with a summary of the questions as
suggested. Please note that, in order to be consistent with the text in the methodology
section, the included table shows only the responses from those resources which
participated in the first task force meeting. Since then, 16 additional responses have been
collected. We now show all collected responses in Appendix C (Table C2), and have added
tables C3 and C4 showing the distribution of entries in repositories and their date of
creation.
You can then briefly comment on the results and direct readers to Appendix C for the full list
of questions and other summaries of the results if available (e.g., distribution of supported
unique identifier types, number of records, and years when the repository started operating).
We have edited the section as suggested and extended appendix C with two tables to reflect
on the distribution of number of records and date of creation.
On a second thought, it seems more appropriate to use a table instead of a figure to present
the results, which saves space, but the final decision should be at the authors' discretion. If a
figure is still preferred, I would encourage the authors to experiment with other types of
visualization such as a stacked bar chart or a divergent stacked bar chart. The rationale is
that we know the responses for each question sum to 30 anyway, and there are only three
possible answers for each (Yes/No/Other), so there is no real benefit of having separate bars
to represent possible answers for each question.
We have added a table instead of a figure, as suggested.
Other minor points
- The quotation marks in lines 1018--1019 do not look right, possibly an issue related to
LaTex.
We have fixed this in the revised manuscript, lines 161-162 (after the suggested changes)
" | Here is a paper. Please give your review comments after reading it. |
701 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Scientific software registries and repositories improve software findability and research transparency, provide information for software citations, and foster preservation of computational methods in a wide range of disciplines. Registries and repositories play a critical role by supporting research reproducibility and replicability, but developing them takes effort and few guidelines are available to help prospective creators of these resources. To address this need, the FORCE11 Software Citation Implementation Working Group convened a Task Force to distill the experiences of the managers of existing resources in setting expectations for all stakeholders. In this paper, we describe the resultant best practices which include defining the scope, policies, and rules that govern individual registries and repositories, along with the background, examples, and collaborative work that went into their development. We believe that establishing specific policies such as those presented here will help other scientific software registries and repositories better serve their users and their disciplines.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>management of its contents and allowed usages as well as clarifying positions on sensitive issues such as attribution.</ns0:p><ns0:p>In this paper, we expand on our pre-print 'Nine Best Practices for Research Software Registries and Repositories: A Concise Guide' (Task Force on Best Practices for Software <ns0:ref type='bibr'>Registries et al., 2020)</ns0:ref> to describe our best practices and their development.</ns0:p><ns0:p>Our guidelines are actionable, have a general purpose, and reflect the discussion of a community of more than 30 experts who handle over 14 resources (registries or repositories) across different scientific domains. Each guideline provides a rationale, suggestions, and examples based on existing repositories or registries. To reduce repetition, we refer to registries and repositories collectively as 'resources.'</ns0:p><ns0:p>The remainder of the paper is structured as follows. We first describe background and related efforts in Section 2, followed by the methodology we used when structuring the discussion for creating the guidelines (Section 3). We then describe the nine best practices in Section 4, followed by a discussion (Section 5). Section 6 concludes the paper by summarizing current efforts to continue the adoption of the proposed practices. Those who contributed to the development of this paper are listed in Appendix A, and links to example policies are given in Appendix B. Appendix C provides updated information about resources that have participated in crafting the best practices and an overview of their main attributes.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>BACKGROUND</ns0:head><ns0:p>In the last decade, much was written about a reproducibility crisis in science <ns0:ref type='bibr' target='#b6'>(Baker, 2016)</ns0:ref> stemming in large part from the lack of training in programming skills and the unavailability of computational resources used in publications <ns0:ref type='bibr' target='#b53'>(Merali, 2010;</ns0:ref><ns0:ref type='bibr' target='#b60'>Peng, 2011;</ns0:ref><ns0:ref type='bibr' target='#b56'>Morin et al., 2012)</ns0:ref>. On these grounds, national and international governments have increased their interest in releasing artifacts of publicly-funded research to the public (Office of Science and Technology Policy, 2016; Directorate-General for Research and Innovation (European Commission), 2018; Australian <ns0:ref type='bibr' target='#b5'>Research Council, 2018;</ns0:ref><ns0:ref type='bibr' target='#b14'>Chen et al., 2019</ns0:ref>; Ministère de l'Enseignement supérieur, de la Recherche et de l'Innovation, 2021), and scientists have appealed to colleagues in their field to release software to improve research transparency <ns0:ref type='bibr' target='#b72'>(Weiner et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b7'>Barnes, 2010;</ns0:ref><ns0:ref type='bibr' target='#b39'>Ince et al., 2012)</ns0:ref> and efficiency <ns0:ref type='bibr' target='#b33'>(Grosbol and Tody, 2010)</ns0:ref>. Open Science initiatives such as RDA and FORCE11 have emerged as a response to these calls for greater transparency and reproducibility. Journals introduced policies encouraging (or even requiring) that data and software be openly available to others <ns0:ref type='bibr'>(Editorial staff, 2019;</ns0:ref><ns0:ref type='bibr' target='#b23'>Fox et al., 2021)</ns0:ref>. New tools have been developed to facilitate depositing research data and software in a repository <ns0:ref type='bibr' target='#b8'>(Baruch, 2007;</ns0:ref><ns0:ref type='bibr' target='#b13'>CERN and OpenAIRE, 2013;</ns0:ref><ns0:ref type='bibr' target='#b18'>Di Cosmo and Zacchiroli, 2017;</ns0:ref><ns0:ref type='bibr' target='#b16'>Clyburne-Sherin et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b11'>Brinckman et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b69'>Trisovic et al., 2020)</ns0:ref> and consequently, make them citable so authors and other contributors gain recognition and credit for their work <ns0:ref type='bibr' target='#b63'>(Soito and Hwang, 2017;</ns0:ref><ns0:ref type='bibr' target='#b20'>Du et al., 2021)</ns0:ref>.</ns0:p><ns0:p>Support for disseminating research outputs has been proposed with <ns0:ref type='bibr'>FAIR and FAIR4RS</ns0:ref> principles that state shared digital artifacts, such as data and software, should be Findable, Accessible, Interoperable, and Reusable <ns0:ref type='bibr'>(Wilkinson et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b51'>Lamprecht et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b47'>Katz et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b15'>Chue Hong et al., 2021)</ns0:ref>. Conforming with the FAIR Manuscript to be reviewed Computer Science principles for published software <ns0:ref type='bibr' target='#b51'>(Lamprecht et al., 2020)</ns0:ref> requires facilitating its discoverability, preferably in domain-specific resources <ns0:ref type='bibr'>(Jiménez et al., 2017)</ns0:ref>. 
These resources should contain machine-readable metadata to improve the discoverability (Findable) and accessibility (Accessible) of research software through search engines or from within the resource itself. Furthering interoperability in FAIR is aided through the adoption of community standards e.g., schema.org <ns0:ref type='bibr' target='#b34'>(Guha et al., 2016)</ns0:ref> or the ability to translate from one resource to another. The CodeMeta initiative <ns0:ref type='bibr' target='#b45'>(Jones et al., 2017)</ns0:ref> achieves this translation by creating a 'Rosetta Stone' which maps the metadata terms used by each resource to a common schema. The CodeMeta schema 6 is an extension of schema.org which adds ten new fields to represent software-specific metadata. To date, CodeMeta has been adopted for representing software metadata by many repositories. 7,8 As the usage of computational methods continues to grow, recommendations for improving research software have been proposed <ns0:ref type='bibr' target='#b66'>(Stodden et al., 2016)</ns0:ref> in many areas of science and software, as can be seen by the series of 'Ten Simple Rules' articles offered by PLOS <ns0:ref type='bibr' target='#b17'>(Dashnow et al., 2014)</ns0:ref>, sites such as AstroBetter, 9 courses to improve skills such as those offered by The Carpentries, 10 and attempts to measure the adoption of recognized best practices <ns0:ref type='bibr' target='#b61'>(Serban et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b70'>Trisovic et al., 2022)</ns0:ref>. Our quest for best practices complements these efforts by providing guides to the specific needs of research software registries and repositories.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>METHODOLOGY</ns0:head><ns0:p>The best practices presented in this paper were developed by an international Task Force of the FORCE11 Software Citation Implementation Working Group (SCIWG).</ns0:p><ns0:p>The Task Force was proposed in June 2018 by author Alice Allen, with the goal of developing a list of best practices for software registries and repositories. Working Group members and a broader group of managers of domain specific software resources formed the inaugural group. The resulting Task Force members were primarily managers and editors of resources from Europe, United States, and Australia. Due to the range in time zones, the Task Force held two meetings seven hours apart, with the expectation that, except for the meeting chair, participants would attend one of the two meetings. We generally refer to two meetings on the same day with the singular 'meeting' in the discussions to follow.</ns0:p><ns0:p>The inaugural Task Force meeting <ns0:ref type='bibr'>(Feb, 2019)</ns0:ref> Table <ns0:ref type='table'>1</ns0:ref>. Overview of the information shared by the 14 resources which participated in the first Task Force meeting.</ns0:p><ns0:p>and DOI minting). Table <ns0:ref type='table'>1</ns0:ref> presents an overview of the collected responses, which highlight the efforts of the Task Force chairs to bring together both discipline-specific and general purpose resources. The 'Other' category indicates that the answer needed clarifying text (e.g., for the question 'is the repository actively curated?' some repositories are not manually curated, but have validation checks). Appendix C provides additional information on the questions asked to resource managers (Table <ns0:ref type='table'>C</ns0:ref>.1) and their responses (tables C.2, C.3 and C.4).</ns0:p><ns0:p>During the inaugural Task Force meeting, the chair laid out the goal of the Task Force, and the group was invited to brainstorm to identify commonalities for building a list of best practices. Participants also shared challenges they had faced in running their resources and policies they had enacted to manage these resources. The result of the brainstorming and discussion was a list of ideas collected in a common document.</ns0:p><ns0:p>Starting in May 2019 and continuing through the rest of 2019, the Task Force met on the third Thursday of each month and followed an iterative process to discuss, add to, and group ideas; refine and clarify the ideas into different practices, and define the practices more precisely. It was clear from the onset that, though our resources have goals in common, they are also very diverse and would be best served by best practices that were descriptive rather than prescriptive. We reached consensus on whether a practice should be a best practice through discussion and informal voting.</ns0:p><ns0:p>Each best practice was given a title and a list of questions or needs that it addressed.</ns0:p><ns0:p>Our initial plan aimed at holding two Task Force meetings on the same day each month, in order to follow a common agenda with independent discussions built upon the previous month's meeting. However, the later meeting was often advantaged by the earlier discussion. For instance, if the early meeting developed a list of examples for one of the guidelines, the late meeting then refined and added to the list. 
Hence, discussions were only duplicated when needed, e.g., where there was no consensus in the early group, and often proceeded in different directions according to the group's expertise and interest. Though we had not anticipated this, we found that holding two meetings each month on the same day accelerated the work, as work done in the second meeting of the day generally continued rather than repeating work done in the first meeting.</ns0:p></ns0:div>
<ns0:div><ns0:p>The resulting consensus from the meetings produced a list of the most broadly applicable practices, which became the initial list of best practices participants drew from during a two-day workshop, funded by the Sloan Foundation and held at the University of Maryland, College Park, in November 2019 (Scientific Software Registry Collaboration Workshop). A goal of the workshop was to develop the final recommendations on best practices for repositories and registries to the FORCE11 SCIWG.</ns0:p><ns0:p>The workshop included participants outside the Task Force, resulting in a broader set of contributions to the final list. In 2020, this group made additional refinements to the best practices during virtual meetings and through online collaborative writing, producing the guidelines described in the next section. The Task Force then transitioned into the SciCodes consortium. 11 SciCodes is a permanent community for research software registries and repositories with a particular focus on these best practices. SciCodes continued to collect information about involved registries and repositories, which are listed in Appendix C. We also include some analysis of the number of entries and date of creation of member resources. Appendix A lists the people who participated in these efforts.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>BEST PRACTICES FOR REPOSITORIES AND REGISTRIES</ns0:head><ns0:p>Our recommendations are provided as nine separate policies or statements, each presented below with an explanation as to why we recommend the practice, what the practice describes, and specific considerations to take into account. The last paragraph of each best practice includes one or two examples and a link to Appendix B, which contains many examples from different registries and repositories.</ns0:p><ns0:p>These nine best practices, though not an exhaustive list, are applicable to the varied resources represented in the Task Force, so are likely to be broadly applicable to other scientific software repositories and registries. We believe that adopting these practices will help document, guide, and preserve these resources, and put them in a stronger position to serve their disciplines, users, and communities. 12</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Provide a public scope statement</ns0:head><ns0:p>The landscape of research software is diverse and complex due to the overlap between scientific domains, the variety of technical properties and environments, and the additional considerations resulting from funding, authors' affiliation, or intellectual property. A scope statement clarifies the type of software contained in the repository or indexed in the registry. Precisely defining a scope, therefore, helps those users of the resource who are looking for software to better understand the results they obtained.</ns0:p><ns0:p>Moreover, given that many of these resources accept submission of software packages, 11 http://scicodes.net 12 Please note that the information provided in this paper does not constitute legal advice.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The scope statement should describe:</ns0:p><ns0:p>• What is accepted, and acceptable, based on criteria covering scientific discipline, technical characteristics, and administrative properties</ns0:p><ns0:p>• What is not accepted, i.e. characteristics that preclude their incorporation in the resource</ns0:p><ns0:p>• Notable exceptions to these rules, if any Particular criteria of relevance include the scientific community being served and the types of software listed in the registry or stored in the repository, such as source code, compiled executables, or software containers. The scope statement may also include criteria that must be satisfied by accepted software, such as whether certain software quality metrics must be fulfilled or whether a software project must be used in published research. Availability criteria can be considered, such as whether the code has to be publicly available, be in the public domain and/or have a license from a predefined set, or whether software registered in another registry or repository will be accepted.</ns0:p><ns0:p>An illustrating example of such a scope statement is the editorial policy 13 published by the Astrophysics Source Code Library (ASCL) <ns0:ref type='bibr' target='#b0'>(Allen et al., 2013)</ns0:ref>, which states that it includes only software source code used in published astronomy and astrophysics research articles, and specifically excludes software available only as a binary or web service. Though the ASCL's focus is on research documented in peer-reviewed journals, its policy also explicitly states that it accepts source code used in successful theses. Other examples of scope statements can be found in Appendix B.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Provide guidance for users</ns0:head><ns0:p>Users accessing a resource to search for entries and browse or retrieve the description(s) of one or more software entries have to understand how to perform such actions. Although this guideline potentially applies to many public online resources, especially research databases, the potential complexity of the stored metadata and the curation mechanisms can seriously impede the understandability and usage of software registries and repositories.</ns0:p><ns0:p>User guidance material may include:</ns0:p><ns0:p>• How to perform common user tasks, such as searching the resource, or accessing the details of an entry</ns0:p><ns0:p>• Answers to questions that are often asked or can be anticipated, e.g., with Frequently Asked Questions or tips and tricks pages</ns0:p><ns0:p>• Who to contact for questions or help A separate section in these guidelines on the Conditions of use policy covers terms of use of the resource and how best to cite records in a resource and the resource itself. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Guidance for users who wish to contribute software is covered in the next section, Provide guidance to software contributors. Topics to consider when writing a contributor policy include whether the author(s) of a software entry will be contacted if the contributor is not also an author and whether contact is a condition or side-effect of the submission. Additionally, a contributor policy should specify how persistent identifiers are assigned (if used) and should state that depositors must comply with all applicable laws and not be intentionally malicious.</ns0:p><ns0:p>Such material is provided in resources such as the Computational Infrastructure for Geodynamics <ns0:ref type='bibr' target='#b38'>(Hwang and Kellogg, 2017)</ns0:ref> software contribution checklist 16 and the CoMSES Net Computational Model Library <ns0:ref type='bibr' target='#b42'>(Janssen et al., 2008)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.4'>Establish an authorship policy</ns0:head><ns0:p>Because research software is often a research product, it is important to report authorship accurately, as it allows for proper scholarly credit and other types of attributions <ns0:ref type='bibr' target='#b62'>(Smith et al., 2016)</ns0:ref>. However, even though authorship should be defined at the level of a given project, it can prove complicated to determine <ns0:ref type='bibr' target='#b4'>(Alliez et al., 2019)</ns0:ref>.</ns0:p><ns0:p>Roles in software development can widely vary as contributors change with time and versions, and contributions are difficult to gauge beyond the 'commit,' giving rise to complex situations. In this context, establishing a dedicated policy ensures that people are given due credit for their work. The policy also serves as a document that administrators can turn to in case disputes arise and allows proactive problem mitigation, rather than having to resort to reactive interpretation. Furthermore, having an authorship policy mirrors similar policies by journals and publishers and thus is part of a larger trend. Note that the authorship policy will be communicated at least partially to users through guidance provided to software contributors. Resource maintainers should ensure this policy remains consistent with the citation policies for the registry or repository (usually, the citation requirements for each piece of research software are under the authority of its owners).</ns0:p><ns0:p>The authorship policy should specify:</ns0:p><ns0:p>• How authorship is determined e.g., a stated criteria by the contributors and/or the resource • Policies around making changes to authorship • The conflict resolution processes adopted to handle authorship disputes When defining an authorship policy, resource maintainers should take into consideration whether those who are not coders, such as software testers or documentation maintainers, will be identified or credited as authors, as well as criteria for ordering the list of authors in cases of multiple authors, and how the resource handles large numbers of authors and group or consortium authorship. Resources may also include guidelines about how changes to authorship will be handled so each author receives proper credit for their contribution. Guidelines can help facilitate determining every contributors' role. In particular, the use of a credit vocabulary, such as the Contributor Roles Taxonomy <ns0:ref type='bibr' target='#b3'>(Allen et al., 2019)</ns0:ref>, to describe authors' contributions should be considered for this purpose. 18 An example of authorship policy is provided in the Ethics Guidelines 19 and the submission guide authorship section 20 of the Journal of Open Source Software <ns0:ref type='bibr' target='#b48'>(Katz et al., 2018)</ns0:ref>, which provides rules for inclusion in the authors list. Additional examples of authorship policies can be found in Appendix B.</ns0:p></ns0:div>
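As a small, hypothetical illustration of the kind of structured credit information an authorship policy might call for, the sketch below records contributors together with role labels drawn from a credit vocabulary such as the Contributor Roles Taxonomy (CRediT). The names, ORCID placeholders, chosen roles, and field layout are illustrative only and do not reflect any particular resource's schema.

```python
# Hypothetical contributor records with CRediT-style role labels.
contributors = [
    {"name": "Ada Doe",   "orcid": "0000-0000-0000-0001",
     "roles": ["Conceptualization", "Software", "Writing - original draft"]},
    {"name": "Raj Patel", "orcid": "0000-0000-0000-0002",
     "roles": ["Software", "Validation"]},
    {"name": "Mei Lin",   "orcid": None,  # documentation-only contributor
     "roles": ["Writing - review & editing"]},
]

# A resource could derive an ordered author list from such records,
# e.g., listing everyone credited with at least one role.
author_list = [c["name"] for c in contributors if c["roles"]]
print(author_list)
```

Recording roles explicitly in this way also gives administrators something concrete to point to when authorship changes or disputes have to be resolved.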
<ns0:div><ns0:head n='4.5'>Document and share your metadata schema</ns0:head><ns0:p>The structure and semantics of the information stored in registries and repositories is sometimes complex, which can hinder the clarity, discovery, and reuse of the entries A metadata schema mapped to other schemas and an API specification can improve the interoperability between registries and repositories. This practice should specify:</ns0:p><ns0:p>• The schema used and its version number. If a standard or community schema, such as CodeMeta <ns0:ref type='bibr' target='#b45'>(Jones et al., 2017)</ns0:ref> or schema.org <ns0:ref type='bibr' target='#b34'>(Guha et al., 2016)</ns0:ref> is used, the resource should reference its documentation or official website. If a custom schema is used, formal documentation such as a description of the schema and/or a data dictionary should be provided.</ns0:p><ns0:p>• Expected metadata when submitting software, including which fields are required and which are optional, and the format of the content in each field.</ns0:p><ns0:p>To improve the readability of the metadata schema and facilitate its translation to other standards, resources may provide a mapping (from the metadata schema used in the resource) to published standard schemas, through the form of a 'cross-walk' (e.g., the CodeMeta cross-walk 21 ) and include an example entry from the repository that illustrates all the fields of the metadata schema. For instance, extensive documentation 22 is available for the biotoolsSchema <ns0:ref type='bibr' target='#b41'>(Ison et al., 2021)</ns0:ref> format, which is used in the bio.tools registry. Another example is the OntoSoft vocabulary, 23 used by the OntoSoft registry <ns0:ref type='bibr' target='#b30'>(Gil et al., 2015</ns0:ref><ns0:ref type='bibr' target='#b28'>(Gil et al., , 2016) )</ns0:ref> and available in both machine-readable and human readable formats. Additional examples of metadata schemas can be found in Appendix B.</ns0:p></ns0:div>
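As a minimal sketch of the 'cross-walk' idea mentioned above, the following Python fragment maps a registry's custom field names onto CodeMeta terms. The custom field names and the entry shown are hypothetical, and a real crosswalk would cover many more fields and handle fields with no direct equivalent.

```python
# Hypothetical crosswalk from a registry's custom schema to CodeMeta terms.
CROSSWALK = {
    "title": "name",
    "summary": "description",
    "repo_url": "codeRepository",
    "language": "programmingLanguage",
    "licence": "license",
}

def to_codemeta(entry: dict) -> dict:
    """Translate a custom registry entry into CodeMeta-style keys,
    silently dropping fields the crosswalk does not cover."""
    return {CROSSWALK[key]: value for key, value in entry.items() if key in CROSSWALK}

local_entry = {
    "title": "ExampleSolver",
    "summary": "Toy solver used to illustrate schema translation.",
    "repo_url": "https://example.org/example-solver",
    "language": "Python",
    "licence": "MIT",
    "internal_id": 42,   # no CodeMeta equivalent in this toy mapping
}
print(to_codemeta(local_entry))
```

Publishing such a mapping alongside the schema documentation makes it easier for other resources and harvesters to ingest records without guessing at field semantics.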
<ns0:div><ns0:head n='4.6'>Stipulate conditions of use</ns0:head><ns0:p>The conditions of use document the terms under which users may use the contents provided by a website. In the case of software registries and repositories, these conditions should specifically state how the metadata regarding the entities of a resource can be used, attributed, and/or cited, and provide information about the licenses used for the code and binaries. This policy can forestall potential liabilities and difficulties that may arise, such as claims of damage for misinterpretation or misapplication of metadata. In addition, the conditions of use should clearly state how the metadata can and cannot be used, including for commercial purposes and in aggregate form. This document should include:</ns0:p><ns0:p>• Legal disclaimers about the responsibility and liability borne by the registry or repository • License and copyright information, both for individual entries and for the registry or repository as a whole governs the metadata, if licensing requirements apply for findings and/or derivatives of the resource, and whether there are differences in the terms and license for commercial versus noncommercial use. Restrictions on the use of the metadata may also be included, as well as a statement to the effect that the registry or repository makes no guarantees about completeness and is not liable for any damages that could arise from the use of the information. Technical restrictions, such as conditions of use of the API (if one is available), may also be mentioned.</ns0:p><ns0:p>Conditions of use can be found for instance for DOE CODE <ns0:ref type='bibr' target='#b22'>(Ensor et al., 2017)</ns0:ref>, which in addition to the general conditions of use 24 specifies that the rules for usage of the hosted code 25 are defined by their respective licenses. Additional examples of conditions of use policies can be found in Appendix B.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.7'>State a privacy policy</ns0:head><ns0:p>Privacy policies define how personal data about users are stored, processed, exchanged or removed. Having a privacy policy demonstrates a strong commitment to the privacy of users of the registry or repository and allows the resource to comply with the legal requirement of many countries in addition to those a home institution and/or funding agencies may impose.</ns0:p><ns0:p>The privacy policy of a resource should describe:</ns0:p><ns0:p>• What information is collected and how long it is retained • How the information, especially any personal data, is used • Whether tracking is done, what is tracked, and how (e.g., Google Analytics)</ns0:p><ns0:p>• Whether cookies are used When writing a privacy policy, the specific personal data which are collected should be detailed, as well as the justification for their resource, and whether these data are sold and shared. Additionally, one should list explicitly the third-party tools used to collect analytic information and potentially reference their privacy policies. If users can receive emails as a result of visiting or downloading content, such potential solicitations or notifications should be announced. Measures taken to protect users' privacy and whether the resource complies with the European Union Directive on General Data Protection Regulation 26 (GDPR) or other local laws, if applicable, should be explained. 27 As a precaution, the statement can reserve the right to make 24 https://www.osti.gov/disclaim 25 https://www.osti.gov/doecode/faq#are-there-restrictions 26 https://gdpr-info.eu/ 27 In the case of GDPR, the regulation applies to all European user personal data, even if the resource is not located in Europe.</ns0:p></ns0:div>
<ns0:div><ns0:p>changes to this privacy policy. Finally, a mechanism by which users can request the removal of such information should be described.</ns0:p><ns0:p>For example, SciCrunch's <ns0:ref type='bibr' target='#b31'>(Grethe et al., 2014)</ns0:ref> privacy policy 28 details what kind of personal information is collected, how it is collected, and how it may be reused, including by third-party websites through the use of cookies. Additional examples of privacy policies can be found in Appendix B.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.8'>Provide a retention policy</ns0:head><ns0:p>Many software registries and repositories aim to facilitate the discovery and accessibility of the objects they describe, e.g., enabling search and citation, by making the corresponding records permanently accessible. However, for various reasons, even in such cases maintainers and curators may have to remove records. Common examples include removing entries that are outdated, no longer meet the scope of the registry, or are found to be in violation of policies. The resource should therefore document retention goals and procedures so that users and depositors are aware of them.</ns0:p><ns0:p>The retention policy should describe:</ns0:p><ns0:p>• The length of time metadata and/or files are expected to be retained</ns0:p><ns0:p>• Under what conditions metadata and/or files are removed • Who has the responsibility and ability to remove information • Procedures to request that metadata and/or files be removed</ns0:p><ns0:p>The policy should take into account whether best practices for persistent identifiers are followed, including resolvability, retention, and non-reuse of those identifiers. The retention time provided by the resource should not be too prescriptive (e.g., 'for the next 10 years'), but rather it should fit within the context of the underlying organization(s) and its funding. This policy should also state who is allowed to edit metadata, delete records, or delete files, and how these changes are performed to preserve the broader consistency of the registry. Finally, the process by which data may be taken offline and archived as well as the process for its possible retrieval should be thoroughly documented.</ns0:p><ns0:p>As an example, Bioconductor <ns0:ref type='bibr' target='#b27'>(Gentleman et al., 2004</ns0:ref>) has a deprecation process through which software packages are removed if they cannot be successfully built or tested, or upon specific request from the package maintainer. Their policy 29 specifies who initiates this process and under which circumstances, as well as the successive steps that lead to the removal of the package. Additional examples of retention policies can be found in Appendix B.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.9'>Disclose your end-of-life policy</ns0:head><ns0:p>Despite their usefulness, the long-term maintenance, sustainability, and persistence of online scientific resources remain a challenge, and published web services or databases can disappear after a few years <ns0:ref type='bibr' target='#b71'>(Veretnik et al., 2008;</ns0:ref> <ns0:ref type='bibr' target='#b50'>Kern et al., 2020)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Sharing a clear end-of-life policy increases trust in the community served by a registry or repository. It demonstrates a thoughtful commitment to users by informing them that provisions for the resource have been considered should the resource close or otherwise end its services for its described artifacts. Such a policy sets expectations and provides reassurance as to how long the records within the registry will be findable and accessible in the future. This policy should describe:</ns0:p><ns0:p>• Under what circumstances the resource might end its services</ns0:p><ns0:p>• What consequences would result from closure • What will happen to the metadata and/or the software artifacts contained in the resource in the event of closure • If long-term preservation is expected, where metadata and/or software artifacts will be migrated for preservation • How a migration will be funded Publishing an end-of-life policy is an opportunity to consider, in the event a resource is closed, whether the records will remain available, and if so, how and for whom, and under which conditions, such as archived status or 'read-only.' The restrictions applicable to this policy, if any, should be considered and detailed. Establishing a formal agreement or memorandum of understanding with another registry, repository, or institution to receive and preserve the data or project, if applicable, might help to prepare for such a liability.</ns0:p><ns0:p>Examples of such policies include the Zenodo end-of-life policy, 30 which states that if Zenodo ceases its services, the data hosted in the resource will be migrated and the DOIs provided would be updated to resolve to the new location (currently unspecified). Additional examples of end-of-life policies can be found in Appendix B.</ns0:p><ns0:p>A summary of the practices presented in this section can be found in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>DISCUSSION</ns0:head><ns0:p>The best practices described above serve as a guide for repositories and registries to provide better service to their users, ranging from software developers and researchers to publishers and search engines, and enable greater transparency about the operation of their described resources. Implementing our practices provides users with significant information about how different resources operate, while preserving important institutional knowledge, standardizing expectations, and guiding user interactions.</ns0:p><ns0:p>For instance, a public scope statement and guidance for users may directly impact usability and, thus, the popularity of the repository. Resources including tools with a simple design and unambiguous commands, as well as infographic guides or video tutorials, ease the learning curve for new users. The guidance for software contributions, conditions of use, and sharing the metadata schema used may help eager users Example: Bioconductor package deprecation.</ns0:p><ns0:p>• The length of time metadata and/or files are expected to be retained • Under what conditions metadata and/or files are removed • Who has the responsibility and ability to remove information; procedures to request that metadata and/or files be removed 9. Disclose end-of-life policy Informs both users and depositors of how long the records within the resource will be findable and accessible in the future.</ns0:p><ns0:p>Example: Zenodo end-of-life policy.</ns0:p><ns0:p>• Circumstances under which the resource might end its services • What consequences would result from closure • What will happen to the metadata and/or the software artifacts contained in the resource in the event of closure • If long-term preservation is expected, where metadata and/or software artifacts will be migrated for preservation; how a migration will be funded</ns0:p><ns0:p>Policies affecting a single community or domain were deliberately omitted when developing the best practices. First, an exhaustive list would have been a barrier to adoption and not applicable to every repository since each has a different perspective, audience, and motivation that drives policy development for their organization. Second, best practices that regulate the content of a resource are typically domain-specific to the artifact and left to resources to stipulate based on their needs. Participants in the 2019 Scientific Software Registry Collaboration Workshop were surprised to find that only four metadata elements were shared by all represented resources. 31 The diversity of our resources precludes prescriptive requirements, such as requiring specific metadata for records, so these were also deliberately omitted in the proposed best practices.</ns0:p><ns0:p>Hence, we focused on broadly applicable practices considered important by various 31 The elements were: software name, description, keywords, and URL.</ns0:p></ns0:div>
<ns0:div><ns0:p>resources. For example, amongst the participating registries and repositories, very few had codes of conduct that govern the behavior of community members. Codes of conduct are warranted if resources are run as part of a community, especially if comments and reviews are solicited for deposits. In contrast, a code of conduct would be less useful for resources whose primary purpose is to make software and software metadata available for reuse. However, this does not negate their importance and their inclusion as best practices in other arenas concerning software.</ns0:p><ns0:p>As noted by the FAIR4RS movement, software is different from data, motivating the need for a separate effort to address software resources <ns0:ref type='bibr' target='#b51'>(Lamprecht et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b49'>Katz et al., 2016)</ns0:ref>. Even so, there are some similarities, and our effort complements and aligns well with recent guidelines developed in parallel to increase the transparency, responsibility, user focus, sustainability, and technology of data repositories. For example, both the TRUST Principles <ns0:ref type='bibr' target='#b52'>(Lin et al., 2020)</ns0:ref> and the CoreTrustSeal Requirements (CoreTrustSeal Standards and Board, 2019) call for a repository to provide information on its scope and to list the terms of use of its metadata in order to be considered compliant with TRUST or CoreTrustSeal, which aligns with our practices 'Provide a public scope statement' and 'Stipulate conditions of use'. CoreTrustSeal and TRUST also require that a repository consider continuity of access, which we have expressed as the practice 'Disclose your end-of-life policy'. Our best practices differ in that they do not address, for example, staffing needs or professional development for staff, as CoreTrustSeal requires, nor do our practices address protections against cyber or physical security threats, as the TRUST principles suggest. Inward-facing policies, such as documenting internal workflows and practices, are generally good for reducing operational risks, but internal management practices were considered out of scope for our guidelines.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>1</ns0:ref> shows the number of resources that support (partially or fully) each best practice. Though we see the proposed best practices as critical, many of the repositories that have actively participated in the discussions (14 resources in total) have yet to implement every one of them. We have observed that the first three practices (providing a public scope statement and guidance for users and for software contributors) have the widest adoption, while the retention, end-of-life, and authorship policies have the least. Understanding the lag in implementation across all of the best practices requires further engagement with the community.</ns0:p><ns0:p>Improving the adoption of our guidelines is one of the goals of SciCodes, 32 a recent consortium of scientific software registries and repositories. SciCodes evolved from the Task Force as a permanent community to continue the dialogue and share information between domains, including sharing of tools and ideas.
SciCodes has also prioritized improving software citation (complementary to the efforts of the FORCE11 SCIWG) and tracking the impact of metadata and interoperability. In addition, Sci-Codes aims to understand barriers to implementing policies, ensure consistency between various best practices, and continue advocacy for software support by continuing dialogue between registries, repositories, researchers, and other stakeholders. </ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>CONCLUSIONS</ns0:head><ns0:p>The dissemination and preservation of research material, where repositories and registries play a key role, lies at the heart of scientific advancement. This paper introduces nine best practices for research software registries and repositories. The practices are an outcome of a Task Force of the FORCE11 Software Citation Implementation Working Group and reflect the discussion, collaborative experiences, and consensus of over 30 experts and 14 resources.</ns0:p><ns0:p>The best practices are non-prescriptive, broadly applicable, and include examples and guidelines for their adoption by a community. They specify establishing the working domain (scope) and guidance for both users and software contributors, address legal concerns with privacy, use, and authorship policies, enhance usability by encouraging metadata sharing, and set expectations with retention and end-of-life policies. However, we believe additional work is needed to raise awareness and adoption across resources from different scientific disciplines. Through the SciCodes consortium, our goal is to continue implementing these practices more uniformly in our own registries and repositories and reduce the burdens of adoption. In addition to completing the adoption of these best practices, SciCodes will address topics such as tracking the impact of good metadata, improving interoperability between registries, and making our metadata discoverable by search engines and services such as Google Scholar, ORCID, and discipline indexers.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:09:66090:3:0:NEW 25 May 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>was attended by eighteen people representing fourteen different resources. Participants introduced themselves and provided some basic information about their resources, including repository name, starting year, number of records, and scope (discipline-specific or general purpose), as well as services provided by each resource (e.g., support of software citation, software deposits,</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>providing a precise and accessible definition will help researchers determine whether they should register or deposit software, and curators by making clear what is out of scope for the resource. Overall, a public scope manages the expectations of the potential depositor as well as the software seeker. It informs both what the resource does and does not contain.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>13 https://ascl.net/wordpress/submissions/editiorial-policy/7/30PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66090:3:0:NEW 25 May 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>18 http://credit.niso.org/ 19 https://joss.theoj.org/about#ethics 20 https://joss.readthedocs.io/en/latest/submitting.html#authorship 9/30 PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66090:3:0:NEW 25 May 2022)Manuscript to be reviewedComputer Scienceincluded in these resources. Publicly posting the metadata schema used for the entries helps individual and organizational users interested in a resource's information understand the structure and properties of the deposited information. The metadata structure helps to inform users how to interact with or ingest records in the resource.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>21 https://codemeta.github.io/crosswalk/ 22 https://biotoolsschema.readthedocs.io/en/latest/ 23 http://ontosoft.org/software 10/30 PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66090:3:0:NEW 25 May 2022) Manuscript to be reviewed Computer Science • Conditions for the use of the metadata, including prohibitions, if any • Preferred format for citing software entries • Preferred format for attributing or citing the resource itself When writing conditions of use, resource maintainers might consider what license</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>28 https://scicrunch.org/page/privacy 29 https://bioconductor.org/developers/package-end-of-life/ 12/30 PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66090:3:0:NEW 25 May 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>30 https://help.zenodo.org/ 13/30 PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66090:3:0:NEW 25 May 2022) Manuscript to be reviewed Computer Science contribute new functionality or tools, which may also help in creating a community around a resource. A privacy policy has become a requirement across geographic 490 boundaries and legal jurisdictions. An authorship policy is critical in facilitating col-491 laborative work among researchers and minimizing the chances for disputes. Finally, 492 retention and end-of-life policies increase the trust and integrity of a repository ser-</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Number of resources supporting each best practice, out of 14 resources.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>When writing guidelines for users, it is advisable to identify the types of users your resource has or could potentially have and corresponding use cases. Guidance itself</ns0:figDesc><ns0:table><ns0:row><ns0:cell>should be offered in multiple forms, such as in-field prompts, linked explanations, and</ns0:cell></ns0:row><ns0:row><ns0:cell>completed examples. Any machine-readable access, such as an API, should be fully</ns0:cell></ns0:row><ns0:row><ns0:cell>described directly in the interface or by providing a pointer to existing documentation,</ns0:cell></ns0:row><ns0:row><ns0:cell>and should specify which formats are supported (e.g., JSON-LD, XML) through</ns0:cell></ns0:row><ns0:row><ns0:cell>content negotiation if it is enabled.</ns0:cell></ns0:row><ns0:row><ns0:cell>Examples of such elements include, for instance, the bio.tools registry (Ison et al.,</ns0:cell></ns0:row><ns0:row><ns0:cell>2019) API user guide, 14 or the ORNL DAAC (ORNL, 2013) instructions for data</ns0:cell></ns0:row><ns0:row><ns0:cell>providers. 15 Additional examples of user guidance can be found in Appendix B.</ns0:cell></ns0:row></ns0:table><ns0:note>4.3 Provide guidance to software contributors Most software registries and repositories rely on a community model, whereby external contributors will provide software entries to the resource. The scope statement will already have explained what is accepted and what is not; the contributor policy addresses who can add or change software entries and the processes involved. The contributor policy should therefore describe: • Who can or cannot submit entries and/or metadata • Required and optional metadata expected for deposited software • Review process, if any • Curation process, if any• Procedures for updates (e.g., who can do it, when it is done, how is it done)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Summary of the best practices with recommendations and examples. PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66090:3:0:NEW 25 May 2022) Manuscript to be reviewed Computer Science 6. Stipulate conditions of use Documents the terms under which users may use the provided resources, including metadata and software. Example: DOE CODE acceptable use policy.• Legal disclaimers about the responsibility and liability borne by the resource • License and copyright information, both for individual entries and for the resource as a whole • Conditions for the use of the metadata, including prohibitions, if any • Preferred format for citing software entries;</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Practice, description and examples</ns0:cell><ns0:cell>Recommendations</ns0:cell></ns0:row><ns0:row><ns0:cell>1. Provide a public scope statement</ns0:cell><ns0:cell>• What is accepted, and acceptable, based on criteria</ns0:cell></ns0:row><ns0:row><ns0:cell>Informs both software depositor and</ns0:cell><ns0:cell>covering scientific discipline, technical</ns0:cell></ns0:row><ns0:row><ns0:cell>resource seeker what the collection does</ns0:cell><ns0:cell>characteristics, and administrative properties</ns0:cell></ns0:row><ns0:row><ns0:cell>and does not contain.</ns0:cell><ns0:cell>• What is not accepted, i.e. characteristics that</ns0:cell></ns0:row><ns0:row><ns0:cell>Example: ASCL editorial policy.</ns0:cell><ns0:cell>preclude their incorporation in the resource</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>• Notable exceptions to these rules, if any</ns0:cell></ns0:row><ns0:row><ns0:cell>2. Provide guidance for users</ns0:cell><ns0:cell>• How to perform common user tasks, like searching</ns0:cell></ns0:row><ns0:row><ns0:cell>Helps users accessing a resource</ns0:cell><ns0:cell>for collection, or accessing the details of an entry</ns0:cell></ns0:row><ns0:row><ns0:cell>understand how to perform tasks like</ns0:cell><ns0:cell>• Answers to questions that are often asked or can be</ns0:cell></ns0:row><ns0:row><ns0:cell>searching, browsing, and retrieving</ns0:cell><ns0:cell>anticipated</ns0:cell></ns0:row><ns0:row><ns0:cell>software entries.</ns0:cell><ns0:cell>• Point of contact for help and questions</ns0:cell></ns0:row><ns0:row><ns0:cell>Example: bio.tools registry API user</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>guide.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>3. Provide guidance to software</ns0:cell><ns0:cell>• Who can or cannot submit entries and/or metadata</ns0:cell></ns0:row><ns0:row><ns0:cell>contributors</ns0:cell><ns0:cell>• Required and optional metadata expected from</ns0:cell></ns0:row><ns0:row><ns0:cell>Specifies who can add or change software</ns0:cell><ns0:cell>software contributors</ns0:cell></ns0:row><ns0:row><ns0:cell>entries and explains the necessary</ns0:cell><ns0:cell>• Procedures for updates, review process, curation</ns0:cell></ns0:row><ns0:row><ns0:cell>processes.</ns0:cell><ns0:cell>process</ns0:cell></ns0:row><ns0:row><ns0:cell>Example: Computational Infrastructure for</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Geodynamics contribution checklist.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>4. 
Establish an authorship policy</ns0:cell><ns0:cell>• How authorship is determined e.g., a stated criteria</ns0:cell></ns0:row><ns0:row><ns0:cell>Ensures that contributors are given due</ns0:cell><ns0:cell>by the contributors and/or the resource</ns0:cell></ns0:row><ns0:row><ns0:cell>credit for their work and to resolve</ns0:cell><ns0:cell>• Policies around making changes to authorship</ns0:cell></ns0:row><ns0:row><ns0:cell>disputes in case of conflict.</ns0:cell><ns0:cell>• Define the conflict resolution processes</ns0:cell></ns0:row><ns0:row><ns0:cell>Example: JOSS authorship policy.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>5. Document and share your metadata</ns0:cell><ns0:cell>• Specify the used schema and its version number.</ns0:cell></ns0:row><ns0:row><ns0:cell>schema</ns0:cell><ns0:cell>Add reference to its documentation or official</ns0:cell></ns0:row><ns0:row><ns0:cell>Revealing the metadata schema used helps</ns0:cell><ns0:cell>website. If a custom schema is used, provide</ns0:cell></ns0:row><ns0:row><ns0:cell>users understand the structure and</ns0:cell><ns0:cell>documentation.</ns0:cell></ns0:row><ns0:row><ns0:cell>properties of the deposited information.</ns0:cell><ns0:cell>• Expected metadata when submitting software</ns0:cell></ns0:row><ns0:row><ns0:cell>Example: OntoSoft vocabulary from the</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>OntoSoft registry.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>14/30</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='6'>https://codemeta.github.io/ 7 https://www.library.caltech.edu/caltechdata/news/enhancedsoftware-preservation-now-available-in-caltechdata 8 https://hal.inria.fr/hal-01897934v3/codemeta</ns0:note>
</ns0:body>
" | "Response letter
We would like to thank the reviewers for their constructive feedback. We have created a
revised version of our manuscript addressing all the comments raised by the reviewers.
Below you will find our answers to your comments. We have included the original comments on a yellow background and in italics, while our answers are in plain text. We have highlighted on a blue background those answers that were already included in previous response letters (in particular, answers addressing comments from Reviewer 1).
We also attach two versions of the paper: a revision incorporating the new changes, and a version highlighting, in red, the differences from the original revision.
Please note that in the version highlighting the differences some of the footnotes are
not properly linked. This is fixed in the revised manuscript.
Reviewer 1 (Anonymous)
As a general comment for the authors and editors, there seems to be a process failure here.
I have reviewed v0 of this paper and I'm not reviewing v2. There seem to have been a v1 in
between, which I have not reviewed. As a consequence of that I have seen neither the
tracked changes between v0 and v1, nor the rebuttal for v1 (which presumably included
author answers to my initial remarks). Hence in this review I'm solely pointing out which
parts of my initial review for v0 are still not addressed. This is not due to a fault of the
authors, but of the review process. Still, as a reviewer, I have to insist on the points for which
I have received neither an answer nor a change.
We understand the reviewer’s frustration. This is indeed the third revision of the paper. For
every revision we have produced a response letter addressing the comments of all
reviewers. We have copied below, on a blue background, those answers that are still valid responses to the concerns raised by the reviewer.
----------------Regarding basic reporting, the only remaining unaddressed point is this:
> Regarding the presentation of the best practice, it is hard to cross-reference the concrete
example of each best practice to the end in the appendix. I recommend including at least 1-2
examples *inline* in the main paper text just after each best practice, and cross-reference
the appendix for *other* examples. That way the main text of the paper become self-contained (and more interesting!) and the appendix can be consulted only for those readers
who want more. (This is a comment critique/suggestion that applies to all best practices, as
they all have examples; which is definitely a good thing!)
The last paragraph of each best practice now includes 1-2 examples inline, with a crosslink
to Appendix B at the end. We’ve included an explanation of this organization at the
beginning of the best practices in Section 4. Table 2 now provides a summary of all best
practices with a link to one example.
Experimental design
No remaining unaddressed points from my initial review remains about experimental design.
It's all good!
We are glad to see that all issues have been sorted out.
Validity of the findings
The following points of my initial review for v0 remains unaddressed (and are in fact the
main reason why I am recommending a major revision):
> 4.1: the wording is weird, the main point is the first one ('What is accepted'), the other two
points feel just redundant restatement of the same notion ('what is not accepted' ->
complement of the first point; and 'notable exceptions' -> which is still a part of the notion of
'what is accepted'). Maybe you should recommend that resources operators just focus on
the properties/criteria of the artifacts that are acceptable, rather than restating the same
notion in different ways.
Some resource editors have found it useful to explicitly declare what their resource does not
include/accept, as unsuitable material is sometimes submitted; it is helpful to have clear and
unambiguous language publicly available to point to as to why a submission may be
unsuitable.
We have edited the example (line 244) to include: “ ...articles, and specifically excludes
software available only as a binary or web service. Though the ASCL's focus is on research
documented in peer-reviewed journals, its policy also explicitly states that it accepts source
code used in successful theses.” This covers what is accepted, what is not accepted, and an
exception (a thesis), thus demonstrating points 2 and 3 of this practice.
> 4.4 'Also, particular care should be taken to maintain the consistency of this policy with the
citation policies for the registry or repository.' -> I have no idea what this means, it should be
better explained/clarified in the text.
What we mean to say here is that the authorship policy and the way software authors and
contributors are presented in the repository should not prevent users accessing this
information from citing the software according to the authors' requirements. We have
modified the manuscript to make this point more explicit:
“Resource maintainers should ensure this policy remains consistent with the citation policies
for the registry or repository (usually, the citation requirements for each piece of research
software are under the authority of its owners).”
> 4.5 'share your metadata schema' -> 'share' should be 'publish' or 'document', as it's
really about making it public, not sharing with others.
We prefer to use the word “share” due to its broader meaning. We want to imply 'sharing
metadata by making it findable', 'sharing metadata by crosswalking', 'sharing with indexers'
when requested so that local resource indexing can be improved. However, we reworded the
second sentence of the paragraph to clarify this as: “Publicly posting the metadata schema
used for the entries helps individual and organizational users interested in a resource's
information understand the structure and properties of the deposited information.”
In addition, we have changed the title to “Document and share” instead of just “share”
> 4.6 this best practice is almost entirely worded about metadata, and that seems incorrect
to me. The conditions of use are relevant for both metadata and the data itself (e.g., actual
software, for software repositories). The text of this should be generalized to cover both
scenarii on equal footing, maybe adopting the syntactic convention of '(meta)data' when
talking about both, as the FAIR Principles do.
We agree with the reviewer that it is important that users know what the conditions of use
are for both the software itself and the metadata for that software. Our example
demonstrates a case in which the usage of a software component, though stored in a
resource, is governed by the license assigned to the code by its authors.
We have also clarified that licensing information about the code or binaries should be
included as part of this policy.
> 4.8 When discussing taking offline data and archival, I consider there is an important
omission: the requirement of documenting the backup strategy (which commonly goes under
the notion of 'retention policy') and how the archive reacts to legal takedown notices (e.g.,
due to DMCA in the US or, in Europe, equivalent legislation as well as GDPR).
We consider a backup strategy out of scope for this paper. We view it as an operational
factor and not a policy, and in particular, not what the retention policy is addressing.
Furthermore, while we agree that takedown notices can be an issue, we do not want to
appear to be giving legal advice, and in general, do not talk about legal issues in the paper.
Additional comments
Other than the above points, the authors did a good job at improving the paper. Thanks a lot
for addressing all the (other) points I had raised in my initial review. I'm looking forward to
this article finalization.
We thank the reviewer for the constructive feedback, and we hope this revision addresses and clarifies the remaining comments.
Reviewer 2 (Anonymous)
Additional comments
I noticed that the time of the inaugural Task Force meeting (February 2019) is different from
the date shown in the Table 1 caption (November 2019). I was wondering if this was a
random mistake or if the responses were actually collected later. Other than that, all the
previous comments have been adequately addressed, and the manuscript can be accepted
in its current form.
This was a typo. It has been fixed in the revised document.
" | Here is a paper. Please give your review comments after reading it. |
702 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The increasing intelligence of energy storage devices has led to a sharp increase in the amount of detection data they generate. Data sharing among distributed energy storage networks enables collaborative control and comprehensive analysis, which effectively improves their clustering and intelligence. However, data security problems have become the main obstacle preventing energy storage devices from sharing data for joint modeling and analysis, and the harm caused by information leakage can far outweigh direct property losses. In this article, we first propose a blockchain-based machine learning scheme for secure data sharing in distributed energy storage networks. Then, we formulate the data sharing problem as a machine learning problem by incorporating secure federated learning. Innovative verification methods and consensus mechanisms are used to encourage participants to act honestly, and well-designed incentive mechanisms are used to ensure the sustainable and stable operation of the system. We have implemented SFedChain and experimented on real datasets with different settings. The numerical results show that SFedChain is promising.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>With the rise of a new round of energy revolution, energy and information are becoming highly integrated. Energy storage, as one of the renewable energy areas with the greatest potential for large-scale development, generates massive amounts of data as informatization and intelligence improve. While data creates value, its information security has also received extensive attention <ns0:ref type='bibr' target='#b24'>(Stoyanova et al. (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b6'>Cook et al. (2017)</ns0:ref>). Data leakage may occur in terminals, networks, storage, the cloud, and elsewhere, which seriously hinders the construction of secure power information networks. In this regard, traditional privacy protection strategies can be mainly divided into the privacy protection of input data and of output data. The privacy protection of input data is mainly based on publishing anonymized data, using techniques such as k-anonymity, l-diversity, t-closeness, and differential privacy. k-anonymity, l-diversity and t-closeness <ns0:ref type='bibr' target='#b3'>(Brickell and Shmatikov (2008)</ns0:ref>; <ns0:ref type='bibr' target='#b23'>Sei et al. (2019)</ns0:ref>) usually replace the sensitive information contained in the data with randomly generated values or remove it directly. However, if an attacker has sufficient background knowledge of the original data, or attacks through inference or other methods, these techniques cannot effectively protect the confidential information. Differential privacy <ns0:ref type='bibr' target='#b8'>(Dwork (2008)</ns0:ref>) achieves a balance between model performance and privacy protection by adding noise to the model or to the generated results, and is considered a reliable privacy protection method. Yin et al. <ns0:ref type='bibr' target='#b34'>(Yin et al. (2018)</ns0:ref>) applied differential privacy to hide the original trajectory and location data by adding noise to selected data in a location information tree model, which protects the location privacy of big data in sensor networks. The experimental results of Hitaj et al. <ns0:ref type='bibr' target='#b11'>(Hitaj et al. (2017))</ns0:ref> show that a data privacy protection strategy that only incorporates differential privacy may still leak original data through GAN-based learning. Moreover, approaches designed to prevent the inference of the original confidential data are often applicable only to a single model scenario and are not universal, so they are difficult to promote effectively. Therefore, how to improve the availability of data under the premise of protecting data privacy remains to be further studied.</ns0:p>
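The differential privacy approaches cited above share one basic operation: perturbing a released value with noise calibrated to the query's sensitivity and a privacy budget epsilon. The following minimal sketch (not the mechanism of any specific work cited here) illustrates the Laplace mechanism for privately releasing the mean of bounded readings; the dataset and parameter values are illustrative.

```python
import numpy as np

def laplace_mechanism(true_value: float, sensitivity: float, epsilon: float) -> float:
    """Return a differentially private estimate of a numeric query result.

    Noise is drawn from Laplace(0, sensitivity / epsilon): a smaller epsilon
    means stronger privacy and a noisier answer.
    """
    scale = sensitivity / epsilon
    return true_value + np.random.laplace(loc=0.0, scale=scale)

# Toy example: privately release the mean of 100 readings bounded in [0, 1].
readings = np.random.rand(100)
true_mean = float(readings.mean())
sensitivity = 1.0 / len(readings)   # changing one reading shifts the mean by at most 1/n
private_mean = laplace_mechanism(true_mean, sensitivity, epsilon=0.5)
print(true_mean, private_mean)
```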
<ns0:p>The privacy protection of output data mainly perturbs or audits the results <ns0:ref type='bibr' target='#b0'>(Aggarwal (2005)</ns0:ref>), using techniques such as association rule hiding, query auditing, and classification accuracy control. Existing association rule hiding techniques <ns0:ref type='bibr' target='#b10'>(Gkoulalas-Divanis and Verykios (2009)</ns0:ref>; <ns0:ref type='bibr' target='#b27'>Wu and Wang (2008)</ns0:ref>) operate directly on the original transaction dataset. When the transaction dataset is relatively large, time efficiency is relatively low; at the same time, it is difficult to achieve a good compromise between hiding sensitive information and preserving data quality by manually adding rules to the original transaction dataset. In <ns0:ref type='bibr' target='#b12'>(Hou et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b26'>Thomas (2007)</ns0:ref>), the authors provide effective query auditing algorithms and frameworks that leverage security review mechanisms for system privacy protection and access control. The classification accuracy improvement method <ns0:ref type='bibr' target='#b22'>(Samanthula et al. (2015)</ns0:ref>) achieves privacy protection by deforming confidential data until its classification accuracy is close to that of the reconstructed data, but the large amount of heterogeneous data and the method's restrictions make it difficult to apply widely.</ns0:p><ns0:p>Compared with traditional data privacy protection strategies, data privacy protection algorithms based on deep learning can further improve data availability and prevent the risk of data leakage more efficiently. Therefore, many excellent models for deep learning privacy protection have been proposed. Federated learning <ns0:ref type='bibr' target='#b1'>(Bonawitz et al. (2019)</ns0:ref>) stands out for its unique privacy policy: multiple collaborators do not need to upload their raw data to a central server for iterative training, and they obtain better training results than their respective local models. <ns0:ref type='bibr'>Konečný et al. (Konečný et al. (2016)</ns0:ref>) proposed an efficient optimization algorithm to deal with the statistical heterogeneity of data in federated learning. <ns0:ref type='bibr'>Fallah et al. (Fallah et al. (2020)</ns0:ref>) proposed personalized federated learning, which localizes the global model by exploiting local data structure.</ns0:p><ns0:p>Nevertheless, traditional federated learning techniques still have privacy leakage problems due to the existence of a curious parameter server or dishonest participants. <ns0:ref type='bibr' target='#b21'>Nasr et al. (Nasr et al. (2018))</ns0:ref> indicate that secret membership information can be obtained by performing membership inference attacks. <ns0:ref type='bibr'>Zhu et al. (Zhu and Han (2020)</ns0:ref>) use a deep gradient leakage algorithm to reduce the difference between a virtual gradient and the real gradient in order to recover private data.</ns0:p>
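For orientation, the sketch below shows the weighted parameter averaging that a federated learning server typically performs each round (a FedAvg-style update); it is these exchanged parameter updates, rather than raw data, that attacks such as membership inference and gradient leakage target. The code is a generic illustration, not the aggregation rule of any specific system cited above.

```python
import numpy as np

def fedavg(client_updates, client_sizes):
    """Weighted average of client model parameters (FedAvg-style).

    client_updates: list of parameter vectors, one per collaborator.
    client_sizes:   number of local training samples behind each update.
    """
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    stacked = np.stack(client_updates)               # shape: (clients, params)
    return (weights[:, None] * stacked).sum(axis=0)  # global parameter vector

# Toy example: three collaborators with different amounts of local data.
updates = [np.array([0.9, 1.1]), np.array([1.2, 0.8]), np.array([1.0, 1.0])]
sizes = [100, 300, 600]
print(fedavg(updates, sizes))
```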
(2017)</ns0:ref>), as a decentralized, tamper-proof, and traceable distributed ledger technology, effectively guarantees the confidentiality of data and the security of data sharing through consensus protocols. To address data trust and security issues in the edge computing environment, <ns0:ref type='bibr' target='#b20'>Ma et al. (Ma et al. (2020b)</ns0:ref>) proposed BlockTDM, a blockchain-based trusted data management scheme for edge computing. Based on blockchain technology, Ma et al. <ns0:ref type='bibr' target='#b19'>(Ma et al. (2020a)</ns0:ref>) realized the secure utilization and decentralized management of big data in the Internet of Things. In the above work, a consensus algorithm that achieves consistency among all participating nodes is an indispensable key technology. In the scheme of <ns0:ref type='bibr' target='#b36'>(Zheng et al. (2017)</ns0:ref>), miners need to solve tedious mathematical problems and compete to produce blocks, which seriously affects the efficiency of the system, so it is not suitable for scenarios with frequent transactions.</ns0:p><ns0:p>Although extensive research has been conducted on distributed multi-party data sharing, two serious problems have received little attention so far. The first is that existing work usually targets attack threats from the central server or collaborators, while ignoring the model quality problems caused by dishonest collaborators sabotaging the joint modeling process. The second is that participants' concerns about data privacy leakage during distributed multi-party data sharing have led to a continuous decline in users' willingness to share data.</ns0:p><ns0:p>To this end, there are many challenges in distributed multi-party collaborative data sharing in distributed energy storage networks. We establish a new mechanism to ensure secure data sharing between collaborators who do not trust each other, and propose a scheme based on blockchain and federated learning named SFedChain. Privacy protection and data sharing are carried out in the joint modeling by encrypting the original information, which ensures the confidentiality of collaborators' data, the traceability of shared events, and the robustness of the training model. Specifically, we adopt the 'Three Chains in One' approach to ensure the secure storage of data, auditability, and traceability. In addition, the use of encryption technology provides a further guarantee for the secure sharing of parameters. The adoption of novel consensus algorithms and incentive mechanisms, together with the use of elected collaborators for parameter aggregation, can effectively improve the security of the system and maximize its benefits. To sum up, the specific contributions of this paper are as follows:</ns0:p><ns0:p>1. We propose SFedChain, a novel distributed multi-party data sharing collaboration training scheme, which effectively reduces the risk of data leakage and achieves secure data sharing in the process of joint modeling.</ns0:p><ns0:p>2. SFedChain not only protects the privacy of data holders, but also realizes the secure storage of data and the auditability and traceability of the sharing process. The adoption of efficient consensus algorithms and incentive mechanisms encourages collaborators to act honestly in the joint modeling process, thereby generating a high-performance joint model.</ns0:p><ns0:p>3. 
We implement a SFedChain prototype and evaluate its performance in terms of training accuracy and training time. We also evaluate the effectiveness of our proposed model on benchmark, open real-world datasets for data categorization.</ns0:p><ns0:p>The rest of the paper is organized as follows. In Section 1, we present our system model. In Section 2, we give the implementation details of SFedChain. In Section 3, we present a security analysis of our proposed scheme and evaluate the performance of SFedChain. Finally, Section 4 summarizes this paper.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1'>SYSTEM MODEL</ns0:head><ns0:p>In this paper, we assume a joint modeling scenario involving multiple collaborators. Each collaborator owns a dataset that can train a local model, and multiple collaborators work together to jointly model the requested task. We use the 'Three-in-One' blockchain network to archive, retrieve, and audit the joint modeling process to ensure its safety, and use the consortium blockchain as the infrastructure for the distributed energy storage network. An illustration of data protection among various devices is shown in Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>. We consider one TaskRequester (TR), X data holders (DHs), Y (Y ≤ X) task collaborators selected according to SFedChain's task-related parties retrieval strategy, and Z (Z ≤ Y) consensus members responsible for the verification of the aggregated model. Each of the Y collaborators has a local dataset</ns0:p><ns0:formula xml:id='formula_0'>D i = (d 1 , d 2 , . . . , d n ).</ns0:formula><ns0:p>After the task requester publishes the task, the system first retrieves the Y collaborators related to the task from the X data holders through the blockchain. The collaborators use their local datasets to train local models and use the blockchain to record the parameters of each local model. The global model (GM) is obtained using SFedChain's aggregation strategy after continuous iterative training. Finally, the joint modeling model GM is recorded in the blockchain, and the task requester obtains the result Req(GM) through the blockchain.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.1'>SFedChain Scheme</ns0:head><ns0:p>Before we introduce SFedChain, let's give the relevant concepts and keyword definitions in SFedChain.</ns0:p><ns0:p>MasterChain: MasterChain is used to register new sites and new users, records the main configuration information of the site, manage user data access control, and store the joint modeling model. We use MasterChain to publish the requested task.</ns0:p><ns0:p>RetrievalChain: RetrievalChain is used to record the summary of site document information and the Unified Retrieval Graph that is regularly established, and it is mainly responsible for the retrieval of task-related parties. In the scenario of distributed energy storage networks, we designed the system model of SFedChain.</ns0:p><ns0:p>We combine blockchain technology with traditional federated learning technology to achieve secure and efficient distributed data sharing. MasterChain is mainly responsible for issuing requested tasks and recording joint modeling models, which can improve the efficiency of the system in processing tasks.</ns0:p><ns0:p>RetrievalChain is used to record the Unified Retrieval Graph that is generated regularly, and realize the quick retrieval of Workers. Workers uses its dataset to train local models, we combine the federated learning technology and use the parameter entry method to ensure the safety of parameter sharing in the joint modeling process. Committee Members and Leader in committee were selected for aggregation of local model parameters through SFedChain's novel aggregation strategy. Therefore, the system does not require a reliable third party for parameter aggregation, which further ensures the security of the joint modeling process. At the same time, the system introduces an incentive mechanism to encourage the active participation of DataHolder by rewarding honest participants. Eventually, TaskRequester will obtain the result of the request through MasterChain.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.2'>Threat Model</ns0:head><ns0:p>We focus on secure data sharing among distributed multiple parties, and select Y Workers related to the requested task from the X data providers to complete a joint modeling task. In a real-world industrial environment, task requesters and co-modeling participants are usually considered dishonest.</ns0:p><ns0:p>They may refuse to pay for the requested task, deliberately sabotage the joint modeling process, or steal confidential information from other participants. From the above analysis, the proposed system model may face the following three threats:</ns0:p><ns0:p>1. Quality of the locally trained model: Workers may provide poorly trained model parameters due to quality problems in their local datasets, or malicious Workers may want to obtain rewards without participating in training and therefore directly provide incorrect local models.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.'>Instability of the parameter aggregation service</ns0:head><ns0:p>A dishonest parameter aggregation server may provide incorrect aggregation models, which will result in a serious degradation of the quality of the joint model.</ns0:p></ns0:div>
<ns0:div><ns0:head n='1.3'>Architecture Design</ns0:head><ns0:p>The SFedChain architecture we proposed consists of three parts: MasterChain module, RetrievalChain module, and ArgChain module based on federated learning. MasterChain module establishes a secure connection between all participants, and records the joint modeling model of each requested task to achieve a rapid response to the same requested task of other users. RetrievalChain module uses the regularly generated Unified Retrieval Graph to achieve efficient retrieval of the relevant DataHolders of the requested task, and uses the retrieved Workers to implement joint modeling. ArgChain module combines with traditional federated learning to realize the secure sharing of local model parameters, and uses novel smart contract and consensus mechanism to improve the quality and efficiency of the joint modeling model. We use the 'three-chain-in-one' architecture to achieve secure joint modeling without the original data coming out of the local situation, and maintain the system's lasting operation through all DataHolders.</ns0:p><ns0:p>Before a new user initiates a requested task or a new DataHolder participates in joint modeling, both should first register through MasterChain. TaskRequester publishes requested task to MasterChain through its nearby DataHolder Req site server, Figure <ns0:ref type='figure' target='#fig_4'>2</ns0:ref> shows the working mechanism of our proposed architecture, DataHolder Req first searches MasterChain whether the joint modeling model of the same requested task has been recorded. If found, the system will download the joint modeling model GM i recorded in MasterChain, and then return the result of the requested task Req(GM i ) to the TaskRequester.</ns0:p><ns0:p>Otherwise, for a new task, the task-related information is sent to RetrievalChain to retrieve the task-related DataHolders, and then, the system will use the retrieved Workers to perform the joint modeling process through ArgChain. In each iteration, the new CreditCoin owned by each Worker is calculated based on the CreditCoin owned by each task-related parties and the local model accuracy, and then new Leader in committee and Committee Members are elected to aggregate and agree on the joint model. Finally, the result Req(GM Req ) of the requested task is returned to the TaskRequester in the form of a transaction through MasterChain. The coin paid by TaskRequester are distributed in the same proportion according to the proportion of CreditCoin ultimately owned by Workers participating in the joint modeling, in this way, more data holders will be attracted to join the system. </ns0:p></ns0:div>
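<ns0:p>To make the working mechanism described above concrete, the following minimal Python sketch illustrates the decision logic of the nearby site server: it first looks up MasterChain for a recorded joint model of the same requested task and only triggers task-related party retrieval and joint training when no acceptable model is found. The names handle_request, retrieve_workers, and train_joint_model, as well as modelling MasterChain as a dictionary, are illustrative assumptions and not part of the actual SFedChain implementation; the minimum-payment ratio mirrors the αcoin threshold mentioned later in the data sharing process.</ns0:p>
```python
# Hypothetical sketch of the task-handling flow in Figure 2.
# master_chain is modelled as a dict mapping task identifiers to
# (global_model, coin_paid_at_creation); real chains would be
# consortium-blockchain transactions.

def handle_request(master_chain, task_id, coin_paid,
                   retrieve_workers, train_joint_model, min_ratio=0.5):
    """Return the requested result Req(GM) for a task."""
    record = master_chain.get(task_id)
    if record is not None:
        gm, coin_at_creation = record
        # Reuse the stored joint model only if the fee lies in the
        # acceptable range [min_ratio * coin_at_creation, coin_at_creation].
        if min_ratio * coin_at_creation <= coin_paid <= coin_at_creation:
            return gm  # result Req(GM_i) returned through MasterChain
    # Otherwise retrain: retrieve task-related Workers and jointly model.
    workers = retrieve_workers(task_id, coin_paid)
    gm = train_joint_model(workers)
    master_chain[task_id] = (gm, coin_paid)   # record GM_Req as a transaction
    return gm
```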
<ns0:div><ns0:head n='2'>CONSTRUCTION OF SFEDCHAIN SCHEME</ns0:head><ns0:p>In this section, we design and analyze the SFedChain scheme. Firstly, we design the Unified Retrieval Graph to realize the efficient retrieval of the task-related parties and protect the privacy of the DataHolder. Furthermore, we described in detail the data sharing process of our proposed model SFedChain. Finally, this article adopts the verify upload mechanism of encrypted parameters and the dynamic weight consensus protocol based on CreditCoin to improve the accuracy of the joint modeling model.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Unified Retrieval Graph</ns0:head><ns0:p>As a classic image data information extraction technology, CNN has been applied in many fields such as image segmentation, video classification, and target recognition <ns0:ref type='bibr' target='#b18'>(Liu et al. (2022)</ns0:ref>). In traditional CNN technology, an input image is processed by a convolutional layer, a pooling layer, and a linear layer to obtain the final result. For text data, Chen <ns0:ref type='bibr' target='#b5'>(Chen (2015)</ns0:ref>) showed that the CNN model can also learn the content of text information. The operating data of each device in the energy storage networks is mostly recorded in text format. To measure dataset similarity between DataHolders from such text data and to retrieve task-related parties, inspired by Liu et al. <ns0:ref type='bibr' target='#b16'>(Liu et al. (2021)</ns0:ref>), we propose the following method. The process is shown in Figure <ns0:ref type='figure'>3</ns0:ref>.</ns0:p><ns0:p>For the processing of text data, the use of pre-trained language models <ns0:ref type='bibr' target='#b32'>(Yamada and Shindo (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b28'>Yamada et al. (2020)</ns0:ref>) for text characterization is considered a reliable method. We first use the selected pre-trained language model to process the text data of each DataHolder, and then use the convolutional layer and the ReLU activation function to obtain the feature map representation of each DataHolder's data. Finally, the sparse representation in each channel is obtained through multi-channel convolution kernel processing, with Sparsity = (number of non-zero values in the vector) / (total number of vector elements), so that the data owned by each DataHolder is abstracted into a matrix represented by its sparsity. Since we use this sparsity expression of text statistics to retrieve Workers, it is difficult to steal the original information. To further simplify the calculation of data similarity between DataHolders, we process the sparsity expression of each DataHolder's data. For the i-th DataHolder, we use DATA i = (d i 11 , d i 12 , . . . , d i 1n , . . . , d i m1 , d i m2 , . . . , d i mn ) to represent the data it holds, where m represents the number of texts of the filtered DataHolder. Since the Jaccard distance formula has the advantage of being independent of position and order, this paper uses it to calculate data similarity. Communication efficiency will also affect the creation of the Unified Retrieval Graph, so the actual physical distance between DataHolders must be considered. Therefore, we propose an improved Jaccard distance formula suitable for our proposed system:</ns0:p><ns0:formula xml:id='formula_1'>Similarity(NODE i , NODE j ) = |NODE i ∩ NODE j | / (|NODE i ∪ NODE j | + α * distance(NODE i , NODE j ))<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where NODE is the sparse representation matrix of a DataHolder, distance represents the actual physical distance between DataHolder i and DataHolder j, and α is a hyperparameter used to adjust the weight given to the actual physical distance.</ns0:p><ns0:p>To improve the efficiency of calculation and processing, we finally use a graph to express the relationship between DataHolders, as given in Definition 1:</ns0:p><ns0:formula xml:id='formula_2'>Definition 1 (Unified Retrieval Graph): A Unified Retrieval Graph G = {V, E} consists of a series</ns0:formula><ns0:p>of nodes and edges. Each vertex V i represents a DataHolder, and its weight represents the identity information of the DataHolder, such as ID, data type, data size, etc. Each edge E i j connects vertices V i and V j , and its weight W E i j represents the ratio of the similarity between the two vertices to the maximum vertex similarity value:</ns0:p><ns0:formula xml:id='formula_3'>W E i j = W E i j / Max W E i j .</ns0:formula><ns0:p>Finally, with the assistance of the Unified Retrieval Graph, when users submit tasks, we can find the parties involved in the task accurately and efficiently. To ensure the timeliness of retrieval, the graph is updated after a certain number of retrieval operations have been performed or a new data holder has been added.</ns0:p></ns0:div>
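<ns0:p>The following minimal Python sketch shows one plausible reading of Eq. (1) and Definition 1: the sparse representation matrices are compared by the positions of their non-zero entries, and edge weights are normalised by the maximum similarity. The function names, the set-based interpretation of the intersection and union, and the default α value are assumptions for illustration, not the paper's exact implementation.</ns0:p>
```python
import numpy as np

def improved_jaccard(node_i, node_j, phys_dist, alpha=0.1):
    """Similarity from Eq. (1): |A ∩ B| / (|A ∪ B| + alpha * distance).
    The sparse representation matrices are compared via the positions of
    their non-zero entries (one plausible interpretation of Eq. (1))."""
    a = set(zip(*np.nonzero(node_i)))
    b = set(zip(*np.nonzero(node_j)))
    union = len(a | b)
    if union == 0:
        return 0.0
    return len(a & b) / (union + alpha * phys_dist)

def build_retrieval_graph(nodes, distances, alpha=0.1):
    """Build the Unified Retrieval Graph of Definition 1.
    nodes: dict DataHolder-id -> sparse representation matrix
    distances: dict (i, j) -> physical distance between DataHolders
    Returns edges as dict (i, j) -> weight normalised by the maximum
    similarity, as required by Definition 1."""
    ids = list(nodes)
    sims = {}
    for x in range(len(ids)):
        for y in range(x + 1, len(ids)):
            i, j = ids[x], ids[y]
            d = distances.get((i, j), distances.get((j, i), 0.0))
            sims[(i, j)] = improved_jaccard(nodes[i], nodes[j], d, alpha)
    max_sim = max(sims.values()) if sims else 1.0
    return {e: (s / max_sim if max_sim > 0 else 0.0) for e, s in sims.items()}
```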
<ns0:div><ns0:head n='2.2'>Task-related Parties Retrieval</ns0:head><ns0:p>Sharing raw data brings not only security threats but also a series of privacy leaks. Therefore, we do not share the original data of the DataHolders and instead use the Unified Retrieval Graph to retrieve the task-related parties. When a TaskRequester publishes a requested task, the relevant information of the task is recorded in MasterChain in the form of a transaction. MasterChain then sends the processed task information to RetrievalChain to retrieve the task-related parties. The retrieval process in RetrievalChain is shown in Figure <ns0:ref type='figure' target='#fig_8'>4</ns0:ref>.</ns0:p><ns0:p>RetrievalChain obtains ID DH (the id of the DataHolder) information of nearby nodes through ID T R (the id of the TaskRequester). According to the coin paid by the TaskRequester, the ratio used for retrieving DataHolders is ratio retrieval = coin / CONST coin . We first traverse the Unified Retrieval Graph, query the vertex V DH representing ID DH , and retrieve its adjacent edges. If the weight W E i, j of an adjacent edge E i, j is less than ratio retrieval , the neighbouring vertex V j connected by that edge is included in the task-related parties set SET DH (V j ∈ SET DH ). After the traversal is over, the nodes contained in the set SET DH are the Workers participating in the joint modeling. At the same time, an equal number of CreditCoins is divided equally among the Workers to identify the initial credit of each participant.</ns0:p></ns0:div>
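<ns0:p>A minimal sketch of this retrieval step is given below, assuming the normalised edge weights produced by build_retrieval_graph above. CONST coin is the normalisation constant mentioned in the text; its concrete value is not specified in the paper, and the function signature and equal CreditCoin split of 1.0 total credit are illustrative assumptions. The inclusion rule follows the wording of Section 2.2 (edges with weight below ratio retrieval are selected).</ns0:p>
```python
# Hypothetical sketch of the retrieval step shown in Figure 4.

def retrieve_workers(graph_edges, id_dh, coin_paid, const_coin):
    """Return the set SET_DH of task-related DataHolders and their
    initial (equal) CreditCoin allocation."""
    ratio_retrieval = coin_paid / const_coin
    set_dh = set()
    for (i, j), weight in graph_edges.items():
        if id_dh not in (i, j):
            continue                      # only edges adjacent to V_DH
        neighbour = j if i == id_dh else i
        if weight < ratio_retrieval:      # inclusion rule as stated in 2.2
            set_dh.add(neighbour)
    # CreditCoin is split equally among the retrieved Workers.
    credit = {w: 1.0 / len(set_dh) for w in set_dh} if set_dh else {}
    return set_dh, credit
```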
<ns0:div><ns0:head n='2.3'>Data sharing process</ns0:head><ns0:p>In distributed energy storage networks, the first energy storage device to join the system is responsible for the deployment of the blockchain network. Subsequent devices need to register their own nodes Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Req(GM) to TaskRequester. Finally, the coin paid by the TaskRequester are distributed according to the proportion of CreditCoin owned by each Worker after GM i is jointly modeled. On the contrary, If the joint modeling model GM that matches the requested task is not found, or the GM is hit but the coin paid by the TaskRequester is more than the coin paid by the user when the GM i is established, the system will retrain the model for the requested task. During joint modeling, the system first uses the Unified Retrieval Graph to perform task-related parties retrieval through RetrievalChain, and then performs joint modeling using the retrieved Workers. After multiple iterations of training through ArgChain, the system finally obtains the joint modeling model GM Req that satisfies the requested task, and then uploads GM Req to MasterChain in the form of a transaction, and returns the result Req(GM Req ) to TaskRequester, finally the system performs the remuneration distribution of coin paid by TaskRequester.</ns0:p><ns0:p>The detailed steps of our data sharing scheme are as follows, Figure <ns0:ref type='figure' target='#fig_9'>5</ns0:ref> shows the process of data sharing.</ns0:p><ns0:p>1. System deployment: The first DataHolder to join the system is responsible for the deployment of the system. First, it will create MasterChain, RetrievalChain and ArgChain, and then register its node information in MasterChain. There are two main types of transactions in MasterChain: 4. Historical joint model query: Once the task submitted by TaskRequester is accepted by MasterChain, the system will query the historical union model. If there is a joint model GM i of the same requested task, the coin paid by TaskRequester is less than or equal to the coin i spent on establishment of GM i , and greater than the minimum payment fee αcoin i . Then the result Req GM i is returned to TaskRequester, and the coins paid by it are distributed according to the proportion of CreditCoin owned by each worker after GM i joint modeling. On the contrary, if the joint model GM that matches the task is not found, or GM i is queried but TaskRequester paid more coins than GM i was created, retrain the joint model. Existing consensus protocols, such as PoW, etc. miner requires huge computational overhead to solve cumbersome data problems, and its long consensus process seriously affects the efficiency of system modeling, so it is not suitable for scenarios with frequent transactions. To solve these problems, inspired by Tang et.al <ns0:ref type='bibr' target='#b25'>(Tang et al. (2020)</ns0:ref> After the local training of the Workers participating in the joint model training is completed, we add noise that conforms to the Laplace distribution to it, and then upload it to ArgChain. The privacy of users is protected through a differential privacy mechanism. We use the selected random algorithm L . 
For any two local model parameters LM i and LM j participating in joint modeling, we make them satisfy:</ns0:p><ns0:formula xml:id='formula_4'>Pr [M (LM i ) ∈ S] ≤ e ε Pr [M (LM j ) ∈ S] + δ.</ns0:formula><ns0:p>The system thus provides (ε, δ)-differential privacy protection for the local model parameters LM i and LM j .</ns0:p></ns0:div>
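<ns0:p>The paper only states that Laplace-distributed noise is added to the local model parameters before they are uploaded to ArgChain. The sketch below is one minimal way to do this; the clipping bound, per-parameter sensitivity, and function name are assumptions, and a pure Laplace mechanism corresponds to the δ = 0 case of the guarantee above.</ns0:p>
```python
import numpy as np

def perturb_local_model(params, epsilon, sensitivity=1.0, clip=1.0, rng=None):
    """Add Laplace noise to local model parameters before upload to ArgChain.
    The clipping bound and per-parameter sensitivity are illustrative
    assumptions; Laplace noise with scale sensitivity/epsilon gives the
    delta = 0 case of the (epsilon, delta) bound stated above."""
    rng = rng or np.random.default_rng()
    noisy = {}
    for name, value in params.items():
        v = np.clip(np.asarray(value, dtype=float), -clip, clip)  # bound sensitivity
        noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon, size=v.shape)
        noisy[name] = v + noise
    return noisy
```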
<ns0:div><ns0:head n='3'>SECURITY ANALYSIS AND PERFORMANCE EVALUATION</ns0:head><ns0:p>We have established a multi-party data security sharing and privacy protection mechanism in distributed energy storage networks by applying blockchain technology. By integrating it with the traditional federated learning mechanism, the 'Three Chains in One' structure is established. We address the threats identified in Section 1.2 as follows.</ns0:p><ns0:p>1. Security proof for SFedChain: The traditional single-chain structure of a blockchain can hardly meet the requirements of data retrieval, computation, and privacy protection at the same time. Therefore, we propose a 'three chains in one' architecture including MasterChain, RetrievalChain, and ArgChain. MasterChain is mainly responsible for publishing requested-task events and for the quick query of historical aggregation models. RetrievalChain is mainly responsible for the regular update of the Unified Retrieval Graph. ArgChain mainly carries out the secure sharing of the federated learning parameters. They perform their respective duties to improve the network performance of the system and to achieve secure data sharing and privacy protection.</ns0:p><ns0:formula xml:id='formula_5'>S C i = c i / ∑ length(C) j=1 c j</ns0:formula></ns0:div>
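<ns0:p>The following minimal Python sketch mirrors the softmax and weighting steps of the credit-based dynamic weight aggregation (Algorithm 2) together with the normalisation S C i = c i / ∑ c j reconstructed above: softmax scores are computed over CreditCoin and local accuracy, their sum is normalised into weights, the highest-scoring Worker acts as Leader, and the global model is the weighted sum of local parameters. The dictionary-based representation of parameters and the omission of the committee and iteration logic are simplifying assumptions.</ns0:p>
```python
import numpy as np

def aggregate(credit, accuracy, local_params):
    """Sketch of the credit-based dynamic weight aggregation (Algorithm 2).
    credit: dict worker -> CreditCoin; accuracy: dict worker -> local accuracy;
    local_params: dict worker -> dict of parameter arrays.
    Returns the Leader (highest combined score) and the weighted global model."""
    workers = list(local_params)
    cc = np.array([credit[w] for w in workers])
    acc = np.array([accuracy[w] for w in workers])
    softmax = lambda x: np.exp(x) / np.exp(x).sum()
    # C = S_CreditCoin + S_Acc (Algorithm 2, lines 10-13), then normalised
    # per formula (5): S_{C_i} = c_i / sum_j c_j.
    c = softmax(cc) + softmax(acc)
    weights = c / c.sum()
    leader = workers[int(np.argmax(c))]
    names = local_params[workers[0]].keys()
    global_model = {
        n: sum(w * np.asarray(local_params[wk][n])
               for w, wk in zip(weights, workers))
        for n in names
    }
    return leader, global_model
```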
<ns0:div><ns0:head n='3.1'>Evaluation setup</ns0:head><ns0:p>We use 20 Newsgroups and AG News to simulate the adaptability and high efficiency of SFedChain, which are international standard datasets that are often used to evaluate text-related machine learning algorithms. We simulated different numbers of energy storage devices, which have their own local dataset and can be independently modeled. We use the selected dataset to split the data entries, and regroup according to the number of groups set in each experiment, to simulate DataHolders in SFedChain. We use text topic classification analysis to simulate the requested task of the TaskRequester, and implement our improved attention mechanism on text data to perform the joint modeling process of the SFedChain scheme in the process of distributed multi-party data sharing.</ns0:p><ns0:p>We perform a lot of simulations on Linux ubuntu 4.15.0-45-generic, and the hardware configuration is as follows: Intel(R) Xeon(R) Gold 5218 CPU @ 2.30GHz, 251G, 10TB hard drive, interpreter Python 3.8.10, and pytorch 1.7.0, We analyzed and evaluated the performance of the SFedChain scheme, and gave the following experimental results. Manuscript to be reviewed</ns0:p></ns0:div>
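<ns0:p>A minimal sketch of how DataHolders are simulated in the evaluation is shown below: the benchmark corpus is split into equal groups, one per simulated energy storage device. The shuffling policy, seed, and function name are assumptions; the paper only states that data entries are split and regrouped according to the number of groups set in each experiment.</ns0:p>
```python
import random

def split_into_dataholders(samples, num_holders, seed=42):
    """Partition a list of (text, label) samples into num_holders shards,
    each shard standing in for one simulated DataHolder."""
    random.seed(seed)
    shuffled = samples[:]
    random.shuffle(shuffled)
    return [shuffled[i::num_holders] for i in range(num_holders)]

# Example: simulate 50 DataHolders from the AG News training entries.
# holders = split_into_dataholders(ag_news_train, num_holders=50)
```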
<ns0:div><ns0:head n='3.2'>Numerical results</ns0:head><ns0:p>We use 20 Newsgroup and AG News benchmark datasets to evaluate the accuracy of our proposed model.</ns0:p><ns0:p>In order to ensure the accuracy of the experimental results, we conducted 10 experiments and took the average of the results. The performance comparison between our proposed model and the benchmark method text graph convolutional networks <ns0:ref type='bibr' target='#b33'>(Yao et al. (2019)</ns0:ref>) on various datasets is shown in Figure <ns0:ref type='figure' target='#fig_16'>7</ns0:ref>.</ns0:p><ns0:p>From Figure <ns0:ref type='figure' target='#fig_16'>7</ns0:ref>(a), we can see that compared to the benchmark method, most of our test groups have obtained a higher accuracy, which shows that our proposed SFedChain has a high diagnostic ability. At the same time, we can see that the more data that each DataHolder has, the higher the accuracy of the joint model built together. This is because that the accuracy of the model has a certain relationship with the number of datasets and computing resources owned by the DataHolder in the actual environment. to establish the joint model will not change significantly due to the continuous addition of DataHolder, which shows that the model we proposed has good compatibility. Due to the existence of malicious workers, the performance of the joint model is affected to different degrees. Therefore, we simulate the anti-interference of our model by simulating different proportions of malicious attackers to conduct simulation experiments. We use the AG News dataset as the experimental dataset to simulate a scenario of joint modeling of 50 energy storage nodes, and simulate malicious nodes by modifying the dataset in the nodes to be mismatched label pairs. We set different proportions of malicious attackers: attack strength 10%, attack strength 20%, attack strength 30%, attack strength 40%, attack strength 50%. From Figure <ns0:ref type='figure' target='#fig_20'>9</ns0:ref>, we can see that the presence of a small number of dishonest nodes does not affect the accuracy of our proposed model for joint modeling. The system can dynamically distinguish malicious nodes through the dynamic weight consensus protocol and smart contract mechanism to ensure the quality of the training data set, thereby effectively improving the performance of the system.</ns0:p><ns0:p>Through the above evaluation, we can observe that with the addition of the new data holder, the accuracy of the joint modeling model can be continuously improved without significantly increasing Manuscript to be reviewed Computer Science the time spent in the joint modeling process. So our scheme can attract more data holders to join to improve the joint modeling effect of the request task. As the number of data holders participating in joint modeling increases, the system needs to perform more local model aggregation and updates, which causes a slight increase in system overhead. However, with the addition of more data holders, the data scale of joint modeling is further improved, and SFedChain's secure data sharing mechanism brings a significant increase in the performance of joint modeling models, which effectively improves the quality of service in distributed energy storage networks. </ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>CONCLUSION</ns0:head><ns0:p>In this article, we propose a blockchain-based machine learning scheme for privacy data sharing in distributed energy storage networks. A series of security analysis and simulation experiments show that our proposed scheme not only protects data privacy, but also further improves the accuracy of the joint modeling model in energy storage device applications through a secure data sharing mechanism.</ns0:p><ns0:p>The combination of blockchain and machine learning is an effective way to realize the safe sharing of data. However, how to use blockchain technology to further ensure privacy protection in the data sharing process is still worthy of attention. Moreover, how to gather more valuable data information from distributed multi-party data holders. Therefore, machine learning algorithms suitable for joint modeling scenarios still need further research. In addition, due to the limitation of communication bandwidth, how to further reduce the communication overhead of the joint model modeling process remains to be further discussed. </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>through the intermediate state information during the training of the Latent Dirichlet Allocation model, Zhao et.al. (Zhao et al. (2021)) proposed a privacy protection algorithm HDP-LDA based on differential PeerJ Comput. Sci. reviewing PDF | (CS-2022:04:72513:1:1:NEW 2 Jun 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>After the task requester publishes the task, the system first retrieves Y collaborators related to the task from the X data holders through the blockchain. The collaborators use their local data set to train to obtain the local model, and use blockchain to record the parameters of each local model.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Scenario of secure multi-party data sharing</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>service: A dishonest parameter aggregation server may provide incorrect aggregation models, which will result in a serious degradation of the quality of 4/17 PeerJ Comput. Sci. reviewing PDF | (CS-</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Working mechanism of our proposed method.</ns0:figDesc><ns0:graphic coords='6,162.41,63.78,372.19,200.92' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Figure 3. The process of building Unified Retrieval Graph.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>and the ReLU activation function to process to obtain the feature map representation of data information of each DataHolder. Finally, the sparse representation in each channel is obtained through multi-channel convolution kernel processing,Sparsity = Number o f non−zero values in vector Total number o f vector elements , the data information owned by each DataHolder is abstracted into a matrix represented by sparsity. Since we use the sparsity expression of text statistics to retrieve Workers, it is difficult to steal the original information. In order to continue to simplify the calculation of data similarity between DataHolders, we further process the sparsity expression of data of each DataHolder. For the i − th DataHolder, we use DATA i = d i 11 , d i 12 , . . . , d i 1n , . . . , d i m1 , d i m1 , . . . , d i mn represents the data it holds, where m represents the number of texts of the filtered DataHolder. Since the Jaccard distance formula has the advantage of being independent of position and order, this paper decides to use it to calculate data similarity. Communication efficiency will also affect the creation of the Unified Retrieval Graph, the actual physical distance between</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2022:04:72513:1:1:NEW 2 Jun 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Task-related parties retrieval process.</ns0:figDesc><ns0:graphic coords='8,162.41,63.77,372.21,199.10' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Data sharing process.</ns0:figDesc><ns0:graphic coords='9,203.77,63.78,289.50,352.87' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>new users and new nodes, and release records of joint models.The main transaction forms in RetrievalChain include: update records of Unified Retrieval Graph and retrieval records of task-related parties. The transaction forms in ArgChain mainly include: the upload record of local training models and aggregate model. 2. System Initialization: When a new DataHolder applies to join the system, the system distributes ID DH through MasterChain (ID DH consists of device code, module code and sensor code), and then it will work with other DataHolder to maintain the operation of the system. Similarly, a new user should first register with the nearby DataHolder before posting the requested task to obtain its unique ID user . 3. Task Request: Task Requester submits the requested task Req (r1, r2, . . . , rn) through its nearby DataHolder Req , and pays the corresponding coin according to the expected model effect. Mas-terChain checks whether the TaskRequester has been registered. If the check passes, the nearby energy storage deivce DataHolder Req uploads the task to MasterChain as a transaction.Otherwise, the system performs a new user registration operation.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>5.</ns0:head><ns0:label /><ns0:figDesc>Retrieval task-related parties: MasterChain obtains the relevant information of the nearby energy storage device DataHolder Req through the requested task, and then sends the obtained ID DH to RetrievalChain for task related party retrieval and make the distribution of CreditCoin. Finally, RetriavalChain sends the retrieved Workers to ArgChain for joint modeling. 6. Model training: ArgChain obtains Y Workers participating in joint modeling from RetrievalChain. Each Worker uses its local dataset and initial model GM Req for local model training. After the local model training is over, ArgChain's smart contract algorithm will verify the local model parameters, and then upload the W (W ≤ Y ) local model parameters that have passed the verification to the ArgChain. At the same time, the CreditCoin owned by each worker will also be adjusted according to the training quality of its local model. 7. Consensus process: We select Z(Z = µY ) Workers with the highest accuracy from W honest workers to form Committee Members. At the same time, we select the Worker with the highest accuracy as the Leader of the committee for local model parameter aggregation, and send the aggregation results to the Committee Members for consensus.If the consensus is passed, the Leader of the committee will release the aggregation model GM Req and upload it to AgrChain, which can facilitate the Workers participating in the joint modeling process to download and update their local models. 8. Complete the requested task:After several iterations of training, the joint modeling model GM f inal Req is finally established. According to the proportion of CreditCoin held by each Worker, the system distributes the coin paid by TaskRequester as reward to Workers participating in joint modeling, which can encourage DataHolder to actively participate in joint modeling of requested task next time. Finally, the system will upload and store the joint modeling model GM f inal Req to MasterChain, and return the result Req (GM) to the TaskRequester. 2.4 SFecChain aggregation strategy For a DataHolder, it is difficult to guarantee the training quality of the local model due to its limited resources, and to effectively protect the privacy of users by sharing the original data information of all parties for centralized training. As a result, the aggregation strategy of SFecChain not only expands the 9/17 PeerJ Comput. Sci. reviewing PDF | (CS-2022:04:72513:1:1:NEW 2 Jun 2022) Manuscript to be reviewed Computer Science amount of data required but also protects data privacy of users by integrating multiparty DataHolders' local model parameters for joint model training without the original data being local. In the local model parameter aggregation stage, dishonest Workers may upload incorrect local model parameters, and it is difficult to guarantee the reliability of the Leader in committee responsible for local model parameter aggregation. Therefore, it is difficult to establish an accurate and efficient joint model.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>, Tang et al. (2020)), we propose a credit-based dynamic weight consensus protocol combined with deep learning. The system can effectively identify dishonest Workers participating in each round of joint modeling, and can also use the historical credit of the Workers participating in joint modeling and the accuracy of each round of local model training, and combine with the way of dynamically selecting committees for consensus. Eventually a high-quality joint model will be obtained. 2.4.1 Encrypted parameter upload In order to ensure the secure sharing of local model parameters during the joint modeling, we integrate the differential privacy mechanism into SFedChain. The privacy of Workers is protected by adding noise to local model parameters. We use smart contract of ArgChain to filter the malicious behavior of dishonest participants in the joint modeling, which effectively guarantees the quality of the joint training model, and combined with the distributed ledger technology of ArgChain to further ensure the security of local model parameter sharing.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Consensus process of dynamic weight consesus protocol.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>the P k corresponding to the maximum value in S C as Leader in committee, and get the P corresponding to the larger µ × length( P) values in S C to form Committee Members for consensus 19: contract verification mechanism of ArgChain combined with differential privacy: Before uploading the trained local model parameters to ArgChain, the real data can be hidden by perturbing the local model parameters and adding noise that conforms to the Laplace distribution. Attacks with background knowledge can be avoided to obtain the original data information of the DataHolder. The trained local model parameters are verified through smart contract of ArgChain. Since DataHolder may train poor quality model parameters or dishonest DataHolder maliciously upload incorrect local model parameters, these factors will lead to lower quality of the joint model. Filtering the uploaded local model parameters through ArgChain's smart contract can guarantee the training quality of the joint model. 3. No fixed aggregation server: In the aggregation phase of the local model parameters. A dishonest parameter aggregation server generates incorrect aggregation parameters, or the parameter aggregation server is attacked by a malicious attacker, which may cause the interruption of the parameter aggregation process. We propose a method of dynamically selecting parameter aggregation server and Committee Members based on the credit of Workers to perform parameter aggregation services, and to agree on the result of parameter aggregation to ensure the safety and accuracy of the parameter aggregation process. 12/17 PeerJ Comput. Sci. reviewing PDF | (CS-2022:04:72513:1:1:NEW 2 Jun 2022) Manuscript to be reviewed Computer Science 4. The quality of the joint training model: In order to obtain a higher-quality joint training model, we propose a local model parameter aggregation algorithm based on dynamic weight allocation. Specifically, the new CreditCoin owned by each Worker is calculated by using the CreditCoin owned by the Worker and the accuracy of each round of local training. Perform softmax on the new CreditCoin, and perform a weighted summation of its local model parameters according to its different proportions to obtain the joint model parameters for each round. 5. Incentive mechanism: In order to ensure the durable operation of SFedChain and attract more DataHolders to participate in the joint modeling of the requested task. We propose to pay rewards for participating in joint modeling workers to attract more DataHolders to join. TaskRequester pays Coin for its requested task. According to the performance of Workers' joint modeling process, different amounts of rewards are allocated to improve the enthusiasm of DataHolder to participate.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head /><ns0:label /><ns0:figDesc>The 20 Newsgroup dataset is a collection newsgroup documents. This dataset collects about 20000 newsgroup documents, which are evenly divided into 20 newsgroup collections with different topics.it has become a popular data set for experiments in text applications of machine learning techniques.The AG News is a collection of more than 1 million news articles. News articles have been gathered from more than 2000 news sources by ComeToMyHead in more than 1 year of activity. The dataset includes 120000 training samples and 7600 test samples. Each sample is a short text with four types of labels. We use these two datasets to simulate the text data generated by the equipment operation and monitoring of each device.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Accuracy in various datasets</ns0:figDesc><ns0:graphic coords='14,141.73,503.03,413.53,189.36' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head /><ns0:label /><ns0:figDesc>Figure 7(b) shows the accuracy results with various number of data providers. As data providers increase, the accuracy curve of the model first grows and eventually stabilizes. This means that the lack of data volume affects the accuracy of the model to a certain extent. Eventually, as the data volume saturates, the model reaches the limit of its diagnostic ability. Therefore, we determine the number of Workers according to the minimum and maximum payment amount of the request task to adapt to the model's diagnostic ability.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Running time in various datasets</ns0:figDesc><ns0:graphic coords='15,141.73,255.52,413.53,189.64' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_19'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2022:04:72513:1:1:NEW 2 Jun 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_20'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Accuracy and Loss under various attack strengths</ns0:figDesc><ns0:graphic coords='16,141.73,160.05,413.53,181.74' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_21'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='12,162.41,63.78,372.20,242.57' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,42.52,204.37,525.00,283.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,42.52,204.37,525.00,274.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,204.37,525.00,280.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,204.37,525.00,342.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='26,42.52,229.87,525.00,240.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,204.37,525.00,240.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,204.37,525.00,230.25' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>Credit-based Dynamic Weight Parameter Aggregation Input: Workers participating in joint modeling P, Workers verified by ArgChain P, iteration times iter = 1 Output: Encrypted global model,GM 1: if length(veri f ied(P)) = length(P) then</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>Computer Science</ns0:cell><ns0:cell>Manuscript to be reviewed</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Algorithm 2 2: while each participant p i ∈ P − P do</ns0:cell></ns0:row><ns0:row><ns0:cell>3:</ns0:cell><ns0:cell cols='2'>p i .CreditCoin = p i .CreditCoin − u × p i .CreditCoin</ns0:cell></ns0:row><ns0:row><ns0:cell>4:</ns0:cell><ns0:cell>sum r = sum r + u × p i .CreditCoin</ns0:cell></ns0:row><ns0:row><ns0:cell>5:</ns0:cell><ns0:cell>end while</ns0:cell></ns0:row><ns0:row><ns0:cell>6: 7:</ns0:cell><ns0:cell>while each participant p i ∈ P do p i .CreditCoin = p i .CreditCoin + sum r length( P)</ns0:cell></ns0:row><ns0:row><ns0:cell>8:</ns0:cell><ns0:cell>end while</ns0:cell></ns0:row><ns0:row><ns0:cell>9:</ns0:cell><ns0:cell>while each participant p i ∈ P do</ns0:cell></ns0:row><ns0:row><ns0:cell>10: 11:</ns0:cell><ns0:cell>S CreditCoin i S Acc i = ∑ length( P) = ∑ length( P) e p i .CreditCoin e p j .CreditCoin j=1 e p i .accuracy j=1 e p j .accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>12:</ns0:cell><ns0:cell>end while</ns0:cell></ns0:row><ns0:row><ns0:cell>13:</ns0:cell><ns0:cell>C = S CreditCoin + S Acc</ns0:cell></ns0:row><ns0:row><ns0:cell>14:</ns0:cell><ns0:cell>while each c i ∈ C do</ns0:cell></ns0:row><ns0:row><ns0:cell>15:</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>PeerJ Comput. Sci. reviewing PDF | (CS-2022:04:72513:1:1:NEW 2 Jun 2022)</ns0:cell><ns0:cell>11/17</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Dear Editor and Reviewers.
Thank you for your letter and for the comments concerning our manuscript entitled 'SFedChain: blockchain-based federated learning scheme for secure data sharing in distributed energy storage networks' (ID: #CS-2022:04:72513). Those comments were insightful, enabled us to significantly improve the quality of our manuscript, and provide valuable guidance for our future research.
We have studied comments carefully and have tried our best to revise and improve the manuscript. Responses to the comments are as follows:
Responses to the comments of Editor
Comment: “Based on the reports of reviewers, both of them believed that the paper is well-written with a solid contribution. Some writing issues should be corrected before accepting it as the English grammar issues are pervasive. A minor revision is needed to enhance the quality.”
Response: Thank you for your hard work. We have revised and improved the manuscript according to the reviewers' comments; in particular, we have thoroughly checked for typos, capitalization issues, and grammatical errors.
Responses to the comments of Reviewer 1#
Comment1: There are some Language problems and typos or grammar errors in this manuscript. Many of them could cause difficulties to the readers.
Response1: Thank you for your comments. We have thoroughly checked the revised manuscript and corrected the grammatical errors and typos we found.
Comment2: The author proposes to use the consortium chain to trace the source of the joint modeling process. Why does this article apply the consortium chain instead of the public chain or private chain? Please give the author a detailed explanation.
Response2: Thank you for pointing out this problem in the manuscript. A public blockchain has a very strict consensus mechanism, so its biggest limitation is the consensus overhead, which directly limits the speed at which it can process data and makes it difficult to apply in scenarios with a large volume of transactions. Therefore, it is not suitable for distributed energy storage network scenarios with frequent joint modeling. A private blockchain is not open to the public; only permissioned nodes can participate and view all data. It is generally suitable for the internal data management and auditing of a specific institution, but not for multi-party data sharing scenarios. A consortium blockchain lies between the public and private blockchains and is partially decentralized, combining characteristic elements of both. Its processing speed is faster than that of a public blockchain because the number and identities of the nodes are predefined, so a relatively loose consensus mechanism can be used and the data processing speed is greatly improved. At the same time, for multi-party data sharing in distributed energy storage networks, it supports secure data sharing between institutions and efficient joint modeling of tasks better than a private blockchain. Therefore, we choose to apply a consortium blockchain in distributed energy storage scenarios.
Comment3: In the simulation of the attack experiment in Figure 9, the authors should give a more detailed experimental setup.
Response3: Thank you for the reminder. We have revised the text to address your concerns and hope that it is now clearer. We now describe the experimental setup of the simulated attack experiments in more detail. Please refer to section 2 of chapter 3.
Responses to the comments of Reviewer 2#
Comment1: In the first paragraph of chapter 1, the “blockchain” is written as “blockchian”. Please proofread carefully.
Response1: We thank the reviewer for pointing out this issue. We have corrected this error in the revised manuscript (page 3, line 132) .
Comment2: In section 2 of chapter 2, the explanation of and are not given when it appears at the first time.
Response2: Thanks for your valuable counsel. The full descriptions of the abbreviations like and have been supplemented in the revised manuscript. Please refers to section 2 of chapter 2.
Comment3: In chapter INTRODUCTION, page 2, line 99 - 'privacy', use capital 'P'. Line 118, 'In Section 2, We give implementation', Use the small letter 'we'.
Response3: Thank you for the reminder. We have changed 'privacy' to 'Privacy' (page 2, line 99) and 'In Section 2, We give implementation' to 'In Section 2, we give implementation' (page 3, line 118).
Comment4: The text in Figure 1 is too small, please use a high-quality vector image and adjust the font size.
Response4: Thank you for pointing out this issue in our manuscript. According to the revised content, we have redrawn Figure 1 and revised the text size issues in the images and used high-quality vector graphics to more clearly demonstrate the Scenario of secure multi-party data sharing. Please refer to Figure 1.
Once again, we sincerely appreciate the editors' and reviewers' efforts, and we hope that the revisions in the manuscript and the attached response letter are satisfactory. We look forward to hearing from you at your earliest convenience.
Yours sincerely,
Mingming Meng
On behalf of all authors.
" | Here is a paper. Please give your review comments after reading it. |
703 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. Energy-constrained heterogeneous nodes make wireless sensor networks (WSNs) particularly challenging for developing energy-aware clustering schemes.</ns0:p><ns0:p>Although various clustering approaches have been shown to minimise energy consumption and delay and to extend the network lifetime by selecting optimum cluster heads (CHs), this remains a crucial challenge. Methods. This paper proposes a genetic algorithm-based energy-aware multi-hop clustering (GA-EMC) scheme for heterogeneous WSNs (HWSNs). In HWSNs, the nodes have varying initial energy and typically operate under an energy consumption restriction. A genetic algorithm determines the optimal CHs and their positions in the network. The fitness of chromosomes is calculated in terms of distance, optimal CHs, and the nodes' residual energy. Multi-hop communication improves energy efficiency in HWSNs. More supernodes are deployed in the areas near the sink than far away from it to solve the hot spot problem near the sink node. Results. Simulation results show that the GA-EMC scheme achieves a longer network lifetime, better network stability, and lower delay than existing approaches in heterogeneous settings.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>The latest technology development in wireless communication, sensing devices, and microelectronics have opened new frontiers in wireless sensor networks (WSNs). Critical WSNs applications include environmental monitoring <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref><ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>, transport <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref><ns0:ref type='bibr' target='#b4'>[4]</ns0:ref><ns0:ref type='bibr' target='#b6'>[5]</ns0:ref>, surveillance systems <ns0:ref type='bibr' target='#b7'>[6]</ns0:ref><ns0:ref type='bibr' target='#b8'>[7]</ns0:ref>, healthcare <ns0:ref type='bibr' target='#b10'>[8]</ns0:ref><ns0:ref type='bibr' target='#b11'>[9]</ns0:ref><ns0:ref type='bibr' target='#b12'>[10]</ns0:ref><ns0:ref type='bibr' target='#b13'>[11]</ns0:ref><ns0:ref type='bibr' target='#b14'>[12]</ns0:ref><ns0:ref type='bibr' target='#b15'>[13]</ns0:ref><ns0:ref type='bibr' target='#b17'>[14]</ns0:ref><ns0:ref type='bibr' target='#b20'>[15]</ns0:ref><ns0:ref type='bibr' target='#b21'>[16]</ns0:ref>, emotions recognition and monitoring <ns0:ref type='bibr' target='#b23'>[17]</ns0:ref><ns0:ref type='bibr' target='#b24'>[18]</ns0:ref><ns0:ref type='bibr' target='#b26'>[19]</ns0:ref><ns0:ref type='bibr' target='#b27'>[20]</ns0:ref>, home automation <ns0:ref type='bibr' target='#b29'>[21]</ns0:ref>, battlefield monitoring, and industrial automation and control <ns0:ref type='bibr' target='#b41'>[32]</ns0:ref>. WSNs contain more sensor nodes capable of sensing the physical phenomenon, packet forwarding, and communicating the packets to the destination. However, each sensor node has limitations, such as limited memory, restricted processing capability, short-range transmission, finite energy resources, and low storage capability. WSNs are becoming a good network for protecting, controlling, and facilitating real-time applications <ns0:ref type='bibr' target='#b42'>[33]</ns0:ref>. The primary constraint of the WSNs is the limited, non-rechargeable battery-powered sensor nodes, and these nodes have used their energy for sensing, sending, and receiving the data. When the sensor battery is drained, several areas in the sensor field will lack coverage, and valuable data from these areas will not reach the sink. Using energy among the nodes and prolonging the network's lifetime are considered the primary challenge for HWSNs.</ns0:p><ns0:p>Various routing and clustering algorithms have been addressed to minimise average energy consumption in WSNs. CHs in WSNs allocate the energy to their member nodes to maintain the load-balancing of a cluster <ns0:ref type='bibr' target='#b43'>[34]</ns0:ref>. This protocol utilises local coordination to enhance scalability and reduce the large number of data packets communicated to the sink. The clustering protocol splits up the network structure into many clusters, and every cluster contains CH and cluster members (CMs). The CM node collects and sends information to the respective CH in each cluster. Each CH gathers the data packets from the CMs and carries out the data aggregation process. Eventually, the aggregated data is communicated to the sink. The protocol in <ns0:ref type='bibr' target='#b44'>[35]</ns0:ref> selects CHs periodically using residual energy and the degree of the nodes. This approach has a low overhead and performs CH selection uniformly across the network. 
In <ns0:ref type='bibr' target='#b45'>[36]</ns0:ref>, the proposed algorithm analyses the routing algorithms in WSNs to extend the network lifetime, save power, and maintain load-balancing. In <ns0:ref type='bibr' target='#b46'>[37]</ns0:ref>, the energy consumption and nonuniform node distribution in WSNs have been discussed. An energy-aware routing algorithm is presented for cluster-based WSNs to stabilise the energy consumption of the CHs <ns0:ref type='bibr' target='#b37'>[28]</ns0:ref>. Quality of service (QoS) and energy-efficiency requirements are satisfied in practical WSNs <ns0:ref type='bibr' target='#b38'>[29]</ns0:ref>. <ns0:ref type='bibr' target='#b39'>[30]</ns0:ref> highlights the optimal CH selection cluster formation and maintains the load-balanced network. For a given distance, energy efficiency improves if the data is transmitted through multi-hop. By properly selecting the CHs and the next-hop nodes for multi-hop routing, energy spent by the sensor nodes can be reduced. <ns0:ref type='bibr'>[31]</ns0:ref> analyses the impact of heterogeneity in WSNs, energy level, and hierarchical cluster structures. Smaragdakis et al. <ns0:ref type='bibr' target='#b41'>[32]</ns0:ref> proposed a protocol that prolongs the stability period of sensor nodes in heterogeneous WSNs (HWSNs). In <ns0:ref type='bibr' target='#b42'>[33]</ns0:ref><ns0:ref type='bibr' target='#b43'>[34]</ns0:ref>, average network energy and the nodes' residual energy select the optimal CHs. In <ns0:ref type='bibr' target='#b44'>[35]</ns0:ref>, the proposed algorithm minimises delay based on signal-to-interference-and-noise-ratio (SINR) in WSNs. <ns0:ref type='bibr' target='#b45'>[36]</ns0:ref> finds optimal cluster sizes based on the hop count to the sink node. It is also used to extend the network lifetime and minimise energy consumption. Several heterogeneous routing protocols in WSNs <ns0:ref type='bibr' target='#b46'>[37]</ns0:ref> are reviewed and analysed with performance metrics. The algorithm in <ns0:ref type='bibr' target='#b47'>[38]</ns0:ref> organises the nodes into several clusters in WSNs and generates a hierarchy of CHs. A genetic algorithm (GA) is a metaheuristic algorithm used to solve optimisation problems <ns0:ref type='bibr' target='#b48'>[39]</ns0:ref><ns0:ref type='bibr' target='#b49'>[40]</ns0:ref>. GA is an appropriate scheme for solving any clustering problems in WSNs. It is also used to resolve persistent optimisation problems <ns0:ref type='bibr' target='#b50'>[41]</ns0:ref>. In this paper, HWSNs use GA for solving the multi-hop clustering based on the newly defined fitness function <ns0:ref type='bibr' target='#b51'>[42]</ns0:ref>.</ns0:p><ns0:p>Existing solutions have the advantage of cluster formation done through the residual energy and prolonging the lifetime of WSNs. However, re-clustering consumes more energy while the end-to-end delay is not minimised. This motivates us to devise an approach for designing energy-aware multi-hop clustering for HWSNs. WSNs with heterogeneous nodes result in better network stability and extend the network lifetime. Energy consumption has been minimised using GA by selecting the optimal CHs during the re-clustering. 
The main contributions of this paper are as follows:</ns0:p><ns0:p>• A GA-based energy-aware multi-hop clustering algorithm (GA-EMC) is proposed for selecting the optimal number of CHs dynamically during re-clustering.</ns0:p><ns0:p>• A framework for optimised transmission scheduling and routing is formulated to reduce the delay under the SINR model for HWSNs.</ns0:p><ns0:p>• Combining weak and robust sensor nodes according to their residual energy mitigates the re-clustering issues.</ns0:p><ns0:p>• For optimising cluster construction, the GA maintains the stability of the nodes in the network.</ns0:p><ns0:p>• A dynamic power allocation scheme for sensor nodes is proposed to guarantee QoS for the nodes.</ns0:p><ns0:p>The structure of this paper is as follows: the introduction covers wireless sensor networks, the genetic algorithm, and multi-hop clustering paradigms; the following section describes the existing multi-hop clustering algorithms and their issues; the next section presents the GA-EMC algorithm, followed by the section that reports the experimental results and analyses the performance of GA-EMC; a discussion follows, and the last section presents the conclusions.</ns0:p></ns0:div><ns0:div><ns0:head>Related Works</ns0:head><ns0:p>This section presents various modern and advanced multi-hop clustering schemes in WSNs. Many researchers have worked on multi-hop clustering algorithms based on GA, and an overview of that work is given here. Heinzelman et al. <ns0:ref type='bibr' target='#b43'>[34]</ns0:ref> developed probability-based CH selection and decreased the average energy consumption of the CHs. In <ns0:ref type='bibr' target='#b44'>[35]</ns0:ref><ns0:ref type='bibr' target='#b45'>[36]</ns0:ref>, the spatial distribution of CHs in WSNs is exploited by constructing a multi-hop table, which also reduces the number of CHs that transmit directly to the sink or base station (BS). Yu et al. <ns0:ref type='bibr' target='#b46'>[37]</ns0:ref>'s algorithm selects the CHs with higher residual energy and achieves better load balancing among CHs. In <ns0:ref type='bibr' target='#b47'>[38]</ns0:ref>, the energy consumption of CHs is minimised during the data routing process with better time complexity. Cheng et al. <ns0:ref type='bibr' target='#b48'>[39]</ns0:ref>'s protocol satisfies the QoS requirements in WSNs, and <ns0:ref type='bibr' target='#b39'>[30]</ns0:ref> addresses cluster formation and CH selection using weight metrics in HWSNs.</ns0:p><ns0:p>In general, sensor networks can be heterogeneous regarding the initial energy, the computational ability of the WSN nodes, and the bandwidth of the links <ns0:ref type='bibr'>[31]</ns0:ref>. Designing WSNs with heterogeneous nodes increases the reliability and network lifetime. Computational and link heterogeneity reduces the latency in data transmission <ns0:ref type='bibr' target='#b41'>[32]</ns0:ref><ns0:ref type='bibr' target='#b42'>[33]</ns0:ref>. Various parameters are used to classify the nodes in HWSNs <ns0:ref type='bibr' target='#b43'>[34]</ns0:ref>. <ns0:ref type='bibr' target='#b44'>[35]</ns0:ref> studied transmission scheduling and multi-hop routing to minimise delay using SINR. The initial energy varies according to the node's distance from the sink to overcome the energy hole problem in multi-hop networks <ns0:ref type='bibr' target='#b45'>[36]</ns0:ref>. 
<ns0:ref type='bibr' target='#b46'>[37]</ns0:ref> categorises several heterogeneous routing protocols with predefined parameters for enhancing network lifetime and node heterogeneity in WSNs.</ns0:p><ns0:p>GA has been used for the optimal selection of CHs in recent research. The main focus of GA-based clustering algorithms is the fitness function. The fitness function determines the goodness of an individual to be selected for the next generation <ns0:ref type='bibr' target='#b47'>[38]</ns0:ref>. <ns0:ref type='bibr' target='#b48'>[39]</ns0:ref> critically analysed the energy-efficient routing protocols for WSNs. The method in <ns0:ref type='bibr' target='#b49'>[40]</ns0:ref> is based on biogeography-based optimisation in HWSNs. The fitness value is modified further by incorporating the residual energy of the remaining nodes, which enhances the performance and prolongs the network lifetime <ns0:ref type='bibr' target='#b50'>[41]</ns0:ref>. Meta-heuristic techniques are widely applied to solve several clustering problems in WSNs <ns0:ref type='bibr' target='#b51'>[42]</ns0:ref><ns0:ref type='bibr' target='#b52'>[43]</ns0:ref>. <ns0:ref type='bibr' target='#b53'>[44]</ns0:ref> reviewed various protocols and their properties in WSNs, and <ns0:ref type='bibr' target='#b54'>[45]</ns0:ref> investigated and presented further clustering approaches.</ns0:p><ns0:p>Younis et al. <ns0:ref type='bibr' target='#b55'>[46]</ns0:ref>'s approach formulates clusters and considers the relay nodes as CHs in two-tiered sensor networks, which prolongs the relay node lifetime. The method in <ns0:ref type='bibr' target='#b56'>[47]</ns0:ref> extends the network lifetime through dynamic route selection and reduces energy consumption. <ns0:ref type='bibr' target='#b57'>[48]</ns0:ref> critically investigated and addressed the power-conservation issues in WSNs, and the algorithm in <ns0:ref type='bibr' target='#b58'>[49]</ns0:ref> solves the energy balance problem in WSNs. Gupta and Pandey <ns0:ref type='bibr' target='#b59'>[50]</ns0:ref> considered the location of the BS and the residual energy as clustering parameters to solve the energy hole problem in HWSNs. Darabkh et al. <ns0:ref type='bibr' target='#b61'>[51]</ns0:ref>'s scheme minimises the average energy consumption and prolongs the lifetime of WSNs. Javid et al. <ns0:ref type='bibr' target='#b62'>[52]</ns0:ref>'s technique for HWSNs dynamically elects the CH and extends the network lifetime. <ns0:ref type='bibr' target='#b63'>[53]</ns0:ref> analyses the heterogeneous node locations and selects the optimal CH based on the distance between the clusters. The algorithm in <ns0:ref type='bibr' target='#b64'>[54]</ns0:ref> improves the energy and lifetime of both nodes and networks by choosing the optimal CHs. <ns0:ref type='bibr' target='#b65'>[55]</ns0:ref><ns0:ref type='bibr' target='#b66'>[56]</ns0:ref> conserve the network energy and extend the nodes' lifetime by selecting optimal CHs in WSNs. Fan <ns0:ref type='bibr' target='#b67'>[57]</ns0:ref>'s method investigates several issues such as energy consumption, coverage, and data routing in WSNs; it improves the coverage ratio and prolongs the network lifetime. Javaid et al. <ns0:ref type='bibr' target='#b68'>[58]</ns0:ref>'s scheme increases the node stability period and delivers more packets to the BS. <ns0:ref type='bibr' target='#b69'>[59]</ns0:ref> designs an energy-aware cluster by selecting optimal CHs in WSNs.</ns0:p><ns0:p>Ali et al. 
<ns0:ref type='bibr' target='#b70'>[60]</ns0:ref>'s algorithm optimises the clusters in a network and minimises the data traffic and energy dissipation among nodes. <ns0:ref type='bibr' target='#b71'>[61]</ns0:ref> continuously monitors patients' data by selecting an optimal path in a body area network; it also enhances the network lifetime, load balancing, and energy of the overall network. Pal et al. <ns0:ref type='bibr' target='#b73'>[62]</ns0:ref>'s method achieves a load-balanced network and prolongs the lifetime of WSNs by optimising the CH selection, while the approach in <ns0:ref type='bibr' target='#b74'>[63]</ns0:ref> reduces the distance between the CH and CMs in WSNs to improve energy conservation. Lin et al. <ns0:ref type='bibr' target='#b75'>[64]</ns0:ref>'s approach maximises the lifetime of heterogeneous nodes based on sensing coverage and network connectivity. The approach in <ns0:ref type='bibr' target='#b76'>[65]</ns0:ref> selects energy-aware clusters and the optimal CH based on hop count and locations. <ns0:ref type='bibr' target='#b77'>[66]</ns0:ref> tunes the GA parameters to enhance the CH performance in WSNs, and <ns0:ref type='bibr' target='#b78'>[67]</ns0:ref>'s approach investigates cluster formation that reduces energy consumption. Haseeb et al. <ns0:ref type='bibr' target='#b79'>[68]</ns0:ref>'s method increases energy efficiency and data security against malicious activities. The algorithms in [69-70] prolong the network lifetime by selecting the optimal CHs and reducing the average energy consumption.</ns0:p><ns0:p>Delavar and Baradaran's <ns0:ref type='bibr' target='#b82'>[71]</ns0:ref> algorithm reduced energy consumption by selecting chromosomes in different states. <ns0:ref type='bibr' target='#b83'>[72]</ns0:ref><ns0:ref type='bibr' target='#b85'>[73]</ns0:ref><ns0:ref type='bibr' target='#b86'>[74]</ns0:ref><ns0:ref type='bibr' target='#b87'>[75]</ns0:ref> studied the optimal selection of clusters to extend the WSNs' lifetime. <ns0:ref type='bibr' target='#b88'>[76]</ns0:ref> analyses the spatial distribution of heterogeneous nodes in WSNs and effectively avoids the energy hole problem. <ns0:ref type='bibr' target='#b89'>[77]</ns0:ref>'s algorithm enhances the reported sensitivity of the nodes and optimises the solution quality in HWSN management. <ns0:ref type='bibr' target='#b90'>[78]</ns0:ref> compares various evolutionary algorithms with respect to network lifetime, node stability period, and energy efficiency. <ns0:ref type='bibr' target='#b91'>[79]</ns0:ref>'s method optimises heterogeneous sensor node clustering and dramatically extends the network lifetime. Huang et al. <ns0:ref type='bibr' target='#b92'>[80]</ns0:ref>'s method was used to minimise the delay and collisions and to reduce the energy consumption in WSNs. The protocol in <ns0:ref type='bibr' target='#b93'>[81]</ns0:ref> improves energy utilisation and minimises the delay in sensor networks. <ns0:ref type='bibr' target='#b94'>[82]</ns0:ref> mitigates energy holes and prolongs the network's lifetime in WSNs; in this approach, the network is divided into unequal clusters, and the node's residual energy and distance to the base station are considered for cluster formation. Nodes with the highest energy are considered as CHs in WSNs <ns0:ref type='bibr' target='#b95'>[83]</ns0:ref>. Kamil et al. 
<ns0:ref type='bibr' target='#b97'>[84]</ns0:ref>'s technique changes the position of the WSN sink node dynamically to preserve residual energy and prolong the network lifetime.</ns0:p><ns0:p>The method in <ns0:ref type='bibr' target='#b98'>[85]</ns0:ref> selects a smart CH in WSNs to prolong the network lifetime. Optimal CH selection is performed to extend the network lifetime of a WSN by using various attributes of the sensor nodes <ns0:ref type='bibr' target='#b99'>[86]</ns0:ref><ns0:ref type='bibr' target='#b100'>[87]</ns0:ref>. Kashyap et al. <ns0:ref type='bibr' target='#b101'>[88]</ns0:ref>'s algorithm performs load balancing among sensor nodes in WSNs; it also chooses an optimal number of CHs and evenly distributes the load among nodes. Zhang et al. <ns0:ref type='bibr' target='#b102'>[89]</ns0:ref>'s technique schedules the sensors into several disjoint complete cover sets in WSNs and activates them in batches for energy conservation. The algorithm in <ns0:ref type='bibr' target='#b103'>[90]</ns0:ref> is suitable for small-scale WSNs and suffers from high network latency due to multiple forwarding operations. <ns0:ref type='bibr' target='#b104'>[91]</ns0:ref> discussed the various energy management challenges for WSNs.</ns0:p><ns0:p>Many research proposals in the related works address energy-efficient hierarchical clustering, but the heterogeneity of WSN nodes has not been exploited to its full potential. Energy efficiency is the essential component in extending the life of WSN systems, which are resource-constrained, particularly in terms of energy. Energy-aware clustering algorithms are a significant factor in WSNs since multi-hop clustering methods govern the network's communication operations. Energy, computation, and link are the three basic types of heterogeneity in WSNs. Another vital factor is the heterogeneity of data creation rates, which accounts for nodes with varying data transmission requirements. As a result, sensor nodes must be categorised using distinct performance evaluation metrics. Motivated by the above facts, in this paper we provide a genetic algorithm-based energy-aware multi-hop clustering scheme for heterogeneous WSNs. Table <ns0:ref type='table'>1</ns0:ref> shows the summarisation of the related works.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 1. The summarisation of Related Work</ns0:head><ns0:p>The proposed method is closest to <ns0:ref type='bibr' target='#b44'>[35]</ns0:ref> and <ns0:ref type='bibr' target='#b77'>[66]</ns0:ref>. Compared to existing works <ns0:ref type='bibr' target='#b44'>[35,</ns0:ref><ns0:ref type='bibr' target='#b58'>49,</ns0:ref><ns0:ref type='bibr' target='#b61'>51,</ns0:ref><ns0:ref type='bibr' target='#b62'>52,</ns0:ref><ns0:ref type='bibr' target='#b64'>54,</ns0:ref><ns0:ref type='bibr' target='#b65'>55,</ns0:ref><ns0:ref type='bibr' target='#b66'>56,</ns0:ref><ns0:ref type='bibr' target='#b70'>60,</ns0:ref><ns0:ref type='bibr' target='#b77'>66]</ns0:ref>, our study is distinguished by the type of algorithm used. Two methods are investigated in HWSNs. The first method uses GA to enhance performance by selecting the optimal CHs during the clustering and re-clustering phases. The second method extends the first by adding optimised transmission scheduling: we carefully analyse the transmission scheduling and the communication among CHs, and we address node strategies that minimise the end-to-end delay, extend the network lifetime, and improve energy efficiency. To the best of our knowledge, this is the first paper presenting a GA-based energy-aware multi-hop clustering scheme that minimises the end-to-end delay, extends the network lifetime, and enhances energy efficiency in HWSNs.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head><ns0:p>In HWSNs, clusters are formed based on GA. GA finds the optimal CHs by considering the network coverage and the energy level of the nodes. The CHs perform data aggregation and transmit the combined data packets to the sink. A multi-hop network is used to send packets from the CHs to the sink. Three kinds of nodes are considered: regular, advanced, and supernodes, which have different initial energies. Regions near the sink have a significantly larger number of supernodes than other regions. In routing, the next-hop CH is selected based on the distance between the CHs, the residual energy, the number of CMs, and the neighbouring CHs associated with the given CH. Various symbols and notations used in the proposed work are listed in Table <ns0:ref type='table' target='#tab_0'>2</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Multi-hop Network Model</ns0:head><ns0:p>A WSN is modelled as a bidirected graph G = (V, E, C), where V denotes the set of sensor nodes (the network size), E ⊆ V × V is the set of two-way communication links, and C = {(i, j), (j, i) : {i, j} ∈ E} is the set of directed links. A link {i, j} belongs to E iff the SINR condition is satisfied in both directions, i.e., p(i, j) ≥ κ and p(j, i) ≥ κ, where p(i, j) and p(j, i) denote the power received at node j when node i is sending and at node i when node j is sending, respectively. They can be represented as p(i, j) := P(i)·g(i, j) and p(j, i) := P(j)·g(j, i), where P(i) and P(j) are the transmit powers of nodes i and j, g(i, j) = g(j, i) is the gain of the communication link {i, j}, σ is the noise power, and κ is the SINR threshold. The neighbourhood of node i ∈ V is defined by N(i) := {j ∈ V : {i, j} ∈ E}.</ns0:p><ns0:p>Assume that the set of time slots is T := {1, 2, ..., τ} and Λ_t is the group of nodes transmitting in slot t. The directed link (i, j) ∈ C can be active in slot t only if i ∈ Λ_t and the resulting SINR condition is satisfied:</ns0:p><ns0:formula>P(i)·g(i, j) / (σ + Σ_{k ∈ Λ_t \ {i}} P(k)·g(k, j)) ≥ κ<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>A node can either send, receive, or be inactive at a particular time. A group of communication links c ⊆ C is a compatible set if all of its links can be simultaneously active; Λ(c) := {i : (i, j) ∈ c} denotes the group of active sender nodes of c, and the SINR condition applied to every communication link (i, j) ∈ c is</ns0:p><ns0:formula>P(i)·g(i, j) / (σ + Σ_{k ∈ Λ(c) \ {i}} P(k)·g(k, j)) ≥ κ<ns0:label>(2)</ns0:label></ns0:formula></ns0:div>
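<ns0:p>To make the SINR feasibility test concrete, the following minimal Python sketch checks whether a set of directed links can be active in the same slot under Eqs. (1) and (2). The function and variable names (tx_power, gain, noise, kappa) and all numeric values are illustrative assumptions, not parameters taken from the paper.</ns0:p>
```python
# Minimal sketch of the SINR feasibility test for a set of simultaneous links.
# All names and numbers are illustrative assumptions, not values from the paper.

def sinr_ok(links, tx_power, gain, noise, kappa):
    """Return True if every directed link (i, j) in `links` satisfies the SINR
    condition when all senders in `links` transmit at the same time."""
    senders = {i for i, _ in links}
    for i, j in links:
        signal = tx_power[i] * gain[(i, j)]
        interference = sum(tx_power[k] * gain[(k, j)] for k in senders if k != i)
        if signal / (noise + interference) < kappa:
            return False
    return True

# Toy example with three nodes (0, 1, 2) and symmetric link gains.
tx_power = {0: 1.0, 1: 1.0, 2: 1.0}
gain = {(a, b): 0.5 if abs(a - b) == 1 else 0.1
        for a in range(3) for b in range(3) if a != b}
print(sinr_ok([(0, 1)], tx_power, gain, noise=0.01, kappa=2.0))          # one active link
print(sinr_ok([(0, 1), (2, 1)], tx_power, gain, noise=0.01, kappa=2.0))  # two senders interfere at node 1
```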
<ns0:div><ns0:head>Optimised Energy Model</ns0:head><ns0:p>The proposed GA-EMC adopts an optimised energy model <ns0:ref type='bibr' target='#b105'>[92]</ns0:ref> that minimises energy consumption. The data packets are available in the data packet set S, and each data packet needs a time slot for its transmission and is sent from a source node to the sink. The energy needed by a node to transmit a data packet consisting of l bits over a distance d is denoted by Eq. (3):</ns0:p><ns0:formula>ρ_T(l, d) = l·δ + l·ε_fs·d² if d < d₀, and ρ_T(l, d) = l·δ + l·ε_mp·d⁴ otherwise<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where d represents the distance between the nodes involved in the communication and δ is the energy dissipated per bit in the source and sink electronics, which accounts for factors such as modulation and digital coding. The variables ε_fs and ε_mp represent the free-space and multipath fading coefficients, and the threshold d₀ decides whether to use the multipath fading model. Equation (4) gives the energy spent by the receiver for l bits of a packet:</ns0:p><ns0:formula>ρ_R(l) = l·δ<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>A CM node spends energy to send a packet to its CH. The power spent by the i-th CM to transmit l bits of a packet to its CH is determined by Eq. (5):</ns0:p><ns0:formula>ρ_CM(i) = ρ_T(l, d_{i,CH_i})<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>where d_{i,CH_i} represents the Euclidean distance between the i-th CM and its CH. A CH spends its power to receive packets from its CMs, aggregate all the packets, and send them to other CHs. In addition to forwarding the local cluster data, CHs may also forward the traffic received from other CHs. Equation (6) shows the energy required by CH_j:</ns0:p><ns0:formula>ρ_CH(j) = |CM_j|·ρ_R(l) + (|CM_j| + 1)·l·ρ_A + ρ_T(l, d_{j,P_j}) + ρ_F(CH_j)<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where the first term denotes the energy spent by CH_j to receive data packets from its CMs, the second gives the energy spent in aggregation, the third term gives the energy spent for data transmission to the next-hop (parent) CH P_j, and the last term represents the energy spent in forwarding the relay traffic. ρ_F(CH_j) is the sum of the energy required to receive the l-bit packets from all the lower-level CHs and to communicate them to the parent CH, as shown by Eq. (7):</ns0:p><ns0:formula>ρ_F(CH_j) = Σ_{k ∈ children(CH_j)} (ρ_R(l) + ρ_T(l, d_{j,P_j}))<ns0:label>(7)</ns0:label></ns0:formula></ns0:div>
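<ns0:p>The following short sketch illustrates the first-order radio energy model of Eqs. (3)-(5) in Python. The constants (E_ELEC, EPS_FS, EPS_MP) are common textbook defaults used here only as assumptions; they are not the simulation settings of Table 5.</ns0:p>
```python
import math

# First-order radio energy model sketch (Eqs. (3)-(5)); constants are
# illustrative textbook defaults, not the paper's Table 5 settings.
E_ELEC = 50e-9        # J/bit spent by transmitter/receiver electronics
EPS_FS = 10e-12       # J/bit/m^2, free-space amplifier coefficient
EPS_MP = 0.0013e-12   # J/bit/m^4, multipath amplifier coefficient
D0 = math.sqrt(EPS_FS / EPS_MP)   # distance threshold between the two models

def tx_energy(l_bits, d):
    """Energy to transmit l_bits over distance d (free space below D0, multipath above)."""
    if d < D0:
        return l_bits * E_ELEC + l_bits * EPS_FS * d ** 2
    return l_bits * E_ELEC + l_bits * EPS_MP * d ** 4

def rx_energy(l_bits):
    """Energy to receive l_bits (Eq. (4))."""
    return l_bits * E_ELEC

# A cluster member sending a 4000-bit packet 60 m to its CH (Eq. (5)):
print(tx_energy(4000, 60), rx_energy(4000))
```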
<ns0:div><ns0:head>Phases in the proposed GA-EMC protocol</ns0:head><ns0:p>The proposed GA-EMC contains four phases: Heterogeneous Nodes Deployment, Clustering Formulation, Selection of Next-hop Neighbour, and Packet Transmission. The main idea of the proposed GA-EMC scheme is to optimise the energy management of the WSN by minimising the intra-cluster distance between a CH and its CMs. The distance between a CM and a CH is calculated using the Euclidean distance, and a CM is placed in the cluster whose CH is nearest to it. A node communicates directly with the sink if the distance between the sink and the node is smaller than the distance between the node and its nearest CH. When a node joins a cluster, it sends a JOIN message to the CH and the other nodes to announce its membership. The CH assigns each node a time slot for data collection. After the data has been acquired, the CH aggregates it before sending it to the sink. The member nodes may sleep during this process, but the CH must be awake at all times. This drains the CH's energy, and over time a few nodes die, leaving a sparse network. Therefore, in each cycle the clusters are reconstructed and new CHs are chosen.</ns0:p><ns0:p>The fitness function used in the proposed GA-EMC technique reduces the intra-cluster distance between the sensor nodes and the cluster head (CH). It optimises the CHs' placement, which impacts the expected number of packet retransmissions along the path and hence the network's overall energy usage. Because GA-EMC works with this fitness function, the proposed technique performs well in terms of energy consumption. Minimising the distance between CMs and their CHs takes into account the sink distance, the intra-cluster distance, and the residual energy of the CMs to determine their ideal positions. These phases are described below.</ns0:p></ns0:div>
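<ns0:p>As a small illustration of the association rule just described, the sketch below lets a node join its closest CH unless the sink itself is closer, in which case it communicates with the sink directly. The coordinates used are illustrative assumptions.</ns0:p>
```python
import math

# Sketch of the association rule described above: a node joins the closest CH,
# unless the sink itself is closer than any CH, in which case it talks to the
# sink directly. Coordinates below are illustrative assumptions.
def associate(node_pos, ch_positions, sink_pos):
    """Return ('sink', None) or ('ch', index_of_chosen_CH)."""
    d_ch, best = min((math.dist(node_pos, p), i) for i, p in enumerate(ch_positions))
    if math.dist(node_pos, sink_pos) < d_ch:
        return ("sink", None)
    return ("ch", best)

print(associate((10, 10), [(50, 50), (150, 40)], sink_pos=(12, 12)))    # sink is closer
print(associate((60, 55), [(50, 50), (150, 40)], sink_pos=(100, 100)))  # joins CH 0
```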
<ns0:div><ns0:head>Heterogeneous Nodes Deployment Phase</ns0:head><ns0:p>In multi-hop communication, the CHs situated very close to the sink node have to forward more packets received from other nodes, so their power is exhausted more quickly than that of the CHs far away from the sink. This creates a hot spot in the regions near the sink. To solve this issue, the sensor nodes are classified into regular nodes with initial energy E₀, advanced nodes with a higher initial energy, and supernodes with the highest initial energy, where the energy multipliers of the advanced and supernodes are greater than 1. The WSN consists of a fixed total number of nodes, and the areas near the sink have more supernodes than the areas away from the sink.</ns0:p></ns0:div>
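<ns0:p>A minimal sketch of this deployment phase is given below: three node classes with increasing initial energy are placed in the field, and supernodes are biased towards the sink. The node counts, energy multipliers, field size, and sink position are illustrative assumptions, not the paper's parameter values.</ns0:p>
```python
import random

# Sketch of heterogeneous deployment: regular, advanced and super nodes with
# increasing initial energy, and supernodes placed preferentially near the sink.
# The fractions, multipliers and field geometry below are illustrative assumptions.
E0 = 0.5                      # J, regular-node initial energy (assumed)
ALPHA, BETA = 1.5, 3.0        # advanced/super energy multipliers (assumed, > 1)
FIELD, SINK = 200.0, (100.0, 100.0)

def deploy(n_regular=300, n_advanced=60, n_super=40):
    nodes = []
    for kind, count in (("regular", n_regular), ("advanced", n_advanced), ("super", n_super)):
        for _ in range(count):
            if kind == "super":
                # bias supernodes towards the sink to absorb relay traffic
                x = random.gauss(SINK[0], FIELD / 8) % FIELD
                y = random.gauss(SINK[1], FIELD / 8) % FIELD
            else:
                x, y = random.uniform(0, FIELD), random.uniform(0, FIELD)
            energy = E0 * {"regular": 1.0, "advanced": 1 + ALPHA, "super": 1 + BETA}[kind]
            nodes.append({"kind": kind, "pos": (x, y), "energy": energy})
    return nodes

nodes = deploy()
print(sum(n["kind"] == "super" for n in nodes), "supernodes deployed")
```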
<ns0:div><ns0:head>Clustering Formulation Phase</ns0:head><ns0:p>In this phase, the clusters are formed in the HWSN. It contains two sub-phases, namely the CH Selection and CM Association phases. The CH selection phase selects the optimal CHs, and in the CM association phase each CM is associated with one of its nearest energy-efficient CHs.</ns0:p></ns0:div>
<ns0:div><ns0:head>CH Selection Phase</ns0:head><ns0:p>This phase uses the GA for selecting the optimal CHs and their locations. GA works on the principles of natural genetics and natural selection and is used to optimise various parameters. GA is applied in multiple fields for solving constrained and unconstrained optimisation problems <ns0:ref type='bibr' target='#b47'>[38]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>CM Association Phase</ns0:head><ns0:p>Each CH sends a CH advertisement message containing its identifier, location, and initial cost as given by Eq. (8). Each CM selects the CH with the lowest cost and sends a JOIN message to that CH.</ns0:p><ns0:formula>Cost = c₁·f₁ + (1 − c₁)·f₂<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>Here c₁ is a constant with 0 ≤ c₁ ≤ 1. By setting a proper value for c₁, we can decide how much importance to give to distance and to energy in the CH selection. The terms f₁ and f₂ are calculated as provided by Eqs. (9) and (10):</ns0:p><ns0:formula>f₁ = d(CM_i, CH_j) / d_max<ns0:label>(9)</ns0:label></ns0:formula><ns0:formula>f₂ = ε_j / φ_j<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>where ε_j and φ_j represent the initial and residual energies of CH_j, respectively, d(CM_i, CH_j) is the distance between CM_i and CH_j, and d_max is the largest distance between CH_j and the CMs present in its cluster table; it is calculated by Eq. (11):</ns0:p><ns0:formula>d_max = max { d(CM_i, CH_j) : CM_i present in the cluster table of CH_j }<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>The CHs collect the JOIN messages from the CMs until the clustering timer expires. Upon expiry of the timer, each CH creates a dynamic time division multiple access (TDMA) schedule for the packet transmissions and sends it to its CMs. The GA-based clustering algorithm shows the various steps involved in forming clusters and selecting the optimal CHs.</ns0:p></ns0:div>
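<ns0:p>The sketch below illustrates the CM association step driven by the cost of Eq. (8). Where the extracted equations are ambiguous, it assumes that f1 is the CM-CH distance normalised by d_max and that f2 is the ratio of initial to residual energy, so that a nearer, energy-richer CH yields a lower cost; c1 and all numbers are illustrative.</ns0:p>
```python
import math

# Sketch of the CM-association cost of Eq. (8): Cost = c1*f1 + (1 - c1)*f2.
# f1 is the CM-CH distance normalised by d_max, and f2 is taken as
# initial/residual energy so that a CH with more remaining energy yields a
# lower cost. Both readings are assumptions made where the extracted
# equations are ambiguous; c1 and all numbers are illustrative.
def cost(cm_pos, ch, d_max, c1=0.5):
    f1 = math.dist(cm_pos, ch["pos"]) / d_max
    f2 = ch["initial_energy"] / ch["residual_energy"]
    return c1 * f1 + (1 - c1) * f2

chs = [{"pos": (40, 40), "initial_energy": 1.0, "residual_energy": 0.9},
       {"pos": (45, 35), "initial_energy": 1.0, "residual_energy": 0.3}]
cm = (42, 38)
d_max = 100.0
best = min(range(len(chs)), key=lambda i: cost(cm, chs[i], d_max))
print("CM joins CH", best)  # the nearer, energy-richer CH wins
```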
<ns0:div><ns0:head>Next-hop neighbour selection phase</ns0:head><ns0:p>Each CH broadcasts a neighbour advertisement message that contains information such as its identifier, location, initial and residual energies, distance to the sink, and the number of CMs associated with it. When a CH receives a neighbour advertisement message, it adds the information contained in the packet to its neighbour table. As shown in Figure <ns0:ref type='figure' target='#fig_9'>1</ns0:ref>, CHs use multi-hop paths to communicate the data packets to the sink. The next-hop CHs are chosen based on the distance, the residual energy, the number of CMs associated with the next-hop CH, and the number of CHs that can be reached via the next-hop CH. When more CHs can be reached via a CH, that CH can help forward packets more reliably. A CH with more residual energy, a smaller distance, fewer CMs, and more neighbouring CHs is preferred as the next-hop CH.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 1. Multi-hop communication from CH to sink</ns0:head><ns0:p>For each CH node in the neighbour table, a merit value (MV) is calculated based on the above factors. Eq. (12) shows the calculation of MV:</ns0:p><ns0:formula>MV = ω₁·μ₁ + ω₂·μ₂ + ω₃·μ₃ + ω₄·μ₄<ns0:label>(12)</ns0:label></ns0:formula><ns0:formula>ω₁ + ω₂ + ω₃ + ω₄ = 1<ns0:label>(13)</ns0:label></ns0:formula><ns0:formula>μ₁ = 1 − (d(CH_i, CH_j) + d(CH_j, S)) / d_max(i)<ns0:label>(14)</ns0:label></ns0:formula><ns0:formula>d_max(i) = max { d(CH_i, CH_j) + d(CH_j, S) : CH_j ∈ Γ_i }<ns0:label>(15)</ns0:label></ns0:formula><ns0:p>Here Γ_i represents the set of neighbouring CHs of CH_i. The factor μ₂ is the ratio of the residual energy to the initial energy of CH_j (Eq. (16)), μ₃ decreases with the number of CMs associated with CH_j (Eq. (17)), and μ₄ increases with the number of neighbouring CHs of CH_j (Eq. (18)). In Eq. (12), ω₁, ω₂, ω₃, and ω₄ represent the weights associated with the different factors.</ns0:p></ns0:div>
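<ns0:p>A small sketch of the next-hop choice based on the merit value of Eq. (12) follows. The exact normalisations of the four factors and the weights are assumptions chosen only to reflect the stated preference for nearby, energy-rich next-hop CHs with few CMs and many neighbouring CHs.</ns0:p>
```python
import math

# Sketch of the next-hop merit value of Eq. (12): MV = w1*m1 + w2*m2 + w3*m3 + w4*m4,
# where m1 rewards a short detour towards the sink, m2 rewards residual energy,
# m3 rewards having few CMs and m4 rewards having many neighbouring CHs.
# The exact normalisations and the weights are illustrative assumptions.
def merit(ch_i, cand, sink, d_max, w=(0.4, 0.3, 0.15, 0.15)):
    m1 = 1 - (math.dist(ch_i["pos"], cand["pos"]) + math.dist(cand["pos"], sink)) / d_max
    m2 = cand["residual_energy"] / cand["initial_energy"]
    m3 = 1 / (1 + cand["num_cms"])
    m4 = cand["num_neighbour_chs"] / 10.0   # assumed normalisation
    return w[0] * m1 + w[1] * m2 + w[2] * m3 + w[3] * m4

ch_i = {"pos": (60, 60)}
sink = (0, 0)
cands = [
    {"pos": (40, 40), "residual_energy": 0.8, "initial_energy": 1.0, "num_cms": 5, "num_neighbour_chs": 4},
    {"pos": (45, 50), "residual_energy": 0.3, "initial_energy": 1.0, "num_cms": 12, "num_neighbour_chs": 2},
]
d_max = 300.0
next_hop = max(cands, key=lambda c: merit(ch_i, c, sink, d_max))
print("next hop at", next_hop["pos"])
```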
<ns0:div><ns0:head>Data Transmission Phase</ns0:head><ns0:p>It involves communication within the cluster and communication between sink and CH. In intracluster communication, the CH receives packets from their CMs per the dynamic TDMA scheduling. CM also senses the data from the surroundings and sends them to the concerned CHs during a particular time. The CMs turn off their radio in the remaining time to save the energy wasted during idle listening. Each CH has many next-hop CH neighbours, and the best neighbour node is selected in the next-hop neighbour selection phase.</ns0:p></ns0:div>
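<ns0:p>The intra-cluster schedule described above can be sketched as follows: the CH assigns each CM one slot of a dynamic TDMA frame, and outside its own slot a CM may switch its radio off. The slot length and member identifiers are illustrative assumptions.</ns0:p>
```python
# Sketch of the intra-cluster phase: the CH hands each CM one slot in a dynamic
# TDMA frame; outside its own slot a CM can switch its radio off to avoid idle
# listening. Slot length and member IDs are illustrative assumptions.
def build_tdma_schedule(cluster_members, slot_ms=10):
    """Return {member_id: (slot_start_ms, slot_end_ms)} for one TDMA frame."""
    return {m: (i * slot_ms, (i + 1) * slot_ms)
            for i, m in enumerate(sorted(cluster_members))}

schedule = build_tdma_schedule(["n7", "n3", "n12"])
for member, (start, end) in schedule.items():
    print(f"{member}: transmit in [{start}, {end}) ms, sleep otherwise")
```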
<ns0:div><ns0:head>Genetic Algorithm</ns0:head><ns0:p>In GA, each solution to a specific problem is denoted by a chromosome using a binary coding scheme. A group of chromosomes constitutes the population. The initial population consists of randomly selected chromosomes, and each bit in a chromosome is called a gene. For each chromosome, a fitness value is calculated that evaluates the effectiveness of the chromosome. Chromosomes with high fitness values get more chances to create new chromosomes. The GA involves three basic operations, selection, crossover, and mutation, to select the best chromosome. The selection process duplicates good chromosomes and eliminates the poor ones; there are many selection methods, such as tournament selection, ranking selection, and roulette wheel selection. The crossover operation selects two parents, recombines them, and creates two children. Crossover can be either single-point or multi-point crossover; it does not introduce any new genetic properties. The mutation operation introduces new genetic properties. These operations are repeated for a given number of generations <ns0:ref type='bibr' target='#b47'>[38]</ns0:ref><ns0:ref type='bibr' target='#b48'>[39]</ns0:ref>. The implementation of the various GA operations is explained below.</ns0:p><ns0:p>i. Binary Coding: The binary coding scheme represents each chromosome for the given sensor scenario as a string of 1s and 0s; a chromosome of length n bits signifies an HWSN with n nodes, so the chromosome size is the same as the size of the network. In the chromosome, the values 1 and 0 represent a CH and a CM, respectively. Figure 2 shows the chromosome representation of a network with 20 sensor nodes.</ns0:p><ns0:p>ii. Objective Function: The objective function Φ is used for selecting the optimal CHs. In designing Φ, the following facts are considered. A CH consumes more energy than a CM, so the number of CHs must be minimised. The power required for intra-cluster communication depends on the distance between CHs and CMs, and the power required for inter-cluster communication depends on the distance between two CHs. To save power, we have to reduce the number of CHs (Ψ), the total distance between CHs and their CMs (Δ), and the total distance between CHs (Θ). By selecting CHs with higher residual energy, we can deliver packets reliably. Φ selects the CHs by considering the above factors, and it is a minimisation function as given in Eq. (19):</ns0:p><ns0:formula>Φ = Ψ + Δ + Θ + 1/Ω<ns0:label>(19)</ns0:label></ns0:formula><ns0:p>where Ω represents the sum of the residual energy associated with the CHs:</ns0:p><ns0:formula>Ω = Σ_{i=1}^{Ψ} φ_i<ns0:label>(20)</ns0:label></ns0:formula><ns0:p>Eq. (<ns0:ref type='formula'>21</ns0:ref>) determines the sum of the distances of the CMs from their respective CHs:</ns0:p><ns0:formula>Δ = Σ_{i=1}^{Ψ} Σ_{CM_k ∈ CH_i} d(CM_k, CH_i)<ns0:label>(21)</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:p>Eq. (22) gives Θ, the total distance between the CHs and their parent CHs:</ns0:p><ns0:formula>Θ = Σ_i Σ_{CH_j ∈ P_i} d(CH_i, CH_j)<ns0:label>(22)</ns0:label></ns0:formula><ns0:p>where P_i represents the set of parent CHs associated with CH_i. The level of a node is considered to find out its parent CH nodes. All the CHs in level 1 send packets to the destination directly, while the CHs in the remaining levels send packets to their parent CHs, which forward them in a multi-hop fashion towards the sink.</ns0:p><ns0:p>iii. Fitness Function: GA is generally suitable for solving maximisation problems. Since our aim is to minimise Φ, the problem is transformed into maximising the fitness value f_v. For each chromosome in the population, f_v is calculated as given by Eq. (23):</ns0:p><ns0:formula>f_v = 1 / (1 + Φ)<ns0:label>(23)</ns0:label></ns0:formula><ns0:p>iv. Selection: Selection picks chromosomes with higher f_v to join the mating pool and form a new population for the subsequent generations. The proposed method uses the roulette wheel selection method.</ns0:p><ns0:p>v. Crossover: The proposed GA-EMC scheme uses single-point crossover. A random value (0 to 1) and two chromosomes are selected for this operation. The crossover operation is performed only if the selected random value is less than the crossover probability p_c; otherwise, no crossover is done. If crossover is performed, an arbitrary crossover point is selected, and after the crossover point the two parent chromosomes exchange their genes to generate two child chromosomes. Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref> shows the crossover operation.</ns0:p><ns0:p>vi. Mutation: In bit-level mutation, a random value is chosen for every bit in a chromosome. If this random value is less than the mutation probability p_m, the bit is inverted; otherwise, the bit is kept as such.</ns0:p></ns0:div>
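<ns0:p>The sketch below ties the GA operations together in Python: binary chromosomes mark the CHs, the fitness is 1/(1 + objective) as in Eq. (23), parents are drawn by roulette-wheel selection, and single-point crossover and bit-flip mutation are applied. The objective used here, which penalises the number of CHs and rewards their residual energy, is a simplified stand-in for Eqs. (19)-(22), and all probabilities and sizes are assumptions.</ns0:p>
```python
import random

# Sketch of the GA machinery described above; the objective is a simplified
# stand-in for Eqs. (19)-(22), and all sizes/probabilities are assumptions.
def fitness(chrom, residual_energy):
    n_ch = sum(chrom)
    if n_ch == 0:
        return 0.0
    energy = sum(e for bit, e in zip(chrom, residual_energy) if bit)
    objective = n_ch / (energy + 1e-9)          # assumed simplified objective
    return 1.0 / (1.0 + objective)               # Eq. (23)

def roulette_select(pop, fits):
    total = sum(fits)
    r, acc = random.uniform(0, total), 0.0
    for chrom, f in zip(pop, fits):
        acc += f
        if acc >= r:
            return chrom
    return pop[-1]

def crossover(a, b, p_c=0.8):
    if random.random() < p_c:
        point = random.randint(1, len(a) - 1)    # single-point crossover
        return a[:point] + b[point:], b[:point] + a[point:]
    return a[:], b[:]

def mutate(chrom, p_m=0.01):
    return [1 - bit if random.random() < p_m else bit for bit in chrom]

random.seed(1)
n = 20
residual_energy = [random.uniform(0.2, 1.0) for _ in range(n)]
population = [[random.randint(0, 1) for _ in range(n)] for _ in range(10)]
for _ in range(30):                               # generations
    fits = [fitness(c, residual_energy) for c in population]
    new_pop = []
    while len(new_pop) < len(population):
        c1, c2 = crossover(roulette_select(population, fits), roulette_select(population, fits))
        new_pop += [mutate(c1), mutate(c2)]
    population = new_pop
best = max(population, key=lambda c: fitness(c, residual_energy))
print("CH indices:", [i for i, bit in enumerate(best) if bit])
```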
<ns0:div><ns0:p>As shown in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>, no mutation is performed in the first chromosome, whereas in the second chromosome 6 bits are mutated.</ns0:p><ns0:p>The selection, crossover, and mutation operations are repeated for the given number of generations, and the best chromosome is selected at the end of the last generation. In the best chromosome, if a gene value is 1 the corresponding node becomes a CH; otherwise, it becomes a CM.</ns0:p></ns0:div>
<ns0:div><ns0:head>GA-based Clustering Algorithm</ns0:head><ns0:p>The GA-based clustering algorithm listed with the figures summarises the steps involved in forming the clusters and selecting the optimal CHs.</ns0:p></ns0:div><ns0:div><ns0:head>Minimising End-to-End Delay with Packet Forwarding Mechanism</ns0:head><ns0:p>Let τ represent an upper limit on the delay, with T := {1, 2, ..., τ}. A mathematical model is designed to analyse the number of data packets sent and received between the CHs and CMs in a particular time slot. We use the binary variables λ_t = 1 if slot t is used; Y_i^{s,t} = 1 if node i is transmitting packet s ∈ S in slot t; Z_i^{s,t} = 1 if node i is receiving packet s ∈ S in slot t; and a_i^{s,t} = 1 if packet s is present at node i by slot t.</ns0:p><ns0:p>GA-EMC is specially formulated to minimise the delay required to send the packets from their sources to the destination, subject to the following constraints. The constraint λ_t ≤ λ_{t−1}, for all t ∈ T \ {1}, forces the used time slots to be consecutive after the first round. 
<ns0:div><ns0:head></ns0:head><ns0:p>packets at a particular time or nothing to be done. The constraint</ns0:p><ns0:formula xml:id='formula_35'>          t S s Y v i t s i v i t s i , ,<ns0:label>1 , 1 , ,</ns0:label></ns0:formula><ns0:p>ensures that, in time , the data is transmitted by one node and received by another node in t HWSNs. The constraint ensures that the node receives a packet</ns0:p><ns0:formula xml:id='formula_36'>        t S s V j Y t s j j v i t s i , ,<ns0:label>, , ) ( , j s</ns0:label></ns0:formula><ns0:p>in the current time. Inequality allows a node to communicate data</ns0:p><ns0:formula xml:id='formula_37'>S s V i Y T t t s i T i t s i          , ,<ns0:label>1 , 1 , ,</ns0:label></ns0:formula><ns0:p>packets during the time. The constraint is fully justified in</ns0:p><ns0:formula xml:id='formula_38'>          t S s Y v i t s i v i t s i , ,<ns0:label>1 , 1 , ,</ns0:label></ns0:formula><ns0:p>transmitting packets through multi-hop routing. The constraints and defines variable and set the Manuscript to be reviewed Computer Science conditions for the starting and ending of the dynamic TDMA scheduling. Finally, the constraint expresses the SINR state for sending data packets on link at a time</ns0:p><ns0:formula xml:id='formula_39'>T t s O V i S s Y a t T T s i t s i       )}, ( { \ , , 1 , , S s a a T s s D s s O    , 1<ns0:label>, 1 ) ( 1 )</ns0:label></ns0:formula><ns0:formula xml:id='formula_40'>T t S s V i a t s i t s i      , , , , , s   j i,</ns0:formula><ns0:p>. Subsequently, the SINR state is stable when agreeing to the case when all nodes</ns0:p><ns0:formula xml:id='formula_41'>t     1 , , t s j t s i Y</ns0:formula><ns0:p>besides only node is sending packets in a network. Although, node receives a data packets</ns0:p><ns0:formula xml:id='formula_42'>i j s</ns0:formula><ns0:p>from node in a time , then becomes equivalent to</ns0:p><ns0:formula xml:id='formula_43'>i t    t t          } , { \ } { \ ), } , ( ( ) , ( j i V i s S s t ks j k p j i p  (3)</ns0:formula><ns0:p>which accurately confirms that the SINR value is met. In Eqn. <ns0:ref type='bibr' target='#b32'>(24)</ns0:ref>, is valid</ns0:p><ns0:formula xml:id='formula_44'>   } \{ ), } , ( s S s t ks j k p</ns0:formula><ns0:p>since when then all the nodes besides are illegal to send in since</ns0:p><ns0:formula xml:id='formula_45'>1 ,   t s i i s t .           t S s Y v i t s i v i t s i , ,<ns0:label>1 , 1 , ,</ns0:label></ns0:formula><ns0:p>We observe that the packet forwarding mechanism is used to increase the transmissions in HWSNs. In GA-EMC, the packet forwarding mechanism increases the transmissions at a particular time, and more data packets are transmitted to the CMs through adjacent clusters. This is possible for increasing the use of packet forwarding and forward interference cancellation mechanisms among CMs in all clusters in the ensuing time, which is more cooperative for minimising delay in HWSNs.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>In this section, the performance of GA-EMC is analysed and compared with E-MDSP <ns0:ref type='bibr' target='#b44'>[35]</ns0:ref> and EEWC <ns0:ref type='bibr' target='#b77'>[66]</ns0:ref>. Simulations are performed using the network simulator NS-2 <ns0:ref type='bibr' target='#b106'>[93]</ns0:ref>. The simulated HWSN consists of 400 nodes in the simulation area. To evaluate the GA-EMC performance, we consider metrics such as network lifetime, throughput, network stability, the number of data packets sent to the sink, and the average energy consumption of the whole network. The various simulation parameters are presented in Table <ns0:ref type='table'>5</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 5. The parameters and values for simulation</ns0:head></ns0:div>
<ns0:div><ns0:head>Network Lifetime</ns0:head><ns0:p>To assess the HWSN lifetime, we consider the number of alive nodes in each round. Figure <ns0:ref type='figure' target='#fig_12'>3</ns0:ref> illustrates that the proposed GA-EMC scheme keeps more nodes alive in every round than EEWC and E-MDSP, so the proposed GA-EMC provides a better network lifetime than the existing schemes.</ns0:p></ns0:div>
<ns0:div><ns0:p>The proposed GA-EMC uses multi-hop communication for packet delivery to extend the network lifetime. In GA-EMC, the first node dies only after 1800 rounds, and the last node remains alive for 2100 rounds, whereas in the EEWC and E-MDSP schemes the first nodes die after 1000 and 1600 rounds, respectively. Figure <ns0:ref type='figure' target='#fig_12'>3</ns0:ref> shows that the proposed GA-EMC scheme prolongs the network lifetime and stability, and the last alive node can still respond to the network in this approach.</ns0:p></ns0:div>
<ns0:div><ns0:head>Throughput</ns0:head><ns0:p>For the HWSN, we analyse the number of data packets sent: each CH sends data packets to the sink, and the CMs send data packets to their respective CHs. As shown in Figure <ns0:ref type='figure' target='#fig_13'>4</ns0:ref>, EEWC performs poorly, with the least data packet communication. E-MDSP behaves better than EEWC but still performs worse than GA-EMC. GA-EMC significantly increases the number of data packets sent from the CHs and achieves better throughput than the other schemes.</ns0:p></ns0:div>
<ns0:div><ns0:head>Stability Period</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_14'>5</ns0:ref> illustrates the interval from the beginning of the network operation until the death of the first node in the HWSN. As shown in Figure <ns0:ref type='figure' target='#fig_14'>5</ns0:ref>, GA-EMC has a better stability period than the other schemes. The first node dies at 1800 rounds in the GA-EMC scheme, whereas the first node dies at nearly 1000 and 1600 rounds under the EEWC and E-MDSP approaches, respectively. Compared with the EEWC scheme, the stable operation of GA-EMC increases from 1000 to 2500 rounds, and compared with E-MDSP it increases from 1600 to 2500 rounds. So, GA-EMC provides a better stability duration and prolongs the network lifetime.</ns0:p></ns0:div><ns0:div><ns0:head>Minimising the End-to-End Delay</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_16'>6</ns0:ref> displays the analysis of the various approaches in terms of delay. It shows that EEWC incurs the largest delay of 0.04 s at 2500 rounds. E-MDSP achieves a lower delay than EEWC but still fails to outperform GA-EMC. The GA-EMC approach achieves a low delay of only 0.02 s at 2500 rounds.</ns0:p></ns0:div><ns0:div><ns0:head>The average energy consumption</ns0:head><ns0:p>Even though more packets are transmitted in the proposed protocol than in EEWC and E-MDSP, the average energy consumption up to a particular round is lower in the proposed GA-EMC, as shown in Figure <ns0:ref type='figure' target='#fig_17'>7</ns0:ref>. This energy saving comes from the use of multi-hop communication and from associating the CMs with the optimum CHs.</ns0:p></ns0:div>
<ns0:div><ns0:head>Impact of sink node location on the HWSNs lifetime and stability</ns0:head><ns0:p>Network stability is measured by the round in which the first node dies. To study the impact of the sink location on the network stability and lifetime, we consider three scenarios: in scenarios 1, 2, and 3, the sink is situated in the middle of the field, at the top-right corner, and outside the field, respectively. Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref> compares the round in which a given percentage of nodes have died for the different sink positions. On average over the different sink positions, the proposed protocol extends the round in which the last node dies by 30.94%. GA-EMC extends the network lifetime and provides better stability in all three cases, and it provides a more significant improvement when the sink is at the corner or outside the field, thanks to multi-hop routing.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_19'>8</ns0:ref> shows the round in which the first node dies in the three scenarios. As shown in Figure <ns0:ref type='figure' target='#fig_19'>8</ns0:ref>, the round in which the first node dies is postponed by 10.98%, 23.47% and 46.94% in scenarios 1, 2 and 3, respectively. This shows that the proposed protocol performs better for longer-distance transmission. Compared to EEWC and E-MDSP, GA-EMC provides a 27.13% improvement in the round in which the first node dies, averaged over the different sink positions.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The proposed GA-EMC scheme outperforms the existing methods, especially EEWC and E-MDSP, in almost all aspects. It keeps more nodes alive in every round and prolongs the network lifetime and stability. It also significantly increases the number of data packets sent from the CHs and achieves better throughput, provides a longer stability duration, achieves a lower delay, and reduces the average energy consumption up to a given round. GA-EMC extends the network lifetime and improves stability for all three sink-location scenarios; thanks to multi-hop routing, the improvement is largest when the sink is at the corner or outside the field, so the scheme performs better for longer-distance transmission.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In this paper, a GA-EMC scheme is presented for extending the lifetime and minimising the delay in HWSNs. In selecting the optimal CHs, the fitness value is calculated based on the cluster distances, the number of CHs, and their initial and residual energies. In inter-cluster routing, each CH selects as its next hop a CH with minimum distance, higher residual energy, fewer CMs, and more neighbours. The energy hole problem created by multi-hop routing is addressed by deploying more higher-energy supernodes in the areas closer to the sink. The mathematical model of energy consumption for clustering with multi-hop data transmission is explained. The experimental results show that GA-EMC prolongs the HWSN lifetime, minimises the delay, and maximises stability compared to EEWC and E-MDSP for various positions of the BS, primarily when the BS is situated in the network corner or outer area. The death of the first and last nodes is postponed by 27.13% and 30.94%, respectively, compared with EEWC and E-MDSP. In the future, the simulation can be repeated to study the impact of the number of nodes in HWSNs. Also, the performance of GA-EMC can be analysed in actual (not simulated) HWSNs in practical scenarios.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 1 (continued)</ns0:head><ns0:p>EADC <ns0:ref type='bibr' target='#b7'>[6]</ns0:ref>. Functionality: uses a competition range to construct clusters of even sizes. Advantages: achieves load balance among CHs. Disadvantages: uneven clustering strategy.</ns0:p><ns0:p>ERA <ns0:ref type='bibr' target='#b8'>[7]</ns0:ref>. Functionality: a clever strategy of CH selection using the residual energy of the CHs and the intra-cluster distance for cluster formation. Advantages: achieves constant message and linear time complexity. Disadvantages: high message complexity for building the backbone network of CHs.</ns0:p></ns0:div>
<ns0:div><ns0:p>S-MDSP [14]. Functionality: delay-minimisation scheduling for multi-hop networks. Advantages: minimises the end-to-end delay; the delay is significantly reduced by combining cooperative forwarding (CF) and forward interference cancellation (FIC).</ns0:p><ns0:p>E2HRC [28]. Functionality: messaging.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Binary coding representation of a chromosome</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>GA-based clustering algorithm</ns0:head><ns0:label /><ns0:figDesc>Begin. Choose binary coding to represent the chromosomes; set values for the population size, the crossover and mutation probabilities, and the maximum number of generations g_max; initialise the generation counter to 0; ... ; generate and send the TDMA schedule to the CMs.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Number of rounds Vs number of alive nodes</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Number of rounds Vs throughput</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Number of rounds Vs number of dead nodes</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Number of rounds Vs Delay</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7 displays the average energy consumption under variable simulation rounds.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Number of rounds Vs Average energy consumption</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_19'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Comparing GA-EMC with EEWC and E-MDSP based on network lifetime</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_20'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Multi-hop communication from CH to sink</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_21'><ns0:head /><ns0:label /><ns0:figDesc>Binary coding representation of a chromosomeThe first row represents sensor nodes and the second row represent their corresponding binary coding. The chromosome size is the same as the size of the network. In the chromosome set, value 1 and 0 represents the CH and CM, respectively.PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66432:1:2:NEW 30 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_22'><ns0:head /><ns0:label /><ns0:figDesc>Number of rounds Vs number of alive nodesThe X-axis represents the number of rounds, and Y-axis represents the number of alive sensor nodes. Green-line, red-line and blue-line represent proposed GA-EMC, E-MDSP and EEWC, respectively. PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66432:1:2:NEW 30 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_23'><ns0:head /><ns0:label /><ns0:figDesc>Number of rounds Vs ThroughputThe X-axis represents the number of rounds and Y-axis represents throughput. Green-line, red-line and blue-line represent proposed GA-EMC, E-MDSP and EEWC respectively. The EEWC performs poorly with less data packet communication. Similarly, the E-MDSP gives the best behaviour than EEWC and also provides poor performance than GA-EMC.PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66432:1:2:NEW 30 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_24'><ns0:head /><ns0:label /><ns0:figDesc>Number of rounds Vs number of dead nodesThe X-axis represents the number of rounds, and Y-axis represents the number of dead nodes. Green-line, red-line and blue-line represent proposed GA-EMC, E-MDSP and EEWC, respectively. It shows the regular time interval from the beginning of the network process until the death of the first node in HWSNs. The GA-EMC has a better stability period than the other schemes. The first dead node starts at 1800 rounds in the GA-EMC scheme, whereas the first dead node starts nearly 1000 and 1600 rounds under the EEWC, E-MDSP approaches. The stability duration of GA-EMC compared with the EEWC scheme increases from 1000 to 2500 rounds, and the E-MDSP increases from 1600 to 2500 rounds. So, GA-EMC provides better stability duration and prolongs the network lifetime.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_25'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Number of rounds Vs Delay</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_26'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Number of rounds Vs Average energy consumption</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_27'><ns0:head /><ns0:label /><ns0:figDesc>Comparing GA-EMC with EEWC and E-MDSP based on network lifetimeThe X-axis represents each of three scenarios: i.e., the first node died, the middle node died and the last node died. The Y-axis represents the rounds when the first node died in the three scenarios. Green-bar, red-bars and blue-bars represent proposed GA-EMC, E-MDSP and EEWC, respectively.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='34,42.52,70.87,525.00,348.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The list of symbols and notations in this paper</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>A single-point crossover</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Bit-level mutation</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Comparison of percentage of nodes that died for different sink node locations</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 1. The summarisation of Related Works</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Name of the Proposed Solutions</ns0:cell><ns0:cell>Functionality</ns0:cell><ns0:cell>Advantages</ns0:cell><ns0:cell>Disadvantages</ns0:cell></ns0:row><ns0:row><ns0:cell>HEED [4]</ns0:cell><ns0:cell>Cluster heads (CHs) are selected according to a hybrid of the node residual energy</ns0:cell><ns0:cell>Guarantees connectivity of clustered networks</ns0:cell><ns0:cell>Works only on a two-level hierarchy, not on multilevel hierarchies</ns0:cell></ns0:row><ns0:row><ns0:cell>CATD [5]</ns0:cell><ns0:cell>The cluster data transmission phase is improved after the CHs are selected</ns0:cell><ns0:cell>Reduces the network energy, network overhead, and cost</ns0:cell><ns0:cell>Hot-spot problems are created</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | " Reviewer Comments and Responses
Journal Name: PeerJ Computer Science
Title: A genetic algorithm-based energy-aware multi-hop clustering scheme for heterogeneous wireless sensor networks
We thank the reviewer for the careful and thorough reading of this manuscript and the thoughtful comments and constructive suggestions, which help improve the quality of this manuscript. Our response follows.
Reviewer #1:
Comment 1: MAC protocol has an important impact on network performance, and it is also an important research content of clustering networks. Therefore, I strongly suggest that the author discuss the recent MAC protocol in the relevant work. For example A parallel joint optimised relay selection protocol for wake-up radio enabled WSNs,' Physical Communication, vol. 47, 101320, august 2021.
Response: We appreciate your suggestions, and we have discussed the recent MAC protocol in the relevant work in the revised manuscript and cited references 82 and 83 in the Reference Section.
Comment 2: Some problems have been addressed by the authors, the reviewer strongly suggests that the theoretical analysis of the system performance should be added to improve the quality of this paper. The author can found such work in: 'Theoretical analysis of the lifetime and energy hole in cluster based Wireless Sensor Networks [J], Journal of Parallel and Distributed Computing, 2011,71(10):1327-1355.'
Response: Thank you for your thorough review and salient observations. Following your suggestions, the theoretical analysis of the system performance has been added to the revised manuscript to improve the paper's quality.
Comment 3: Authors are suggested to review more new and relevant research to support their research contribution. Many references in this paper are the work of more than 10 years ago.
Response: We again appreciate your suggestions, and the following relevant research papers are included in the revised manuscript.
1. Sant A, Garg L, Xuereb PA, Chakraborty C (2021) A Novel Green IoT-based Pay-As-You-Go Smart Parking System. Computers, Materials & Continua, 67(3): 3523-3544. https://doi.org/10.32604/cmc.2021.015265.
2. Andrushia AD, Paul JJ, Sagayam KM, Grace SR, Garg L (2021). Spy-Bot: Controlling and Monitoring a Wi-Fi-Controlled Surveillance Robotic Car using Windows 10 IoT and Cloud Computing. In Blockchain Technology for Data Privacy Management 2021 Mar 21 (pp. 37-59). CRC Press.
3. Chukwu E, Garg L, Zahra R (2021) Internet of Health Things: Opportunities and Challenges. In: Chakraborty U, Banerjee A, Saha JK, Sarkar N, Chakraborty C (eds.) Artificial intelligence and the fourth industrial revolution. Jenny Stanford Publishing.
4. Salankar N, Mishra P, Garg L (2021) Emotion recognition from EEG signals using empirical mode decomposition and second-order difference plot, Biomedical Signal Processing and Control, 65, Article 102389. https://doi.org/10.1016/j.bspc.2020.102389.
5. Jayalakshmi M, Garg L, Maharajan K, Srinivasan K, Jayakumar K, Bashir AK, Ramesh K (2021) Fuzzy Logic-based Health Monitoring System for COVID'19 Patients. Computers, Materials & Continua, 67(2) :2431-2447, DOI: https://doi.org/10.32604/cmc.2021.015352.
6. Bhattacharyya A, Tripathy RK, Garg L, Pachori RB (2021) A Novel Multivariate-Multiscale Approach for Computing EEG Spectral and Temporal Complexity for Human Emotion Recognition, IEEE Sensors Journal, https://doi.org/10.1109/JSEN.2020.3027181.
7. V. Chauhan, S. Soni, 'Energy aware unequal clustering algorithm with multi-hop routing via low degree relay nodes for wireless sensor networks,' J Ambient Intell Human Comput, vol. 12, 2469–2482 (2021). https://doi.org/10.1007/s12652-020-02385-1.
8. A. S. Toor and A. K. Jain, 'Energy Aware Cluster Based Multi-hop Energy Efficient Routing Protocol using Multiple Mobile Nodes (MEACBM) in Wireless Sensor Networks,' AEU - Int. J. Electron. Commun., vol. 102, pp. 41–53, 2019, https://doi.org/10.1016/j.aeue.2019.02.006.
9. A. A. Kamil, M. K. Naji and H. A. Turki, 'Design and implementation of grid based clustering in WSN using dynamic sink node,' Bulletin of Electrical Engineering and Informatics, vol. 9, no. 5, pp. 2055–2064, 2020.
10. G. Prabaharan, S. Jayashri, 'Mobile cluster head selection using soft computing technique in wireless sensor network,' Soft Comput. vol. 23, pp. 8525–8538, 2019. https://doi.org/10.1007/s00500-019-04133-w.
11. D. Kalaimani, Z. Zah and S. Vashist, 'Energy-efficient density-based fuzzy c-means clustering in WSN for smart grids,' Australian Journal of Multi-Disciplinary Engineering, vol. 4, no. 5, pp. 1–16, 2020.
12. P. Rajpoot, P. Dwivedi, 'Multiple Parameter Based Energy Balanced and Optimised Clustering for WSN to Enhance the Lifetime Using MADM Approaches,' Wireless Pers Commun, vol. 106, pp. 829–877, 2019, https://doi.org/10.1007/s11277-019-06192-6.
13. P. K. Kashyap, S. Kumar, U. Dohare, V. Kumar and R. Kharel, 'Green computing in sensors-enabled internet of things: Neuro fuzzy logic-based load balancing,' Electronics, vol. 8, no. 4, pp. 1–22, 2019.
Comment 4: I suggest that the author set up real experiments to test the performance of the proposed protocol.
Response: Thank you for your comment. The performance of the proposed protocol was evaluated using various performance metrics, such as network lifetime, throughput, stability period, end-to-end delay, average energy consumption, and the impact of the sink node location on the HWSNs' lifetime and stability.
The various parameters for real experiments are presented in Table 5.
Table 5. The parameters and values for real experiments
Parameters reported in the table include the network area, network size, initial energy of a normal node, packet size, population size, and maximum generations, together with their values.
Comment 5: The formulas in the text is incorrectly formatted which should be align with the words instead of being upper than the words.
Response: We again appreciate your suggestion. The formulas in the text are now correctly formatted and aligned with the surrounding words in the revised manuscript.
Comment 6: In fact, the work of this paper has been studied in many previous works. I also believe that the work done by the author is effective. However, the innovation of the paper is not strong, and almost all the work done in the paper can be found in the previous work. However, the author gives a PSO algorithm in detail, which can be used as a reference for relevant work. Therefore, it is suggested to give the author a chance to modify it to improve the quality of the paper.
Response: We again appreciate your suggestions, and the proposed algorithm has been modified in the revised manuscript as per the reviewer's suggestion to improve the quality of the paper.
Reviewer #2:
Comment 1: Energy-efficient hierarchical clustering on IoT/WSNs is a very well-studied area with a lot of previous studies and over saturated. Therefore, there needs to be a very strong motivation and justification for the proposed approach not to repeat the previous contributions.
Response: Thank you very much indeed for your comments. As per your suggestion to provide a strong motivation and justification for the proposed approach, we have now added the following text as the second paragraph of the Introduction section:
'Many research proposals exist in the related works addressing the energy-efficient hierarchical clustering issues, but node heterogeneity of WSNs nodes has not been exploited to its full potential. Energy efficiency is the prime factor for attaining the long life of WSN systems, which are resource-constrained, especially in energy. As multi-hop clustering algorithms are associated with the communication activities of the network, the energy-aware clustering algorithms become a critical factor in WSNs. The heterogeneity of WSN nodes has been broadly classified into three major categories, viz. energy, computation, and link. The heterogeneity of data generation rate is another crucial aspect, which considers nodes with heterogeneous data transmission requirements. So there is a necessity to categorise sensor nodes based on different performance evaluation metrics. Motivated by the above facts, in this paper, we provide a genetic algorithm-based energy-aware multi-hop clustering scheme for heterogeneous WSNs.'
Comment 2: Given that there are tons of works in WSNs research, the related work should be written in such a way that the reader can see the differences of works in a Table and grasp the main assumptions/ disadvantages of other approaches. In literature review, it is very important to summarise the advantages and disadvantages of existing work.
Response: We profoundly appreciate your suggestion, and we have summarised the advantages and disadvantages of the existing work in the Related Works section. Table 1, reproduced below, summarises the related works.
Name of the Proposed Solutions | Functionality | Advantages | Disadvantages
HEED [4] | Cluster heads (CHs) are selected according to a hybrid of the node residual energy | Guarantees connectivity of clustered networks | Works only on a two-level hierarchy, not on multilevel hierarchies
CATD [5] | The cluster data transmission phase is improved after the CHs are selected | Reduces the network energy, network overhead, and cost | Hot-spot problems are created
EADC [6] | Uses a competition range to construct clusters of even sizes | Achieves load balance among CHs | Uneven clustering strategy
ERA [7] | Clever CH selection strategy using the residual energy of the CHs and the intra-cluster distance for cluster formation | Achieves constant message and linear time complexity | High message complexity for building the backbone network of CHs
S-MDSP [14] | Delay-minimisation scheduling for multi-hop networks | Minimises the end-to-end delay | Combining cooperative forwarding (CF) and forward interference cancellation (FIC) together, the delay is significantly reduced
E2HRC [28] | Messaging structure for clustering and routing | Balances average energy consumption and network load and improves network performance | Delay is incurred
EDB-CHS-BOF [30] | A tight closed-form expression for the optimal number of CHs in the network | Balances energy consumption amongst all sensor nodes and prolongs the network lifetime | More sensitive to any changes in the network size
EDDEEC [31] | Probabilities for CH selection based on the initial and remaining energy levels of the nodes and the average energy of the network | Achieves a longer network lifetime and stability period | Dynamic random channel selection
FABC [34] | Optimally selects the cluster head based on the fitness function value of the nodes | Maximises the network energy and the lifetime of nodes | -
iABC [35] | Obtains optimal cluster heads | Improves energy efficiency in WSNs | -
MOPSO [39] | Optimises the number of clusters in an ad hoc network as well as the energy dissipation in nodes | Provides an energy-efficient solution and reduces the network traffic | -
EEWC [45] | Solves the clustering problem in a wireless sensor network | Improves the performance of WSNs | -
Comment 3: This work uses GA to optimise the energy consumption in WSNs, which have been deeply studied so far. Hence, the analysis in the paper on WSNs with GA is necessary, the following article is also appropriate choices:
1. X.-Y. Zhang, J. Zhang, Y.-J. Gong, Z.-H. Zhan, W.-N. Chen, and Y. Li, 'Kuhn–Munkres parallel genetic algorithm for the set cover problem and its application to large-scale wireless sensor networks,' IEEE Trans. Evol. Comput., vol. 20, no. 5, pp. 695–710, Oct. 2016.
2. Y. Chang, X. Yuan, B. Li, D. Niyato, N. Al-Dhahir, 'A Joint Unsupervised Learning and Genetic Algorithm Approach for Topology Control in Energy-Efficient Ultra-Dense Wireless Sensor Networks', Communications Letters IEEE, vol. 22, no. 11, pp. 2370-2373, 2018.
3. Y. Chang, X. Yuan, B. Li, D. Niyato, and N. Al-Dhahir, Machine learning-based parallel genetic algorithms for multi-objective optimisation in ultra-reliable low-latency WSNs, IEEE Access, vol. 7, pp. 4913–4926, 2019.
Response: The above articles have been discussed in the Related Works section to strengthen the treatment of GA-based approaches in WSNs, and they are cited as [89], [90], and [91].
Comment 4: There can be the situation that some CHs are overloaded as they may have many nearby CMs. It might be good to state how this issue can be avoided. In addition, how to formulate the fitness in the optimisation process?
Response: The main idea of the proposed GA-EMC scheme is to optimise the energy management of the WSN by minimising the intra-cluster distance between a CH and its CMs. The distance between a CM and a CH is computed using the Euclidean distance, and each CM joins the cluster whose CH is at minimal distance. If the distance between a sensor node and the sink is smaller than the distance to its nearest CH, the node communicates with the sink directly. When a node joins a cluster, it sends a JOIN message that lets the other nodes and the CH know about its presence. The CH assigns a time slot for collecting data from each node; once the data have been collected, they are aggregated by the CH and forwarded to the sink. The member nodes may sleep during this process, but the CH has to remain active continuously. This depletes the CH's energy, and over time a few nodes die, leaving a sparser network. Therefore, the clusters are restructured and the CHs are re-selected in each cycle.
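For illustration, the sketch below mirrors the association step described above: each member node joins the CH at minimal Euclidean distance, or reports directly to the sink when the sink is closer than every CH. The coordinate dictionaries, function names, and example values are assumptions made for this sketch and are not taken from the manuscript.

import math

def euclidean(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def associate_members(nodes, cluster_heads, sink):
    # Returns node_id -> 'sink' (direct communication) or the id of the chosen CH.
    assignment = {}
    for node_id, pos in nodes.items():
        ch_id, ch_dist = min(((cid, euclidean(pos, cpos))
                              for cid, cpos in cluster_heads.items()),
                             key=lambda item: item[1])
        if euclidean(pos, sink) < ch_dist:
            assignment[node_id] = 'sink'    # closer to the sink than to any CH
        else:
            assignment[node_id] = ch_id     # the node would send a JOIN message to this CH
    return assignment

# Example usage on a small field with two CHs and the sink at a corner.
nodes = {'n1': (10.0, 20.0), 'n2': (55.0, 60.0), 'n3': (95.0, 96.0)}
chs = {'ch1': (15.0, 25.0), 'ch2': (60.0, 58.0)}
print(associate_members(nodes, chs, sink=(100.0, 100.0)))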
The proposed GA-EMC scheme applies a fitness function that reduces the intra-cluster distance from the sensor nodes to their cluster head (CH). The function optimises the location of the CH, which influences the expected number of packet retransmissions along the path and thereby the overall energy consumption in the network. The proposed approach therefore performs better in terms of energy consumption, because the GA-EMC fitness function considers the sink distance, the intra-cluster distance, and the residual energy of the CMs, reducing the distance between the CMs and their CHs to find their optimal locations.
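A minimal sketch of such a fitness evaluation is given below. It only mirrors the three ingredients named above (intra-cluster distance, distance to the sink, and residual energy of the CMs); the weights w1-w3, the reciprocal-energy term, and all names are illustrative assumptions rather than the exact formulation used in the manuscript.

import math

def fitness(assignment, positions, residual_energy, cluster_heads, sink,
            w1=0.4, w2=0.3, w3=0.3):
    # Total distance from each member to the CH it is assigned to.
    intra = sum(math.dist(positions[n], positions[ch])
                for n, ch in assignment.items() if ch != 'sink')
    # Total distance from the CHs to the sink.
    to_sink = sum(math.dist(positions[ch], sink) for ch in cluster_heads)
    # Remaining energy of the members; more residual energy lowers the score.
    energy = sum(residual_energy[n] for n in assignment)
    # Lower is better: compact clusters, CHs close to the sink, energy-rich members.
    return w1 * intra + w2 * to_sink + w3 / (energy + 1e-9)

# Tiny example: two members assigned to one CH.
positions = {'n1': (10.0, 20.0), 'n2': (18.0, 24.0), 'ch1': (15.0, 25.0)}
assignment = {'n1': 'ch1', 'n2': 'ch1'}
print(fitness(assignment, positions, {'n1': 0.50, 'n2': 0.45}, ['ch1'], sink=(50.0, 50.0)))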
Reviewer #3:
Comment 1: The introduction contains mostly older references. Hence, the authors are suggested to include a few recent research in this section.
Response: Thanks for your advice, and we have now included more recent research papers in the revised manuscript.
Comment 2: The authors could present the existing research regarding multi-hop clustering schemes in WSNs according to the timeline so that a thorough gap analysis could be made.
Response: We appreciate your suggestion. The existing multi-hop clustering schemes in WSNs are presented in the Related Works section, and we have evaluated and analysed the research gap thoroughly.
Comment 3: The authors could list the frequently used symbols with their meaning in a table for convenience.
Response: Various symbols and notations used in the proposed work are mentioned in Table 2.
Table 2. The list of symbols and notations in this paper
Symbol | Description
G | Bi-directed graph
V | Network size
E | Two-way communication links
C | Direct links
M | Data packet set
d | Distance between the nodes
The table also defines symbols for the noise power, the SINR value, the energy at a node, the energy dissipated in the source and sink, the space and multipath fading coefficients, the size of optimal CHs, the distance among CHs and CMs, and the distance between two CHs.
Comment 4: The authors are suggested to go through the entire manuscript thoroughly to correct grammatical mistakes.
Response: We appreciate your comments. We have corrected the grammatical errors throughout the paper, and the manuscript has also been proofread.
We hope we have incorporated all the corrections in the revised manuscript as per the reviewers' comments. Kindly consider the revised version of the manuscript for acceptance. We look forward to receiving your positive response.
We sincerely thank the reviewers for their kind concern for improving the technical quality of the manuscript. Once again, thank you for the valuable comments and suggestions, which guided the preparation of the revised version of our manuscript. We hope the revised manuscript will meet your expectations.
Best regards,
R Muthukkumar, Lalit Garg, K Maharajan, M Jayalakshmi, NZ Jhanjhi, S Parthiban, G.Saritha
" | Here is a paper. Please give your review comments after reading it. |
704 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. The energy-constrained heterogeneous nodes are the most challenging wireless sensor networks (WSNs) for developing energy-aware clustering schemes.</ns0:p><ns0:p>Although various clustering approaches are proven to minimise energy consumption and delay and extend the network lifetime by selecting optimum cluster heads (CHs), it is still a crucial challenge. Methods. This paper proposes a genetic algorithm-based energy-aware multi-hop clustering (GA-EMC) scheme for heterogeneous WSNs (HWSNs). In HWSNs, all the nodes have varying initial energy and typically have an energy consumption restriction. A genetic algorithm determines the optimal CHs and their positions in the network. The fitness of chromosomes is calculated in terms of distance, optimal CHs, and the node's residual energy. Multi-hop communication improves energy efficiency in HWSNs. The areas near the sink are deployed with more supernodes far away from the sink to solve the hot spot problem in WSNs near the sink node. Results. Simulation results proclaim that the GA-EMC scheme achieves a more extended network lifetime network stability and minimises delay than existing approaches in heterogeneous nature.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>The latest technology development in wireless communication, sensing devices, and microelectronics have opened new frontiers in wireless sensor networks (WSNs). Critical WSNs applications include environmental monitoring <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref><ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>, transport <ns0:ref type='bibr' target='#b3'>[3]</ns0:ref><ns0:ref type='bibr' target='#b6'>[4]</ns0:ref><ns0:ref type='bibr' target='#b7'>[5]</ns0:ref>, surveillance systems <ns0:ref type='bibr' target='#b8'>[6]</ns0:ref><ns0:ref type='bibr' target='#b9'>[7]</ns0:ref>, healthcare <ns0:ref type='bibr' target='#b11'>[8]</ns0:ref><ns0:ref type='bibr' target='#b12'>[9]</ns0:ref><ns0:ref type='bibr' target='#b13'>[10]</ns0:ref><ns0:ref type='bibr' target='#b14'>[11]</ns0:ref><ns0:ref type='bibr' target='#b15'>[12]</ns0:ref><ns0:ref type='bibr' target='#b16'>[13]</ns0:ref><ns0:ref type='bibr' target='#b18'>[14]</ns0:ref><ns0:ref type='bibr' target='#b20'>[15]</ns0:ref><ns0:ref type='bibr' target='#b21'>[16]</ns0:ref>, emotions recognition and monitoring <ns0:ref type='bibr' target='#b22'>[17]</ns0:ref><ns0:ref type='bibr' target='#b23'>[18]</ns0:ref><ns0:ref type='bibr' target='#b24'>[19]</ns0:ref><ns0:ref type='bibr' target='#b25'>[20]</ns0:ref>, home automation <ns0:ref type='bibr' target='#b27'>[21]</ns0:ref>, battlefield monitoring, and industrial automation and control <ns0:ref type='bibr' target='#b28'>[22]</ns0:ref>. WSNs contain more sensor nodes capable of sensing the physical phenomenon, packet forwarding, and communicating the packets to the destination. However, each sensor node has limitations, such as limited memory, restricted processing capability, short-range transmission, finite energy resources, and low storage capability. WSNs are becoming a good network for protecting, controlling, and facilitating real-time applications <ns0:ref type='bibr' target='#b29'>[23]</ns0:ref>. The primary constraint of the WSNs is the limited, non-rechargeable battery-powered sensor nodes, and these nodes have used their energy for sensing, sending, and receiving the data. When the sensor battery is drained, several areas in the sensor field will lack coverage, and valuable data from these areas will not reach the sink. Using energy among the nodes and prolonging the network's lifetime are considered the primary challenge for HWSNs.</ns0:p><ns0:p>selecting the CHs and the next-hop nodes for multi-hop routing, energy spent by the sensor nodes can be reduced. <ns0:ref type='bibr' target='#b38'>[31]</ns0:ref> analyses the impact of heterogeneity in WSNs, energy level, and hierarchical cluster structures. Smaragdakis et al. <ns0:ref type='bibr' target='#b28'>[22]</ns0:ref> proposed a protocol that prolongs the stability period of sensor nodes in heterogeneous WSNs (HWSNs). In <ns0:ref type='bibr' target='#b29'>[23]</ns0:ref>, <ns0:ref type='bibr' target='#b41'>[34]</ns0:ref>, average network energy and the nodes' residual energy select the optimal CHs. In <ns0:ref type='bibr' target='#b42'>[35]</ns0:ref>, the proposed algorithm minimises delay based on signal-to-interference-and-noise-ratio (SINR) in WSNs. <ns0:ref type='bibr' target='#b43'>[36]</ns0:ref> finds optimal cluster sizes based on the hop count to the sink node. It is also used to extend the network lifetime and minimise energy consumption. 
Several heterogeneous routing protocols in WSNs <ns0:ref type='bibr' target='#b44'>[37]</ns0:ref> are reviewed and analysed with performance metrics. The algorithm in <ns0:ref type='bibr' target='#b46'>[38]</ns0:ref> organises the nodes into several clusters in WSNs and generates a hierarchy of CHs. A genetic algorithm (GA) is a metaheuristic algorithm used to solve optimisation problems <ns0:ref type='bibr' target='#b47'>[39]</ns0:ref><ns0:ref type='bibr' target='#b48'>[40]</ns0:ref>. GA is an appropriate scheme for solving any clustering problems in WSNs. It is also used to resolve persistent optimisation problems <ns0:ref type='bibr' target='#b49'>[41]</ns0:ref>. In this paper, HWSNs use GA for solving the multi-hop clustering based on the newly defined fitness function <ns0:ref type='bibr' target='#b50'>[42]</ns0:ref>.</ns0:p><ns0:p>Existing solutions have the advantage of cluster formation done through the residual energy and prolonging the lifetime of WSNs. However, re-clustering consumes more energy while the end-to-end delay is not minimised. This motivates us to devise an approach for designing energy-aware multi-hop clustering for HWSNs. WSNs with heterogeneous nodes result in better network stability and extend the network lifetime. Energy consumption has been minimised using GA by selecting the optimal CHs during the re-clustering. The main contributions of this paper are specified as follows:</ns0:p><ns0:p> A GA-based energy-aware multi-hop clustering algorithm (GA-EMC) is proposed for selecting the optimal number of CHs dynamically during the re-clustering.</ns0:p><ns0:p> A framework for optimised transmission scheduling and routing is formulated to reduce the delay under the SINR model for HWSNs.</ns0:p><ns0:p> A combination of weak and robust sensor nodes using their residual energy mitigates the re-clustering issues.</ns0:p><ns0:p> For optimising cluster construction, the GA maintains the stability of the nodes in a network.</ns0:p><ns0:p> A dynamic power allocation scheme for sensor nodes is proposed to have a guaranteed QoS for nodes.</ns0:p><ns0:p>The structure of this paper starts with the introduction related to the wireless sensor network, genetic algorithm, and multi-hop clustering paradigms. The following section describes the existing multi-hop clustering algorithms and their issues. In the next section, we present the GA-EMC algorithm, followed by the section which addresses the experimental results and analyses the performance of GA-EMC. The following section is a discussion, and finally, the last section discusses the conclusion.</ns0:p></ns0:div>
<ns0:div><ns0:head>Related Works</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66432:2:0:NEW 6 Jun 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>This section presents the various modern and advanced multi-hop clustering schemes in WSNs. Many researchers have done some work in multi-hop clustering algorithms based on GA, and an overview of that work is given here. Heinzelman et al. <ns0:ref type='bibr' target='#b41'>[34]</ns0:ref> developed the probability-based CH selection and decreased the average CHs energy consumption. In <ns0:ref type='bibr' target='#b42'>[35]</ns0:ref><ns0:ref type='bibr' target='#b43'>[36]</ns0:ref>, the spatial distribution of CHs in WSNs by constructing a multi-hop table. It is also used to decrease the CHs when directly transmitted to the sink or base station (BS). Yu et al. <ns0:ref type='bibr' target='#b44'>[37]</ns0:ref> 's algorithm selects the CHs with higher residual energy and achieves better load-balancing among CHs. In <ns0:ref type='bibr' target='#b46'>[38]</ns0:ref>, CHs energy consumption was minimised during the data routing process and achieved better time complexity. Cheng et al. <ns0:ref type='bibr' target='#b47'>[39]</ns0:ref> 's protocol satisfies the QoS requirements in WSNs, and <ns0:ref type='bibr' target='#b37'>[30]</ns0:ref> addresses the cluster formation and CH selection using weight metrics in HWSNs.</ns0:p><ns0:p>In general, sensor networks can be heterogeneous regarding the initial energy, computational ability of the WSN nodes, and the bandwidth of the links <ns0:ref type='bibr' target='#b38'>[31]</ns0:ref>. Designing WSNs with heterogeneous nodes increases the reliability and network lifetime. Computational and link heterogeneity reduces the latency in data transmission <ns0:ref type='bibr' target='#b28'>[22]</ns0:ref><ns0:ref type='bibr' target='#b29'>[23]</ns0:ref>. Various parameters are used to classify the nodes in HWSNs <ns0:ref type='bibr' target='#b41'>[34]</ns0:ref>. <ns0:ref type='bibr' target='#b42'>[35]</ns0:ref> studied transmission scheduling and multi-hop routing to minimise delay using SINR. The initial energy varies according to the node's distance from the sink to overcome the energy hole problem in multi-hop networks <ns0:ref type='bibr' target='#b43'>[36]</ns0:ref>. <ns0:ref type='bibr' target='#b44'>[37]</ns0:ref> categorises several heterogeneous routing protocols with predefined parameters by enhancing network lifetime and node heterogeneity in WSNs.</ns0:p><ns0:p>GA has been used for the CHs' optimal selection in recent research. The main focus of the GA-based clustering algorithms is the fitness function. The fitness function determines the goodness of an individual to be selected for the next generation <ns0:ref type='bibr' target='#b46'>[38]</ns0:ref>. <ns0:ref type='bibr' target='#b47'>[39]</ns0:ref> critically analysed the energy-efficient routing protocols for WSNs. The method in <ns0:ref type='bibr' target='#b48'>[40]</ns0:ref> is based on biogeographybased optimisation in HWSNs. The fitness value is modified further by incorporating the residual energy of the remaining nodes that enhances the performance. It prolongs the network lifetime <ns0:ref type='bibr' target='#b49'>[41]</ns0:ref>. Meta-heuristics techniques are widely applied to solve several clustering problems in WSNs <ns0:ref type='bibr' target='#b50'>[42]</ns0:ref><ns0:ref type='bibr' target='#b51'>[43]</ns0:ref>. <ns0:ref type='bibr' target='#b52'>[44]</ns0:ref> reviewed the various protocols and their properties in WSNs. 
<ns0:ref type='bibr' target='#b53'>[45]</ns0:ref> investigated and presented more clustering approaches.</ns0:p><ns0:p>Younis et al. <ns0:ref type='bibr' target='#b54'>[46]</ns0:ref> 's approach formulates clusters and considers the relay nodes as CHs in two-tiered sensor networks that prolong the relay node lifetime. The method in <ns0:ref type='bibr' target='#b55'>[47]</ns0:ref> extends the network lifetime dynamic route selection and reduces energy consumption. <ns0:ref type='bibr' target='#b56'>[48]</ns0:ref> critically investigated and addressed the power-conserving issues in WSNs, and the algorithm in <ns0:ref type='bibr' target='#b57'>[49]</ns0:ref> solves the energy balance problem in WSNs. Gupta and Pandey <ns0:ref type='bibr' target='#b58'>[50]</ns0:ref> have considered the location of BS and residual energy as clustering parameters to solve an energy hole problem in HWSNs. Darabkh et al. <ns0:ref type='bibr' target='#b60'>[51]</ns0:ref> 's scheme minimises the average energy consumption and prolongs the lifetime of WSNs. Javid et al. <ns0:ref type='bibr' target='#b61'>[52]</ns0:ref>'s technique for HWSNs dynamically elects the CH. It extends the network lifetime. <ns0:ref type='bibr' target='#b62'>[53]</ns0:ref> analyses the heterogeneous node locations and selects optimal CH based on the distance between the clusters. The algorithm in <ns0:ref type='bibr' target='#b63'>[54]</ns0:ref> improves the energy and lifetime of both nodes and networks by choosing the optimal CHs. <ns0:ref type='bibr' target='#b64'>[55]</ns0:ref><ns0:ref type='bibr' target='#b65'>[56]</ns0:ref> maximise the network energy and extend nodes' network lifetime by selecting optimal CH in WSNs. Fan <ns0:ref type='bibr' target='#b66'>[57]</ns0:ref> 's method investigates several issues such as energy consumption, coverage, and data routing in WSNs. This method improves the coverage ratio and prolongs network lifetime. Javaid et al. <ns0:ref type='bibr' target='#b67'>[58]</ns0:ref> 's scheme increases the node stability period and sends more packets to BS. <ns0:ref type='bibr' target='#b68'>[59]</ns0:ref> designs an energyaware cluster by selecting optimal CHs in WSNs.</ns0:p><ns0:p>Ali et al. <ns0:ref type='bibr' target='#b69'>[60]</ns0:ref> 's algorithm optimises the clusters in a network and minimises the data traffic and energy dissipation among nodes. <ns0:ref type='bibr' target='#b70'>[61]</ns0:ref> continuously monitors patients' data by selecting an optimal path in the body area network. It also enhances network lifetime, load balancing, and energy on the overall network. Pal et al. <ns0:ref type='bibr' target='#b71'>[62]</ns0:ref> 's method achieves a load-balanced network. It prolongs the lifetime of WSNs by optimising CH selection approach <ns0:ref type='bibr' target='#b72'>[63]</ns0:ref> that reduces the distance between the CH and CMs in WSNs to improve energy conservation. Lin et al. <ns0:ref type='bibr' target='#b73'>[64]</ns0:ref> 's approach maximises the lifetime of heterogeneous nodes based on sensing coverage and network connectivity. The approach in <ns0:ref type='bibr' target='#b74'>[65]</ns0:ref> selects energy-aware clusters and optimal CH based on hop count and locations. <ns0:ref type='bibr' target='#b75'>[66]</ns0:ref> considers the GA parameters for enhancing the CH performance in WSNs. <ns0:ref type='bibr' target='#b76'>[67]</ns0:ref> 's approach investigates the cluster formation that reduces energy consumption. Haseeb et al. 
<ns0:ref type='bibr' target='#b77'>[68]</ns0:ref> 's method increases energy efficiency and data security against malicious activities. [69-70] 's algorithms prolong the network lifetime by selecting the optimal CHs and reducing average energy consumption.</ns0:p><ns0:p>Delavar and Baradaran's <ns0:ref type='bibr' target='#b80'>[71]</ns0:ref> algorithm reduced energy consumption by selecting chromosomes in different states. <ns0:ref type='bibr' target='#b81'>[72]</ns0:ref><ns0:ref type='bibr' target='#b82'>[73]</ns0:ref><ns0:ref type='bibr' target='#b84'>[74]</ns0:ref><ns0:ref type='bibr' target='#b85'>[75]</ns0:ref> studied the optimal selection of clusters to extend the WSNs' lifetime. <ns0:ref type='bibr' target='#b86'>[76]</ns0:ref> analyses the spatial distribution of heterogeneous nodes in WSNs and effectively avoids the energy hole problem. <ns0:ref type='bibr' target='#b87'>[77]</ns0:ref> 's algorithm enhances the reported sensitivity of the nodes and optimises the solution quality in HWSNs management. <ns0:ref type='bibr' target='#b88'>[78]</ns0:ref> compares the various evolutionary algorithms with network lifetime, node stability period, and energy efficiency. <ns0:ref type='bibr' target='#b89'>[79]</ns0:ref> 's method optimises heterogeneous sensor node clustering. It dramatically extends the network lifetime. Huang et al. <ns0:ref type='bibr' target='#b90'>[80]</ns0:ref> 's method was used to minimise the delay and collision and reduce the energy consumption in WSNs. The protocol in <ns0:ref type='bibr' target='#b91'>[81]</ns0:ref> improves energy utilisation and minimises the delay in sensor networks. <ns0:ref type='bibr' target='#b92'>[82]</ns0:ref> minimises the energy holes and prolongs the network's lifetime in WSNs. In this approach, the network is divided into unequal clusters, and it considers the node's residual energy and distance to the base station for cluster formation in WSNs. Nodes with the highest energy are considered CH for WSNs <ns0:ref type='bibr' target='#b93'>[83]</ns0:ref>. Kamil et al. <ns0:ref type='bibr' target='#b94'>[84]</ns0:ref>'s techniques change the WSNs' sink node position dynamically to increase residual energy and prolong the network lifetime.</ns0:p><ns0:p>The method in <ns0:ref type='bibr' target='#b96'>[85]</ns0:ref> selects the smart CH in WSNs to prolong the network lifetime. Optimal CH selection is performed for extending the network lifetime of WSN by using various attributes of sensor nodes <ns0:ref type='bibr' target='#b97'>[86]</ns0:ref><ns0:ref type='bibr' target='#b98'>[87]</ns0:ref>. Kashyap et al. <ns0:ref type='bibr' target='#b99'>[88]</ns0:ref>'s algorithm performs load-balancing among sensor nodes for WSNs. Also, it balances the optimal number of CHs and evenly distributes the load among nodes. Zhang et al. <ns0:ref type='bibr' target='#b100'>[89]</ns0:ref>'s technique performs that the sensors are scheduled into several disjoint complete cover sets in WSNs and activates them in batch for energy conservation. The algorithm in <ns0:ref type='bibr' target='#b101'>[90]</ns0:ref> is suitable for small-scale WSNs and suffers from high network latency due to</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 1. The summarisation of Related Work</ns0:head><ns0:p>The proposed method is similar to <ns0:ref type='bibr' target='#b42'>[35]</ns0:ref> and <ns0:ref type='bibr' target='#b75'>[66]</ns0:ref>. Compared to existing works <ns0:ref type='bibr' target='#b42'>[35,</ns0:ref><ns0:ref type='bibr' target='#b57'>49,</ns0:ref><ns0:ref type='bibr' target='#b60'>51,</ns0:ref><ns0:ref type='bibr' target='#b61'>52,</ns0:ref><ns0:ref type='bibr' target='#b63'>54,</ns0:ref><ns0:ref type='bibr' target='#b64'>55,</ns0:ref><ns0:ref type='bibr' target='#b65'>56,</ns0:ref><ns0:ref type='bibr' target='#b69'>60,</ns0:ref><ns0:ref type='bibr' target='#b75'>66]</ns0:ref>, our study is distinguished by the type of algorithm. In this approach, two methods are investigated in HWSNs. The first method uses GA to enhance performance by selecting the optimal CHs during the clustering and re-clustering phases. The second method extends the first method by featuring optimal transmission scheduling. In this method, we carefully analyse the transmission scheduling and communication among CHs. As a result, we address various properties and analyses of node strategies to minimise the end-to-end delay, extend the network lifetime, and improve energy efficiency. However, this is the first paper presenting a GA-based energy-aware multi-hop clustering to minimise end-to-end delay, expand the network lifetime, and enhance energy efficiency in HWSNs.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head><ns0:p>In HWSNs, clusters are formed based on GA. GA finds optimal CHs by considering the network coverage and its energy level. The CHs perform data aggregation and transmit the combined data packets to the sink. A multi-hop network is used to send packets from CHs to the sink. Neighbouring sink nodes consider regular, advanced, and supernodes. These nodes have different initial energy. Regions near the sink have a more significant number of supernodes than other regions. The next-hop CH is selected with the distance between the CHs, the residual energy, the number of CMs, and the neighbouring CHs associated with the given CH in routing. Various symbols and notations used in the proposed work are mentioned in Table <ns0:ref type='table' target='#tab_0'>2</ns0:ref>. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Multi-hop Network Model</ns0:head><ns0:p>A WSNs is assumed to be a bidirected graph where denotes the network size,</ns0:p><ns0:formula xml:id='formula_0'>), , , ( C E V G  V</ns0:formula><ns0:p>is two-way communication links, and is the direct links. We</ns0:p><ns0:formula xml:id='formula_1'>V V E   } } , { : ) , ( ), , {( E j i i j j i C   consider that</ns0:formula><ns0:p>iff, the SINR is convinced, i.e., and where and</ns0:p><ns0:formula xml:id='formula_2'>E j i  } , {     ) , ( j i , ) , (     i j ) , ( j i </ns0:formula><ns0:p>be the energy at node when node is sending and receiving data packets, respectively. It ) , ( i j</ns0:p><ns0:formula xml:id='formula_3'> j i</ns0:formula><ns0:p>can be represented as where and be the transmitted</ns0:p><ns0:formula xml:id='formula_4'>), , ( ) ( : ) , ( ), , ( ) ( : ) , ( i j j i j j i i j i         ) (i  ) ( j </ns0:formula><ns0:p>energy of nodes and . Here, is to obtain the communication link is the</ns0:p><ns0:formula xml:id='formula_5'>i j ) , ( ) , ( i j j i    }, , { j i </ns0:formula><ns0:p>noise power and is the SINR value. The network size occurrence to is defined by i.e.,</ns0:p><ns0:formula xml:id='formula_6'> V i  ), (i  }. } , { : { ) ( E j i V j i    </ns0:formula><ns0:p>Assume that the order of time and is the group of nodes in a time. The direct</ns0:p><ns0:formula xml:id='formula_7'>} , . . . , 2 , 1 { :     link</ns0:formula><ns0:p>is very dynamic, only if and the resulting SINR is convinced:</ns0:p><ns0:formula xml:id='formula_8'>C j i  ) , (     j i ,           } { ) , ( ) , ( i j k j i (1)</ns0:formula><ns0:p>A node can either send, receive, or be inactive at a particular time. A group of C c  communication links will be simultaneously very active for the compatible set situations. The</ns0:p></ns0:div>
<ns0:div><ns0:head>) (c group of active sensor node is represented by</ns0:head><ns0:p>The SINR is applied to</ns0:p><ns0:formula xml:id='formula_9'>c }. ) , ( , : { : ) ( c j i j i c     communication links : ) , ( c j i            } { ) ( ) , ( ) , ( i c j k j i (2)</ns0:formula><ns0:p>Consider the data packet set , and each data packet needs a time for a particular  </ns0:p></ns0:div>
<ns0:div><ns0:head>Optimised Energy Model</ns0:head><ns0:p>The proposed GA-EMC is adopted an optimised energy model <ns0:ref type='bibr' target='#b110'>[98]</ns0:ref> that minimises energy consumption. The nodes' energy is needed to communicate a data packet consisting of bits of a l packet is denoted by Eq. (3). </ns0:p><ns0:formula xml:id='formula_10'>          0 4 0 2 d d , d d , ) , ( d l l d l l d l      (3) PeerJ Comput. Sci.</ns0:formula><ns0:formula xml:id='formula_11'>    l l R ) (<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>The CM node spends the energy to send a packet to its CH. The power spent by the CM to transmit bits of a packet to its CH is determined by Eq. ( <ns0:ref type='formula'>5</ns0:ref>).</ns0:p><ns0:formula xml:id='formula_12'>l ) , ( , i i CH i T CM d l    (5)</ns0:formula><ns0:p>where, represents the Euclidean distance between the CM and its CH. A CH spends its</ns0:p><ns0:formula xml:id='formula_13'>i CH i d , th i</ns0:formula><ns0:p>power to receive a packet from its CMs, aggregate all the packets, and send it to other CHs. In addition to forwarding the local cluster data, CHs may also forward the traffic received from other CHs. Equation <ns0:ref type='bibr' target='#b8'>(6)</ns0:ref> shows the energy required by CHs.</ns0:p><ns0:formula xml:id='formula_14'>F CH P j T A CM j R CM j CH j j j d l l l            ) , ( ) ( ,<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where, denotes the energy spent by CH to accept data packets from the CMs. The )) , ( ) ( (</ns0:p><ns0:formula xml:id='formula_15'>j j j P j, T R CH CH F CH d l l       (7)</ns0:formula></ns0:div>
<ns0:div><ns0:head>Phases in the proposed GA-EMC protocol</ns0:head><ns0:p>The proposed GA-EMC contains 4 phases: Heterogeneous Nodes Deployment, Clustering Formulation, Selection of Next-hop Neighbor, and Packet Transmission. The proposed GA-EMC scheme's main idea is to optimise energy management of the WSNs by minimising the intracluster distance between a CH and a CM. Using Euclidean distance, the distance between the CM and the CH is calculated for WSNs. The CM is placed in the cluster with the least space between it and the others. The nodes interact directly with the sink if the distance between the sink and the sensor node is smaller than the distance between CH and CM. When a node joins a cluster, it sends a JOIN message to the other nodes and the CH to let them know it's there. The CH assigns each node a time slot for data collection. After the data has been acquired, the CH aggregates it before sending it to the sink. The nodes may sleep during this entire process, but CH must be awake at all times. This lowers CH's energy, and few nodes die over time due to living in a sparse network. In each cycle, the clusters are reconstructed, and CHs are chosen.</ns0:p><ns0:p>The fitness function is used in the proposed GA-EMC technique to reduce the intra-cluster distance between the sensor nodes and the cluster head (CH). The function optimised the CH's placement, which impacts the estimated number of packet retransmissions along the path and hence on the network's overall energy usage. Because GA-EMC works with the fitness function, the proposed technique is preferable in terms of performance measurement in terms of energy consumption. Minimising the distance between CMs and their CHs examines the sink distance, intra-cluster distance, and residual energy of CMs to determine their ideal positions. These phases are described below.</ns0:p></ns0:div>
<ns0:div><ns0:head>Heterogeneous Nodes Deployment Phase</ns0:head><ns0:p>In multi-hop communication, the CHs are situated very close to the sink node and have to forward more packets received from other nodes, and their power is exhausted quicker than the CHs far away from the sink. This creates a hot spot in the regions near the sink. To solve this issue, sensor nodes are classified into regular nodes with initial energy , advanced nodes with 0  initial energy , and supernodes with initial energy joules. The value of energy</ns0:p><ns0:formula xml:id='formula_16'>) 1 ( 0    ) 1 ( 0   </ns0:formula><ns0:p>heterogeneity constants and are greater than 1. The WSNs consist of nodes in total with    advanced nodes, supernodes, and regular nodes. The areas near</ns0:p><ns0:formula xml:id='formula_17'>  a m   s m     ) 1 ( s a m m</ns0:formula><ns0:p>the sink node have more supernodes than the areas away from the sink.</ns0:p></ns0:div>
<ns0:div><ns0:head>Clustering Formulation Phase</ns0:head><ns0:p>In this phase, more clusters are formed in HWSNs. It also contains two other sub-phases, namely CHs Selection and CM Association phases. The CHs selection phase selects an optimal CH. Each CM is associated with any one of the nearest energy-efficient CH in the CM association phase.</ns0:p></ns0:div>
<ns0:div><ns0:head>CH Selection Phase</ns0:head><ns0:p>This phase uses the GA for selecting optimal CHs and their location. GA is working on natural genetics and natural selection principles and is used to optimise various parameters. GA is applied in multiple fields for solving constrained and unconstrained optimisation problems <ns0:ref type='bibr' target='#b46'>[38]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>CM Association Phase</ns0:head><ns0:p>Each CH sends a CH advertisement message containing its identifier, location, initial cost as given by Eq. ( <ns0:ref type='formula'>8</ns0:ref>). CM selects their respective CH with low cost and sends JOIN message to the optimal CH.</ns0:p><ns0:formula xml:id='formula_18'>2 1 1 1 ) 1 ( f c f c Cost      (8)</ns0:formula><ns0:p>Here is a constant</ns0:p><ns0:p>. By setting proper value for , we can decide how much</ns0:p><ns0:formula xml:id='formula_19'>1 c 1 0 1   c 1 c</ns0:formula><ns0:p>importance to give to distance and energy in the CH selection. The terms and are</ns0:p><ns0:formula xml:id='formula_20'>1 f 2 f</ns0:formula><ns0:p>calculated as provided by Eqs. ( <ns0:ref type='formula'>9</ns0:ref>) and ( <ns0:ref type='formula'>10</ns0:ref>). </ns0:p><ns0:formula xml:id='formula_21'>j i j i f    2 (10)</ns0:formula><ns0:p>where, and represents the initial and residual energies of respectively. is a <ns0:ref type='bibr' target='#b14'>(11)</ns0:ref> The CHs collect the JOIN message from the CMs until the clustering timer expires. Upon the expiry of the timer, CHs create a dynamic time division multiple access (TDMA) scheduling for the packet transmission and send it to the CMs. The GA-based clustering algorithm shows the various steps involved in forming clusters and optimal CH selection.</ns0:p><ns0:formula xml:id='formula_22'>j i  j i  j i CH j i CH CH present</ns0:formula></ns0:div>
<ns0:div><ns0:head>Next-hop neighbour selection phase</ns0:head><ns0:p>Each CH broadcasts a neighbour advertisement message that contains information like identifier, location, initial and residual energies, distance to sink, and the size of CMs associated with it. When a CH receives a neighbour advertisement message, it adds the information contained in the packet to the neighbour. As shown in Figure <ns0:ref type='figure' target='#fig_5'>1</ns0:ref>, CHs use multi-hop paths to communicate the data packets to the sink. The next-hop CHs are chosen based on the distance, residual energy, and the size of CMs associated with the next-hop CH, the number of CHs that have reached via the nexthop CH. When more CHs can be reached via a CH, the CH will help forward the packet reliably. CH with more residual energy, less distance, smaller CMs, and more neighbouring CHs prefer the next-hop CH. For each CH node in the neighbour table, a merit value (MV) is calculated based on the above factors. Eq. ( <ns0:ref type='formula'>12</ns0:ref>) shows the calculation of MV.</ns0:p><ns0:formula xml:id='formula_23'>4 4 3 3 2 2 1 1                 MV (2) 1 4 3 2 1         (13) i d S CH CH CH d j j i max 1 ) , ( ) , ( (    (14) j S CH CH CH d d j j i i     }, ) , ( ) , ( { max max (15)</ns0:formula><ns0:p>Here represents the neighbouring CHs of .</ns0:p><ns0:formula xml:id='formula_24'> i CH 2 j j     (16)      j , 3 CM CH j (17)      j 1 4 CM CH j (<ns0:label>18</ns0:label></ns0:formula><ns0:formula xml:id='formula_25'>)</ns0:formula><ns0:p>In Eq. ( <ns0:ref type='formula'>13</ns0:ref>), and represents weights associated with different factors. , , ,</ns0:p><ns0:formula xml:id='formula_26'>3 2 1    4 </ns0:formula></ns0:div>
<ns0:div><ns0:head>Data Transmission Phase</ns0:head><ns0:p>It involves communication within the cluster and communication between sink and CH. In intracluster communication, the CH receives packets from their CMs per the dynamic TDMA scheduling. CM also senses the data from the surroundings and sends them to the concerned CHs during a particular time. The CMs turn off their radio in the remaining time to save the energy wasted during idle listening. Each CH has many next-hop CH neighbours, and the best neighbour node is selected in the next-hop neighbour selection phase.</ns0:p></ns0:div>
<ns0:div><ns0:head>Genetic Algorithm</ns0:head><ns0:p>In GA, each result to a specific problem is denoted by a chromosome using a binary coding scheme. A group of chromosomes constitutes the population. The initial population consists of PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66432:2:0:NEW 6 Jun 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science randomly selected chromosomes, and each bit in the chromosome is called a gene. For each chromosome, a fitness value is calculated, and it evaluates the effectiveness of the chromosome. Chromosomes with high fitness values will get more chances to create new chromosomes. The GA involves three basic operations: selection, crossover, and mutation to select the best chromosome. The selection process duplicates good chromosomes and eliminates the poor ones, and there are many selection methods like tournament selection, ranking selection, and roulette wheel selection. The crossover operation selects two parents, recombines them, and creates two children. Crossover can be either single-point crossover or multi-point crossover. Crossover does not introduce any new genetic properties. Mutation operation introduces new genetic properties. These operations are repeated for a given number of generations <ns0:ref type='bibr' target='#b46'>[38]</ns0:ref><ns0:ref type='bibr' target='#b47'>[39]</ns0:ref>. The implementation of various GA operations is explained below. ii. Objective Function: The objective function ( ) is used for selecting optimal CHs. In  designing , the following facts are considered. The optimal CH consumes more energy  than the CM, so the number of CHs must be minimised. The power required for intracluster communication depends on the distance between CHs and CMs, and the power required for inter-cluster communication depends on the distance between two CHs. To save power, we have to reduce the size of optimal CHs ( ), the distance between CHs  and CMs ( ), and the two CHs distance ( ). By selecting CHs with higher residual   energy, we can deliver packets reliably. The selects the CHs by considering the above  factors, and it is a minimisation function as given in Eq. <ns0:ref type='bibr' target='#b24'>(19)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_27'>                 1 (<ns0:label>19</ns0:label></ns0:formula><ns0:formula xml:id='formula_28'>)</ns0:formula><ns0:p>where, represents the sum of the residual energy associated with the CHs.</ns0:p><ns0:formula xml:id='formula_29'>       1 i i (20)</ns0:formula><ns0:p>Eq. ( <ns0:ref type='formula'>21</ns0:ref>) determines the sum of the distance between CMs from their respective CHs. </ns0:p><ns0:formula xml:id='formula_30'>j i j i i CH CH d        (<ns0:label>22</ns0:label></ns0:formula><ns0:formula xml:id='formula_31'>)</ns0:formula><ns0:p>where and represents the set of parent CHs associated with . v. Crossover: The proposed GA-EMC scheme uses a single-point crossover. A random value (0 to 1) and two chromosomes have been selected for this operation. The crossover operation is performed only if the selected random value is less than the crossover probability . Otherwise, no crossover is done. If it is decided to perform crossover, an c p arbitrary crossover point is selected. After the crossover point, the two-parent chromosomes exchange their packet to generate two child chromosomes. Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref> shows the crossover operation. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>As shown in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>, in the first chromosome, no mutation is performed, whereas in the second chromosome, 6 bits are mutated.</ns0:p><ns0:p>Selection, mutation, and crossover operations are repeated for given generations. The better chromosome is selected at the end of the last generation. In the best chromosome selection, if the genome value is 1, the node becomes CH, and otherwise, it becomes CM. Minimising End-to-End Delay with Packet Forwarding Mechanism</ns0:p></ns0:div>
<ns0:div><ns0:head>GA-based Clustering Algorithm</ns0:head><ns0:p>Let represent an upper limit on the delay with A mathematical model is designed</ns0:p><ns0:formula xml:id='formula_32'> }. , . . . , 2 , 1 {   </ns0:formula><ns0:p>to analyse the number of data packets sent and received between CH and CMs for a particular time. We use the binary variables: if time ; if node is transmitting in</ns0:p><ns0:formula xml:id='formula_33'>1  t    t 1 ,   t s i   i S s  ; if node is receiving in ; if is present at by .   t 1 ,   t s i   i S s    t 1 ,  t s i  S s    i   t</ns0:formula><ns0:p>GA-EMC is specially formulated to minimise the delay required to send packets from source to destination. The constraint has been forced all the time after the first round.</ns0:p><ns0:formula xml:id='formula_34'>} { \ , 1    T t t t   </ns0:formula><ns0:p>The constraint ensures that a node can either send and receive data</ns0:p><ns0:formula xml:id='formula_35'>  T t v i Y Z t S s t s i t s i       , , , ,</ns0:formula></ns0:div>
<ns0:div><ns0:head></ns0:head><ns0:p>packets at a particular time or nothing to be done. The constraint</ns0:p><ns0:formula xml:id='formula_36'>          t S s Y v i t s i v i t s i , , 1 , 1 , ,</ns0:formula><ns0:p>ensures that, in time , the data is transmitted by one node and received by another node in t HWSNs. The constraint ensures that the node receives a packet</ns0:p><ns0:formula xml:id='formula_37'>        t S s V j Y t s j j v i t s i , ,<ns0:label>, , ) ( , j s</ns0:label></ns0:formula><ns0:p>in the current time. Inequality allows a node to communicate data</ns0:p><ns0:formula xml:id='formula_38'>S s V i Y T t t s i T i t s i          , ,<ns0:label>1 , 1 , ,</ns0:label></ns0:formula><ns0:p>packets during the time. The constraint is fully justified in</ns0:p><ns0:formula xml:id='formula_39'>          t S s Y v i t s i v i t s i , ,<ns0:label>1 , 1 , ,</ns0:label></ns0:formula><ns0:p>transmitting packets through multi-hop routing. The constraints and defines variable and set the Manuscript to be reviewed Computer Science conditions for the starting and ending of the dynamic TDMA scheduling. Finally, the constraint expresses the SINR state for sending data packets on link at a time</ns0:p><ns0:formula xml:id='formula_40'>T t s O V i S s Y a t T T s i t s i       )}, ( { \ , , 1 , , S s a a T s s D s s O    , 1<ns0:label>, 1 ) ( 1 )</ns0:label></ns0:formula><ns0:formula xml:id='formula_41'>T t S s V i a t s i t s i      , , , , , s   j i,</ns0:formula><ns0:p>. Subsequently, the SINR state is stable when agreeing to the case when all nodes</ns0:p><ns0:formula xml:id='formula_42'>t     1 , , t s j t s i Y</ns0:formula><ns0:p>besides only node is sending packets in a network. Although, node receives a data packets</ns0:p><ns0:formula xml:id='formula_43'>i j s</ns0:formula><ns0:p>from node in a time , then becomes equivalent to</ns0:p><ns0:formula xml:id='formula_44'>i t    t t          } , { \ } { \ ), } , ( ( ) , ( j i V i s S s t ks j k p j i p  (3)</ns0:formula><ns0:p>which accurately confirms that the SINR value is met. In Eqn. <ns0:ref type='bibr' target='#b30'>(24)</ns0:ref>, is valid</ns0:p><ns0:formula xml:id='formula_45'>   } \{ ), } , ( s S s t ks j k p</ns0:formula><ns0:p>since when then all the nodes besides are illegal to send in since</ns0:p><ns0:formula xml:id='formula_46'>1 ,   t s i i s t .           t S s Y v i t s i v i t s i , ,<ns0:label>1 , 1 , ,</ns0:label></ns0:formula><ns0:p>We observe that the packet forwarding mechanism is used to increase the transmissions in HWSNs. In GA-EMC, the packet forwarding mechanism increases the transmissions at a particular time, and more data packets are transmitted to the CMs through adjacent clusters. This is possible for increasing the use of packet forwarding and forward interference cancellation mechanisms among CMs in all clusters in the ensuing time, which is more cooperative for minimising delay in HWSNs.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>In this section, the performance of GA-EMC is analysed and compared with E-MDSP <ns0:ref type='bibr' target='#b42'>[35]</ns0:ref> and EEWC <ns0:ref type='bibr' target='#b75'>[66]</ns0:ref>. Simulations are performed using the Network simulator -NS2 <ns0:ref type='bibr' target='#b111'>[99]</ns0:ref>. An HWSN consists of 400 nodes in a simulation area. To evaluate the GA-EMC performance, we have considered the metrics such as network lifetime, throughput, network stability, the number of data packets sent to the sink, and the average energy consumption in the whole network. The various parameters for simulation are presented in Table <ns0:ref type='table'>5</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 5. The parameters and values for simulation</ns0:head></ns0:div>
<ns0:div><ns0:head>Network Lifetime</ns0:head><ns0:p>To extend the HWSNs' lifetime, we have considered the alive nodes in each round. Figure <ns0:ref type='figure' target='#fig_14'>3</ns0:ref> illustrates that the proposed GA-EMC scheme extends the lifetime of alive nodes in every round than EEWC and E-MDSP. The proposed GA-EMC provides a better network lifetime than existing schemes. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The proposed GA-EMC uses multi-hop communication for packet delivery to extend the network lifetime. Compared to the existing schemes, the first node dies after 1800 rounds in GA-EMC. Later, the last node remains alive for 2100 rounds. In EEWC and E-MDSP schemes, the nodes have died after 1000 and 1600 rounds, respectively. Figure <ns0:ref type='figure' target='#fig_14'>3</ns0:ref> shows that the proposed GA-EMC scheme prolongs the network lifetime and stability, and the last alive node can still respond to the network in this approach.</ns0:p></ns0:div>
<ns0:div><ns0:head>Throughput</ns0:head><ns0:p>In HWSNs, the proposed GA-EMC algorithm analysed the number of data packets sent, each CH sends data packets to the sink, and the CMs send data packets to their respective CHs. As shown in Figure <ns0:ref type='figure' target='#fig_15'>4</ns0:ref>, the EEWC performs poorly with less data packet communication. Similarly, the E-MDSP gives the best behaviour than EEWC and also provides poor performance than GA-EMC. The number of data packets sent from CHs is increased significantly by the EMC-GA and achieves better throughput when compared to the other schemes. </ns0:p></ns0:div>
<ns0:div><ns0:head>Stability Period</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_16'>5</ns0:ref> illustrates the regular time interval from the beginning of the network process until the death of the first node in HWSNs. As shown in Figure <ns0:ref type='figure' target='#fig_16'>5</ns0:ref>, the GA-EMC has a better stability period than the other schemes. The first dead node starts at 1800 rounds in the GA-EMC scheme, whereas the first dead node starts at nearly 1000 and 1600 rounds under the EEWC, E-MDSP approaches. The stability duration of GA-EMC compared with the EEWC scheme increases from 1000 to 2500 rounds, and the E-MDSP increases from 1600 to 2500 rounds. So, GA-EMC provides better stability duration and prolongs the network lifetime. Minimising the End-to-End Delay Figure <ns0:ref type='figure' target='#fig_18'>6</ns0:ref> displays the analysis of various approaches in terms of delay. It shows that the EEWC acquires the extreme delay of 0.04s in 2500 rounds. However, the delay is low compared to EEWC, and it is maximum than the GA-EMC. At the same time, E-MDSP achieves a minimum delay than EEWC, but it fails to outperform GA-EMC. GA-EMC approach achieves a low delay of only 0.02s at the 2500 rounds. Manuscript to be reviewed The average energy consumption Even though more packets are transmitted in the proposed protocol than in EEWC and E-MDSP, the average energy consumption till a particular round is less in the proposed GA-EMC, as shown in Figure <ns0:ref type='figure' target='#fig_19'>7</ns0:ref>. This energy-saving aims to use multi-hop communication and associate the CMs with the optimum CHs.</ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div>
<ns0:div><ns0:head>Impact of sink node location on the HWSNs lifetime and stability</ns0:head><ns0:p>Network stability is measured by the round in which the first node dies. To study the impact of sink location on network stability and lifetime, we consider three scenarios: in scenarios 1, 2, and 3, the sink is situated at the middle of the field, at the top right corner, and outside the field, respectively. Table <ns0:ref type='table' target='#tab_5'>6</ns0:ref> shows the comparison of the round at which a given percentage of nodes died for the different sink positions. Averaged over the different sink positions, the proposed protocol extends the round at which the last node dies by 30.94%. GA-EMC extends the network lifetime and improves stability in all three cases, and it provides the most significant improvement when the sink is at the corner or outside the field, owing to multi-hop routing.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_21'>8</ns0:ref> shows the rounds at which the first node died in the three scenarios. As shown in Figure <ns0:ref type='figure' target='#fig_21'>8</ns0:ref>, the round at which the first node dies is postponed by 10.98%, 23.47% and 46.94% in scenarios 1, 2 and 3, respectively. This shows that the proposed protocol performs better for longer-distance transmission. Averaged over the different sink positions, GA-EMC provides a 27.13% improvement in the round at which the first node dies compared to EEWC and E-MDSP.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>The proposed GA-EMC scheme outperforms the existing methods, EEWC and E-MDSP, in almost all aspects. It keeps more nodes alive in every round, prolonging both the network lifetime and the stability period. It also significantly increases the number of data packets sent from the CHs, achieving better throughput, while lowering the end-to-end delay and reducing the average energy consumption up to a given round. Finally, it extends the network lifetime and improves stability for all three sink placements; owing to multi-hop routing, the improvement is largest when the sink is at the corner or outside the field, i.e., for longer-distance transmission.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>In this paper, a GA-EMC scheme is presented for extending the lifetime and minimising the delay in HWSNs. In selecting the optimal CHs, the fitness value is calculated based on cluster distances, the number of CHs, and their initial and residual energies. Each cluster selects, as its next hop in inter-cluster routing, a CH with minimum distance, higher residual energy, fewer CMs, and more neighbours. The energy hole problem created by multipath routing is mitigated by deploying more high-energy supernodes in the areas closer to the sink. The mathematical model of energy consumption for clustering with multi-hop data transmission is explained. The experimental results show that GA-EMC prolongs the HWSNs lifetime, minimises the delay, and maximises stability compared to EEWC and E-MDSP for various positions of the BS, primarily when the BS is situated in the network corner and outer area. The death of the first and last nodes is postponed by 27.13% and 30.94%, respectively, compared with EEWC and E-MDSP. In the future, the simulation can be repeated to study the impact of the number of nodes in HWSNs. Also, the performance of GA-EMC can be analysed in actual (not simulated) HWSNs in some practical scenarios.</ns0:p></ns0:div>
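<ns0:div><ns0:p>To make the CH-selection step summarised above concrete, the following minimal Python sketch scores candidate next-hop CHs using the factors named in the conclusions; the function names, the data layout, and the equal weighting of the factors are illustrative assumptions rather than the actual GA-EMC implementation.</ns0:p><ns0:p>def next_hop_score(distance, residual_energy, num_cms, num_neighbours):
    # Reward higher residual energy and more neighbours; penalise longer
    # distance and a larger number of cluster members (CMs).
    return residual_energy + num_neighbours - distance - num_cms

def choose_next_hop(candidates):
    # candidates: list of dicts holding the four factors for each candidate CH
    return max(candidates, key=lambda c: next_hop_score(
        c["distance"], c["residual_energy"], c["num_cms"], c["num_neighbours"]))

candidates = [
    {"distance": 40.0, "residual_energy": 0.8, "num_cms": 6, "num_neighbours": 4},
    {"distance": 25.0, "residual_energy": 0.9, "num_cms": 3, "num_neighbours": 5},
]
print(choose_next_hop(candidates))  # the second candidate wins under this toy scoring</ns0:p></ns0:div>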
</ns0:div><ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>where d represents the distance between the nodes involved in the communication and the electronics energy term is the energy dissipated in the source and the sink; it accounts for factors such as modulation and digital coding. The free-space and multipath fading coefficients determine the amplifier energy, and the threshold d0 decides whether the multipath fading model is used. Equation (4) gives the energy spent by the sink in receiving a packet of l bits.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>The second term gives the energy spent in aggregation, the third term gives the energy spent for data transmission to the next-hop CH, and the last term, F_CHj, represents the energy spent in forwarding the relay traffic. F_CHj is the sum of the energy required to receive the k bits of the packet from all the lower-level CHs and to communicate the packets to the parent CH, as shown by Eq. (7).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Multi-hop communication from CH to sink</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>i. Binary Coding: The binary coding scheme represents each chromosome for the given sensor scenario as a string of 1s and 0s; a chromosome of n bits signifies an HWSN with n sensor nodes, so the chromosome size is the same as the size of the network. In the chromosome set, the values 1 and 0 represent a CH and a CM, respectively. Figure 2 shows the chromosome representation of a network with 20 sensor nodes, in which the nodes marked 1 are CHs.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Binary coding representation of a chromosome</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>1 </ns0:head><ns0:label>1</ns0:label><ns0:figDesc>In Eq. <ns0:ref type='bibr' target='#b28'>(22)</ns0:ref>, the term represents the total distance between all the CHs in the i-th level and their parent CH nodes. The node level is considered to find out the parent CH nodes. All the CHs in level 1 send packets to the destination directly, while CHs in the remaining levels send packets to their parent CHs in a multi-hop fashion towards the sink.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>Fitness Function: GA is generally suitable for solving the maximisation problem. Since our aim is minimisation, the problem is transformed into maximising the fitness value. For each chromosome in the population, the fitness value is calculated as given by Eq.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>Selection is used to select chromosomes with a higher fitness value f_v to join the mating pool and form a new population for the subsequent generations. The proposed method uses the Roulette wheel selection method.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Begin</ns0:head><ns0:label /><ns0:figDesc>Choose binary coding to represent the chromosomes; set values for the population size; set values for the crossover and mutation probability; set the maximum number of generations g_max; initialise the generation counter to 0; ... generate and send the TDMA schedule to the CMs.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Number of rounds Vs number of alive nodes</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Number of rounds Vs throughput</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Number of rounds Vs number of dead nodes</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Number of rounds Vs Delay</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_19'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7 displays the average energy consumption under variable simulation rounds.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_20'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Number of rounds Vs Average energy consumption</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_21'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Comparing GA-EMC with EEWC and E-MDSP based on network lifetime</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_22'><ns0:head>Figure 1 Multi</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_23'><ns0:head /><ns0:label /><ns0:figDesc>Binary coding representation of a chromosome. The first row represents the sensor nodes and the second row represents their corresponding binary coding. The chromosome size is the same as the size of the network. In the chromosome set, the values 1 and 0 represent the CH and CM, respectively.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_24'><ns0:head /><ns0:label /><ns0:figDesc>Number of rounds vs. number of alive nodes. The X-axis represents the number of rounds, and the Y-axis represents the number of alive sensor nodes. The green, red and blue lines represent the proposed GA-EMC, E-MDSP and EEWC, respectively.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_25'><ns0:head /><ns0:label /><ns0:figDesc>Number of rounds vs. throughput. The X-axis represents the number of rounds and the Y-axis represents the throughput. The green, red and blue lines represent the proposed GA-EMC, E-MDSP and EEWC, respectively. EEWC performs the worst, with the least data packet communication; E-MDSP behaves better than EEWC but still performs worse than GA-EMC.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_26'><ns0:head /><ns0:label /><ns0:figDesc>Number of rounds vs. number of dead nodes. The X-axis represents the number of rounds, and the Y-axis represents the number of dead nodes. The green, red and blue lines represent the proposed GA-EMC, E-MDSP and EEWC, respectively. The figure shows the time interval from the beginning of network operation until the death of the first node in the HWSNs. GA-EMC has a better stability period than the other schemes: the first node dies at 1800 rounds in the GA-EMC scheme, whereas it dies at nearly 1000 and 1600 rounds under the EEWC and E-MDSP approaches, respectively. Compared with EEWC, the stability duration improves from 1000 to 2500 rounds, and compared with E-MDSP it improves from 1600 to 2500 rounds. So, GA-EMC provides a better stability duration and prolongs the network lifetime.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_27'><ns0:head>Figure 6 Number</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_28'><ns0:head>Figure 7 Number</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_29'><ns0:head /><ns0:label /><ns0:figDesc>Comparing GA-EMC with EEWC and E-MDSP based on network lifetime. The X-axis represents each of three cases, i.e., the first node died, the middle node died and the last node died. The Y-axis represents the round at which the corresponding node died in the three scenarios. The green, red and blue bars represent the proposed GA-EMC, E-MDSP and EEWC, respectively.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='35,42.52,70.87,525.00,348.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The list of symbols and notations in this paper</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head /><ns0:label /><ns0:figDesc>CM_i is in the cluster table of CH_j, and d_max is the maximum distance between the CH and its CMs, d_max = max d(CM_i, CH_j); it is calculated by Eq. (11).</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>A single-point crossover</ns0:figDesc><ns0:table><ns0:row><ns0:cell>vi. Mutation: In bit-level mutation, a random value is chosen for every bit in a chromosome. If this random value is less than the mutation probability P_m, the mutation is performed to invert the bit; otherwise, the bit is kept as such.</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Bit-level mutation</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Comparison of percentage of nodes that died for different sink node locations</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 1. The summarisation of Related Works</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Name of the Proposed Solutions</ns0:cell><ns0:cell>Functionality</ns0:cell><ns0:cell>Advantages</ns0:cell><ns0:cell>Disadvantages</ns0:cell></ns0:row><ns0:row><ns0:cell>HEED [4]</ns0:cell><ns0:cell>Cluster heads (CH) have been selected according to a hybrid of the node residual energy</ns0:cell><ns0:cell>Surely guarantee connectivity of clustered networks</ns0:cell><ns0:cell>It works only on two-level hierarchy, not to multilevel hierarchies</ns0:cell></ns0:row><ns0:row><ns0:cell>CATD [5]</ns0:cell><ns0:cell>It is improved in the cluster data transmission phase after the CHs are selected</ns0:cell><ns0:cell>It reduces the network energy, network overhead, and cost.</ns0:cell><ns0:cell>Hot-spot problems are created</ns0:cell></ns0:row><ns0:row><ns0:cell>EADC <ns0:ref type='bibr' target='#b8'>[6]</ns0:ref></ns0:cell><ns0:cell>It uses competition range to construct clusters of even sizes.</ns0:cell><ns0:cell>Achieves load balance among CHs</ns0:cell><ns0:cell>Uneven clustering strategy</ns0:cell></ns0:row><ns0:row><ns0:cell>ERA <ns0:ref type='bibr' target='#b9'>[7]</ns0:ref></ns0:cell><ns0:cell>The clever strategy of CH selection, residual energy of the CHs and the intracluster distance for cluster formation.</ns0:cell><ns0:cell>Achieves constant message and linear time complexity.</ns0:cell><ns0:cell>High message complexity for building backbone network of CHs.</ns0:cell></ns0:row><ns0:row><ns0:cell>S-MDSP [14]</ns0:cell><ns0:cell>Delay-minimization scheduling for multi-hop networks.</ns0:cell><ns0:cell>Minimising the end-to-end delay; the delay is significantly reduced by combining cooperative forwarding (CF) and forward interference cancellation (FIC).</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>E2HRC <ns0:ref type='bibr' target='#b35'>[28]</ns0:ref></ns0:cell><ns0:cell>Messaging</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | " Reviewer Comments and Responses
Journal Name
:
PeerJ Computer Science
Title
:
A genetic algorithm-based energy-aware multi-hop clustering
scheme for heterogeneous wireless sensor networks
We thank the reviewer for the careful and thorough reading of this manuscript and the thoughtful comments and constructive suggestions, which help improve the quality of this manuscript. Our response follows.
Reviewer #1:
Comment 1: Please strictly follow this journal template, like alignment, section number etc.
Response: We appreciate your suggestions, and we have thoroughly checked the manuscript to make sure it follows the journal template.
Comment 2: Please cite reference papers with increasing references order.
Response: Thank you for your thorough review and salient observations. Following your suggestions, we have updated the citations to make sure articles are cited in the increasing reference order.
Comment 3: In the related works part, it is not suggested to mention each reference paper with 1-2 sentences, which is not meaningful.
It is suggested to first classify them into several types and then give an explanation of their own work (uniqueness).
Response: We again appreciate your suggestions; Table 1 now lists the referenced articles together with their own novel contributions.
Comment 4: All tables and figures are missing in the main pdf file.
Response: As per the journal requirements and template, all tables and figures are uploaded separately, and they are shown at the end of the manuscript PDF.
Comment 5: All the symbols are not aligned well.
Response: We again appreciate your suggestions, and the formulas in the text are now correctly formatted and aligned in the revised manuscript as per the suggestion.
Comment 6: Please change 'Where' to 'where' and move it to the front on certain line below certain equations.
Response: We again appreciate your suggestions, and all appearances of “Where” are now updated to “where”.
Comment 7: Acknowledgements part is missing.
Response: Thank you very much indeed for your comments. As per your suggestion, an Acknowledgements section has now been added as follows:
“We thank reviewers, editors and publishers for providing valuable feedback to improve the manuscript. We also appreciate Vija Prakash for helping us with formatting.”
Comment 8: Reference part is weak, and ref. 93 is missing. There are too many so-so references, which is not necessary.
More relevant papers about 'energy efficiency and optimization for WSN' are suggested below.
--A PSO based Energy Efficient Coverage Control Algorithm for Wireless Sensor Networks, Computers Materials & Continua,vol.56, no.3, pp.433-446, 2018.
--Optimal Coverage Multi-Path Scheduling Scheme with Multiple Mobile Sinks for WSNs, Computers, Materials & Continua, vol.62, no.2, 2020, pp.695-711.
-- An Enhanced PEGASIS Algorithm with Mobile Sink Support for Wireless Sensor Networks, Wireless Communications & Mobile Computing, Volume 2018, Article ID 9472075, 2018.
--Multiple Strategies Differential Privacy on Sparse Tensor Factorization for Network Traffic Analysis in 5G, IEEE Transactions on Industrial Informatics, vol.18, no.3, pp.1939-1948, 2022.
--A novel fault tolerance energy-aware clustering method via social spider optimization (SSO) and fuzzy logic and mobile sink in wireless sensor networks (WSNs), Computer Systems Science and Engineering, vol. 35, no.6, pp. 477–494, 2020.
--Global levy flight of cuckoo search with particle swarm optimization for effective cluster head selection in wireless sensor network, Intelligent Automation & Soft Computing, vol. 26, no.2, pp. 303–311, 2020.
Response: We profoundly appreciate your suggestion, and we have now added all these references as [92-97].
92. J. Wang, C. Ju, Y. Gao, A. K. Sangaiah and G. Kim, 'A pso based energy efficient coverage control algorithm for wireless sensor networks,' Computers, Materials & Continua, vol. 56, no.3, pp. 433–446, 2018. doi: 10.3970/cmc.2018.04132.
93. J. Wang, Y. Gao, C. Zhou, R. Simon Sherratt and L. Wang, 'Optimal coverage multi-path scheduling scheme with multiple mobile sinks for wsns,' Computers, Materials & Continua, vol. 62, no.2, pp. 695–711, 2020. doi:10.32604/cmc.2020.08674
94. J. Wang, Y. Gao, X. Yin, F. Li, and H. -J. Kim, 'An Enhanced PEGASIS Algorithm with Mobile Sink Support for Wireless Sensor Networks,' Wireless Communications & Mobile Computing, Vol. 2018, Article ID 9472075, 2018. doi: 10.1155/2018/9472075.
95. J. Wang, H. Han, H. Li, S. He, P. Kumar Sharma and L. Chen, 'Multiple Strategies Differential Privacy on Sparse Tensor Factorization for Network Traffic Analysis in 5G,' in IEEE Transactions on Industrial Informatics, vol. 18, no. 3, pp. 1939-1948, March 2022, doi: 10.1109/TII.2021.3082576.
96. S. Tabatabaei, 'A Novel Fault Tolerance Energy-Aware Clustering Method via Social Spider Optimization (SSO) and Fuzzy Logic and Mobile Sink in Wireless Sensor Networks (WSNs).' Computer Systems Science and Engineering, vol. 35, no.6, pp. 477–494, 2020. doi:10.32604/csse.2020.35.477.
97. K. Vijayalakshmi, P. Anandan, 'Global levy flight of cuckoo search with particle swarm optimization for effective cluster head selection in wireless sensor network,' Intelligent Automation & Soft Computing, vol. 26, no.2, pp. 303–311, 2020. doi:10.31209/2020.100000165.
We very much hope that the revised manuscript meets all your expectations and requirements for publishing in the esteemed journal.
Best regards,
R Muthukkumar1, Lalit Garg2, K Maharajan3, M Jayalakshmi4, NZ Jhanjhi5, S Parthiban6, G.Saritha7
1National Engineering College, Kovilpatti, Thoothukudi, Tamil Nadu, India
2Faculty of Information and Communication Technology, University of Malta, Msida, Malta
3Vel Tech Rangarajan Dr Sagunthala R& D Institute of Science and Technology, Chennai, India
4National Engineering College, Kovilpatti, Thoothukudi, Tamil Nadu, India
5School of Computer Science and Engineering, Taylor's University, Subang Jaya, Malaysia
6Saveetha of Engineering, Saveetha Institute of Medical and Technical Sciences, Chennai, India
7Sri Sairam institute of technology, Chennai, India.
" | Here is a paper. Please give your review comments after reading it. |
705 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>SymPy is an open source computer algebra system written in pure Python. It is built with a focus on extensibility and ease of use, through both interactive and programmatic applications. These characteristics have led SymPy to become a popular symbolic library for the scientific Python ecosystem. This paper presents the architecture of SymPy, a description of its features, and a discussion of select submodules. The supplementary material provides additional examples and further outlines details of the architecture and features of SymPy.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>SymPy is a full featured computer algebra system (CAS) written in the Python <ns0:ref type='bibr' target='#b33'>[32]</ns0:ref> programming language. It is free and open source software, licensed under the 3-clause BSD license <ns0:ref type='bibr' target='#b52'>[49]</ns0:ref>.</ns0:p><ns0:p>The SymPy project was started by Ondřej Čertík in 2005, and it has since grown to over 500 contributors. Currently, SymPy is developed on GitHub using a bazaar community model <ns0:ref type='bibr' target='#b44'>[43]</ns0:ref>.</ns0:p><ns0:p>The accessibility of the codebase and the open community model allow SymPy to rapidly respond to the needs of users and developers.</ns0:p><ns0:p>Python is a dynamically typed programming language that has a focus on ease of use and readability. 1 Due in part to this focus, it has become a popular language for scientific computing and data science, with a broad ecosystem of libraries <ns0:ref type='bibr' target='#b38'>[37]</ns0:ref>. SymPy is itself used as a dependency by many libraries and tools to support research within a variety of domains, such as SageMath <ns0:ref type='bibr' target='#b61'>[58]</ns0:ref> (pure and applied mathematics), yt <ns0:ref type='bibr' target='#b67'>[64]</ns0:ref> (astronomy and astrophysics), PyDy <ns0:ref type='bibr' target='#b18'>[19]</ns0:ref> (multibody dynamics), and SfePy <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref> (finite elements).</ns0:p><ns0:p>Unlike many CAS's, SymPy does not invent its own programming language. Python itself is used both for the internal implementation and end user interaction. By using the operator overloading functionality of Python, SymPy follows the embedded domain specific language paradigm proposed by Hudak <ns0:ref type='bibr' target='#b25'>[24]</ns0:ref>. The exclusive usage of a single programming language makes it easier for people already familiar with that language to use or develop SymPy. Simultaneously, it enables developers to focus on mathematics, rather than language design. SymPy officially supports Python 2.6, 2.7 and 3.2-3.5.</ns0:p><ns0:p>SymPy is designed with a strong focus on usability as a library. Extensibility is important in its application program interface (API) design. Thus, SymPy makes no attempt to extend the Python language itself. The goal is for users of SymPy to be able to include SymPy alongside other Python libraries in their workflow, whether that be in an interactive environment or as a programmatic part in a larger system. Being a library, SymPy does not have a built-in graphical user interface (GUI). However, SymPy exposes a rich interactive display system, and supports registering display formatters with Jupyter <ns0:ref type='bibr' target='#b30'>[29]</ns0:ref> frontends, including the Notebook and Qt Console, which will render SymPy expressions using MathJax <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref> or L A T E X.</ns0:p><ns0:p>The remainder of this paper discusses key components of the SymPy library. Section 2 enumerates the features of SymPy and takes a closer look at some of the important ones.</ns0:p><ns0:p>The section 3 looks at the numerical features of SymPy and its dependency library, mpmath.</ns0:p><ns0:p>Section 4 looks at the domain specific physics submodules for performing symbolic and numerical calculations in classical mechanics and quantum mechanics. Section 5 discusses the architecture of SymPy. Section 6 looks at a selection of packages that depend on SymPy. 
The following statement imports all SymPy functions into the global Python namespace. 2 From here on, all examples in this paper assume that this statement has been executed: 3 >>> from sympy import *</ns0:p><ns0:p>All the examples in this paper can be tested on SymPy Live, an online Python shell that uses the Google App Engine <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref> to execute SymPy code. SymPy Live is also integrated into the SymPy documentation at http://docs.sympy.org.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>OVERVIEW OF CAPABILITIES</ns0:head><ns0:p>This section gives a basic introduction of SymPy, and lists its features. A few featuresassumptions, simplification, calculus, polynomials, printers, solvers, and matrices-are core components of SymPy and are discussed in depth. Many other features are discussed in depth in the supplementary material.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Basic Usage</ns0:head><ns0:p>Symbolic variables, called symbols, must be defined and assigned to Python variables before they can be used. This is typically done through the symbols function, which may create multiple symbols in a single function call. For instance, >>> x, y, z = symbols('x y z') creates three symbols representing variables named x, y, and z. In this particular instance, these symbols are all assigned to Python variables of the same name. However, the user is free to assign them to different Python variables, while representing the same symbol, such as a, b, c = symbols('x y z'). In order to minimize potential confusion, though, all examples in this paper will assume that the symbols x, y, and z have been assigned to Python variables identical to their symbolic names.</ns0:p><ns0:p>Expressions are created from symbols using Python's mathematical syntax. For instance, the following Python code creates the expression (x 2 − 2x + 3)/y. Note that the expression remains unevaluated: it is represented symbolically. </ns0:p></ns0:div>
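<ns0:div><ns0:p>As a minimal illustration of the construction just described (using the symbols x and y defined above), the expression is entered with ordinary Python operators and stays unevaluated:</ns0:p><ns0:p>>>> expr = (x**2 - 2*x + 3)/y
>>> expr
(x**2 - 2*x + 3)/y</ns0:p></ns0:div>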
<ns0:div><ns0:head n='2.2'>List of Features</ns0:head><ns0:p>Although SymPy's extensive feature set cannot be covered in depth in this paper, bedrock areas, that is, those areas that are used throughout the library, are discussed in their own subsections below. Additionally, Table <ns0:ref type='table'>1</ns0:ref> gives a compact listing of all major capabilities present in the SymPy codebase. This grants a sampling from the breadth of topics and application domains that SymPy services. Unless stated otherwise, all features noted in Table <ns0:ref type='table'>1</ns0:ref> are symbolic in nature. Numeric features are discussed in Section 3.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 1. SymPy Features and Descriptions</ns0:head></ns0:div>
<ns0:div><ns0:head>Feature (submodules) Description</ns0:head><ns0:p>Calculus (sympy.core, Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Combinatorics & Group Theory (sympy.combinatorics) Permutations, combinations, partitions, subsets, various permutation groups (such as polyhedral, Rubik, symmetric, and others), Gray codes <ns0:ref type='bibr' target='#b37'>[36]</ns0:ref>, and Prufer sequences <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>. Concrete Math (sympy.concrete) Summation, products, tools for determining whether summation and product expressions are convergent, absolutely convergent, hypergeometric, and for determining other properties; computation of Gosper's normal form <ns0:ref type='bibr' target='#b43'>[42]</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3'>Assumptions</ns0:head><ns0:p>The assumptions system allows users to specify that symbols have certain common mathematical properties, such as being positive, imaginary, or integer. SymPy is careful to never perform simplifications on an expression unless the assumptions allow them. For instance, the identity √ t 2 = t holds if t is nonnegative (t ≥ 0). However, for general complex t, no such identity holds.</ns0:p><ns0:p>By default, SymPy performs all calculations assuming that symbols are complex valued. This assumption makes it easier to treat mathematical problems in full generality.</ns0:p><ns0:formula xml:id='formula_0'>>>> t = Symbol('t') >>> sqrt(t**2) sqrt(t**2)</ns0:formula><ns0:p>By assuming the most general case, that t is complex by default, SymPy avoids performing mathematically invalid operations. However, in many cases users will wish to simplify expressions containing terms like</ns0:p><ns0:formula xml:id='formula_1'>√ t 2 .</ns0:formula><ns0:p>Assumptions are set on Symbol objects when they are created. For instance Symbol('t', positive=True) will create a symbol named t that is assumed to be positive. Assumptions are only needed to restrict a domain so that certain simplifications can be performed. They are not required to make the domain match the input of a function. For instance, one can create the object m n=0 f (n) as Sum(f(n), (n, 0, m)) without setting integer=True when creating the Symbol object n.</ns0:p><ns0:p>The assumptions system additionally has deductive capabilities. The assumptions use a three-valued logic using the Python built in objects True, False, and None. Note that False is returned if the SymPy object doesn't or can't have the assumption. For example, both I.is_real and I.is_prime return False for the imaginary unit I.</ns0:p><ns0:p>None represents the 'unknown' case. This could mean that given assumptions do not unambiguously specify the truth of an attribute. For instance, Symbol('x', real=True).is_positive will give None because a real symbol might be positive or negative. None could also mean that not enough is known or implemented to compute the given fact. For instance, (pi + E).is_irrational</ns0:p><ns0:p>gives None-indeed, the rationality of π + e is an open problem in mathematics <ns0:ref type='bibr' target='#b32'>[31]</ns0:ref>.</ns0:p><ns0:p>Basic implications between the facts are used to deduce assumptions. Deductions are made using the Rete algorithm <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref>. 5 For instance, the assumptions system knows that being an integer implies being rational.</ns0:p><ns0:formula xml:id='formula_2'>>>> i = Symbol('i', integer=True) >>> i.is_rational</ns0:formula><ns0:p>True Furthermore, expressions compute the assumptions on themselves based on the assumptions of their arguments. For instance, if x and y are both created with positive=True, then (x + y).is_positive will be True (whereas (x -y).is_positive will be None).</ns0:p></ns0:div>
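<ns0:div><ns0:p>A short sketch of this propagation (the symbol names p and q are illustrative choices):</ns0:p><ns0:p>>>> p, q = symbols('p q', positive=True)
>>> (p + q).is_positive
True
>>> (p - q).is_positive is None
True</ns0:p></ns0:div>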
<ns0:div><ns0:head n='2.4'>Simplification</ns0:head><ns0:p>The generic way to simplify an expression is by calling the simplify function. It must be emphasized that simplification is not a rigorously defined mathematical operation <ns0:ref type='bibr' target='#b35'>[34]</ns0:ref>. The simplify function applies several simplification routines along with heuristics to make the output expression 'simple'. 6 It is often preferable to apply more directed simplification functions. These apply very specific rules to the input expression and are typically able to make guarantees about the output. For instance, the factor function, given a polynomial with rational coefficients in several variables, is guaranteed to produce a factorization into irreducible factors. Table <ns0:ref type='table'>2</ns0:ref> lists common simplification functions. hyperexpand expand hypergeometric functions <ns0:ref type='bibr' target='#b45'>[44,</ns0:ref><ns0:ref type='bibr' target='#b47'>45]</ns0:ref> Examples for these simplification functions can be found in the supplement. 5 For historical reasons, this algorithm is distinct from the sympy.logic submodule, which is discussed in the supplementary material. SymPy also has an experimental assumptions system which stores facts separate from objects, and uses sympy.logic and a SAT solver for deduction. We will not discuss this system here. 6 The measure parameter of the simplify function lets the user specify the Python function used to determine how complex an expression is. The default measure function returns the total number of operations in the expression.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 2. Some SymPy Simplification Functions</ns0:head></ns0:div>
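<ns0:div><ns0:p>For instance, a small illustration of two of these directed simplification functions (the input expressions are arbitrary examples):</ns0:p><ns0:p>>>> simplify(sin(x)**2 + cos(x)**2)
1
>>> factor(x**2 - 2*x - 15)
(x - 5)*(x + 3)</ns0:p></ns0:div>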
<ns0:div><ns0:head n='2.5'>Calculus</ns0:head><ns0:p>SymPy provides all the basic operations of calculus, such as calculating limits, derivatives, integrals, or summations. Limits are computed with the limit function, using the Gruntz algorithm <ns0:ref type='bibr' target='#b23'>[22]</ns0:ref> for computing symbolic limits and heuristics (a description of the Gruntz algorithm may be found in the supplement). For example, the following computes lim x→∞ x sin( <ns0:ref type='formula'>1</ns0:ref>x ) = 1. Note that SymPy denotes ∞ as oo.</ns0:p><ns0:p>>>> limit(x*sin(1/x), x, oo) 1 As a more complex example, SymPy computes Integrals are calculated with the integrate function. SymPy implements a combination of the Risch algorithm <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>, table lookups, a reimplementation of Manuel Bronstein's 'Poor Man's Integrator' <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref>, and an algorithm for computing integrals based on Meijer G-functions <ns0:ref type='bibr' target='#b45'>[44,</ns0:ref><ns0:ref type='bibr' target='#b47'>45]</ns0:ref>. These allow SymPy to compute a wide variety of indefinite and definite integrals. Summations are computed with the summation function, which uses a combination of Gosper's algorithm <ns0:ref type='bibr' target='#b22'>[21]</ns0:ref>, an algorithm that uses Meijer G-functions <ns0:ref type='bibr' target='#b45'>[44,</ns0:ref><ns0:ref type='bibr' target='#b47'>45]</ns0:ref>, and heuristics. Products are computed with product function via a suite of heuristics.</ns0:p><ns0:formula xml:id='formula_3'>lim x→0 2e 1−cos (x) sin (x) − 1 sinh (x) atan 2 (x) = e.</ns0:formula><ns0:formula xml:id='formula_4'>>>> i, n = symbols('i n') >>> summation(2**i, (i, 0, n -1)) 2**n -1 >>> summation(i*factorial(i), (i, 1, n)) n*factorial(n) + factorial(n) -1</ns0:formula><ns0:p>Series expansions are computed with the series function. This example computes the power series of sin(x) around x = 0 up to x 6 .</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>>>> series(sin(x), x, 0, 6)</ns0:p><ns0:p>x -x**3/6 + x**5/120 + O(x**6)</ns0:p><ns0:p>The supplementary material discusses series expansions methods in more depth.</ns0:p><ns0:p>Integrals, derivatives, summations, products, and limits that cannot be computed return unevaluated objects. These can also be created directly if the user chooses.</ns0:p><ns0:p>>>> integrate(x**x, x)</ns0:p><ns0:formula xml:id='formula_5'>Integral(x**x, x)</ns0:formula><ns0:p>>>> Sum(2**i, (i, 0, n -1)) Sum(2**i, (i, 0, n -1))</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.6'>Polynomials</ns0:head><ns0:p>SymPy implements a suite of algorithms for polynomial manipulation, which ranges from relatively simple algorithms for doing arithmetic of polynomials, to advanced methods for factoring multivariate polynomials into irreducibles, symbolically determining real and complex root isolation intervals, or computing Gröbner bases.</ns0:p><ns0:p>Polynomial manipulation is useful in its own right. Within SymPy, though, it is mostly used indirectly as a tool in other areas of the library. In fact, many mathematical problems in symbolic computing are first expressed using entities from the symbolic core, preprocessed, and then transformed into a problem in the polynomial algebra, where generic and efficient algorithms are used to solve the problem. The solutions to the original problem are subsequently recovered from the results. This is a common scheme in symbolic integration or summation algorithms.</ns0:p><ns0:p>SymPy implements dense and sparse polynomial representations. 7 Both are used in the univariate and multivariate cases. The dense representation is the default for univariate polynomials.</ns0:p><ns0:p>For multivariate polynomials, the choice of representation is based on the application. The most common case for the sparse representation is algorithms for computing Gröbner bases (Buchberger, F4, and F5) <ns0:ref type='bibr' target='#b7'>[8,</ns0:ref><ns0:ref type='bibr' target='#b13'>14,</ns0:ref><ns0:ref type='bibr' target='#b14'>15]</ns0:ref>. This is because different monomial orderings can be expressed easily in this representation. However, algorithms for computing multivariate GCDs or factorizations, at least those currently implemented in SymPy <ns0:ref type='bibr' target='#b39'>[38]</ns0:ref>, are better expressed when the representation is dense. The dense multivariate representation is specifically a recursively-dense representation, where polynomials in K[x 0 , x 1 , . . . , x n ] are viewed as a polynomials in</ns0:p><ns0:formula xml:id='formula_6'>K[x 0 ][x 1 ] . . . [x n ]. Note</ns0:formula><ns0:p>that despite this, the coefficient domain K, can be a multivariate polynomial domain as well.</ns0:p><ns0:p>The dense recursive representation in Python gets inefficient as the number of variables increases.</ns0:p><ns0:p>Some examples for the sympy.polys submodule can be found in the supplement.</ns0:p></ns0:div>
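<ns0:div><ns0:p>As a brief example of the polynomial tools described above (simple inputs chosen for clarity):</ns0:p><ns0:p>>>> factor(x**2*y + x*y**2)
x*y*(x + y)
>>> gcd(x**2 - 1, x**2 - 3*x + 2)
x - 1</ns0:p></ns0:div>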
<ns0:div><ns0:head n='2.7'>Printers</ns0:head><ns0:p>SymPy has a rich collection of expression printers. By default, an interactive Python session will render the str form of an expression, which has been used in all the examples in this paper so A two-dimensional (2D) textual representation of the expression can be printed with monospace fonts via pprint. Unicode characters are used for rendering mathematical symbols such as integral signs, square roots, and parentheses. Greek letters and subscripts in symbol names that have Unicode code points associated are also rendered automatically. 7 In a dense representation, the coefficients for all terms up to the degree of each variable are stored in memory. In a sparse representation, only the nonzero coefficients are stored. 8 Many Python libraries distinguish the str form of an object, which is meant to be human-readable, and the repr form, which is mean to be valid Python that recreates the object. In SymPy, str(expr) == repr(expr). In other words, the string representation of an expression is designed to be compact, human-readable, and valid Python code that could be used to recreate the expression. As noted in section 5.1, the srepr function prints the exact, verbose form of an expression. The function latex returns a L A T E X representation of an expression.</ns0:p><ns0:p>>>> print(latex(Integral(sqrt(phi0 + 1), phi0)))</ns0:p><ns0:formula xml:id='formula_7'>\int \sqrt{\phi_{0} + 1}\, d\phi_{0}</ns0:formula><ns0:p>Users are encouraged to run the init_printing function at the beginning of interactive sessions, which automatically enables the best pretty printing supported by their environment.</ns0:p><ns0:p>In the Jupyter Notebook or Qt Console <ns0:ref type='bibr' target='#b41'>[40]</ns0:ref>, the L A T E X printer is used to render expressions using MathJax or L A T E X, if it is installed on the system. The 2D text representation is used otherwise.</ns0:p><ns0:p>Other printers such as MathML are also available. SymPy uses an extensible printer subsystem, which allows extending any given printer, and also allows custom objects to define their printing behavior for any printer. The code generation functionality of SymPy relies on this subsystem to convert expressions into code in various target programming languages.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.8'>Solvers</ns0:head><ns0:p>SymPy has equation solvers that can handle ordinary differential equations, recurrence relationships, Diophantine equations 9 , and algebraic equations. There is also rudimentary support for simple partial differential equations.</ns0:p><ns0:p>There are two functions for solving algebraic equations in SymPy: solve and solveset. The domain parameter can be any set from the sympy.sets module (see the supplementary material for details on sympy.sets), but is typically either S.Complexes (the default) or S.Reals;</ns0:p><ns0:p>the latter causes solveset to only return real solutions.</ns0:p><ns0:p>An important difference between the two functions is that the output API of solve varies with input (sometimes returning a Python list and sometimes a Python dictionary) whereas solveset always returns a SymPy set object.</ns0:p><ns0:p>Both functions implicitly assume that expressions are equal to 0. For instance, solveset(x -1, x) solves x − 1 = 0 for x.</ns0:p><ns0:p>solveset is under active development as a planned replacement for solve. There are certain features which are implemented in solve that are not yet implemented in solveset, including multivariate systems, and some transcendental equations.</ns0:p><ns0:p>Some examples for solveset and solve can be found in the supplement. 9 See the supplementary material for an in depth discussion on the Diophantine submodule.</ns0:p></ns0:div>
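<ns0:div><ns0:p>A minimal sketch of the two interfaces on a simple algebraic equation (solving x**2 - 4 = 0 for x):</ns0:p><ns0:p>>>> solveset(x**2 - 4, x)
{-2, 2}
>>> solve(x**2 - 4, x)
[-2, 2]</ns0:p></ns0:div>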
<ns0:div><ns0:head n='2.9'>Matrices</ns0:head><ns0:p>Besides being an important feature in its own right, computations on matrices with symbolic entries are important for many algorithms within SymPy. The following code shows some basic usage of the Matrix class. Internally these matrices store the elements as Lists of Lists (LIL) <ns0:ref type='bibr' target='#b28'>[27]</ns0:ref>, meaning the matrix is stored as a list of lists of entries (effectively, the input format used to create the matrix A above), making it a dense representation. 10 For storing sparse matrices, the SparseMatrix class can be used. Sparse matrices store their elements in Dictionary of Keys (DOK) format, meaning entries are stored as a dict of (row, column) pairs mapping to the elements.</ns0:p><ns0:p>SymPy also supports matrices with symbolic dimension values. MatrixSymbol represents a matrix with dimensions m × n, where m and n can be symbolic. Matrix addition and multiplication, scalar operations, matrix inverse, and transpose are stored symbolically as matrix expressions.</ns0:p><ns0:p>Block matrices are also implemented in SymPy. BlockMatrix elements can be any matrix expression, including explicit matrices, matrix symbols, and other block matrices. All functionalities of matrix expressions are also present in BlockMatrix.</ns0:p><ns0:p>When symbolic matrices are combined with the assumptions submodule for logical inference, they provide powerful reasoning over invertibility, semi-definiteness, orthogonality, etc., which are valuable in the construction of numerical linear algebra systems <ns0:ref type='bibr' target='#b49'>[46]</ns0:ref>.</ns0:p><ns0:p>More examples for Matrix and BlockMatrix may be found in the supplement.</ns0:p></ns0:div>
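<ns0:div><ns0:p>A minimal sketch of the basic Matrix usage referred to above (a small explicit matrix is used for illustration):</ns0:p><ns0:p>>>> A = Matrix([[1, 2], [3, 4]])
>>> A.det()
-2
>>> A.rank()
2</ns0:p></ns0:div>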
<ns0:div><ns0:head n='3'>NUMERICS</ns0:head><ns0:p>While SymPy primarily focuses on symbolics, it is impossible to have a complete symbolic system without the ability to numerically evaluate expressions. Many operations directly use numerical evaluation, such as plotting a function, or solving an equation numerically. Beyond this, certain purely symbolic operations require numerical evaluation to effectively compute. For instance, determining the truth value of e + 1 > π is most conveniently done by numerically evaluating both sides of the inequality and checking which is larger.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Floating-Point Numbers</ns0:head><ns0:p>Floating-point numbers in SymPy are implemented by the Float class, which represents an arbitrary-precision binary floating-point number by storing its value and precision (in bits).</ns0:p><ns0:p>This representation is distinct from the Python built-in float type, which is a wrapper around machine double types and uses a fixed precision (53-bit).</ns0:p><ns0:p>Because Python float literals are limited in precision, strings should be used to input precise decimal values:</ns0:p><ns0:p>10 Similar to the polynomials submodule, dense here means that all entries are stored in memory, contrasted with a sparse representation where only nonzero entries are stored. The evalf method converts a constant symbolic expression to a Float with the specified precision, here 25 digits:</ns0:p><ns0:p>>>> (pi + 1).evalf(25)</ns0:p></ns0:div>
<ns0:div><ns0:head>4.141592653589793238462643</ns0:head><ns0:p>Float numbers do not track their accuracy, and should be used with caution within symbolic expressions since familiar dangers of floating-point arithmetic apply <ns0:ref type='bibr' target='#b21'>[20]</ns0:ref>. A notorious case is that of catastrophic cancellation:</ns0:p><ns0:p>>>> cos(exp(-100)).evalf( <ns0:ref type='formula'>25</ns0:ref>) -1 0</ns0:p><ns0:p>Applying the evalf method to the whole expression solves this problem. Internally, evalf estimates the number of accurate bits of the floating-point approximation for each sub-expression, and adaptively increases the working precision until the estimated accuracy of the final result matches the sought number of decimal digits:</ns0:p><ns0:p>>>> (cos(exp(-100)) -1).evalf(25) -6.919482633683687653243407e-88</ns0:p><ns0:p>The evalf method works with complex numbers and supports more complicated expressions, such as special functions, infinite series, and integrals. The internal error tracking does not provide rigorous error bounds (in the sense of interval arithmetic) and cannot be used to accurately track uncertainty in measurement data; the sole purpose is to mitigate loss of accuracy that typically occurs when converting symbolic expressions to numerical values.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>The mpmath Library</ns0:head><ns0:p>The implementation of arbitrary-precision floating-point arithmetic is supplied by the mpmath library <ns0:ref type='bibr' target='#b27'>[26]</ns0:ref>. Originally, it was developed as a SymPy submodule but has subsequently been moved to a standalone pure-Python package. The basic datatypes in mpmath are mpf and mpc, which respectively act as multiprecision substitutes for Python's float and complex. Like SymPy, mpmath is a pure Python library. A design decision of SymPy is to keep it and its required dependencies pure Python. This is a primary advantage of mpmath over other multiple precision libraries such as GNU MPFR <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref>, which is faster. Like SymPy, mpmath is also BSD licensed (GNU MPFR is licensed under the GNU Lesser General Public License <ns0:ref type='bibr' target='#b52'>[49]</ns0:ref>).</ns0:p><ns0:p>Internally, mpmath represents a floating-point number (−1) s x • 2 y by a tuple (s, x, y, b) where</ns0:p><ns0:p>x and y are arbitrary-size Python integers and the redundant integer b stores the bit length of x for quick access. If GMPY <ns0:ref type='bibr' target='#b24'>[23]</ns0:ref> is installed, mpmath automatically uses the gmpy.mpz type for x, and GMPY methods for rounding-related operations, improving performance.</ns0:p><ns0:p>Most mpmath and SymPy functions use the same naming scheme, although this is not true in every case. For example, the symbolic SymPy summation expression Sum(f(x), (x, a, b)) Manuscript to be reviewed The mpmath library supports special functions, root-finding, linear algebra, polynomial approximation, and numerical computation of limits, derivatives, integrals, infinite series, and solving ODEs. All features work in arbitrary precision and use algorithms that allow computing hundreds of digits rapidly (except in degenerate cases).</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>The double exponential (tanh-sinh) quadrature is used for numerical integration by default.</ns0:p><ns0:p>For smooth integrands, this algorithm usually converges extremely rapidly, even when the integration interval is infinite or singularities are present at the endpoints <ns0:ref type='bibr' target='#b57'>[54,</ns0:ref><ns0:ref type='bibr' target='#b1'>2]</ns0:ref>. However, for good performance, singularities in the middle of the interval must be specified by the user. To evaluate slowly converging limits and infinite series, mpmath automatically tries Richardson extrapolation and the Shanks transformation (Euler-Maclaurin summation can also be used) <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>.</ns0:p><ns0:p>A function to evaluate oscillatory integrals by means of convergence acceleration is also available.</ns0:p><ns0:p>A wide array of higher mathematical functions is implemented with full support for complex values of all parameters and arguments, including complete and incomplete gamma functions, Bessel functions, orthogonal polynomials, elliptic functions and integrals, zeta and polylogarithm functions, the generalized hypergeometric function, and the Meijer G-function. The Meijer Equivalently, with SymPy's interface this function can be evaluated as:</ns0:p><ns0:formula xml:id='formula_8'>G-function instance G 3,0 1,3 0; 1 2 , −1, −</ns0:formula><ns0:formula xml:id='formula_9'>>>> meijerg([[],[0]], [[-S(1)/2,-1,-S(3)/2],[]], 10000).evalf()</ns0:formula></ns0:div>
<ns0:div><ns0:head>2.43925769071996e-94</ns0:head><ns0:p>Symbolic integration and summation often produce hypergeometric and Meijer G-function closed forms (see section 2.5); numerical evaluation of such special functions is a useful complement to direct numerical integration and summation.</ns0:p></ns0:div>
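<ns0:div><ns0:p>As a brief, hedged illustration of the arbitrary-precision quadrature described above (the integrand and the working precision are arbitrary choices; the exact value of this integral is sqrt(pi), approximately 1.7724538509055160273):</ns0:p><ns0:p>>>> import mpmath
>>> mpmath.mp.dps = 30  # work with 30 significant digits
>>> mpmath.quad(lambda t: mpmath.exp(-t**2), [-mpmath.inf, mpmath.inf])  # agrees with sqrt(pi) to the working precision</ns0:p></ns0:div>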
<ns0:div><ns0:head n='4'>PHYSICS SUBMODULE</ns0:head><ns0:p>SymPy includes several submodules that allow users to solve domain specific physics problems.</ns0:p><ns0:p>For example, a comprehensive physics submodule is included that is useful for solving problems in mechanics, optics, and quantum mechanics along with support for manipulating physical quantities with units.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Classical Mechanics</ns0:head><ns0:p>One of the core domains that SymPy suports is the physics of classical mechanics. This is in turn separated into two distinct components: vector algebra and mechanics.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.1'>Vector Algebra</ns0:head><ns0:p>The sympy.physics.vector submodule provides reference frame-, time-, and space-aware vector and dyadic objects that allow for three-dimensional operations such as addition, subtraction, scalar multiplication, inner and outer products, and cross products. The vector and dyadic objects both can be written in very compact notation that make it easy to express the vectors and dyadics in terms of multiple reference frames with arbitrarily defined relative orientations.</ns0:p><ns0:p>The vectors are used to specify the positions, velocities, and accelerations of points; orientations, angular velocities, and angular accelerations of reference frames; and forces and torques. The dyadics are essentially reference frame-aware 3 × 3 tensors <ns0:ref type='bibr' target='#b56'>[53]</ns0:ref>. The vector and dyadic objects can be used for any one-, two-, or three-dimensional vector algebra, and they provide a strong framework for building physics and engineering tools. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The following Python code demonstrates how a vector is created using the orthogonal unit vectors of three reference frames that are oriented with respect to each other, and the result of expressing the vector in the A frame. The B frame is oriented with respect to the A frame using Z-X-Z Euler Angles of magnitude π, π 2 , and π 3 , respectively, whereas the C frame is oriented with respect to the B frame through a simple rotation about the B frame's X unit vector through π 2 .</ns0:p><ns0:p>>>> from sympy.physics.vector import ReferenceFrame </ns0:p></ns0:div>
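<ns0:div><ns0:p>A brief, hedged sketch of elementary operations with these vector objects (the frame name N and the particular vectors are illustrative choices, not the Euler-angle example described above):</ns0:p><ns0:p>>>> from sympy.physics.vector import ReferenceFrame
>>> N = ReferenceFrame('N')
>>> v = 3*N.x + 4*N.y
>>> v.dot(v)
25
>>> N.x.cross(N.y)
N.z</ns0:p></ns0:div>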
<ns0:div><ns0:head n='4.1.2'>Mechanics</ns0:head><ns0:p>The sympy.physics.mechanics submodule utilizes the sympy.physics.vector submodule to populate time-aware particle and rigid-body objects to fully describe the kinematics and kinetics of a rigid multi-body system. These objects store all of the information needed to derive the ordinary differential or differential algebraic equations that govern the motion of the system, i.e., the equations of motion. These equations of motion abide by Newton's laws of motion and can handle arbitrary kinematic constraints or complex loads. The submodule offers two automated methods for formulating the equations of motion based on Lagrangian Dynamics <ns0:ref type='bibr' target='#b31'>[30]</ns0:ref> and Kane's Method <ns0:ref type='bibr' target='#b29'>[28]</ns0:ref>. Lastly, there are automated linearization routines for constrained dynamical systems <ns0:ref type='bibr' target='#b42'>[41]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Quantum Mechanics</ns0:head><ns0:p>The sympy.physics.quantum submodule has extensive capabilities to solve problems in quantum mechanics, using Python objects to represent the different mathematical objects relevant in quantum theory <ns0:ref type='bibr' target='#b53'>[50]</ns0:ref>: states (bras and kets), operators (unitary, Hermitian, etc.), and basis sets, as well as operations on these objects such as representations, tensor products, inner products, outer products, commutators, and anticommutators. The base objects are designed in the most general way possible to enable any particular quantum system to be implemented by subclassing the base operators and defining the relevant class methods to provide system-specific logic.</ns0:p><ns0:p>Symbolic quantum operators and states may be defined, and one can perform a full range of operations with them. Commutators can be expanded using common commutator identities: On top of this set of base objects, a number of specific quantum systems have been implemented in a fully symbolic framework. These include:</ns0:p><ns0:p>• Many of the exactly solvable quantum systems, including simple harmonic oscillator states and raising/lowering operators, infinite square well states, and 3D position and momentum operators and states. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>• Second quantized formalism of non-relativistic many-body quantum mechanics <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref>.</ns0:p><ns0:p>• Quantum angular momentum <ns0:ref type='bibr' target='#b68'>[65]</ns0:ref>. Spin operators and their eigenstates can be represented in any basis and for any quantum numbers. A rotation operator representing the Wigner D-matrix, which may be defined symbolically or numerically, is also implemented to rotate spin eigenstates. Functionality for coupling and uncoupling of arbitrary spin eigenstates is provided, including symbolic representations of Clebsch-Gordon coefficients and Wigner symbols.</ns0:p><ns0:p>• Quantum information and computing <ns0:ref type='bibr' target='#b36'>[35]</ns0:ref>. Multidimensional qubit states, and a full set of one-and two-qubit gates are provided and can be represented symbolically or as matrices/vectors. With these building blocks, it is possible to implement a number of basic quantum algorithms including the quantum Fourier transform, quantum error correction, quantum teleportation, Grover's algorithm, dense coding, etc. In addition, any quantum circuit may be plotted using the circuit_plot function (Figure <ns0:ref type='figure' target='#fig_16'>1</ns0:ref>).</ns0:p><ns0:p>Here Qubit states can also be used in adjoint operations, tensor products, inner/outer products:</ns0:p><ns0:p>>>> Dagger(q) <0101| >>> ip = Dagger(q)*q >>> ip <0101|0101> >>> ip.doit()</ns0:p><ns0:formula xml:id='formula_10'>1</ns0:formula><ns0:p>Quantum gates (unitary operators) can be applied to transform these states and then classical measurements can be performed on the results: </ns0:p></ns0:div>
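<ns0:p>For instance (an expository sketch rather than the paper's own listing), a Hadamard gate can be applied to a single qubit and the resulting superposition measured:
>>> from sympy.physics.quantum import qapply
>>> from sympy.physics.quantum.qubit import Qubit, measure_all
>>> from sympy.physics.quantum.gate import H
>>> state = qapply(H(0)*Qubit('0'))   # (|0> + |1>)/sqrt(2)
>>> measure_all(state)                # [(|0>, 1/2), (|1>, 1/2)]
measure_all returns each possible post-measurement state together with its probability.</ns0:p>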
<ns0:div><ns0:head n='5'>ARCHITECTURE</ns0:head><ns0:p>Software architecture is of central importance in any large software project because it establishes predictable patterns of usage and development <ns0:ref type='bibr' target='#b54'>[51]</ns0:ref>. This section describes the essential structural components of SymPy, provides justifications for the design decisions that have been made, and gives example user-facing code as appropriate.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>The Core</ns0:head><ns0:p>A computer algebra system stores mathematical expressions as data structures. For example, the mathematical expression x + y is represented as a tree with three nodes, +, x, and y, where x and y are ordered children of +. As users manipulate mathematical expressions with traditional mathematical syntax, the CAS manipulates the underlying data structures. Symbolic computations such as integration, simplification, etc. are all functions that consume and produce expression trees.</ns0:p><ns0:p>In SymPy every symbolic expression is an instance of the class Basic, 11 the superclass of all SymPy types providing common methods to all SymPy tree-elements, such as traversals. The children of a node in the tree are held in the args attribute. A leaf node in the expression tree has empty args.</ns0:p><ns0:p>For example, consider the expression xy + 2:</ns0:p><ns0:p>>>> x, y = symbols('x y') >>> expr = x*y + 2</ns0:p><ns0:p>By order of operations, the parent of the expression tree for expr is an addition. It is of type Add, and its child nodes can be inspected through the args attribute, as shown in the accompanying listings. Every SymPy expression satisfies the identity invariant expr.func(*expr.args) == expr. This means that expressions are rebuildable from their args. 13 Note that in SymPy the == operator represents exact structural equality, not mathematical equality. This allows testing if any two expressions are equal to one another as expression trees. For example, even though (x + 1)² and x² + 2x + 1 are equal mathematically, SymPy gives False when they are compared with ==, because the former is a Pow object and the latter is an Add object. Another important property of SymPy expressions is that they are immutable. This simplifies the design of SymPy, and enables expression interning. It also enables expressions to be hashed, which allows expressions to be used as keys in Python dictionaries, and is used to implement caching in SymPy.</ns0:p><ns0:p>Python allows classes to override mathematical operators. The Python interpreter translates the above x*y + 2 to, roughly, (x.__mul__(y)).__add__(2). Both x and y, returned from the symbols function, are Symbol instances. The 2 in the expression is processed by Python as a literal, and is stored as Python's built-in int type. When 2 is passed to the __add__ method of Symbol, it is converted to the SymPy type Integer(2) before being stored in the resulting expression tree. In this way, SymPy expressions can be built in the natural way using Python operators and numeric literals.</ns0:p></ns0:div>
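<ns0:p>A short sketch pulling these points together (expository; it reuses x, y, and expr from above):
>>> expr = x*y + 2
>>> expr.args
(2, x*y)
>>> expr.func(*expr.args) == expr   # expressions rebuild exactly from their args
True
>>> {expr: 'cached value'}          # immutability makes expressions usable as dict keys
{x*y + 2: 'cached value'}
</ns0:p>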
<ns0:div><ns0:head n='5.2'>Extensibility</ns0:head><ns0:p>While the core of SymPy is relatively small, it has been extended to a wide variety of domains by a broad range of contributors. This is due, in part, to the fact that the same language, Python, is used both for the internal implementation and the external usage by users. All of the extensibility capabilities available to users are also utilized by SymPy itself. This eases the transition pathway from SymPy user to SymPy developer.</ns0:p><ns0:p>The typical way to create a custom SymPy object is to subclass an existing SymPy class, usually Basic, Expr, or Function. As it was stated before, all SymPy classes used for expression trees should be subclasses of the base class Basic. Expr is the Basic subclass for mathematical objects that can be added and multiplied together. The most commonly seen classes in SymPy 12 The dotprint function from the sympy.printing.dot submodule prints output to dot format, which can be rendered with Graphviz to visualize expression trees graphically. 13 expr.func is used instead of type(expr) to allow the function of an expression to be distinct from its actual Python class. In most cases the two are the same.</ns0:p></ns0:div>
<ns0:div><ns0:p>are subclasses of Expr, including Add, Mul, and Symbol. Instances of Expr typically represent complex numbers, but may also include other 'rings', like matrix expressions. Not all SymPy classes are subclasses of Expr. For instance, logic expressions, such as And(x, y), are subclasses of Basic but not of Expr. 14 The Function class is a subclass of Expr which makes it easier to define mathematical functions called with arguments. This includes named functions like sin(x) and log(x) as well as undefined functions like f (x). Subclasses of Function should define a class method eval, which returns an evaluated value for the function application (usually an instance of some other class, e.g., a Number), or None if for the given arguments it should not be automatically evaluated.</ns0:p><ns0:p>Many SymPy functions perform various evaluations down the expression tree. Classes define their behavior in such functions by defining a relevant _eval_* method. For instance, an object can indicate to the diff function how to take the derivative of itself by defining the _eval_derivative(self, x) method, which may in turn call diff on its args. (Subclasses of Function should implement the fdiff method instead; it returns the derivative of the function without considering the chain rule.) The most common _eval_* methods relate to the assumptions: _eval_is_assumption is used to deduce assumption on the object. Listing 1 presents an example of this extensibility. It gives a stripped down version of the gamma function Γ(x) from SymPy. The methods defined allow it to evaluate itself on positive integer arguments, define the real assumption, allow it to be rewritten in terms of factorial (with gamma(x).rewrite(factorial)), and allow it to be differentiated. self.func is used throughout instead of referencing gamma explicitly so that potential subclasses of gamma can reuse the methods. The gamma function implemented in SymPy has many more capabilities than the above listing, such as evaluation at rational points and series expansion.</ns0:p></ns0:div>
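<ns0:p>As a further sketch of this extension mechanism (a hypothetical class written for exposition, not part of SymPy), a Function subclass only needs an eval class method to gain automatic evaluation:
>>> from sympy import Function, Integer, Symbol
>>> class step_like(Function):          # hypothetical unit-step-style function
...     @classmethod
...     def eval(cls, arg):
...         if arg.is_Number:
...             return Integer(0) if arg < 0 else Integer(1)
>>> step_like(3)
1
>>> x = Symbol('x')
>>> step_like(x)                        # remains unevaluated for a plain Symbol
step_like(x)
</ns0:p>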
<ns0:div><ns0:head n='5.3'>Performance</ns0:head><ns0:p>Due to being written in pure Python without the use of extension modules, SymPy's performance characteristics are generally poorer than its commercial competitors. For many applications, the performance of SymPy, as measured by clock cycles, memory usage, and memory layout, is sufficient. However, the boundaries for when SymPy's pure Python strategy becomes insufficient</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Conclusions and future directions for SymPy are given in section 7. All examples in this paper use SymPy version 1.0 and mpmath version 0.19.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>>>> (x**2 -2*x + 3)/y (x**2 -2*x + 3)/y</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>expand: expand the expression; factor: factor a polynomial into irreducibles; collect: collect polynomial coefficients; cancel: rewrite a rational function as p/q with common factors canceled; apart: compute the partial fraction decomposition of a rational function; trigsimp: simplify trigonometric expressions <ns0:ref type='bibr' target='#b17'>[18]</ns0:ref></ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>>>> limit((2*exp((1-cos(x))/sin(x))-1)**(sinh(x)/atan(x)**2), x, 0) E Derivatives are computed with the diff function, which recursively uses the various differentiation rules. >>> diff(sin(x)*exp(x), x) exp(x)*sin(x) + exp(x)*cos(x)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>far. The str form of an expression is valid Python and roughly matches what a user would type to enter the expression. 8 >>> phi0 = Symbol('phi0') >>> str(Integral(sqrt(phi0), phi0)) 'Integral(sqrt(phi0), phi0)'</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>8 / 21 PeerJ</ns0:head><ns0:label>821</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2016:06:11410:1:1:NEW 10 Oct 2016) Manuscript to be reviewed Computer Science >>> pprint(Integral(sqrt(phi0 + 1), phi0)) ⌠ ⎮ ________ ⎮ ╲╱ φ₀ + 1 d(φ₀) ⌡ 1Alternately, the use_unicode=False flag can be set, which causes the expression to be printed using only ASCII characters.>>> pprint(Integral(sqrt(phi0 + 1), phi0), use_unicode=False)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>solveset has several design changes with respect to the older solve function. This distinction is present in order to resolve the usability issues with the previous solve function API while maintaining backward compatibility with earlier versions of SymPy. solveset only requires essential input information from the user. The function signatures of solve and solveset are solve(f, *symbols, **flags) solveset(f, symbol, domain=S.Complexes)</ns0:figDesc></ns0:figure>
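<ns0:p>For illustration (a minimal expository sketch, not one of the paper's own listings), the solveset interface is called as:
>>> from sympy import solveset, symbols, S
>>> x = symbols('x')
>>> solveset(x**2 - 4, x)
{-2, 2}
>>> solveset(x - 3, x, domain=S.Reals)
{3}
</ns0:p>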
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>>>> A = Matrix([[x, x + y], [y, x]])SymPy matrices support common symbolic linear algebra manipulations, including matrix addition, multiplication, exponentiation, computing determinants, solving linear systems, singular values, and computing inverses using LU decomposition, LDL decomposition, Gauss-Jordan elimination, Cholesky decomposition, Moore-Penrose pseudoinverse, or adjugate matrices.All operations are performed symbolically. For instance, eigenvalues are computed by generating the characteristic polynomial using the Berkowitz algorithm and then solving it using polynomial routines.>>> A.eigenvals() {x -sqrt(y*(x + y)): 1, x + sqrt(y*(x + y)): 1}</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>representing Σ_{x=a}^{b} f(x) is represented in mpmath as nsum(f, (a, b)), where f is a numeric Python function.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>>>> A, B, C = symbols('A B C', cls=ReferenceFrame) >>> B.orient(A, 'body', (pi, pi/3, pi/4), 'zxz') >>> C.orient(B, 'axis', (pi/2, B.x)) >>> v = 1*A.x + 2*B.z + 3*C.y >>> v A.x + 2*B.z + 3*C.y >>> v.express(A) A.x + 5*sqrt(3)/2*A.y + 5/2*A.z</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>>>> from sympy.physics.quantum import Commutator, Dagger, Operator >>> from sympy.physics.quantum import Ket, qapply >>> A, B, C, D = symbols('A B C D', cls=Operator) >>> a = Ket('a') >>> comm = Commutator(A, B) >>> comm [A,B] >>> qapply(Dagger(comm*a)).doit() -<a|*(Dagger(A)*Dagger(B) -Dagger(B)*Dagger(A))</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>>>></ns0:head><ns0:label /><ns0:figDesc>Commutator(C+B, A*D).expand(commutator=True) -[A,B]*D -[A,C]*D + A*[B,D] + A*[C,D]</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The circuit diagram for a three-qubit quantum Fourier transform generated by SymPy.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head /><ns0:label /><ns0:figDesc>>>> expr.args[0].args () Symbols or symbolic constants, like e or π, are other examples of leaf nodes. Another way to view an expression tree is using the srepr function, which returns a string representation of an expression as valid Python code 12 with all the nested class constructor calls to create the given expression. >>> srepr(expr) 'Add(Mul(Symbol('x'), Symbol('y')), Integer(2))' Every SymPy expression satisfies a key identity invariant: expr.func(*expr.args) == expr</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>1 False</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>>>> (x + 1)**2 == x**2 + 2*x + because they are different as expression trees (the former is a Pow object and the latter is an Add object).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head>Listing 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>A minimal implementation of sympy.gamma. from sympy import Function, Integer, factorial, polygamma class gamma(Function): @classmethod def eval(cls, arg): if isinstance(arg, Integer) and arg.is_positive: return factorial(arg -1) def _eval_is_real(self): x = self.args[0] # noninteger means real and not integer if x.is_positive or x.is_noninteger: return True def _eval_rewrite_as_factorial(self, z): return factorial(z -1) def fdiff(self, argindex=1): from sympy.core.function import ArgumentIndexError if argindex == 1: return self.func(self.args[0])*polygamma(0, self.args[0]) else: raise ArgumentIndexError(self, argindex)</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Generation of compilable and executable code in a variety of different programming languages from expressions directly. Target languages include C, Fortran, Julia, JavaScript, Mathematica, MATLAB and Octave, Python, and Theano.</ns0:figDesc><ns0:table /><ns0:note>there is one, shown on the next line.3/21PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11410:1:1:NEW 10 Oct 2016)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>Some of the common assumptions are negative, real, nonpositive, integer, prime and commutative. 4 Assumptions on any SymPy object can be checked with the is_assumption attributes, like t.is_positive.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>>>> t = Symbol('t', positive=True)</ns0:cell></ns0:row><ns0:row><ns0:cell>>>> sqrt(t**2)</ns0:cell></ns0:row><ns0:row><ns0:cell>t</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>The Meijer G-function algorithm and the Risch algorithm are respectively demonstrated below by the computation of</ns0:figDesc><ns0:table><ns0:row><ns0:cell>>>> s, t = symbols('s t', positive=True)</ns0:cell></ns0:row><ns0:row><ns0:cell>>>> integrate(exp(-s*t)*log(t), (t, 0, oo)).simplify()</ns0:cell></ns0:row><ns0:row><ns0:cell>-(log(s) + EulerGamma)/s</ns0:cell></ns0:row><ns0:row><ns0:cell>>>> integrate((-2*x**2*(log(x) + 1)*exp(x**2) +</ns0:cell></ns0:row><ns0:row><ns0:cell>... (exp(x**2) + 1)**2)/(x*(exp(x**2) + 1)**2*(log(x) + 1)), x)</ns0:cell></ns0:row><ns0:row><ns0:cell>log(log(x) + 1) + 1/(exp(x**2) + 1)</ns0:cell></ns0:row></ns0:table><ns0:note>∞ 0 e −st log (t) dt = − log (s) + γ s and −2x 2 (log (x) + 1) e x 2 + e x 2 + 1 2 x e x 2 + 1 2 (log (x) + 1) dx = log (log (x) + 1) + 1 e x 2 + 1 .</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head /><ns0:label /><ns0:figDesc>are a few short examples of the quantum information and computing capabilities in sympy.physics.quantum. Start with a simple four-qubit state and flip the second qubit from the right using a Pauli-X gate:</ns0:figDesc><ns0:table><ns0:row><ns0:cell>>>> from sympy.physics.quantum.qubit import Qubit</ns0:cell></ns0:row><ns0:row><ns0:cell>>>> from sympy.physics.quantum.gate import XGate</ns0:cell></ns0:row><ns0:row><ns0:cell>>>> q = Qubit('0101')</ns0:cell></ns0:row><ns0:row><ns0:cell>>>> q</ns0:cell></ns0:row><ns0:row><ns0:cell>|0101></ns0:cell></ns0:row><ns0:row><ns0:cell>>>> X = XGate(1)</ns0:cell></ns0:row><ns0:row><ns0:cell>>>> qapply(X*q)</ns0:cell></ns0:row><ns0:row><ns0:cell>|0111></ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head /><ns0:label /><ns0:figDesc>Add. The child nodes of expr are 2 and x*y.Descending further down into the expression tree yields the full expression. For example, the next child node (given by expr.args[0]) is 2. Its class is Integer, and it has an empty args tuple, indicating that it is a leaf node.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>>>> type(expr)</ns0:cell></ns0:row><ns0:row><ns0:cell><class 'sympy.core.add.Add'></ns0:cell></ns0:row><ns0:row><ns0:cell>>>> expr.args</ns0:cell></ns0:row><ns0:row><ns0:cell>(2, x*y)</ns0:cell></ns0:row><ns0:row><ns0:cell>>>> expr.args[0]</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>>>> type(expr.args[0])</ns0:cell></ns0:row><ns0:row><ns0:cell><class 'sympy.core.numbers.Integer'></ns0:cell></ns0:row></ns0:table><ns0:note>11 Some internal classes, such as those used in the polynomial submodule, do not follow this rule for efficiency reasons.</ns0:note></ns0:figure>
<ns0:note place='foot' n='1'>This paper assumes a moderate familiarity with the Python programming language. 2 import * has been used here to aid the readability of the paper, but is best to avoid such wildcard import statements in production code, as they make it unclear which names are present in the namespace. Furthermore, imported names could clash with already existing imports from another package. For example, SymPy, the standard Python math library, and NumPy all define the exp function, but only the SymPy one will work with SymPy symbolic expressions. 3 The three greater-than signs denote the user input for the Python interactive session, with the result, if there is one, shown on the next line.</ns0:note>
<ns0:note place='foot' n='4'>SymPy assumes that two expressions A and B commute with each other multiplicatively, that is, A • B = B • A, unless they both have commutative=False. Many algorithms in SymPy require special consideration to work correctly with noncommutative products.</ns0:note>
<ns0:note place='foot' n='14'>See the supplement for more information on the sympy.logic submodule.</ns0:note>
</ns0:body>
" | "Aaron Meurer
ERGS: http://www.ergs.sc.edu/
Nuclear Eng., Mechanical Eng. Dept.
University of South Carolina
541 Main Street, St. 009
Columbia, SC 29201
[email protected]
September 30, 2016
Dear Editors,
We thank you and the referees for your thoughtful comments and suggestions on our manuscript,
“SymPy: Symbolic Computing in Python”. We have endeavored to address your comments and
concerns point by point below. We give your and the reviewers’ comments on white background
and our replies on gray background.
Reviewer 1 had concerns that the paper was out of scope, but the editor has confirmed that it is
in scope. Many of the points that reviewer 3 raised were in regards to the relevance of various
sections. The intended scope of this paper was to be an architecture paper for SymPy’s 1.0 release.
We have examined reviewer 3’s points through this lens. To address this, we have restructured the
manuscript to make the paper more readable to those not already experts in SymPy or symbolic
mathematics.
We thank you again for your consideration. Given these edits, the manuscript has been greatly
improved and we believe that it is now suitable for publication in PeerJ Computer Science. If there
are further questions or concerns please do not hesitate to contact us.
Best regards,
Aaron Meurer
1
Comments from Editor
1. I found the http://live.sympy.org/ site very convenient for trying things out in
SymPy, but it is only mentioned in the supplement, which might be overlooked by
readers. I suggest moving that section into the main paper, as it provides a great way
to play along with the examples while reading the paper.
Section (previously) 9 from the supplement was edited and moved to the introduction:
All the examples in this paper can be tested on SymPy Live, an online
Python shell that uses the Google App Engine to execute SymPy code.
SymPy Live is also integrated in the SymPy documentation at http://docs.sympy.org.
2. The Basic Usage section omits what I think is an important point: how does one distinguish between evaluating exp(1) in Python and exp(1) in SymPy? In other words, how
are symbolic constants specified?
Both the Python standard library and SymPy have an exp callable, so it depends
which symbols are imported in your global namespace: the exp function from the math
module or the exp class from sympy. In the later case a special symbolic constant will
be produced.
We have added a sentence to the section 5.1, that shows symbolic constants as an
example of leaf nodes:
Symbols or symbolic constants, like e or π, are other examples of leaf nodes.
>>> exp(1)
E
>>> exp(1).args
()
>>> x.args
()
3. The first thing I tried after seeing Table 2 (simplication functions) did not work:
>>> trigsimp (exp( Matrix(2, 2, [0, -y, y, 0])) )
fails to recognize cos and sin. Am I expecting too much?
This indeed does not work. The implementation of trigsimp has thus far focused primarily
on transforming trigonometric functions into other trigonometric functions.
The ability to simplify complex exponentials is something that we would like to work on.
Issue 11459 in our public issue tracker tracks this feature.
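For reference, an illustrative sketch of the kind of rewriting trigsimp does currently target
(assuming x, y = symbols('x y')):
>>> trigsimp(sin(x)**2 + cos(x)**2)
1
>>> trigsimp(sin(x)*cos(y) + cos(x)*sin(y))
sin(x + y)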
Page 2
4. Section 3.6: are eigenvalues and singular values included? If not, why not?
Yes, they are included. The second example in that section addresses the computation
of eigenvalues. Singular values can be computed with A.singular_values(). We have
not included this example, as the singular value expressions for our example matrix A
are quite large, but we have added “singular values” to the list of features in the second
paragraph of the section.
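For completeness, an illustrative call (not added to the manuscript) looks like:
>>> Matrix([[0, 1], [1, 0]]).singular_values()
[1, 1]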
5. Like one of the referees, I expected to see MPFR mentioned in Section 4.1, at least to
mention the pros and cons and say why it isn’t used.
The major advantage is that mpmath is a pure Python library. It is slower than MPFR,
but not so slow that it cannot evaluate any reasonably complicated single expression at
interactive speed, which covers most use cases in SymPy. We have added a sentence about this to
the numerics section.
We’ve also mentioned that another advantage is that mpmath is BSD licensed, like
SymPy, which is important for the scientific Python ecosystem (MPFR is licensed
under the LGPL).
6. Section 4.1: please state whether the syntax for functions in mpmath is identical, or not,
to that in Sympy.
The mpmath library is not a SymPy submodule, but a separate library, which we have
indicated in the text. Most function names in mpmath are the same as SymPy, but it
is not true in every case. We have added an example to the text showing an instance
where they diverge (summations).
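To illustrate the kind of divergence we mean (a sketch, not the exact text added to the paper):
>>> from sympy import Sum, symbols, oo
>>> k = symbols('k')
>>> Sum(1/k**2, (k, 1, oo)).doit()
pi**2/6
>>> from mpmath import nsum, inf
>>> nsum(lambda k: 1/k**2, [1, inf])    # numeric result, approximately 1.6449340668482264
The symbolic Sum uses a (symbol, lower, upper) tuple, while mpmath's nsum takes a Python
callable and an interval.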
7. Unless I have missed something, a weakness of the paper is that speed is mentioned
only twice, on lines 496 and 666. My assumption is that SymPy is slow compared with
its commercial competitors. Please comment on the speed issue, and not just in the
conclusions.
Section 5.3, titled “Performance” was added to the manuscript, which discusses the
speed issue and lays out a path forward. The conclusion was simplified.
8. Editors are needed for [20].
We have done this.
Page 3
2
2.1
Reviewer 1
Basic reporting
1. Generally good. A part that confused me is the assertion (footnote 3) that “If A and B
are Symbols created with commutative=False then SymPy will keep A · B and B · A
distinct.” Does that mean that BOTH of them must be created this way, and that A and
x (if x is created normally) will commute? Is there any way to declare commutators?
How does one guarantee that other pieces of code, e.g. Gaussian elimination, respect
non-commutativity?
We have clarified the footnote. Both expressions must be set as commutative=False.
Internally, in Mul, the “commutative part” of an expression is pulled out to the front
and canonically ordered, and the “noncommutative part” is not reordered.
Commutators are implemented in the sympy.physics.quantum submodule and are
discussed in section 4.2.
Different algorithms do generally require special consideration for noncommutative expressions to be correct. If an algorithm implicitly assumes that expressions commute,
it may return incorrect results when given a noncommutative expression.
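A small sketch of the behavior described above:
>>> A, B = symbols('A B', commutative=False)
>>> x = symbols('x')
>>> A*B == B*A
False
>>> A*x*B          # the commutative factor x is collected at the front
x*A*B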
2. References reasonable, though [8], an excellent reference, makes almost precisely the
opposite point to that for which it is cited—“simplification is not well-defined”. I think
the authors are trying to follow [8]’s definition of simplification.
We agree that this is a bad reference, because we are not actually using their definition.
Yes, that article rigorously defines “simplification”, but it’s just one approach. This
citation was replaced with a better reference [34], that illustrates our point.
2.2
Experimental design
1. The referee is asked to comment whether the research is within the scope of the journal, defined as “PeerJ is an Open Access, peer-reviewed, scholarly journal. It considers
articles in the Biological Sciences, Medical Sciences, and Health Sciences. PeerJ does
not publish in the Physical Sciences, the Mathematical Sciences, the Social Sciences, or
the Humanities (except where articles in those areas have clear applicability to the core
areas of Biological, Medical or Health sciences).”
Hence I fear a software description on the maths/cmputing boundary with no cited
applications is out of scope.
The editor has confirmed that the paper is in scope for PeerJ Computer Science.
Page 4
2.3
Validity of the findings
1. There are no findings as such, since this is a software description, not an experimental
paper.
2.4
Comments for the author
1. Nice paper — pity it seems totally out of scope. Why not J. Symbolic Computation or
some such?
See above.
3
3.1
Reviewer 2
Basic reporting
No Comments.
3.2
Experimental design
No Comments
3.3
Validity of the findings
1. There is a minor problem with the supplementary notebook
The final cell is
>>> circuit_plot(fourier, nqubits=3);
plt.savefig('./images/fig1-circuitplot-qft.pdf', format='pdf')
This fails if the user does not have an images folder.
We have added
%mkdir -p './images'
to the cell before the call to savefig.
2. I suggest removing the >>> before each line. It looks strange in a notebook.
Page 5
We have done this.
3.4
Comments for the author
Sympy is a superb package and I am happy to see that there is now a paper that describes
its current state. Here are a few comments that you may wish to consider before publication.
1. Line 105 It is generally frowned upon to import all symbols from a Python module in this
manner. I understand why you are doing it in this paper, it makes subsequent sympy
commands less verbose. It might, however, encourage bad practice. It may also lead to
a poor user experience for newbies.
For example, say I had the following code
from numpy import sin, array
test=array(123)
sin(test)
I decide that I want to use sympy for something and, following your example, do
from numpy import sin, array
from sympy import *
x,y,z=symbols('x,y,z')
test=array(123)
sin(test)
The code will break because sympy has its own sin that gets imported that doesn't
work on numpy arrays. As a newbie, I might not know this. I tested using Sympy 1.0
and Python 3.
Later in the text, a similar assertion is made by you (line 485) in reference to a different
package that may break sympy.
We have used import * on purpose, as we felt that explicitly importing names would
unnecessarily distract from the content of the paper, but the reviewer is absolutely
right that this is bad practice for actual code. We have added a footnote:
import * has been used here to aid the readability of the paper, but is best
to avoid such wildcard import statements in production code, as they make
it unclear which names are present in the namespace. Furthermore, imported names could clash with already existing imports from another package. For example, SymPy, the standard Python math library, and NumPy
all define the exp function, but only the SymPy one will work with SymPy
symbolic expressions.
We have also removed the reference to from mpmath import * from line 485.
Page 6
2. Lines 151–153 Minor comment: srepr is a useful command. As someone who also uses
Mathematica, I wonder if there is a sympy version of the TreeForm command which
produces a visualisation of the expression tree, or maybe an output format that I could
pass to a graph library for visualisation?
We have added a footnote about the dotprint function (footnote 12), which outputs
expressions in the dot format that can be rendered with Graphviz.
3. Section 2.3: Assumptions Is there a way of the user listing all available assumptions?
To access this in code, the only way is
from sympy.core.assumptions import _assume_defined
but as this is a private variable (it begins with an underscore), it is not considered part
of the public API, so we do not wish to recommend it in the paper. The recommended
list is in the SymPy documentation, at http://docs.sympy.org/latest/modules/
core.html#module-sympy.core.assumptions. We have opened issue 11539 to track
this in our issue tracker.
4. Section 2.4 Line 216: This is due in part because the same language, Python, is used
both for the internal implementation and the external usage by users.
This reads badly. Perhaps the following might be better?
This is due, in part, to the fact that the same language, Python, is used both for the
internal implementation and the external usage by users.
We have changed the sentence as suggested.
5. Line 221: the phrase ‘Expression tree’ is cited but this is not the first time you’ve used
it. Perhaps cite earlier? Line 129 perhaps?
The footnote here was in reference to the SymPy classes. We have moved the footnote
later in the sentence, to “Basic”, as suggested by reviewer 3, point 16.
6. Page 8: Footnote 5 The line reads
The measure parameter of the simplify function lets specify the Python function used to determine how
should this be
Page 7
The measure parameter of the simplify function lets the user specify the
Python function used to determine how
We have changed the sentence as suggested.
7. Section 4.1 Should mpmath be cited here?
We have added a citation for mpmath.
4
Reviewer 3
1. The paper introduces SymPy, a well-known pure Python library for symbolic computation. It covers many aspects of the library: architecture, basic usage, overview of
modules, a more detailed look into some of them, and physics application.
The writing style is mostly clear and easy to understand. Some example code is provided
to further explain the use of various module, classes, and functions. Unfortunately,
the paper is very unconnected and disorganized, with some unnecessary repetition and
some things left undefined. Further, parts of the supplement should be moved to the
main paper and vice versa. All of this makes the paper look more like a collection of
unconnected or, at best, very loosely connected parts, instead of a meaningful whole.
Another big problem of the paper is the lack of aim. Some parts of SymPy are covered
at informative level (short descriptions of the elements related to some subject), some
at the beginners level (basic usage examples), while some go deep in internal SymPy
implementation of certain features. This structure leaves the impression that different,
yet mutually intermingled sections aim at different audiences.
Moreover, these vastly differently approached elements come in no specific order, and
with no obvious reason why each of them is picked to be covered at all and, specifically,
at the chosen level of complexity in the approach.
I suggest a major rewrite of the paper, to improve the structure and group differently
approached subjects. It would probably be best to:
1. Create a new section (following the introduction of SymPy) on projects that use
SymPy, and put in it the materials currently available in supplement’s sections 8, 9,
11. The comparison with Mathematica (supplement’s section 10) should be moved
either to the introduction as a section, or right after the introduction as its own
section, but it should not be in the middle of the description of SymPy-powered
projects.
2. This should be followed by the list of SymPy packages and modules (currently
section 3) and descriptions of selected modules (currently sections 3.1, 3.3, 3.5,
supplement’s sections 5, 7, and 9).
Page 8
3. Now, basic usage can be given as its own section (currently done in sections 2.1 and
2.3), followed by introductions on usage of various modules (currently in sections
3.2, 3.4, 3.6, 4, 5, and supplement’s sections 2, 3, 4, 6).
4. In-depth architecture (most of the so far unmentioned sections) can be given either
as its own section, or made into a supplement of its own (as it is naturally far more
technical and less interesting to general audience).
5. The current conclusion works fine as the finishing section of the paper.
We have made the following changes to the paper
1. We have renamed the “Features” section to “Overview of Capabilities”, now section 2.
2. We have renamed the “Domain Specific Submodules” section to “Physics Submodule”, now section 4 (see point 30).
3. We have moved the “Architecture” section to the end, after the “Physics Submodule” section. It is now section 5.
4. We have moved the “Basic Usage” subsection to the beginning of the “Overview
of Capabilities” section (it is now section 2.1), and added an intro paragraph to
the “Overview of Capabilities” section.
5. We have moved the “Assumptions” subsection to the “Overview of Capabilities”
subsection, after the features table. It is now section 2.3.
6. The “Comparison with Mathematica” section has been moved to the end of the
supplement, now section 12.
7. The “Other Projects that Depend on SymPy” section (which has been renamed
from “Other Projects that Use SymPy”) has been moved from the supplement to
the main paper content, now section 6.
We disagree with the reviewer on a few points. We do not consider SymPy Live or
SymPy Gamma to be key components of SymPy. The goal of the paper is to be about
the SymPy library. While SymPy Live and SymPy Gamma are maintained by the
SymPy community, they are separate projects. A mention of SymPy Live has been
added to the introduction (see editor point 1), as it may assist an interested reader in
trying the examples from the paper. A mention of SymPy Gamma has been added to
the “Projects that Depends on SymPy” table (Table 3), now part of the main paper (see
above). We have left the more in depth discussion of SymPy Gamma in the supplement
(section 11).
The “Comparison with Mathematica” section has been placed in the supplement because
we do not feel that a treatment in the main manuscript would be fair without an equal
Page 9
comparison to other principle computer algebra systems, such as Maple, SageMath,
and Maxima. Unfortunately, our authorship is inexpert in these systems, so a complete
and fair comparison is impossible.
We deem all the sections in section 3 (now section 2) to be important core components
of the library. Regarding the supplement sections (previously) 5, 7, and 9—“Sets”,
“Category Theory”, and “SymPy Live”, respectively:
• In SymPy 1.0, the sets submodule is still at a relatively early stage. It is already mentioned in conjunction with the solveset function. We have added some
additional text to the solvers subsection regarding sympy.sets.
• The category theory submodule in SymPy is very domain specific, and independent of other SymPy submodules. It is out of scope for an architecture paper.
• The SymPy Live section was short and would be mostly duplicated by the paragraph that has been added to the introduction in the paper. We have thus
removed it.
We also consider the supplement sections (previously) 2, 3, 4, and 6—“Series”, “Logic”,
“Diophantine Equations”, and “Statistics”, respectively—to be in depth beyond the goals
of the main manuscript:
• The “Series” section discusses relatively advanced series expansion methods. We
have added a paragraph detailing basic series expansion to the “Calculus” subsection, with a reference to the more detailed section in the supplement.
• The “Logic” section is alluded to in the “Assumptions” subsection, and the submodule is briefly mentioned in the “Extensibility” section, but is not discussed in
depth in the main manuscript. While the logic submodule is indeed important
for certain parts of SymPy, we do not consider it to be of interest to the general reader. Furthermore, on a technical note, the assumptions discussed in the
manuscript are the so-called “old assumptions”, which do not use the sympy.logic
submodule discussed in the supplement (they use an implementation of the Rete
algorithm, which is separate from sympy.logic). The so-called “new assumptions”, mentioned briefly in the “Conclusion and Future Work” section, does use
it, but we have opted to not discuss this submodule in depth, as it is still in
development and not fully usable. We have added some text to the assumptions
section discussing this and referencing the logic subsection of the supplement (see
footnote 5).
• Regarding the “Diophantine Solvers” section, the “Solvers” subsection already
mentions that SymPy can solve Diophantine Equations. The in depth discussion
in the supplement is outside of the scope of the main paper, because it’s specific,
and independent from other parts of SymPy (with the exception of computing
Page 10
integer set intersections in the sympy.sets submodule, no other part of SymPy
from solvers uses the Diophantine solvers). We have added a reference to the
supplement to the “Solvers” subsection.
• The “Statistics” module is not a core module and is independent (no other SymPy
submodules depend on it).
The “Architecture” section is a key component of the paper. The goal of the paper is to
discuss the architecture and core features of SymPy. It therefore would be inappropriate
to move this section to the supplement. We have followed the reviewer’s advice and
moved the architecture section to the end of the paper, to make the paper easier to
follow.
1. Explaining what Python is (lines 70–72) should go before talking about SymPy as a “CAS
written in Python”. Further, the paper assumes a moderate familiarity with Python (for
example, Python’s console, OOP, and exceptions), and this should be specified. There
should be a short note on the used Python console (>>> is the prompt, with the results
of computation following immediately in the lines after it). The citation [25] from line
65 should be moved next to “Python” in line 70.
The citation was moved and footnote 3 for the Python interactive prompt was added
where it was used first time. We have also added footnote 1 indicating a moderate
familiarity with Python is assumed.
2. Line 73 has outdated information. Sage was renamed to SageMath and it no longer aims
only at pure mathematics but also at algebra, numerical analysis, etc. The reference [40]
should be replaced by a more up-to-date one.
We have updated the name and citation, and changed “pure mathematics” to “pure and
applied mathematics”.
3. The plural “CASs” is usually written as “CASes” or “CAS’s”, with the latter being somewhat problematic due to it looking like it implies possession.
Different style guides appear to disagree on the proper way to pluralize non-plural
acronyms that end in “S”. We have been informed by the PeerJ publishing staff that
either form (“CASs” or “CAS’s”) is acceptable, so long as we are consistent. We have
chosen “CAS’s” as suggested, and have changed all instances in the manuscript.
4. Line 88 mentions “printers”, but it doesn’t state what they are, which is confusing for
those readers that are yet to learn the concept in section 3.4.
Page 11
We have changed “printers” to “display formatters”, which is the terminology used by
Jupyter.
5. Also in line 88, Jupyter’s citation [30] is actually about IPython and should be replaced
by a more up-to-date version.
We have changed the citation to
Kluyver, T., Ragan-Kelley, B., Pérez, F., Granger, B., Bussonnier, M., Frederic, J.,
Kelley, K., Hamrick, J., Grout, J., Corlay, S., et al. (2016). Jupyter notebooks—
a publishing format for reproducible computational workflows. In Positioning and
Power in Academic Publishing: Players, Agents and Agendas: Proceedings of the 20th
International Conference on Electronic Publishing, page 87. IOS Press.
as recommended by the Jupyter developers at https://github.com/jupyter/jupyter/
issues/190.
6. The word “software” in line 91 is ambiguous; “library” or “package” would make a better
choice.
We have changed “software” to “library”.
7. Lines 91–96: “we discuss/look at/etc” is the preferred form, instead of “section discusses/looks at/etc”.
We had decided not to use first person. We prefer to be consistent here, unless this does
violate some journal policy, which is not the case (see, for example, https://peerj.
com/articles/cs-80/).
8. The paragraph in lines 103–105 should be moved to the introduction, and the footnote
from line 104 should be added to that paragraph as a full-blown sentence, expanded by
all the relevant technical information (Python version, OS, . . . ). Given that the end of
life for Python 2 is 2020., a comment on whether all the presented examples work in both
Python 2 and 3 should be included as well. Further, emphasise that wildcard imports,
“import *”, should almost never be used in programs (see PEP 8, the item “wildcard
imports”). The same goes for the import mentioned in lines 484–487.
We have moved the footnote inline, and have noted that the examples should work with
Python 2.7, 3.2, 3.3, 3.4, or 3.5, with any operating system that supports Python.
We have added a footnote discussing the concerns with import * (see reviewer 2,
point 1).
Page 12
9. Line 119 should be removed, as it is basically a copy of the previous line.
Line 119 is the output of line 118, so not showing it would be incorrect.
The aim here was simply to show that the input expression remains unevaluated. We
have added a sentence before the example to note this. This particular example was
chosen because it shows the basic syntax for addition, subtraction, multiplication, division, and exponentiation.
10. In line 121 the word “stored” should be replaced by “used as keys”.
See the next answer.
11. What do the authors mean by “thereby permitting features such as caching” in line 122?
Caching can be done for mutable types as well, just not through hashing.
This paragraph was replaced by:
Importantly, SymPy expressions are immutable. This simplifies the design
of SymPy by allowing expression interning. It also enables expressions to
be hashed, which is used to implement caching in SymPy.
12. There is no need to repeat “(CAS)” in line 124, as it was already given in line 64.
That was removed.
13. In the same line, the word “represents” should be replaced by “stores”.
We have done this.
14. In line 184, “symbols are” should be replaced by “t is” (the general rule is already given
in line 179).
We have done this.
15. The code in line 210 should be made into its own line (like a displaymath formula), for
typesetting reasons and better readability.
Page 13
We now use a verbatim environment here.
16. The footnote 4 should be moved from line 221 to line 222, right after “Basic”.
We have made the suggested change.
17. The part “which defines some basic methods for symbolic expression trees” should be
removed from line 222, as it was already given in line 130.
This part was removed and footnote moved to the mentioned text in the section 5.1,
“The Core”.
18. In line 225, the sentence “Not all SymPy classes are subclasses of Expr.” sounds confusing
as a reader new to SymPy wouldn’t expect, for example, symbols to inherit “Expr”. It
would be better to expand this, for example “Most of the SymPy classes (including
Symbol) are subclasses of Expr, but there are exceptions to this rule”.
This paragraph now reads:
The typical way to create a custom SymPy object is to subclass an existing
SymPy class, usually Basic, Expr, or Function. As it was stated before,
all SymPy classes used for expression trees should be subclasses of the base
class Basic. Expr is the Basic subclass for mathematical objects that can
be added and multiplied together. The most commonly seen classes in
SymPy are subclasses of Expr, including Add, Mul, and Symbol. Instances
of Expr typically represent complex numbers, but may also include other
“rings”, like matrix expressions. Not all SymPy classes are subclasses of
Expr. For instance, logic expressions, such as And(x, y), are subclasses of
Basic but not of Expr.
19. The title “Features” in line 276 is ambiguous, as a “feature” has no precise meaning in
Python (or even software libraries in general). It should be replaced by “Packages and
modules” or a similar more precise wording. The same goes for “feature” in most other
places in the paper (for example, the caption of Table 1. and line 495). It would be very
useful to also include actual names of the packages/modules in Table 1., as well as in
any section covering those packages/modules.
IEEE 829 defines the term feature as “a distinguishing characteristic of a software item
(e.g., performance, portability, or functionality).” We believe that this definition fits
our needs. Note that we have renamed section (now) 2 to “Overview of Capabilities”.
Table 1 was extended to include submodule names.
Page 14
20. The sets support listing is unnaturally split in two by the “This includes. . . ” sentence
which would fit better in parentheses.
This suggestion was implemented.
21. Line 355, add a sentence explaining that in SymPy str == repr, because in Python
repr is used to get an unambiguous valid Python code representation, while the return
value of str is meant to be human-readable.
See the added footnote 8.
22. Lines 359 and 379: what is 2D text representation? It seems that “2D” shouldn’t be
here.
This sentence was replaced with:
A two-dimensional (2D) textual representation of the expression can be
printed with monospace fonts via pprint.
We use here the same terminology as in the Mathematica online documentation, e.g.,
the OutputForm function:
OutputForm[expr]: prints as a two-dimensional representation of expr using only keyboard characters.
23. Line 427: every dictionary is a “dictionary of keys”. This should be a dictionary with
coordinate tuples as keys associated with the appropriate values.
“Dictionary of Keys” (DOK) refers specifically to a sparse matrix representation where
entries are stored as (row, column) pairs mapping to the elements. See for instance
scipy.sparse.dok_sparse. We have fixed the capitalization of “Dictionary of Keys”
and added the abbreviation (DOK), and stated explicitly what is meant by this. We
have also made a similar note about List of Lists (LIL) earlier in the section.
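For instance (an illustrative sketch), SymPy's SparseMatrix can be constructed directly from
such (row, column) -> value pairs:
>>> from sympy import SparseMatrix
>>> SparseMatrix(2, 3, {(0, 0): 5, (1, 2): 7})   # 2x3 matrix with 5 at (0, 0), 7 at (1, 2), zeros elsewhere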
24. Section 4 would benefit from an introduction, and lines 440– should become a new
subsection 4.1. (named “Float” or “Real numbers support” or similar).
We have added an introduction and a subsection 3.1, “Floating-Point Numbers” (which,
for mathematical clarity, we consider to be distinct from “real numbers”).
25. I suggest a better example for lines 459–460: a computation of (e100 + 1) − e100 :
Page 15
>>> (exp(100)+1).evalf() - exp(100).evalf()
0
>>> ((exp(100)+1) - exp(100)).evalf()
1.00000000000000
>>> (exp(100)+1) - exp(100)
1
or two different ways to compute the 100th Fibonacci number:
>>> phi = (1+sqrt(5))/2
>>> psi = (1-sqrt(5))/2
>>> ((phi**100-psi**100)/sqrt(5)).evalf()-fibonacci(100)
65536.0000000000
>>> ((phi**100-psi**100)/sqrt(5) - fibonacci(100)).evalf()
0.e-104
Obviously, like your own example, these are problematic because a part of the computation is relying on Python’s builtin floats. Please include a comment on whether symbolic
computation (i.e., applying evalf() on the whole expression) always avoids these errors
or not, possibly with an example when it doesn’t resolve this problem.
Your first example, unfortunately, doesn’t work due to automatic simplification (see the
last input line above: exp(100) cancels automatically, before any numeric evaluation
happens). Your second example doesn’t have this issue, but our example is simpler.
Numerical evaluation of the whole expression should always solve cancellation problem,
as it was stated in the sentence “Applying the evalf method to the whole expression
solves this problem.”
26. The footnote from line 477 should be moved as a sentence in its own right to the introduction, with other technical specifications.
The mpmath version is now mentioned in the introduction.
27. In line 495, the word “solving” seems more appropriate than “solutions”.
This was replaced with “solving ODEs”.
28. In line 504 “is” should be used instead of “are” (because “array” is singular).
Page 16
This was fixed.
29. In line 518, “produces” should become plural.
This was fixed.
30. The title of section 5, “Domain specific submodules”, seems inappropriate because the
section only covers Physics package (not “submodules”). It should either be expanded
with a short introduction listing other domain specific packages, or it should be renamed
to “Physics package”.
This section (section 4) was renamed to “Physics submodule”. (See also the discussion
at 37 about module vs package).
31. The word “symbolics” in line 528 should be removed (as almost everything in the paper
deals with symbolic computation).
We have done this.
32. In line 530, sympy.physics.vector is a module, not a package.
The word “package” replaced with “submodule” (see also point 37).
33. It is unclear what “both of these objects” refer to in line 532. My guess is vectors and
dyadic objects, but this should be reworded to make it more clear.
We have changed “both of these objects can be written. . . ” to “the vector and dyadic
objects both can be written. . . ”.
34. In lines 543 and 545, “rad” should be removed. Radians are assumed when no other
measure (like degrees) is given.
These units were removed.
35. In lines 567–568, “performing symbolic quantum mechanics” makes no sense. This should
probably be “computations”, “solving problems related to”, etc.
Page 17
This was reworded to:
to solve problems in quantum mechanics
36. The sentence “SymPy expressions are immutable trees of Python objects.” doesn’t belong
in the conclusion. This can be moved to the appropriate place when discussing SymPy’s
architecture.
This paper focuses on the architecture of SymPy, and we consider this to be a core
feature of SymPy’s architecture, hence, it’s inclusion in the conclusion. We have also
discussed the different points in this sentence in depth in the architecture section.
37. All “submodules” should be replaced by “modules” (examples: lines 657, 662).
Our use of “modules”, “submodules”, and “packages” coincides with the definitions used
in the official Python documentation (see https://docs.python.org/3/tutorial/
modules.html#packages and https://docs.python.org/3/glossary.html#term-package).
To clarify things, we have modified the manuscript to use “package” only when referring
to a library external to SymPy (such as mpmath and Xy-pic).
To avoid confusion, and to be consistent with the definitions used by the official Python
documentation, we have modified the manuscript to use “submodule” to refer to any
module that is a dotted submodule of sympy, such as sympy.physics.quantum. This is
to emphasize that these submodules all live within the sympy namespace, as “module”
can suggest a separate library.
38. The sentences in lines 657–661 should swap places, because “areas of mathematics” are
discrete mathematics, concrete mathematics, etc., while simplifying expressions, performing common calculus operations, pretty printing expressions, etc. belongs to common operations (“other areas” is also fine, albeit slightly wrong).
The “areas of mathematics” was intended to encompass all mathematical operations.
We have rewritten the first sentence as
SymPy supports a wide array of mathematical facilities.
We have also changed “other included areas” to “other supported facilities” in the third
sentence.
The order of the subsequent sentences was chosen to match the order they were presented in the paper. We have modified them to match the current ordering in the
paper.
Page 18
39. In line 662 “classical mechanics and quantum mechanics” are listed as the only example
of the support for specific domains, as in section 5, which leaves the impression that
physics the only one. Either more domains should be listed, or it should be reworded to
recognize the fact that there are no others.
We have changed the text to “certain specific physics domains”. Also, we list here only
two of the above mentioned submodules in sympy.physics, but there are others.
40. Lines 670–678 contain explanations what some of the authors’ institutions are. It is
customary to use acknowledgements to thank people and institutions, while institutions’
details should be provided in the authors’ footnotes in the documents’ head.
The institution details were moved to the footnotes in the head of the paper. The acknowledgments section was removed, as per the PeerJ staff's requested technical changes
(this information is to be listed separately as a funding statement declaration as part
of the submission process).
41. The citation [5] in line 691 is missing identification data, probably a URL.
This is fixed now.
42. The citation [21] in line 728 should have “2D” instead of “2d”.
This is fixed.
More specific comments and correction suggestions for the supplement follow.
43. The supplement should have an introduction, explaining what is being covered in it in
general and in each section.
We have added an introduction.
44. Since the Guntz algorithm is covered in depth, it would be good to include how an
interested reader can see SymPy’s steps of computation:
>>> import os
>>> os.environ['SYMPY_DEBUG'] = 'True'
>>> from sympy import *
sympy/external/importtools.py:145: UserWarning: gmpy2 module is not installed
warnings.warn('%s module is not installed' % module, UserWarning)
>>> x = symbols('x')
Page 19
>>> limit(sin(x)/x, x, 0)
DEBUG: parsing of expression [(0, 1, None, None)] with symbol _w
DEBUG: returned None
DEBUG: parsing of expression [(_w, 1, None, None)] with symbol _w
DEBUG: returned ([], [(_w, 1, None, None)], 1, False)
DEBUG: parsing of expression [(0, 1, None, None)] with symbol _w
DEBUG: returned None
DEBUG: parsing of expression [(_w, 1, None, None)] with symbol _w
DEBUG: returned ([], [(_w, 1, None, None)], 1, False)
DEBUG: parsing of expression [(_w, 1, None, None)] with symbol _w
DEBUG: returned ([], [(_w, 1, None, None)], 1, False)
DEBUG: parsing of expression [(0, 1, None, None)] with symbol _w
DEBUG: returned None
DEBUG: parsing of expression [(_w, 1, None, None)] with symbol _w
DEBUG: returned ([], [(_w, 1, None, None)], 1, False)
DEBUG: parsing of expression [(0, 1, None, None)] with symbol _w
DEBUG: returned None
DEBUG: parsing of expression [(_w, 1, None, None)] with symbol _w
DEBUG: returned ([], [(_w, 1, None, None)], 1, False)
DEBUG: parsing of expression [(1, 1, None, None)] with symbol _w
DEBUG: returned None
limitinf(x*sin(1/x), x) = 1
+-mrv_leadterm(_p*sin(1/_p), _p) = (1, 0)
| +-mrv(_p*sin(1/_p), _p) = ({_p: _Dummy_19}, {}, _Dummy_19*sin(1/_Dummy_19))
| | +-mrv(_p, _p) = ({_p: _Dummy_19}, {}, _Dummy_19)
| | +-mrv(sin(1/_p), _p) = ({_p: _Dummy_20}, {}, sin(1/_Dummy_20))
| | +-mrv(1/_p, _p) = ({_p: _Dummy_20}, {}, 1/_Dummy_20)
| | +-mrv(_p, _p) = ({_p: _Dummy_20}, {}, _Dummy_20)
| +-rewrite(_Dummy_19*sin(1/_Dummy_19), {exp(_p): _Dummy_19}, {}, _p, _w) = (sin(_w)/
| | +-sign(_p, _p) = 1
| | +-limitinf(1, _p) = 1
| +-calculate_series(sin(_w)/_w, _w) = 1
| +-limitinf(_w, _w) = oo
| | +-mrv_leadterm(_w, _w) = (1, -1)
| | | +-mrv(_w, _w) = ({_w: _Dummy_23}, {}, _Dummy_23)
| | | +-rewrite(_Dummy_23, {exp(_w): _Dummy_23}, {}, _w, _w) = (1/_w, -_w)
| | | | +-sign(_w, _w) = 1
| | | | +-limitinf(1, _w) = 1
| | | +-calculate_series(1/_w, _w) = 1/_w
| | +-sign(-1, _w) = -1
| | +-sign(1, _w) = 1
| +-limitinf(_w, _w) = oo
| +-limitinf(_w, _w) = oo
+-sign(0, _p) = 0
+-limitinf(1, _p) = 1
1
Emphasize that the environment variable SYMPY_DEBUG must be set before importing
SymPy for the first time.
Footnote 1 was added right before the final example from the Gruntz algorithm section.
45. In line 183, syntax symbols('a:d') is used without being previously defined or explained.
Now we use symbols('a b c d'), like in other places.
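For completeness, the two spellings create the same symbols; a quick check in a fresh session (illustrative only):
>>> from sympy import symbols
>>> symbols('a:d')
(a, b, c, d)
>>> symbols('a b c d')
(a, b, c, d)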
46. Figure 2 should not be a screenshot, but rather proper code with LaTeX-rendered results
and explanations, as was done in the rest of the paper. A shortened link to SymPy
Gamma for the example’s expression can be included for the user to try this for themselves.
We believe this is not a good idea. This figure is meant to be an archival capture,
not software subject to being re-rendered. SymPy Gamma currently only renders as
HTML. SymPy Gamma doesn’t export output as LaTeX or other external formats.
A link to the project site was added to the text “SymPy Gamma”. The text for the
figure caption now links to the page that generated it.
" | Here is a paper. Please give your review comments after reading it. |
706 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>SymPy is an open source computer algebra system written in pure Python. It is built with a focus on extensibility and ease of use, through both interactive and programmatic applications. These characteristics have led SymPy to become a popular symbolic library for the scientific Python ecosystem. This paper presents the architecture of SymPy, a description of its features, and a discussion of select submodules. The supplementary material provides additional examples and further outlines details of the architecture and features of SymPy.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>SymPy is a full featured computer algebra system (CAS) written in the Python <ns0:ref type='bibr' target='#b33'>[32]</ns0:ref> programming language. It is free and open source software, licensed under the 3-clause BSD license <ns0:ref type='bibr' target='#b52'>[49]</ns0:ref>.</ns0:p><ns0:p>The SymPy project was started by Ondřej Čertík in 2005, and it has since grown to over 500 contributors. Currently, SymPy is developed on GitHub using a bazaar community model <ns0:ref type='bibr' target='#b45'>[43]</ns0:ref>.</ns0:p><ns0:p>The accessibility of the codebase and the open community model allow SymPy to rapidly respond to the needs of users and developers.</ns0:p><ns0:p>Python is a dynamically typed programming language that has a focus on ease of use and readability. 1 Due in part to this focus, it has become a popular language for scientific computing and data science, with a broad ecosystem of libraries <ns0:ref type='bibr' target='#b39'>[37]</ns0:ref>. SymPy is itself used as a dependency by many libraries and tools to support research within a variety of domains, such as SageMath <ns0:ref type='bibr' target='#b61'>[58]</ns0:ref> (pure and applied mathematics), yt <ns0:ref type='bibr'>[64]</ns0:ref> (astronomy and astrophysics), PyDy <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref> (multibody dynamics), and SfePy <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref> (finite elements).</ns0:p><ns0:p>Unlike many CAS's, SymPy does not invent its own programming language. Python itself is used both for the internal implementation and end user interaction. By using the operator overloading functionality of Python, SymPy follows the embedded domain specific language paradigm proposed by Hudak <ns0:ref type='bibr' target='#b25'>[24]</ns0:ref>. The exclusive usage of a single programming language makes it easier for people already familiar with that language to use or develop SymPy. Simultaneously, it enables developers to focus on mathematics, rather than language design. SymPy version 1.0 officially supports Python 2.6, 2.7 and 3.2-3.5.</ns0:p><ns0:p>SymPy is designed with a strong focus on usability as a library. Extensibility is important in its application program interface (API) design. Thus, SymPy makes no attempt to extend the Python language itself. The goal is for users of SymPy to be able to include SymPy alongside other Python libraries in their workflow, whether that be in an interactive environment or as a programmatic part in a larger system. Being a library, SymPy does not have a built-in graphical user interface (GUI). However, SymPy exposes a rich interactive display system, and supports registering display formatters with Jupyter <ns0:ref type='bibr' target='#b30'>[29]</ns0:ref> frontends, including the Notebook and Qt Console, which will render SymPy expressions using MathJax <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref> or L A T E X.</ns0:p><ns0:p>The remainder of this paper discusses key components of the SymPy library. Section 2 enumerates the features of SymPy and takes a closer look at some of the important ones.</ns0:p><ns0:p>The section 3 looks at the numerical features of SymPy and its dependency library, mpmath.</ns0:p><ns0:p>Section 4 looks at the domain specific physics submodules for performing symbolic and numerical calculations in classical mechanics and quantum mechanics. Section 5 discusses the architecture of SymPy. Section 6 looks at a selection of packages that depend on SymPy. 
Additionally, the supplementary material takes a deeper look at a few SymPy topics. Supplement section 1 discusses the Gruntz algorithm, which SymPy uses to calculate symbolic limits.</ns0:p><ns0:p>Sections 2-9 of the supplement discuss the series, logic, Diophantine equations, sets, statistics, category theory, tensor, and numerical simplification submodules of SymPy, respectively. Supplement section 10 provides additional examples for topics discussed in the main paper. Supplement section 11 discusses the SymPy Gamma project. Finally, section 12 of the supplement contains a brief comparison of SymPy with Wolfram Mathematica.</ns0:p><ns0:p>The following statement imports all SymPy functions into the global Python namespace. 2 From here on, all examples in this paper assume that this statement has been executed: 3 >>> from sympy import *</ns0:p><ns0:p>All the examples in this paper can be tested on SymPy Live, an online Python shell that uses the Google App Engine <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref> to execute SymPy code. SymPy Live is also integrated into the SymPy documentation at http://docs.sympy.org.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>OVERVIEW OF CAPABILITIES</ns0:head><ns0:p>This section gives a basic introduction to SymPy, and lists its features. A few features-assumptions, simplification, calculus, polynomials, printers, solvers, and matrices-are core components of SymPy and are discussed in depth. Many other features are discussed in depth in the supplementary material.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Basic Usage</ns0:head><ns0:p>Symbolic variables, called symbols, must be defined and assigned to Python variables before they can be used. This is typically done through the symbols function, which may create multiple symbols in a single function call. For instance, >>> x, y, z = symbols('x y z') creates three symbols representing variables named x, y, and z. In this particular instance, these symbols are all assigned to Python variables of the same name. However, the user is free to assign them to different Python variables, while representing the same symbol, such as a, b, c = symbols('x y z'). In order to minimize potential confusion, though, all examples in this paper will assume that the symbols x, y, and z have been assigned to Python variables identical to their symbolic names.</ns0:p><ns0:p>Expressions are created from symbols using Python's mathematical syntax. For instance, the following Python code creates the expression (x 2 − 2x + 3)/y. Note that the expression remains unevaluated: it is represented symbolically.</ns0:p><ns0:formula xml:id='formula_0'>>>> (x**2 -2*x + 3)/y (x**2 -2*x + 3)/y</ns0:formula></ns0:div>
<ns0:div><ns0:head n='2.2'>List of Features</ns0:head><ns0:p>Although SymPy's extensive feature set cannot be covered in depth in this paper, bedrock areas, that is, those areas that are used throughout the library, are discussed in their own subsections below. Additionally, Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> gives a compact listing of all major capabilities present in the SymPy codebase. This grants a sampling from the breadth of topics and application domains that SymPy services. Unless stated otherwise, all features noted in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> are symbolic in nature. Numeric features are discussed in Section 3. Algorithms for computing derivatives, integrals, and limits.</ns0:p><ns0:p>2 import * has been used here to aid the readability of the paper, but is best to avoid such wildcard import statements in production code, as they make it unclear which names are present in the namespace. Furthermore, imported names could clash with already existing imports from another package. For example, SymPy, the standard Python math library, and NumPy all define the exp function, but only the SymPy one will work with SymPy symbolic expressions. 3 The three greater-than signs denote the user input for the Python interactive session, with the result, if there is one, shown on the next line.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Category Theory (sympy.categories) Representation of objects, morphisms, and diagrams. Tools for drawing diagrams with Xy-pic <ns0:ref type='bibr' target='#b51'>[48]</ns0:ref>. Code Generation (sympy.printing, sympy.codegen) Generation of compilable and executable code in a variety of different programming languages from expressions directly. Target languages include C, Fortran, Julia, JavaScript, Mathematica, MATLAB and Octave, Python, and Theano. Combinatorics & Group Theory (sympy.combinatorics) Permutations, combinations, partitions, subsets, various permutation groups (such as polyhedral, Rubik, symmetric, and others), Gray codes <ns0:ref type='bibr' target='#b37'>[36]</ns0:ref>, and Prufer sequences <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>. Concrete Math (sympy.concrete) Summation, products, tools for determining whether summation and product expressions are convergent, absolutely convergent, hypergeometric, and for determining other properties; computation of Gosper's normal form <ns0:ref type='bibr' target='#b44'>[42]</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3'>Assumptions</ns0:head><ns0:p>The assumptions system allows users to specify that symbols have certain common mathematical properties, such as being positive, imaginary, or integer. SymPy is careful to never perform simplifications on an expression unless the assumptions allow them. For instance, the simplification √ t 2 = t holds if t is nonnegative (t ≥ 0), but it does not hold for a general complex t. 4 By default, SymPy performs all calculations assuming that symbols are complex valued. This assumption makes it easier to treat mathematical problems in full generality.</ns0:p><ns0:formula xml:id='formula_1'>>>> t = Symbol('t') >>> sqrt(t**2) sqrt(t**2)</ns0:formula><ns0:p>By assuming the most general case, that t is complex by default, SymPy avoids performing mathematically invalid operations. However, in many cases users will wish to simplify expressions containing terms like √ t 2 .</ns0:p><ns0:p>Assumptions are set on Symbol objects when they are created. For instance Symbol('t', positive=True) will create a symbol named t that is assumed to be positive.</ns0:p><ns0:p>>>> t = Symbol('t', positive=True)</ns0:p><ns0:p>>>> sqrt(t**2) t 4 In SymPy, √ z is defined on the usual principal branch with the branch cut along the negative real axis.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Some of the common assumptions are negative, real, nonpositive, integer, prime and commutative. 5 Assumptions on any SymPy object can be checked with the is_assumption attributes, like t.is_positive.</ns0:p><ns0:p>Assumptions are only needed to restrict a domain so that certain simplifications can be performed. They are not required to make the domain match the input of a function. For instance, one can create the object m n=0 f (n) as Sum(f(n), (n, 0, m)) without setting integer=True when creating the Symbol object n.</ns0:p><ns0:p>The assumptions system additionally has deductive capabilities. The assumptions use a three-valued logic using the Python built in objects True, False, and None. Note that False is returned if the SymPy object doesn't or can't have the assumption. For example, both I.is_real and I.is_prime return False for the imaginary unit I.</ns0:p><ns0:p>None represents the 'unknown' case. This could mean that given assumptions do not unambiguously specify the truth of an attribute. For instance, Symbol('x', real=True).is_positive</ns0:p><ns0:p>will give None because a real symbol might be positive or negative. None could also mean that not enough is known or implemented to compute the given fact. For instance, (pi + E).is_irrational</ns0:p><ns0:p>gives None-indeed, the rationality of π + e is an open problem in mathematics <ns0:ref type='bibr' target='#b32'>[31]</ns0:ref>.</ns0:p><ns0:p>Basic implications between the facts are used to deduce assumptions. Deductions are made using the Rete algorithm <ns0:ref type='bibr' target='#b13'>[13]</ns0:ref>. 6 For instance, the assumptions system knows that being an integer implies being rational.</ns0:p><ns0:formula xml:id='formula_2'>>>> i = Symbol('i', integer=True) >>> i.is_rational</ns0:formula><ns0:p>True Furthermore, expressions compute the assumptions on themselves based on the assumptions of their arguments. For instance, if x and y are both created with positive=True, then (x + y).is_positive will be True (whereas (x -y).is_positive will be None).</ns0:p></ns0:div>
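<ns0:p>A brief worked illustration of the deductions just described (a sketch; fresh symbol names are used so the plain x, y, z defined earlier are left untouched):
>>> a, b = symbols('a b', positive=True)
>>> (a + b).is_positive
True
>>> print((a - b).is_positive)
None</ns0:p>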
<ns0:div><ns0:head n='2.4'>Simplification</ns0:head><ns0:p>The generic way to simplify an expression is by calling the simplify function. It must be emphasized that simplification is not a rigorously defined mathematical operation <ns0:ref type='bibr' target='#b35'>[34]</ns0:ref>. The simplify function applies several simplification routines along with heuristics to make the output expression 'simple'. 7 It is often preferable to apply more directed simplification functions. These apply very specific rules to the input expression and are typically able to make guarantees about the output. For instance, the factor function, given a polynomial with rational coefficients in several variables, is guaranteed to produce a factorization into irreducible factors. Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref> lists common simplification functions.</ns0:p><ns0:p>Examples for these simplification functions can be found in section 10 of the supplementary material.</ns0:p></ns0:div>
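<ns0:p>A short sketch of the difference between the generic simplify function and the more directed routines listed in Table 2 (illustrative only):
>>> simplify((x**2 + 2*x + 1)/(x + 1))
x + 1
>>> factor(x**3 - x)
x*(x - 1)*(x + 1)
>>> trigsimp(sin(x)**2 + cos(x)**2)
1</ns0:p>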
<ns0:div><ns0:head n='2.5'>Calculus</ns0:head><ns0:p>SymPy provides all the basic operations of calculus, such as calculating limits, derivatives, integrals, or summations. Limits are computed with the limit function, using the Gruntz algorithm <ns0:ref type='bibr' target='#b23'>[22]</ns0:ref> for computing symbolic limits and heuristics (a description of the Gruntz algorithm may be found in section 1 5 SymPy assumes that two expressions A and B commute with each other multiplicatively, that is, A • B = B • A, unless they both have commutative=False. Many algorithms in SymPy require special consideration to work correctly with noncommutative products. 6 For historical reasons, this algorithm is distinct from the sympy.logic submodule, which is discussed in section 3 of the supplementary material. SymPy also has an experimental assumptions system which stores facts separate from objects, and uses sympy.logic and a SAT solver for deduction. We will not discuss this system here. 7 The measure parameter of the simplify function lets the user specify the Python function used to determine how complex an expression is. The default measure function returns the total number of operations in the expression.</ns0:p></ns0:div>
<ns0:div><ns0:head>6/22</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11410:2:0:NEW 10 Nov 2016)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science <ns0:ref type='bibr' target='#b18'>[18]</ns0:ref> hyperexpand expand hypergeometric functions <ns0:ref type='bibr' target='#b46'>[44,</ns0:ref><ns0:ref type='bibr' target='#b47'>45]</ns0:ref> of the supplementary material). For example, the following computes lim x→∞ x sin( <ns0:ref type='formula' target='#formula_13'>1</ns0:ref>x ) = 1. Note that SymPy denotes ∞ as oo (two lower case 'o's). Integrals are calculated with the integrate function. SymPy implements a combination of the Risch algorithm <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>, table lookups, a reimplementation of Manuel Bronstein's 'Poor Man's Integrator' <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref>, and an algorithm for computing integrals based on Meijer G-functions <ns0:ref type='bibr' target='#b46'>[44,</ns0:ref><ns0:ref type='bibr' target='#b47'>45]</ns0:ref>. These allow SymPy to compute a wide variety of indefinite and definite integrals. The Meijer G-function algorithm and the Risch algorithm are respectively demonstrated below by the computation of Summations are computed with the summation function, which uses a combination of Gosper's algorithm <ns0:ref type='bibr' target='#b22'>[21]</ns0:ref>, an algorithm that uses Meijer G-functions <ns0:ref type='bibr' target='#b46'>[44,</ns0:ref><ns0:ref type='bibr' target='#b47'>45]</ns0:ref>, and heuristics. Products are computed with product function via a suite of heuristics. Series expansions are computed with the series function. This example computes the power series of sin(x) around x = 0 up to x 6 .</ns0:p><ns0:formula xml:id='formula_3'>∞ 0 e −st log (t) dt = − log (s) + γ s and −2x 2 (log (x) + 1) e x 2 + e x 2 + 1 2 x e x 2 + 1 2 (log (x) + 1) dx = log (log (x) + 1) + 1 e x 2 +</ns0:formula><ns0:p>>>> series(sin(x), x, 0, 6)</ns0:p><ns0:p>x -x**3/6 + x**5/120 + O(x**6)</ns0:p><ns0:p>Section 2 of the supplementary material discusses series expansions methods in more depth. Integrals, derivatives, summations, products, and limits that cannot be computed return unevaluated objects. These can also be created directly if the user chooses.</ns0:p><ns0:p>>>> integrate(x**x, x)</ns0:p><ns0:formula xml:id='formula_4'>Integral(x**x, x)</ns0:formula><ns0:p>>>> Sum(2**i, (i, 0, n -1)) Sum(2**i, (i, 0, n -1))</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.6'>Polynomials</ns0:head><ns0:p>SymPy implements a suite of algorithms for polynomial manipulation, which ranges from relatively simple algorithms for doing arithmetic of polynomials, to advanced methods for factoring multivariate polynomials into irreducibles, symbolically determining real and complex root isolation intervals, or computing Gröbner bases.</ns0:p><ns0:p>Polynomial manipulation is useful in its own right. Within SymPy, though, it is mostly used indirectly as a tool in other areas of the library. In fact, many mathematical problems in symbolic computing are first expressed using entities from the symbolic core, preprocessed, and then transformed into a problem in the polynomial algebra, where generic and efficient algorithms are used to solve the problem. The solutions to the original problem are subsequently recovered from the results. This is a common scheme in symbolic integration or summation algorithms.</ns0:p><ns0:p>SymPy implements dense and sparse polynomial representations. 8 Both are used in the univariate and multivariate cases. The dense representation is the default for univariate polynomials.</ns0:p><ns0:p>For multivariate polynomials, the choice of representation is based on the application. The most common case for the sparse representation is algorithms for computing Gröbner bases (Buchberger, F4, and F5) <ns0:ref type='bibr' target='#b7'>[8,</ns0:ref><ns0:ref type='bibr' target='#b14'>14,</ns0:ref><ns0:ref type='bibr' target='#b15'>15]</ns0:ref>. This is because different monomial orderings can be expressed easily in this representation. However, algorithms for computing multivariate GCDs or factorizations, at least those currently implemented in SymPy <ns0:ref type='bibr' target='#b40'>[38]</ns0:ref>, are better expressed when the representation is dense. The dense multivariate representation is specifically a recursively-dense representation, where polynomials in K[x 0 , x 1 , . . . , x n ] are viewed as a polynomials in</ns0:p><ns0:formula xml:id='formula_5'>K[x 0 ][x 1 ] . . . [x n ]. Note</ns0:formula><ns0:p>that despite this, the coefficient domain K, can be a multivariate polynomial domain as well.</ns0:p><ns0:p>The dense recursive representation in Python gets inefficient as the number of variables increases.</ns0:p><ns0:p>Some examples for the sympy.polys submodule can be found in section 10 of the supplementary material.</ns0:p></ns0:div>
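<ns0:p>A minimal illustration of the polynomial tools described above (a sketch): factorization into irreducibles over the rationals, and a polynomial GCD:
>>> factor(x**4 - 1)
(x - 1)*(x + 1)*(x**2 + 1)
>>> gcd(x**2 - 1, x**2 - 3*x + 2)
x - 1</ns0:p>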
<ns0:div><ns0:head n='2.7'>Printers</ns0:head><ns0:p>SymPy has a rich collection of expression printers. By default, an interactive Python session will render the str form of an expression, which has been used in all the examples in this paper so far. The str form of an expression is valid Python and roughly matches what a user would type to enter the expression. 9</ns0:p><ns0:p>repr form, which is meant to be valid Python that recreates the object. In SymPy, str(expr) == repr(expr). In other words, the string representation of an expression is designed to be compact, human-readable, and valid Python code that could be used to recreate the expression. As noted in section 5.1, the srepr function prints the exact, verbose form of an expression.</ns0:p></ns0:div>
<ns0:div><ns0:head>8/22</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11410:2:0:NEW 10 Nov 2016)</ns0:p><ns0:p>Manuscript to be reviewed A two-dimensional (2D) textual representation of the expression can be printed with monospace fonts via pprint. Unicode characters are used for rendering mathematical symbols such as integral signs, square roots, and parentheses. Greek letters and subscripts in symbol names that have Unicode code points associated are also rendered automatically.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>>>> pprint(Integral(sqrt(phi0 + 1), phi0))</ns0:p><ns0:formula xml:id='formula_6'>⌠ ⎮ ________ ⎮ ╲╱ φ₀ + 1 d(φ₀) ⌡ 1</ns0:formula><ns0:p>Alternately, the use_unicode=False flag can be set, which causes the expression to be printed using only ASCII characters.</ns0:p><ns0:p>>>> pprint(Integral(sqrt(phi0 + 1), phi0), use_unicode=False)</ns0:p><ns0:formula xml:id='formula_7'>/ | | __________ | \/ phi0 + 1 d(phi0) | /</ns0:formula><ns0:p>The function latex returns a L A T E X representation of an expression.</ns0:p><ns0:p>>>> print(latex(Integral(sqrt(phi0 + 1), phi0)))</ns0:p><ns0:formula xml:id='formula_8'>\int \sqrt{\phi_{0} + 1}\, d\phi_{0}</ns0:formula><ns0:p>Users are encouraged to run the init_printing function at the beginning of interactive sessions, which automatically enables the best pretty printing supported by their environment.</ns0:p><ns0:p>In the Jupyter Notebook or Qt Console <ns0:ref type='bibr' target='#b42'>[40]</ns0:ref>, the L A T E X printer is used to render expressions using MathJax or L A T E X, if it is installed on the system. The 2D text representation is used otherwise.</ns0:p><ns0:p>Other printers such as MathML are also available. SymPy uses an extensible printer subsystem, which allows extending any given printer, and also allows custom objects to define their printing behavior for any printer. The code generation functionality of SymPy relies on this subsystem to convert expressions into code in various target programming languages.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.8'>Solvers</ns0:head><ns0:p>SymPy has equation solvers that can handle ordinary differential equations, recurrence relationships, Diophantine equations 10 , and algebraic equations. There is also rudimentary support for simple partial differential equations.</ns0:p><ns0:p>There are two functions for solving algebraic equations in SymPy: solve and solveset. The domain parameter can be any set from the sympy.sets module (see section 5 of the supplementary material for details on sympy.sets), but is typically either S.Complexes (the default) or S.Reals; the latter causes solveset to only return real solutions. 10 See section 4 of the supplementary material for an in depth discussion on the Diophantine submodule.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>An important difference between the two functions is that the output API of solve varies with input (sometimes returning a Python list and sometimes a Python dictionary) whereas solveset always returns a SymPy set object.</ns0:p><ns0:p>Both functions implicitly assume that expressions are equal to 0. For instance, solveset(x -1, x) solves x − 1 = 0 for x.</ns0:p><ns0:p>solveset is under active development as a planned replacement for solve. There are certain features which are implemented in solve that are not yet implemented in solveset, including multivariate systems, and some transcendental equations.</ns0:p><ns0:p>Some examples for solveset and solve can be found in section 10 of the supplementary material.</ns0:p></ns0:div>
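<ns0:p>A small sketch of the two interfaces (illustrative only): solveset returns a set object, solve returns a list, and the domain argument restricts the solution set:
>>> solveset(x**2 - 4, x)
{-2, 2}
>>> solve(x**2 - 4, x)
[-2, 2]
>>> solveset(exp(x) + 1, x, domain=S.Reals)
EmptySet()</ns0:p>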
<ns0:div><ns0:head n='2.9'>Matrices</ns0:head><ns0:p>Besides being an important feature in its own right, computations on matrices with symbolic entries are important for many algorithms within SymPy. The following code shows some basic usage of the Matrix class.</ns0:p><ns0:formula xml:id='formula_9'>>>> A = Matrix([[x, x + y], [y, x]]) >>> A Matrix([ [x, x + y], [y, x]])</ns0:formula><ns0:p>SymPy matrices support common symbolic linear algebra manipulations, including matrix addition, multiplication, exponentiation, computing determinants, solving linear systems, singular values, and computing inverses using LU decomposition, LDL decomposition, Gauss-Jordan elimination, Cholesky decomposition, Moore-Penrose pseudoinverse, or adjugate matrices.</ns0:p><ns0:p>All operations are performed symbolically. For instance, eigenvalues are computed by generating the characteristic polynomial using the Berkowitz algorithm and then finding its zeros using polynomial routines. Internally these matrices store the elements as Lists of Lists (LIL) <ns0:ref type='bibr' target='#b28'>[27]</ns0:ref>, meaning the matrix is stored as a list of lists of entries (effectively, the input format used to create the matrix A above), making it a dense representation. 11 For storing sparse matrices, the SparseMatrix class can be used. Sparse matrices store their elements in Dictionary of Keys (DOK) format, meaning that the entries are stored as a dict of (row, column) pairs mapping to the elements.</ns0:p><ns0:p>SymPy also supports matrices with symbolic dimension values. MatrixSymbol represents a matrix with dimensions m × n, where m and n can be symbolic. Matrix addition and multiplication, scalar operations, matrix inverse, and transpose are stored symbolically as matrix expressions.</ns0:p><ns0:p>Block matrices are also implemented in SymPy. BlockMatrix elements can be any matrix expression, including explicit matrices, matrix symbols, and other block matrices. All functionalities of matrix expressions are also present in BlockMatrix.</ns0:p><ns0:p>When symbolic matrices are combined with the assumptions submodule for logical inference, they provide powerful reasoning over invertibility, semi-definiteness, orthogonality, etc., which are valuable in the construction of numerical linear algebra systems <ns0:ref type='bibr' target='#b49'>[46]</ns0:ref>.</ns0:p><ns0:p>More examples for Matrix and BlockMatrix may be found in section 10 of the supplementary material.</ns0:p></ns0:div>
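<ns0:p>A brief additional sketch of symbolic matrix manipulation (illustrative only):
>>> M = Matrix([[1, x], [y, 1]])
>>> M.det()
-x*y + 1
>>> M.T
Matrix([
[1, y],
[x, 1]])</ns0:p>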
<ns0:div><ns0:head n='3'>NUMERICS</ns0:head><ns0:p>While SymPy primarily focuses on symbolics, it is impossible to have a complete symbolic system without the ability to numerically evaluate expressions. Many operations directly use numerical 11 Similar to the polynomials submodule, dense here means that all entries are stored in memory, contrasted with a sparse representation where only nonzero entries are stored.</ns0:p></ns0:div>
<ns0:div><ns0:head>10/22</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11410:2:0:NEW 10 Nov 2016)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science evaluation, such as plotting a function, or solving an equation numerically. Beyond this, certain purely symbolic operations require numerical evaluation to effectively compute. For instance, determining the truth value of e + 1 > π is most conveniently done by numerically evaluating both sides of the inequality and checking which is larger.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Floating-Point Numbers</ns0:head><ns0:p>Floating-point numbers in SymPy are implemented by the Float class, which represents an arbitrary-precision binary floating-point number by storing its value and precision (in bits). This representation is distinct from the Python built-in float type, which is a wrapper around machine double types and uses a fixed precision (53-bit).</ns0:p><ns0:p>Because Python float literals are limited in precision, strings should be used to input precise decimal values: The evalf method converts a constant symbolic expression to a Float with the specified precision, here 25 digits:</ns0:p><ns0:p>>>> (pi + 1).evalf(25)</ns0:p></ns0:div>
<ns0:div><ns0:head>4.141592653589793238462643</ns0:head><ns0:p>Float numbers do not track their accuracy, and should be used with caution within symbolic expressions since familiar dangers of floating-point arithmetic apply <ns0:ref type='bibr' target='#b21'>[20]</ns0:ref>. A notorious case is that of catastrophic cancellation:</ns0:p><ns0:p>>>> cos(exp(-100)).evalf(25) -1 0</ns0:p><ns0:p>Applying the evalf method to the whole expression solves this problem. Internally, evalf estimates the number of accurate bits of the floating-point approximation for each sub-expression, and adaptively increases the working precision until the estimated accuracy of the final result matches the sought number of decimal digits:</ns0:p><ns0:p>>>> (cos(exp(-100)) -1).evalf(25) -6.919482633683687653243407e-88</ns0:p><ns0:p>The evalf method works with complex numbers and supports more complicated expressions, such as special functions, infinite series, and integrals. The internal error tracking does not provide rigorous error bounds (in the sense of interval arithmetic) and cannot be used to accurately track uncertainty in measurement data; the sole purpose is to mitigate loss of accuracy that typically occurs when converting symbolic expressions to numerical values.</ns0:p></ns0:div>
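<ns0:p>A short sketch of the point about string input made above (illustrative only): a Python float literal carries only machine precision, whereas a string preserves the intended decimal value at the requested precision:
>>> Float(0.1, 25)
0.1000000000000000055511151
>>> Float('0.1', 25)
0.1000000000000000000000000</ns0:p>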
<ns0:div><ns0:head n='3.2'>The mpmath Library</ns0:head><ns0:p>The implementation of arbitrary-precision floating-point arithmetic is supplied by the mpmath library <ns0:ref type='bibr' target='#b27'>[26]</ns0:ref>. Originally, it was developed as a SymPy submodule but has subsequently been moved to a standalone pure-Python package. The basic datatypes in mpmath are mpf and mpc, which respectively act as multiprecision substitutes for Python's float and complex.</ns0:p></ns0:div>
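<ns0:p>A quick sketch of the mpf and mpc types mentioned above (illustrative only; the printed digits assume a working precision of 30 decimal digits):
>>> from mpmath import mp, mpf, mpc, sqrt
>>> mp.dps = 30
>>> print(sqrt(mpf(2)))
1.41421356237309504880168872421
>>> print(sqrt(mpc(-1)))
(0.0 + 1.0j)</ns0:p>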
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Like SymPy, mpmath is a pure Python library. A design decision of SymPy is to keep it and its required dependencies pure Python. This is a primary advantage of mpmath over other multiple precision libraries such as GNU MPFR <ns0:ref type='bibr' target='#b17'>[17]</ns0:ref>, which is faster. Like SymPy, mpmath is also BSD licensed (GNU MPFR is licensed under the GNU Lesser General Public License <ns0:ref type='bibr' target='#b52'>[49]</ns0:ref>).</ns0:p><ns0:p>Internally, mpmath represents a floating-point number (−1) s x • 2 y by a tuple (s, x, y, b) where</ns0:p><ns0:p>x and y are arbitrary-size Python integers and the redundant integer b stores the bit length of x for quick access. If GMPY <ns0:ref type='bibr' target='#b24'>[23]</ns0:ref> is installed, mpmath automatically uses the gmpy.mpz type for x, and GMPY methods for rounding-related operations, improving performance.</ns0:p><ns0:p>Most mpmath and SymPy functions use the same naming scheme, although this is not true in every case. For example, the symbolic SymPy summation expression Sum(f(x), (x, a, b))</ns0:p><ns0:formula xml:id='formula_10'>representing b x=a f (x) is represented in mpmath as nsum(f, (a, b))</ns0:formula><ns0:p>, where f is a numeric Python function.</ns0:p><ns0:p>The mpmath library supports special functions, root-finding, linear algebra, polynomial approximation, and numerical computation of limits, derivatives, integrals, infinite series, and solving ODEs. All features work in arbitrary precision and use algorithms that allow computing hundreds of digits rapidly (except in degenerate cases).</ns0:p><ns0:p>The double exponential (tanh-sinh) quadrature is used for numerical integration by default.</ns0:p><ns0:p>For smooth integrands, this algorithm usually converges extremely rapidly, even when the integration interval is infinite or singularities are present at the endpoints <ns0:ref type='bibr' target='#b57'>[54,</ns0:ref><ns0:ref type='bibr' target='#b1'>2]</ns0:ref>. However, for good performance, singularities in the middle of the interval must be specified by the user. To evaluate slowly converging limits and infinite series, mpmath automatically tries Richardson extrapolation and the Shanks transformation (Euler-Maclaurin summation can also be used) <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>.</ns0:p><ns0:p>A function to evaluate oscillatory integrals by means of convergence acceleration is also available.</ns0:p><ns0:p>A wide array of higher mathematical functions is implemented with full support for complex values of all parameters and arguments, including complete and incomplete gamma functions, Equivalently, with SymPy's interface this function can be evaluated as:</ns0:p><ns0:formula xml:id='formula_11'>>>> meijerg([[],[0]], [[-S(1)/2,-1,-S(3)/2],[]], 10000).evalf()</ns0:formula></ns0:div>
<ns0:div><ns0:head>2.43925769071996e-94</ns0:head><ns0:p>Symbolic integration and summation often produce hypergeometric and Meijer G-function closed forms (see section 2.5); numerical evaluation of such special functions is a useful complement to direct numerical integration and summation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>PHYSICS SUBMODULE</ns0:head><ns0:p>SymPy includes several submodules that allow users to solve domain specific physics problems.</ns0:p><ns0:p>For example, a comprehensive physics submodule is included that is useful for solving problems in mechanics, optics, and quantum mechanics along with support for manipulating physical quantities with units.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Classical Mechanics</ns0:head><ns0:p>One of the core domains that SymPy supports is the physics of classical mechanics. This is in turn separated into two distinct components: vector algebra and mechanics.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.1'>Vector Algebra</ns0:head><ns0:p>The sympy.physics.vector submodule provides reference frame-, time-, and space-aware vector and dyadic objects that allow for three-dimensional operations such as addition, subtraction, scalar multiplication, inner and outer products, and cross products. The vector and dyadic objects both can be written in very compact notation that make it easy to express the vectors and dyadics in terms of multiple reference frames with arbitrarily defined relative orientations.</ns0:p><ns0:p>The vectors are used to specify the positions, velocities, and accelerations of points; orientations, angular velocities, and angular accelerations of reference frames; and forces and torques. The dyadics are essentially reference frame-aware 3 × 3 tensors <ns0:ref type='bibr' target='#b56'>[53]</ns0:ref>. The vector and dyadic objects can be used for any one-, two-, or three-dimensional vector algebra, and they provide a strong framework for building physics and engineering tools.</ns0:p><ns0:p>The following Python code demonstrates how a vector is created using the orthogonal unit vectors of three reference frames that are oriented with respect to each other, and the result of expressing the vector in the A frame. The B frame is oriented with respect to the A frame using Z-X-Z Euler Angles of magnitude π, π 2 , and π 3 , respectively, whereas the C frame is oriented with respect to the B frame through a simple rotation about the B frame's X unit vector through π 2 .</ns0:p><ns0:p>>>> from sympy.physics.vector import ReferenceFrame </ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.2'>Mechanics</ns0:head><ns0:p>The sympy.physics.mechanics submodule utilizes the sympy.physics.vector submodule to populate time-aware particle and rigid-body objects to fully describe the kinematics and kinetics of a rigid multi-body system. These objects store all of the information needed to derive the ordinary differential or differential algebraic equations that govern the motion of the system, i.e., the equations of motion. These equations of motion abide by Newton's laws of motion and can handle arbitrary kinematic constraints or complex loads. The submodule offers two automated methods for formulating the equations of motion based on Lagrangian Dynamics <ns0:ref type='bibr' target='#b31'>[30]</ns0:ref> and Kane's Method <ns0:ref type='bibr' target='#b29'>[28]</ns0:ref>. Lastly, there are automated linearization routines for constrained dynamical systems <ns0:ref type='bibr' target='#b43'>[41]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Quantum Mechanics</ns0:head><ns0:p>The sympy.physics.quantum submodule has extensive capabilities to solve problems in quantum mechanics, using Python objects to represent the different mathematical objects relevant in quantum theory <ns0:ref type='bibr' target='#b53'>[50]</ns0:ref>: states (bras and kets), operators (unitary, Hermitian, etc.), and basis sets, as well as operations on these objects such as representations, tensor products, inner products, outer products, commutators, and anticommutators. The base objects are designed in the most general way possible to enable any particular quantum system to be implemented by subclassing the base operators and defining the relevant class methods to provide system-specific logic.</ns0:p><ns0:p>Symbolic quantum operators and states may be defined, and one can perform a full range of operations with them. Commutators can be expanded using common commutator identities:</ns0:p></ns0:div>
<ns0:div><ns0:head>>>> Commutator(C+B, A*D).expand(commutator=True) -[A,B]*D -[A,C]*D + A*[B,D] + A*[C,D]</ns0:head><ns0:p>On top of this set of base objects, a number of specific quantum systems have been implemented in a fully symbolic framework. These include:</ns0:p><ns0:p>• Many of the exactly solvable quantum systems, including simple harmonic oscillator states and raising/lowering operators, infinite square well states, and 3D position and momentum operators and states.</ns0:p><ns0:p>• Second quantized formalism of non-relativistic many-body quantum mechanics <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref>.</ns0:p><ns0:p>• Quantum angular momentum <ns0:ref type='bibr'>[65]</ns0:ref>. Spin operators and their eigenstates can be represented in any basis and for any quantum numbers. A rotation operator representing the Wigner D-matrix, which may be defined symbolically or numerically, is also implemented to rotate spin eigenstates. Functionality for coupling and uncoupling of arbitrary spin eigenstates is provided, including symbolic representations of Clebsch-Gordon coefficients and Wigner symbols.</ns0:p><ns0:p>• Quantum information and computing <ns0:ref type='bibr' target='#b36'>[35]</ns0:ref>. Multidimensional qubit states, and a full set of one-and two-qubit gates are provided and can be represented symbolically or as matrices/vectors. With these building blocks, it is possible to implement a number of basic quantum algorithms including the quantum Fourier transform, quantum error correction, quantum teleportation, Grover's algorithm, dense coding, etc. In addition, any quantum circuit may be plotted using the circuit_plot function (Figure <ns0:ref type='figure' target='#fig_17'>1</ns0:ref>).</ns0:p><ns0:p>Here are a few short examples of the quantum information and computing capabilities in sympy.physics.quantum. Start with a simple four-qubit state and flip the second qubit from the right using a Pauli-X gate:</ns0:p><ns0:p>>>> from sympy.physics.quantum.qubit import Qubit >>> from sympy.physics.quantum.gate import XGate >>> q = Qubit('0101')</ns0:p><ns0:formula xml:id='formula_12'>>>> q |0101> >>> X = XGate(1)</ns0:formula><ns0:p>>>> qapply(X*q)</ns0:p></ns0:div>
<ns0:div><ns0:head>|0111></ns0:head><ns0:p>Qubit states can also be used in adjoint operations, tensor products, inner/outer products:</ns0:p><ns0:formula xml:id='formula_13'>>>> Dagger(q) <0101| >>> ip = Dagger(q)*q >>> ip <0101|0101> >>> ip.doit()<ns0:label>1</ns0:label></ns0:formula><ns0:p>Quantum gates (unitary operators) can be applied to transform these states and then classical measurements can be performed on the results: Lastly, the following example demonstrates creating a three-qubit quantum Fourier transform, decomposing it into one-and two-qubit gates, and then generating a circuit plot for the sequence of gates (see Figure <ns0:ref type='figure' target='#fig_17'>1</ns0:ref>). </ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>ARCHITECTURE</ns0:head><ns0:p>Software architecture is of central importance in any large software project because it establishes predictable patterns of usage and development <ns0:ref type='bibr' target='#b54'>[51]</ns0:ref>. This section describes the essential structural components of SymPy, provides justifications for the design decisions that have been made, and gives example user-facing code as appropriate.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>The Core</ns0:head><ns0:p>A computer algebra system stores mathematical expressions as data structures. For example, the mathematical expression x + y is represented as a tree with three nodes, +, x, and y, where x and y are ordered children of +. As users manipulate mathematical expressions with traditional mathematical syntax, the CAS manipulates the underlying data structures. Symbolic computations such as integration, simplification, etc. are all functions that consume and produce expression trees.</ns0:p><ns0:p>In SymPy every symbolic expression is an instance of the class Basic, 12 the superclass of all SymPy types providing common methods to all SymPy tree-elements, such as traversals. The children of a node in the tree are held in the args attribute. A leaf node in the expression tree has empty args.</ns0:p><ns0:p>For example, consider the expression xy + 2: Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>By order of operations, the parent of the expression tree for expr is an addition. It is of type A useful way to view an expression tree is using the srepr function, which returns a string representation of an expression as valid Python code 13 with all the nested class constructor calls to create the given expression. This means that expressions are rebuildable from their args. 14 Note that in SymPy the == operator represents exact structural equality, not mathematical equality. This allows testing if any two expressions are equal to one another as expression trees. For example, even though (x + 1) 2 and x 2 + 2x + 1 are equal mathematically, SymPy gives Another important property of SymPy expressions is that they are immutable. This simplifies the design of SymPy, and enables expression interning. It also enables expressions to be hashed, which allows expressions to be used as keys in Python dictionaries, and is used to implement caching in SymPy.</ns0:p><ns0:p>Python allows classes to override mathematical operators. The Python interpreter translates the above x*y + 2 to, roughly, (x.__mul__(y)).__add__ <ns0:ref type='bibr' target='#b1'>(2)</ns0:ref>. Both x and y, returned from the symbols function, are Symbol instances. The 2 in the expression is processed by Python as a 13 The dotprint function from the sympy.printing.dot submodule prints output to dot format, which can be rendered with Graphviz to visualize expression trees graphically.</ns0:p><ns0:p>14 expr.func is used instead of type(expr) to allow the function of an expression to be distinct from its actual Python class. In most cases the two are the same.</ns0:p></ns0:div>
<ns0:div><ns0:head>16/22</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11410:2:0:NEW 10 Nov 2016)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science literal, and is stored as Python's built in int type. When 2 is passed to the __add__ method of Symbol, it is converted to the SymPy type Integer(2) before being stored in the resulting expression tree. In this way, SymPy expressions can be built in the natural way using Python operators and numeric literals.</ns0:p></ns0:div>
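<ns0:p>The expression tree for xy + 2 can be inspected directly; a sketch of the func and args attributes discussed above (the original code figure is not reproduced in this extraction):
>>> expr = x*y + 2
>>> expr.func
<class 'sympy.core.add.Add'>
>>> expr.args
(2, x*y)
>>> expr.args[1].args
(x, y)</ns0:p>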
<ns0:div><ns0:head n='5.2'>Extensibility</ns0:head><ns0:p>While the core of SymPy is relatively small, it has been extended to a wide variety of domains by a broad range of contributors. This is due, in part, to the fact that the same language, Python, is used both for the internal implementation and the external usage by users. All of the extensibility capabilities available to users are also utilized by SymPy itself. This eases the transition pathway from SymPy user to SymPy developer.</ns0:p><ns0:p>The typical way to create a custom SymPy object is to subclass an existing SymPy class, usually Basic, Expr, or Function. As it was stated before, all SymPy classes used for expression trees should be subclasses of the base class Basic. Expr is the Basic subclass for mathematical objects that can be added and multiplied together. The most commonly seen classes in SymPy are subclasses of Expr, including Add, Mul, and Symbol. Instances of Expr typically represent complex numbers, but may also include other 'rings', like matrix expressions. Not all SymPy classes are subclasses of Expr. For instance, logic expressions, such as And(x, y), are subclasses of Basic but not of Expr. 15 The Function class is a subclass of Expr which makes it easier to define mathematical functions called with arguments. This includes named functions like sin(x) and log(x) as well as undefined functions like f (x). Subclasses of Function should define a class method eval, which returns an evaluated value for the function application (usually an instance of some other class, e.g., a Number), or None if for the given arguments it should not be automatically evaluated.</ns0:p><ns0:p>Many SymPy functions perform various evaluations down the expression tree. Classes define their behavior in such functions by defining a relevant _eval_* method. For instance, an object can indicate to the diff function how to take the derivative of itself by defining the _eval_derivative(self, x) method, which may in turn call diff on its args. (Subclasses of Function should implement the fdiff method instead; it returns the derivative of the function without considering the chain rule.) The most common _eval_* methods relate to the assumptions: _eval_is_assumption is used to deduce assumption on the object. Listing 1 presents an example of this extensibility. It gives a stripped down version of the gamma function Γ(x) from SymPy. The methods defined allow it to evaluate itself on positive integer arguments, define the real assumption, allow it to be rewritten in terms of factorial (with gamma(x).rewrite(factorial)), and allow it to be differentiated. self.func is used throughout instead of referencing gamma explicitly so that potential subclasses of gamma can reuse the methods. The gamma function implemented in SymPy has many more capabilities than the above listing, such as evaluation at rational points and series expansion.</ns0:p></ns0:div>
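<ns0:p>Listing 1 is not reproduced in this extraction; as a minimal stand-in, the following sketch defines a hypothetical function myfunc that evaluates itself at zero via eval and supplies its own derivative rule via fdiff:
>>> class myfunc(Function):
...     @classmethod
...     def eval(cls, arg):
...         if arg is S.Zero:
...             return S.One
...     def fdiff(self, argindex=1):
...         return 2*myfunc(self.args[0])
>>> myfunc(0)
1
>>> myfunc(x).diff(x)
2*myfunc(x)</ns0:p>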
<ns0:div><ns0:head n='5.3'>Performance</ns0:head><ns0:p>Due to being written in pure Python without the use of extension modules, SymPy's performance characteristics are generally poorer than those of its commercial competitors. For many applications, the performance of SymPy, as measured by clock cycles, memory usage, and memory layout, is sufficient. However, the boundaries for when SymPy's pure Python strategy becomes insufficient are when the user requires handling of very long expressions or many small expressions.</ns0:p><ns0:p>Where this boundary lies depends on the system at hand, but tends to be within the range of 10^4-10^6 symbols for modern computers.</ns0:p><ns0:p>For this reason, a new project called SymEngine <ns0:ref type='bibr' target='#b63'>[60]</ns0:ref> has been started. The aim of this project is to develop a library with better performance characteristics for symbolic manipulation.</ns0:p><ns0:p>SymEngine is a pure C++ library, which allows it fine-grained control over the memory layout of expressions. SymEngine has thin wrappers to other languages (Python, Ruby, Julia, etc.). Its aim is to be the fastest symbolic manipulation library. Preliminary benchmarks suggest that SymEngine performs as well as its commercial and open source competitors.</ns0:p><ns0:p>The development version of SymPy has recently started to use SymEngine as an optional backend, initially in sympy.physics.mechanics only. Future work will involve allowing more algorithms in SymPy to use SymEngine as a backend.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>PROJECTS THAT DEPEND ON SYMPY</ns0:head><ns0:p>There are several projects that depend on SymPy as a library for implementing a part of their functionality. A selection of these projects are listed in Table <ns0:ref type='table'>3</ns0:ref>. Table <ns0:ref type='table'>3</ns0:ref>. Selected projects that depend on SymPy.</ns0:p></ns0:div>
<ns0:div><ns0:head>Project name Description</ns0:head></ns0:div>
<ns0:div><ns0:head>SymPy Gamma</ns0:head><ns0:p>An open source analog of Wolfram|Alpha that uses SymPy <ns0:ref type='bibr' target='#b64'>[61]</ns0:ref>. There is more information about SymPy Gamma in section 11 of the supplementary material.</ns0:p></ns0:div>
<ns0:div><ns0:head>Cadabra</ns0:head><ns0:p>A CAS designed specifically for the resolution of problems encountered in field theory <ns0:ref type='bibr' target='#b41'>[39]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>GNU Octave Symbolic Package</ns0:head><ns0:p>An implementation of a symbolic toolbox for Octave using SymPy <ns0:ref type='bibr' target='#b62'>[59]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>SymPy.jl</ns0:head><ns0:p>A Julia interface to SymPy, provided using PyCall <ns0:ref type='bibr' target='#b65'>[62]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Mathics</ns0:head><ns0:p>A free, online CAS featuring Mathematica compatible syntax and functions <ns0:ref type='bibr' target='#b59'>[56]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Mathpix</ns0:head><ns0:p>An iOS App that detects handwritten math as input and uses SymPy Gamma to evaluate the math input and generate the relevant steps to solve the problem <ns0:ref type='bibr' target='#b34'>[33]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>IKFast</ns0:head><ns0:p>A robot kinematics compiler provided by OpenRAVE <ns0:ref type='bibr' target='#b12'>[12]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>SageMath</ns0:head><ns0:p>A free open-source mathematics software system, which builds on top of many existing open-source packages, including SymPy <ns0:ref type='bibr' target='#b61'>[58]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>PyDy</ns0:head><ns0:p>Multibody Dynamics with Python <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref>. galgebra A Python package for geometric algebra (previously sympy.galgebra) <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>. </ns0:p></ns0:div>
<ns0:div><ns0:head>Quameon</ns0:head><ns0:p>Quantum Monte Carlo in Python <ns0:ref type='bibr' target='#b60'>[57]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Lcapy</ns0:head><ns0:p>An experimental Python package for teaching linear circuit analysis <ns0:ref type='bibr' target='#b58'>[55]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='7'>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>SymPy is a robust computer algebra system that provides a wide spectrum of features both in traditional computer algebra and in a plethora of scientific disciplines. It can be used in a first-class way with other Python projects, including the scientific Python stack.</ns0:p><ns0:p>SymPy supports a wide array of mathematical facilities. These include functions for assuming and deducing common mathematical facts, simplifying expressions, performing common calculus operations, manipulating polynomials, pretty printing expressions, solving equations, and representing symbolic matrices. Other supported facilities include discrete math, concrete math, plotting, geometry, statistics, sets, series, vectors, combinatorics, group theory, code generation, tensors, Lie algebras, cryptography, and special functions. SymPy has strong support for arbitrary precision numerics, backed by the mpmath package. Additionally, SymPy contains submodules targeting certain specific physics domains, such as classical mechanics and quantum mechanics. This breadth of domains has been engendered by a strong and vibrant user community. Anecdotally, many of these users chose SymPy because of its ease of access. SymPy is a dependency of many external projects across a wide spectrum of domains.</ns0:p><ns0:p>SymPy expressions are immutable trees of Python objects. Unlike many other CAS's, SymPy is designed to be used in an extensible way: both as an end-user application and as a library.</ns0:p><ns0:p>SymPy uses Python both as the internal language and the user language. This permits users to access the same methods used by the library itself in order to extend it for their needs.</ns0:p><ns0:p>Some of the planned future work for SymPy includes work on improving code generation, improvements to the speed of SymPy using SymEngine, improving the assumptions system, and improving the solvers submodule.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Conclusions and future directions for SymPy are given in section 7. All examples in this paper use SymPy version 1.0 and mpmath version 0.19.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>function as p/q with common factors canceled apart compute the partial fraction decomposition of a rational function trigsimp simplify trigonometric expressions</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>>>> limit(x*sin(1/x), x, oo) 1 As a more complex example, SymPy computes lim x→0 2e 1−cos (x) sin (x) − 1 sinh (x) atan 2 (x) = e. >>> limit((2*exp((1-cos(x))/sin(x))-1)**(sinh(x)/atan(x)**2), x, 0) E Derivatives are computed with the diff function, which recursively uses the various differentiation rules.>>> diff(sin(x)*exp(x), x) exp(x)*sin(x) + exp(x)*cos(x)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>7 / 22</ns0:head><ns0:label>722</ns0:label><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11410:2:0:NEW 10 Nov 2016) Manuscript to be reviewed Computer Science >>> i, n = symbols('i n') >>> summation(2**i, (i, 0, n -1)) 2**n -1 >>> summation(i*factorial(i), (i, 1, n)) n*factorial(n) + factorial(n) -1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>>>> phi0 = Symbol('phi0') >>> str(Integral(sqrt(phi0), phi0)) 'Integral(sqrt(phi0), phi0)'</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>solveset</ns0:head><ns0:label /><ns0:figDesc>has several design changes with respect to the older solve function. This distinction is present in order to resolve the usability issues with the previous solve function API while maintaining backward compatibility with earlier versions of SymPy. solveset only requires essential input information from the user. The function signatures of solve and solveset are solve(f, *symbols, **flags) solveset(f, symbol, domain=S.Complexes)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>>>> A.eigenvals() {x -sqrt(y*(x + y)): 1, x + sqrt(y*(x + y)): 1}</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>Bessel functions, orthogonal polynomials, elliptic functions and integrals, zeta and polylogarithm functions, the generalized hypergeometric function, and the Meijer G-function. The Meijer G-function instance G 3,0 1,3 0; 1 2 , −1, − 3 2 |x is a good test case [63]; past versions of both Maple and Mathematica produced incorrect numerical values for large x > 0. Here, mpmath automatically removes an internal singularity and compensates for cancellations (amounting to 656 bits of precision when x = 10000), giving correct values: >>> mpmath.mp.dps = 15 >>> mpmath.meijerg([[],[0]], [[-0.5,-1,-1.5],[]], 10000) mpf('2.4392576907199564e-94')</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>>>> A, B, C = symbols('A B C', cls=ReferenceFrame) >>> B.orient(A, 'body', (pi, pi/3, pi/4), 'zxz') >>> C.orient(B, 'axis', (pi/2, B.x)) >>> v = 1*A.x + 2*B.z + 3*C.y >>> v A.x + 2*B.z + 3*C.y >>> v.express(A) A.x + 5*sqrt(3)/2*A.y + 5/2*A.z</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>22 PeerJ</ns0:head><ns0:label>22</ns0:label><ns0:figDesc>>>> from sympy.physics.quantum import Commutator, Dagger, Operator >>> from sympy.physics.quantum import Ket, qapply >>> A, B, C, D = symbols('A B C D', cls=Operator) >>> a = Ket('a') >>> comm = Commutator(A, B) >>> comm [A,B] >>> qapply(Dagger(comm*a)).doit() 13/Comput. Sci. reviewing PDF | (CS-2016:06:11410:2:0:NEW 10 Nov 2016) Manuscript to be reviewed Computer Science -<a|*(Dagger(A)*Dagger(B) -Dagger(B)*Dagger(A))</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>14 / 22 PeerJFigure 1 .</ns0:head><ns0:label>14221</ns0:label><ns0:figDesc>Figure 1. The circuit diagram for a three-qubit quantum Fourier transform generated by SymPy.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>>>> from sympy.physics.quantum.qft import QFT >>> from sympy.physics.quantum.circuitplot import circuit_plot >>> fourier = QFT(0,3).decompose() >>> fourier SWAP(0,2)*H(0)*C((0),S(1))*H(1)*C((0),T(2))*C((1),S(2))*H(2) >>> c = circuit_plot(fourier, nqubits=3)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>2 12 15 / 22 PeerJ</ns0:head><ns0:label>21522</ns0:label><ns0:figDesc>>>> x, y = symbols('x y') >>> expr = x*y + Some internal classes, such as those used in the polynomial submodule, do not follow this rule for efficiency reasons. Comput. Sci. reviewing PDF | (CS-2016:06:11410:2:0:NEW 10 Nov 2016)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Add.</ns0:head><ns0:label /><ns0:figDesc>The child nodes of expr are 2 and x*y. >>> type(expr) <class 'sympy.core.add.Add'> >>> expr.args (2, x*y) Descending further down into the expression tree yields the full expression. For example, the next child node (given by expr.args[0]) is 2. Its class is Integer, and it has an empty args tuple, indicating that it is a leaf node. >>> expr.args[0] 2 >>> type(expr.args[0]) <class 'sympy.core.numbers.Integer'> >>> expr.args[0].args () Symbols or symbolic constants, like e or π, are other examples of leaf nodes.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head /><ns0:label /><ns0:figDesc>>>> srepr(expr)'Add(Mul(Symbol('x'), Symbol('y')), Integer(2))'Every SymPy expression satisfies a key identity invariant: expr.func(*expr.args) == expr</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>>>> (x + 1 ) 1 False</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>**2 == x**2 + 2*x + because they are different as expression trees (the former is a Pow object and the latter is an Add object).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head>Listing 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>A minimal implementation of sympy.gamma. from sympy import Function, Integer, factorial, polygamma class gamma(Function): @classmethod def eval(cls, arg): if isinstance(arg, Integer) and arg.is_positive: return factorial(arg -1) def _eval_is_real(self): x = self.args[0] # noninteger means real and not integer if x.is_positive or x.is_noninteger: return True def _eval_rewrite_as_factorial(self, z): return factorial(z -1) 15 See section 3 of the supplementary material for more information on the sympy.logic submodule. 17/22 PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11410:2:0:NEW 10 Nov 2016) Manuscript to be reviewed Computer Science def fdiff(self, argindex=1): from sympy.core.function import ArgumentIndexError if argindex == 1: return self.func(self.args[0])*polygamma(0, self.args[0]) else: raise ArgumentIndexError(self, argindex)</ns0:figDesc></ns0:figure>
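To make the extension mechanism concrete, the following short interactive session (a sketch only, assuming the gamma class from Listing 1 has been defined and shadows SymPy's built-in gamma) exercises the hooks defined above:

>>> from sympy import Integer, Symbol, diff, factorial, polygamma
>>> gamma(Integer(5))            # eval() fires for positive integers
24
>>> x = Symbol('x', positive=True)
>>> gamma(x).is_real             # _eval_is_real consults the assumptions on x
True
>>> gamma(x).rewrite(factorial)  # _eval_rewrite_as_factorial
factorial(x - 1)
>>> diff(gamma(x), x)            # fdiff supplies the derivative
gamma(x)*polygamma(0, x)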
<ns0:figure xml:id='fig_18'><ns0:head>18 / 22 PeerJ</ns0:head><ns0:label>1822</ns0:label><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2016:06:11410:2:0:NEW 10 Nov 2016) Manuscript to be reviewed Computer Science yt A Python package for analyzing and visualizing volumetric data [64]. SfePy A Python package for solving partial differential equations (PDEs) in 1D, 2D, and 3D by the finite element (FE) method [66, 10].</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>SymPy Features and Descriptions.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Feature (submodules)</ns0:cell><ns0:cell>Description</ns0:cell></ns0:row><ns0:row><ns0:cell>Calculus (sympy.core,</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>sympy.calculus,</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>sympy.integrals,</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>sympy.series)</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Series expansion, sequences, and limits of sequences. This includes Taylor, Laurent, and Puiseux series as well as special series, such as Fourier and formal power series. Sets (sympy.sets) Representations of empty, finite, and infinite sets (including special sets such as the natural, integer, and complex numbers). Operations on sets such as union, intersection,</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell cols='2'>Manuscript to be reviewed</ns0:cell></ns0:row><ns0:row><ns0:cell>Series (sympy.series)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Cartesian product, and building sets from other sets are</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>supported.</ns0:cell></ns0:row><ns0:row><ns0:cell>Simplification</ns0:cell><ns0:cell cols='2'>Functions for manipulating and simplifying expressions. In-</ns0:cell></ns0:row><ns0:row><ns0:cell>(sympy.simplify)</ns0:cell><ns0:cell cols='2'>cludes algorithms for simplifying hypergeometric functions,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>trigonometric expressions, rational functions, combinatorial</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>functions, square root denesting, and common subexpression</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>elimination.</ns0:cell><ns0:cell>for two</ns0:cell></ns0:row><ns0:row><ns0:cell>Solvers (sympy.solvers)</ns0:cell><ns0:cell cols='2'>univariate polynomials. Functions for symbolically solving equations, systems of equa-</ns0:cell></ns0:row><ns0:row><ns0:cell>Cryptography</ns0:cell><ns0:cell cols='2'>Block and stream ciphers, including shift, Affine, substitution, tions, both linear and non-linear, inequalities, ordinary differ-</ns0:cell></ns0:row><ns0:row><ns0:cell>(sympy.crypto)</ns0:cell><ns0:cell cols='2'>Vigenère's, Hill's, bifid, RSA, Kid RSA, linear-feedback shift ential equations, partial differential equations, Diophantine</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>registers, and Elgamal encryption. equations, and recurrence relations.</ns0:cell></ns0:row><ns0:row><ns0:cell>Differential Geometry Special Functions</ns0:cell><ns0:cell cols='2'>Representations of manifolds, metrics, tensor products, and Implementations of a number of well known special functions,</ns0:cell></ns0:row><ns0:row><ns0:cell>(sympy.diffgeom) (sympy.functions)</ns0:cell><ns0:cell cols='2'>coordinate systems in Riemannian and pseudo-Riemannian including Dirac delta, Gamma, Beta, Gauss error functions,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>geometries [52]. Fresnel integrals, Exponential integrals, Logarithmic integrals,</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Geometry (sympy.geometry) Representations of 2D geometrical entities, such as lines and Trigonometric integrals, Bessel, Hankel, Airy, B-spline, Rie-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>circles. Enables queries on these entities, such as asking the mann Zeta, Dirichlet eta, polylogarithm, Lerch transcendent,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>area of an ellipse, checking for collinearity of a set of points, hypergeometric, elliptic integrals, Mathieu, Jacobi polynomi-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>or finding the intersection between objects. 
als, Gegenbauer polynomial, Chebyshev polynomial, Legendre</ns0:cell></ns0:row><ns0:row><ns0:cell>Lie Algebras</ns0:cell><ns0:cell cols='2'>Representations of Lie algebras and root systems. polynomial, Hermite polynomial, Laguerre polynomial, and</ns0:cell></ns0:row><ns0:row><ns0:cell>(sympy.liealgebras)</ns0:cell><ns0:cell>spherical harmonic functions.</ns0:cell></ns0:row><ns0:row><ns0:cell>Logic (sympy.logic) Statistics (sympy.stats)</ns0:cell><ns0:cell cols='2'>Boolean expressions, equivalence testing, satisfiability, and Support for a random variable type as well as the ability</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>normal forms. to declare this variable from prebuilt distribution functions</ns0:cell></ns0:row><ns0:row><ns0:cell>Matrices (sympy.matrices)</ns0:cell><ns0:cell cols='2'>Tools for creating matrices of symbols and expressions. Both such as Normal, Exponential, Coin, Die, and other custom</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>sparse and dense representations, as well as symbolic linear distributions [47].</ns0:cell></ns0:row><ns0:row><ns0:cell>Tensors (sympy.tensor)</ns0:cell><ns0:cell cols='2'>algebraic operations (e.g., inversion and factorization), are Symbolic manipulation of indexed objects.</ns0:cell></ns0:row><ns0:row><ns0:cell>Vectors (sympy.vector)</ns0:cell><ns0:cell cols='2'>supported. Basic operations on vectors and differential calculus with</ns0:cell></ns0:row><ns0:row><ns0:cell>Matrix Expressions</ns0:cell><ns0:cell cols='2'>Matrices with symbolic dimensions (unspecified entries). respect to 3D Cartesian coordinate systems.</ns0:cell></ns0:row><ns0:row><ns0:cell>(sympy.matrices.expressions)</ns0:cell><ns0:cell>Block matrices.</ns0:cell></ns0:row><ns0:row><ns0:cell>Number Theory</ns0:cell><ns0:cell cols='2'>Prime number generation, primality testing, integer factor-</ns0:cell></ns0:row><ns0:row><ns0:cell>(sympy.ntheory)</ns0:cell><ns0:cell cols='2'>ization, continued fractions, Egyptian fractions, modular</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>arithmetic, quadratic residues, partitions, binomial and multi-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>nomial coefficients, prime number tools, hexidecimal digits</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>of π, and integer factorization.</ns0:cell></ns0:row><ns0:row><ns0:cell>Plotting (sympy.plotting)</ns0:cell><ns0:cell cols='2'>Hooks for visualizing expressions via matplotlib [25] or as</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>text drawings when lacking a graphical back-end. 2D func-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>tion plotting, 3D function plotting, and 2D implicit function</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>plotting are supported.</ns0:cell></ns0:row><ns0:row><ns0:cell>Polynomials (sympy.polys)</ns0:cell><ns0:cell cols='2'>Polynomial algebras over various coefficient domains. 
Func-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>tionality ranges from simple operations (e.g., polynomial divi-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>sion) to advanced computations (e.g., Gröbner bases [1] and</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>multivariate factorization over algebraic number domains).</ns0:cell></ns0:row><ns0:row><ns0:cell>Printing (sympy.printing)</ns0:cell><ns0:cell cols='2'>Functions for printing SymPy expressions in the terminal</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>with ASCII or Unicode characters and converting SymPy</ns0:cell></ns0:row><ns0:row><ns0:cell>Quantum Mechanics</ns0:cell><ns0:cell cols='2'>expressions to L A T E X and MathML. Quantum states, bra-ket notation, operators, basis sets, rep-</ns0:cell></ns0:row><ns0:row><ns0:cell>(sympy.physics.quantum)</ns0:cell><ns0:cell cols='2'>resentations, tensor products, inner products, outer products,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>commutators, anticommutators, and specific quantum system</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>implementations.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>4/22</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11410:2:0:NEW 10 Nov 2016)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Some SymPy Simplification Functions</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:note place='foot' n='1'>This paper assumes a moderate familiarity with the Python programming language.</ns0:note>
<ns0:note place='foot' n='8'>In a dense representation, the coefficients for all terms up to the degree of each variable are stored in memory. In a sparse representation, only the nonzero coefficients are stored.<ns0:ref type='bibr' target='#b8'>9</ns0:ref> Many Python libraries distinguish the str form of an object, which is meant to be human-readable, and the</ns0:note>
</ns0:body>
" | "Aaron Meurer
ERGS: http://www.ergs.sc.edu/
Nuclear Eng., Mechanical Eng. Dept.
University of South Carolina
541 Main Street, St. 009
Columbia, SC 29201
[email protected]
November 9, 2016
Dear Editors,
We thank you and the referees for your additional comments and suggestions on our manuscript,
“SymPy: symbolic computing in Python”. We have made all the changes that were requested. As
before, your comments and concerns are addressed below, with your and the reviewers’ comments
on white background and our replies on gray background.
Given these and the previous edits, we believe that our manuscript is now suitable for publication
in PeerJ Computer Science. If there are any further questions or concerns please do not hesitate to
contact us.
Best regards,
Aaron Meurer
1 Comments from Editor
1. In section 2.3, lines 128–129, it is written
   For instance, the identity √(t²) = t holds if t is nonnegative (t ≥ 0). However,
   for general complex t, no such identity holds.
The first sentence doesn’t make sense unless √· is defined; obviously it is intended to
yield the nonnegative square root. The second sentence is incorrect, so needs rewording
or deleting. For complex t it is true that √(t²) = t holds if t lies in the right half-plane,
assuming √· is defined to be the square root lying in the right half-plane.
We reworked the two sentences to clarify the situation. We have added a footnote with
the definition of √z that SymPy uses.
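For illustration only (not part of the manuscript text), the behaviour being discussed can be checked in a standard SymPy session; a minimal sketch, assuming SymPy 1.0 or later:

>>> from sympy import Symbol, sqrt
>>> t = Symbol('t')
>>> sqrt(t**2)                   # no simplification for a general complex symbol
sqrt(t**2)
>>> p = Symbol('p', nonnegative=True)
>>> sqrt(p**2)                   # simplifies once p >= 0 is assumed
p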
2. Page 7, line 186: please state in the text whether this is typed as two lower case letter
o’s, as opposed to being some special symbol.
We have added a note here.
3. Page 10, line 315: replace “solving” by “finding its zeros”.
We have fixed this.
4. Page 18, line 684: replace “is as performant as” by “performs as well as”. The former is
not correct English.
We have fixed this.
5. Reference [42]: the title and publisher appear to have run together.
We have fixed this.
6. Supplement, line 1: delete “for”.
We have fixed this.
7. Supplement, line 18: “dependends” → “depends”.
We have fixed this.
2 Comments from Reviewer 1
There are no additional comments from reviewer 1.
3 Comments from Reviewer 2
3.1 Basic reporting
No comments.
3.2 Experimental design
No comments.
3.3 Validity of the findings
No comments.
3.4 Comments for the author
This is a great paper. Thank you for addressing my concerns.
4 Comments from Reviewer 3
4.1 Basic reporting
The structure of the paper and the supplement is now improved, compared to the original
submission, and the materials are now much easier to read and comprehend.
4.2 Experimental design
No Comments
4.3 Validity of the findings
No Comments
4.4 Comments for the author
4.4.1 The paper
1. The abstract and the paper itself mention the paper’s supplement in several places. I
believe it would be beneficial to add a short paragraph, possibly after line 90, that would
list the supplement’s contents in a manner similar to that of the paper’s contents (lines
83–90), with the list submodules, to make it easier for an interested reader to find their
way around it.
We have added a paragraph about the supplement at the suggested location. We have
also referenced specific supplement section numbers throughout the paper.
2. In line 323, it should say “. . . meaning that the entries. . . ”.
We have fixed this.
3. In line 673, it should say either “poorer than that of its commercial competitors” or
“poorer than its commercial competitors’” (ending in apostrophe).
We have used the first suggestion.
4. In line 813, one “PhD thesis” seems to be superfluous.
We have fixed this.
4.4.2 The supplement
5. Figure 1 is hard to read. I suggest putting the labels on the outside of the loops.
The figure code is generated automatically from the category theory submodule. We
have made this clearer by changing the figure caption to “A diagram typeset in Xy-pic
automatically by XypicDiagramDrawer”. We have opened issue 11744 in our public
issue tracker for improving the label placement.
6. Figure 2 has the wrong link towards SymPy Gamma’s computation of tan(x) instead of
∫ tan(x) dx.
The original link does include the integral steps, since SymPy Gamma automatically
includes the integral as one of the computations when given an expression. However, we
have changed the link as suggested to a query for “integrate tan(x)”, as this is clearer.
" | Here is a paper. Please give your review comments after reading it. |
707 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Network embedding has shown its effectiveness in many tasks, such as link prediction, node classification, and community detection. Most attributed network embedding methods consider topological features and attribute features to obtain a node embedding but ignore its implicit information behavior features, including information inquiry, interaction, and sharing. These can potentially lead to ineffective performance for downstream applications. In this paper, we propose a novel network embedding framework, named information behavior extraction (IBE), that incorporates nodes' topological features, attribute features, and information behavior features within a joint embedding framework. To design IBE, we use an existing embedding method (e.g., SDNE, CANE, or CENE) to extract a node's topological features and attribute features into a basic vector. Then, we propose a topic-sensitive network embedding (TNE) model to extract a node's information behavior features and eventually generate information behavior feature vectors. In our TNE model, we design an importance score rating algorithm (ISR), which considers both effects of the topic-based community of a node and its interaction with adjacent nodes to capture the node's information behavior features. Eventually, we concatenate a node's information behavior feature vector with its basic vector to get its ultimate joint embedding vector. Extensive experiments demonstrate that our method achieves significant and consistent improvements compared to several state-of-the-art embedding methods on link prediction.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Network embedding (NE) aiming to map nodes of networks into a low-dimensional vector space, has been proved extremely useful in many applications, such as node classification <ns0:ref type='bibr' target='#b21'>(Perozzi et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b26'>Tang et al., 2015)</ns0:ref>, node clustering <ns0:ref type='bibr' target='#b2'>(Cao et al., 2015)</ns0:ref>, link prediction <ns0:ref type='bibr' target='#b7'>(Grover and Leskovec, 2016)</ns0:ref>. A number of network embedding models have been proposed to learn low-dimensional vectors for nodes via leveraging their structure and attribute information in the network. For example, spectral clustering is an early method for learning node embedding, including models such as DGE <ns0:ref type='bibr' target='#b22'>(Perrault-Joncas and Meilǎ, 2011)</ns0:ref>, LE <ns0:ref type='bibr' target='#b31'>(Wang, 2012)</ns0:ref> and LLE <ns0:ref type='bibr' target='#b24'>(Roweis and Saul, 2000)</ns0:ref>. Matrix decomposition is another important method for learning node embedding, for example, GraRep <ns0:ref type='bibr' target='#b2'>(Cao et al., 2015)</ns0:ref> and TADW <ns0:ref type='bibr' target='#b33'>(Yang et al., 2015)</ns0:ref>.</ns0:p><ns0:p>Spectral clustering and matrix decomposition embedding methods usually have high computational complexity with the increasing network size. In recent years, various network embedding methods have been proposed using random walk-based methods, which are faster and more effective, including DeepWalk <ns0:ref type='bibr' target='#b21'>(Perozzi et al., 2014)</ns0:ref>, Node2Vec <ns0:ref type='bibr' target='#b7'>(Grover and Leskovec, 2016)</ns0:ref>, LINE <ns0:ref type='bibr' target='#b26'>(Tang et al., 2015)</ns0:ref>, and SDNE <ns0:ref type='bibr' target='#b29'>(Wang et al., 2016)</ns0:ref>. More recently, deep learning and attention mechanisms are used to generate network embeddings, including GCN <ns0:ref type='bibr' target='#b13'>(Kipf and Welling, 2016)</ns0:ref>, GAT <ns0:ref type='bibr' target='#b28'>(Veličković et al., 2018)</ns0:ref> and CANE </ns0:p><ns0:formula xml:id='formula_0'>(v 0 , v 3 ) ∈ C 0 , v 1 ∈ C 1 , v 2 ∈ C 2 ).</ns0:formula><ns0:p>Nodes interact in intra-community, such as nodes v 0 and v 3 . Nodes in different communities may interact with each other, such as nodes v 0 and v 1 , v 1 and v 2 , v 2 and v 3 . Meantime, due to the existence of bridge nodes v 0 or v 2 , nodes v 1 and v 3 may have a link which is not represented in the current network. <ns0:ref type='bibr' target='#b27'>(Tu et al., 2017)</ns0:ref>, which extract structure, text, topic, and other heterogeneous feature information more effectively. In addition to the above methods, a few role-based approaches of network embedding <ns0:ref type='bibr' target='#b11'>(Jiao et al., 2021)</ns0:ref> have been proposed recently, where a role-based network embedding <ns0:ref type='bibr' target='#b35'>(Zhang et al., 2021)</ns0:ref> captures the structural similarity and the role information. Nevertheless, these methods are all limited with focusing on the network topological structure and attributed information while ignoring the implicit relationship between information. 
In reality, information has interactive behavior in some real-world networks such as social networks and citation networks, where the nodes have information behavior <ns0:ref type='bibr' target='#b23'>(Pettigrew et al., 2001)</ns0:ref>, including information inquiry, information access, and information sharing.</ns0:p><ns0:p>Therefore, we introduce the concept of informational behavior to deal with the information interaction.</ns0:p><ns0:p>In real-world social networks and citation networks, it is intuitive that all of the nodes naturally prefer to interact with similar nodes. In this way, the exchange and sharing of information between nodes are more efficient, which is also the reason for the formation of various topic networks communities. For example, due to different majors, three topic-based communities, C 0 , C 1 , and C 2 , have been formed (as illustrated in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>). In these communities, nodes interact within both intra-community (such as nodes v 0 and v 3 ) and inter-community (such as nodes v 0 and v 1 , v 1 and v 2 , v 2 and v 3 ). This means that one node may communicate and share information in various topics when interacting with neighboring nodes of different communities and build bridges among nodes that are not directly connected, such as nodes v 1 and v 3 may have a link because nodes v 0 or v 2 acts as a bridge, but this link is not observed. It can be seen that these information behaviors are very important features, and the representation vectors for nodes without information behavior features are incomplete. However, these existing embedding methods are not able to cope with the information behavior of nodes.</ns0:p><ns0:p>To tackle the above-identified problems, we make the following contributions: (1) We demonstrate the importance of integrating structure features, attributed features, and node's information behavior features in attribute networks. (2) We propose a joint embedding framework IBE to add the information behavior feature vector to a basic vector to obtain a final joint embedding vector, which has never been considered in the literature. The basic vector is generated by one of existing embedding methods. Within the framework, we design an algorithm ISR to generate a topic-sensitive vector for a given topic, and then we get information behavior feature vectors by matrix transposing a topic-sensitive embedding matrix composed of all topic-sensitive vectors. (3) We conduct extensive experiments in real-world information networks. Experimental results prove the effectiveness and efficiency of the proposed ISR algorithm and IBE framework.</ns0:p><ns0:p>The rest of the paper is organized as follows. Section 2 discusses several related works. We provide some definitions and problem formulation in Section 3. Section 4 presents in detail our proposed IBE</ns0:p></ns0:div>
<ns0:div><ns0:head>2/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:71996:1:1:NEW 24 May 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science framework and ISR algorithm. We then show experimental results in Section 5 before concluding the paper in Section 6.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>RELATED WORK</ns0:head><ns0:p>In the last few years, a large number of NE models have been proposed to learn node network embedding efficiently. These methods can be classified into two categories based on structural information and attributes: 1) SNE methods by considering purely structural information; and 2) ANE methods by considering both structural information and attributes. In this section, we briefly review related work in these two categories. SNE methods: DeepWalk <ns0:ref type='bibr' target='#b21'>(Perozzi et al., 2014)</ns0:ref> employs Skip-Gram <ns0:ref type='bibr' target='#b18'>(Mikolov et al., 2013)</ns0:ref> to learn the representations of nodes in the network. It uses a random selection of nodes and truncated random walk to generate random walk sequences of fixed length. Subsequently, these sequences are transported to the Skip-Gram model to learn the distributed node representations. LINE <ns0:ref type='bibr' target='#b26'>(Tang et al., 2015)</ns0:ref> studies the problem of embedding very large information networks into low-dimensional vector spaces. Node2vec <ns0:ref type='bibr' target='#b7'>(Grover and Leskovec, 2016)</ns0:ref> improves the strategy of random walk and achieves a balance between BFS and DFS. SDNE <ns0:ref type='bibr' target='#b29'>(Wang et al., 2016)</ns0:ref> proposes a semi-supervised deep model, which can learn a highly nonlinear network structure. It combines the advantages of first-order and second-order estimation to represent the global and local structure attributes of the network. Besides, there are many other SNE methods <ns0:ref type='bibr' target='#b6'>(Goyal and Ferrara, 2017)</ns0:ref>, which systematic analysis of various structural graph embedding models, and explain their differences. Nevertheless, these methods fully utilize structural information but do not consider attribute information.</ns0:p><ns0:p>ANE methods: CANE <ns0:ref type='bibr' target='#b27'>(Tu et al., 2017)</ns0:ref> proposed an approach of network embedding considering both node text information of context-free and context-aware. CENE (context-enhanced network embedding) <ns0:ref type='bibr' target='#b25'>(Sun et al., 2016)</ns0:ref> regards text content as a special kind of nodes and leverages both structural and textural information to learn network embeddings. TopicVec <ns0:ref type='bibr' target='#b14'>(Li et al., 2016)</ns0:ref> proposes to combine the word embedding pattern and document topic model. JMTS <ns0:ref type='bibr' target='#b0'>(Alam et al., 2016)</ns0:ref> proposes a domainindependent topic sentiment model to integrate topic semantic information into embedding. ASNE <ns0:ref type='bibr' target='#b15'>(Liao et al., 2018)</ns0:ref> adopts a deep neural network framework to model the complex interrelations between structural information and attributes. It learns node representations from social network data by leveraging both structural and attribute information. ABRW <ns0:ref type='bibr' target='#b10'>(Hou et al., 2018)</ns0:ref> reconstructs a unified denser network by fusing structural information and attributes for information enhancement. It employs weighted random walks based network embedding method for learning node embedding and addresses the challenges of embedding incomplete attributed networks. AM-GCN <ns0:ref type='bibr' target='#b32'>(Wang et al., 2020)</ns0:ref> is able to fuse topological structures, and node features adaptively. 
Moreover, there exist quite a few survey papers <ns0:ref type='bibr' target='#b11'>(Jiao et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b34'>Zhang et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b5'>Daokun et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b20'>Peng et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b4'>Cui et al., 2019)</ns0:ref>, which provide comprehensive, up-to-date reviews of state-of-the-art network representation learning techniques. They cover not only early work on preserving network structure, but also the more recent surge of work incorporating node content and node labels.</ns0:p><ns0:p>Learning network embeddings that account for attributes of local context and topic remains challenging because of their complexity. Quite a few works have addressed this issue; however, none of them considers node information behavior features in attributed networks.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>PROBLEM DEFINITION</ns0:head><ns0:p>In this section, we present the necessary definitions and formulate the problem of link prediction in attributed networks.</ns0:p><ns0:p>Definition 1 (Networks) A network can be represented graphically: G = (V, E, ∆, A), where V = {v 0 , v 1 , . . ., v (|V |−1) } represents the set of nodes, and |V | is the total number of nodes in G. E ⊆ V ×V is the set of edges between the nodes. ∆ = {δ 0 , δ 1 , . . . , δ (τ−1) } is a set, where δ represents a topic and it is also a topic-based node label identified the topic of the node, τ is the total number of topics. A is a function which associates each node in the network with a set of attributes, denoted as A(v).</ns0:p><ns0:p>Definition 2 (Adjacent-Node and Node degree) An adjacent-node set of node v ∈ V is defined as N v = {v : (v, v ) ∈ E}. v degree is the number of nodes in the adjacent-node set of v, called the degree of node v.</ns0:p></ns0:div>
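To make Definitions 1-2 (and the topic-based communities introduced next) concrete, a minimal Python sketch of how such an attributed network could be held in plain data structures is given below; the variable names and toy values mirror the example of Figure 1 and are purely illustrative assumptions, not the datasets used in Section 5.

from collections import defaultdict

# Toy network G = (V, E, Delta, A); values follow the example of Figure 1.
nodes = ["v0", "v1", "v2", "v3"]                                   # V
edges = [("v0", "v1"), ("v0", "v3"), ("v1", "v2"), ("v2", "v3")]   # E (undirected)
topics = ["C0", "C1", "C2"]                                        # Delta (topic labels)
attributes = {v: {"text": "..."} for v in nodes}                   # A(v), placeholder content

# Topic-based communities (Definition 3): nodes sharing a topic label.
communities = {"C0": {"v0", "v3"}, "C1": {"v1"}, "C2": {"v2"}}

# Adjacent-node sets and node degrees (Definition 2).
adjacency = defaultdict(set)
for u, w in edges:
    adjacency[u].add(w)
    adjacency[w].add(u)
degree = {v: len(adjacency[v]) for v in nodes}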
<ns0:div><ns0:head>3/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_3'>2022:03:71996:1:1:NEW 24 May 2022)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science Definition 3 (Topic-based community) Each node v has topic-based labels to identify the topics it belongs to. A topic-based community is a node-set that consists of the nodes with same topic-based label.</ns0:head><ns0:p>Here, we define a topic-based community as C δ = {v : v ∈ V, δ ∈ ∆}, and the node number in C δ is defined as</ns0:p><ns0:formula xml:id='formula_1'>|C δ |. The topic-based community set is represented as C ∆ = {C δ 0 ,C δ 1 , . . . ,C δ (τ−1) } ( (τ−1) j=0 C δ j = V ),</ns0:formula><ns0:p>where C ∆ is a set of all topic-based communities. Note that we assume that each node has at least one topic-based label, and one node can belong to several topic-based communities.</ns0:p><ns0:p>Definition 4 (Information behavior) Information behavior is an individual node's action in a topic category, including information inquiry, information access, information interaction, information sharing, etc.</ns0:p><ns0:p>A message passing is a process of global information recursive, such as GCN. Different from the mechanism of message passing, the information behavior is a process of local-independent information aggregation.</ns0:p><ns0:p>Definition 5 (Importance Score) Given a topic δ , the importance score x i (0 ≤ i < |V |) of node v i is computed as follows:</ns0:p><ns0:formula xml:id='formula_2'>x i = β * m i + (1 − β ) * s i (1)</ns0:formula><ns0:p>where 0 ≤ β ≤ 1 is a hyper-parameter, m i and s i are the adjacent score and community score of node v i , respectively. v i 's adjacent score m i is defined as the weighted importance score of its adjacent nodes:</ns0:p><ns0:formula xml:id='formula_3'>m i = ∑ v k ∈N i x k (v degree k )</ns0:formula><ns0:p>, where x k is the importance score of v k , v degree k is the degree of node v k , and N i is the adjacent-node set of node v i . Moreover, v i 's community score s i with respect to the topic δ is defined as:</ns0:p><ns0:formula xml:id='formula_4'>s i =    1 |C δ | , if v i ∈ C δ 0 , otherwise where C δ is a topic-based community and |C δ | is the number of nodes in C δ .</ns0:formula><ns0:p>The importance score x i of node v i reflects the interaction between v i and its adjacent nodes N i , as well as the level of correlation between v i and its topic-based community C δ (δ ∈ ∆).</ns0:p><ns0:p>Definition 6 (Topic-sensitive vector) Given a topic δ , the importance scores of all nodes can be used to form a topic-sensitive vector</ns0:p><ns0:formula xml:id='formula_5'>− → γ δ = (x δ 0 , x δ 1 , . . . , x δ (|V |−1) ) (δ ∈ ∆, − → γ δ ∈ R |V | ).</ns0:formula><ns0:p>The learning process of the topic-sensitive vector is a repetitive iteration process for computing the importance scores of all nodes. The initial importance scores is</ns0:p><ns0:formula xml:id='formula_6'>x i = β * 1 |V | + (1 − β ) * s i (β = 0.85 is a hyper-parameter; s i = 1 |C δ | if v i ∈ C δ , otherwise s i = 0.</ns0:formula><ns0:p>). Illustrated by Equation <ns0:ref type='formula'>1</ns0:ref>, each iteration is an one-order aggregate operation of adjacent scores and community score and a new value 1</ns0:p><ns0:formula xml:id='formula_7'>|C δ | is added to x i</ns0:formula><ns0:p>for node v i . After a number of iterations (i.e., higher-order aggregations), the ratio between x i of each node v i will stabilize, but the x i for each node v i will continue to grow due to the continuous addition of the v i 's community score s i . 
So, after each iteration, we normalise the x i for every node by xi =</ns0:p><ns0:formula xml:id='formula_8'>x i n−1 ∑ i=0 (x i ) 2 , (0 ≤ i < |V |).</ns0:formula><ns0:p>In this way, the x i obtained from this iteration process will eventually converge.</ns0:p><ns0:p>Attributed network embedding. Given an attributed network G = (V, E, ∆, A), our goal is to extract the node information behavior features and learn an information behavior feature vector</ns0:p><ns0:formula xml:id='formula_9'>− → z I v ∈ R d (d = K * τ, d |V |) for each node v. The distance between − → z I v and − → z I v is the information behavior similarity of two nodes v, v (v, v ∈ V ). After that, node information behavior feature vector − → z I v is added to the node basic vector − → z B v ∈ R d (d |V |</ns0:formula><ns0:p>) generated by one of existing embedding methods to get the ultimate joint embedding vector: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_10'>− → z v = ([ − → z I v − → z B v ]) ∈ R d+d (2)</ns0:formula><ns0:p>Computer Science </ns0:p></ns0:div>
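As a small worked example of Equation 1 (illustrative numbers only, using the hyper-parameter β = 0.85 stated above):

# A node v_i with two neighbours whose current scores are 0.2 (degree 4) and
# 0.1 (degree 2), inside a topic-based community of 5 nodes.
m_i = 0.2 / 4 + 0.1 / 2          # adjacent score  m_i = 0.10
s_i = 1.0 / 5                    # community score s_i = 0.20
x_i = 0.85 * m_i + 0.15 * s_i    # Equation 1      x_i = 0.115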
<ns0:div><ns0:head n='4'>OUR APPROACH</ns0:head><ns0:p>In this section, we introduce our method for extracting information behavior features. We first present our framework IBE (Section 4.1), which describes the components of the node joint embedding vectors.</ns0:p><ns0:p>Then, we present the TNE model (Section 4.2), which describes the process of generating information behavior feature vectors.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Information Behavior Extraction Framework (IBE)</ns0:head><ns0:p>The information behavior features generated based on topics are complementary to features of existing models. As shown in Figure <ns0:ref type='figure'>2</ns0:ref>, data sources of the framework IBE consist of two parts. One is the network embedding Z B generated by one of existing embedding methods, and the other is Z I , where </ns0:p><ns0:formula xml:id='formula_11'>Z B ∈ R |V</ns0:formula></ns0:div>
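In essence, the framework output is a row-wise concatenation of the two matrices. The following NumPy sketch is an illustration only, assuming the scaling coefficients λ and α introduced later in Section 4.3 are already known and are at least 1 (the first case of Equation 7):

import numpy as np

def joint_embedding(Z_I, Z_B, lam=1.0, alpha=1.0):
    """Concatenate information behavior features Z_I (|V| x tau) with a basic
    embedding Z_B (|V| x d) row by row, as in Figure 2 / Equation 2."""
    assert Z_I.shape[0] == Z_B.shape[0], "both matrices must cover the same node set"
    return np.hstack([alpha * lam * Z_I, Z_B])   # shape: (|V|, tau + d)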
<ns0:div><ns0:head n='4.2'>Topic-Sensitive Network Embedding Model (TNE)</ns0:head><ns0:p>In this section, we present the process of extracting node information behavior features. As shown in Figure <ns0:ref type='figure'>3</ns0:ref>, the TNE model consists of two parts: an ISR algorithm (Section 4.2.1) and a topic-sensitive embedding matrix (Γ) transposition step (Section 4.2.2).</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2.1'>Importance Score Rating Algorithm (ISR).</ns0:head><ns0:p>The ISR algorithm is used to get the importance scores of all nodes (illustrated by Equation 1 and Definition 5) and generates a topic-sensitive vector under a given topic. We firstly input raw data including a node set V , an adjacent-node set N v , and a topic-based community C δ to ISR (see Algorithm 1) and then simulate the iteration process of node information behavior under the given topic δ . When the importance scores of all nodes stabilize, the iteration is terminated. We propose loss as a metric for iteration termination, which is calculated as follows. Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_12'>loss = |V |−1 ∑ i=0 (|x i − x i |)<ns0:label>(3</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where x i is the current iterative importance score for the node v i and x i is the importance score of the previous iteration for the node v i .</ns0:p><ns0:p>After the iteration is done, we can obtain a |V |-dimensional topic-sensitive vector − → γ δ = (x 0 , x 1 , . . .,</ns0:p><ns0:p>x (|V |−1) ) (illustrated by Definition 6), consisting of importance scores of all nodes under a given topic δ ∈ ∆.</ns0:p><ns0:p>Given a topic δ , the computing steps of topic-sensitive vector − → γ δ are given as follows (see Algorithm 1):</ns0:p><ns0:p>(1) For a node v i , the line 6-8 is used to compute its adjacent scores m i of all of neighbors N i and the line 9-13 are used to compute its community score s i . Using m i and s i , the importance scores x i of node v i is finally calculated out in the first statement of line 14.</ns0:p><ns0:p>(2) The second layer loop (line 5, line 14-15) is used to assemble the importance scores of all nodes to generate a list</ns0:p><ns0:formula xml:id='formula_13'>γ δ = [x 0 , x 1 , ..., x (|V |−1) ] in program which is the topic-sensitive vector − → γ δ = (x 0 , x 1 , . . . , x (|V |−1) ) ( − → γ δ ∈ R |V | ).</ns0:formula><ns0:p>(3) The third statement of line 14 and line 16 are use to calculate the Euclidean norm norm. The line 17-19 is used to normalise the importance scores of all nodes by the Euclidean norm (the first statement of line 18) and calculate the sum of loss for all node (the second statement of line 18).</ns0:p><ns0:p>The third statement of line 18 is used to update the value γδ [v i ] of node v i , which is the importance score and will be used in the next iteration.</ns0:p><ns0:p>(4) The first layer loop (line 3, line 20) is used to control the iterations.</ns0:p><ns0:p>In the ISR algorithm, an information behavior feature vector is generated without an increase in time complexity. We define L as the number of iterations, n represents the number of nodes in attributed networks, and v degree represents the degree of node v. The time complexity of the ISR algorithm to generate a topic-sensitive vector</ns0:p><ns0:formula xml:id='formula_14'>− → γ δ = (x 0 , x 1 , . . . , x (|V |−1) ) is O(L • n • v degree ).</ns0:formula><ns0:p>Because L • v degree and n are of the same order of magnitude, the time complexity of the ISR algorithm is thus O(n 2 ).</ns0:p></ns0:div>
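Steps (1)-(4) can be condensed into a compact Python sketch of Algorithm 1 (a paraphrase for illustration, not the authors' released code; nodes, adjacency, degree and community follow the structures sketched in Section 3):

import math

def isr(nodes, adjacency, degree, community, beta=0.85):
    """Importance Score Rating for one topic: returns the topic-sensitive vector gamma_delta."""
    n = len(nodes)
    x = {v: 1.0 / n for v in nodes}                 # initial importance scores
    loss = float("inf")
    while loss > 1.0 / n:                           # termination test of Algorithm 1
        new_x = {}
        for v in nodes:
            m = sum(x[u] / degree[u] for u in adjacency[v])         # adjacent score
            s = 1.0 / len(community) if v in community else 0.0     # community score
            new_x[v] = beta * m + (1.0 - beta) * s                  # Equation 1
        norm = math.sqrt(sum(val * val for val in new_x.values()))  # Euclidean norm
        loss = 0.0
        for v in nodes:
            new_x[v] /= norm
            loss += abs(new_x[v] - x[v])                            # Equation 3
        x = new_x
    return [x[v] for v in nodes]                    # gamma_delta, one score per node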
<ns0:div><ns0:head n='4.2.2'>Topic-Sensitive Embedding Matrix (Γ) Transposing.</ns0:head><ns0:p>For ∆ = {δ 0 , δ 1 , . . ., δ (τ−1) }, the Γ matrix transposing calls the Algorithm 1 to get every topic-sensitive vectors − → γ δ in each topic δ ∈ ∆. After all τ topic-sensitive vectors are obtained, we combine the τ topicsensitive vectors to form a topic-sensitive embedding matrix Γ = ( − → γ δ 0 , − → γ δ 1 , . . ., − −− → γ δ (τ−1) ) T . Ultimately, Z I is obtained by the Γ matrix transposing, as illustrated by Equation <ns0:ref type='formula' target='#formula_15'>4</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_15'>Z I = ( − → z I v 0 , − → z I v 1 , . . . , − −−− → z I v (|V |−1) ) T = Γ T = ( − → γ δ 0 , − → γ δ 1 , . . . , − −− → γ δ (τ−1) )<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>Each row of Z I is an information behavior feature vector − → z I v for node v and the dimension of</ns0:p><ns0:formula xml:id='formula_16'>− → z I v is d = τ.</ns0:formula></ns0:div>
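A minimal sketch of Equation 4 (illustrative only; topic_vectors is assumed to hold one topic-sensitive vector per topic, e.g. the outputs of the ISR sketch above):

import numpy as np

def information_behavior_matrix(topic_vectors):
    """Stack tau topic-sensitive vectors (each of length |V|) into Gamma (tau x |V|)
    and transpose it into Z_I (|V| x tau), one row per node."""
    gamma = np.vstack(topic_vectors)   # Gamma
    return gamma.T                     # Z_I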
<ns0:div><ns0:head n='4.3'>Generating Node Joint Embedding Vectors based on IBE</ns0:head><ns0:p>Z B is a basic embedding matrix trained by one of existing embedding methods. Each row of the Z B is a basic vector − → z B v for a node v generated by one of existing embedding methods.</ns0:p><ns0:p>Before getting Z by Equation 7 according to the framework IBE, we firstly enlarge Z I or Z B by λ (Equation <ns0:ref type='formula' target='#formula_17'>5</ns0:ref>) so that the element values of λ * Z I and Z B or Z I and Z B λ are of the same order of magnitude, and the λ is calculated as follows.</ns0:p><ns0:formula xml:id='formula_17'>λ = |b| x = ( |V |−1 ∑ i=0 d−1 ∑ j=0 |b i j | |V | * d ) ÷ ( |V |−1 ∑ i=0 τ−1 ∑ j=0 x i j |V | * τ ) (<ns0:label>5</ns0:label></ns0:formula><ns0:formula xml:id='formula_18'>)</ns0:formula><ns0:p>where |b| is the average of all elements in Z B , and x is the average of all elements in Z I . And then we enlarge the element values of Z I or Z B who has the larger AUC <ns0:ref type='bibr' target='#b8'>(Hanley and Mcneil, 1982)</ns0:ref> value by weight coefficient α (Equation <ns0:ref type='formula'>6</ns0:ref>) again. |C δ |: node number of topic-based community δ ; β = 0.85: a hyper-parameter, imposing the ratio between m i and s i ; Output : γ δ ;</ns0:p><ns0:formula xml:id='formula_19'>1 γ δ = [x 0 , x 1 , ..., x (|V |−1) ] = [ 1 |V | , 1 |V | , ..., 1</ns0:formula><ns0:p>|V | ] = γδ , initializing every element of list γ δ and γδ with 1 |V | in a given topic δ ∈ ∆, where γδ is used to temporarily store a topic sensitive vector; 2 loss = 30; 3 while loss > 1</ns0:p><ns0:p>|V | do 4 loss = 0; norm = 0;</ns0:p><ns0:formula xml:id='formula_20'>5 for each v i ∈ V do 6 for each v k ∈ N i do 7 x k = γ δ [v k ]; m i = m i + x k (v degree k ) ; 8 end 9 if v i ∈ C δ then 10 s i = 1 |C δ | ; 11 else 12 s i = 0; 13 end 14 x i = β * m i + (1 − β ) * s i ; γ δ [v i ] = x i ; norm = norm + x 2 i ; 15 end 16 norm = √ norm; 17 for each v i ∈ V do 18 γ δ [v i ] = γ δ [v i ] norm ; loss = loss + (|γ δ [v i ] -γδ [v i ]|); γδ [v i ]) = γ δ [v i ]; 19 end 20 end 21 return γ δ = [x 0 , x 1 , ..., x (|V |−1) ];</ns0:formula><ns0:p>where auc() is a function used to calculate the value of AUC, ψ is an amplification factor of the ratio</ns0:p><ns0:formula xml:id='formula_21'>auc(Z I ) auc(Z B ) .</ns0:formula><ns0:p>Especially, we should not use the method of reducing the element values of Z I or Z B to make their element values of the same order of magnitude, because it may result in invalid results due to the element values are too small. So, according to the values of coefficients α and λ , we divide the methods of linearly concatenating Z I and Z B into four cases as follows.</ns0:p><ns0:formula xml:id='formula_22'>Z = ( − → z v 0 , − → z v 1 , . . . , − −−− → z v (|V |−1) ) T =                      [(α * λ * Z I ) Z B ], if λ ≥ 1 and α ≥ 1 [(λ * Z I ) Z B α ], if λ ≥ 1 and α < 1 [(α * Z I ) Z B λ ], if λ < 1 and α ≥ 1 [(Z I ) Z B α * λ ], if λ < 1 and α < 1 (7)</ns0:formula><ns0:p>where the operator [• •] denotes concatenation, α is an enlarging coefficient to make the joint embedding matrix Z more similar to Z I or Z B who has the higher AUC value, and λ (Equation <ns0:ref type='formula' target='#formula_17'>5</ns0:ref>) denotes the</ns0:p></ns0:div>
<ns0:div><ns0:head>7/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:71996:1:1:NEW 24 May 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>enlargement factor who try to be adjusted to make the element values of λ * Z I and Z B or Z I and Z B λ in the same order of magnitude.</ns0:p><ns0:p>For the case of [(α * λ * Z I ) Z B ] (λ ≥ 1 and α ≥ 1) in Equation <ns0:ref type='formula'>7</ns0:ref>, Z is displayed in matrix form as follows:</ns0:p><ns0:formula xml:id='formula_23'>Z =      α * λ * x 00 • • • α * λ * x 0(τ−1) b 00 • • • b 0(d−1) α * λ * x 10 • • • α * λ * x 1(τ−1) b 10 • • • b 1(d−1) . . . . . . . . . . . . . . . . . . α * λ * x (|V |−1)0 • • • α * λ * x (|V |−1)(τ−1) b (|V |−1)0 • • • b (|V |−1)(d−1)     </ns0:formula><ns0:p>, where each row of Z is the final joint embedding vector − → z v for node v based on the framework of IBE</ns0:p><ns0:formula xml:id='formula_24'>and x i j (0 ≤ i < |V |, 0 ≤ j < τ) is the element of − → z I v i , b i j (0 ≤ j < d) is the element of − → z B v i .</ns0:formula><ns0:p>The other three cases of Equation <ns0:ref type='formula'>7</ns0:ref>have similar matrix representations.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>EXPERIMENTS</ns0:head><ns0:p>In this section, we describe our datasets, baseline models and present the experimental results to demonstrate the performance of the IBE framework in link prediction tasks. The source code and datasets can be obtained from https://github.com/swurise/IBE. In Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref>, we consider the following real-world network datasets. BlogCatalog 1 is asocial blog directory.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>Datasets</ns0:head><ns0:p>The dataset contains 39 topic labels, 10312 users, and 667,966 links. Zhihu is the largest online Q&A website in China. Users follow each other and answer questions on this site. We randomly crawl 10,000 active users from Zhihu and take the descriptions of their concerned topics as text information <ns0:ref type='bibr' target='#b27'>(Tu et al., 2017)</ns0:ref>. The topics of Zhihu are obtained by the fastText model <ns0:ref type='bibr' target='#b12'>(Joulin et al., 2016)</ns0:ref> and the ODP of predefined topic categories <ns0:ref type='bibr' target='#b9'>(Haveliwala, 2002)</ns0:ref>. The fastText presents the hierarchical softmax based on the Huffman tree to improve the softmax classifier taking advantage of the fact that classification is unbalanced in CBOW <ns0:ref type='bibr' target='#b12'>(Joulin et al., 2016)</ns0:ref>. WiKi contains 2,408 documents from 17 classes and 17,981 edges between them. Cora is a research paper classification citation network constructed by <ns0:ref type='bibr' target='#b16'>McCallum et al (McCallum et al., 2000)</ns0:ref>. After filtering out papers without text information, 2,277 machine learning papers are divided into seven categories and 36 subcategories in this network. Citeseer is divided into six communities: Agents, AI, DB, IR, ML, and HCI and 4,732 edges between them. Similar to Cora, it records the citing and cited information between papers.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2'>Baselines</ns0:head><ns0:p>To validate the performance of our approach, we employ several state-of-the-art network embedding methods as baselines to compare with our IBE framework. A number of existing embedding methods are introduced as follows.</ns0:p><ns0:p>• CANE <ns0:ref type='bibr' target='#b27'>(Tu et al., 2017)</ns0:ref> learns context-aware embeddings with mutual attention mechanism for nodes, and the semantic relationship features are extracted between nodes. It jointly leverages network structure and textural information by regarding text content as a special kind of node.</ns0:p><ns0:p>1 http://networkrepository.com/soc-BlogCatalog.php</ns0:p></ns0:div>
<ns0:div><ns0:head>8/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:71996:1:1:NEW 24 May 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>• DeepWalk <ns0:ref type='bibr' target='#b21'>(Perozzi et al., 2014)</ns0:ref> transforms a graph structure into a sample set of linear sequences consisting of nodes using uniform sampling. These linear sequences are transported to the Skip-Gram model to learn the distributed node embeddings.</ns0:p><ns0:p>• HOPE <ns0:ref type='bibr' target='#b19'>(Ou et al., 2016</ns0:ref>) is a graph embedding algorithm, which is scalable to preserve high-order proximities of large scale graphs and capable of capturing the asymmetric transitivity.</ns0:p><ns0:p>• LAP <ns0:ref type='bibr' target='#b1'>(Belkin and Niyogi, 2001</ns0:ref>) is a geometrically motivated algorithm for constructing a representation for data sampled from a low dimensional manifold embedded in a higher dimensional space.</ns0:p><ns0:p>• LINE <ns0:ref type='bibr' target='#b26'>(Tang et al., 2015)</ns0:ref> learns node embeddings in large-scale networks using first-order and second-order proximity between the nodes.</ns0:p><ns0:p>• Node2vec <ns0:ref type='bibr' target='#b7'>(Grover and Leskovec, 2016)</ns0:ref> has the same idea as DeepWalk using random walk sampling to get the combinational sequences of node context, and then the network embeddings of nodes are obtained by using the method of word2vec.</ns0:p><ns0:p>• GCN <ns0:ref type='bibr' target='#b13'>(Kipf and Welling, 2016)</ns0:ref> model uses an efficient layer-wise propagation rule based on a localized first-order approximation of spectral graph convolutions. The GCN model is capable of encoding graph structure and node features in a scalable approach for semi-supervised learning.</ns0:p><ns0:p>• GAT <ns0:ref type='bibr' target='#b28'>(Veličković et al., 2018</ns0:ref>) is a convolution-style neural network that operates on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of methods based on graph convolutions. GAT enables implicitly specifying different weights to different nodes within a neighborhood.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.3'>Evaluation Metrics and Parameter Settings</ns0:head><ns0:p>We randomly divide all edges into two sets, training set and testing set, and take a standard evaluation metric AUC scores (area under the ROC curve) <ns0:ref type='bibr' target='#b8'>(Hanley and Mcneil, 1982)</ns0:ref> as evaluation metrics to measure the link prediction performance. AUC represents the probability that nodes in a random unobserved link are more similar than those in a random nonexistent link. Because the number of topics is different in each dataset, we use τ to denote the maximum number of topics for each dataset. In concatenating weight coefficient α = [auc(Z I ) ÷ auc(Z B )] ψ , we set the factor ψ equal to 4 except that ψ has a specified value.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.4'>Experimental Results</ns0:head><ns0:p>For experiments, we evaluate the effectiveness and efficiency of our IBE on five networks for the link prediction task. For each dataset, we compare the AUC data of basic embedding matrix Z B generating by one of the existing embedding methods, the information behavior feature vectors Z I , and their joint embedding vectors Z generating by the framework IBE. We employ six state-of-the-art embedding methods as baselines, including Node2vec, DeepWalk, LINE, LAP, CANE, HOPE, for comparisons with their extending frameworks IBE in the following experiments. with their baselines. The result can be explained by the fact that the AUC values of information behavior feature vectors Z I are all low, about 55% and the direct reason for the low AUC values is that the number of topics is too small. Due to the number of topics is small, the topic subdivision degree of nodes is low, and the classification of node labels will not be too detailed.</ns0:p><ns0:p>In general, in Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>, it can be seen that the AUC values of concatenating embedding vectors Z are higher than that of Z B and Z I , which indicates that the concatenating method can properly integrate the features of all parties.</ns0:p><ns0:p>Ablation experiments: To investigate the effectiveness of TNE, we perform several ablation studies.</ns0:p><ns0:p>In Manuscript to be reviewed <ns0:ref type='formula'>2</ns0:ref>. We observe that the quality of the joint embedding Z is better than itself Z B .</ns0:p><ns0:note type='other'>Computer Science Table 2.</ns0:note></ns0:div>
<ns0:div><ns0:head n='5.5'>Parameter sensitivity analysis</ns0:head><ns0:p>We further performed parameter sensitivity analysis in this section, and the results are summarized in Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref> and Figure <ns0:ref type='figure'>5</ns0:ref>. Due to space limitations, we only take the dataset of Wiki, Zhihu as an examples to estimate the topic number j (0 ≤ j ≤ τ) and the amplification factor ψ of vector concatenation can affect the link prediction results.</ns0:p><ns0:p>The topic number j (0 ≤ j ≤ τ): In Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>, we illustrate the relationship between the number of topics and link prediction, where the order of selection of node topics is random. When j = 0, Z I does not exist, Z degenerates to Z B . As shown in Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>, we can see that as j increases from 1 to τ, Z B linearly combines with more topic-based feature dimensionality from Z I , and the AUC values keep changing.</ns0:p><ns0:p>When the AUC values of Z B are below 82% in Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>(a), AUC values increase sharply with an increase of j. When the AUC values of Z B are higher than a certain critical value, the AUC values increase more slowly or even stop growing with an increase of j.</ns0:p><ns0:p>So, we can see that when the number of topics is large in a dataset, each node can be classified in detail by the topic classification labels, which helps to improve the AUC values using a small number of topics. These also show that the AUC values of the concatenating embedding vectors Zs will be higher than that of all parties for concatenating, that is Z I s and Z B s, but it will not increase indefinitely.</ns0:p><ns0:p>The amplification factor ψ of vector concatenation: ψ is an amplification factor for α (Equation <ns0:ref type='formula'>6</ns0:ref>) which is a weight coefficient for enlarging the element values of Z I or Z B who has the larger AUC (Equation <ns0:ref type='formula'>7</ns0:ref>). From Figure <ns0:ref type='figure'>5</ns0:ref>, we can see that the AUC value, when ψ is less than 0, is less than that when ψ is greater than 0. The reason is that the weight coefficient α, when ψ is less than 0, enlarges the Z I or the Z B who has the smaller AUC value. As a result, the joint embedding Z is more similar to one that has a lower AUC value. When ψ is between 1 and 5, the prediction result is the best. However, when the </ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>This paper has presented an effective network embedding framework IBE, which can easily incorporate topology features, attribute features, and features of topic-based information behavior into network embedding. In IBE, we linearly combinate Z I and Z B to generate node joint embedding matrix Z. To get the Z I , we have proposed the TNE model to extract the node's information behavior features. The model contains an ISR algorithm to generate the topic-sensitive embedding matrix (Γ) and a Γ matrix transposing algorithm to transpose Γ matrix into the information behavior feature matrix Z I for nodes eventually.</ns0:p><ns0:p>Experimental results in various real-world networks have shown the efficiency and effectiveness of joint embedding vectors in link prediction. In the future, we plan to investigate other methods of extracting features that may better integrate with the TNE model. Moreover, we will further investigate how the TNE model works in heterogeneous information networks.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Node information behavior of multiple topic-based communities (C 0 , C 1 , and C 2 are topic-based communities, and v 0 , v 1 , v 2 , v 3 are nodes, and in the meantime(v 0 , v 3 ) ∈ C 0 , v 1 ∈ C 1 , v 2 ∈ C 2 ).Nodes interact in intra-community, such as nodes v 0 and v 3 . Nodes in different communities may interact with each other, such as nodes v 0 and v 1 , v 1 and v 2 , v 2 and v 3 . Meantime, due to the existence of bridge nodes v 0 or v 2 , nodes v 1 and v 3 may have a link which is not represented in the current network.</ns0:figDesc><ns0:graphic coords='3,224.45,63.78,248.15,170.14' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .Figure 3 .</ns0:head><ns0:label>23</ns0:label><ns0:figDesc>Figure 2. Our information behavior extraction framework (IBE)</ns0:figDesc><ns0:graphic coords='6,221.96,63.78,248.16,161.01' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>α</ns0:head><ns0:label /><ns0:figDesc>= [auc(Z I ) ÷ auc(Z B )] ψ ISR algorithm Input : δ : a topic, δ ∈ ∆; V : a node set of network G; |V |: node number of network G; N i : a adjacent node set of node v i ; v degree i : degree of node v i ∈ V ; C δ : a topic-based community, C δ ∈ C ∆ ;</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>The AUC values of Z I , Z B (baseline) and Z(baseline) in different datasets where '(baseline)' in Z B (baseline), Z(baseline) is to distinguish all kind of network embeddings Z B and their extension embeddings Z, and the ψ equals 4 except that ψ has a specified value.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>τ = 17, Wiki dataset.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. ψ= 5. With the increase of τ, the AUC values are calculuated for different datasets. j is a topic number, if j=0 then Z=Z B else Z=[Z B Z I ].</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Statistics of the real-world information networks.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Social Network</ns0:cell><ns0:cell cols='2'>Language Network</ns0:cell><ns0:cell cols='2'>Citation Network</ns0:cell></ns0:row><ns0:row><ns0:cell>Name</ns0:cell><ns0:cell>Dataset</ns0:cell><ns0:cell>BlogCatalog</ns0:cell><ns0:cell>Zhihu</ns0:cell><ns0:cell>Wiki</ns0:cell><ns0:cell>Cora</ns0:cell><ns0:cell>Citeseer</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Nodes</ns0:cell><ns0:cell>10,312</ns0:cell><ns0:cell>10,000</ns0:cell><ns0:cell>2,408</ns0:cell><ns0:cell>2,277</ns0:cell><ns0:cell>3,312</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Edges</ns0:cell><ns0:cell>667,966</ns0:cell><ns0:cell>43,894</ns0:cell><ns0:cell>17,981</ns0:cell><ns0:cell>5,214</ns0:cell><ns0:cell>4,732</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Attributes</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>10,000</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>2,277</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Number of topics (τ)</ns0:cell><ns0:cell>39</ns0:cell><ns0:cell>*13</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>6</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 compares</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>AUCs over five datasets. By concatenating Z B and Z I linearly, the joint embedding vectors Z achieves the best performance. Especially on the BlogCatalog and Zhihu datasets, the AUC values of the joint embedding vectors Z are higher, compared with their baselines, than 14.6% and 27.6% respectively on average. One of the reasons may be that the average AUC of information behavior feature vectors Z I are 81.7 and 82.4 respectively, which are more than 11% higher compared with the other three datasets on average. The other reason is that the maximum topic number τ of BlogCatalog and Zhihu are larger than those of Cora and Citeseer. In Table2, we can also see that most of the AUC values of the joint embedding vectors Z for datasets Wiki and Cora exceed 90%. The reason is that their AUC values of baselines are relatively high, most of them are more than 85%. On the Citeseer dataset of Table2, we also can see that the improvement of the AUC values of the joint embedding vectors Z is less, compared</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>Table 2, the Z B is the basic embedding generated by an existing model. The Z I is obtained by the TNE</ns0:figDesc><ns0:table /><ns0:note>9/13PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:71996:1:1:NEW 24 May 2022)</ns0:note></ns0:figure>
</ns0:body>
" | "We would like to express our thanks to Reviewer 1 for the careful review and valuable comments. After careful examination and consideration, we found these comments are extremely helpful for us in improving the quality of our manuscript. Please find below a list of reviewer comments in bold and our responses.
• Basic reporting
Throughout the manuscript, this paper uses clear and unambiguous professional English. Although adequate literature references and field background/context are provided, the literature discussion is missing a few related works. The proposed method is self-contained, with results correlating to hypotheses. The formal results should contain precise definitions of all terms and theorems, as well as detailed proofs.
Thank you for raising these questions. According to Reviewer’s suggestion, we revised the manuscript as follows.
1) From Line 46 to Line 52, we added two classic models of GCN and GAT and the latest method of role-based network embedding as follows.
“More recently, deep learning and attention mechanisms are used to generate network embeddings, including GCN (Kipf and Welling, 2016), GAT (Velikovi et al., 2018) and CANE (Tu et al., 2017), which extract structure, text, topic, and other heterogeneous feature information more effectively. In addition to the above methods, a few role-based approaches of network embedding (Jiao et al., 2021) have been proposed recently, where a role-based network embedding (Zhang et al., 2021) captures the structural similarity and the role information.”
2) From Line 143 to Line 148, we added the definition of information behavior as follows.
“Definition 4 (Information behavior) Information behavior is an individual node’s action in a topic category, including information inquiry, information access, information interaction, information sharing, etc. A message passing is a process of global information recursive, such as GCN. Different from the mechanism of message passing, the information behavior is a process of local-independent information aggregation.”
• Experimental design
Numerous evaluation results corroborate the proposed method. The research question is clearly defined, pertinent, and meaningful. It is stated how research fills a knowledge gap that has been identified. Extensive investigation conducted to the highest technical and ethical standards. Methods described in sufficient detail and detail to permit replication.
Thank you for this comment.
• Validity of the findings
The underlying experimental results have been provided in its entirety; they are robust, statistically sound, and well-controlled. Conclusions are succinct, relevant to the original research question, and limited to supporting data.
Thank you for this comment.
• Additional comments
This manuscript is written in clear and unambiguous professional English throughout. While adequate references to the literature and background/context of the field are provided, the literature discussion is missing a few related works [1,2,3,4,5]. The proposed method is self-contained and produces results that are consistent with hypotheses. All terms and theorems should have precise definitions in the formal results, as well as detailed proofs.
[1] WebFormer: The Web-page Transformer for Structure Information Extraction---WWW 2022
[2] A vector-based representation to enhance head pose estimation---WACV 2021
[3] Sg-net: Spatial granularity network for one-stage video instance segmentation---CVPR 2021
[4] DenserNet: Weakly supervised visual localization using multi-scale feature aggregation---AAAI 2021
[5] Video object detection for autonomous driving: Motion-aid feature calibration---Neurocomputing 2021
Thank you for raising this question.
We have updated the latest related work and strictly defined the concept of information behavior as mentioned in the response to the Basic reporting.
" | Here is a paper. Please give your review comments after reading it. |
708 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Deep convolutional neural networks (CNN) manifest the potential for computer-aided diagnosis systems (CADs) by learning features directly from images rather than using traditional feature extraction methods. Nevertheless, due to the limited sample sizes and heterogeneity in tumor presentation in medical images, CNN models suffer from training issues, including training from scratch, which leads to overfitting. Alternatively, a pretrained neural network's transfer learning (TL) is used to derive tumor knowledge from medical image datasets using CNN that were designed for non-medical activations, alleviating the need for large datasets. This study proposes two ensemble learning techniques: (E-CNN (product rule) and E-CNN (majority voting)). These techniques are based on the adaptation of the pretrained CNN models to classify colon cancer histopathology images into various classes. In these ensembles, the individuals are, initially, constructed by adapting pretrained DenseNet121, MobileNetV2, InceptionV3, and VGG16 models. The adaptation of these models is based on a block-wise fine-tuning policy, in which a set of dense and dropout layers of these pretrained models is joined to explore the variation in the histology images. Then, the models' decisions are fused via product rule and majority voting aggregation methods. The proposed model is validated against the standard pretrained models and the most recent works on two publicly available benchmark colon histopathological image datasets: Stoean (357 images) and Kather colorectal histology (5000 images). The results were 97.20% and 91.28% accurate, respectively. The achieved results outperformed the state-of-the-art studies and confirmed that the proposed E-CNNs could be extended to be used in various medical image applications.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Colon cancer is the third most deadly disease in males and the second most hazardous in females. According to the World Cancer Research Fund International, over 1.8 million new cases were reported in 2018 <ns0:ref type='bibr' target='#b7'>(Belciug and Gorunescu, 2020)</ns0:ref>. In colon cancer diagnosis, the study of histopathological images under the microscope plays a significant role in the interpretation of specific biological activities.</ns0:p><ns0:p>Among the microscopic inspection functions, classification of images (organs, tissues, etc.) is one of considerable important tasks. However, classifying medical images into a set of different classes is a very challenging issue due to low inter-class distance and high intra-class variability <ns0:ref type='bibr' target='#b45'>(Sahran et al., 2018)</ns0:ref>, as illustrated in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. Some objects in medical images may be found in images belonging to different classes, and different objects may appear at different orientations and scales in a given class. During the manual assessment, physicians examine the Hematoxylin and Eosin (H&E) stained tissues under a microscope to analyze their histopathological attributes, such as cytoplasm, nuclei, gland, and lumen, as well as change in the benign structure of the tissues. It is worth noting that early categorization of colon samples as benign or malignant, or discriminating between different malignant grades is critical for selecting the best treatment protocol. Nevertheless, manually diagnosing colon H&E stained tissue under a microscope is time-consuming and tedious, as illustrated in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. In addition, the diagnostic performance depends on the experience and personal skills of a pathologist. It, also, suffers from inter-observer variability with around 75% diagnostic agreement across pathologists <ns0:ref type='bibr' target='#b17'>(Elmore et al., 2015)</ns0:ref>.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2022:02:70538:1:2:CHECK 14 May 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>As a result, the treatment protocol might differ from one pathologist to another. These issues motivate development and research into the automation of diagnostic and prognosis procedures <ns0:ref type='bibr' target='#b50'>(Stoean et al., 2016)</ns0:ref>.</ns0:p><ns0:p>In recent decades, various computer aided diagnosis systems (CADs) have been introduced to tackle the classification problems in cancer digital pathology diagnosis to achieve reproducible and rapid results <ns0:ref type='bibr' target='#b10'>(Bicakci et al., 2020)</ns0:ref>. CADs assist in enhancing the classification performance and, at the same time, minimize the variability in interpretations. The faults produced by CADs/machine learning model have been announced to be less than those produced by a pathologist <ns0:ref type='bibr' target='#b31'>(Kumar et al., 2020)</ns0:ref>. These models can also assist clinicians in detecting cancerous tissue in colon tissue images. As a result, researchers are trying to construct CADs to improve diagnostic effectiveness and raise inter-observer satisfaction <ns0:ref type='bibr' target='#b54'>(Tang et al., 2009)</ns0:ref>. Numerous conventional CADs for identifying colon cancer using histological images had been introduced by number of researchers in the past years <ns0:ref type='bibr' target='#b50'>(Stoean et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b23'>Kalkan et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b32'>Li et al., 2019)</ns0:ref>. Most of the conventional CADs focus on discriminating between benign and malignant tissues. Furthermore, they focus on conventional machine learning and image processing techniques. In this regards, they emphasize on some complex tasks such as extracting features from medical images and require extensive preprocessing. The complex nature of these tasks in machine learning techniques degrades the results of the CADs regarding accuracy and efficiency <ns0:ref type='bibr' target='#b3'>(Ahmad et al., 2021)</ns0:ref>. Conversely, recent advances in machine learning technologies make this task more accurate and cost-effective than traditional models (Abu <ns0:ref type='bibr' target='#b0'>Khurma et al., 2022;</ns0:ref><ns0:ref type='bibr' target='#b28'>Khurma et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b2'>Abu Khurmaa et al., 2021)</ns0:ref>. Manuscript to be reviewed Computer Science deep learning techniques is the deep convolutional neural networks (CNN) <ns0:ref type='bibr' target='#b27'>(Khan et al., 2020)</ns0:ref> that consists of series of convolutional and pooling layers. These are followed by FCC and softmax layers. The FCC and the softmax represent the neural networks classifiers. CNN has the ability to extract the features from images by parameter tuning of the convolutional and the pooling layers. Thus, it achieves great success in many fields especially in medical image classifications such as skin disease <ns0:ref type='bibr' target='#b21'>(Harangi, 2018)</ns0:ref>, breast <ns0:ref type='bibr' target='#b14'>(Deniz et al., 2018)</ns0:ref> and colon cancer classification <ns0:ref type='bibr' target='#b20'>(Ghosh et al., 2021)</ns0:ref>. CNN is categorized into two approaches: either training from scratch or pre-trained models (e.g., DenseNet <ns0:ref type='bibr' target='#b22'>(Huang et al., 2017)</ns0:ref>, MobileNet <ns0:ref type='bibr' target='#b47'>(Sandler et al., 2018)</ns0:ref>, and InceptionV3 <ns0:ref type='bibr' target='#b52'>(Szegedy et al., 2016)</ns0:ref>). 
The most effective approach in medical image classification is the pretrained models due to the limited number of training samples <ns0:ref type='bibr' target='#b46'>(Saini and Susan, 2020)</ns0:ref>.</ns0:p><ns0:p>CNN has been used in the domain of colon histopathlogical image classification. For example, Stefan Postavaru <ns0:ref type='bibr' target='#b40'>(Postavaru et al., 2017)</ns0:ref> utilized a CNN approach for the automated diagnosis of a set of colorectal cancer histopathological slides. They utilized CNN with 5 convolutional layers and reported accuracy of 91.4%. Ruxandra Stoean <ns0:ref type='bibr' target='#b51'>(Stoean, 2020)</ns0:ref> extended the work <ns0:ref type='bibr' target='#b40'>(Postavaru et al., 2017)</ns0:ref> and presented a modality method to tune the convolutional of the deep CNN. She introduced two Evolutionary algorithms for CNN parametrization. She conducted the experiments on colorectal cancer <ns0:ref type='bibr' target='#b50'>(Stoean et al., 2016)</ns0:ref> and reported the highest accuracy of 92%. It was obtained from these studies that the CNN models exceeded the handcrafted features.</ns0:p><ns0:p>While the CNN achieves high performance especially on large dataset size, it struggles to make such performance on small dataset size <ns0:ref type='bibr' target='#b14'>(Deniz et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b34'>Mahbod et al., 2020)</ns0:ref>, and simply results in overfitting issue <ns0:ref type='bibr' target='#b59'>(Zhao et al., 2017)</ns0:ref>. To overcome this issue, the concept of transfer learning technique of pretrained CNN models is exploited for classification of colon histopathlogical images. In practical, the transfer learning technique of the pretrained models exports knowledge from previously CNN that has been trained on the large dataset to the new task with small dataset (target dataset). There are two approaches to transfer learning of pretrained models in medical image classification: feature extraction and fine-tuning <ns0:ref type='bibr' target='#b8'>(Benhammou et al., 2020)</ns0:ref>. The former method extracts features from any convolutional or pooling layers and removes the last FCC and softmax layers. While in the latter, the pretrained CNN models are adjusted for specific tasks. It is important to remember that the number of neurons in the final FC layer corresponds to the number of classes in the target dataset (i.e., the number of colon types). Following this replacement, the whole pre-trained model is retrained <ns0:ref type='bibr' target='#b34'>(Mahbod et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b8'>Benhammou et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b60'>Zhi et al., 2017)</ns0:ref> or the last FC layers are retrained <ns0:ref type='bibr' target='#b8'>(Benhammou et al., 2020)</ns0:ref>. Various pretrained models (e.g., DenseNet <ns0:ref type='bibr' target='#b22'>(Huang et al., 2017)</ns0:ref>, MobileNet <ns0:ref type='bibr' target='#b47'>(Sandler et al., 2018)</ns0:ref>, VGG16 <ns0:ref type='bibr' target='#b48'>(Simonyan and Zisserman, 2014)</ns0:ref>, and InceptionV3 <ns0:ref type='bibr' target='#b52'>(Szegedy et al., 2016)</ns0:ref>) have been introduced in recent years. Each pretrained model is constructed based on several convolution layers and filter sizes to extract specific features from the input image. 
However, transferring the begotten experience from the source (ImageNet) to our target (colon images) leads to losing some powerful features of histopathological image analysis <ns0:ref type='bibr' target='#b12'>(Boumaraf et al., 2021)</ns0:ref>. For example, CNN pretrained AlexNet and GoogleNet models were used on the colon histopathological images classification <ns0:ref type='bibr' target='#b39'>(Popa, 2021)</ns0:ref>. However, they achieved poor standard deviation results. Besides, using these pretrained models on the colon dataset needs a specific fine-tuning approach to achieve acceptable results.</ns0:p><ns0:p>To accommodate the pretrained CNN models to the colon image classification, we design a new set of transfer learning models ( DenseNet <ns0:ref type='bibr' target='#b22'>(Huang et al., 2017)</ns0:ref>, MobileNet <ns0:ref type='bibr' target='#b47'>(Sandler et al., 2018)</ns0:ref>, VGG16 <ns0:ref type='bibr' target='#b48'>(Simonyan and Zisserman, 2014)</ns0:ref>, and InceptionV3 <ns0:ref type='bibr' target='#b52'>(Szegedy et al., 2016)</ns0:ref> to refine the pretrained models on the colon histopathological image tasks. Our transfer learning methods are based on a block-wise finetuning policy. We make the last set of residual blocks of the deep network models more domain-specific to our target colon dataset by adding dense layers and dropout layers while freezing the remaining initial blocks in the deep pretrained model. The adaptability of the proposed method is further extended by fine-tuning the neural network's hyper-parameters to improve the model generalization ability. Besides, a single pretrained model has a limited capacity to extract complete discriminating features, resulting in an inadequate representation of the colon histopathology performance <ns0:ref type='bibr' target='#b58'>(Yang et al., 2019)</ns0:ref>. As a result, this study proposes an ensemble of pretrained CNN models architectures (E-CNN) to identify the representation of colon pathological images from various viewpoints for more effective classification tasks.</ns0:p><ns0:p>In this research, the following contributions are made: Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>• Investigation of the influence of the standard TL approaches ( DenseNet, MobileNet, VGG16, and InceptionV3) on the colon cancer classification task.</ns0:p><ns0:p>• Design a new set of transfer learning methods based on a block-wise fine-tuning approach to learn the powerful features of the colon histopathology images. The new design includes adding a set of dense and dropout layers while freezing the remainder of the initial layers in the pretrained models <ns0:ref type='bibr'>(DenseNet,</ns0:ref><ns0:ref type='bibr'>MobileNet,</ns0:ref><ns0:ref type='bibr'>VGG16,</ns0:ref><ns0:ref type='bibr'>and InceptionV3)</ns0:ref> to make them more specific for the colon domain requirements.</ns0:p><ns0:p>• Define and optimize a set of hyper-parameters for the new set of pretrained CNN models to classify colon histopathological images.</ns0:p><ns0:p>• An ensemble (E-CNN) is proposed to extract complementary features in colon histopathology images by using an ensemble of all the introduced transfer learning methods (base classifiers). The proposed E-CNN merges the decisions of all base classifiers via majority voting and product rules.</ns0:p><ns0:p>The remainder of this research is organized as follows. Literature review section goes over the related works. Our proposed methodology is presented in detail in the methodology section . The experiments results and discussion section analyzes and discusses the experimental results. The Section of conclusion brings this study to a close by outlining some research trends and viewpoints.</ns0:p></ns0:div>
<ns0:div><ns0:head>LITERATURE REVIEW</ns0:head><ns0:p>Deep learning pretrained models have made incredible progress in various kinds of medical image processing, specifically histopathological images, as they can automatically extract abstract and complex features from the input images <ns0:ref type='bibr' target='#b36'>(Manna et al., 2021)</ns0:ref>. Recently, CNN models based on deep learning design are dominant techniques in the CADs of cancer histopathological image classification <ns0:ref type='bibr' target='#b31'>(Kumar et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b34'>Mahbod et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b4'>Albashish et al., 2021)</ns0:ref>. CNN learn high-and mid-level abstraction, which is obtained from input RGB images. Thus, developing CADs using deep learning and image processing routines can assist pathologists in classifying colon cancer histopathological images with better diagnostic performance and less computational time. Numerous CADs for identifying colorectal cancer using histological images had been introduced by a number of researchers in past years. These CADs vary from conventional machine learning algorithms of the deep CNN. In this study, we present the related work of the colorectal cancer classification relying on colorectal cancer dataset <ns0:ref type='bibr' target='#b50'>(Stoean et al., 2016)</ns0:ref> as real-world test cases.</ns0:p><ns0:p>The authors in <ns0:ref type='bibr' target='#b40'>(Postavaru et al., 2017</ns0:ref>) designed a CNN model for colon cancer classification based on colorectal histopathological slides belonging to a healthy case and three different cancer grades(1, 2, and 3). They used an input image with the size of 256 × 256 × 3. They created five convolutional neural networks, followed by the ReLU activation function. In the introduced CNN, various kernel sizes were utilized in each Conv. Layer. Besides, they utilized batch normalization and only two FCC layers. They reported 91% accuracy in multiclass classification for the colon dataset in <ns0:ref type='bibr' target='#b50'>(Stoean et al., 2016)</ns0:ref>. However, in the proposed approach, only the size of the kernels is considered, while other parameters, like learning rate and epoch size, were not taken into account.</ns0:p><ns0:p>The author in <ns0:ref type='bibr' target='#b51'>(Stoean, 2020)</ns0:ref> extended the previous study <ns0:ref type='bibr' target='#b40'>(Postavaru et al., 2017)</ns0:ref> by applying an evolutionary algorithm (EA) in the CNN architecture. This is to automate two tasks: first, EA was conducted for tuning the CNN hyper-parameters of the convolutional layers. Stoean determined the number of kernels in CNN and their size. Second, the EA was used to support SVM in parameters ranking to determine the variable importance within the hyper-parameterization of CNN. The proposed approach achieved 92% colorectal cancer grading accuracy on the dataset in <ns0:ref type='bibr' target='#b50'>(Stoean et al., 2016)</ns0:ref>. 
However, using EA does not guarantee any diversity among the obtained hyper-parameters( solutions) <ns0:ref type='bibr' target='#b9'>(Bhargava, 2013)</ns0:ref>.Thus, choosing the kernel size and depth of CNN may not ensure high accuracy.</ns0:p><ns0:p>In another study for colon classification but on a different benchmark dataset, the authors in <ns0:ref type='bibr' target='#b35'>(Malik et al., 2019)</ns0:ref> have proved that the transferred learning from a pre-trained deep CNN model using Incep-tionV3 on a colon dataset with fine-tuning provides efficient results. Their methodology was mainly constructed based on InceptionV3. Then, the authors modified the last FCC layers to become harmonious Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>with the number of the classes in the colon classification task. Moreover, the adaptive CNN implementation was proposed to improve the performance of CNN architecture for the colon cancer detection task.</ns0:p><ns0:p>The study achieved around 87% accuracy for the multiclass classification task.</ns0:p><ns0:p>In another study <ns0:ref type='bibr' target='#b15'>(Dif and Elberrichi, 2020a)</ns0:ref>, a framework was proposed for the colon histopathological image classification task. The authors employed a CNN based on transferred learning from Resnet121 generating a set of models followed by a dynamic model selection using the particle swarm optimization (PSO) metaheuristic. The selected models were then combined by a majority vote and achieved 94.52% accuracy on the colon histopathological dataset <ns0:ref type='bibr' target='#b24'>(Kather et al., 2016)</ns0:ref>. In the same context, the authors in <ns0:ref type='bibr' target='#b16'>(Dif and Elberrichi, 2020b)</ns0:ref> explored the efficiency of reusing pre-trained models on histopathological images dataset instead of ImageNet based models for transfer learning. For this target, a fine-tuning method was presented to share the knowledge among different histopathological CNN models. The basic model was created by training InceptionV3 from scratch on one dataset while transfer learning and fine-tuning were performed using another dataset. However, this transfer learning-based strategy offered poor results on the colon histopathological images due to the limited number of the training dataset.</ns0:p><ns0:p>The conventional machine learning techniques have been utilized for the colon histopathology images dataset to achieve accepted results. For example, the 4-class colon cancer classification task on the dataset in <ns0:ref type='bibr' target='#b50'>(Stoean et al., 2016)</ns0:ref> was utilized in <ns0:ref type='bibr' target='#b11'>(Boruz and Stoean, 2018;</ns0:ref><ns0:ref type='bibr' target='#b26'>Khadilkar, 2021)</ns0:ref> to discriminate between various cancer types. In the former case <ns0:ref type='bibr' target='#b11'>(Boruz and Stoean, 2018)</ns0:ref>, the authors extracted contour low-level image features from grayscale transformed images. Then, these features were used to train the SVM classifier. Despite its simplicity, the study displayed a comparable performance to some computationally expensive approaches. The authors reported accuracy averages between 84.1% and 92.6% for the different classes. However, transforming the input images to grayscale leads to losing some information and degrades the classification results. Besides, using thresholding needs fine-tuning, which is a complex task. In latter case <ns0:ref type='bibr' target='#b26'>(Khadilkar, 2021)</ns0:ref>, the authors extracted morphological features from the colon dataset.</ns0:p><ns0:p>Mainly, they extracted harris corner and Gabor wavelet features. These features were then used to feed the neural network classifier. The authors utilized their framework to discriminate between benign and malignant cases. However, they ignored the multiclass classification task, which is more complex task in this domain.</ns0:p><ns0:p>Most of the above studies utilized a single deep CNN (aka weak learner) model to address various colon histopathology images classification tasks (binary or multiclass). 
Despite their extensive use, a single CNN model has the restricted power to capture discriminative features from colon histopathology images, resulting in unsatisfactory classification accuracy <ns0:ref type='bibr' target='#b58'>(Yang et al., 2019)</ns0:ref>. Thus, merging a group of weak learners forms an ensemble learning model, which is likely to be a strong learner and moderate the shortcomings of the weak learners <ns0:ref type='bibr' target='#b43'>(Qasem et al., 2022)</ns0:ref>.</ns0:p><ns0:p>Ensemble learning of deep pretrained models has been designed to fuse the decisions of different weak learners (individuals) to increase classification performance <ns0:ref type='bibr' target='#b57'>(Xue et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b61'>Zhou et al., 2021)</ns0:ref>. A limited number of studies applied ensemble learning with deep CNN models on colon histopathological image classification tasks <ns0:ref type='bibr' target='#b39'>(Popa, 2021)</ns0:ref>, <ns0:ref type='bibr' target='#b33'>(Lichtblau and Stoean, 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b44'>(Rachapudi and Lavanya Devi, 2021)</ns0:ref>.</ns0:p><ns0:p>The authors in <ns0:ref type='bibr' target='#b39'>(Popa, 2021)</ns0:ref> proposed a new framework for the colon multiclass classification task.</ns0:p><ns0:p>They employed CNN pretrained AlexNet and GoogleNet models followed by softmax activation layers to handle the 4-class classification task. The best-reported accuracies on <ns0:ref type='bibr' target='#b50'>(Stoean et al., 2016)</ns0:ref> dataset ranged between 85% and 89%. However, the standard deviation of these results was around 4%. This means the results were not stable. AlexNet was also used in <ns0:ref type='bibr' target='#b33'>(Lichtblau and Stoean, 2019)</ns0:ref> as a feature extractor for the colon dataset. Then, an ensemble of five classifiers was built. The obtained results for this ensemble achieved around 87% accuracy.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b38'>(Ohata et al., 2021)</ns0:ref>, the authors use CNN to extract features of colorectal histological images.</ns0:p><ns0:p>They employed various pretrained models, i.e., VGG16 and Inception, to extract deep features from the input images. Then, they employed ensemble learning by utilizing five classifiers (SVM, Bayes, KNN, MLP, and Random Forest) to classify the input images. They reported 92.083% accuracy on the colon histological images dataset in <ns0:ref type='bibr' target='#b24'>(Kather et al., 2016)</ns0:ref>. A research study in <ns0:ref type='bibr' target='#b44'>(Rachapudi and Lavanya Devi, 2021)</ns0:ref> proposed light weighted CNN architecture. RGB-colored images of colorectal cancer histology</ns0:p></ns0:div>
<ns0:div><ns0:head>5/26</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_15'>2022:02:70538:1:2:CHECK 14 May 2022)</ns0:ref> Manuscript to be reviewed Computer Science Fine-tune: only kernel size and number of kernels in CNN using EA method <ns0:ref type='bibr' target='#b39'>(Popa, 2021)</ns0:ref> colorectal in <ns0:ref type='bibr' target='#b50'>(Stoean et al., 2016)</ns0:ref> AlexNet and 89% feature extractor GoogleNet <ns0:ref type='bibr' target='#b40'>(Postavaru et al., 2017)</ns0:ref> colorectal in <ns0:ref type='bibr' target='#b50'>(Stoean et al., 2016)</ns0:ref> CNN model from scratch 91%</ns0:p><ns0:p>The number of filters and the kernel size <ns0:ref type='bibr' target='#b33'>(Lichtblau and Stoean, 2019)</ns0:ref> colorectal in <ns0:ref type='bibr'>(Stoean et</ns0:ref> Overall, the earlier studies, summarized in Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref>, revealed a notable trend in using deep CNN to classify colon cancer histopathological images. It was used to provide much higher performance than the conventional machine learning models. Nevertheless, training CNN models are not that trivial as they need considerable memory resources and computation and are usually hampered by over-fitting problems. Besides, they require a large amount of training dataset. In this regard, the recent studies <ns0:ref type='bibr' target='#b3'>(Ahmad et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b12'>Boumaraf et al., 2021)</ns0:ref> have demonstrated that sufficient fine-tuned pretrained CNN models performance is much more reliable than the one trained from scratch, or in the worst cases the same. Besides, using ensemble learning of pretrained models show effective results in various applications of image classification tasks. Therefore, this research fills the gap in the previous studies for colon histopathological images classification by introducing a set of transfer learning models based on Dense.</ns0:p><ns0:p>Then, reap the benefits of the ensemble learning to fuse their decision.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODOLOGY</ns0:head><ns0:p>This study constructs an ensemble of the pretrained models with fine-tuning for the colon diagnosis based on histopathological images. Mainly, four pretrained models (DenseNet121 MobileNetV2, InceptionV3, and VGG16) are fine-tuned, and then their predicted probabilities are fused to produce a final decision for a test/image. The pretrained models utilize transfer learning to mitigate these models' weights to handle a similar classification task. Ensemble learning of pretrained models attains superior performance for histopathological image classification.</ns0:p></ns0:div>
<ns0:div><ns0:head>Transfer Learning (TL) and pretrained Deep Learning Models for medical image</ns0:head><ns0:p>Transferring knowledge from one expert to another is known as transfer learning. In deep learning techniques, this approach is utilized where the CNN is trained on the base dataset (source domain), which has a large number of samples (e.g., ImageNet). Then, the weights of the convolutional layers are transferred to the new small dataset (target domain). Using pretrained models for classification tasks can be divided into two main scenarios: freezing the layers of the pretrained model and fine-tuning the models. In the former scenario: the convolutional layers of a deep CNN model are frozen, and the last FCC are omitted. In this way, the convolutional layers act as feature extractions. Then these features are Manuscript to be reviewed</ns0:p><ns0:p>Computer Science passed to a specific classifier (e.g., KNN, SVM) <ns0:ref type='bibr' target='#b55'>(Taspinar et al., 2021)</ns0:ref>. While in the latter case, the layers are fine-tuned, and some hyper-parameters are adjusted to handle a new task. Besides, the top layer (fully connected layer) is adjusted for the target domain. In this study, for example, we configure the number of neurons in this layer to (4) in accordance with the number of classes in the colon dataset. TL aims to boost the target field's accuracy (i.e., colon histopathological) by taking full advantage of the source field (i.e., ImageNet). Therefore, in this study, we transfer the weights of the set of four powerful pretrained CNN models ( DenseNet <ns0:ref type='bibr' target='#b22'>(Huang et al., 2017)</ns0:ref>, MobileNet <ns0:ref type='bibr' target='#b47'>(Sandler et al., 2018)</ns0:ref>, VGG16 <ns0:ref type='bibr' target='#b48'>(Simonyan and Zisserman, 2014)</ns0:ref>, and InceptionV3 <ns0:ref type='bibr' target='#b52'>(Szegedy et al., 2016)</ns0:ref>) with fine-tuning to increase the diagnosis performance of the colon histopathological image classification. The pretrained Deep CNN models and the proposed ensemble learning are presented in the subsequent section.</ns0:p></ns0:div>
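As a minimal illustration of the fine-tuning scenario described above (not the authors' exact code), a pretrained convolutional base can be loaded with ImageNet weights, frozen, and topped with a new fully connected head whose final layer has four neurons, one per colon class; the hidden width of 256 is an assumption for illustration only.

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

# Convolutional base pretrained on ImageNet, without the original 1000-class head.
base = DenseNet121(weights="imagenet", include_top=False,
                   input_shape=(224, 224, 3), pooling="avg")
base.trainable = False  # freeze the transferred blocks (feature-extraction scenario)

# New task-specific head: the final layer has 4 output neurons for the 4 colon classes.
model = models.Sequential([
    base,
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(4, activation="softmax"),
])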
<ns0:div><ns0:head>Pretrained DenseNet121</ns0:head><ns0:p>Dense CNN(DenseNet) was offered by Huang et al. <ns0:ref type='bibr' target='#b22'>(Huang et al., 2017)</ns0:ref>. The architecture of DenseNet was improved based on the ResNet model. The prominent architecture of DenseNet is based on connecting the model using dense connection instead of direct connection within all the hidden layers of the CNN <ns0:ref type='bibr' target='#b6'>(Alzubaidi et al., 2021)</ns0:ref>. The crucial benefits of such an architecture are that the extracted features/features map is shared with the model. The number of training parameters is low compared with other CNN models similar to CNN models because of the direct synchronization of the features to all following layers. Thus, the DenseNet reutilizes the features and makes their structure more efficient. As a result, the performance of the DenseNet is increased <ns0:ref type='bibr' target='#b3'>(Ahmad et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b20'>Ghosh et al., 2021)</ns0:ref>. The main components of the DenseNet are: the primary composition layer, followed by the ReLU activation function, and dense blocks. The final layer is a set of FC layers <ns0:ref type='bibr' target='#b53'>(Talo, 2019)</ns0:ref>.</ns0:p></ns0:div>
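To illustrate the dense-connection idea (each layer receives the concatenated feature maps of all preceding layers), a toy Keras dense block might look as follows; the number of layers and the growth rate are illustrative only.

from tensorflow.keras import layers

def dense_block(x, num_layers=4, growth_rate=32):
    # Every new layer sees the concatenation of all previous feature maps,
    # so features are reused throughout the block.
    features = [x]
    for _ in range(num_layers):
        h = layers.Concatenate()(features) if len(features) > 1 else features[0]
        h = layers.BatchNormalization()(h)
        h = layers.Activation("relu")(h)
        h = layers.Conv2D(growth_rate, kernel_size=3, padding="same")(h)
        features.append(h)
    return layers.Concatenate()(features)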
<ns0:div><ns0:head>Pretrained MobileNetV2</ns0:head><ns0:p>MobileNet <ns0:ref type='bibr' target='#b47'>(Sandler et al., 2018)</ns0:ref> is a lightweight CNN model based on inverted residuals and a linear bottleneck, which form shortcut connections between the thin layers. It is designed to handle limited hardware resources because it is a low-latency model, and a small low power. The main advantage of the MobileNet is the tradeoff between various factors such as latency, accuracy, and resolution <ns0:ref type='bibr' target='#b30'>(Krishnamurthy et al., 2021)</ns0:ref>. In MobileNet, depth separable convolutional (DSC) and point-wise convolutional kernels are used to produce feature maps. Predominantly, DSC is a factorization approach, which replaces the standard convolution with a faster one. In MobileNet, DSC first uses depth-wise kennels 2-D filters to filter the spatial dimensions of the input image. The size of the depth-wise filter is Dk x Dk x1, where Dk is the size of the filter, which is much less than the size of the input images. Then, it is followed by a point-wise convolutional filter that mainly applied to filter the depth dimension of the input images. The size of the depth filter is1x1xn, where n is the number of kernels. They separate each DSC from point-wise convolutional using batch normalization and ReLU function. Therefore, DSC is called (separable). Finally, the last FCC is connected with the Softmax layer to produce the final output/ classification result. Using depth-wise convolutional can reduce the complexity by around 22.7%. This means the DSC takes only approximately 22% of the computation required by the standard convolutional. Based on this reduction, MobileNet is becoming seven times faster than the traditional convolutional. Thus, it becomes more desirable when the hardware is limited <ns0:ref type='bibr' target='#b49'>(Srinivasu et al., 2021)</ns0:ref>.</ns0:p></ns0:div>
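The depthwise separable factorization described above can be sketched in Keras as a per-channel depthwise filter followed by a 1x1 point-wise convolution, each followed by batch normalization and ReLU; the output channel count below is illustrative.

from tensorflow.keras import layers

def depthwise_separable_block(x, out_channels=64):
    x = layers.DepthwiseConv2D(kernel_size=3, padding="same")(x)  # spatial filtering, one filter per channel
    x = layers.BatchNormalization()(x)
    x = layers.ReLU()(x)
    x = layers.Conv2D(out_channels, kernel_size=1)(x)  # 1x1 point-wise combination across channels
    x = layers.BatchNormalization()(x)
    return layers.ReLU()(x)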
<ns0:div><ns0:head>Pretrained InceptionV3</ns0:head><ns0:p>Google teams in <ns0:ref type='bibr' target='#b52'>(Szegedy et al., 2016)</ns0:ref> introduced the InceptionV3 CNN. The architecture of InceptionV3 was updated based on the inceptionV1 model, as illustrated in Figure <ns0:ref type='figure'>2</ns0:ref>. It mainly addressed some issues in the previous inceptionV1 such as auxiliary classifiers by add batch normalization and representation bottleneck by adding kernel factorization <ns0:ref type='bibr' target='#b37'>(Mishra et al., 2020)</ns0:ref>. The architecture of the inceptionV3 includes multiple various types of kernels (i.e., kernel size) in the same level. This structure aims to solve the issue of extreme variation in the location of the salient regions in the input images under consideration <ns0:ref type='bibr' target='#b37'>(Mishra et al., 2020)</ns0:ref>. The inceptionV3 <ns0:ref type='bibr' target='#b52'>(Szegedy et al., 2016)</ns0:ref> utilizes a small filter size (1x7 and 1x5) rather than a large filter (7x7 and 5x5). In addition, a bottleneck of 1x1 convolution is utilized. Therefore, better feature representation.</ns0:p><ns0:p>The architecture of inceptionV3 <ns0:ref type='bibr' target='#b52'>(Szegedy et al., 2016)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>represents the output layer (e.g., ensemble technique). Using parallel layers with each other will save a lot of memory and increase the model's capacity without increasing its depth.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref>. The inception model from <ns0:ref type='bibr' target='#b53'>(Talo, 2019)</ns0:ref>.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref>. The VGG16 model <ns0:ref type='bibr' target='#b48'>(Simonyan and Zisserman, 2014)</ns0:ref>.</ns0:p></ns0:div>
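A minimal inception-style module with parallel kernel sizes whose outputs are concatenated, as discussed for InceptionV3 above, could be sketched as follows; the branch widths are illustrative, and the 1x1 convolutions play the role of the bottlenecks mentioned in the text.

from tensorflow.keras import layers

def inception_module(x, f1=64, f3=96, f5=32, fpool=32):
    b1 = layers.Conv2D(f1, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f3, 1, padding="same", activation="relu")(x)
    b3 = layers.Conv2D(f3, 3, padding="same", activation="relu")(b3)
    b5 = layers.Conv2D(f5, 1, padding="same", activation="relu")(x)
    b5 = layers.Conv2D(f5, 5, padding="same", activation="relu")(b5)
    bp = layers.MaxPooling2D(3, strides=1, padding="same")(x)
    bp = layers.Conv2D(fpool, 1, padding="same", activation="relu")(bp)
    # Parallel branches are concatenated along the channel axis.
    return layers.Concatenate()([b1, b3, b5, bp])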
<ns0:div><ns0:head>Pretrained VGG16</ns0:head><ns0:p>VGG16 was presented by Simonyan et al. <ns0:ref type='bibr' target='#b48'>(Simonyan and Zisserman, 2014)</ns0:ref> as a deeper convolutional neural network model. The basic design of this model is to replace the large kernels with smaller kernels, and extending the depth of the CNN model <ns0:ref type='bibr' target='#b6'>(Alzubaidi et al., 2021)</ns0:ref>. Thus, the VGG16 becomes potentially more reliable in carrying out different classification tasks. Figure <ns0:ref type='figure'>3</ns0:ref> shows the basic VGG16 (Simonyan and Zisserman, 2014) architecture. It consists of five blocks with 41 layers, where 16 layers have learnable weights; 13 convolutional layers and 3 FCC layers from the learnable layers <ns0:ref type='bibr' target='#b27'>(Khan et al., 2020)</ns0:ref>. The first two blocks include two convolutional layers, while the last three blocks consist of three convolutional layers. The convolutional layers use small kernels with size of 3x3 and padding 1. These convolutional layers are separated using the max-pooling layers that use 2x2 filter size with padding 1. The output of the last convolutional layer is 4096, which makes the number of neurons in the FCC 4096 neurons.</ns0:p><ns0:p>As illustrated in Table <ns0:ref type='table' target='#tab_7'>2</ns0:ref> , VGG16 uses around 134 million parameters, which raises the complexity of VGG16 relating to other pretrained models <ns0:ref type='bibr' target='#b56'>(Tripathi and Singh, 2020;</ns0:ref><ns0:ref type='bibr' target='#b29'>Koklu et al., 2022)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>The proposed Deep CNN Ensemble Based on softmax</ns0:head><ns0:p>The proposed deep ensemble CNNs (E-CNNs) architecture is based on two phases (base classifiers and fuse techniques). In the former phase, four modified models have been utilized: DenseNet121, Manuscript to be reviewed</ns0:p><ns0:p>Computer Science all the distinguishing features from the input training images <ns0:ref type='bibr' target='#b20'>(Ghosh et al., 2021)</ns0:ref>.</ns0:p><ns0:p>Besides, using initial weights in pretrained models affect the classification performance because the CNNs pretrained models are nonlinear designs. These pretrained models learn complicated associations from training data with the assistance of back propagation and stochastic optimization <ns0:ref type='bibr' target='#b3'>(Ahmad et al., 2021)</ns0:ref>. Therefore, this study introduces a block-wise fine-tuning technique to adapt the standard CNNs models to handle the heterogeneity nature in colorectal histology image classification tasks.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_1'>4</ns0:ref> illustrates the main steps of the design of the block-wise fine-tuning technique. First, the benchmark colon images are loaded. Then, some preprocessing tasks on the training and testing images are performed to prepare them for the pretrained models, (e.g. resizing them to 224x224x3). The images are then rescaled to 1/255 as in the previous related studies <ns0:ref type='bibr' target='#b52'>(Szegedy et al., 2016)</ns0:ref>. After splitting the dataset into training and testing, the four independent pretrained models: <ns0:ref type='bibr' target='#b22'>(Huang et al., 2017)</ns0:ref>, MobileNet <ns0:ref type='bibr' target='#b47'>(Sandler et al., 2018)</ns0:ref>, VGG16 <ns0:ref type='bibr' target='#b48'>(Simonyan and Zisserman, 2014)</ns0:ref>, and InceptionV3 <ns0:ref type='bibr' target='#b52'>(Szegedy et al., 2016)</ns0:ref>)</ns0:p><ns0:p>are loaded without changing their weights. Then, the FCC and softmax layers are omitted from the loaded pretrained CNN models. These layers were originally designed to output 1000 classes from the ImageNet dataset. Two dense layers with a varying number of hidden neurons are then added to strengthen the vital data-articular feature learning from each individual pretrained model. These dense layers are followed by the ReLU nonlinear activation function, which allows us to learn complex relationships among the data <ns0:ref type='bibr' target='#b3'>(Ahmad et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b19'>Garbin et al., 2020)</ns0:ref>. Next, a 0.3 dropout layer is added to address the long training time and overfitting issues in classification tasks <ns0:ref type='bibr' target='#b14'>(Deniz et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b12'>Boumaraf et al., 2021)</ns0:ref>. While in the product rule, the posterior probability outputsP j t (I) for each class label j are generated by the base classifier t for the test image (I). Then the class with the maximum likelihood of product is considered the final decision. Eq. (2) shows the product rule technique in the proposed E-CNN (product rule). Algorithm 2 illustrates the proposed E-CNNs with majority voting and product rule. Evaluate the performance of I using the test data j. </ns0:p><ns0:formula xml:id='formula_0'>P(I) = max j=1toc T =4 ∏ t=1 P j t (I)<ns0:label>(2</ns0:label></ns0:formula></ns0:div>
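To make the two fusion rules concrete, the following numpy sketch (ours, not the authors' code) combines the per-model softmax outputs by the product rule of Eq. (2) and by majority voting over the individual predictions.

import numpy as np

def product_rule(prob_list):
    # prob_list: list of (n_samples, n_classes) softmax outputs, one per base model.
    # Multiply the class posteriors across models and pick the class with the largest product.
    product = np.prod(np.stack(prob_list), axis=0)
    return np.argmax(product, axis=1)

def majority_voting(prob_list):
    # Each base model votes for its top class; the most frequent vote wins
    # (ties are broken towards the lowest class index).
    votes = np.stack([np.argmax(p, axis=1) for p in prob_list])  # shape (n_models, n_samples)
    return np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)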
<ns0:div><ns0:head>Resources used</ns0:head><ns0:p>All the experiments are implemented using TensorFlow, Keras API, and utilized python programming in Google Colaboratory or 'CoLab.' In the CoLab, we utilize Tesla GPU to run our experiment after loading the dataset into the Google drive <ns0:ref type='bibr' target='#b40'>(Postavaru et al., 2017)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>EXPERIMENTS RESULTS AND DISCUSSION</ns0:head><ns0:p>This section outlines the experiments and evaluation results from the (E-CNN) and its individual models presented in this research. This section also entails a synopsis of the training and test datasets. The results using the proposed E-CNN, with majority voting and product rule, other standard pretrained models, and state-of-the-art colon cancer classification methods are also presented in this section. Comparisons between the proposed E-CNN and other CNN models from scratch are presented in this section.</ns0:p></ns0:div>
<ns0:div><ns0:head>Dataset</ns0:head><ns0:p>To evaluate the validity of the proposed E-CNN for colon diagnosis from histopathological images, two distinct benchmarks colon histology images datasets from <ns0:ref type='bibr' target='#b50'>(Stoean et al., 2016)</ns0:ref> and <ns0:ref type='bibr' target='#b24'>(Kather et al., 2016)</ns0:ref> are applied. Further information about these datasets is as follows:</ns0:p><ns0:p>(A) Stoean (370 images): The histology images dataset <ns0:ref type='bibr' target='#b50'>(Stoean et al., 2016)</ns0:ref> were collected from the Hospital of Craiova, Romania. The benchmark dataset consist of 357 histopathological H&E of normal grade (grade 0) and for cancer grades (grades 1, 2, and 3), with 10x magnification. They have a similar 800 × 600 pixels resolution. The images' distribution for the classes is as follows: Grade 0: 62 images, grade 1: 96 images, grade 2: 99 images, and grade 3: 100 images. All images are RGB color 8-bit depth with JPEG format. Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows some samples from the images and how they are close to each other in the structure, which discriminates between various complicated grades. </ns0:p></ns0:div>
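A possible Keras input pipeline for these images (not taken from the paper) is sketched below, assuming one sub-folder per grade; it resizes the slides to 224x224, rescales pixel values by 1/255, and holds out 20% of the data for testing, matching the preprocessing and split described in this study.

import tensorflow as tf

IMG_SIZE = (224, 224)  # input size expected by the adapted pretrained models

# "colon_images/" is a placeholder directory with one sub-folder per grade.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "colon_images/", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32, label_mode="categorical")
test_ds = tf.keras.utils.image_dataset_from_directory(
    "colon_images/", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32, label_mode="categorical")

rescale = tf.keras.layers.Rescaling(1.0 / 255)
train_ds = train_ds.map(lambda x, y: (rescale(x), y))
test_ds = test_ds.map(lambda x, y: (rescale(x), y))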
<ns0:div><ns0:head>Experimental Setting</ns0:head><ns0:p>As the proposed E-CNN aims to assist in diagnosing colon cancer based on the histopathological images, the benchmark dataset in <ns0:ref type='bibr' target='#b50'>(Stoean et al., 2016)</ns0:ref> <ns0:ref type='bibr' target='#b25'>Kaur and Gandhi (2020)</ns0:ref>. The number of epochs was selected as 10. These models were trained by stochastic gradient descent(SGD) with momentum. all the proposed TL models emploed the cross entrotpy (CE) as the loss function. The cross-entropy is mainly utilized to estimate the distance between the prediction likelihood vector(E) and the one-hot-encoded ground truth label(T) <ns0:ref type='bibr' target='#b12'>(Boumaraf et al., 2021)</ns0:ref> probability vector(The following equation depicts the CE Eq.3:</ns0:p><ns0:formula xml:id='formula_1'>CE(E, T ) = − ∑ t=1 T i log E i (3)</ns0:formula><ns0:p>where CE is used to tell how well the output E matches the ground truth T. Furthermore, the dropout layer was added to all the proposed TL models to avoid over-fitting affair during training. As a result, it drops the activation randomly during the training phase and avoiding units from over co-adapting <ns0:ref type='bibr' target='#b12'>(Boumaraf et al., 2021)</ns0:ref>. In this study, dropout was set to 0.3 to randomly drop out the units with a probability of 0.3, which is typical when introducing the dropout in deep learning models.</ns0:p></ns0:div>
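Continuing the sketches above, the training configuration described in this section (SGD with momentum, the categorical cross-entropy loss of Eq. (3), and 10 epochs) might be expressed as follows; the learning-rate and momentum values are illustrative assumptions, as they are not stated in this excerpt.

import tensorflow as tf

# `model`, `train_ds` and `test_ds` are the objects built in the earlier sketches.
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3, momentum=0.9),  # illustrative values
    loss="categorical_crossentropy",  # the cross-entropy loss of Eq. (3)
    metrics=["accuracy"])

history = model.fit(train_ds, validation_data=test_ds, epochs=10)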
<ns0:div><ns0:head>Evaluation Criteria</ns0:head><ns0:p>In this work, multiclass (four-class) classification tasks have been carried out using the base classifiers and their ensembles on the benchmark colon dataset <ns0:ref type='bibr' target='#b50'>(Stoean et al., 2016)</ns0:ref>. The obtained results have been evaluated using average accuracy, average sensitivity, average specificity, and standard deviation over ten runs. All of these metrics are counted based on the confusion matrix, which includes the true negative (TN) and true positive (TP) values. TN and TP symbolize the acceptably classified benign and malignant samples, respectively. The false negative (FN), and false positive (FP) denote the wrong classified malignant and benign samples. These metrics are designed as follows:</ns0:p><ns0:p>• The average classification accuracy: The correctly categorized TP and TN numbers combined with the criterion parameter, are generally referred to as accuracy. A technique's classification accuracy is measured in Equation 4 as follows:</ns0:p><ns0:formula xml:id='formula_2'>Acc = 1 M M ∑ j=1 T P + T N T P + T N + FP + FN * 100%, (<ns0:label>4</ns0:label></ns0:formula><ns0:formula xml:id='formula_3'>)</ns0:formula><ns0:p>where M is the number of independent runs of the proposed ECNN with its individual. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:p>which are efficiently determined as described in Eq. 5:</ns0:p><ns0:formula xml:id='formula_4'>Sensitivity = \frac{1}{M}\sum_{j=1}^{M}\frac{TP}{TP+FN} \times 100\% , <ns0:label>(5)</ns0:label></ns0:formula><ns0:p>The sensitivity value lies on a [0, 1] scale, where one indicates ideal classification and zero the worst possible classification. Multiplication by 100 converts the sensitivity to a percentage.</ns0:p><ns0:p>• Average Specificity: Specificity is an evaluation metric for the negative samples within a classification approach. In particular, it measures the proportion of negative samples that are correctly classified. Specificity is computed as Eq. 6:</ns0:p><ns0:formula xml:id='formula_6'>Specificity = \frac{1}{M}\sum_{j=1}^{M}\frac{TN}{TN+FP} \times 100\% <ns0:label>(6)</ns0:label></ns0:formula></ns0:div>
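For completeness, the accuracy, sensitivity, and specificity of Eqs. (4)-(6) can be obtained from the multiclass confusion matrix, for example with the following sketch (macro-averaged over the four classes; averaging over the M independent runs is done outside this function).

import numpy as np
from sklearn.metrics import confusion_matrix

def macro_metrics(y_true, y_pred, n_classes=4):
    cm = confusion_matrix(y_true, y_pred, labels=list(range(n_classes)))
    tp = np.diag(cm).astype(float)
    fn = cm.sum(axis=1) - tp          # missed samples of each class
    fp = cm.sum(axis=0) - tp          # samples wrongly assigned to each class
    tn = cm.sum() - (tp + fn + fp)    # everything else
    accuracy = tp.sum() / cm.sum() * 100
    sensitivity = np.mean(tp / (tp + fn)) * 100
    specificity = np.mean(tn / (tn + fp)) * 100
    return accuracy, sensitivity, specificity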
<ns0:div><ns0:head>Results and discussion</ns0:head><ns0:p>This subsection presents the experimental results obtained from the proposed E-CNN and its individuals.</ns0:p><ns0:p>These results are compared to the classification accuracy results using standard pretrained models (e.g., DenseNet, MobileNet, VGG16, and InceptionV3). After that, the performance of the standard pretrained models was compared to the adaptive pretrained models' performance to evaluate the influence of blockwise fine-tuning policy. The proposed E-CNN was also compared with the state-of-the-art CNN models for colon cancer classification such as <ns0:ref type='bibr' target='#b40'>(Postavaru et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b51'>Stoean, 2020;</ns0:ref><ns0:ref type='bibr' target='#b39'>Popa, 2021)</ns0:ref>. In the end, to assess the significance of the proposed E-CNN, statistical test methods were used to verify whether there is a statistically significant difference between the performance of the E-CNN and the performance of the state-of-the-art CNN models.</ns0:p><ns0:p>The experimental results of this study were built based on the average runs. <ns0:ref type='bibr'>Modified MobileNetV2,</ns0:ref><ns0:ref type='bibr'>Modified InceptionV3,</ns0:ref><ns0:ref type='bibr'>and Modified VGG16)</ns0:ref>. The softmax of the FCC of these transfer learning set is used as the classification algorithm. Then, the ensemble (E-CNN) was obtained via product and majority voting aggregation methods. To illustrate the proposed E-CNN performance, the average accuracy, sensitivity, and specificity over the ten runs are used for evaluating the testing dataset.</ns0:p><ns0:p>Besides, the standard deviation (STD) for each base classifier and the E-CNN are also used to estimate the effectiveness of the proposed E-CNN. The experimental results of the proposed E-CNN and its individuals (i.e., base classifiers) on the first dataset are shown in Tables <ns0:ref type='table' target='#tab_13'>4, 5</ns0:ref> <ns0:ref type='bibr'>,</ns0:ref><ns0:ref type='bibr'>and 6,</ns0:ref><ns0:ref type='bibr'>and Figures 6,</ns0:ref><ns0:ref type='bibr'>7,</ns0:ref><ns0:ref type='bibr'>8,</ns0:ref><ns0:ref type='bibr'>9,</ns0:ref><ns0:ref type='bibr'>10,</ns0:ref><ns0:ref type='bibr'>and 11,</ns0:ref><ns0:ref type='bibr' /> respectively. Meanwhile, the results of the (Kather's) dataset <ns0:ref type='bibr' target='#b24'>(Kather et al., 2016)</ns0:ref> are shown in Table <ns0:ref type='table' target='#tab_15'>7</ns0:ref>. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science However, among the four proposed individual pretrained models, the modified VGG16 is the least accurate, it rated about 79% for the multiclass classification task. This could be the explanation for VGG16's limited number of layers (i.e. 16 layers). Compared to the standard VGG16, the average accuracy rate difference between the modified VGG16 and the standard VGG16 was more than 10%, which was big and statistically significant. This astounding level of performance of the modified models could be attributed to the ability of the adaptation layers to find the most abstract features, which aid the FCC and softmax classifier in discriminating between various grades in colon histopathological images.</ns0:p><ns0:p>As a result, it reduces the problem of inter-class classification. 
<ns0:p>Moreover, since the proposed modified pretrained models outperformed the standard models, ensemble learning is used to combine their decisions, enabling a better generalization ability than a single pretrained model <ns0:ref type='bibr' target='#b13'>(Cao et al., 2020)</ns0:ref>. In this study, two ensemble learning models are utilized, E-CNN (product rule) and E-CNN (majority voting), to merge the decisions of the single models. The former is based on merging the probabilities of the individual modified models, while the latter is based on combining the output predictions of the individuals. Figure <ns0:ref type='figure' target='#fig_9'>7</ns0:ref> shows the confusion matrix obtained on the tested samples (20% of the dataset) for one run of the classification performed through the proposed E-CNN (product rule). The empirical results of the proposed E-CNN (majority voting) and E-CNN (product rule) achieved accuracy rates of 94.5% and 95.2%, respectively.</ns0:p><ns0:p>These accuracy values were higher compared to the individual models. For example, the E-CNN (product rule) result showed a 3.2% increase compared to the modified DenseNet121 model. This result reveals the significance of the product rule in the proposed E-CNN for colon image classification, because it is based on independent events <ns0:ref type='bibr' target='#b5'>(Albashish et al., 2016)</ns0:ref>.</ns0:p></ns0:div>
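For clarity, a minimal sketch (ours, for illustration) of the two fusion rules applied to the softmax outputs of the individual adapted models is given below; the random inputs at the end are hypothetical.

```python
# Hedged sketch: fusing the per-model softmax outputs with the product rule
# and with majority voting; prob_list holds one (n_samples, n_classes) array per model.
import numpy as np

def product_rule(prob_list):
    fused = np.prod(np.stack(prob_list, axis=0), axis=0)   # multiply class probabilities
    return fused.argmax(axis=1)                            # label with the largest product

def majority_voting(prob_list):
    n_classes = prob_list[0].shape[1]
    votes = np.stack([p.argmax(axis=1) for p in prob_list], axis=0)  # each model's label
    counts = np.stack([np.bincount(votes[:, i], minlength=n_classes)
                       for i in range(votes.shape[1])], axis=1)      # (n_classes, n_samples)
    return counts.argmax(axis=0)                            # most frequent label per sample

# Hypothetical usage with four models' outputs on three test images:
rng = np.random.default_rng(0)
probs = [rng.dirichlet(np.ones(4), size=3) for _ in range(4)]
print(product_rule(probs), majority_voting(probs))
```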
<ns0:div><ns0:p>Table <ns0:ref type='table' target='#tab_11'>4</ns0:ref>. Evaluation results for the proposed E-CNN, its individuals (modified TL models) when the number of epochs = 10, and the standard TL models on the colon histopathological images dataset, based on the average accuracy, sensitivity, specificity, and average standard deviation (STD) over 10 runs; best results in bold.</ns0:p><ns0:p>To show the adequacy of the proposed E-CNN, sensitivity was computed. Table <ns0:ref type='table' target='#tab_11'>4</ns0:ref> confirms that the E-CNN has higher sensitivity values than all the individual models. It is worth noting that the sensitivity performance matches the accuracy values, thereby emphasizing the consistency of the E-CNN results. E-CNN (product rule) was able to yield a better sensitivity value (95.6%). Among all the proposed transfer learning models, InceptionV3 delivered the overall maximum sensitivity performance. Besides, the specificity measure shows that the E-CNN and its individuals are able to correctly identify the negative samples of each class.</ns0:p><ns0:p>Furthermore, the standard deviation analysis over the ten runs shows that the ensemble E-CNN (product rule) has the minimum value (around 1.7%). These results indicate that it is stable and capable of producing consistent outcomes regardless of the randomization.</ns0:p><ns0:p>To show the adequacy of the proposed modified CNN models even after being trained on a smaller dataset, we have provided accuracy and loss (error function) curves. The loss function quantifies the cost of a particular set of network parameters based on how far their outputs deviate from the ground-truth labels in the training set. The TL models employ SGD to determine the set of parameters that minimizes this loss. The DenseNet and Inception models achieved good accuracy on the training and test datasets over the epochs, while the MobileNet and VGG16 models performed adequately. One possible explanation is that the proposed models are stable during the training phase, allowing them to converge well.</ns0:p><ns0:p>The DenseNet121 loss curve indicates that its training loss dropped much faster than that of VGG16 and that its testing accuracy improved much faster. In more detail, the VGG16 loss decreased roughly linearly, whereas the DenseNet loss decreased sharply. This is consistent with DenseNet121's classification performance in Table <ns0:ref type='table' target='#tab_11'>4</ns0:ref>, where it outperformed all the other proposed models. Furthermore, one can see that all of the proposed TL models, except VGG16, achieved high testing accuracies; these models therefore generalize well.</ns0:p><ns0:p>To further demonstrate the efficacy of the proposed E-CNNs, we also compare the obtained results on the colon histopathological images benchmark dataset with the most recent related works <ns0:ref type='bibr' target='#b51'>(Stoean, 2020;</ns0:ref><ns0:ref type='bibr' target='#b39'>Popa, 2021)</ns0:ref>. Table <ns0:ref type='table' target='#tab_13'>5</ns0:ref> contains the comparison between the proposed E-CNNs and the recent state-of-the-art studies.</ns0:p>
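The accuracy and loss curves discussed above can be drawn with a short helper such as the following sketch (ours, for illustration); it assumes the History object returned by Keras model.fit when a validation/test split is supplied.

```python
# Illustrative sketch only: plot train/test accuracy and loss curves from a
# Keras History object (history = model.fit(..., validation_data=test_set)).
import matplotlib.pyplot as plt

def plot_curves(history, title):
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
    ax1.plot(history.history["accuracy"], label="train")
    ax1.plot(history.history["val_accuracy"], label="test")
    ax1.set_title(f"{title}: accuracy"); ax1.set_xlabel("epoch"); ax1.legend()
    ax2.plot(history.history["loss"], label="train")
    ax2.plot(history.history["val_loss"], label="test")
    ax2.set_title(f"{title}: loss"); ax2.set_xlabel("epoch"); ax2.legend()
    plt.tight_layout()
    plt.show()
```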
<ns0:p>From the tabular results, one can see that the proposed E-CNNs achieved higher results compared either to pretrained models, such as in <ns0:ref type='bibr' target='#b50'>(Stoean et al., 2016)</ns0:ref>, or to CNNs constructed from scratch. One of the main reasons for these superior results is the adaptation of the transfer learning models with the appropriate layers; additionally, the use of ensemble learning increases the discrimination between the various classes in the histopathological colon images dataset. In comparison to the recent study by <ns0:ref type='bibr' target='#b51'>Stoean et al. (Stoean, 2020)</ns0:ref>, our ensemble E-CNNs have shown better performance. They constructed a CNN from scratch and then used evolutionary algorithms (EA) to fine-tune its parameters. Their classification accuracy on the colon histopathological images dataset was 92%. We attribute the superiority of the proposed method to its deeper architecture and its use of ensemble learning.</ns0:p><ns0:p>Moreover, the obtained classification accuracies were compared with the pretrained GoogleNet and AlexNet models in <ns0:ref type='bibr' target='#b39'>(Popa, 2021)</ns0:ref>. The proposed method outperformed these pretrained models. The classification accuracy of GoogleNet and AlexNet on the colon histopathological images was 85.62% and 89.53%, respectively. The average accuracy rate difference between the proposed method and these pretrained models was more than 10% and 6%, respectively, which is large and statistically significant. Two critical observations are to be made here: first, adapting pretrained models to a specific task increases performance. Second, using pretrained models merely as feature extractors, without the softmax classifier, may degrade the classification accuracy on the colon histopathological image dataset.</ns0:p><ns0:p>To verify that the modified pretrained models are not overfitted, we re-trained them for 30 epochs, as in Kather's work <ns0:ref type='bibr' target='#b24'>(Kather et al., 2016)</ns0:ref>. The obtained results improved compared with those obtained when the number of epochs was set to 10. For example, the modified DenseNet121, MobileNetV2, InceptionV3, and VGG16 with 30 epochs increased the accuracy by 6%, 2%, 4%, and 7%, respectively. This indicates that increasing the number of epochs, so that the deep learning models are trained sufficiently, leads to greater success. Furthermore, the increased performance of the base learners affects the ensemble models: the results of E-CNN (product rule) and E-CNN (majority voting) increased by around 2% and 1.0%, respectively, compared to the same ensembles when the number of epochs was ten. Figure <ns0:ref type='figure' target='#fig_14'>12</ns0:ref> shows the confusion matrix of the E-CNN (product rule). These results indicate that the modified individual learners and their ensemble perform robustly and are not overfitted when the number of epochs is increased.</ns0:p><ns0:p>Moreover, to validate the proposed modified models and their ensembles, we applied these models to the second colon histopathological dataset, the Kather dataset <ns0:ref type='bibr' target='#b24'>(Kather et al., 2016)</ns0:ref>. This dataset contains 5,000 histological images of human colon cancer from eight distinct kinds of tissue. Table <ns0:ref type='table' target='#tab_15'>7</ns0:ref> gives the accuracy, sensitivity, and specificity of the proposed individual pretrained models, E-CNN (product rule), and E-CNN (majority voting).</ns0:p>
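As context for how the benchmark images are prepared, the sketch below (our illustration, assuming a recent TensorFlow and a folder-per-class layout, which is not specified in the paper) builds the 80%/20% train/test split with the 224×224 resizing and 1/255 rescaling used in the experiments.

```python
# Illustrative sketch only (assumed folder-per-class layout, recent TensorFlow):
# 80%/20% split, 224x224 resize, and 1/255 rescaling as in the experimental setup.
import tensorflow as tf

def load_split(data_dir, img_size=(224, 224), batch_size=16, seed=42):
    common = dict(validation_split=0.2, seed=seed, image_size=img_size,
                  batch_size=batch_size, label_mode="categorical")
    train = tf.keras.utils.image_dataset_from_directory(data_dir, subset="training", **common)
    test = tf.keras.utils.image_dataset_from_directory(data_dir, subset="validation", **common)
    rescale = tf.keras.layers.Rescaling(1.0 / 255)
    scale = lambda x, y: (rescale(x), y)
    return train.map(scale), test.map(scale)
```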
<ns0:p>Besides, the proposed modified models and their ensembles are compared to similar experiments previously used to assess classification on the Kather dataset.</ns0:p><ns0:p>Based on the results, the proposed modified models are able to separate the eight different classes in the histopathological images. Both the modified InceptionV3 and DenseNet121 achieved a testing accuracy of roughly 89%, with a standard deviation of less than 0.5%. These results outperform the ResNet152 feature extraction results in <ns0:ref type='bibr' target='#b38'>(Ohata et al., 2021</ns0:ref>) by around 9%. That is because the fine-tuned model is capable of extracting high-level features from the input images. Furthermore, using the modified VGG16 on the same dataset yields roughly 83% test accuracy, while <ns0:ref type='bibr' target='#b44'>(Rachapudi and Lavanya Devi, 2021</ns0:ref>) achieved a test accuracy of 77% when utilizing their CNN architecture. This implies that the modification to the pretrained models yields acceptable results on the histopathological image dataset. The E-CNN (product rule) and E-CNN (majority voting) achieved promising results on the Kather dataset. As shown in Table <ns0:ref type='table' target='#tab_15'>7</ns0:ref>, the E-CNN (product rule) and E-CNN (majority voting) performed better than all individual models, with an accuracy of 91.28% and 90.63%, respectively, outperforming DenseNet121 by only around 2%. These results demonstrate the effectiveness of the proposed modified pretrained models and their ensemble in this classification task.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>According to the above experimental results, it is clear that the proposed E-CNNs and the adapted TL predictive models outperform other state-of-the-art models and the standard pretrained models in the colon histopathological image classification task. The experimental results indicate that adapting the pretrained models for medical image classification improves classification performance. The results in Tables <ns0:ref type='table' target='#tab_13'>4 and 5</ns0:ref> demonstrate the critical importance of the introduced adapted models (DenseNet121, MobileNetV2, InceptionV3, and VGG16) in comparison to conventional methods. For example, the adaptive DenseNet model outperformed the standard DenseNet model. These findings show that tailoring the pretrained models to a specific classification task can boost performance. It has also been experimentally verified that using these models in medical image classification results in superior performance when compared to training a CNN from scratch (as in previous works by <ns0:ref type='bibr' target='#b40'>(Postavaru et al., 2017)</ns0:ref> and <ns0:ref type='bibr' target='#b51'>(Stoean, 2020)</ns0:ref>). One reason for this finding is that training a CNN from scratch would necessitate a large number of training samples. Moreover, the large number of CNN parameters must be trained effectively and with a high degree of generalization to obtain acceptable results; thus, a limited number of training samples causes overfitting in classification tasks. Furthermore, based on the results, it was found that the selection of appropriate hyperparameters in pretrained models plays a vital role in the proper learning and performance of these models.</ns0:p><ns0:p>In this study, two ensemble learning models (E-CNN (majority voting) and E-CNN (product rule)) were designed to further boost the colon histopathological image classification performance. In the proposed ensemble learning models, the adaptive pretrained models were used as base classifiers.</ns0:p><ns0:p>Table <ns0:ref type='table'>6</ns0:ref>. Evaluation results for the proposed E-CNN, its individuals (modified TL models) when the number of epochs = 30, and the standard TL models on the colon histopathological images dataset, based on the average accuracy, sensitivity, specificity, and average standard deviation (STD) over 10 runs; best results in bold.</ns0:p><ns0:p>Through the experimental results, one can find that ensemble learning outperformed the use of individual classifiers. Furthermore, using the product rule in the ensemble allows the probabilities of independent events to be fused, ultimately improving performance. This finding is in line with the results in Table <ns0:ref type='table' target='#tab_11'>4</ns0:ref>, where the proposed E-CNN (product) outperformed the proposed E-CNN (majority voting).</ns0:p><ns0:p>Furthermore, the T-test is used to compare the proposed E-CNN (product) to the previously related studies on the same dataset. This test is performed to verify that the improvement of the proposed E-CNN (product) over the state-of-the-art is statistically significant. The T-test is carried out based on the average accuracy and the standard deviation of the test samples (20% of the dataset), which are obtained by the E-CNN (product) over ten independent runs.</ns0:p>
<ns0:p>Applying the T-test at the 95% significance level (alpha = 0.05) to the classification accuracies yields the p-values and difference statistics shown in Table <ns0:ref type='table' target='#tab_13'>5</ns0:ref>. As shown in Table <ns0:ref type='table' target='#tab_13'>5</ns0:ref>, the proposed E-CNN (product) outperforms most of the related works on the colon histopathological image dataset, where the majority of the p-values are < 0.0001. For example, comparing the proposed E-CNN with the CNN built from scratch in <ns0:ref type='bibr' target='#b51'>(Stoean, 2020)</ns0:ref>, the E-CNN is significantly better, with a p-value < 0.001. These findings show that the E-CNN (product) is effective for handling medical image classification tasks. In summary, it has been demonstrated that the proposed TL models assist in the colon histopathological image classification task and can be used in the medical domain. Besides, using ensemble learning for machine learning classification tasks can improve the classification results.</ns0:p></ns0:div>
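For reproducibility of this significance check, the short sketch below (ours; the baseline's standard deviation is an assumed value, since only its mean accuracy is reported) runs a two-sample t-test from summary statistics with SciPy.

```python
# Illustrative sketch only: two-sample t-test from summary statistics (alpha = 0.05).
# The E-CNN mean/std come from Table 4; the baseline std is an assumed value,
# since only the mean accuracy of the compared method is reported.
from scipy import stats

ecnn_mean, ecnn_std, n_runs = 95.20, 1.64, 10
baseline_mean, baseline_std = 92.0, 1.64      # e.g., CNN from scratch (Stoean, 2020)

t_stat, p_value = stats.ttest_ind_from_stats(ecnn_mean, ecnn_std, n_runs,
                                             baseline_mean, baseline_std, n_runs)
print(f"t = {t_stat:.3f}, p = {p_value:.5f}, significant: {p_value < 0.05}")
```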
<ns0:div><ns0:head>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>Deep learning plays a key role in diagnosing colon cancer by grading colon histopathological images. In this study, we introduced a new set of transfer learning-based methods to help classify colon cancer from histopathological images, which can be used to discriminate between the different classes in this domain. To solve this classification task, the pre-trained CNN models DenseNet121, MobileNetV2, InceptionV3, and VGG16 were used as backbone models. We introduced the TL technique based on a block-wise fine-tuning process to transfer learned experience to colon histopathological images. To accomplish this, we added new dense and drop-out layers to the pretrained models, followed by a new FCC with softmax layers to handle the four-class classification task.</ns0:p></ns0:div>
<ns0:div><ns0:p>The findings therefore indicate that E-CNNs can be used in diagnostic pathology to assist pathologists in making final decisions and accurately diagnosing colon cancer.</ns0:p><ns0:p>Future research could introduce a new strategy to select the best hyperparameters for the adaptive pretrained models; we recommend wrapper methods for this task.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Block diagram of the proposed Block-wise fine-tuning for each pretrained model from (DenseNet121, MobileNetV2, InceptionV3, and VGG16).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>)</ns0:head><ns0:label /><ns0:figDesc>Based on Algorithms 1, 2, and Figures4,5, the following points are taken into account: First, the CNN model is adapted to handle the heterogeneity in the colon histopathological images using the Block-wise fine-tuning technique for each of the pretrained models. It extracts additional abstract features from the image that aid in increasing intra-class discrimination. Second, ensemble learning is employed to improve the performance of the four adaptive pretrained models. As a result, the final decision regarding the test images will be more precise.Building and training the adaptive pretrained models [Block-wise fine-tuning for each pretrained model]. 1: input:Training data(T), N samples: T = [x 1 , x 2 ,. . ., x N ], with Category: y = [y 1 , y 2 ,. . ., y N ], pretrained CNN models( M), M=[ DenseNet121, MobileNetV2, InceptionV3, and VGG16 models]. 2: for each I in M do with number of neurons equal to 512 and activation function='ReLU' 5:Add Dense2 layer with number of neurons equal to 64 and activation function='ReLU' with number of neurons equal to 4( based on number classes in the colon dataset) -parameters values, as listed in</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>, I]=probabilities of each class for the test image j when using the individual I.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>V</ns0:head><ns0:label /><ns0:figDesc>[ j, I]=prediction for the test image j when using the individual I.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. The proposed E-CNNS with the four adaptive pretrained models (DenseNet121, MobileNetV2, InceptionV3, and VGG16).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>B) Kather (5000 images): The dataset<ns0:ref type='bibr' target='#b24'>(Kather et al., 2016)</ns0:ref> includes 5000 histology images of human colon cancer. The samples were gathered from the Institute of Pathology, University Medical Center, Mannheim, Germany. The benchmark dataset consists of histopathological H&E of eight classes: namely ADIPOSE, STROMA, TUMOR, DEBRIS, MUCOSA, COMPLEX, EMPTY, and LYMPHO. Each class consists of 625 images with a size of 150 × 150 pixels, 20× magnification, and RGB channel format.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>Mainly, ten separate experiments were used to obtain the average value.The experiments were carried out on two benchmark colon histopathological image datasets: Stoean and Kather datasets, to test the robustness of the proposed methods. The former dataset is the Stoean dataset, which includes four different classes, mainly: benign, grade1, grade2, and grade3. While the second dataset (the Kather dataset) includes eight diverse classes, Each dataset was divided into 80% for the training set and 20% for the testing set. The results of classification performance in this study are for the test dataset. The The classification tasks were accomplished using individual classifiers of the modified transfer learning set (Modified DenseNet121,</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. A comparison of modified TL models with standard TL models (original) in terms of average classification accuracy.</ns0:figDesc><ns0:graphic coords='16,141.73,63.83,413.53,376.48' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Confusion matrix of the E-CNN (product rule) on the Stoean testing dataset when number of epochs =10.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. The accuracy learning curves of training and testing derived from the four modified CNN base learners, when the number of epochs is ten on the colon histopathological image benchmark Stoean's dataset used in this study: (A) Modified DenseNet121, (B) Modified InceptionV3, (C) Modified MobileNetV2, and (D) Modified VGG16.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. The loss learning curves of training and testing derived from the four modified CNN base learners, when the number of epochs is ten on the colon histopathological image benchmark Stoean's dataset used in this study: (A) Modified DenseNet121, (B) Modified InceptionV3, (C) Modified MobileNetV2, and (D) Modified VGG16.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. The accuracy learning curves of training and testing derived from the four modified CNN base learners, when the number of epochs is 30 on the colon histopathological image benchmark Stoean's dataset used in this study: (A) Modified DenseNet121, (B) Modified InceptionV3, (C) Modified MobileNetV2, and (D) Modified VGG16.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. The loss learning curves of training and testing derived from the four modified CNN base learners, when the number of epochs is 30 on the colon histopathological image benchmark Stoean's dataset used in this study: (A) Modified DenseNet121, (B) Modified InceptionV3, (C) Modified MobileNetV2, and (D) Modified VGG16.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. Confusion matrix of the E-CNN (product rule) on the Stoean testing dataset when number of epochs =30.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head /><ns0:label /><ns0:figDesc>The adaptability of the proposed models has been enhanced further by the utilized ensemble learning. Two deep ensemble learning methods (E-CNN (product) and E-CNN (majority voting)) have been proposed. The adapted pretrained models were used as individual classifiers in these proposed ensembles. Next, their output probabilities were fused using the majority voting and the product rule. The acquired results revealed the efficiency of the suggested E-CNNs and their individuals. We achieved accuracy results of 95.20% and 94.52% for the proposed E-CNN (product) and E-CNN (majority voting), respectively. The proposed E-CNNs and their individual performances were evaluated and compared against the standard (without adaptation) pretrained models (DenseNet121, MobileNetV2, InceptionV3, and VGG16) as well as state-of-the-art pretrained models and a CNN trained from scratch on colon histopathological images. On all evaluation metrics and on the colon histopathological images benchmark dataset, the proposed E-CNNs considerably outperformed the standard pretrained and state-of-the-art CNN-from-scratch models. The results indicate that the adaptation of pretrained models for TL is a viable option for dealing with the limited number of samples in any new classification task.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Summary of the major classification studies on colon cancer</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Authors in</ns0:cell><ns0:cell>Dataset used</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell /><ns0:cell>Using pretrained either</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>architecture</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>feature extraction/ fine-tuning</ns0:cell></ns0:row></ns0:table><ns0:note><ns0:ref type='bibr' target='#b51'>(Stoean, 2020)</ns0:ref> colorectal in<ns0:ref type='bibr' target='#b50'>(Stoean et al., 2016)</ns0:ref> CNN model from scratch 92%</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Summary of deep architectures used in this work. InceptionV3, and VGG16 pretrained CNN classifier. While the latter phase focuses on combining the decisions of the base classifiers (in the first phase). Two types of fusion techniques have been employed in the proposed E-CNNS: majority voting and product rule. On the one hand, the majority voting is based on the prediction value of the base classifier. On the other hand, the product rule is based</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Architecture</ns0:cell><ns0:cell>No. of</ns0:cell><ns0:cell>No. of</ns0:cell><ns0:cell>No. of training</ns0:cell><ns0:cell>Minimum</ns0:cell><ns0:cell>Number of</ns0:cell><ns0:cell>Top 5 error</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Conv layers</ns0:cell><ns0:cell>FCC layers</ns0:cell><ns0:cell>parameters</ns0:cell><ns0:cell>image size</ns0:cell><ns0:cell>extracted features</ns0:cell><ns0:cell>on ImageNet</ns0:cell></ns0:row><ns0:row><ns0:cell>DenseNet121</ns0:cell><ns0:cell>120</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>7 million</ns0:cell><ns0:cell>221x221</ns0:cell><ns0:cell>1024</ns0:cell><ns0:cell>7.71%</ns0:cell></ns0:row><ns0:row><ns0:cell>InceptionV3</ns0:cell><ns0:cell>42</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>22 million</ns0:cell><ns0:cell>299x299</ns0:cell><ns0:cell>2048</ns0:cell><ns0:cell>3.08%</ns0:cell></ns0:row><ns0:row><ns0:cell>VGG16</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>134 million</ns0:cell><ns0:cell>227x227</ns0:cell><ns0:cell>4096</ns0:cell><ns0:cell>7.30%</ns0:cell></ns0:row><ns0:row><ns0:cell>MobileNet</ns0:cell><ns0:cell>53</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>3.4 million</ns0:cell><ns0:cell>224x224</ns0:cell><ns0:cell>1024</ns0:cell><ns0:cell>-%</ns0:cell></ns0:row><ns0:row><ns0:cell>MobileNetV2,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>on the probabilities of the base classifiers (i.e., pretrained model), the details of the proposed E-CNNs are as follows:The proposed modified Deep pretrained models After adapting the four pretrained models (DenseNet121, MobileNetV2, InceptionV3, and VGG16), they serve as base classifiers in the proposed E-CNN for the automated classification of colon H&E histopathological images. The standard previous pretrained models extract various features from the training images to discriminate between different types of cancer (multiple classes) in the colon images dataset. However, each pretrained model is based on a set of convolution layers and filter sizes to extract different features from the input images. As a result, no pretrained model can be more general in extracting</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>10:</ns0:cell><ns0:cell>Build the final model (adaptI)</ns0:cell></ns0:row><ns0:row><ns0:cell>11:</ns0:cell><ns0:cell>Train the adapI on T</ns0:cell></ns0:row><ns0:row><ns0:cell>12:</ns0:cell><ns0:cell>Append adapI into adapM</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>13: end for</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>14: Output: Adaptive models (adaptM), adaptM=[ adapt DenseNet121, adapt MobileNetV2,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>adapt InceptionV3, and adapt VGG16 models]</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Algorithm 2 Ensemble of adaptive models and evaluating the ensemble model on test colon histopathlog-</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>ical images.</ns0:cell></ns0:row></ns0:table><ns0:note>1: Input: Adaptive models (adaptM), adaptM=[ adapt DenseNet121, adapt MobileNetV2, adapt InceptionV3, and adapt VGG16 models], Test images set( D), with z samples: R = [x 1 , x 2 , x 3 ,. . ., x z ], with Category: y = [y 1 , y 2 ,. . ., y z ] 2: for j in D do 3: for each individual I in adaptM do 4:</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>is considered during the experiments' work. The dataset was divided into 80% training and 20% testing. In E-CNN, the Hyperparameters, as illustrated in Table3, were fine-tuned with the same setting for all the proposed transfer learning models. The training and Hyperparameters used in the proposed individual transfer learning models and an ensemble model.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Hyperparameters</ns0:cell><ns0:cell>Value</ns0:cell></ns0:row><ns0:row><ns0:cell>Image size</ns0:cell><ns0:cell>224 × 224</ns0:cell></ns0:row><ns0:row><ns0:cell>Optimizer</ns0:cell><ns0:cell>0.005</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Maximum Habitat probability SGD with momentum</ns0:cell></ns0:row><ns0:row><ns0:cell>Learning rate</ns0:cell><ns0:cell>1e-6</ns0:cell></ns0:row><ns0:row><ns0:cell>Batch size</ns0:cell><ns0:cell>16</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of epochs</ns0:cell><ns0:cell>10</ns0:cell></ns0:row><ns0:row><ns0:cell>Dropout</ns0:cell><ns0:cell>0.3</ns0:cell></ns0:row><ns0:row><ns0:cell>Loss function</ns0:cell><ns0:cell>Cross Entropy</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>testing images were resized to 224 × 224 for comfort with the proposed transform learning models. The</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>batch size was chosen as 16; the minimum learning rate was specified as min lr=0.000001. The learning</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>rate was determined to be small enough to slow down learning in the modelsPopa (2021);</ns0:cell></ns0:row></ns0:table><ns0:note>12/26PeerJ Comput. Sci. reviewing PDF | (CS-2022:02:70538:1:2:CHECK 14 May 2022)Manuscript to be reviewed Computer Science</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>compares the results obtained by the modified pretrained models, the baseline (standard) pretrained model, and the proposed (E-CNN). Table4, indicates that the results obtained from all the modified models successfully outperformed the standard pretrained models on the first dataset. The</ns0:figDesc><ns0:table><ns0:row><ns0:cell>highest classification success belongs to the modified DenseNet121 model. It achieved approximately</ns0:cell></ns0:row><ns0:row><ns0:cell>92.3% test accuracy, which was 2.0% more accurate than the standard DenseNet121. It is clear that the</ns0:cell></ns0:row><ns0:row><ns0:cell>modified DenseNet121 model has the highest specificity and sensitivity metrics as in the classification</ns0:cell></ns0:row><ns0:row><ns0:cell>success. This is due to the fine-tuned modified DenseNet121 architecture's custom design, which aids</ns0:cell></ns0:row></ns0:table><ns0:note>in extracting discriminating features from the input colon histopathological images and can distinguish between different classes in this domain. The second highest accuracy among the four modified pretrained models is the modified MobileNetV2. It achieved 92.19% test accuracy, which was comparable to the improved DenseNet121. In more details, the average accuracy rate difference between the modified MobileNetV2 and the standard MobileNetV2 is more than 2%.14/26PeerJ Comput. Sci. reviewing PDF | (CS-2022:02:70538:1:2:CHECK 14 May 2022)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Summary of the major classification studies on colon cancer.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Pretrained Models</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Sensitivity</ns0:cell><ns0:cell cols='2'>Specificity</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Standard DenseNet121</ns0:cell><ns0:cell>90.41±3.1</ns0:cell><ns0:cell>91.25±2.9</ns0:cell><ns0:cell>100±0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Standard MobileNetV2</ns0:cell><ns0:cell>90.27±2.9</ns0:cell><ns0:cell>88.25±1.9</ns0:cell><ns0:cell cols='2'>99.23±2.3</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Standard InceptionV3</ns0:cell><ns0:cell>87.12±2.0</ns0:cell><ns0:cell>92.75±2.0</ns0:cell><ns0:cell>100±0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Standard VGG16</ns0:cell><ns0:cell>62.19±7.0</ns0:cell><ns0:cell>63.21±7.3</ns0:cell><ns0:cell>100±9.9</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Modified DenseNet121</ns0:cell><ns0:cell>92.32±2.8</ns0:cell><ns0:cell>92.99±2.8</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Modified MobileNetV2</ns0:cell><ns0:cell>92.19±3.8</ns0:cell><ns0:cell>90.75±2.0</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Modified InceptionV3</ns0:cell><ns0:cell>89.86±2.2</ns0:cell><ns0:cell>95.0±1.5</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Modified VGG16</ns0:cell><ns0:cell>72.73±3.9</ns0:cell><ns0:cell>73.0±3.6</ns0:cell><ns0:cell cols='2'>87.43±12.4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Proposed E-CNN (product)</ns0:cell><ns0:cell>95.20±1.64</ns0:cell><ns0:cell>95.62±1.50</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Proposed E-CNN (Majority voting)</ns0:cell><ns0:cell>94.52±1.73</ns0:cell><ns0:cell>95.0±1.58</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Authors in</ns0:cell><ns0:cell>Dataset used</ns0:cell><ns0:cell cols='2'>CNN architecture</ns0:cell><ns0:cell /><ns0:cell>Accuracy</ns0:cell><ns0:cell>T-test/p-value</ns0:cell></ns0:row><ns0:row><ns0:cell>(Stoean, 2020)</ns0:cell><ns0:cell>colorectal in (Stoean et al., 2016)</ns0:cell><ns0:cell cols='2'>CNN model from scratch</ns0:cell><ns0:cell /><ns0:cell>92%</ns0:cell><ns0:cell>P<0.0001</ns0:cell></ns0:row><ns0:row><ns0:cell>(Popa, 2021)</ns0:cell><ns0:cell>colorectal in (Stoean et al., 2016)</ns0:cell><ns0:cell>AlexNet</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>89.53%</ns0:cell><ns0:cell>P<0.0001</ns0:cell></ns0:row><ns0:row><ns0:cell>(Popa, 2021)</ns0:cell><ns0:cell>colorectal in (Stoean et al., 2016)</ns0:cell><ns0:cell>GoogleNet</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>85.62%</ns0:cell><ns0:cell>P<0.0001</ns0:cell></ns0:row><ns0:row><ns0:cell>(Postavaru et al., 2017)</ns0:cell><ns0:cell>colorectal in (Stoean et al., 2016)</ns0:cell><ns0:cell cols='2'>CNN model from scratch</ns0:cell><ns0:cell /><ns0:cell>91%</ns0:cell><ns0:cell>P<0.0001</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed E-CNN</ns0:cell><ns0:cell>colorectal in (Stoean et al., 2016)</ns0:cell><ns0:cell cols='3'>Modified TL models with ensemble</ns0:cell><ns0:cell>95.20%</ns0:cell></ns0:row><ns0:row><ns0:cell>(product rule)</ns0:cell><ns0:cell /><ns0:cell cols='2'>learning ( using product rule)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Proposed E-CNN</ns0:cell><ns0:cell>colorectal in (Stoean et al., 2016)</ns0:cell><ns0:cell 
cols='3'>Modified TL models with ensemble</ns0:cell><ns0:cell>94.52%</ns0:cell></ns0:row><ns0:row><ns0:cell>(Majority voting)</ns0:cell><ns0:cell /><ns0:cell cols='3'>learning ( using majority voting)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_15'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Evaluation results for the proposed E-CNN, its individuals (modified TL models) when the number of epochs = 30, and the standard TL models on Kather's colon histopathological images dataset<ns0:ref type='bibr' target='#b24'>(Kather et al., 2016)</ns0:ref>, based on the average accuracy, sensitivity, specificity, and average standard deviation (STD) over 10 runs; best results in bold.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Pretrained Models</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Sensitivity</ns0:cell><ns0:cell>Specificity</ns0:cell></ns0:row><ns0:row><ns0:cell>Modified DenseNet121</ns0:cell><ns0:cell>96.8±2.7</ns0:cell><ns0:cell>97.0±2.4</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Modified MobileNetV2</ns0:cell><ns0:cell>94.48±2.6</ns0:cell><ns0:cell>95.5±1.9</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Modified InceptionV3</ns0:cell><ns0:cell>94.52±1.7</ns0:cell><ns0:cell>95.1±1.2</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Modified VGG16</ns0:cell><ns0:cell>79.4±1.9</ns0:cell><ns0:cell>79.9±3.6</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed E-CNN (product)</ns0:cell><ns0:cell>97.2±1.27</ns0:cell><ns0:cell>97.5±1.8</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed E-CNN (Majority voting)</ns0:cell><ns0:cell>95.89±1.3</ns0:cell><ns0:cell>96.2±1.57</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Response to reviewer’s comments
Ensemble of adapted CNN methods for classifying colon histopathological
images
Dear Editors,
We are glad that you have offered us the opportunity to revise our work for the PeerJ Computer Science
journal. We would also like to express our gratitude to you, the editorial team, and
reviewers who have provided valuable comments on this paper to improve its quality. We
really appreciate your selection of such highly qualified reviewers for this research. Their
efforts and visions have helped us greatly to fulfill this revision. We have addressed all of
the comments in this revised version of the paper and we list all the changes item-by-item
as outlined below.
Thanks in advance
Sincerely yours,
Corresponding author: Dheeb Albashish
Reviewer #1:
We were happy to correct the errors you explicitly pointed out. Indeed, we
truly appreciate your valuable feedback and the time you spent in reviewing, commenting
and correcting our work. We also admire your vigilance in finding and suggesting the
correction for the errors. We have addressed all of your comments and errors as follows
and we hope you find them satisfactory (The corrections are highlighted in blue).
General note: We found your comments extremely helpful and have revised accordingly. The
spacing, figures, and citations have been corrected in the new version of the manuscript.
COMMENT R1-C1: Your work is good, but there are many typos. The English of the article
should be checked by an expert. I've marked some of the misspellings in the PDF. I added
a comment. You must comply with the warnings.
RESPONSE #1: Done, Thank you. We regret there were problems with the English. The
paper has been carefully revised by a professional language expert to improve the grammar
and readability.
COMMENT R1-C2: The abstract article is a little weak compared to its content. It should
summarize the article in full.
RESPONSE #2: Done: We found your comments extremely helpful and have revised accordingly.
We have improved the abstract to highlight and explained our contributions clearly.
Please refer to the abstract section, we added the following paragraph:
Starting from line # 17-28
This study proposes two ensemble learning techniques: (E-CNN (product rule) and
E-CNN (majority voting)). These techniques are based on the adaptation of the pretrained
CNN models to classify colon cancer histopathology images into various classes. In these
ensembles, the individuals are, initially, constructed by adapting pretrained DenseNet121,
MobileNetV2, InceptionV3, and VGG16 models. The adaptation of these models is based
on a block-wise fine-tuning policy, in which a set of dense and dropout layers of these
pretrained models is joined to explore the variation in the histology images. Then, the
models’ decisions are fused via product rule and majority voting aggregation methods. The
proposed model is validated against the standard pretrained models and the most recent
works on two publicly available benchmark colon histopathological image datasets: Stoean
(357 images) and Kather colorectal histology (5000 images). The results were 97.20% and
91.28% accurate, respectively. The achieved results outperformed the state-of-the-art
studies and confirmed that the proposed E-CNNs could be extended to be used in various
medical image applications.
COMMENT R1-C3: Numerical results should be included (in the abstract).
RESPONSE #3: Done. Thank you for this excellent observation.
We added the numerical results in the abstract.
Please refer to the abstract section
#line numbers 25-26
COMMENT R1-C4: citations to be added to this section.
Bicakci, M., Ayyildiz, O., Aydin, Z., Basturk, A., Karacavus, S., & Yilmaz, B. (2020).
Metabolic imaging based sub-classification of lung cancer. IEEE Access, 8, 218470-218476.
RESPONSE #4: Done: Thank you for this excellent observation.
Please refer to the introduction section:
# Line number 52
In recent decades, various computer aided diagnosis systems (CADs) have been introduced
to tackle the classification problems in cancer digital pathology diagnosis to achieve
reproducible and rapid results (Bicakci et al., 2020).
COMMENT R1-C5: Where are the section numbers?
RESPONSE #5: Done, thank you for this comment.
Please refer to the introduction section.
#line numbers 136-139
The following paragraph was added at the end of the introduction to show the structure of the
manuscript.
The remainder of this research is organized as follows. Literature review section goes over
the related works. Our proposed methodology is presented in detail in the methodology
section. The experiments results and discussion section analyzes and discusses the
experimental results. The Section of conclusion brings this study to a close by outlining
some research trends and viewpoints.
COMMENT R1-C6: convert to a single citation
RESPONSE #6: Done, thank you for this comment. The citation has been converted.
In the Related works section,
# Line numbers 190-203
The following paragraph were added:
The conventional machine learning techniques have been utilized for the colon
histopathology images dataset to achieve accepted results. For example, the 4-class colon
cancer classification task on the dataset in (Stoean et al., 2016) was utilized in (Boruz and
Stoean, 2018; Khadilkar, 2021) to discriminate between various cancer types. In the former
case (Boruz and Stoean, 2018), the authors extracted contour low-level image features
from grayscale transformed images. Then, these features were used to train the SVM
classifier. Despite its simplicity, the study displayed a comparable performance to some
computationally expensive approaches. The authors reported accuracy averages between
84.1% and 92.6% for the different classes. However, transforming the input images to
grayscale leads to losing some information and degrades the classification results. Besides,
using thresholding needs fine-tuning, which is a complex task. In latter case (Khadilkar,
2021), the authors extracted morphological features from the colon dataset. Mainly, they
extracted harris corner and Gabor wavelet features. These features were then used to feed
the neural network classifier. The authors utilized their framework to discriminate between
benign and malignant cases. However, they ignored the multiclass classification task, which
is more complex task in this domain
COMMENT R1-C7: Reference should be added here. Citation suggestion:
Taspinar, Y. S., Cinar, I., & Koklu, M. (2021). Classification by a stacking model using CNN
features for COVID-19 infection diagnosis. Journal of X-ray science and technology,
(Preprint), 1-16.
RESPONSE #7: Thank you for this comment. We added the suggested reference.
Please refer to the Methodology section line number 261:
In this way, the convolutional layers act as feature extractions. Then these features are
passed to a specific classifier (e.g., KNN, SVM) (Taspinar et al., 2021).
COMMENT R1-C8: Citation suggestions should be added. It will strengthen your thesis as
it is related to your work. Citation suggestion:
Koklu, M., Cinar, I., & Taspinar, Y. S. (2022). CNN-based bi-directional and directional
long-short term memory network for determination of face mask. Biomedical Signal
Processing and Control, 71, 103216.
RESPONSE #8: Done, thank you for this comment.
We added the requested reference at the end of the Pretrained VGG16 subsection, line
number 329:
VGG16 uses around 134 million parameters, which raises the complexity of
VGG16 relating to other pretrained models (Tripathi and Singh, 2020; Koklu et al., 2022)
COMMENT R1-C9:
It was written as RELU above. It should be the same throughout
the article.
RESPONSE #9: Done. Thank you for this comment.
COMMENT R1-C10: The Experimental Result section of the article is a bit weak compared
to the other sections. This section is the heart of the article. This is the part that belongs to
you. Therefore, this section should be developed.
RESPONSE #10: Done, thank you for this comment.
We developed the results section to highlight the obtained results.
Please refer to the Results subsection, we added the following paragraphs, line numbers
487-526:
Table 4 compares the results obtained by the modified pretrained models, the baseline
(standard) pretrained model, and the proposed (E-CNN). Table 4, indicates that the results
obtained from all the modified models successfully outperformed the standard pretrained
models on the first dataset. The highest classification success belongs to the modified
DenseNet121 model. It achieved approximately 92.3% test accuracy, which was 2.0%
more accurate than the standard DenseNet121. It is clear that the modified DenseNet121
model has the highest specificity and sensitivity metrics as in the classification success. This
is due to the fine-tuned modified DenseNet121 architecture’s custom design, which aids in
extracting discriminating features from the input colon histopathological images and can
distinguish between different classes in this domain. The second highest accuracy among
the four modified pretrained models is the modified MobileNetV2. It achieved 92.19%
test accuracy, which was comparable to the improved DenseNet121. In more details, the
average accuracy rate difference between the modified MobileNetV2 and the standard
MobileNetV2 is more than 2%.
However, among the four proposed individual pretrained models, the modified VGG16
is the least accurate, it rated about 79% for the multiclass classification task. This could be
the explanation for VGG16’s limited number of layers (i.e. 16 layers). Compared to the
standard VGG16, the average accuracy rate difference between the modified VGG16 and
the standard VGG16 was more than 10%, which was big and statistically significant. This
astounding level of performance of the modified models could be attributed to the ability
of the adaptation layers to find the most abstract features, which aid the FCC and softmax
classifier in discriminating between various grades in colon histopathological images. As a
result, it reduces the problem of inter-class classification. Moreover, the proposed modified
pretrained models outperformed the standard models, boosting the decisions of these
models and enabling them to achieve a better generalization ability than a single pretrained
model (Cao et al., 2020). In this study, two ensemble learning models are utilized: E-CNN
(product rule) and E-CNN (majority voting), to merge the decisions of the single models.
The former is based on merging the probabilities of the individual modified models. While
the latter is based on combining the output predictions of the individual, Figure 7 confirms
the confusion matrix obtained as a result of the tested samples (20% of the dataset) for
one run of the classification performed through the proposed E-CNN (product rule). The
empirical results of the proposed ECNN (majority voting) and E-CNN (product rule)
achieved accuracy rates of 94.5% and 95.2%, respectively. These accuracy values were
higher compared to the individual models. For example, the E-CNN (product rule) result
showed 3.2% increase compared to the modified DenseNet121 model. This result reveals
the significance of the product rule in the proposed E-CNN for colon image classification
because it is based on an independent event (Albashish et al., 2016). To show the adequacy
of the proposed E-CNN, sensitivity was computed. Table 4 confirms that the E-CNN has
higher sensitivity values than all the individual models. It is worth noting that the sensitivity
performance level matches the accuracy values, thereby emphasizing the consistency of the
E-CNN results. E-CNN (product rule) was able to yield a better sensitivity value (95.6%).
Among all the proposed transfer learning models, InceptionV3 delivered the overall
maximum sensitivity performance. Besides, the specificity measure shows that the E-CNN
and its individuals are able to detect negative samples which are correctly classified for
each class.
COMMENT R1-C11: The resolution of the figures should be increased. The biggest
shortcoming is that there is no confusion matrix. There are calculations, but there are no
confusion matrices used to make these calculations. Add at least the confusion matrix of
the model you propose (the most successful model, E-CNN). Include the ROC curve if
possible. These will enable the reader to better understand and evaluate the article.
RESPONSE #11: Done, Thank you for this excellent comments.
The figures are redrawn to increase the resolution.
The confusion matrix for the E-CNN (product) has been added. Please refer to line numbers
505 and 621.
Please refer to Figure 7 and Figure 12 (“Confusion matrix of the E-CNN (product) on
the testing dataset”) in the Results subsection.
COMMENT R1-C12: As stated in the previous comments, more detailed analysis is
required to prove the validity of the findings.
RESPONSE #12: Done. Thank you for the excellent comments we added more detailed
analysis.
Please refer to the results subsection.
Reviewer #2:
We truly appreciate your valuable feedback and the time you spent in reviewing and
commenting on our work. We also like your advice in correcting the introduction. We
have addressed all of your comments as follows and we hope you find them satisfactory
(the corrections are highlighted in green).
COMMENT R2-C1: In the literature, there is a term called 'ensemble learning'. If your
study uses a similar method to this one, ensemble learning should be mentioned and
cited.
RESPONSE #1: Done, thank you for the comment. We added the following paragraphs in
the literature review section, which describe the ensemble learning and using ensemble
learning with colon histopathlogical image classification
Please refer to “LITERATURE REVIEW” section, line numbers 204- 233, we added the
following:
Most of the above studies utilized a single deep CNN (aka weak learner) model to address
various colon histopathology images classification tasks (binary or multiclass). Despite their
extensive use, a single CNN model has the restricted power to capture discriminative
features from colon histopathology images, resulting in unsatisfactory classification
accuracy (Yang et al., 2019). Thus, merging a group of weak learners forms an ensemble
learning model, which is likely to be a strong learner and moderate the shortcomings of
the weak learners (Qasem et al., 2022).
Ensemble learning of deep pretrained models has been designed to fuse the decisions of
different weak learners (individuals) to increase classification performance (Xue et al.,
2020; Zhou et al., 2021). A limited number of studies applied ensemble learning with deep
CNN models on colon histopathological image classification tasks (Popa, 2021), (Lichtblau
and Stoean, 2019), (Rachapudi and Lavanya Devi, 2021).
The authors in (Popa, 2021) proposed a new framework for the colon multiclass
classification task. They employed CNN pretrained AlexNet and GoogleNet models
followed by softmax activation layers to handle the 4-class classification task. The bestreported accuracies on (Stoean et al., 2016) dataset ranged between 85% and 89%.
However, the standard deviation of these results was around 4%. This means the results
were not stable. AlexNet was also used in (Lichtblau and Stoean, 2019) as a feature
extractor for the colon dataset. Then, an ensemble of five classifiers was built. The obtained
results for this ensemble achieved around 87% accuracy.
In (Ohata et al., 2021), the authors use CNN to extract features of colorectal histological
images. They employed various pretrained models, i.e., VGG16 and Inception, to extract
deep features from the input images. Then, they employed ensemble learning by utilizing
five classifiers (SVM, Bayes, KNN, MLP, and Random Forest) to classify the input images.
They reported 92.083% accuracy on the colon histological images dataset in (Kather et
al., 2016). A research study in (Rachapudi and Lavanya Devi, 2021) proposed light
weighted CNN architecture. RGB-colored images of colorectal cancer histology dataset
(Kather et al., 2016) belonging to eight different classes were used to train this CNN model.
It consists of 16 convolutional layers, five dropout layers, five max-pooling layers, and one
FCC layer. This architecture exhibited high performance in term of incorrect classification
compared to existing CNN models. Using ensemble learning model achieved around 77%
accuracy (error of 22%).
COMMENT R2-C2: Line 159, 'EA was first conducted for tuning the CNN hyperparameters for the convolutional layers.' It is not correct.
There are many Studies conducted before this study to optimise CNN parameters using
GAs. Such as :
1) 'J. Yoo, H. Yoon, H. Kim, H. Yoon and S. Han, 'Optimization of Hyper-parameter for
CNN Model using Genetic Algorithm,' 2019 1st International Conference on Electrical,
Control and Instrumentation Engineering (ICECIE), 2019, pp. 1-6, doi:
10.1109/ICECIE47765.2019.8974762.',
2) 'Xie, Lingxi, and Alan Yuille. 'Genetic cnn.' Proceedings of the IEEE international
conference on computer vision. 2017.',
3)'Sun, Yanan, et al. 'Automatically designing CNN architectures using the genetic
algorithm for image classification.' IEEE transactions on cybernetics 50.9 (2020): 38403854.'.
RESPONSE #2: Done. Thank you for this comment, we adapted this paragraph to show
that there were two usages of the EA in the method of (Stoean, 2020). First, for CNN
hyper-parameters, and the second for SVM parameters.
Please refer to line numbers 162- 166, in literature review section.
The paragraph became:
The author in (Stoean, 2020) extended the previous study (Postavaru et al., 2017) by
applying an evolutionary algorithm (EA) in the CNN architecture. This is to automate two
tasks: first, EA was conducted for tuning the CNN hyper-parameters of the convolutional
layers. Stoean determined the number of kernels in CNN and their size. Second, the EA
was used to support SVM in parameters ranking to determine the variable importance
within the hyper-parameterization of CNN.
COMMENT R2-C3: Line 166: What does EV mean?
RESPONSE #3: Done, thank you for this comment. There was a typo. We
modified it to EA.
Thank you again.
COMMENT R2-C4: Line 325: In the Methodology Section it was written that 'This
study introduces the ensemble for four CNN pretrained models' how the algorithm
works in an ensemble manner was not explained.
RESPONSE #4: Done. We found your comments extremely helpful and have revised accordingly.
The methodology section was modified to show how the modified pretrained models work in the ensemble.
We added two subsections: “The proposed Deep CNN Ensemble Based on softmax”
and “Ensemble Fusing Methods for the Proposed E-CNN”.
Please refer to the following paragraphs in the Methodology section, line numbers 331-396:
The proposed deep ensemble CNNs (E-CNNs) architecture is based on two phases (base
classifiers and fusion techniques). In the former phase, four modified models have been
utilized: the DenseNet121, MobileNetV2, InceptionV3, and VGG16 pretrained CNN classifiers.
The latter phase focuses on combining the decisions of the base classifiers from the first
phase. Two types of fusion techniques have been employed in the proposed E-CNNs:
majority voting and the product rule. On the one hand, majority voting is based on the
predicted labels of the base classifiers. On the other hand, the product rule is based on the
probabilities of the base classifiers (i.e., the pretrained models). The details of the proposed E-CNNs are as follows:
The proposed Deep CNN Ensemble Based on softmax
After adapting the four pretrained models (DenseNet121, MobileNetV2, InceptionV3, and
VGG16), they serve as base classifiers in the proposed E-CNN for the automated
classification of colon H&E histopathological images. The standard pretrained
models extract various features from the training images to discriminate between different
types of cancer (multiple classes) in the colon images dataset. However, each pretrained
model is based on a set of convolution layers and filter sizes to extract different features
from the input images. As a result, no single pretrained model is general enough to extract
all the distinguishing features from the input training images (Ghosh et al., 2021).
Besides, using the initial weights of the pretrained models affects the classification performance
because the CNN pretrained models are nonlinear designs. These pretrained models learn
complicated associations from training data with the assistance of backpropagation and
stochastic optimization (Ahmad et al., 2021). Therefore, this study introduces a block-wise
fine-tuning technique to adapt the standard CNN models to handle the heterogeneous
nature of colorectal histology image classification tasks.
Figure 4 illustrates the main steps of the design of the block-wise fine-tuning technique.
First, the benchmark colon images are loaded. Then, some preprocessing tasks on the
training and testing images are performed to prepare them for the pretrained models (e.g.,
resizing them to 224x224x3). The images are then rescaled by 1/255 as in the previous
related studies (Szegedy et al., 2016). After splitting the dataset into training and testing,
the four independent pretrained models: (DenseNet (Huang et al., 2017), MobileNet
(Sandler et al., 2018), VGG16 (Simonyan and Zisserman, 2014), and InceptionV3 (Szegedy
et al., 2016)) are loaded without changing their weights. Then, the FCC and softmax layers
are omitted from the loaded pretrained CNN models. These layers were originally
designed to output 1000 classes from the ImageNet dataset. Two dense layers with a
varying number of hidden neurons are then added to strengthen the vital data-particular
feature learning from each individual pretrained model. These dense layers are followed
by the ReLU nonlinear activation function, which allows us to learn complex relationships
among the data (Ahmad et al., 2021; Garbin et al., 2020). Next, a 0.3 dropout layer is
added to address the long training time and overfitting issues in classification tasks (Deniz
et al., 2018; Boumaraf et al., 2021). At the end of each pretrained model, the last FCC
with the softmax layer is added. The FCC is simply a feed-forward neural network, which
is fed by flattened input from the last pooling layer of the pretrained model. In this study,
based on the number of classes in this work, the number of neurons in FCC is set to four
instead of the 1000 classes of ImageNet. The softmax layer (activation layer) is inserted
on top of each model to train the obtained features and produce the classification output
based on max probability. Algorithm 1 shows the main steps of the block-wise fine-tuning
technique for each individual model in the proposed E-CNNs.
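To make the block-wise fine-tuning concrete, a minimal Keras sketch is given below; the widths of the two added dense layers (256 and 128) and the optimizer settings are assumptions for illustration, while the 0.3 dropout, the four-class FCC, and the softmax head follow the description above:

import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

# Load the pretrained backbone without its original ImageNet FCC/softmax head.
base = DenseNet121(weights='imagenet', include_top=False,
                   input_shape=(224, 224, 3), pooling='avg')
base.trainable = False  # freeze the pretrained blocks (simplification of the block-wise policy)

# Add two dense layers with ReLU, a 0.3 dropout, and a 4-class softmax head.
model = models.Sequential([
    base,
    layers.Dense(256, activation='relu'),   # assumed width
    layers.Dense(128, activation='relu'),   # assumed width
    layers.Dropout(0.3),
    layers.Dense(4, activation='softmax'),  # four colon classes
])

model.compile(optimizer=tf.keras.optimizers.SGD(momentum=0.9),
              loss='categorical_crossentropy', metrics=['accuracy'])

The same adaptation can be repeated for MobileNetV2, InceptionV3, and VGG16 to obtain the four base classifiers.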
Ensemble Fusing Methods for the Proposed E-CNN
This study introduces E-CNNs based on the four modified CNN pretrained models
(DenseNet (Huang et al., 2017), MobileNet (Sandler et al., 2018), VGG16 (Simonyan and
Zisserman, 2014), and InceptionV3 (Szegedy et al., 2016)) for the automated classification
of colon H&E histopathological images. The four adapted models are trained on the
training dataset and then evaluated on the test dataset. The output probabilities of the four
pretrained models are concatenated to produce a 16-D feature vector (i.e., each individual
model with its softmax produces four probabilities, based on the number of classes in the colon images).
Then, various combination methods (majority voting and product rule) are employed to
produce a final decision for the test image. Figure 5 illustrates the proposed E-CNNs with
the merging techniques (E-CNN (product rule) and E-CNN (majority voting)). In the
majority voting technique, each base classifier assigns a class label output (i.e., a predicted
label) to the provided test sample. The ensemble counts the votes of all the class labels from the base
classifiers. Then, the class that obtains the maximum number of votes is nominated as the
final decision of the E-CNN (majority voting), as described in Eq. (1).
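For clarity, a minimal NumPy sketch of the two fusion rules is given below; the probability values are illustrative only:

import numpy as np

# Softmax outputs of the four base classifiers for one test image (four classes
# each, i.e., the 16-D vector mentioned above); illustrative values only.
probs = np.array([[0.70, 0.10, 0.10, 0.10],
                  [0.40, 0.35, 0.15, 0.10],
                  [0.20, 0.55, 0.15, 0.10],
                  [0.60, 0.20, 0.10, 0.10]])

# Product rule (Eq. 2): multiply the class probabilities across classifiers
# and pick the class with the maximum product.
product_decision = np.argmax(np.prod(probs, axis=0))

# Majority voting (Eq. 1): each classifier votes for its most probable class,
# and the class with the most votes wins.
votes = np.argmax(probs, axis=1)
majority_decision = np.bincount(votes, minlength=probs.shape[1]).argmax()

print(product_decision, majority_decision)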
COMMENT R2-C5: Line 386-387: how were the hyperparameters in Table 3 decided?
RESPONSE #5: Thank you for the comment. For setting the parameters of the proposed
method for better performance: first, we followed the settings of some closely related works,
as in (Ghosh et al., 2021), (Stoean, 2020), (Kather et al., 2016), and (Postavaru et al.,
2017). For example, Kather's work (Kather et al., 2016) utilized a number of
epochs of 30. Second, several experiments were conducted on the colon dataset to select
the learning rate.
COMMENT R2-C6: Providing the results in Tables 4- and 5 in a single table should have
been better for easy comparison.
RESPONSE #6: Done, thank you for this comment. We combined Tables 4 and 5 into a
single table, which becomes Table 4.
Please refer to Table 4, page 15.
COMMENT R2-C7: For Figure 7: (1) Because the learning performance (accuracy and
loss) provided in Figure 7 has not become stable throughout 10 epochs, the networks
should be trained longer (until they have reached a steady-state) to evaluate the
performance of networks fairly.
RESPONSE #7: Done. Thank you for this excellent comment.
We re-trained the modified pretrained models with 30 epochs, as in the related work of
Kather (Kather et al., 2016).
Please refer to the results subsection. More details are given in Figures 10 and 11 and Table 6.
In addition, we added the following paragraphs to describe the new results, line numbers
568-586:
To verify that the modified pretrained models are not overfitted, we re-trained them
for 30 epochs as in Kather's work. Figures 10 and 11 present the training
and validation charts for all the proposed models after being re-trained. According to
Figure 10 (validation accuracy), the validation curves of the modified models
increased dramatically after epoch number ten. In fact, they outperformed the training
curves. This indicates that the modified models were trained well and avoided the
overfitting issue. Figure 11 provides the loss curves for the modified models with 30 epochs.
It is clear that there is a reduction in the validation loss compared to the training loss, which
is noticeable in the loss curves of the individual members of the proposed E-CNN. The
results of the E-CNN and its individual base learners with 30 epochs are illustrated
in Table 6. As depicted in this table, the results show that the modified models
outperformed the same models when the number of epochs was set to 10. For example,
the modified DenseNet121, MobileNetV2, InceptionV3, and VGG16 with 30 epochs
increased the accuracy by 6%, 2%, 4%, and 7%, respectively. This may indicate greater
success when increasing the number of epochs and allowing the deep learning models to train
long enough. Furthermore, the increased performance of the base learners affects the ensemble
models. Thus, the results of E-CNN (product rule) and E-CNN (majority voting)
increased by around 2% and 1%, respectively, compared to the same ensembles when
the number of epochs was ten. These results indicate that the individual learners that we
have modified and their ensemble perform robustly better and are not overfitted when
increasing the number of epochs.
COMMENT R2-C8: (2) There is no information about how many runs (training with
different initial weights) have been executed to plot the loss and accuracy. It looks like
they are the results of a single or few runs. If so, it would not be a fair comparison as the
performance might depend on a chance factor.
RESPONSE #8: Thank you for this comment. The number of runs in this study is set to 10.
We state this in the Evaluation Criteria subsection of the experimental setup.
Please refer to the Evaluation Criteria subsection, line numbers 445-447, where the following
statement was added:
The obtained results have been evaluated using the average accuracy, average sensitivity,
average specificity, and standard deviation over ten runs.
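As an illustration of how these averages can be computed (the per-run accuracies and the confusion matrix below are placeholders, and macro-averaging the per-class sensitivity and specificity is an assumption about the exact formula used):

import numpy as np

def sensitivity_specificity(cm):
    # Macro-averaged sensitivity and specificity from a multiclass confusion matrix.
    sens, spec = [], []
    total = cm.sum()
    for k in range(cm.shape[0]):
        tp = cm[k, k]
        fn = cm[k, :].sum() - tp
        fp = cm[:, k].sum() - tp
        tn = total - tp - fn - fp
        sens.append(tp / (tp + fn))
        spec.append(tn / (tn + fp))
    return np.mean(sens), np.mean(spec)

# Placeholder test accuracies of ten independent runs.
run_accuracies = np.array([0.97, 0.96, 0.98, 0.97, 0.97,
                           0.96, 0.98, 0.97, 0.97, 0.98])
print('avg acc: %.4f, std: %.4f' % (run_accuracies.mean(), run_accuracies.std()))

# Placeholder confusion matrix of one run on the 4-class colon task.
cm = np.array([[12, 1, 0, 0],
               [1, 18, 1, 0],
               [0, 1, 19, 0],
               [0, 0, 1, 19]])
print('macro sensitivity/specificity:', sensitivity_specificity(cm))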
COMMENT R2-C9: (3) The figures, especially the legends and labels, are not clear
enough.
RESPONSE #9: Done; thank you for the comment. The figures are now presented in a clearer
format. Please refer to Figures 7 and 8 in the results subsection.
COMMENT R2-C10: It would be good to compare the plots in Figures 7- and 8 if
those plots were in the same figure. The training and testing results might be in a
different figure but the results compared which are the same kind should be in the same
figure.
RESPONSE #10: Done. Thank you for this excellent comment. We collected the accuracy
plots in one figure (now Figure 8), while the loss plots were collected in Figure 9.
Please refer to Figures 8 and 9, which show the accuracy and the loss function, respectively.
Figure 8. The accuracy learning curves of training and testing derived from the four
modified CNN base learners: (a) Modified DenseNet121 (b) Modified InceptionV3 (c)
Modified MobileNetV2 (d) Modified VGG16 when the number of epochs is ten on the
colon histopathological image benchmark Stoean’s dataset used in this study.
COMMENT R2-C11: Line 546: It should be better to write how many samples be used to
obtain the t- and p- values, and also at what point the statistical parameters were measured
(the accuracy values obtained at the end of training or the accuracy values which are the
maximum throughout the training)? It could be good to show the p-values at each epoch
along with the accuracy plots.
RESPONSE #11: Thank you for your comment. We added the following paragraph in the
discussion section, line numbers 632-638, to report the three key values required for
the T-test.
Furthermore, the T-test is used to compare the proposed E-CNN (product) to the previous
related studies on the same dataset. This test is performed to show that the improvement
of the proposed E-CNN (product) over the state-of-the-art is statistically significant.
The T-test is carried out based on the average accuracy and the standard deviation on the
test samples (20% of the dataset), which are obtained by the E-CNN (product) over ten
independent runs.
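A two-sample (Welch's) t-test computed from such summary statistics can be sketched as follows; the baseline numbers are placeholders, not values taken from any particular prior study:

from scipy.stats import ttest_ind_from_stats

# Summary statistics over ten independent runs (placeholder values).
mean_ecnn, std_ecnn, n_ecnn = 0.972, 0.005, 10   # proposed E-CNN (product)
mean_base, std_base, n_base = 0.920, 0.040, 10   # a comparison baseline

t_stat, p_value = ttest_ind_from_stats(mean_ecnn, std_ecnn, n_ecnn,
                                        mean_base, std_base, n_base,
                                        equal_var=False)  # Welch's t-test
print(t_stat, p_value)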
COMMENT R2-C12: There are also a few small typos. For Ex. (1) There are no spaces
between some words, (2) the author defined an acronym for the term fully connected
layers as FCC but in the rest of the manuscript, he used the long form of the term etc.
RESPONSE #12: Done. Thank you for your comment. First, we went through the manuscript and
added the missing spaces between words. Second, we now use the abbreviation FCC consistently throughout the manuscript.
Reviewer #3:
We really appreciate your valuable feedback and the time you spent in reviewing and
commenting on our work. We have processed all of your comments as follows and we
hope you find them satisfactory (the corrections are highlighted in orange).
COMMENT R3-C1: The overview of the related papers should be expanded. It's a very
popular research area. I suggest the authors evaluate these papers related to Transfer
Learning and Ensemble Learning Techniques:
https://www.nature.com/articles/s41598-021-93783-8
https://ieeexplore.ieee.org/document/9107128
RESPONSE #1: Done; thank you for the comment. We added these related works to show
the importance of CNN and ensemble learning for medical histopathological images.
Please refer to section “LITERATURE REVIEW”, line numbers 143 and 213, where we
added the references in the following paragraphs:
Deep learning pretrained models have made incredible progress in various kinds of
medical image processing, specifically histopathological images, as they can automatically
extract abstract and complex features from the input images (Manna et al., 2021).
And the following:
Ensemble learning of deep pretrained models has been designed to fuse the decisions of
different weak learners (individuals) to increase classification performance (Xue et al., 2020;
Zhou et al., 2021)
COMMENT R3-C2:. The presentation of Figures 7, 8 should be improved.
RESPONSE #2: Done, thank you for this comment.
Figures 7 and 8 were improved; please refer to pages 16 and 17.
COMMENT R3-C3: What image preprocessing technique do the author use?
RESPONSE #3: Done, thank you for this comment. We highlighted the preprocessing tasks in
the methodology section, line numbers 354-359.
Please refer to subsection “The proposed modified Deep pretrained models”, where we
added the following paragraph:
Then, some preprocessing tasks on the training and testing images are performed to
prepare them for the pretrained models (e.g., resizing them to 224x224x3). The images
are then rescaled by 1/255 as in previous related studies (Szegedy et al., 2016).
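A minimal Keras sketch of this preprocessing step is shown below; the directory path is a placeholder:

from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale pixel values by 1/255 and resize images to 224x224 RGB while loading.
datagen = ImageDataGenerator(rescale=1.0 / 255)
train_gen = datagen.flow_from_directory(
    'path/to/colon_train',          # placeholder directory
    target_size=(224, 224),
    color_mode='rgb',
    class_mode='categorical',
    batch_size=32,
)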
COMMENT R3-C4: - The authors should present confusion matrices for each dataset.
RESPONSE #4: Done, thank you for this comment. We added the confusion matrix for
the proposed E-CNN (product rule).
Please refer to Figures 7 and 12 in the results section.
“Figure 7. Confusion matrix of the E-CNN (product rule) on the Stoean testing dataset
when number of epochs =10.”
“Figure 12. Confusion matrix of the E-CNN (product rule) on the Stoean testing dataset
when number of epochs =30.”
COMMENT R3-C5: - The authors used only one dataset for testing and validation. It's not
enough.
RESPONSE #5: Done, we tested our proposed E-CNN on the (Kather et al., 2016) dataset,
which includes 5000 colon histology images with eight different classes. Thus, it is a more
challenging task.
Please refer to the Dataset subsection, line numbers 420-424, and to the results subsection,
where we analyzed the results of the second dataset, line numbers 588-606, especially Table 7.
The following paragraphs were added in the results subsection:
Moreover, to validate the proposed modified models and their ensembles, we applied
these models to the second colon histopathological dataset called the Kather dataset
(Kather et al., 2016). This dataset contains 5,000 histological images of human colon cancer
from eight distinct kinds of tissue. Table 7 gives the accuracy, sensitivity, and specificity of
the proposed individual pretrained models, E-CNN (product rule), and E-CNN (majority
voting). Besides, the proposed modified models and their ensembles are compared to
similar experiments previously used to assess the classification of the Kather dataset. Based
on the results, the proposed modified models are able to separate eight different classes in
the histopathological images. Both modified InceptionV3 and DenseNet121 achieved
testing accuracy of roughly 89%, with a standard deviation of less than 0.5%. These results
outperform the ResNet152 feature extraction results in (Ohata et al., 2021) by around 9%.
That is because the fine-tuned models are capable of extracting high-level features from the input
images. Furthermore, by using the modified VGG16 on the same dataset, the obtained
result reaches roughly 83% test accuracy, while (Rachapudi and Lavanya Devi, 2021) achieved
a test accuracy of 77% when utilizing their CNN architecture. This implies that the modification
of the pretrained models yields acceptable results on the histopathological image dataset.
The E-CNN (product rule) and E-CNN (majority voting) achieved promising results on the
Kather dataset. As shown in Table 7, the E-CNN (product rule) and E-CNN (majority
voting) performed better than all individual models, with accuracies of 91.28% and
90.63%, respectively, which is around 2% better than DenseNet121. These
results demonstrate the effectiveness of the proposed modified pretrained models and their
ensemble in this classification task.
" | Here is a paper. Please give your review comments after reading it. |
709 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Deep convolutional neural networks (CNN) manifest the potential for computer-aided diagnosis systems (CADs) by learning features directly from images rather than using traditional feature extraction methods. Nevertheless, due to the limited sample sizes and heterogeneity in tumor presentation in medical images, CNN models suffer from training issues, including training from scratch, which leads to overfitting. Alternatively, a pretrained neural network's transfer learning (TL) is used to derive tumor knowledge from medical image datasets using CNN that were designed for non-medical activations, alleviating the need for large datasets. This study proposes two ensemble learning techniques: (E-CNN (product rule) and E-CNN (majority voting)). These techniques are based on the adaptation of the pretrained CNN models to classify colon cancer histopathology images into various classes. In these ensembles, the individuals are, initially, constructed by adapting pretrained DenseNet121, MobileNetV2, InceptionV3, and VGG16 models. The adaptation of these models is based on a block-wise fine-tuning policy, in which a set of dense and dropout layers of these pretrained models is joined to explore the variation in the histology images. Then, the models' decisions are fused via product rule and majority voting aggregation methods. The proposed model is validated against the standard pretrained models and the most recent works on two publicly available benchmark colon histopathological image datasets: Stoean (357 images) and Kather colorectal histology (5000 images). The results were 97.20% and 91.28% accurate, respectively. The achieved results outperformed the state-of-the-art studies and confirmed that the proposed E-CNNs could be extended to be used in various medical image applications.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Colon cancer is the third most deadly disease in males and the second most hazardous in females. According to the World Cancer Research Fund International, over 1.8 million new cases were reported in 2018 <ns0:ref type='bibr' target='#b8'>(Belciug and Gorunescu, 2020)</ns0:ref>. In colon cancer diagnosis, the study of histopathological images under the microscope plays a significant role in the interpretation of specific biological activities.</ns0:p><ns0:p>Among the microscopic inspection functions, classification of images (organs, tissues, etc.) is one of considerable important tasks. However, classifying medical images into a set of different classes is a very challenging issue due to low inter-class distance and high intra-class variability <ns0:ref type='bibr' target='#b47'>(Sahran et al., 2018)</ns0:ref>, as illustrated in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. Some objects in medical images may be found in images belonging to different classes, and different objects may appear at different orientations and scales in a given class. During the manual assessment, physicians examine the Hematoxylin and Eosin (H&E) stained tissues under a microscope to analyze their histopathological attributes, such as cytoplasm, nuclei, gland, and lumen, as well as change in the benign structure of the tissues. It is worth noting that early categorization of colon samples as benign or malignant, or discriminating between different malignant grades is critical for selecting the best treatment protocol. Nevertheless, manually diagnosing colon H&E stained tissue under a microscope is time-consuming and tedious, as illustrated in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. In addition, the diagnostic performance depends on the experience and personal skills of a pathologist. It, also, suffers from inter-observer variability with around 75% diagnostic agreement across pathologists <ns0:ref type='bibr' target='#b18'>(Elmore et al., 2015)</ns0:ref>.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2022:02:70538:2:2:NEW 15 Jun 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>As a result, the treatment protocol might differ from one pathologist to another. These issues motivate development and research into the automation of diagnostic and prognosis procedures <ns0:ref type='bibr' target='#b52'>(Stoean et al., 2016)</ns0:ref>.</ns0:p><ns0:p>In recent decades, various computer aided diagnosis systems (CADs) have been introduced to tackle the classification problems in cancer digital pathology diagnosis to achieve reproducible and rapid results <ns0:ref type='bibr' target='#b11'>(Bicakci et al., 2020)</ns0:ref>. CADs assist in enhancing the classification performance and, at the same time, minimize the variability in interpretations <ns0:ref type='bibr' target='#b44'>(Rahman et al., 2021)</ns0:ref>. The faults produced by CADs/machine learning model have been announced to be less than those produced by a pathologist <ns0:ref type='bibr' target='#b32'>(Kumar et al., 2020)</ns0:ref>. These models can also assist clinicians in detecting cancerous tissue in colon tissue images. As a result, researchers are trying to construct CADs to improve diagnostic effectiveness and raise interobserver satisfaction <ns0:ref type='bibr' target='#b56'>(Tang et al., 2009)</ns0:ref>. Numerous conventional CADs for identifying colon cancer using histological images had been introduced by number of researchers in the past years <ns0:ref type='bibr' target='#b52'>(Stoean et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b24'>Kalkan et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b33'>Li et al., 2019)</ns0:ref>. Most of the conventional CADs focus on discriminating between benign and malignant tissues. Furthermore, they focus on conventional machine learning and image processing techniques. In this regards, they emphasize on some complex tasks such as extracting features from medical images and require extensive preprocessing. The complex nature of these tasks in machine learning techniques degrades the results of the CADs regarding accuracy and efficiency <ns0:ref type='bibr' target='#b3'>(Ahmad et al., 2021)</ns0:ref>. Conversely, recent advances in machine learning technologies make this task more accurate and cost-effective than traditional models (Abu <ns0:ref type='bibr' target='#b0'>Khurma et al., 2022;</ns0:ref><ns0:ref type='bibr' target='#b29'>Khurma et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b2'>Abu Khurmaa et al., 2021)</ns0:ref>. Manuscript to be reviewed Computer Science machine learning for colon histopathlogical image classification. Recently, one of the most successful deep learning techniques is the deep convolutional neural networks (CNN) <ns0:ref type='bibr' target='#b28'>(Khan et al., 2020)</ns0:ref> that consists of series of convolutional and pooling layers. These are followed by FCC and softmax layers. The FCC and the softmax represent the neural networks classifiers <ns0:ref type='bibr' target='#b7'>(Alzubi, 2022)</ns0:ref>. CNN has the ability to extract the features from images by parameter tuning of the convolutional and the pooling layers. Thus, it achieves great success in many fields especially in medical image classifications such as skin disease <ns0:ref type='bibr' target='#b21'>(Harangi, 2018)</ns0:ref>, breast <ns0:ref type='bibr' target='#b15'>(Deniz et al., 2018)</ns0:ref> and colon cancer classification <ns0:ref type='bibr' target='#b20'>(Ghosh et al., 2021)</ns0:ref>. 
CNN is categorized into two approaches: either training from scratch or pre-trained models (e.g., DenseNet <ns0:ref type='bibr' target='#b22'>(Huang et al., 2017)</ns0:ref>, MobileNet <ns0:ref type='bibr' target='#b49'>(Sandler et al., 2018)</ns0:ref>, and InceptionV3 <ns0:ref type='bibr' target='#b54'>(Szegedy et al., 2016)</ns0:ref>). The most effective approach in medical image classification is the pretrained models due to the limited number of training samples <ns0:ref type='bibr' target='#b48'>(Saini and Susan, 2020)</ns0:ref>.</ns0:p><ns0:p>CNN has been used in the domain of colon histopathlogical image classification. For example, Stefan Postavaru <ns0:ref type='bibr' target='#b41'>(Postavaru et al., 2017)</ns0:ref> utilized a CNN approach for the automated diagnosis of a set of colorectal cancer histopathological slides. They utilized CNN with 5 convolutional layers and reported accuracy of 91.4%. Ruxandra Stoean <ns0:ref type='bibr' target='#b53'>(Stoean, 2020)</ns0:ref> extended the work <ns0:ref type='bibr' target='#b41'>(Postavaru et al., 2017)</ns0:ref> and presented a modality method to tune the convolutional of the deep CNN. She introduced two Evolutionary algorithms for CNN parametrization. She conducted the experiments on colorectal cancer <ns0:ref type='bibr' target='#b52'>(Stoean et al., 2016)</ns0:ref> and reported the highest accuracy of 92%. It was obtained from these studies that the CNN models exceeded the handcrafted features.</ns0:p><ns0:p>While the CNN achieves high performance especially on large dataset size, it struggles to make such performance on small dataset size <ns0:ref type='bibr' target='#b15'>(Deniz et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b35'>Mahbod et al., 2020;</ns0:ref><ns0:ref type='bibr'>?)</ns0:ref>, and simply results in overfitting issue <ns0:ref type='bibr' target='#b61'>(Zhao et al., 2017)</ns0:ref>. To overcome this issue, the concept of transfer learning technique of pretrained CNN models is exploited for classification of colon histopathlogical images. In practical, the transfer learning technique of the pretrained models exports knowledge from previously CNN that has been trained on the large dataset to the new task with small dataset (target dataset). There are two approaches to transfer learning of pretrained models in medical image classification: feature extraction and fine-tuning <ns0:ref type='bibr' target='#b9'>(Benhammou et al., 2020)</ns0:ref>. The former method extracts features from any convolutional or pooling layers and removes the last FCC and softmax layers. While in the latter, the pretrained CNN models are adjusted for specific tasks. It is important to remember that the number of neurons in the final FC layer corresponds to the number of classes in the target dataset (i.e., the number of colon types). Following this replacement, the whole pre-trained model is retrained <ns0:ref type='bibr' target='#b35'>(Mahbod et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b9'>Benhammou et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b62'>Zhi et al., 2017)</ns0:ref> or the last FC layers are retrained <ns0:ref type='bibr' target='#b9'>(Benhammou et al., 2020)</ns0:ref>. 
Various pretrained models (e.g., DenseNet <ns0:ref type='bibr' target='#b22'>(Huang et al., 2017)</ns0:ref>, MobileNet <ns0:ref type='bibr' target='#b49'>(Sandler et al., 2018)</ns0:ref>, VGG16 <ns0:ref type='bibr' target='#b50'>(Simonyan and Zisserman, 2014)</ns0:ref>, and InceptionV3 <ns0:ref type='bibr' target='#b54'>(Szegedy et al., 2016)</ns0:ref>) have been introduced in recent years. Each pretrained model is constructed based on several convolution layers and filter sizes to extract specific features from the input image. However, transferring the begotten experience from the source (ImageNet) to our target (colon images) leads to losing some powerful features of histopathological image analysis <ns0:ref type='bibr' target='#b13'>(Boumaraf et al., 2021)</ns0:ref>. For example, CNN pretrained AlexNet and GoogleNet models were used on the colon histopathological images classification <ns0:ref type='bibr' target='#b40'>(Popa, 2021)</ns0:ref>. However, they achieved poor standard deviation results. Besides, using these pretrained models on the colon dataset needs a specific fine-tuning approach to achieve acceptable results.</ns0:p><ns0:p>To accommodate the pretrained CNN models to the colon image classification, we design a new set of transfer learning models ( DenseNet <ns0:ref type='bibr' target='#b22'>(Huang et al., 2017)</ns0:ref>, MobileNet <ns0:ref type='bibr' target='#b49'>(Sandler et al., 2018)</ns0:ref>, VGG16 <ns0:ref type='bibr' target='#b50'>(Simonyan and</ns0:ref><ns0:ref type='bibr' target='#b50'>Zisserman, 2014), and</ns0:ref><ns0:ref type='bibr'>InceptionV3 (Szegedy et al., 2016)</ns0:ref> to refine the pretrained models on the colon histopathological image tasks. Our transfer learning methods are based on a block-wise finetuning policy. We make the last set of residual blocks of the deep network models more domain-specific to our target colon dataset by adding dense layers and dropout layers while freezing the remaining initial blocks in the deep pretrained model. The adaptability of the proposed method is further extended by fine-tuning the neural network's hyper-parameters to improve the model generalization ability. Besides, a single pretrained model has a limited capacity to extract complete discriminating features, resulting in an inadequate representation of the colon histopathology performance <ns0:ref type='bibr' target='#b60'>(Yang et al., 2019)</ns0:ref>. As a result, this study proposes an ensemble of pretrained CNN models architectures (E-CNN) to identify the representation of colon pathological images from various viewpoints for more effective classification tasks.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>In this research, the following contributions are made:</ns0:p><ns0:p>• Investigation of the influence of the standard TL approaches ( DenseNet, MobileNet, VGG16, and InceptionV3) on the colon cancer classification task.</ns0:p><ns0:p>• Design a new set of transfer learning methods based on a block-wise fine-tuning approach to learn the powerful features of the colon histopathology images. The new design includes adding a set of dense and dropout layers while freezing the remainder of the initial layers in the pretrained models <ns0:ref type='bibr'>(DenseNet,</ns0:ref><ns0:ref type='bibr'>MobileNet,</ns0:ref><ns0:ref type='bibr'>VGG16,</ns0:ref><ns0:ref type='bibr'>and InceptionV3)</ns0:ref> to make them more specific for the colon domain requirements.</ns0:p><ns0:p>• Define and optimize a set of hyper-parameters for the new set of pretrained CNN models to classify colon histopathological images.</ns0:p><ns0:p>• An ensemble (E-CNN) is proposed to extract complementary features in colon histopathology images by using an ensemble of all the introduced transfer learning methods (base classifiers). The proposed E-CNN merges the decisions of all base classifiers via majority voting and product rules.</ns0:p><ns0:p>The remainder of this research is organized as follows. Literature review section goes over the related works. Our proposed methodology is presented in detail in the methodology section . The experiments results and discussion section analyzes and discusses the experimental results. The Section of conclusion brings this study to a close by outlining some research trends and viewpoints.</ns0:p></ns0:div>
<ns0:div><ns0:head>LITERATURE REVIEW</ns0:head><ns0:p>Deep learning pretrained models have made incredible progress in various kinds of medical image processing, specifically histopathological images, as they can automatically extract abstract and complex features from the input images <ns0:ref type='bibr' target='#b37'>(Manna et al., 2021)</ns0:ref>. Recently, CNN models based on deep learning design are dominant techniques in the CADs of cancer histopathological image classification <ns0:ref type='bibr' target='#b32'>(Kumar et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b35'>Mahbod et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b4'>Albashish et al., 2021)</ns0:ref>. CNN learn high-and mid-level abstraction, which is obtained from input RGB images. Thus, developing CADs using deep learning and image processing routines can assist pathologists in classifying colon cancer histopathological images with better diagnostic performance and less computational time. Numerous CADs for identifying colorectal cancer using histological images had been introduced by a number of researchers in past years. These CADs vary from conventional machine learning algorithms of the deep CNN. In this study, we present the related work of the colorectal cancer classification relying on colorectal cancer dataset <ns0:ref type='bibr' target='#b52'>(Stoean et al., 2016)</ns0:ref> as real-world test cases.</ns0:p><ns0:p>The authors in <ns0:ref type='bibr' target='#b41'>(Postavaru et al., 2017</ns0:ref>) designed a CNN model for colon cancer classification based on colorectal histopathological slides belonging to a healthy case and three different cancer grades(1, 2, and 3). They used an input image with the size of 256 × 256 × 3. They created five convolutional neural networks, followed by the ReLU activation function. In the introduced CNN, various kernel sizes were utilized in each Conv. Layer. Besides, they utilized batch normalization and only two FCC layers. They reported 91% accuracy in multiclass classification for the colon dataset in <ns0:ref type='bibr' target='#b52'>(Stoean et al., 2016)</ns0:ref>. However, in the proposed approach, only the size of the kernels is considered, while other parameters, like learning rate and epoch size, were not taken into account.</ns0:p><ns0:p>The author in <ns0:ref type='bibr' target='#b53'>(Stoean, 2020)</ns0:ref> extended the previous study <ns0:ref type='bibr' target='#b41'>(Postavaru et al., 2017)</ns0:ref> by applying an evolutionary algorithm (EA) in the CNN architecture. This is to automate two tasks: first, EA was conducted for tuning the CNN hyper-parameters of the convolutional layers. Stoean determined the number of kernels in CNN and their size. Second, the EA was used to support SVM in parameters ranking to determine the variable importance within the hyper-parameterization of CNN. The proposed approach achieved 92% colorectal cancer grading accuracy on the dataset in <ns0:ref type='bibr' target='#b52'>(Stoean et al., 2016)</ns0:ref>. However, using EA does not guarantee any diversity among the obtained hyper-parameters( solutions) <ns0:ref type='bibr' target='#b10'>(Bhargava, 2013)</ns0:ref>.Thus, choosing the kernel size and depth of CNN may not ensure high accuracy.</ns0:p><ns0:p>In another study for colon classification but on a different benchmark dataset, the authors in <ns0:ref type='bibr' target='#b36'>(Malik et al., 2019)</ns0:ref> Manuscript to be reviewed Computer Science constructed based on InceptionV3. 
Then, the authors modified the last FCC layers to become harmonious with the number of the classes in the colon classification task. Moreover, the adaptive CNN implementation was proposed to improve the performance of CNN architecture for the colon cancer detection task.</ns0:p><ns0:p>The study achieved around 87% accuracy for the multiclass classification task.</ns0:p><ns0:p>In another study <ns0:ref type='bibr' target='#b16'>(Dif and Elberrichi, 2020a)</ns0:ref>, a framework was proposed for the colon histopathological image classification task. The authors employed a CNN based on transferred learning from Resnet121 generating a set of models followed by a dynamic model selection using the particle swarm optimization (PSO) metaheuristic. The selected models were then combined by a majority vote and achieved 94.52% accuracy on the colon histopathological dataset <ns0:ref type='bibr' target='#b25'>(Kather et al., 2016)</ns0:ref>. In the same context, the authors in <ns0:ref type='bibr' target='#b17'>(Dif and Elberrichi, 2020b)</ns0:ref> explored the efficiency of reusing pre-trained models on histopathological images dataset instead of ImageNet based models for transfer learning. For this target, a fine-tuning method was presented to share the knowledge among different histopathological CNN models. The basic model was created by training InceptionV3 from scratch on one dataset while transfer learning and fine-tuning were performed using another dataset. However, this transfer learning-based strategy offered poor results on the colon histopathological images due to the limited number of the training dataset.</ns0:p><ns0:p>The conventional machine learning techniques have been utilized for the colon histopathology images dataset to achieve accepted results. For example, the 4-class colon cancer classification task on the dataset in <ns0:ref type='bibr' target='#b52'>(Stoean et al., 2016)</ns0:ref> was utilized in <ns0:ref type='bibr' target='#b12'>(Boruz and Stoean, 2018;</ns0:ref><ns0:ref type='bibr' target='#b27'>Khadilkar, 2021)</ns0:ref> to discriminate between various cancer types. In the former case <ns0:ref type='bibr' target='#b12'>(Boruz and Stoean, 2018)</ns0:ref>, the authors extracted contour low-level image features from grayscale transformed images. Then, these features were used to train the SVM classifier. Despite its simplicity, the study displayed a comparable performance to some computationally expensive approaches. The authors reported accuracy averages between 84.1% and 92.6% for the different classes. However, transforming the input images to grayscale leads to losing some information and degrades the classification results. Besides, using thresholding needs fine-tuning, which is a complex task. In latter case <ns0:ref type='bibr' target='#b27'>(Khadilkar, 2021)</ns0:ref>, the authors extracted morphological features from the colon dataset.</ns0:p><ns0:p>Mainly, they extracted harris corner and Gabor wavelet features. These features were then used to feed the neural network classifier. The authors utilized their framework to discriminate between benign and malignant cases. However, they ignored the multiclass classification task, which is more complex task in this domain.</ns0:p><ns0:p>Most of the above studies utilized a single deep CNN (aka weak learner) model to address various colon histopathology images classification tasks (binary or multiclass). 
Despite their extensive use, a single CNN model has the restricted power to capture discriminative features from colon histopathology images, resulting in unsatisfactory classification accuracy <ns0:ref type='bibr' target='#b60'>(Yang et al., 2019)</ns0:ref>. Thus, merging a group of weak learners forms an ensemble learning model, which is likely to be a strong learner and moderate the shortcomings of the weak learners <ns0:ref type='bibr' target='#b42'>(Qasem et al., 2022)</ns0:ref>.</ns0:p><ns0:p>Ensemble learning of deep pretrained models has been designed to fuse the decisions of different weak learners (individuals) to increase classification performance <ns0:ref type='bibr' target='#b59'>(Xue et al., 2020;</ns0:ref><ns0:ref type='bibr' target='#b63'>Zhou et al., 2021)</ns0:ref>. A limited number of studies applied ensemble learning with deep CNN models on colon histopathological image classification tasks <ns0:ref type='bibr' target='#b40'>(Popa, 2021)</ns0:ref>, <ns0:ref type='bibr' target='#b34'>(Lichtblau and Stoean, 2019)</ns0:ref>, <ns0:ref type='bibr' target='#b43'>(Rachapudi and Lavanya Devi, 2021)</ns0:ref>.</ns0:p><ns0:p>The authors in <ns0:ref type='bibr' target='#b40'>(Popa, 2021)</ns0:ref> proposed a new framework for the colon multiclass classification task.</ns0:p><ns0:p>They employed CNN pretrained AlexNet and GoogleNet models followed by softmax activation layers to handle the 4-class classification task. The best-reported accuracies on <ns0:ref type='bibr' target='#b52'>(Stoean et al., 2016)</ns0:ref> dataset ranged between 85% and 89%. However, the standard deviation of these results was around 4%. This means the results were not stable. AlexNet was also used in <ns0:ref type='bibr' target='#b34'>(Lichtblau and Stoean, 2019)</ns0:ref> as a feature extractor for the colon dataset. Then, an ensemble of five classifiers was built. The obtained results for this ensemble achieved around 87% accuracy.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b39'>(Ohata et al., 2021)</ns0:ref>, the authors use CNN to extract features of colorectal histological images.</ns0:p><ns0:p>They employed various pretrained models, i.e., VGG16 and Inception, to extract deep features from the input images. Then, they employed ensemble learning by utilizing five classifiers (SVM, Bayes, KNN, MLP, and Random Forest) to classify the input images. They reported 92.083% accuracy on the colon histological images dataset in <ns0:ref type='bibr' target='#b25'>(Kather et al., 2016)</ns0:ref>. A research study in <ns0:ref type='bibr'>(Rachapudi and</ns0:ref> <ns0:ref type='table' target='#tab_13'>2022:02:70538:2:2:NEW 15 Jun 2022)</ns0:ref> Manuscript to be reviewed Computer Science Fine-tune: only kernel size and number of kernels in CNN using EA method <ns0:ref type='bibr' target='#b40'>(Popa, 2021)</ns0:ref> colorectal in <ns0:ref type='bibr' target='#b52'>(Stoean et al., 2016)</ns0:ref> AlexNet and 89% feature extractor GoogleNet <ns0:ref type='bibr' target='#b41'>(Postavaru et al., 2017)</ns0:ref> colorectal in <ns0:ref type='bibr' target='#b52'>(Stoean et al., 2016)</ns0:ref> CNN model from scratch 91%</ns0:p><ns0:p>The number of filters and the kernel size <ns0:ref type='bibr' target='#b34'>(Lichtblau and Stoean, 2019)</ns0:ref> colorectal in <ns0:ref type='bibr'>(Stoean et</ns0:ref> Overall, the earlier studies, summarized in Table <ns0:ref type='table' target='#tab_2'>1</ns0:ref>, revealed a notable trend in using deep CNN to classify colon cancer histopathological images. 
It was used to provide much higher performance than the conventional machine learning models. Nevertheless, training CNN models are not that trivial as they need considerable memory resources and computation and are usually hampered by over-fitting problems. Besides, they require a large amount of training dataset. In this regard, the recent studies <ns0:ref type='bibr' target='#b3'>(Ahmad et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b13'>Boumaraf et al., 2021)</ns0:ref> have demonstrated that sufficient fine-tuned pretrained CNN models performance is much more reliable than the one trained from scratch, or in the worst cases the same. Besides, using ensemble learning of pretrained models show effective results in various applications of image classification tasks. Therefore, this research fills the gap in the previous studies for colon histopathological images classification by introducing a set of transfer learning models based on Dense.</ns0:p><ns0:p>Then, reap the benefits of the ensemble learning to fuse their decision.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODOLOGY</ns0:head><ns0:p>This study constructs an ensemble of the pretrained models with fine-tuning for the colon diagnosis based on histopathological images. Mainly, four pretrained models (DenseNet121 MobileNetV2, InceptionV3, and VGG16) are fine-tuned, and then their predicted probabilities are fused to produce a final decision for a test/image. The pretrained models utilize transfer learning to mitigate these models' weights to handle a similar classification task. Ensemble learning of pretrained models attains superior performance for histopathological image classification.</ns0:p></ns0:div>
<ns0:div><ns0:head>Transfer Learning (TL) and pretrained Deep Learning Models for medical image</ns0:head><ns0:p>Transferring knowledge from one expert to another is known as transfer learning. In deep learning techniques, this approach is utilized where the CNN is trained on the base dataset (source domain), which has a large number of samples (e.g., ImageNet). Then, the weights of the convolutional layers are transferred to the new small dataset (target domain). Using pretrained models for classification tasks can be divided into two main scenarios: freezing the layers of the pretrained model and fine-tuning the models. In the former scenario: the convolutional layers of a deep CNN model are frozen, and the last Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>FCC are omitted. In this way, the convolutional layers act as feature extractions. Then these features are passed to a specific classifier (e.g., KNN, SVM) <ns0:ref type='bibr' target='#b57'>(Taspinar et al., 2021)</ns0:ref>. While in the latter case, the layers are fine-tuned, and some hyper-parameters are adjusted to handle a new task. Besides, the top layer (fully connected layer) is adjusted for the target domain. In this study, for example, we configure the number of neurons in this layer to (4) in accordance with the number of classes in the colon dataset. TL aims to boost the target field's accuracy (i.e., colon histopathological) by taking full advantage of the source field (i.e., ImageNet). Therefore, in this study, we transfer the weights of the set of four powerful pretrained CNN models ( DenseNet <ns0:ref type='bibr' target='#b22'>(Huang et al., 2017)</ns0:ref>, MobileNet <ns0:ref type='bibr' target='#b49'>(Sandler et al., 2018)</ns0:ref>, VGG16 <ns0:ref type='bibr' target='#b50'>(Simonyan and Zisserman, 2014)</ns0:ref>, and InceptionV3 <ns0:ref type='bibr' target='#b54'>(Szegedy et al., 2016)</ns0:ref>) with fine-tuning to increase the diagnosis performance of the colon histopathological image classification. The pretrained Deep CNN models and the proposed ensemble learning are presented in the subsequent section.</ns0:p></ns0:div>
<ns0:div><ns0:head>Pretrained DenseNet121</ns0:head><ns0:p>Dense CNN(DenseNet) was offered by Huang et al. <ns0:ref type='bibr' target='#b22'>(Huang et al., 2017)</ns0:ref>. The architecture of DenseNet was improved based on the ResNet model. The prominent architecture of DenseNet is based on connecting the model using dense connection instead of direct connection within all the hidden layers of the CNN <ns0:ref type='bibr' target='#b6'>(Alzubaidi et al., 2021)</ns0:ref>. The crucial benefits of such an architecture are that the extracted features/features map is shared with the model. The number of training parameters is low compared with other CNN models similar to CNN models because of the direct synchronization of the features to all following layers. Thus, the DenseNet reutilizes the features and makes their structure more efficient. As a result, the performance of the DenseNet is increased <ns0:ref type='bibr' target='#b3'>(Ahmad et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b20'>Ghosh et al., 2021)</ns0:ref>. The main components of the DenseNet are: the primary composition layer, followed by the ReLU activation function, and dense blocks. The final layer is a set of FC layers <ns0:ref type='bibr' target='#b55'>(Talo, 2019)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Pretrained MobileNetV2</ns0:head><ns0:p>MobileNet <ns0:ref type='bibr' target='#b49'>(Sandler et al., 2018)</ns0:ref> is a lightweight CNN model based on inverted residuals and a linear bottleneck, which form shortcut connections between the thin layers. It is designed to handle limited hardware resources because it is a low-latency model, and a small low power. The main advantage of the MobileNet is the tradeoff between various factors such as latency, accuracy, and resolution <ns0:ref type='bibr' target='#b31'>(Krishnamurthy et al., 2021)</ns0:ref>. In MobileNet, depth separable convolutional (DSC) and point-wise convolutional kernels are used to produce feature maps. Predominantly, DSC is a factorization approach, which replaces the standard convolution with a faster one. In MobileNet, DSC first uses depth-wise kennels 2-D filters to filter the spatial dimensions of the input image. The size of the depth-wise filter is Dk x Dk x1, where Dk is the size of the filter, which is much less than the size of the input images. Then, it is followed by a point-wise convolutional filter that mainly applied to filter the depth dimension of the input images. The size of the depth filter is1x1xn, where n is the number of kernels. They separate each DSC from point-wise convolutional using batch normalization and ReLU function. Therefore, DSC is called (separable). Finally, the last FCC is connected with the Softmax layer to produce the final output/ classification result. Using depth-wise convolutional can reduce the complexity by around 22.7%. This means the DSC takes only approximately 22% of the computation required by the standard convolutional. Based on this reduction, MobileNet is becoming seven times faster than the traditional convolutional. Thus, it becomes more desirable when the hardware is limited <ns0:ref type='bibr' target='#b51'>(Srinivasu et al., 2021)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Pretrained InceptionV3</ns0:head><ns0:p>Google teams in <ns0:ref type='bibr' target='#b54'>(Szegedy et al., 2016)</ns0:ref> introduced the InceptionV3 CNN. The architecture of InceptionV3 was updated based on the inceptionV1 model, as illustrated in Figure <ns0:ref type='figure'>2</ns0:ref>. It mainly addressed some issues in the previous inceptionV1 such as auxiliary classifiers by add batch normalization and representation bottleneck by adding kernel factorization <ns0:ref type='bibr' target='#b38'>(Mishra et al., 2020)</ns0:ref>. The architecture of the inceptionV3 includes multiple various types of kernels (i.e., kernel size) in the same level. This structure aims to solve the issue of extreme variation in the location of the salient regions in the input images under consideration <ns0:ref type='bibr' target='#b38'>(Mishra et al., 2020)</ns0:ref>. The inceptionV3 <ns0:ref type='bibr' target='#b54'>(Szegedy et al., 2016)</ns0:ref> utilizes a small filter size (1x7 and 1x5) rather than a large filter (7x7 and 5x5). In addition, a bottleneck of 1x1 convolution is utilized. Therefore, better feature representation.</ns0:p><ns0:p>The architecture of inceptionV3 <ns0:ref type='bibr' target='#b54'>(Szegedy et al., 2016)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science layers with 3x3 or 5x5 filter size. The output of these layers is aggregated into a single layer, which represents the output layer (e.g., ensemble technique). Using parallel layers with each other will save a lot of memory and increase the model's capacity without increasing its depth.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref>. The inception model from <ns0:ref type='bibr' target='#b55'>(Talo, 2019)</ns0:ref>.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref>. The VGG16 model <ns0:ref type='bibr' target='#b50'>(Simonyan and Zisserman, 2014)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Pretrained VGG16</ns0:head><ns0:p>VGG16 was presented by Simonyan et al. <ns0:ref type='bibr' target='#b50'>(Simonyan and Zisserman, 2014)</ns0:ref> as a deeper convolutional neural network model. The basic design of this model is to replace the large kernels with smaller kernels, and extending the depth of the CNN model <ns0:ref type='bibr' target='#b6'>(Alzubaidi et al., 2021)</ns0:ref>. Thus, the VGG16 becomes potentially more reliable in carrying out different classification tasks. Figure <ns0:ref type='figure'>3</ns0:ref> shows the basic VGG16 (Simonyan and Zisserman, 2014) architecture. It consists of five blocks with 41 layers, where 16 layers have learnable weights; 13 convolutional layers and 3 FCC layers from the learnable layers <ns0:ref type='bibr' target='#b28'>(Khan et al., 2020)</ns0:ref>. The first two blocks include two convolutional layers, while the last three blocks consist of three convolutional layers. The convolutional layers use small kernels with size of 3x3 and padding 1. These convolutional layers are separated using the max-pooling layers that use 2x2 filter size with padding 1. The output of the last convolutional layer is 4096, which makes the number of neurons in the FCC 4096 neurons.</ns0:p><ns0:p>As illustrated in Table <ns0:ref type='table' target='#tab_7'>2</ns0:ref> , VGG16 uses around 134 million parameters, which raises the complexity of VGG16 relating to other pretrained models <ns0:ref type='bibr' target='#b58'>(Tripathi and Singh, 2020;</ns0:ref><ns0:ref type='bibr' target='#b30'>Koklu et al., 2022)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>The proposed Deep CNN Ensemble Based on softmax</ns0:head><ns0:p>The proposed deep ensemble CNNs (E-CNNs) architecture is based on two phases (base classifiers and fuse techniques). In the former phase, four modified models have been utilized: DenseNet121, Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Besides, using initial weights in pretrained models affect the classification performance because the CNNs pretrained models are nonlinear designs. These pretrained models learn complicated associations from training data with the assistance of back propagation and stochastic optimization <ns0:ref type='bibr' target='#b3'>(Ahmad et al., 2021)</ns0:ref>. Therefore, this study introduces a block-wise fine-tuning technique to adapt the standard CNNs models to handle the heterogeneity nature in colorectal histology image classification tasks.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_2'>4</ns0:ref> illustrates the main steps of the design of the block-wise fine-tuning technique. First, the benchmark colon images are loaded. Then, some preprocessing tasks on the training and testing images are performed to prepare them for the pretrained models, (e.g. resizing them to 224x224x3). The images are then rescaled to 1/255 as in the previous related studies <ns0:ref type='bibr' target='#b54'>(Szegedy et al., 2016)</ns0:ref>. After splitting the dataset into training and testing, the four independent pretrained models: <ns0:ref type='bibr' target='#b22'>(Huang et al., 2017)</ns0:ref>, MobileNet <ns0:ref type='bibr' target='#b49'>(Sandler et al., 2018)</ns0:ref>, VGG16 <ns0:ref type='bibr' target='#b50'>(Simonyan and Zisserman, 2014)</ns0:ref>, and InceptionV3 <ns0:ref type='bibr' target='#b54'>(Szegedy et al., 2016)</ns0:ref>)</ns0:p><ns0:p>are loaded without changing their weights. Then, the FCC and softmax layers are omitted from the loaded pretrained CNN models. These layers were originally designed to output 1000 classes from the ImageNet dataset. Two dense layers with a varying number of hidden neurons are then added to strengthen the vital data-articular feature learning from each individual pretrained model. These dense layers are followed by the ReLU nonlinear activation function, which allows us to learn complex relationships among the data <ns0:ref type='bibr' target='#b3'>(Ahmad et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b19'>Garbin et al., 2020)</ns0:ref>. Next, a 0.3 dropout layer is added to address the long training time and overfitting issues in classification tasks <ns0:ref type='bibr' target='#b15'>(Deniz et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b13'>Boumaraf et al., 2021)</ns0:ref>. While in the product rule, the posterior probability outputsP j t (I) for each class label j are generated by the base classifier t for the test image (I). Then the class with the maximum likelihood of product is considered the final decision. Eq. (2) shows the product rule technique in the proposed E-CNN (product rule). Algorithm 2 illustrates the proposed E-CNNs with majority voting and product rule. Evaluate the performance of I using the test data j. </ns0:p><ns0:formula xml:id='formula_0'>P(I) = max j=1toc T =4 ∏ t=1 P j t (I)<ns0:label>(2</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:head>Resources used</ns0:head><ns0:p>All experiments are implemented in Python using TensorFlow and the Keras API, and are run in Google Colaboratory ('Colab'). In Colab, we use a Tesla GPU to run the experiments after loading the dataset into Google Drive <ns0:ref type='bibr' target='#b41'>(Postavaru et al., 2017)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>EXPERIMENTAL RESULTS AND DISCUSSION</ns0:head><ns0:p>This section outlines the experiments and evaluation results of the proposed E-CNN and its individual models, together with a synopsis of the training and test datasets. It presents the results of the proposed E-CNN with majority voting and product rule, of other standard pretrained models, and of state-of-the-art colon cancer classification methods, as well as comparisons between the proposed E-CNN and CNN models trained from scratch.</ns0:p></ns0:div>
<ns0:div><ns0:head>Dataset</ns0:head><ns0:p>To evaluate the validity of the proposed E-CNN for colon diagnosis from histopathological images, two distinct benchmark colon histology image datasets from <ns0:ref type='bibr' target='#b52'>(Stoean et al., 2016)</ns0:ref> and <ns0:ref type='bibr' target='#b25'>(Kather et al., 2016)</ns0:ref> are applied. Further information about these datasets is as follows:</ns0:p><ns0:p>(A) Stoean (357 images): The histology image dataset <ns0:ref type='bibr' target='#b52'>(Stoean et al., 2016)</ns0:ref> was collected from the Hospital of Craiova, Romania. The benchmark dataset consists of 357 histopathological H&E images of normal tissue (grade 0) and of cancer grades 1, 2, and 3, with 10x magnification. All images have the same resolution of 800 × 600 pixels. The distribution of images over the classes is as follows: grade 0: 62 images, grade 1: 96 images, grade 2: 99 images, and grade 3: 100 images. All images are 8-bit RGB in JPEG format. Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows some sample images and illustrates how structurally similar the different grades are, which makes discriminating between the various grades challenging.</ns0:p></ns0:div>
<ns0:div><ns0:head>Experimental Setting</ns0:head><ns0:p>As the proposed E-CNN aims to assist in diagnosing colon cancer based on histopathological images, the benchmark dataset in <ns0:ref type='bibr' target='#b52'>(Stoean et al., 2016)</ns0:ref> is used in the experiments. The number of epochs was selected as 10. The models were trained by stochastic gradient descent (SGD) with momentum. All the proposed TL models employed cross-entropy (CE) as the loss function. The cross-entropy is mainly utilized to estimate the distance between the prediction likelihood vector (E) and the one-hot-encoded ground truth probability vector (T) <ns0:ref type='bibr' target='#b13'>(Boumaraf et al., 2021)</ns0:ref>. The following equation depicts the CE, Eq. 3:</ns0:p><ns0:formula xml:id='formula_1'>CE(E, T) = −∑_{i} T_i log E_i<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>where CE is used to tell how well the output E matches the ground truth T. Furthermore, a dropout layer was added to all the proposed TL models to avoid overfitting during training. It randomly drops activations during the training phase, preventing units from co-adapting too strongly <ns0:ref type='bibr' target='#b13'>(Boumaraf et al., 2021)</ns0:ref>. In this study, dropout was set to 0.3 so that units are randomly dropped with a probability of 0.3, which is typical when introducing dropout in deep learning models.</ns0:p></ns0:div>
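<ns0:p>A minimal sketch of this training configuration, assuming the Keras API and the hyperparameters of Table 3 (1/255 rescaling, 224 × 224 inputs, batch size 16, SGD with momentum, cross-entropy loss, 10 epochs), is shown below; the momentum value, the directory paths and the generator-based input pipeline are illustrative assumptions.</ns0:p>
```python
# Minimal sketch (assumptions: directory-based input pipeline, momentum=0.9,
# hypothetical paths): build and train one adapted base classifier with the
# Table 3 settings (SGD with momentum, cross-entropy loss, 10 epochs).
import tensorflow as tf
from tensorflow.keras import layers, models

gen = tf.keras.preprocessing.image.ImageDataGenerator(rescale=1.0 / 255)
train_data = gen.flow_from_directory("stoean_train/",   # hypothetical path
                                     target_size=(224, 224), batch_size=16)
test_data = gen.flow_from_directory("stoean_test/",     # hypothetical path
                                    target_size=(224, 224), batch_size=16)

# Frozen DenseNet121 backbone + new head (512, 64, dropout 0.3, 4-class softmax).
base = tf.keras.applications.DenseNet121(weights="imagenet", include_top=False,
                                         input_shape=(224, 224, 3))
base.trainable = False
x = layers.Dense(512, activation="relu")(layers.Flatten()(base.output))
x = layers.Dropout(0.3)(layers.Dense(64, activation="relu")(x))
model = models.Model(base.input, layers.Dense(4, activation="softmax")(x))

model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=1e-6,  # Table 3; may be the schedule minimum
                                      momentum=0.9),       # assumed momentum value
    loss="categorical_crossentropy",                       # Eq. (3)
    metrics=["accuracy"])
history = model.fit(train_data, validation_data=test_data, epochs=10)
```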
<ns0:div><ns0:head>Evaluation Criteria</ns0:head><ns0:p>In this work, multiclass (four-class) classification tasks have been carried out using the base classifiers and their ensembles on the benchmark colon dataset <ns0:ref type='bibr' target='#b52'>(Stoean et al., 2016)</ns0:ref>. The obtained results have been evaluated using average accuracy, average sensitivity, average specificity, and standard deviation over ten runs. All of these metrics are computed from the confusion matrix, which includes the true negative (TN) and true positive (TP) values. TN and TP denote the correctly classified benign and malignant samples, respectively. The false negatives (FN) and false positives (FP) denote the wrongly classified malignant and benign samples. These metrics are defined as follows:</ns0:p><ns0:p>• The average classification accuracy: Accuracy is the proportion of correctly classified samples (TP and TN) among all samples. A technique's classification accuracy is measured by Equation 4 as follows:</ns0:p><ns0:formula xml:id='formula_2'>Acc = (1/M) ∑_{j=1}^{M} (TP + TN)/(TP + TN + FP + FN) * 100%,<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>where M is the number of independent runs of the proposed E-CNN and its individual models.</ns0:p></ns0:div>
<ns0:div><ns0:p>• Average Sensitivity: Sensitivity measures the proportion of positive samples that are correctly classified, which is determined as described in Eq. 5:</ns0:p><ns0:formula xml:id='formula_4'>Sensitivity = (1/M) ∑_{j=1}^{M} TP/(TP + FN) * 100%,<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>The sensitivity value lies on a [0, 1] scale, where one indicates ideal classification and zero the worst possible classification. Multiplication by 100 is applied to the sensitivity to obtain the corresponding percentage.</ns0:p><ns0:p>• Average Specificity: Specificity is an evaluation metric for the negative samples within a classification approach. In particular, it measures the proportion of negative samples that are correctly classified. Specificity is computed as in Eq. 6:</ns0:p><ns0:formula xml:id='formula_6'>Specificity = (1/M) ∑_{j=1}^{M} TN/(TN + FP) * 100%<ns0:label>(6)</ns0:label></ns0:formula></ns0:div>
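<ns0:p>A minimal sketch of how these per-run metrics can be computed from confusion-matrix counts is given below; the one-vs-rest treatment of each class and the function names are illustrative assumptions, and no real results are embedded.</ns0:p>
```python
# Minimal sketch (assumption: one-vs-rest counts for a chosen class): compute
# the accuracy, sensitivity and specificity of Eqs. (4)-(6) for a single run,
# then average over M independent runs as reported in the tables.
import numpy as np
from sklearn.metrics import confusion_matrix

def run_scores(y_true, y_pred, positive_class):
    cm = confusion_matrix(y_true, y_pred)
    tp = cm[positive_class, positive_class]
    fn = cm[positive_class, :].sum() - tp      # missed positives
    fp = cm[:, positive_class].sum() - tp      # false alarms
    tn = cm.sum() - tp - fn - fp
    acc = (tp + tn) / (tp + tn + fp + fn) * 100.0
    sens = tp / (tp + fn) * 100.0
    spec = tn / (tn + fp) * 100.0
    return acc, sens, spec

def average_over_runs(per_run_scores):
    """per_run_scores: list of (acc, sens, spec) tuples, one per independent run."""
    scores = np.asarray(per_run_scores)
    return scores.mean(axis=0), scores.std(axis=0)   # averages and STD over M runs
```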
<ns0:div><ns0:head>Results and discussion</ns0:head><ns0:p>This subsection presents the experimental results obtained from the proposed E-CNN and its individual models.</ns0:p><ns0:p>These results are compared to the classification accuracy obtained with standard pretrained models (e.g., DenseNet, MobileNet, VGG16, and InceptionV3). After that, the performance of the standard pretrained models was compared to that of the adaptive pretrained models to evaluate the influence of the block-wise fine-tuning policy. The proposed E-CNN was also compared with state-of-the-art CNN models for colon cancer classification such as <ns0:ref type='bibr' target='#b41'>(Postavaru et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b53'>Stoean, 2020;</ns0:ref><ns0:ref type='bibr' target='#b40'>Popa, 2021)</ns0:ref>. In the end, to assess the significance of the proposed E-CNN, statistical tests were used to verify whether there is a statistically significant difference between the performance of the E-CNN and that of the state-of-the-art CNN models.</ns0:p></ns0:div>
<ns0:div><ns0:p>Table <ns0:ref type='table'>4</ns0:ref>. Evaluation results for the proposed E-CNN, its individuals (modified TL models) when the number of epochs is 10, and the standard TL models on the colon histopathological images dataset, based on the average Accuracy, Sensitivity, Specificity, and average standard deviation (STD) over 10 runs; best results in bold.</ns0:p><ns0:p>The classification tasks were carried out using the individual classifiers of the modified transfer learning set (Modified DenseNet121, Modified MobileNetV2, Modified InceptionV3, and Modified VGG16). The softmax of the FCC of these transfer learning models is used as the classification algorithm. Then, the ensemble (E-CNN) was obtained via the product and majority voting aggregation methods. To illustrate the performance of the proposed E-CNN, the average accuracy, sensitivity, and specificity over the ten runs are used for evaluating the testing dataset.</ns0:p><ns0:p>Besides, the standard deviation (STD) for each base classifier and the E-CNN is also used to estimate the effectiveness of the proposed E-CNN. The experimental results of the proposed E-CNN and its individuals (i.e., base classifiers) on the first dataset are shown in Tables 4, 5, and 6, and Figures 6, 7, 8, 9, 10, and 11, respectively. Meanwhile, the results on Kather's dataset <ns0:ref type='bibr' target='#b25'>(Kather et al., 2016)</ns0:ref> are shown in Table <ns0:ref type='table' target='#tab_13'>7</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table'>4</ns0:ref> compares the results obtained by the modified pretrained models, the baseline (standard) pretrained models, and the proposed E-CNN. Table <ns0:ref type='table'>4</ns0:ref> indicates that all the modified models successfully outperformed the standard pretrained models on the first dataset. However, among the four proposed individual pretrained models, the modified VGG16 is the least accurate, reaching about 79% for the multiclass classification task. This could be explained by VGG16's limited number of layers (i.e., 16 layers). Compared to the standard VGG16, the average accuracy difference between the modified VGG16 and the standard VGG16 was more than 10%, which is large and statistically significant. This strong performance of the modified models could be attributed to the ability of the adaptation layers to find the most abstract features, which aid the FCC and softmax classifier in discriminating between the various grades in colon histopathological images.</ns0:p></ns0:div>
<ns0:div><ns0:p>As a result, this reduces the problem of inter-class confusion. Moreover, the proposed modified pretrained models outperformed the standard models, and combining their decisions enables a better generalization ability than a single pretrained model <ns0:ref type='bibr' target='#b14'>(Cao et al., 2020)</ns0:ref>. In this study, two ensemble learning models are utilized, E-CNN (product rule) and E-CNN (majority voting), to merge the decisions of the single models. The former is based on merging the probabilities of the individual modified models, while the latter is based on combining the output predictions of the individual models. Figure <ns0:ref type='figure' target='#fig_9'>7</ns0:ref> shows the confusion matrix obtained for the tested samples (20% of the dataset) in one run of the classification performed with the proposed E-CNN (product rule). The empirical results of the proposed E-CNN (majority voting) and E-CNN (product rule) achieved accuracy rates of 94.5% and 95.2%, respectively.</ns0:p><ns0:p>These accuracy values were higher compared to the individual models. For example, the E-CNN (product rule) result showed a 3.2% increase compared to the modified DenseNet121 model. This result reveals the significance of the product rule in the proposed E-CNN for colon image classification, because it relies on the assumption of independent base classifiers <ns0:ref type='bibr' target='#b5'>(Albashish et al., 2016)</ns0:ref>. To show the adequacy of the proposed E-CNN, sensitivity was computed. Table <ns0:ref type='table'>4</ns0:ref> confirms that the E-CNN has higher sensitivity values than all the individual models. It is worth noting that the sensitivity performance matches the accuracy values, thereby emphasizing the consistency of the E-CNN results. E-CNN (product rule) was able to yield a better sensitivity value (95.6%). Among all the proposed transfer learning models, InceptionV3 delivered the overall maximum sensitivity performance. Besides, the specificity measure shows that the E-CNN and its individual models correctly identify the negative samples of each class.</ns0:p><ns0:p>Furthermore, the standard deviation analysis over the ten runs shows that the ensemble E-CNN (product rule) has the minimum value (around 1.7%). These results indicate that it is stable and capable of producing consistent outcomes regardless of the randomization.</ns0:p><ns0:p>To show the adequacy of the proposed modified CNN models even after being trained on a smaller dataset, we have provided accuracy and loss (error function) curves. The loss function quantifies the cost of a particular set of network parameters based on how well the outputs they generate match the ground truth labels in the training set. The TL models employ SGD to determine the set of parameters that minimizes the loss. Figures <ns0:ref type='figure' target='#fig_11'>8 and 9</ns0:ref> depict the proposed TL models' accuracy and loss curves for the training and testing datasets over ten epochs. Figure <ns0:ref type='figure' target='#fig_10'>8</ns0:ref> shows that the proposed DenseNet and Inception models achieved good accuracy for the training and test datasets over the various epochs, while the MobileNet and VGG16 models performed adequately. 
One possible explanation is that the proposed models are stable during the training phase, allowing them to converge well.</ns0:p><ns0:p>The DenseNet121 loss curve indicates that its training loss dropped significantly faster than that of VGG16 and that its testing accuracy improved much faster. In more detail, the VGG16 loss decreased approximately linearly, whereas the DenseNet loss decreased sharply. This is consistent with DenseNet121's classification performance in Table <ns0:ref type='table'>4</ns0:ref>, where it outperformed all other proposed models.</ns0:p><ns0:p>Furthermore, one can see that all of the proposed TL models, except VGG16, achieved high testing accuracies while also improving generalization performance.</ns0:p><ns0:p>To further demonstrate the efficacy of the proposed E-CNNs, we also compare the results obtained on the colon histopathological images benchmark dataset with the most recent related works <ns0:ref type='bibr' target='#b53'>(Stoean, 2020;</ns0:ref><ns0:ref type='bibr' target='#b40'>Popa, 2021)</ns0:ref>. Table <ns0:ref type='table' target='#tab_11'>5</ns0:ref> contains the comparison between the proposed E-CNNs and the recent state-of-the-art studies. From the tabular results, one can see that the proposed E-CNNs achieved higher results compared either to pretrained models, such as in <ns0:ref type='bibr' target='#b52'>(Stoean et al., 2016)</ns0:ref>, or to CNNs constructed from scratch. One of the main reasons for these superior results is the adaptation of the transfer learning models with the appropriate layers; additionally, using ensemble learning demonstrates the ability of the proposed E-CNNs to increase discrimination between the various classes in the histopathological colon images dataset. In comparison to the recent study by <ns0:ref type='bibr' target='#b53'>Stoean et al. (Stoean, 2020)</ns0:ref>, our ensemble E-CNNs has shown better performance. They constructed a CNN from scratch and then used evolutionary algorithms (EA) to fine-tune its parameters. Their classification accuracy on the colon histopathological images dataset was 92%. We find the proposed method superior due to its deeper architectures and its use of ensemble learning.</ns0:p><ns0:p>Moreover, the obtained classification accuracies were compared with the pretrained models GoogleNet and AlexNet in <ns0:ref type='bibr' target='#b40'>(Popa, 2021)</ns0:ref>. The proposed method exceeded the pretrained models. The classification accuracy of GoogleNet and AlexNet on colon histopathological images was 85.62% and 89.53%, respectively. The average accuracy difference between the proposed method and these pretrained models was more than 10% and 6%, respectively, which is large and statistically significant. Two critical observations are to be made here: first, adapting pretrained models to a specific task increases performance. Second, using pretrained models as feature extractors without the softmax classifier may degrade the classification accuracy on the colon histopathological image dataset.</ns0:p><ns0:p>To verify that the modified pretrained models are not overfitted, we re-trained them for 30 epochs as in Kather's work <ns0:ref type='bibr' target='#b25'>(Kather et al., 2016)</ns0:ref>. Figures 10 and 11 present the training and validation charts for all the proposed models after being re-trained. 
According to the validation accuracy in Figure <ns0:ref type='figure' target='#fig_13'>10</ns0:ref>, the validation curves of the modified models increased markedly after epoch ten. In fact, they exceeded the training curves. This indicates that the modified models were trained well and avoided the overfitting issue. Figure <ns0:ref type='figure' target='#fig_14'>11</ns0:ref> provides loss curves for the modified models over the 30 epochs. It is clear that there is a reduction in the validation loss compared to the training loss, which is noticeable in the loss curves for the individuals of the proposed E-CNN. The results of the E-CNN and its individual base learners with the 30 epochs are illustrated in Table <ns0:ref type='table'>6</ns0:ref>. As depicted in this table, the results show that the modified models outperformed the same models when the number of epochs was set to 10. For example, the modified DenseNet121, MobileNetV2, InceptionV3, and VGG16 with 30 epochs increased the accuracy by 6%, 2%, 4%, and 7%, respectively. This may indicate greater success when increasing the number of epochs and training the deep learning models for long enough.</ns0:p><ns0:p>Furthermore, the increased performance of the base learners affects the ensemble models. Thus, the results of E-CNN (product rule) and E-CNN (majority voting) are increased by around 2% and 1.0%, respectively, compared to the same ensembles when the number of epochs was ten. Figure <ns0:ref type='figure' target='#fig_15'>12</ns0:ref> shows the confusion matrix of the E-CNN (product rule). These results indicate that the individual learners that we have modified and their ensemble perform robustly better and are not overfitted when increasing the number of epochs. Moreover, to validate the proposed modified models and their ensembles, we applied these models to the second colon histopathological dataset, called the Kather dataset <ns0:ref type='bibr' target='#b25'>(Kather et al., 2016)</ns0:ref>. This dataset contains 5,000 histological images of human colon cancer from eight distinct kinds of tissue. Table <ns0:ref type='table' target='#tab_13'>7</ns0:ref> gives the accuracy, sensitivity, and specificity of the proposed individual pretrained models, E-CNN (product rule), and E-CNN (majority voting). Besides, the proposed modified models and their ensembles are compared to similar experiments previously used to assess classification on the Kather dataset.</ns0:p><ns0:p>Based on the results, the proposed modified models are able to separate the eight different classes in the histopathological images. Both modified InceptionV3 and DenseNet121 achieved a testing accuracy of roughly 89%, with a standard deviation of less than 0.5%. These results outperform the ResNet152 feature extraction results in <ns0:ref type='bibr' target='#b39'>(Ohata et al., 2021</ns0:ref>) by around 9%. That is because the fine-tuned models are capable of extracting high-level features from the input images. Furthermore, by using the modified VGG16 on the same dataset, the obtained result has roughly 83% test accuracy, while <ns0:ref type='bibr' target='#b43'>(Rachapudi and Lavanya Devi, 2021</ns0:ref>) achieved a test accuracy of 77% when utilizing a CNN architecture. This implies that the modification to the pretrained models yields acceptable results on the histopathological image dataset. The E-CNN (product rule) and E-CNN (majority voting) achieved promising results on the Kather dataset. 
As shown in Table <ns0:ref type='table' target='#tab_13'>7</ns0:ref>, the E-CNN (product rule) and E-CNN (majority voting) performed better than all individual models, with accuracies of 91.28% and 90.63%, respectively, which is better than the DenseNet121 by only around 2%. These results demonstrate the effectiveness of the proposed modified pretrained models and their ensemble in this classification task.</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>According to the above experimental results, it is clear that the proposed E-CNNs and adapted TL predictive models outperform other state-of-the-art models and standard pretrained models in the colon histopathological image classification task. The experimental results indicate that adapting the pretrained models for medical image classification tasks improves classification tasks. The results in Tables <ns0:ref type='table' target='#tab_2'>4 and 19</ns0:ref> Manuscript to be reviewed Computer Science Table <ns0:ref type='table'>6</ns0:ref>. Evaluation results for the proposed E-CNN, its individuals (modified TL models) when number of epochs=30, and the standard TL models on colon histopathlogical images dataset based on the average Accuracy, Sensitivity, Specificity, and average standard deviation (STD) in 10 runs, best results in bold. Manuscript to be reviewed training CNN from scratch (as in previous works by <ns0:ref type='bibr' target='#b41'>(Postavaru et al., 2017)</ns0:ref> and <ns0:ref type='bibr' target='#b53'>(Stoean, 2020)</ns0:ref>). One reason for this finding is that training a CNN from scratch would necessitate a large number of training samples. Moreover, it must be confirmed that the large number of parameters of the CNN are trained effectively and with a high degree of generalization to obtain acceptable results. Thus, the limitation of the number of training samples causes overfitting in classification tasks. Furthermore, based on the results, it was found that the selection of appropriate hyperparameters in pretrained models plays a vital role in the proper learning and performance of these models.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>In this study, two ensemble learning (E-CNN (Majority voting), E-CNN (product rule)) models had been designed to further boost the colon histopathological image classification performance. In the proposed ensemble learning models, the adaptive pretrained models were used as base classifiers.</ns0:p><ns0:p>Through the experimental results, one can find that ensemble learning outperformed using individual classifiers. Furthermore, using product rules in the ensemble allows the probabilities of independent events to be fused, ultimately improving performance. This finding is in line with the results in Table <ns0:ref type='table'>4</ns0:ref>, where the proposed E-CNN (product) outperformed the proposed E-CNN (majority voting).</ns0:p><ns0:p>Furthermore, the T-test is used to compare the proposed E-CNN product to the previously related studies on the same dataset. This test is performed to prove that the improvement made by the proposed E-CNN (product) and the state-of-the-art is statistically significant. statistics are shown in Table <ns0:ref type='table' target='#tab_11'>5</ns0:ref>. As shown in Table <ns0:ref type='table' target='#tab_11'>5</ns0:ref>, the proposed E-CNN (product) outperforms most of the related works on the colon histopathological image dataset, where the majority of the p-values of< 0.0001. For example, comparing the proposed E-CCNN with CNN from scratch in <ns0:ref type='bibr' target='#b53'>(Stoean, 2020)</ns0:ref>, E-CNN is significantly better with a p-value <0.001. 
These findings show that using the E-CNN (product)</ns0:p><ns0:p>is effective for handling medical image classification tasks.</ns0:p><ns0:p>In summary, it has been demonstrated that the use of the proposed TL models assists in the colon histopathological image classification task, which can be used in the medical domain. Besides, using ensemble learning for the machine learning classification tasks can improve the classification results.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>Deep learning plays a key role in diagnosing colon cancer by grading colon histopathological images. In this study, we introduced a new set of transfer learning-based methods to help classify colon cancer from histopathological images, which can be used to discriminate between the different classes in this domain. To solve this classification task, the pre-trained CNN models DenseNet121, MobileNetV2, InceptionV3, and VGG16 were used as backbone models. We introduced the TL technique based on a block-wise fine-tuning process to transfer learned experience to colon histopathological images.</ns0:p><ns0:p>To accomplish this, we added new dense and drop-out layers to the pretrained models, followed by new FCC and softmax layers. Future research could introduce a new strategy to select the best hyperparameters for the adaptive pretrained models; we recommend wrapper methods for this task.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Colon histopathology images from the Stoean benchmark dataset (Stoean et al., 2016) with 40× magnification factor: (A) normal (Grade 0), (B) cancer grade 1 (G1), (C) cancer grade 2 (G2), and (D) cancer grade 3 (G3).</ns0:figDesc><ns0:graphic coords='3,141.73,324.48,413.58,334.32' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>The proposed modified Deep pretrained modelsAfter adapting the four pretrained models (DenseNet121, MobileNetV2, InceptionV3, and VGG16), they serve as base classifiers in the proposed E-CNN for the automated classification of colon H&E histopathological images. The standard previous pretrained models extract various features from the training images to discriminate between different types of cancer (multiple classes) in the colon images dataset. However, each pretrained model is based on a set of convolution layers and filter sizes to extract different features from the input images. As a result, no pretrained model can be more general in extracting all the distinguishing features from the input training images<ns0:ref type='bibr' target='#b20'>(Ghosh et al., 2021)</ns0:ref>.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Block diagram of the proposed Block-wise fine-tuning for each pretrained model from (DenseNet121, MobileNetV2, InceptionV3, and VGG16).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>)</ns0:head><ns0:label /><ns0:figDesc>Based on Algorithms 1, 2, and Figures4,5, the following points are taken into account: First, the CNN model is adapted to handle the heterogeneity in the colon histopathological images using the Block-wise fine-tuning technique for each of the pretrained models. It extracts additional abstract features from the image that aid in increasing intra-class discrimination. Second, ensemble learning is employed to improve the performance of the four adaptive pretrained models. As a result, the final decision regarding the test images will be more precise.Building and training the adaptive pretrained models [Block-wise fine-tuning for each pretrained model]. 1: input:Training data(T), N samples: T = [x 1 , x 2 ,. . ., x N ], with Category: y = [y 1 , y 2 ,. . ., y N ], pretrained CNN models( M), M=[ DenseNet121, MobileNetV2, InceptionV3, and VGG16 models]. 2: for each I in M do with number of neurons equal to 512 and activation function='ReLU' 5:Add Dense2 layer with number of neurons equal to 64 and activation function='ReLU' with number of neurons equal to 4( based on number classes in the colon dataset) -parameters values, as listed in</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>, I]=probabilities of each class for the test image j when using the individual I.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>V</ns0:head><ns0:label /><ns0:figDesc>[ j, I]=prediction for the test image j when using the individual I.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. The proposed E-CNNS with the four adaptive pretrained models (DenseNet121, MobileNetV2, InceptionV3, and VGG16).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>B) Kather (5000 images): The dataset<ns0:ref type='bibr' target='#b25'>(Kather et al., 2016)</ns0:ref> includes 5000 histology images of human colon cancer. The samples were gathered from the Institute of Pathology, University Medical Center, Mannheim, Germany. The benchmark dataset consists of histopathological H&E of eight classes: namely ADIPOSE, STROMA, TUMOR, DEBRIS, MUCOSA, COMPLEX, EMPTY, and LYMPHO. Each class consists of 625 images with a size of 150 × 150 pixels, 20× magnification, and RGB channel format.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. A comparison of modified TL models with standard TL models (original) in terms of average classification accuracy.</ns0:figDesc><ns0:graphic coords='15,141.73,386.85,413.54,273.09' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Confusion matrix of the E-CNN (product rule) on the Stoean testing dataset when number of epochs =10.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. The accuracy learning curves of training and testing derived from the four modified CNN base learners, when the number of epochs is ten on the colon histopathological image benchmark Stoean's dataset used in this study: (A) Modified DenseNet121, (B) Modified InceptionV3, (C) Modified MobileNetV2, and (D) Modified VGG16.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. The loss learning curves of training and testing derived from the four modified CNN base learners, when the number of epochs is ten on the colon histopathological image benchmark Stoean's dataset used in this study: (A) Modified DenseNet121, (B) Modified InceptionV3, (C) Modified MobileNetV2, and (D) Modified VGG16.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. The accuracy learning curves of training and testing derived from the four modified CNN base learners, when the number of epochs is 30 on the colon histopathological image benchmark Stoean's dataset used in this study: (A) Modified DenseNet121, (B) Modified InceptionV3, (C) Modified MobileNetV2, and (D) Modified VGG16.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. The loss learning curves of training and testing derived from the four modified CNN base learners, when the number of epochs is 30 on the colon histopathological image benchmark Stoean's dataset used in this study: (A) Modified DenseNet121, (B) Modified InceptionV3, (C) Modified MobileNetV2, and (D) Modified VGG16.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. Confusion matrix of the E-CNN (product rule) on the Stoean testing dataset when number of epochs =30.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>have proved that the transferred learning from a pre-trained deep CNN model using InceptionV3 on a colon dataset with fine-tuning provides efficient results. Their methodology was mainly</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Summary of the major classification studies on colon cancer</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Authors in</ns0:cell><ns0:cell>Dataset used</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell /><ns0:cell>Using pretrained either</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>architecture</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>feature extraction/ fine-tuning</ns0:cell></ns0:row></ns0:table><ns0:note><ns0:ref type='bibr' target='#b53'>(Stoean, 2020)</ns0:ref> colorectal in<ns0:ref type='bibr' target='#b52'>(Stoean et al., 2016)</ns0:ref> CNN model from scratch 92%</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Summary of deep architectures used in this work. InceptionV3, and VGG16 pretrained CNN classifier. While the latter phase focuses on combining the decisions of the base classifiers (in the first phase). Two types of fusion techniques have been employed in the proposed E-CNNS: majority voting and product rule. On the one hand, the majority voting is based on the prediction value of the base classifier. On the other hand, the product rule is based</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Architecture</ns0:cell><ns0:cell>No. of</ns0:cell><ns0:cell>No. of</ns0:cell><ns0:cell>No. of training</ns0:cell><ns0:cell>Minimum</ns0:cell><ns0:cell>Number of</ns0:cell><ns0:cell>Top 5 error</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Conv layers</ns0:cell><ns0:cell>FCC layers</ns0:cell><ns0:cell>parameters</ns0:cell><ns0:cell>image size</ns0:cell><ns0:cell>extracted features</ns0:cell><ns0:cell>on ImageNet</ns0:cell></ns0:row><ns0:row><ns0:cell>DenseNet121</ns0:cell><ns0:cell>120</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>7 million</ns0:cell><ns0:cell>221x221</ns0:cell><ns0:cell>1024</ns0:cell><ns0:cell>7.71%</ns0:cell></ns0:row><ns0:row><ns0:cell>InceptionV3</ns0:cell><ns0:cell>42</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>22 million</ns0:cell><ns0:cell>299x299</ns0:cell><ns0:cell>2048</ns0:cell><ns0:cell>3.08%</ns0:cell></ns0:row><ns0:row><ns0:cell>VGG16</ns0:cell><ns0:cell>13</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>134 million</ns0:cell><ns0:cell>227x227</ns0:cell><ns0:cell>4096</ns0:cell><ns0:cell>7.30%</ns0:cell></ns0:row><ns0:row><ns0:cell>MobileNet</ns0:cell><ns0:cell>53</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>3.4 million</ns0:cell><ns0:cell>224x224</ns0:cell><ns0:cell>1024</ns0:cell><ns0:cell>-%</ns0:cell></ns0:row><ns0:row><ns0:cell>MobileNetV2,</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>on the probabilities of the base classifiers (i.e., pretrained model), the details of the proposed E-CNNs are as follows:</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>10:</ns0:cell><ns0:cell>Build the final model (adaptI)</ns0:cell></ns0:row><ns0:row><ns0:cell>11:</ns0:cell><ns0:cell>Train the adapI on T</ns0:cell></ns0:row><ns0:row><ns0:cell>12:</ns0:cell><ns0:cell>Append adapI into adapM</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>13: end for</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>14: Output: Adaptive models (adaptM), adaptM=[ adapt DenseNet121, adapt MobileNetV2,</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>adapt InceptionV3, and adapt VGG16 models]</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Algorithm 2 Ensemble of adaptive models and evaluating the ensemble model on test colon histopathlog-</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>ical images.</ns0:cell></ns0:row></ns0:table><ns0:note>1: Input: Adaptive models (adaptM), adaptM=[ adapt DenseNet121, adapt MobileNetV2, adapt InceptionV3, and adapt VGG16 models], Test images set( D), with z samples: R = [x 1 , x 2 , x 3 ,. . ., x z ], with Category: y = [y 1 , y 2 ,. . ., y z ] 2: for j in D do 3: for each individual I in adaptM do 4:</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>is considered during the experiments' work. The dataset was divided into 80% training and 20% testing. In E-CNN, the Hyperparameters, as illustrated in Table3, were fine-tuned with the same setting for all the proposed transfer learning models. The training and Hyperparameters used in the proposed individual transfer learning models and an ensemble model.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Hyperparameters</ns0:cell><ns0:cell>Value</ns0:cell></ns0:row><ns0:row><ns0:cell>Image size</ns0:cell><ns0:cell>224 × 224</ns0:cell></ns0:row><ns0:row><ns0:cell>Optimizer</ns0:cell><ns0:cell>0.005</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Maximum Habitat probability SGD with momentum</ns0:cell></ns0:row><ns0:row><ns0:cell>Learning rate</ns0:cell><ns0:cell>1e-6</ns0:cell></ns0:row><ns0:row><ns0:cell>Batch size</ns0:cell><ns0:cell>16</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of epochs</ns0:cell><ns0:cell>10</ns0:cell></ns0:row><ns0:row><ns0:cell>Dropout</ns0:cell><ns0:cell>0.3</ns0:cell></ns0:row><ns0:row><ns0:cell>Loss function</ns0:cell><ns0:cell>Cross Entropy</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>testing images were resized to 224 × 224 for comfort with the proposed transform learning models. The</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>batch size was chosen as 16; the minimum learning rate was specified as min lr=0.000001. The learning</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>rate was determined to be small enough to slow down learning in the modelsPopa (2021);</ns0:cell></ns0:row></ns0:table><ns0:note>12/26 PeerJ Comput. Sci. reviewing PDF | (CS-2022:02:70538:2:2:NEW 15 Jun 2022) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Summary of the major classification studies on colon cancer. Stoean and Kather datasets, to test the robustness of the proposed methods. The former dataset is the Stoean dataset, which includes four different classes, mainly: benign, grade1, grade2, and grade3. While the second dataset (the Kather dataset) includes eight diverse classes, Each dataset was divided into 80% for the training set and 20% for the testing set. The results of classification performance in this study are for the test dataset. The The classification tasks were accomplished using individual classifiers of the modified transfer learning set (Modified DenseNet121,</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Pretrained Models</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Sensitivity</ns0:cell><ns0:cell cols='2'>Specificity</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Standard DenseNet121</ns0:cell><ns0:cell>90.41±3.1</ns0:cell><ns0:cell>91.25±2.9</ns0:cell><ns0:cell>100±0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Standard MobileNetV2</ns0:cell><ns0:cell>90.27±2.9</ns0:cell><ns0:cell>88.25±1.9</ns0:cell><ns0:cell cols='2'>99.23±2.3</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Standard InceptionV3</ns0:cell><ns0:cell>87.12±2.0</ns0:cell><ns0:cell>92.75±2.0</ns0:cell><ns0:cell>100±0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Standard VGG16</ns0:cell><ns0:cell>62.19±7.0</ns0:cell><ns0:cell>63.21±7.3</ns0:cell><ns0:cell>100±9.9</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Modified DenseNet121</ns0:cell><ns0:cell>92.32±2.8</ns0:cell><ns0:cell>92.99±2.8</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Modified MobileNetV2</ns0:cell><ns0:cell>92.19±3.8</ns0:cell><ns0:cell>90.75±2.0</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Modified InceptionV3</ns0:cell><ns0:cell>89.86±2.2</ns0:cell><ns0:cell>95.0±1.5</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Modified VGG16</ns0:cell><ns0:cell>72.73±3.9</ns0:cell><ns0:cell>73.0±3.6</ns0:cell><ns0:cell cols='2'>87.43±12.4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Proposed E-CNN (product)</ns0:cell><ns0:cell>95.20±1.64</ns0:cell><ns0:cell>95.62±1.50</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Proposed E-CNN (Majority voting)</ns0:cell><ns0:cell>94.52±1.73</ns0:cell><ns0:cell>95.0±1.58</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Authors in</ns0:cell><ns0:cell>Dataset used</ns0:cell><ns0:cell cols='2'>CNN architecture</ns0:cell><ns0:cell /><ns0:cell>Accuracy</ns0:cell><ns0:cell>T-test/p-value</ns0:cell></ns0:row><ns0:row><ns0:cell>(Stoean, 2020)</ns0:cell><ns0:cell>colorectal in (Stoean et al., 2016)</ns0:cell><ns0:cell cols='2'>CNN model from scratch</ns0:cell><ns0:cell /><ns0:cell>92%</ns0:cell><ns0:cell>P<0.0001</ns0:cell></ns0:row><ns0:row><ns0:cell>(Popa, 2021)</ns0:cell><ns0:cell>colorectal in (Stoean et al., 2016)</ns0:cell><ns0:cell>AlexNet</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>89.53%</ns0:cell><ns0:cell>P<0.0001</ns0:cell></ns0:row><ns0:row><ns0:cell>(Popa, 2021)</ns0:cell><ns0:cell>colorectal in (Stoean et al., 2016)</ns0:cell><ns0:cell>GoogleNet</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>85.62%</ns0:cell><ns0:cell>P<0.0001</ns0:cell></ns0:row><ns0:row><ns0:cell>(Postavaru et al., 2017)</ns0:cell><ns0:cell>colorectal in (Stoean et al., 
2016)</ns0:cell><ns0:cell cols='2'>CNN model from scratch</ns0:cell><ns0:cell /><ns0:cell>91%</ns0:cell><ns0:cell>P<0.0001</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed E-CNN</ns0:cell><ns0:cell>colorectal in (Stoean et al., 2016)</ns0:cell><ns0:cell cols='3'>Modified TL models with ensemble</ns0:cell><ns0:cell>95.20%</ns0:cell></ns0:row><ns0:row><ns0:cell>(product rule)</ns0:cell><ns0:cell /><ns0:cell cols='2'>learning ( using product rule)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Proposed E-CNN</ns0:cell><ns0:cell>colorectal in (Stoean et al., 2016)</ns0:cell><ns0:cell cols='3'>Modified TL models with ensemble</ns0:cell><ns0:cell>94.52%</ns0:cell></ns0:row><ns0:row><ns0:cell>(Majority voting)</ns0:cell><ns0:cell /><ns0:cell cols='3'>learning ( using majority voting)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>colon histopathological image datasets:</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Evaluation results for the proposed E-CNN, its individuals (modified TL models) when number of epochs=30, and the standard TL models on Kather's colon histopathlogical images dataset<ns0:ref type='bibr' target='#b25'>(Kather et al., 2016)</ns0:ref> based on the average Accuracy, Sensitivity, Specificity, and average standard deviation (STD) in 10 runs, best results in bold.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Pretrained Models</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Sensitivity</ns0:cell><ns0:cell>Specificity</ns0:cell></ns0:row><ns0:row><ns0:cell>Modified DenseNet121</ns0:cell><ns0:cell>96.8.±2.7</ns0:cell><ns0:cell>97.0±2.4</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Modified MobileNetV2</ns0:cell><ns0:cell>94.48±2.6</ns0:cell><ns0:cell>95.5±1.9</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Modified InceptionV3</ns0:cell><ns0:cell>94.52±1.7</ns0:cell><ns0:cell>95.1±1.2</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Modified VGG16</ns0:cell><ns0:cell>79.4±1.9</ns0:cell><ns0:cell>79.9±3.6</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed E-CNN (product)</ns0:cell><ns0:cell>97.2±1.27</ns0:cell><ns0:cell>97.5±1.8</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed E-CNN (Majority voting)</ns0:cell><ns0:cell>95.89±1.3</ns0:cell><ns0:cell>96.2±1.57</ns0:cell><ns0:cell>100±0.0</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2022:02:70538:2:2:NEW 15 Jun 2022)</ns0:note></ns0:figure>
</ns0:body>
" | "Response to reviewer’s comments
Ensemble of adapted CNN methods for classifying colon histopathological
images
Dear Editor,
We are glad that you have offered us the opportunity to revise our work for the PeerJ Computer Science
journal. We would also like to express our gratitude to you, the editorial team, and
reviewers who have provided valuable comments on this paper to improve its quality. We
really appreciate your selection of such highly qualified reviewers for this research. Their
efforts and visions have helped us greatly to fulfill this revision. We have addressed all of
the comments in this revised version of the paper and we list all the changes item-by-item
as outlined below.
COMMENT Editor-C1:
It is appropriate to accept this article after the plots in the Figures are labelled.
RESPONSE #1: Done, thank you for this excellent observation.
We have improved the figures by adding the labels to them. Please refer to the figures: 6,
8,9,10, and 11.
Thanks in advance
Sincerely yours,
Corresponding author: Dheeb Albashish
" | Here is a paper. Please give your review comments after reading it. |
710 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Abdominal aortic aneurysm (AAA) is one of the most common diseases worldwide. 3D segmentation of AAA provides useful information for surgical decisions and follow-up treatment. However, existing segmentation methods are time consuming and not practical for routine use. In this paper, the segmentation task is addressed automatically using a deep learning based approach, which has been proved to successfully solve several medical imaging problems with excellent performance. This paper therefore proposes a new solution for AAA segmentation using deep learning with 3D convolutional neural network (CNN) architectures that also incorporate coordinate information. The tested CNNs are UNet, AG-DSV-UNet, VNet, ResNetMed and DenseVoxNet. The 3D-CNNs are trained with a dataset of high resolution (256 x 256) non-contrast and post-contrast CT images containing 64 slices from each of 200 patients. The dataset consists of contiguous CT slices without augmentation and no post-processing step. The experiments show that incorporation of coordinate information improves the segmentation results. The best accuracies on non-contrast and contrast-enhanced images have average dice scores of 97.13% and 96.74%, respectively. Transfer learning from a pre-trained network of a preoperative dataset to post-operative endovascular aneurysm repair (EVAR) was also performed. The segmentation accuracy of post-operative EVAR using transfer learning on non-contrast and contrast-enhanced CT datasets achieved the best dice scores of 94.90% and 95.66%, respectively.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Abdominal aortic aneurysm (AAA) is a common disease of the aorta, characterized by abnormal dilatation. In Western countries, the disease is common in males older than 65 with a prevalence of about 4-7% <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. The risk of rupture and the associated risk of mortality increase with dilatation size. In the United States, more than 10,000 people die from rupture each year <ns0:ref type='bibr' target='#b1'>[2,</ns0:ref><ns0:ref type='bibr' target='#b2'>3]</ns0:ref>. In addition, the 3D geometry of AAA could provide useful information for predicting the rupture risk, and could also be used for pre-operative evaluation of an endovascular stenting approach. Therefore, it is necessary to obtain the 3D segmentation of the outer wall of AAA in order to generate its 3D geometry. The outer wall of AAA is the outer surface surrounding the aneurysm. Segmenting the outer wall of AAA can be considered a difficult task, since its pixel intensity values are very similar to those of surrounding organs in CT images.</ns0:p><ns0:p>Several previous approaches based on prior medical knowledge have been proposed for AAA segmentation, such as intensity-based and contour-based methods <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref><ns0:ref type='bibr' target='#b4'>[5]</ns0:ref><ns0:ref type='bibr' target='#b5'>[6]</ns0:ref><ns0:ref type='bibr' target='#b6'>[7]</ns0:ref><ns0:ref type='bibr' target='#b7'>[8]</ns0:ref><ns0:ref type='bibr' target='#b8'>[9]</ns0:ref><ns0:ref type='bibr' target='#b9'>[10]</ns0:ref>. Convolutional neural networks (CNN) have recently been used for analyzing medical images in CT, MRI and ultrasound <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref><ns0:ref type='bibr' target='#b11'>[12]</ns0:ref><ns0:ref type='bibr' target='#b12'>[13]</ns0:ref><ns0:ref type='bibr' target='#b13'>[14]</ns0:ref><ns0:ref type='bibr' target='#b14'>[15]</ns0:ref>. However, there are as yet only a few studies applying CNNs to AAA segmentation on computed tomographic angiography (CTA) <ns0:ref type='bibr' target='#b15'>[16]</ns0:ref><ns0:ref type='bibr' target='#b16'>[17]</ns0:ref><ns0:ref type='bibr' target='#b17'>[18]</ns0:ref><ns0:ref type='bibr' target='#b18'>[19]</ns0:ref><ns0:ref type='bibr' target='#b19'>[20]</ns0:ref>. The previous studies addressed only segmentation for post-operative endovascular aneurysm repair (EVAR) of AAA <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref> with limited amounts of training data <ns0:ref type='bibr' target='#b21'>[21,</ns0:ref><ns0:ref type='bibr' target='#b22'>22]</ns0:ref>.</ns0:p><ns0:p>The method proposed in this paper incorporates coordinate or location information as well as spatial information into a 3D CNN-based approach to AAA segmentation on non-contrast and contrast-enhanced datasets. Recent work on the use of CNNs for medical image segmentation has explored various network architectures to improve performance <ns0:ref type='bibr' target='#b13'>[14,</ns0:ref><ns0:ref type='bibr' target='#b24'>23]</ns0:ref>. With this line of work seeming to have reached a plateau, a promising approach to achieve further improvement is to incorporate additional data, such as coordinate information <ns0:ref type='bibr' target='#b25'>[24,</ns0:ref><ns0:ref type='bibr' target='#b26'>25]</ns0:ref>. Segmentation of AAA is a good candidate for this approach since the AAA is a tubular structure almost always oriented from head to toe. 
To examine the generality of using coordinate information, we use several kinds of CNN networks in the experiments, including the standard UNet <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref>, AG-DSV-UNet <ns0:ref type='bibr' target='#b27'>[26]</ns0:ref>, VNet <ns0:ref type='bibr' target='#b28'>[27]</ns0:ref>, ResNetMed <ns0:ref type='bibr' target='#b29'>[28]</ns0:ref> and DenseVoxNet <ns0:ref type='bibr' target='#b30'>[29]</ns0:ref>. The raw datasets are non-contrast and post-contrast enhanced CT datasets, which differ in the contrast resolution of the AAA (low and high, respectively). We also directly compare the accuracy of segmentation with ground-truth and with an advanced graph-cut based segmentation technique. Furthermore, we experiment with transfer learning of segmentation from a pre-operative model to post-operative EVAR. EVAR is a procedure of stent-graft implantation inside AAA under endovascular intervention in order to decrease the size of the aortic lumen. Measurement of AAA volume after EVAR is one of the considerations of imaging surveillance for patient follow up <ns0:ref type='bibr' target='#b32'>[30]</ns0:ref>. If AAA size continues to increase after EVAR, it indicates a complication called endoleak. Directly training a model to segment EVAR images would require a dataset for that problem, which would be highly labor intensive to produce. Thus it is useful to explore whether transfer learning can be used to reduce the amount of effort required. The problem is challenging since the appearance of AAA after EVAR is changed, with the inner lumen replaced by a metal stent-graft (much smaller in size) and the presence of a further metal artifact in the image (Fig <ns0:ref type='figure' target='#fig_5'>1</ns0:ref>).</ns0:p><ns0:p>One of the key contributions of this paper is to evaluate the performance gains from incorporation of coordinate/location based information into the CNN-based approach to AAA segmentation in datasets of different contrast resolution, namely non-contrast and contrast-enhanced CT images. The proposed method achieves outstanding performance when compared with existing methods in the literature. The second main contribution is to adopt a transfer learning-based approach using a pre-trained model of pre-operative AAA for post-operative EVAR, with only a small amount of data for the re-training. This approach has benefits in clinical applications for both pre-operative and post-operative AAA segmentation.</ns0:p></ns0:div>
<ns0:div><ns0:head>Related work</ns0:head><ns0:p>Coordinate or positional encoding recently became a hot topic in computer science; it was first implemented in the language processing domain <ns0:ref type='bibr' target='#b33'>[31,</ns0:ref><ns0:ref type='bibr' target='#b34'>32]</ns0:ref>. In language processing, positional encoding is used to represent the position of each word in a sequence, i.e., the order of the words in a sentence. The positional encoding was implemented by a linear function <ns0:ref type='bibr' target='#b34'>[32]</ns0:ref> and a sinusoidal function <ns0:ref type='bibr' target='#b33'>[31]</ns0:ref>. In the image processing domain, coordinate information encoding was also proposed in recent literature <ns0:ref type='bibr' target='#b25'>[24,</ns0:ref><ns0:ref type='bibr' target='#b26'>25]</ns0:ref> by adding extra channels to the input images. Liu et al. <ns0:ref type='bibr' target='#b25'>[24]</ns0:ref> proposed incorporating coordinate information in two extra channels for the x, y axes of 2D images using a continuous sequence of integers starting with 0 in each row and column. They demonstrated an improvement of CNNs in image classification, object detection and generative models. Ren et al. <ns0:ref type='bibr' target='#b26'>[25]</ns0:ref> proposed a similar coordinate information embedding in the extra channels of images as the input of a downstream CNN. They demonstrated an improvement of object detection in traffic sign images. (A minimal illustrative sketch of this coordinate-channel idea, extended to 3D volumes, is given at the end of this section.)</ns0:p><ns0:p>Ronneberger et al. <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref> proposed a fully convolutional network and training strategy for biomedical images. They modified and extended a previous architecture <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref> such that it could work with very few training images and achieved better segmentation results. The upsampling module of this network architecture consists of many channels to extract features which are propagated to the upper layers. The architecture is U-shaped because the expansive path is symmetric to the contracting path, and is thus called U-Net. U-Net applies elastic deformations for data augmentation on training images. This allows the network to compensate for the reduced amount of data, so that it can be applied to tasks with relatively little available training data. Jackson et al. <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref> proposed a CNN for segmentation of kidneys in non-contrast CT images using a 3D U-Net architecture. Non-contrast CT images contain less contrast difference, and organs are thus more difficult to distinguish from adjacent structures compared with post-contrast CT images. They reported mean dice scores of 91% and 86% for right and left kidney segmentation, respectively.</ns0:p><ns0:p>Lopez et al. <ns0:ref type='bibr' target='#b36'>[33]</ns0:ref> proposed a fully automatic approach to AAA segmentation in postoperative CTA images, based on deep convolutional neural networks (DCNN). The proposed method has two steps. In the first step, the DCNN <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref> provides 2D region-of-interest candidates for the thrombus. The second step is fine thrombus segmentation with a holistically-nested edge detection (HED) network <ns0:ref type='bibr' target='#b37'>[34]</ns0:ref>. The DCNN is used to detect the thrombus region in the region of interest, followed by HED for subsequent fine thrombus segmentation. The HED reduces the need for large deconvolution filters and increases the output resolution. 
The networks are trained, validated, and tested on 13 post-operative AAA images, archiving 82% dice score. Lu et al. <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref> recently proposed CNN segmentation of pre-operative AAA. The proposed method modified the 3D U-Net combined with ellipse filling for detection and segmentation of AAA. The model was trained on 321 CT examinations (non-contrast 168, post contrast 153) with testing on 57 examinations (non-contrast 28, post-contrast 29). The AAA was present in 77% of the training dataset, yielding samples for non-contrast and post-contrast of about 129 and 117 examinations, respectively. The test was evaluated in terms of the maximum diameter of the aorta with an average dice score of 91.0%. Dziubish et al. <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref> proposed CNN-based AAA segmentation with an ensemble of 2D U-Net, ResNet and VBNet. The ensemble predictions from these frameworks were reported to have a dice score of 94.0%. In addition, Siriapisith et al. <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref> proposed an advanced graph-cut segmentation method on post-contrast CT images of AAA. The on integration of intensity-based and contour-based properties was deployed in the graph-cut with probability density function (GCPDF) and graph-cut based active contour (GCBAC). The performance, reported based on 20 CT examinations, yielded an average dice score of 91.88%.</ns0:p><ns0:p>In this study, we explore the incorporation of pixel-level coordinate/location information into a variety of network architectures, including UNet, VNet, ResNet and DenseNet. The AG-DSV-UNet <ns0:ref type='bibr' target='#b27'>[26]</ns0:ref> is the recent advanced integration of attention gate (AG) and deep supervision (DSV) modes into the standard UNet. The AG module generates an attention-awareness mechanism <ns0:ref type='bibr' target='#b38'>[35]</ns0:ref> in the images that improves the performance in difficult structures. The DSV module solves the problem of vanishing gradients in the deeper layers of the CNN <ns0:ref type='bibr' target='#b27'>[26]</ns0:ref>. The VNet <ns0:ref type='bibr' target='#b28'>[27]</ns0:ref> is an extension of UNet by replacing the max-pooling and upsampling with convolutions. The VNet increases the number of hyperparameters of a trained CNN as compared with standard UNet. ResNetMed <ns0:ref type='bibr' target='#b29'>[28]</ns0:ref> is a 3D modification and a representative of 2D ResNet to allow the network to train with 3D medical data <ns0:ref type='bibr' target='#b39'>[36]</ns0:ref>. ResNet improves the performance of deep convolutional neural networks by increasing the number of network layers and slowly degrading the features at deeper layers <ns0:ref type='bibr' target='#b39'>[36]</ns0:ref>. ResNet is a popular network that has been proven effective with image medical data <ns0:ref type='bibr' target='#b40'>[37]</ns0:ref><ns0:ref type='bibr' target='#b41'>[38]</ns0:ref><ns0:ref type='bibr' target='#b42'>[39]</ns0:ref>. It is widely used in classification and detection in various medical images such as thoracic, ophthalmology and abdominal studies <ns0:ref type='bibr' target='#b42'>[39]</ns0:ref>. DenseVoxNet <ns0:ref type='bibr' target='#b30'>[29]</ns0:ref> is 3D modification and a representative of DenseNet, which preserves the concept of dense connectivity. The DenseVoxNet has direct connections from a layer to all its subsequent layers and makes the network much easier to train.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Network architecture</ns0:head><ns0:p>To examine the generality of the value of incorporating coordinate information in the segmentation process, we experiment with five CNN network architectures: standard UNet, AG-DSV-UNet, VNet, ResNetMed and DenseVoxNet. We describe each network architecture in turn. The 3D UNet <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref> network architecture consists of contracting and expansive paths. The contracting path follows the classical convolutional network architecture. It consists of the repeated operation of 3x3x3 kernel size of two convolution layers, each of them followed by a rectified linear unit (ReLU) and then 2x2x2 kernel size of a max pooling operation. The detailed architecture of UNET was previous described in Ronneberger et al. <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref>. In the downsampling step, the number of filtering kernels is double of the one in the previous layer. Whereas, in the corresponding upsampling step, the number of filtering kernels is halved back to the number before the downsampling step. The features from the contraction path are concatenated with the features from the corresponding upsampling step, in all pairs of downsampling-upsampling steps. Specifically, in the final layer, each 64-dimension feature vector must be projected to have the same dimension as the number of classes, using a convolution layer with 1x1x1 kernel size In total the network has 23 convolutional layers. Two classes (aorta and background) were applied at the output layers with threshold 0.5 to generate the binary classification of the aorta.</ns0:p><ns0:p>The 3D AG-DSV-UNet <ns0:ref type='bibr' target='#b27'>[26,</ns0:ref><ns0:ref type='bibr' target='#b43'>40]</ns0:ref> is based on the standard UNet with the addition of AG and DSV modules. The AG module is added in the connection between pair of corresponding encoding and decoding modules. The features from the encoding path is combined with the input features, in which both of them are processed through a convolutional layer with 1x1x1x kernel and a batch normalization, in order to compute the attention map. DSV <ns0:ref type='bibr' target='#b27'>[26,</ns0:ref><ns0:ref type='bibr' target='#b43'>40,</ns0:ref><ns0:ref type='bibr' target='#b44'>41]</ns0:ref> is the module added at the final step of the network by combining multiple segmentation maps from different resolution levels with element-wise sum. The detail of architecture was previously described in Tureckova et al. <ns0:ref type='bibr' target='#b27'>[26]</ns0:ref>. It was based on the combination of two maps before upsampling to the next level-up. The second segmentation map was constructed by applying a 1x1x1 convolution on each level of decoding paths. The 3D VNet <ns0:ref type='bibr' target='#b28'>[27]</ns0:ref> consists of encoder and decoder paths. The encoder path of the VNet is divided into many levels that operate at different resolutions. Each level comprises one to three convolutional layers with a 5x5x5 kernel. The data resolution is reduced with the convolution. The second operation extracts features by non-overlapping 2x2x2 volume patch, which reduces the size of the output feature map by half. The max-pooling operation in UNet is replaced with convolution in VNet. The decoder path of VNet expands the lower resolution maps in order to assemble the output volumetric segmentation. 
Each level of decoder path consists of deconvolution operation to increase the size of the inputs followed by one to three convolutional layers with 5x5x5 kernel.</ns0:p><ns0:p>The ResNetMed <ns0:ref type='bibr' target='#b29'>[28]</ns0:ref> is composed of 34 plain convolution layers of encoders and decoders. The first two sets of encoder layers are composed of 1) a convolution layer with a 3x3x3 kernel size and 256 channels, and 2) a convolutional layer with a 3x3x3 kernel and 128 channels. Each convolution layer has a 3x3x3 kernel with the same output of the feature map. The remaining two groups of decoder layers are similar to the first two groups except doubling the number of channels per layer progressively. The final convolution layer is a Conv with a 1x1x1 kernel size to generate the final output.</ns0:p><ns0:p>The DenseVoxNet <ns0:ref type='bibr' target='#b30'>[29]</ns0:ref> has down-sampling and up-sampling components. The downsampling component is divided into two densely-connected blocks called DenseBlocks and each DenseBlock is composed of 12 transformation layers with dense connections. Each transformation layer is sequentially composed of batch normalization (BN), ReLU and 3x3x3 Conv. The up-sampling component is composed of BN, ReLU, 1x1x1 Conv and two 2x2x2 deconvolution (Deconv) layers. The final layer is 1x1x1 Conv to generate the final label map.</ns0:p></ns0:div>
<ns0:div><ns0:head>Coordinate information</ns0:head><ns0:p>The coordinate matrices (CoMat) were constructed using three matrices , ∈ 1 to embed the x, y, and x coordinate information of the input</ns0:p><ns0:formula xml:id='formula_0'>∈ 1 , ∈ 1 dataset</ns0:formula><ns0:p>, where H is the height, W is the width and D is depth of the input data. The ∈ initial value of CoMat is a sequence integer indices in each axis plane of the image in the ranges , as shown in Figures <ns0:ref type='figure' target='#fig_6'>2 and 3</ns0:ref>. No additional object , , ∈ [0, ], , , ∈ [0, ], , , ∈ [0, ] prediction is required to generate this coordinate information. The experiments create three types of coordinate information, CoMat1, CoMat2 and CoMat3. The CoMat1 is composed of x, y, z coordinates separately in three channel matrices. Because the aorta is tubular in shape along the z-axis, CoMat2 contains only the z coordinate information in one channel. The CoMat3 is a summation of x, y, z coordinate information into one channel using equation 1. The CoMat information is embedded into the original image data as the additional input channels of downstream CNN training. The original image data for medical images is one channel, so the input data for CNN will be two or four. It can be simply applied and does not change the core training network. </ns0:p></ns0:div>
<ns0:div><ns0:head>Dataset and experiment</ns0:head><ns0:p>The experiments in this paper were conducted under the approval of the institutional review board of Siriraj Hospital, Mahidol University (certificate of approval number: Si 818/2019). The raw datasets of this experiment were collected from 220 patients with AAA on whom performing both before and after contrast administration of CTA acquisition. The exclusion criteria included any surgery and other diseases of abdominal aorta such as bleeding, infection, and dissection. All CTA study was acquired with the 256-slice multi-detector row CT scanner (Revolution CT; GE Medical Systems, Milwaukee, Wisconsin, United States) using a nonionic monomer iodinated compound. The initial source of CT images was in axial slices with 1.25 mm slice thickness covering the entire abdominal aorta. The 64 DICOM images of each CTA were selected at the region of aortic aneurysm and the data was incorporated into a single volume file. To preserve the original pixel intensity, the dataset was kept in 12 bits grayscale. The volume metric of each dataset was 512x512x64 pixels. No additional feature map or augmentation was performed in this study.</ns0:p><ns0:p>The pytorch (v1.8.0) deep learning library in Python (v3.6.9) was used in the implementation. The CNN input volume was a matrix with 256x256x64 voxel dimensions. The pre-processing step was only voxel rescaling from 512x512 pixels to 256x256 pixels in the x-y plane. Each voxel contained the raw 12 bits grayscale as input data. The number of CT slices was limited to 64 due to the limitation of GPU memory. The coordinate information array of the volume dataset in xyz-direction was concatenated to the dataset and then fed into the training process (Figure <ns0:ref type='figure'>2</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Transfer learning</ns0:head><ns0:p>The segmentation of post-operative EVAR was done with a network trained using transfer learning. The pre-training 3D CNN models (A1, A2) were trained on large pre-operative AAA datasets. The transfer learning fine-tuned the shallow layers (contracting path) of the 3D CNN <ns0:ref type='bibr' target='#b45'>[42]</ns0:ref> which represents low-level features. The training dataset was small, containing only 20 cases of NCCT and CECT datasets. These training datasets were not the same cases as the pre-trained AAA models. All of the data was used for training without validation (Fig <ns0:ref type='figure'>4</ns0:ref>). The hyper-parameters were manually set to be the same values as used in the pre-trained model. The models are named E1 and E2, corresponding to the NCCT and CECT datasets, respectively. To evaluate the performance of the transfer learning, 20-cases with post-operative EVAR were also used for test datasets.</ns0:p></ns0:div>
<ns0:div><ns0:head>Performance evaluation</ns0:head><ns0:p>To validate the performance of our proposed CNN segmentation method, it is compared with the performance of the advanced graph-cut-based method that combines GCPDF with GCBAC <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>. The experiment was performed on only contrast-enhanced AAA without stent graft implantation in 20 datasets. The ground-truth of AAA segmentation in all axial slices was prepared by a cardiovascular radiologist with 18 years of experience using the 3D slicer software version 4.10.0 <ns0:ref type='bibr' target='#b46'>[43]</ns0:ref>. The quantitative assessment was evaluated by pixel wise comparison with the ground-truth using the dice similarity coefficient (DSC), Jaccard similarity coefficient (JSC) and Hausdroff distance (HD) as shown in equations ( <ns0:ref type='formula'>2</ns0:ref>)-( <ns0:ref type='formula' target='#formula_1'>4</ns0:ref>), where the insight toolkit library of 3D slicer software <ns0:ref type='bibr' target='#b46'>[43]</ns0:ref> was used to calculate an average HD value. The significant differences in the comparison coefficient among the several groups of experiments (comparison between CNN alone and CNN+CoMat) were assessed using a paired Student's t-test. A statistically significant difference could be identified when P values <0.05.The consistency of the ground-truth was accessed by pixel wise inter-rater correlation of DSC using two cardiovascular radiologists with 18 and 14 years of experience. The process of inter-rater correlation was performed by randomly selecting 40 CT images from the NCCT dataset (2 slices for each case). Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>% Dice similarity coefficient</ns0:head><ns0:formula xml:id='formula_1'>= 2| ∩ | (| | + | |) 100 (2) % Jaccard similarity coefficient = | ∩ | | ∪ | 100 (3) Hausdorff distance = ∈ { ∈ {|| , ||}}<ns0:label>(4)</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where R A is the region of the segmentation result, and R R is the region of ground truth by manual segmentation.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>Table <ns0:ref type='table'>1</ns0:ref> demonstrates the patient demographics. The training dataset contains case with an average age of 73.69 years, maximum aortic diameter of 57.9 mm, and average volume of 55.7 ml. The testing data has a similar distribution, with average age 72.52 years, maximum aortic diameter 62.5 mm, and average volume 161.60 ml.</ns0:p><ns0:p>Most recent studies have used manual drawing to create the ground-truth for evaluating the AAA segmentation result. However, in this study, the ground-truth was created by manual drawing of the CT images slice by slice by one experienced cardiovascular radiologist. The quality of the ground-truth segmentations was validated by inter-observer correlation which was found to be 97.68±0.82%. This could indicate excellent inter-observer agreement.</ns0:p><ns0:p>The experiments on the proposed 3D CNN-based methods of AAA segmentation demonstrate excellent results in most networks (i.e. UNet, AG-DSV UNet, VNet and ResNetMed). The training accuracy of all networks is demonstrated in graph (Fig5). Performances on the CECT dataset tend to be better than on the NCCT datasets. The best accuracy on NCCT and CECT datasets are AG-DSV-UNet+CoMat3 and standard UNet+CoMat1 with DSC values of 96.56±2. <ns0:ref type='bibr' target='#b17'>18</ns0:ref> For the transfer learning approach to the post-operative EVAR, the coordinate information also improves the segmentation results on both NCCT and CECT datasets. The CECT dataset also tends to get better accuracy than the NCCT dataset. The best accuracy on NCCT and CECT datasets is achieved by UNet+CoMat2 and UNet+CoMat3 with DSC values of 94.90±4.23 and 95.66±2.70, respectively. The CoMat2 is the best coordinate information to improve the segmentation accuracy in all training networks of the NCCT dataset. In the NCCT dataset, the coordinate information can provide an improvement in all re-training networks except UNet+CoMat1. In the CECT dataset, coordinate information can make an improvement on only UNet, AG-DSV-UNet and DenseVoxNet. The DenseVoxNet shows also the best performance gain for all types of added coordinate information on both NCCT and CECT datasets (p=0.00).</ns0:p></ns0:div>
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>CNNs have been applied to a number of medical image segmentation problems. Jackson et al. <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref> implemented a CNN for kidney segmentation in NCCT images. The NCCT is more difficult than CECT datasets because it has less contrast information in raw images. However, they reported good segmentation results with dice scores of 91% and 86% for right and left kidneys, respectively. The development of CNNs has tended to explore networks of increasing complexity, which in turn requires more data to train. Incorporation of coordinate information is an alternative approach that improves performance by providing more information for training. Recently, the incorporation of coordinate information has been proposed to improve object detection in 2D images <ns0:ref type='bibr' target='#b25'>[24,</ns0:ref><ns0:ref type='bibr' target='#b26'>25]</ns0:ref>. Incorporation of coordinate information has the advantage of being able to work with existing CNN models without modification to their architecture.</ns0:p><ns0:p>The coordinate information can provide more information to CNN training by adding more channels into the dataset. The coordinate or positional encoding is an interesting topic in the language processing domain that deals with order of words in the sentence. The positional encoding can be implemented by a fixed position (linear function) <ns0:ref type='bibr' target='#b34'>[32]</ns0:ref> and relative position (sinusoidal function) <ns0:ref type='bibr' target='#b33'>[31]</ns0:ref>. The sinusoidal function is used to deal with the problems of variable length sequences that map positions into a vector. The variable length of input data always occurs in language processing but not in image processing with fixed image size. However, the performance of these two approaches has nearly similar results <ns0:ref type='bibr' target='#b33'>[31]</ns0:ref>. In the recent implementations of image processing, researchers still used a linear function to generate a simple index sequence of coordinate information <ns0:ref type='bibr' target='#b25'>[24,</ns0:ref><ns0:ref type='bibr' target='#b26'>25]</ns0:ref>. The coordinate information tends to be of more benefit on the NCCT dataset than the CECT dataset because the NCCT dataset has less contrast resolution. This additional coordinate information provides useful information for the segmentation on the NCCT dataset. The segmentation of AAA is a good problem to test the concept of incorporation of coordination information because the structure is tubular in shape and orientation is almost always along the z-axis. Because of the tubular shape of AAA and fixed size of the volume dataset, the best option of embedded coordinate information is as fixed coordinate information in a linear index sequence. Our proposed method achieves excellent results on NCCT and CECT datasets of AAA segmentation with dice scores of 96.75% and 96.69%, respectively. (Table <ns0:ref type='table'>2</ns0:ref>, <ns0:ref type='bibr'>Fig 6)</ns0:ref>. The result on non-contrast AAA is slightly better than post-contrast but not statistically significant (p>0.05). The best segmentation result on the NCCT dataset is AG-DSV-UNet+CoMat3. By integration of coordinate information (CoMat3) into the network, the accuracy almost significantly improved from AG-DSV-UNet alone (p =0.06). On the other hand, on the CECT dataset, the best segmentation result is standard UNet with a dice score of 96.62%. 
The coordinate information (CoMat1) provides a little improvement on the result with a dice score of 96.69% (p=0.38). The post-contrast AAA has an enhanced aortic lumen used to define the candidate region of AAA. However, in the case of no enhancement of aortic lumen in the non-contrast images of AAA, less information is available for training of the network.</ns0:p><ns0:p>In our experiment, we demonstrate that incorporating coordinate information can generally improve the segmentation performance in a variety of network architectures such as UNet, VNet ResNet and DenseNet models. In the NCCT dataset, the segmentation performance is improved by incorporating CoMat1 and CoMat3, which have more coordinate information from all xyz-directions, whereas CoMat2 has only z-direction coordinate information. Most of the networks are best with CoMat3 but they are not significantly different from CoMat1. In the CECT dataset, most of the networks are also best with CoMat3 except UNet and DenseVoxNet. The DenseVoxNet alone provides poor performance because with only 2M parameters, it has less complexity than the other models. It could be noticed that DenseVoxNet has fewer feature maps by reducing the number of features in each layer. Incorporating coordinate information in this less complex model would be a challenging area for future research. However, the coordinate information gives maximum benefit on DenseVoxNet on all types of CoMat1 data which can give a significant improvement of performance (p=0.00). Furthermore, the coordinate information can enhance the stability of the DenseVoxNet on both NCCT and CECT datasets during the network training, as seen by less fluctuation of training accuracy on the epoch stream (Fig <ns0:ref type='figure' target='#fig_8'>5</ns0:ref>). The coordinate information does not only provide the benefit on a less complex model but also in an advanced model. The AG-DSV-UNet, which is an example of an advanced and complex model architecture with 104 million parameters, also gets the benefit of incorporating coordinate information on both NCCT and CECT datasets.</ns0:p><ns0:p>The coordinate information can help to eliminate false detection in low contrast resolution images (NCCT dataset). For example, in one case in the NCCT dataset (Fig 7 <ns0:ref type='figure'>)</ns0:ref>, the standard UNet model detects two AAAs. The left AAA is the real AAA, while the right one is a well-distended gallbladder. The distended gallbladder has a round well-defined shape similar to AAA. After applying coordinate information in the model (UNet+CoMat3) the false AAA is no longer detected.</ns0:p><ns0:p>A number of previous studies have explored the use of CNNs for AAA segmentation. Lu et al. <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref> proposed a CNN for segmentation of pre-operative AAA. They modified 3D UNet combined with ellipse filling for detection and segmentation of AAA. The dataset was a mixture of non-contrast, contrast-enhanced, normal diameter aorta, and presence of AAA. The training dataset was 321 CT examinations (non-contrast 168 and contrast-enhanced 153; 247 with AAA and 74 without AAA). They used 5-fold cross-validation for CNN training, with 64 cases in each fold. The labeled ground-truth dataset was annotated by multiple annotators. The test was evaluated in terms of the maximum diameter of the aorta with an average dice score of 91%. In our experiment, we used a larger amount of training data in separate non-contrast and contrastenhanced datasets with presence of AAA. 
Normal diameter of aorta less than 3. Manuscript to be reviewed Computer Science testing in the case of limited sample size <ns0:ref type='bibr' target='#b47'>[44]</ns0:ref>. Dziubich et al. <ns0:ref type='bibr' target='#b19'>[20]</ns0:ref> proposed a CNN-based approach to AAA segmentation using 3D UNet, ResNet and VBNet. The training set was 8126 images from 16 scans. The experiment gave the best result of DSC 94%. In contrast, our model incorporating coordinate information achieves a dice score accuracy of over 96%.</ns0:p><ns0:p>Previous studies have also explored non-deep learning approaches to AAA segmentation. Wang et al. <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref> proposed registration-based geodesic active contour to segmentation of the outer wall of AAA in MRI images. The experimental result showed an average DSC of 89.79%. Of particular note is a graph-cut based method that achieves strong results by iteratively interleaving intensity-based (GCPDF) and contour-based (GCBAC) segmentation <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>. The algorithm is designed for contrast-enhanced CT images. The accuracies of that graph-cut approach and this proposed CNN based method have dice scores of 95.04% and 96.69%, respectively. The result of our 3D CNN based method is slightly higher than the graph-cut but not significantly so (p>0.05). However, the 3D CNN based method has the limitation of matrix size of the dataset. In this experiment, the maximum allowed size of the dataset is 64 slices in the z-axis of the training dataset. The previous implementation of graph-cut has less limitation of dataset allowing 150-200 slices of dataset. In the CNN approach, the size of the dataset can be doubled or tripled by adding more slices of labeled ground-truth to use in the CNN training.</ns0:p><ns0:p>The AAA segmentation in post-operative stent graft images is challenging because of the existing metallic artifact of the device. Lopez et al. <ns0:ref type='bibr' target='#b36'>[33]</ns0:ref> proposed a fully automatic approach to AAA segmentation of post-operative CTA images based on DCNN. The proposed method has two steps. The first step uses the DCNN network to define the candidate region of the thrombus. The second step is fine-tuned segmentation of the thrombus. The models were tested with 13 post-operative AAA cases. The method achieved a dice score of 82%. The segmentation result on NCCT and CECT datasets of post-operative EVAR achieves excellent results with the dice scores of 94.90 and 95.66%, respectively (Table <ns0:ref type='table'>3</ns0:ref>, <ns0:ref type='bibr'>Fig 8)</ns0:ref>. The best network for NCCT and CECT datasets of post-operative EVAR is standard UNet+CoMat2 and standard UNet+CoMat3, respectively.</ns0:p><ns0:p>In our proposed approach, the post-operative EVAR has slightly worse results than preoperative results because of the presence of metallic stent graft and smaller size of aortic lumen as compared with pre-operative. Our result showed that a transfer learning approach with a small amount of training data is sufficient to provide good segmentation results. The benefit of transfer learning is that small amounts of data to train still provide excellent results. In the NCCT dataset of post-operative EVAR, all of the networks are best with CoMat2 but there is no significant difference from other types of coordinate information. 
The reason is that metallic stent-graft (high pixel intensity) can enhance contrast resolution in the center of the AAA lumen, in which CoMat2 has enough information to enhance the performance. In the CECT dataset, the coordinate information provides benefit only for AG-DSV-UNet and DenseVoxNet. The contrast and metallic stent in the lumen of AAA should be strong coordinate information for training the models. Furthermore, all types of coordinate information also significantly improved the The major limitation of the CNN-based approach is the matrix size of the dataset. In this implementation, the maximum allowed size of the CT dataset due to limited GPU memory is 64 slices in the z-axis and 256x256 in the transverse plane. The original 512x512 matrix of the CT image was thus scaled down to 256x256. Future development of GPUs with increased size of memory will enable using the original matrix size and longer slices of the training dataset. An alternative approach to this problem is to separate the volume into multiple patches such as 256x256x32 pixels to maintain the original resolution of the dataset volume. However, because of the difference in patient body size and variation of the aorta, the aorta may get distributed among different patches, resulting in a decrease in accuracy of segmentation at the edges of adjacent patches. In addition, the multiple patches also require post-processing steps</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This paper has introduced a CNN-based approach for AAA segmentation with incorporated coordinate information. It was shown that the proposed solution using 3D CNN can be effectively applied for segmenting AAA in both NCCT and CECT preoperative datasets. No data augmentation or pre-processing was required in our proposed method. The best networks for pre-operative NCCT and CECT datasets are AG-DSV-UNet+CoMat3 and standard UNet+CoMat1, respectively. Our model can be effectively transferred to the post-operative EVAR dataset with high accuracy. The best networks for post-operative NCCT and CECT datasets are standard UNet+CoMat2 and standard UNet+CoMat3, respectively. The incorporated coordinate information demonstrates the non-dependent improvement of performance in AAA segmentation on a variety of CNN networks. A further extension of this research can be carried forward by extending the segmentation algorithm to cover the entire aorta. Future research could examine advanced encoding methods for incorporating coordinate information to improve performance in the image processing domain. The segmentation result could be reliable for volumetric assessment for further clinical research. We believe that this CNN approach with incorporated coordinate information also has a potential to solve difficult segmentation problems in the other grayscale medical images. Manuscript to be reviewed Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science Figure 4</ns0:note><ns0:note type='other'>Computer Science Figure 7</ns0:note></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:70360:1:2:NEW 24 May 2022) Manuscript to be reviewed Computer Science ' = ( + + )/3 (1)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>, 3). The experiments used three types of coordinate information, CoMat1, CoMat2 and CoMat3, which have 3, 1 and 1 channels, respectively. Fig4demonstrates the overview flow of the data training process. Our experiments were operated on a single GPU (Nvidia DGX-A100) with CUDA-enabled containing 40 GB RAM. The networks were learned with specific optimizer (i.e. RMSprop) and error loss (i.e. mean squared). The training parameters were set with learning rate of le-3, weight decay of le-8, and momentum of 0.9. The random seed was initially set to be 0. The dataset was split into training and validating sets with a respect ratio of 9:1. The training process was allowed to perform till a maximum of 300 epochs. The processing time required was 150-210 seconds for each epoch resulting in a total of 12.5-17.5 hours for a one-time training process.The non-contrast (NCCT) and contrast-enhanced (CECT) CT datasets were acquired from each patient in the same study. The network training was performed in two separate groups of experiments: NCCT and CECT datasets (Fig 3). We call the trained models A1 and A2, respectively. The datasets for training with validation had 200 cases each. Each experiment randomly split the dataset into 90% for training and 10% for validation. The total number of images for training and validation was 12,800 for each experiment. The NCCT test dataset consisted of 1,280 images from 20 cases. The CECT test dataset also contained 1,280 images from 20 cases. PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:70360:1:2:NEW 24 May 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>PeerJ</ns0:head><ns0:label /><ns0:figDesc>Comput. Sci. reviewing PDF | (CS-2022:01:70360:1:2:NEW 24 May 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>0 cm was excluded from our experiment. No additional augmentation or cross validation technique was used during the CNN training process. The cross validation technique is often used for model PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:70360:1:2:NEW 24 May 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:70360:1:2:NEW 24 May 2022)Manuscript to be reviewed Computer Science performance with the DenseVoxNet network (p=0.00) on both NCCT and CECT similar to preoperative dataset.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 3 Three</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>Method frameworkFramework of full training of pre-operative abdominal aortic aneurysm (AAA): The preprocessing step is to select 64 slices of contiguous CT images at infrarenal segments of abdominal aorta, which are then converted into a single 3D volume dataset. networks are trained using 3D CNN and set up in two separate experiments. The coordinate information is embedded into input data as the additional channels. The pre-contrast and contrastenhanced CT datasets are used to train each network to create two models A1 and A2, respectively.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 6 Example</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head /><ns0:label /><ns0:figDesc>False positive exmaple False positive prediction of AAA. The left column is non-contrast AAA with standard UNet prediction on axial view (A) and 3D volume rendering on coronal view(D). There is false prediction of a second AAA on the right side of the abdomen (*) that is a well distended gallbladder. The middle column is non-contrast AAA with UNet+CoMat3 prediction on axial view (B) and 3D volume rendering (E) images on the same patient. The false prediction does not occur on UNet+CoMat3. The ground-truth is demonstrated on the right column on noncontrast axial view (C) and 3D volume rendering (F) CT images. PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:70360:1:2:NEW 24 May 2022) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 8 example</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,42.52,229.87,525.00,366.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,70.87,525.00,434.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='25,42.52,70.87,297.73,672.95' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>and 96.69±1.11, respectively. The coordinate information improved the segmentation results in all networks, particularly, CoMat1 and CoMat3. CoMat3 shows improvement of segmentation in most training networks on NCCT and CECT datasets, except UNet on CECT datasets and DenseVoxNet on both NCCT and CECT datasets, for which CoMat1 is best. The DenseVoxNet without coordinate information shows the worst accuracy on both NCCT and CECT datasets with DSC values of 35.85±12.16 and 24.67±12.75, respectively. However, the accuracy is significantly improved when coordinate information with CoMat1, CoMat2 and CoMat3 is added. The best improvements of DenseVoxNet occur with CoMat1 on both NCCT and CECT datasets with DSC values of 89.30±5.80 and 87.48±10.50, respectively.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:70360:1:2:NEW 24 May 2022) Manuscript to be reviewed Computer Science</ns0:note>
</ns0:body>
" | "
May 24th, 2022
Dear Editors
We thank the reviewers for their generous comments on the manuscript and have edited the manuscript to address their concerns
In particular all of the code we wrote is available and I have included multiple links throughout the paper to the appropriate code repositories.
We believe the manuscript is now suitable for publication in Peer J.
Dr. Thanongchai Siriapisith
Professor in Radiology
On behalf of all authors.
Reviewer 1
Basic reporting
a. Clear and unambiguous, professional English used throughout.
The paper is generally well-written and easy to follow, despite some sentences are confusing and need revisions:
1. Line 252-253 (“The experiments were implemented using the PyTorch (v1.8.0) deep learning library with Tensorflow backend in Python”) is confusing as PyTorch and TensorFlow are two different DL libraries, and as far as I know, TensorFlow cannot be used as backend of PyTorch;
-The confusion was rectified by removing the terms “Tensorflow backend” from the “dataset and experiment” subsection. (page 7, line 263-264).
2. Line 273-274 (“… the non-contrast and contrast-enhanced datasets in consecutive arrays were combined in another experiment”) says there is “another experiment”, please make it clear which experiment is referred to here.
- We are sorry for this mistake. There is no other experiment in this paper. So we removed this sentence from the “dataset and experiment” subsection. (page 8, line 285-286).
b. Literature references, sufficient field background/context provided.
The paper introduces sufficient background on AAA segmentation. However, while coordinating information and transfer learning are the two main techniques of this paper, the related work section only briefly introduces them. As they are not originally proposed by the authors, it is important to provide more details on these two methods.
3. Also, in lines 156-157 (ResNet is a kind of popular network that has been proven effective in medical data), ResNet deserves a more detailed explanation, and reference should be provided on the claim that it is effective in medical data.
- In this revised version, more detail of ResNet and additional related references were added in the “related work” section. (page 5, line 169-173).
c. Professional article structure, figures, tables. Raw data shared.
4. Figure 5 training curve is hard to read. Increasing the resolution of the font is encouraged.
- The resolution of Figure 5 has been increased to 300 dpi.
d. Self-contained with relevant results to hypotheses.
No comment from the reviewer in this section.
e. Formal results should include clear definitions of all terms and theorems and detailed proofs.
No comment from the reviewer in this section.
Experimental design
a. Original primary research within the Aims and Scope of the journal.
No comment from the reviewer in this section.
b. Research question well defined, relevant & meaningful. It is stated how research fills an identified knowledge gap.
5. The paper conducts extensive experiments on the proposed coordinate integration methods. As the use of positional encoding is a hot topic in computer vision, how does the proposed scheme compare to other positional encoding method, for example, the fixed positional encoding in [1] (use sinusoid to encode the position) or learnable positional encoding?
[1] Vaswani, A., Shazeer, N., Parmar, N., Uszkoreit, J., Jones, L., Gomez, A.N., Kaiser, Ł. and Polosukhin, I., 2017. Attention is all you need. Advances in neural information processing systems, 30.
- We added text referring to this recommended paper for the positional encoding method in the “related work” and “discussion” sections. Our proposed method was based on the fixed positional encoding with a sequence index. (page 3 line 115-126, page 10 line 379-380).
c. Rigorous investigation performed to a high technical & ethical standard.
No comment from the reviewer in this section.
d. Methods described with sufficient detail & information to replicate.
6. Line 233 is suggested to change into “the input data for CNN will be two or four”.
- In the “coordinate” subsection of this revised version, the sentence was changed as recommended. (page 6 line 245).
Validity of the findings
a. Impact and novelty not assessed. Meaningful replication encouraged where rationale & benefit to literature is clearly stated.
No comment from the reviewer in this section.
b. All underlying data have been provided; they are robust, statistically sound, & controlled.
No comment from the reviewer in this section.
c. Conclusions are well stated, linked to original research question & limited to supporting results.
c.1: In table 2, we can see that the integration of coordinate information as an additional input only has marginal improvement over the baseline UNet. For example, in the contrast-enhanced dataset the DSC and JSC improvement is less than 0.2% in UNet. This challenges the main contribution of this paper.
7. On the other hand, the coordinate information significantly improves the performance of DenseVoxNet, and the authors explain in line 376-379 that coordinate information make the network converges easier. I would suggest investigating further on this direction.
- Additional explanation has been added to the “discussion” section. Further development of the model in this direction is an area for future work. (page 11 line 408-410)
c.2: The author states that the main limitation of CNN-based methods is the limited size of input data, due to the GPU memory constraints. The common practice in medical deep learning is to use patch-based inference, which means a 512x512x64 volume can be patched into multiple, e.g. 256x256x32 patches so as to maintain the original resolution. In this paper, the authors choose to scale down the image volume.
- “An alternative approach to this problem is to separate the volume into multiple patches such as 256x256x32 pixels to maintain the original resolution of the dataset volume. However, because of the difference in patient body size and variation of the aorta, the aorta may get distributed among different patches, resulting in a decrease in accuracy of segmentation at the edges of adjacent patches. In addition, the multiple patches also require post-processing steps”. We added in the discussion section (page 13, line 478-487).
Additional comments
This paper proposes incorporating voxel position as an additional input for AAA segmentation. The paper needs more work on the related work section, including how the proposed coordinate encoding scheme compared to previous works, and why the scheme would be a good fit for the problem as well as transfer learning.
- Additional text has been added to the “related work” and “discussion” sections regarding incorporating position information (page 3 line 115-126, page 10 line 372-381).
The authors conduct extensive experiments on various network architectures, nevertheless, the improvement is marginal over the baseline UNet. Also, the novelty of the proposed method is insufficient and most of the work is derivative. I suggest looking at more methods to incorporate positional information, e.g. in multiple layers, or a better encoding scheme
- Exploring additional methods to incorporate positional information would be an area for future research, as mentioned in the “conclusion” section. (page 13 line 500-503)
-The contribution of this paper is more on evaluating the performance gains from incorporation of coordinate/location based information into the CNN-based approach for AAA segmentation in different low and high contrast resolutions. Also, it demonstrates the value of re-applying a trained model of pre-operative AAA to post-operative EVAR, using only a small amount of data for re-training.
Reviewer 2
Basic reporting
This paper presents a new 3D AAA segmentation approach that incorporates coordinate information to improve the segmentation results. The authors have tested the proposed method on various network architectures, including UNet, AG-DSV-UNet, VNet, ResNetMed, and DenseVoxNet, and transfer learning from a network pre-trained on the pre-operative dataset to post-operative EVAR.
1. The authors didn't explain how to obtain the coordinate information for test images, which was critical to applying the proposed method in real-world applications. We usually use prediction methods for unseen images in practice, and obtaining coordinate information for these images could be difficult.
** Please provide the related explanation in the paper.
- The coordinate information used and proposed in this paper is a sequence index of integers in the image's axis plane such as from left to right, and upper to lower. No additional prediction of the object is required to generate coordinate information. This explanation was added in the “coordinate information” subsection. (page 6, line 236-238).
2. Some recently published related works are not referred to and compared in the paper. For example
[1] Wang, Yan, Florent Seguro, Evan Kao, Yue Zhang, Farshid Faraji, Chengcheng Zhu, Henrik Haraldsson, Michael Hope, David Saloner, and Jing Liu. 'Segmentation of lumen and outer wall of abdominal aortic aneurysms from 3D black-blood MRI with a registration based geodesic active contour model.' Medical image analysis 40 (2017): 1-10.
[2] Salvi, Anish, Ender Finol, and Prahlad G. Menon. 'Convolutional Neural Network-based Segmentation of Abdominal Aortic Aneurysms.' In 2021 43rd Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), pp. 2629-2632. IEEE, 2021.
[3] Dziubich, Tomasz, Paweł Białas, Łukasz Znaniecki, Joanna Halman, and Jakub Brzeziński. 'Abdominal aortic aneurysm segmentation from contrast-enhanced computed tomography angiography using deep convolutional networks.' In ADBIS, TPDL and EDA 2020 Common Workshops and Doctoral Consortium, pp. 158-168. Springer, Cham, 2020.
[4] Wang, Yan, Florent Seguro, Evan Kao, Yue Zhang, Farshid Faraji, Chengcheng Zhu, Henrik Haraldsson, Michael Hope, David Saloner, and Jing Liu. 'Segmentation of lumen and outer wall of abdominal aortic aneurysms from 3D black-blood MRI with a registration based geodesic active contour model.' Medical image analysis 40 (2017): 1-10.
** Please read and cite these papers accordingly in your paper.
- They have been added in this revised version and related explanations have been added in the “introduction” and “discussion” sections. (page 2 line 77-78, page 12 line 437-440)
Experimental design
3. No validation set is created and used in transfer learning so it is unclear to me how do the authors tune the model hyper-parameters in this case. Please describe it in detail in the paper.
- The hyper-parameters were set the same as the ones in the pre-trained model. This explanation has been added in the “dataset and experiment” and “transfer learning” subsection. (page 7 line 272-275, page 8, 294-295).
4. Please provide a detailed definition of dice score, including how to compute it.
- The equations to calculate DSC, JSC and HD, and related explanations have been added in the “performance evaluation” subsection. (page 8-9 line 317-320).
" | Here is a paper. Please give your review comments after reading it. |
711 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Modern deep learning schemes have shown human-level performance in the area of medical science. However, the implementation of deep learning algorithms on dedicated hardware remains a challenging task because modern algorithms and neuronal activation functions are generally not hardware-friendly and require a lot of resources. Recently, researchers have come up with hardware-friendly activation functions that can yield high throughput and high accuracy at the same time. In this context, we propose a hardware-based neural network that can predict the presence of cancer in humans with 98.23% accuracy. This is done by making use of the cost-efficient, highly accurate activation functions Sqish and LogSQNL. Due to its inherently parallel components, the system can classify a given sample in just one clock cycle, i.e., 15.75 nanoseconds. Though this system is dedicated to cancer diagnosis, it can predict the presence of many other diseases such as those of the heart. This is because the system is reconfigurable and can be programmed to classify any sample into one of two classes. The proposed hardware system requires about 983 slice registers, 2655 slice look-up tables, and only 1.1 kilo-bits of on-chip memory. The system can classify about 63.5 million samples per second and can perform about 20 giga-operations per second. The proposed system is about 5 to 16 times cheaper and at least four times faster than other dedicated hardware systems using neural networks for classification tasks.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Deep learning is a subset of machine learning that does not involve much human effort and does not require handcrafting of features <ns0:ref type='bibr' target='#b8'>Guo et al. (2019)</ns0:ref>. In fact, by using deep learning techniques, machines and systems learn by themselves. It is important to note that deep learning and neural networks are not two separate ideas or techniques; any neural network that has two or more layers is considered 'deep'.</ns0:p><ns0:p>Neural networks find applications in stock market prediction, agriculture, medical sciences, document recognition, and facial recognition, among others <ns0:ref type='bibr' target='#b1'>Awais et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b16'>Nti et al. (2021)</ns0:ref>; <ns0:ref type='bibr' target='#b30'>Zhou et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b7'>Guan (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b3'>Chen et al. (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b10'>Kim et al. (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b12'>Lammie et al. (2019)</ns0:ref>. The process of learning is usually carried out using 'backpropagation', a supervised learning technique in which the parameters of a neural network are adjusted according to a predefined error function. The parameters that give the lowest error at the output are selected as the optimal parameters.</ns0:p><ns0:p>It must be noted that hardware throughput is directly dependent on the underlying algorithms. Therefore, efficient ANN algorithms and activation functions need to be devised if real-time neural processing is required. Sometimes, accuracy has to be sacrificed to support low-delay classification at low cost. The required level of accuracy, latency, speed, etc. depends on the underlying application, as shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>.</ns0:p><ns0:p>A major challenge facing deep learning researchers is the growing complexity of neural networks, which makes them unsuitable for execution on general-purpose processors. Deep learning has traditionally been carried out on general-purpose computers; however, with time, neural networks have grown so complex that such processors can no longer run them efficiently, and dedicated hardware platforms such as FPGAs are increasingly used instead, becoming more popular day by day. However, such platforms come with their own set of challenges: these platforms are costly and inflexible, and their cost-efficiency is highly dependent on the underlying algorithms. Therefore, it is of utmost importance to develop algorithms and activation functions that are friendly to the hardware.</ns0:p><ns0:p>Conventional activation functions such as sigmoid, softmax, and hyperbolic tangent (TanH) yield high accuracy but are not suitable for hardware implementations. This is because they involve division and many other hardware-inefficient operations <ns0:ref type='bibr' target='#b27'>Wuraola et al. (2021)</ns0:ref>. Though the rectified linear unit (ReLU) <ns0:ref type='bibr' target='#b15'>Nair and Hinton (2010)</ns0:ref> is an extremely powerful activation function that does not require any costly elements and is the most hardware-friendly function to date, sometimes it does not produce good results. This is because it suffers from dying neurons, since it cancels out all the negative input values <ns0:ref type='bibr' target='#b13'>Lu (2020)</ns0:ref>.
If output neurons receive negative inputs only, the system will always produce zero for all the output neurons, and no sample will be correctly classified. This is why scientists have come up with functions that are not only accurate but also friendly to hardware platforms. Such activation functions do not involve any costly operations such as exponentials or long divisions <ns0:ref type='bibr' target='#b27'>Wuraola et al. (2021)</ns0:ref>. Two of these functions are Sqish and the square logistic sigmoid <ns0:ref type='bibr'>(LogSQNL)</ns0:ref>. Neither function requires any storage element or division operation, which is the reason why we adopt these functions for neuronal implementation in the proposed system.</ns0:p><ns0:p>In this article, we present a system based on the Sqish and square logistic sigmoid (LogSQNL) functions <ns0:ref type='bibr' target='#b27'>Wuraola et al. (2021)</ns0:ref> for breast cancer classification. The system is compared against earlier dedicated hardware classifiers such as that of <ns0:ref type='bibr' target='#b24'>Tiwari and Khare (2015)</ns0:ref> and is found to be considerably cheaper and faster.</ns0:p><ns0:p>These excellent results can be attributed to the following features:</ns0:p><ns0:p>• High Degree of Parallelism: all the required operations can be completed in a single clock cycle, i.e., one sample is classified every 15.75 ns, which yields the reported throughput of about 63.5 million samples per second.</ns0:p><ns0:p>• Pipelining: use of pipeline registers at appropriate places in the system to improve throughput.</ns0:p><ns0:p>• Cost-Efficient Functions: use of Sqish in the hidden layers and LogSQNL at the output layer. Neither function requires costly operations such as exponentials, and both can be realized in hardware using combinational multiply-accumulate (MAC) units. Since FPGAs contain many DSP48 elements, the required multiplications can be performed efficiently.</ns0:p><ns0:p>• Proper Hyperparameter Tuning: hyperparameter tuning is extremely important for high network accuracy. The proposed network has been carefully tuned using the so-called 'grid search' <ns0:ref type='bibr' target='#b29'>Zheng (2015)</ns0:ref>; a software sketch of such a search is given after this list.</ns0:p>
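<ns0:p>To make the tuning step concrete, the following is a minimal software sketch of a grid search over a small fully connected network, using the scikit-learn copy of the Wisconsin breast-cancer data as a stand-in. The layer sizes, activations, and search grid shown here are illustrative assumptions rather than the authors' actual configuration, and scikit-learn does not provide the Sqish or LogSQNL activations used by the proposed hardware.</ns0:p>

# Hypothetical grid-search sketch (Python / scikit-learn); it mirrors the procedure
# described above but is NOT the authors' implementation.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)            # 569 samples, 30 features, 2 classes
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0, stratify=y)

pipe = make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0))

# Illustrative search space; the paper's actual grid is not specified in this excerpt.
grid = {
    "mlpclassifier__hidden_layer_sizes": [(8,), (16,), (16, 8)],
    "mlpclassifier__activation": ["relu", "tanh", "logistic"],
    "mlpclassifier__alpha": [1e-4, 1e-3, 1e-2],
}

search = GridSearchCV(pipe, grid, cv=5, scoring="accuracy").fit(X_tr, y_tr)
print("best parameters  :", search.best_params_)
print("held-out accuracy:", search.score(X_te, y_te))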
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The rest of this paper is organized as follows. Section 2 presents a critical review of various highperformance activation functions and inference systems. The proposed scheme along with its hardware implementation is given in detail in Section 3. The test conditions and performance metrics are mentioned in Section 4. The results obtained by using the proposed scheme are given in Section 5; the system is also compared against other state-of-the-art systems to prove that the proposed scheme outperforms other schemes when it comes to classification accuracy, precision, recall, and hardware efficiency. The discussion is concluded in Section 6. A recently-developed activation function is 'swish <ns0:ref type='bibr' target='#b18'>' Ramachandran, Prajit, and Barret Zoph, and Quoc V. Le. (2018)</ns0:ref>. According to available reports, swish is more accurate than ReLU, especially when the network is very deep. Unlike ReLU, it is universally differentiable, i.e., the function has a valid derivative at all points on the real line. Like ReLU, the swish activation function solves the gradient vanishing problem. Swish allows negative values to backpropagate to the input side, which is impossible in the case of ReLU, since ReLU completely cancels out the negative values <ns0:ref type='bibr' target='#b15'>Nair and Hinton (2010)</ns0:ref>. However, swish is not a hardware-friendly function since it involves division and a lot of other costly elements <ns0:ref type='bibr' target='#b27'>Wuraola et al. (2021)</ns0:ref>.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b19'>Sarić et al. (2020)</ns0:ref>, the authors present an FPGA-based system that can predict two types of epileptic seizures. Moreover, the system can predict whether a seizure is present in the first place. The overall accuracy of the system is around 95.14%. The system in <ns0:ref type='bibr' target='#b21'>Shymkovych et al. (2021)</ns0:ref> implements a simple neural network that has 4-5 synapses. The system has four Gaussian neurons, which are radial basis functions (RBFs). The system has not been tested on any well-known dataset and the purpose of the system is to demonstrate the hardware efficiency of the proposed scheme.</ns0:p><ns0:p>A high-performance activation function based on exponential units is proposed in <ns0:ref type='bibr' target='#b5'>Clevert et al. (2015)</ns0:ref> that obviates the need for batch normalization. Batch normalization is an extremely costly process that requires big storage elements as well as large computational elements. Therefore, ELU is a good function in that context. However, ELU suffers from the same problems that many other activation functions do:</ns0:p><ns0:p>ELU is still a cost function that is not as hardware-efficient as ReLU.</ns0:p><ns0:p>Another recently-proposed activation function is ReLTanH <ns0:ref type='bibr' target='#b26'>Wang et al. (2019)</ns0:ref>. According to its developers, it has all the nice qualities possessed by hyperbolic tangent (TanH) and at the same time, it solves the problem of gradient vanishing. A big flaw in their work is that they apply the proposed function only to the diagnosis of rotating machinery faults. They do not perform any extensive tests. Moreover, they do not implement their scheme on any dedicated hardware platform, due to which it is quite hard to determine the hardware efficiency of their algorithm and functions. 
However, one thing that can certainly be said about their function is that the function is not friendly to the hardware because it, like TanH, requires division and other costly operations.</ns0:p><ns0:p>A hardware system for weed classification is proposed in <ns0:ref type='bibr' target='#b12'>Lammie et al. (2019)</ns0:ref>. The system finds applications in agricultural robots. The system they design uses binary weights (±1). Due to this property, the system can operate with 98.83% accuracy while having small computational units and storage elements.</ns0:p></ns0:div>
<ns0:div><ns0:head>3/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:71420:1:2:NEW 2 Jun 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Eyeriss is another system that relies on extensive data reuse to reduce energy consumption Chen et al.</ns0:p><ns0:p>(2016). The system uses convolutional neural networks (CNNs) along with row-and column-wise data reuse techniques. In this way, the system achieves both high accuracy and low energy consumption.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b24'>Tiwari and Khare (2015)</ns0:ref>, the researchers first implement the sigmoid function using 'Coordinate Rotation Digital Computer (CoRDiC)' technique and then implement a complete neural network having 35 synapses using such CoRDiC neurons. The minimum value of the root mean squared error (RMSE)</ns0:p><ns0:p>between the CoRDiC sigmoid and the original sigmoid is 1.67E-11.</ns0:p><ns0:p>A hardware-based radial basis function neural network (RBF-NN) capable of online learning is proposed in <ns0:ref type='bibr' target='#b23'>Thanh et al. (2016)</ns0:ref>. The network has 20 synapses and uses stochastic gradient descent (SGD)</ns0:p><ns0:p>for on-chip learning. To increase hardware efficiency, the exponential terms are approximated using</ns0:p><ns0:p>Taylor series expansion and look-up tables. The system has been implemented on a Cyclone-IV FPGA. </ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>PROPOSED METHODOLOGY</ns0:head><ns0:p>In order to understand how the proposed system works, it is extremely important to get familiarized with a few basic concepts regarding ANN operation. Therefore, we first explain the basic ANN operation and then explain the proposed ANN topology, learning scheme, and the proposed hardware system along with its constituent components.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Basic ANN Operation</ns0:head></ns0:div>
<ns0:div><ns0:head>Network Inputs</ns0:head><ns0:p>The input values are first standardized in order to make them zero-centered. The process of standardization follows Equation 2. In Equation 1, X represents the input vector, µ represents the average, and σ represents the standard deviation of data samples. The process of standardization is visually represented in Fig. <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>. It is important to note that standardization is sometimes referred to as 'normalization' in literature, though normalization is, in reality, different from standardization. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head>Accumulation and Activation</ns0:head><ns0:p>These normalized/standardized inputs are multiplied by the corresponding weights and the resulting products are then summed up (accumulated). A neuron is activated or deactivated based on the value of this weighted sum.</ns0:p><ns0:p>The activation of a neuron is dictated by a so-called 'activation function'. Some of the various popular activation functions are rectified linear unit (ReLU) <ns0:ref type='bibr' target='#b15'>Nair and Hinton (2010)</ns0:ref>, swish Ramachandran, Prajit, and Barret Zoph, and Quoc V. Le. ( <ns0:ref type='formula'>2018</ns0:ref>), exponential linear unit (ELU) <ns0:ref type='bibr' target='#b5'>Clevert et al. (2015)</ns0:ref>, among others. Every activation function has its own merits and demerits. ReLU, for example, is used to solve the 'gradient vanishing' problem that occurs in hidden layers during learning. However, ReLU completely cancels out the negative region, due to which functions like Swish were developed. A detailed discussion on this topic can be found in <ns0:ref type='bibr' target='#b18'>Ramachandran, Prajit, and Barret Zoph, and Quoc V. Le. (2018)</ns0:ref>.</ns0:p><ns0:p>The output values produced by the activated neurons are then multiplied by the corresponding weights of the next layer and the process is repeated. To offset a neuron's value for better learning, a bias term b j is added to the weighted sum.</ns0:p></ns0:div>
<ns0:div><ns0:head>Classification and Backpropagation</ns0:head><ns0:p>At the output layer, the neuron that is activated the most corresponds to the predicted class. The prediction of an input sample corresponds to the completion of a single iteration of the forward pass.</ns0:p><ns0:p>In the backward pass, synaptic weights are modified according to an algorithm called 'backpropagation'. The basic idea is that the magnitude of synaptic weight updates is dictated by the magnitude of output error. If a wrong prediction is made, the error (such as mean squared error) is computed and the synaptic weights corresponding to that (wrong) neuron are decreased. At the same time, the synapses corresponding to the correct output neuron are increased. With time, the network improves itself and eventually achieves convergence. This algorithm will be explained at length in the coming sections.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Proposed Network Topology and Actuators</ns0:head><ns0:p>In this work, we have chosen sqish activation function for the hidden layer, and square logistic sigmoid (LogSQNL) for the output layer. The Sqish function is morphologically similar to Swish, and LogSQNL is similar in behavior to the traditional sigmoid. The beauty of these functions is that both of these can be implemented in a single cycle using arithmetic and logic units (ALUs) only. As shown in Fig. <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>, the proposed system can take 30 (or less) input features, has five hidden neurons, and two outputs. The top level diagram of the proposed system is shown in Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>. In Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>, I M represents M th input. The multiplier units (MUs) are responsible for multiplying weights with incoming inputs and the accumulator is responsible for adding these products. The output of the multiplier-accumulator (MAC) unit is sent to the appropriate actuator for neuronal activation.</ns0:p></ns0:div>
<ns0:div><ns0:head>5/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:71420:1:2:NEW 2 Jun 2022)</ns0:p><ns0:p>Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div>
<ns0:div><ns0:head n='3.3'>Mathematical Setup</ns0:head><ns0:p>Now we give details of the complete mathematical setup. Here, weights are denoted by W W W i i i and inputs are denoted by X X X i i i . Biases are represented by b b b j j j , and the weighted sum is represented by Z Z Z j j j . Finally, activation values are represented by A A A j j j . Here, the subscript j represents the postsynaptic neuron and the subscript i represents the presynaptic neuron. The following equations represent the complete mathematical process.</ns0:p><ns0:formula xml:id='formula_0'>Z 1 = ∑ i (W i • X i ) + b 1</ns0:formula><ns0:p>Since Sqish is used as the activation function for Layer 1 in the proposed scheme, A 1 (Z 1 ) is given by the following equation:</ns0:p><ns0:formula xml:id='formula_1'>A 1 =          Z 1 + Z 2 1 32 Z 1 ≥ 0 Z 1 + Z 2 1 2 −2 ≤ Z 1 < 0 0 Z 1 < −2</ns0:formula><ns0:p>The Layer 1 activation vector is then passed as input to Layer 2 in order to obtain the weighted sum Z 2 , as shown in the following equation.</ns0:p></ns0:div>
<ns0:div><ns0:head>6/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:71420:1:2:NEW 2 Jun 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_2'>Z 2 = ∑ i (W i • A 1 ) + b 2</ns0:formula><ns0:p>The LogSQNL neurons in the output layer are activated according to the following rule:</ns0:p><ns0:formula xml:id='formula_3'>A 2 =                1 Z 2 > 2 Z 2 − Z 2 2 4 1 2 + 1 2 0 ≤ Z 2 ≤ 2 Z 2 + Z 2 2 4 1 2 + 1 2 −2 ≤ Z 2 < 0 0 Z 2 < −2</ns0:formula><ns0:p>The derivative of LogSQNL function is given by Equation <ns0:ref type='formula'>2</ns0:ref>and the dependence of the loss function on Layer 2 weight vector and bias vector is given by Equation <ns0:ref type='formula'>3</ns0:ref>and Equation <ns0:ref type='formula'>4</ns0:ref>. The derivative of Sqish function is given by Equation <ns0:ref type='formula'>5</ns0:ref>and the dependence of the loss function on Layer 1 weight vector and bias vector is given by Equation <ns0:ref type='formula'>6</ns0:ref>and Equation <ns0:ref type='formula'>7</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_4'>∂ A 2 ∂ Z 2 =        2 − Z 2 4 0 ≤ Z 2 ≥ 2 2 + Z 2 4 −2 ≤ Z 2 < 0 0 otherwise (2) ∂ ∂ ∂ L L L ∂ ∂ ∂W W W 2 2 2 =        (A 2 − y) • A 1 • 2 − Z 2 4 0 ≤ Z 2 ≥ 2 (A 2 − y) • A 1 • 2 + Z 2 4 −2 ≤ Z 2 < 0 0 otherwise (3) ∂ ∂ ∂ L L L ∂ ∂ ∂ b b b 2 2 2 =        (A 2 − y) • 2 − x 4 0 ≤ Z 2 ≥ 2 (A 2 − y) • 2 + x 4 −2 ≤ Z 2 < 0 0 otherwise (4) ∂ A 1 ∂ Z 1 =      1 + Z 1 16 Z 1 ≥ 0 1 + Z 1 −2 ≤ Z 2 < 0 0 otherwise (5) ∂ ∂ ∂ L L L ∂ ∂ ∂W W W 1 1 1 =                      (A 2 − y) • A 1 • (2 − Z 2 ) •W 2 • (16 + Z 1 ) • X 1 64 0 ≤ Z 1 ≤ 2; 0 ≤ Z 2 ≤ 2 (A 2 − y) • A 1 • (2 + Z 2 ) •W 2 • (1 + Z 1 ) • X 1 4 −2 ≤ Z 1 < 0; −2 ≤ Z 2 < 0 0 otherwise (6) ∂ ∂ ∂ L L L ∂ ∂ ∂ b b b 1 1 1 =                    (A 2 − y) • A 1 • (2 − Z 2 ) •W 2 • (16 + Z 1 ) 64 0 ≤ Z 1 ≤ 2; 0 ≤ Z 2 ≤ 2 (A 2 − y) • A 1 • (2 + Z 2 ) •W 2 • (1 + Z 1 ) 4 −2 ≤ Z 1 < 0; −2 ≤ Z 2 < 0 0 otherwise (7) 7/13</ns0:formula><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:71420:1:2:NEW 2 Jun 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.4'>Proposed Hardware System</ns0:head><ns0:p>As mentioned before, there are two layers in the proposed system: Layer 1 and Layer 2. Both these layers have their memories to store weights. It is pertinent to mention that all the weights are stored in the chip. The total number of weights in the system is 160 since there are 160 synapses. The on-chip weight memory consumes 1.1 kilobits. The required weights that are fetched from the corresponding memory are then multiplied by the respective layer inputs using the multiplier units (MUs) shown in Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>. The multiplication is carried out using the built-in DSP48 elements. The resulting products are then summed up using an accumulator that contains an array of adders. This process is carried out for all the neurons in a layer. In the end, N weighted sums are obtained, where N represents the number of neurons in a layer. These N weighted sums are passed to their respective actuators for neuronal activation. The actuator then passes these calculated values onto the next layer and the same process is repeated. The complete structure of the proposed hardware system is shown in Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>. As mentioned earlier, Sqish neurons are used in Layer 1 and LogSQNL neurons are used in Layer 2. The structure of a Sqish neuron is shown in Fig. <ns0:ref type='figure' target='#fig_7'>7</ns0:ref> and that of a LogSQNL neuron is shown in Fig. <ns0:ref type='figure' target='#fig_8'>8</ns0:ref>. A sqish neuron can be implemented using two multiplexers (MUXes), two adders, two shifters and two multipliers. A LogSQNL neuron, on the other hand, consumes more resources than Sqish. A LogSQNL neuron can be implemented in hardware using three MUXes, four shifters, four adders, and two multipliers. However, since there are only two outputs in the proposed system for binary classification (one-hot encoding), the implementation of LogSQNL neurons is not a big deal.</ns0:p><ns0:p>Interestingly, the distribution of data in the dataset under consideration, i.e., Wisconsin Breast Cancer (WBC) University of California ( <ns0:ref type='formula'>2022</ns0:ref>) is highly non-uniform. The standard deviation of the data is extremely large. The final input values obtained after standardization require 11 bits, where 4 bits are reserved for the integer part and 7 bits are reserved for the mantissa (fractional part). As per our observations and calculations, the classification accuracy is more sensitive to the fractional part than the integer part. Therefore, more bits are reserved for the mantissa.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>TEST CONDITIONS AND PERFORMANCE METRICS</ns0:head><ns0:p>In this section, we mention the test conditions under which the evaluation and comparisons are carried out. We also mention the philosophy behind the performance metrics used for evaluation.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Test Conditions</ns0:head><ns0:p>Since the system has been developed for cancer diagnosis, the dataset used for experimentation is Wisconsin Breast Cancer (WBC) University of <ns0:ref type='bibr'>California (2022)</ns0:ref>. This dataset has 30 features, 569 samples and 2 classes (benign and malignant). As per rules, about 80% samples have been used for</ns0:p></ns0:div>
<ns0:div><ns0:head>8/13</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:71420:1:2:NEW 2 Jun 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science training and 20% have been used for evaluation. To achieve class balance, some samples are picked up from the 'benign' class and others are picked up from 'malignant' class. We use Python for evaluation of the proposed scheme. The hardware is described in Verilog language at the register transfer level (RTL).</ns0:p><ns0:p>The learning rate is kept equal to 1 3 . The momentum is equal to 0.9. The data is processed in batches to achieve high accuracy; the batch size used in the proposed system is 100. The network is trained for 4300 epochs. All these hyperparameter values have been found through empirical tuning using the so-called 'grid search <ns0:ref type='bibr' target='#b29'>' Zheng, Alice (2015)</ns0:ref>. The specifications of the platform on which all the tests are carried out are given in Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>. </ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Performance Metrics</ns0:head><ns0:p>The metrics used for the evaluation of the proposed scheme are classification accuracy, precision, recall, hardware implementation cost, and system throughput. The classification accuracy is simply defined as the number of correctly-classified samples out of the total number of samples. In the context of disease diagnosis, accuracy is not a good measure of system performance. Therefore, we use precision and recall in order to properly quantify performance. The precision and recall are defined in Equation 8 and Equation <ns0:ref type='formula'>9</ns0:ref>respectively. Since these metrics are very common, we believe there is no need to discuss them in detail here. In Equation <ns0:ref type='formula'>8</ns0:ref>and Equation <ns0:ref type='formula'>9</ns0:ref>, TP stands for 'true positive', TN stands for 'true negative', FN stands for 'false negative', and FP stands for 'false positive'. To evaluate hardware efficiency, we use two metrics: the number of resources (number of slice registers, number of slice look-up tables, number of block memories, and DSP elements) consumed by the system, and system throughput. The throughput is defined in two ways: the number of multiply-andaccumulate (MAC) operations that can be performed in a second, and the number of input samples that can be processed by the system in a second.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>RESULTS AND DISCUSSION</ns0:head><ns0:p>Here, the proposed system is compared with other state-of-the-art systems such as <ns0:ref type='bibr' target='#b19'>Sarić et al. (2020)</ns0:ref>; <ns0:ref type='formula'>2020</ns0:ref>); <ns0:ref type='bibr' target='#b24'>Tiwari and Khare (2015)</ns0:ref> in terms of classification accuracy, throughput, and implementation cost. We demonstrate how the proposed scheme is better than other traditional as well as contemporary schemes, especially for disease diagnosis.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>Classification Accuracy, Precision, and Recall</ns0:head><ns0:p>As per obtained results, the system can predict the type of cancer with 98.23% accuracy. Moreover, the average precision of the proposed system is 97.5% and recall is around 98.5%. The classification report and the confusion matrix for the proposed system are given in Table <ns0:ref type='table' target='#tab_5'>4 and Table 5</ns0:ref>, respectively. Moreover, the proposed system is compared with many other state-of-the-art systems in terms of classification accuracy in Table <ns0:ref type='table' target='#tab_6'>6</ns0:ref>. The classification accuracy as a function of epochs is presented in Fig. <ns0:ref type='figure' target='#fig_12'>9a</ns0:ref>, and the confusion matrix is visually shown in Fig. <ns0:ref type='figure' target='#fig_12'>9b</ns0:ref>. Manuscript to be reviewed There are 160 synapses in the proposed neural system that operates at 63.487 MHz. The number of synaptic multiplications and additions to be performed are 160 and 153 respectively. Therefore, the system can perform 20 Giga operations in a second (GOPS). Since the system can classify one cancerous sample in one cycle (≈15.75 ns), the system can classify about 63.5×10 6 (63.5 million) samples in a second. Since a sample contains 30 inputs, about 1.91 × 10 9 1-input samples can be classified by the proposed system in one second. The system is compared with other state-of-the-art systems in terms of implementation cost and throughput in Table <ns0:ref type='table' target='#tab_7'>7</ns0:ref> and Table <ns0:ref type='table' target='#tab_8'>8</ns0:ref> respectively. </ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div>
<ns0:div><ns0:head n='6'>CONCLUSION</ns0:head><ns0:p>This paper presents a high-throughput, hardware-efficient training scheme that uses Sqish neurons in the hidden layer and sigmoid-like LogSQNL neurons in the output layer. Since these functions do not require multiple cycles to process, the proposed system-based on these functions-does not consume a lot of hardware resources and yields high throughput. With only 160 synapses, the system can classify a cancerous sample into one of the two classes: benign and malignant. The proposed hardware system requires only 1.1 kilobits of on-chip memory, and can process about 1.91 × 10 9 1-input samples in a second. In just one second, the system can process 63.5 million cancer samples, and can perform 20 × 10 9 MAC operations. The system is about 5-16 times cheaper and at least four times speedier than most state-of-the-art hardware solutions designed for similar problems. Moreover, the system is way more Manuscript to be reviewed</ns0:p><ns0:p>Computer Science accurate than most contemporary systems. An important thing worth mentioning here is that to improve accuracy even by 1%, a lot of extra hardware resources are required. Therefore, the improvement in accuracy obtained by using the proposed scheme must not be undermined. Though the proposed system is specifically designed for cancer classification, the system can perform binary classification on any data sample that has 30 features or less. This is because the proposed system uses reconfigurable memory that can be programmed using an external computer. In future, convolutional neural networks can be applied to high-resolution mammograms (and/or ultrasound images) for diagnosing CoViD-19, cancer, and other ailments. Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:note type='other'>Computer Science Figure 3</ns0:note><ns0:note type='other'>Computer Science Figure 4</ns0:note><ns0:note type='other'>Computer Science Figure 5</ns0:note><ns0:note type='other'>Computer Science Figure 6</ns0:note><ns0:note type='other'>Computer Science Figure 7</ns0:note><ns0:p>Figure6</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>proposed system. The Sqish and LogSQNL are shown in Fig. 1 (a) and Fig. 1 (b), respectively.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>( a )</ns0:head><ns0:label>a</ns0:label><ns0:figDesc>Sqish function and its derivative. (b) Log SQNL function and its derivative.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Sqish and Log SQNL functions along with their derivatives.</ns0:figDesc><ns0:graphic coords='4,151.20,156.43,129.60,61.20' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Standardization of input data Stanford University (2022).</ns0:figDesc><ns0:graphic coords='5,227.92,571.01,241.19,86.40' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The proposed ANN topology.</ns0:figDesc><ns0:graphic coords='6,276.52,444.55,144.00,144.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .Figure 5 .</ns0:head><ns0:label>45</ns0:label><ns0:figDesc>Figure 4. RTL Schematic of the complete system</ns0:figDesc><ns0:graphic coords='7,141.73,63.78,439.20,216.01' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Top-level view of the proposed hardware system.</ns0:figDesc><ns0:graphic coords='9,265.72,218.83,165.60,208.79' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Internal structure of the Sqish (hidden) layer.</ns0:figDesc><ns0:graphic coords='10,186.52,63.78,324.00,165.60' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Structure of the output LogSQNL array.</ns0:figDesc><ns0:graphic coords='10,197.32,262.50,302.41,143.99' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Farsa</ns0:head><ns0:label /><ns0:figDesc>et al. (2019); Shymkovych et al. (2021); Thanh et al. (2016); Ortega-Zamorano et al. (2016); Zhang et al. (</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>( a )</ns0:head><ns0:label>a</ns0:label><ns0:figDesc>Accuracy as a function of Epochs. (b) Confusion Matrix -The proposed Scheme.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. Accuracy, Precision, and Recall yielded by the Proposed System</ns0:figDesc><ns0:graphic coords='12,172.75,81.77,176.40,151.21' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2022:03:71420:1:2:NEW 2 Jun 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 1 Figure2</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head>Figure1</ns0:head><ns0:label /><ns0:figDesc>Figure1</ns0:figDesc><ns0:graphic coords='16,42.52,178.87,525.00,104.25' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>Figure5</ns0:head><ns0:label /><ns0:figDesc>Figure5</ns0:figDesc><ns0:graphic coords='17,42.52,178.87,525.00,137.25' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head>Figure3</ns0:head><ns0:label /><ns0:figDesc>Figure3</ns0:figDesc><ns0:graphic coords='18,42.52,178.87,525.00,503.25' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head>Figure7</ns0:head><ns0:label /><ns0:figDesc>Figure7</ns0:figDesc><ns0:graphic coords='19,42.52,178.87,525.00,266.25' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_19'><ns0:head>Figure9</ns0:head><ns0:label /><ns0:figDesc>Figure9</ns0:figDesc><ns0:graphic coords='20,42.52,178.87,525.00,212.25' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,178.87,525.00,291.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,207.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Features and requirements of various deep-learning application areas.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Application</ns0:cell><ns0:cell>Required Latency</ns0:cell><ns0:cell>Required Accuracy</ns0:cell><ns0:cell>Cost</ns0:cell></ns0:row><ns0:row><ns0:cell>Military</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell>High</ns0:cell><ns0:cell>High</ns0:cell></ns0:row><ns0:row><ns0:cell>Medical Sciences</ns0:cell><ns0:cell>Medium</ns0:cell><ns0:cell>High</ns0:cell><ns0:cell>Medium-High</ns0:cell></ns0:row><ns0:row><ns0:cell>Video Surveillance</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell>Medium</ns0:cell><ns0:cell>Medium-High</ns0:cell></ns0:row><ns0:row><ns0:cell>Agriculture</ns0:cell><ns0:cell>High</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell>Low</ns0:cell></ns0:row><ns0:row><ns0:cell>Digit Classification</ns0:cell><ns0:cell>Medium-High</ns0:cell><ns0:cell>Low-Medium</ns0:cell><ns0:cell>Low</ns0:cell></ns0:row><ns0:row><ns0:cell>Stock Market</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell>High</ns0:cell><ns0:cell>High</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>have grown extremely large and deep. Therefore, modern neural networks cannot be efficiently trained</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>and/or executed on a general-purpose computer Lacey et al. (2016); Merolla et al. (2014). For efficient</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>processing and training, specialized hardware-based neural networks are required. Since dedicated</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>hardware platforms such as field-programmable gate arrays (FPGAs) and application-specific integrated</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>circuits (ASICs) offer a low-power, high-speed alternative to conventional personal computers (PCs),</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The forward computation component consumes 14,067 logic elements and the SGD learning algorithm component consumes 17,309 logic elements. A comprehensive comparison of various modern works is presented in Table2. In Table2, C&R stands for classification and regression. Summary of the related work.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Neuron</ns0:cell><ns0:cell>Algo.</ns0:cell><ns0:cell>Learning</ns0:cell><ns0:cell>Implem.</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Synapses</ns0:cell><ns0:cell>H/W</ns0:cell><ns0:cell>Application</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Platform</ns0:cell><ns0:cell>Platform</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>Efficiency</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Ramachandran, Prajit,</ns0:cell><ns0:cell>Swish</ns0:cell><ns0:cell>BP</ns0:cell><ns0:cell>Software</ns0:cell><ns0:cell>Software</ns0:cell><ns0:cell>Extr. High</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>Extr. Low</ns0:cell><ns0:cell>C&R</ns0:cell></ns0:row><ns0:row><ns0:cell>and Barret Zoph, and</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Quoc V. Le. (2018)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Sarić et al. (2020)</ns0:cell><ns0:cell>Sigmoid</ns0:cell><ns0:cell>BP</ns0:cell><ns0:cell>Software</ns0:cell><ns0:cell>FPGA</ns0:cell><ns0:cell>High</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>Mod. High</ns0:cell><ns0:cell>Epil. Seizure C.</ns0:cell></ns0:row><ns0:row><ns0:cell>Shymkovych et al. (2021)</ns0:cell><ns0:cell>Gaussian</ns0:cell><ns0:cell>BP</ns0:cell><ns0:cell>Software</ns0:cell><ns0:cell>FPGA</ns0:cell><ns0:cell>High</ns0:cell><ns0:cell>4-5</ns0:cell><ns0:cell>Mod. High</ns0:cell><ns0:cell>Classification</ns0:cell></ns0:row><ns0:row><ns0:cell>Clevert et al. (2015)</ns0:cell><ns0:cell>ELU</ns0:cell><ns0:cell>BP</ns0:cell><ns0:cell>Software</ns0:cell><ns0:cell>Software</ns0:cell><ns0:cell>Extr. High</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>Moderate</ns0:cell><ns0:cell>C&R</ns0:cell></ns0:row><ns0:row><ns0:cell>Wang et al. (2019)</ns0:cell><ns0:cell>ReLTanH</ns0:cell><ns0:cell>BP</ns0:cell><ns0:cell>Software</ns0:cell><ns0:cell>Software</ns0:cell><ns0:cell>High</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell>Fault Diagnosis</ns0:cell></ns0:row><ns0:row><ns0:cell>Lammie et al. (2019)</ns0:cell><ns0:cell>Binary</ns0:cell><ns0:cell>BP</ns0:cell><ns0:cell>Software</ns0:cell><ns0:cell>FPGA</ns0:cell><ns0:cell>High</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>High</ns0:cell><ns0:cell>Weed Classif.</ns0:cell></ns0:row><ns0:row><ns0:cell>Chen et al. (2016)</ns0:cell><ns0:cell>Mixed</ns0:cell><ns0:cell>BP</ns0:cell><ns0:cell>Software</ns0:cell><ns0:cell>ASIC</ns0:cell><ns0:cell>High</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>High</ns0:cell><ns0:cell>Classification</ns0:cell></ns0:row><ns0:row><ns0:cell>Tiwari and Khare (2015)</ns0:cell><ns0:cell>Sigmoid</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>FPGA</ns0:cell><ns0:cell>High</ns0:cell><ns0:cell>35</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell>C&R</ns0:cell></ns0:row><ns0:row><ns0:cell>Thanh et al. 
(2016)</ns0:cell><ns0:cell>Radial</ns0:cell><ns0:cell>SGD</ns0:cell><ns0:cell>FPGA</ns0:cell><ns0:cell>FPGA</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>Low</ns0:cell><ns0:cell>C&R</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Specifications of the platform used for performance evaluation.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Processor</ns0:cell><ns0:cell>Intel Core i7-5500 (4 CPUs)</ns0:cell></ns0:row><ns0:row><ns0:cell>Memory</ns0:cell><ns0:cell>8.00 GB</ns0:cell></ns0:row><ns0:row><ns0:cell>Operating System</ns0:cell><ns0:cell>Windows 8.1</ns0:cell></ns0:row><ns0:row><ns0:cell>System Type</ns0:cell><ns0:cell>x64 (64-bit OS)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Classification report: The proposed system.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='2'>Precision Recall</ns0:cell></ns0:row><ns0:row><ns0:cell>0 (benign)</ns0:cell><ns0:cell>0.95</ns0:cell><ns0:cell>1.00</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>1 (malignant) 1.00</ns0:cell><ns0:cell>0.97</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>0.975</ns0:cell><ns0:cell>0.985</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Confusion Matrix: The proposed system.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>TP = 38</ns0:cell><ns0:cell>FP = 0</ns0:cell></ns0:row><ns0:row><ns0:cell>FN = 2</ns0:cell><ns0:cell>TN = 73</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Classification Accuracy Comparisons</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Wuraola et al.</ns0:cell><ns0:cell>Aljarah et al.</ns0:cell><ns0:cell>Sarić et al.</ns0:cell><ns0:cell>Zhang et al.</ns0:cell><ns0:cell>Farsa et al.</ns0:cell><ns0:cell>Ortega-</ns0:cell><ns0:cell>Proposed</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>(2021)</ns0:cell><ns0:cell>(2018)</ns0:cell><ns0:cell>(2020)</ns0:cell><ns0:cell>(2020)</ns0:cell><ns0:cell>(2019)</ns0:cell><ns0:cell>Zamorano</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>et al. (2016)</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Features</ns0:cell><ns0:cell>784</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>25</ns0:cell><ns0:cell>4-35</ns0:cell><ns0:cell>30</ns0:cell></ns0:row><ns0:row><ns0:cell>Classes</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>2-6</ns0:cell><ns0:cell>2</ns0:cell></ns0:row><ns0:row><ns0:cell>Synapses</ns0:cell><ns0:cell>102k</ns0:cell><ns0:cell>96</ns0:cell><ns0:cell>153</ns0:cell><ns0:cell>144</ns0:cell><ns0:cell>130</ns0:cell><ns0:cell>≥ 84</ns0:cell><ns0:cell>160</ns0:cell></ns0:row><ns0:row><ns0:cell>Samples</ns0:cell><ns0:cell>70k</ns0:cell><ns0:cell>822</ns0:cell><ns0:cell>699</ns0:cell><ns0:cell>1000</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell><1000</ns0:cell><ns0:cell>569</ns0:cell></ns0:row><ns0:row><ns0:cell>Accuracy</ns0:cell><ns0:cell>96.71</ns0:cell><ns0:cell>95.14</ns0:cell><ns0:cell>98.32</ns0:cell><ns0:cell>96</ns0:cell><ns0:cell>73-89</ns0:cell><ns0:cell>88.26</ns0:cell><ns0:cell>≈98.23</ns0:cell></ns0:row><ns0:row><ns0:cell>(%)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>5.2 Implementation Cost and Throughput ComparisonsThe use of Sqish and Log SQNL<ns0:ref type='bibr' target='#b27'>Wuraola et al. (2021)</ns0:ref> allows the processing of one sample in one clock cycle. In a single cycle, the system can perform all MAC operations and can activate all the neurons 10/13 PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:71420:1:2:NEW 2 Jun 2022)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>FPGA Implementation Cost Comparisons</ns0:figDesc><ns0:table><ns0:row><ns0:cell>System</ns0:cell><ns0:cell>Acc.</ns0:cell><ns0:cell>Synapses</ns0:cell><ns0:cell>S. Regs.</ns0:cell><ns0:cell>S. LuTs</ns0:cell><ns0:cell>Max. Freq.</ns0:cell><ns0:cell>Mults.</ns0:cell><ns0:cell>Platform</ns0:cell><ns0:cell>Learning</ns0:cell></ns0:row><ns0:row><ns0:cell>Farsa et al. (2019)</ns0:cell><ns0:cell>89%</ns0:cell><ns0:cell>130</ns0:cell><ns0:cell>1023</ns0:cell><ns0:cell>11,339</ns0:cell><ns0:cell>189 MHz</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>Virtex 6</ns0:cell><ns0:cell>Offline</ns0:cell></ns0:row><ns0:row><ns0:cell>Ortega-Zamorano et al. (2016)</ns0:cell><ns0:cell>88.3%</ns0:cell><ns0:cell>84</ns0:cell><ns0:cell>6766</ns0:cell><ns0:cell>13,062</ns0:cell><ns0:cell>Variable</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>Virtex 5</ns0:cell><ns0:cell>Online</ns0:cell></ns0:row><ns0:row><ns0:cell>Sarić et al. (2020)</ns0:cell><ns0:cell>95.14%</ns0:cell><ns0:cell>96</ns0:cell><ns0:cell>114</ns0:cell><ns0:cell>12,960</ns0:cell><ns0:cell>50 MHz</ns0:cell><ns0:cell>116</ns0:cell><ns0:cell>Cyclone IV</ns0:cell><ns0:cell>Offline</ns0:cell></ns0:row><ns0:row><ns0:cell>Tiwari and Khare (2015)</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>35</ns0:cell><ns0:cell>1898</ns0:cell><ns0:cell>3,124</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>154</ns0:cell><ns0:cell>Virtex 5</ns0:cell><ns0:cell>Offline</ns0:cell></ns0:row><ns0:row><ns0:cell>Shymkovych et al. (2021)</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>790</ns0:cell><ns0:cell>1195</ns0:cell><ns0:cell>10 MHz</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>Spartan 3</ns0:cell><ns0:cell>Offline</ns0:cell></ns0:row><ns0:row><ns0:cell>Prop.</ns0:cell><ns0:cell>≈98.23%</ns0:cell><ns0:cell>160</ns0:cell><ns0:cell>983</ns0:cell><ns0:cell>2655</ns0:cell><ns0:cell>63.49 MHz</ns0:cell><ns0:cell>234</ns0:cell><ns0:cell>Virtex 6</ns0:cell><ns0:cell>Offline</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Throughput (TP) Comparisons</ns0:figDesc><ns0:table><ns0:row><ns0:cell>System</ns0:cell><ns0:cell>Synapses</ns0:cell><ns0:cell>Sample Size</ns0:cell><ns0:cell>NTP</ns0:cell></ns0:row><ns0:row><ns0:cell>Farsa et al. (2019)</ns0:cell><ns0:cell>130</ns0:cell><ns0:cell>25</ns0:cell><ns0:cell>4.73 × 10 9</ns0:cell></ns0:row><ns0:row><ns0:cell>Sarić et al. (2020)</ns0:cell><ns0:cell>96</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0.25 × 10 9</ns0:cell></ns0:row><ns0:row><ns0:cell>Shymkovych et al. (2021)</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0.04 × 10 9</ns0:cell></ns0:row><ns0:row><ns0:cell>Proposed</ns0:cell><ns0:cell>160</ns0:cell><ns0:cell>30</ns0:cell><ns0:cell>1.91 × 10 9</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Response to review of PeerJ Computer Science, Manuscript-ID CS-2022:03:71420:1:0:NEW
entitled
A High-Performance, Hardware-based Deep Learning System for Disease Diagnosis
Addressed comments for publication as a regular paper to
PeerJ Comptuer Science
By
Ali Siddique, Muhammad Azhar Iqbal, Muhammad Aleem, Jerry Chun-Wei Lin
July 2022
May 01, 2022
Dear Editor-in-Chief,
PeerJ Computer Science.
Please find enclosed the revision of our paper “A High-Performance, Hardware-based Deep Learning System for Disease Diagnosis ” with Manuscript-ID “CS-2022:03:71420:1:0:NEW”. We would like to forward our sincere thanks to all anonymous Reviewers and Editor-in-Chief for their detailed and valuable suggestions, which significantly contributed to revising the original manuscript.
We have carefully revised the manuscript and provided relevant responses to the concerns raised by the anonymous reviewers. We declare that this paper is not submitted to another journal(s) or conference(s) for possible publication, and the contents presented in the paper have not been published, except for the cited information.
If you have any questions, please contact us without any hesitation.
Thank you very much!
Yours sincerely,
Ali Siddique
Muhammad Azhar Iqbal
Muhammad Aleem
Jerry Chun-Wei Lin
Note: To help legibility of the remainder of this response letter, all the reviewers’ and editor’s comments and questions are typeset in italic font. Our responses and remarks are written in plain font.
PeerJ Computer Science
Manuscript-ID CS-2022:03:71420:1:0:NEW
Authors’ response to Reviewer 1
Comment 1:
Basic reporting
1) The paper proposes a novel methodology to diagnose cancer. The abstract should be verified in order to remove the grammatical errors.
2) Which type of cancer and from which organ has to be explained in detail.
The abstract states that the proposed system can also detect diseases from the heart. This sentence can be removed or the evidence for this sentence could be included.
3) Table 1 must be shown in the introduction section as it is not related to the literature review. Rather, a summary of the literature can be included as a separate table in thissection.
4) The proposed solution can be written as Proposed Methodology.
5) More details can be included in the Basic ANN Operation as it plays a major role in this paper.
Experimental design
6) It will be well if the experiments were done for the proposed system.
A table explaining the values of TP, FP, and TN for the input data should be included.
Validity of the findings
7) Table 3: Comparison of Various Neural Networks in terms of Classification Accuracyhas to be rewritten so that it can be easily understandable to the readers.
Response:
1. The abstract has been modified as per instructions.
2. A breast cancer dataset (Wisconsin Breast Cancer) has been used for performance evaluation of the system. Most non-image datasets regarding disease diagnosis have two output classes and less than 30 inputs. The proposed system allows 30 11-bit inputs and can classify an input sample into one of the two classes. Therefore, the proposed system is capable of classifying/diagnosing almost all diseases. Whether or not a person has any heart ailment can be easily diagnosed by the proposed system.
3. Table 1 has been moved to the ‘Introduction’. A summary of the literature has been added to the manuscript. Please refer to Table 2.
4. The title ‘Proposed Solution’ has been changed to ‘Proposed Methodology’.
5. The Basic ANN Operation is more detailed than it was in previous submission.
6. All the required experiments have carefully been carried out. The following information has been tabulated: TP, FP, TN. Moreover, the classification report has been included in the manuscript. Please refer to Table 4 and Table 5. The classification accuracy is presented in Table 6. A comparison of the proposed hardware system with other contemporary systems is presented in Table 7 and Table 8.
7. The caption of Table 3 has been made more clear. The new title is, “Classification Accuracy Comparisons”.
Authors’ response to Reviewer 2
Basic reporting
More points have to be included based on the complete methodology and results
The sentence “Moreover, the system is reconfigurable and can be programmed toclassify any sample into one of two classes” has to be justified as the classification isnowhere shown throughout the manuscript.
Response:
The proposed system can classify an input sample—that supports up to 30 features—into one of the two classes (binary classification). The dataset Wisconsin Breast Cancer (WBC) has been used for performance evaluation. The cancer samples in the dataset used are either ‘benign’ or ‘malignant’. The proposed system is capable of classifying/diagnosing various diseases i.e., heart, kidney, etc. This is because most non-image datasets regarding disease diagnosis have two output classes and less than 30 inputs, and the proposed system allows 30 11-bit inputs and can classify an input sample into one of the two classes (binary classification). These points have been mentioned in several places in the paper. For your convenience, all such sentences have been highlighted.
Experimental design
An experiment has to be done in order to depict the prediction of 63.5 million cancersamples in a second. Also, a tabulation has to be framed to analyze the prediction accuracy.
Response:
The experiments, simulation, testing, and synthesis have already been done. The proposed system can operate at a maximum frequency of 63.5 MHz. The system can classify one sample in one clock cycle. Therefore, the proposed system can predict 63.5 million cancer samples in a second. This has been highlighted in the Results and Discussion. The prediction/classification accuracy is mentioned in Table 3.
Validity of the findings
It will be well if executed results of image samples were shown in the results and discussion section so that the proposed methodology seems to be executed.
Response:
The dataset Wisconsin Breast Cancer (WBC), used for performance evaluation, is not based on images, but on features such as age, blood pressure, etc. Therefore, it is impossible to provide sample images.
Authors’ response to Reviewer 3
Basic reporting
The paper proposed a hardware implementation for the cancer diagnosis problem. However, the contribution is not clear in the hardware methodologies that are introduced
1- The proposed architecture is very weak and has no enough contribution
as in Figure 3: Proposed ANN Topology
2- The proposed system is not clear from Figure 6: Internal Structure of the Proposed Hardware System.
3- Figures 4,5, and 7 are not clear.
Response:
1. Figure 3 does not represent the proposed hardware system. The proposed system is presented in Figures 6-8. The sole purpose of Figure 3 is to show to the readers the network structure, i.e., the system has 30 input neurons, 5 hidden neurons, and 2 output neurons, connected in an all-all fashion.
2. Figure 6 shows just the top-level view of the proposed system. The expanded view is given in Figure 7 and Figure 8. Moreover, the RTL schematics are shown in Figure 4 and Figure 5.
3. Figures 4, 5, 7 are clear now.
Experimental design
1- The experiments should include several cancer datasets and evaluate the performance between them
2- The splitting technique is very weak, I think you should use 10 folds cross-validation to get fair comparison.
3- The training parameters needs to be illustrated why u choose these parameters
The learning rate is kept equal to 1/3 . The momentum is equal to 0.9. The data isprocessed in batches in order to achieve high accuracy; the batch size used in the proposed system is 100.
Response:
1. A hardware system can be optimized only for a particular type of datasets. It is impossible to use multiple datasets and design an optimal system.
2. The dataset Wisconsin Breast Cancer (WBC), used for performance evaluation, is one of the most popular datasets for cancer diagnosis. As per defined rules, about 20% of samples have been used for testing and the remaining ones are used for training.
3. All hyper-parameters (batch size, learning rate, momentum, etc.) have been found through empirical tuning using ‘grid search’. This point has been highlighted in the revised manuscript.
Validity of the findings
The paper did not contribute enough to be published in the journal and needs to high modification levele mentioned and highlighted this under the section ‘Test Conditions’.
Response:
More details have been added and highlighted that show novelty and significance of the proposed work (See Section 1 and Section 3).
Authors’ response to Reviewer 4
Basic reporting
Authors have proposed a hardware-based deep learning system for cancer diagnosis. The research problem selected by author(s) is timely and important to be addressed. The overall structure and organization of the paper are satisfactory, and the paper qualifies for an above-average up-to-date bibliography. However, there are a few issues that are required to be addressed by the authors.
1) The main problem with this paper is coherency. In many paragraphs, sentences arenot written coherently.
2) Why the title is specific to Cancer Diseases? Authors have also claimed the ability of the proposed scheme to predict the presence of other diseases. So, the title should be a hardware-based deep learning system for disease diagnosis.
3) Authors are encouraged to improve the abstract by focusing only on the mostsignificant details that are unique to their proposal. Authors have simply claimed in theabstract that “In this context, we propose a hardware-based neural network that canpredict the presence of cancer in humans with 98.23% accuracy.” But they didn’t mention the features of their system that help to gain an accuracy of 98.23%. Authors are advised to write their significant contributions clearly. Moreover, in the abstract, ithas been mentioned that the proposed system is about 5 to 16 times cheaper and atleast four times speedier than many other contemporary systems? What is meant bymany? Not clear.
4) Authors have several times used the term hardware friendly but have not describedit. For example, in the abstract and other sections, it is stated “this is why scientists have come up with functions that are not only accurate but are friendly to hardware platforms”. What is meant by friendly to hardware platforms? Any reference?
5) “Conventional activation functions such as sigmoid and hyperbolic tangent (TanH)yield high accuracy but are not suitable for hardware implementations. This is because they involve division and many other hardware-inefficient operations.”Authors have mentioned
this fact, but without reference(s).
Response:
1. The paper is now clearer than before.
2. The title has been modified as per suggestions.
3. The proposed techniques are mentioned in the abstract now. The abstract has been modified as per suggestions.
4. A function that does not contain complex computational elements such as dividers and exponents is considered ‘hardware friendly’. The relevant references have been mentioned. The required parts have been highlighted.
5. The relevant references have been provided.
Experimental design
6) It seems there is no need for this sentence “To know more about efficient implementation of neural networks on edge devices, the reader is referred to[11].” Is the implementation of neural networks on edge devices is related to the authors’ work?
7) Authors have claimed “this is the reason why we adopt these functions forimplementation in the proposed system. Adopted these functions for implementationof what? in the proposed system. It is not clear here.
8) A summary or comparison table of proposed techniques discussed in the literature review is not available.
9) Details in paragraphs 2, 3, and 4 of Section 1 are not coherent. It is highly recommended to re-write these paragraphs to highlight your contribution.
Response:
6. The following sentence has been removed: “To know more about efficient implementation of neural networks on edge devices, the reader is referred to[11]”.
7. A neural network is an intricated network of neurons. Different activation functions can be used for building neurons in a neural network. The hardware-friendly functions Sqish and LogSQNL have been used for constructing neurons in the proposed system. The rationale is that these functions do not contain any complex exponential term or long divisions. The required multiplications can easily be carried out using the DSP48 elements built in an FPGA.
8. A summary of various modern works is added in tabular form to Section 2 (Literature Review).
9. Paragraphs 2, 3, 4 in Section 1 have been re-written.
Validity of the findings
10) The required level of accuracy, speed, etc. depends on the underlying application, as shown in Table 1. Delay is not mentioned and how this line is related to the other lines of this paragraph. Again, authors are advised to be coherent in writing paragraphs.
11) Section 3 proposed solution starts with the working of the proposed system but authors must add text about the proposed techniques here. What is the proposed system, components, and working?
12) “Data Preprocessing” heading is not required and the text under this heading should be adjusted in section 3.1.
13) Figure 4 is not clear.
14) Figure 6 should be discussed in section 3.2.
15) The proposed algorithm is compared with other state-of-the-art algorithms interms of classification accuracy, throughput, and implementation cost. Which state-ofthe art algorithms? Names?
16) The proposed system consumes only 983 slice registers, 2655 slice look-uptables, 234 DSP48 elements, and 33 block random access memories (BRAMs). But why? It has not been mentioned.
17) It can be seen from Table 4 and Table 5 that the proposed system is about 5-16 times cheaper and at least four times speedier than most modern systems. What modern systems? Mention features of your proposed solutions that make these results better.
18) Future work or extension is not clear. Just one line has been mentioned, “In future, more complex datasets can be chosen for better diagnosis.” This is not enough to say what is meant by complex datasets?
Response:
10. Delay is the inverse of frequency. The frequency of the system is 64.5 MHz. Since the system takes an input sample every clock cycle, the delay between two consecutive samples is () 15.74 ns. The sentences are more coherent than before.
11. The proposed hardware system is presented in Section 3.4. The operation of the constituent layers is given in both Section 3.2 and Section 3.4.
12. The contents of the heading “Data Pre-processing” have been re-adjusted.
13. Figure 4 is clear now.
14. The details of Figure 6 have been included in Section 3.2.
15. Actually, it is “systems”, not “algorithms”. The sentence has been modified. Moreover, names of such systems along with appropriate references have been added and highlighted.
16. The synthesis process is entirely dependent on the synthesizer (Xilinx ISE in this case) and the user does not have a great degree of control over it. However, there are certain reasons why the proposed system performs so well: high parallelism, use of cost-efficient neurons (Sqish and LogSQNL), pipelining, and proper hyperparameter tuning.
17. The high speed can be attributed to the parallelism inherent to the proposed system. The low cost can be attributed to the use of cost-efficient neurons, i.e., Sqish and Log_SQNL. These neurons do not contain costly elements and at the same time, yield the same accuracy as that yielded by Swish and Sigmoid. It is pertinent to mention that Swish and Sigmoid are quite costly. Sqish and Log_SQNL provide a cost-effective alternative to Swish and Sigmoid.
18. CoViD-19 datasets based on mammograms are quite complex. In fact, any dataset that contains high-resolution images is considered ‘complex’. In order to achieve high accuracy, convolutional neural networks (CNNs), suitable for image-based datasets, should be used. The ‘Conclusion’ has been updated.
" | Here is a paper. Please give your review comments after reading it. |
712 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The Internet of Things (IoT) concept has evolved into a slew of applications. Satisfying the requirements of these applications with cloud computing alone is difficult because of the high latency caused by the distance between IoT devices and cloud resources. Fog computing has become promising with its geographically distributed infrastructure, providing resources through fog nodes near IoT devices and thereby reducing bandwidth usage and latency.</ns0:p><ns0:p>The geographical distribution, heterogeneity and resource constraints of fog nodes introduce the key challenge of placing application modules/services in such a large-scale infrastructure. In this work, we propose an improved version of the JAYA approach for optimal placement of modules that minimizes the energy consumption of a fog landscape. We analyzed the performance in terms of energy consumption, network usage, delays and execution time. Using iFogSim, we ran simulations and observed that our approach reduces energy consumption by 31\% on average compared to modern methods.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Swarm Optimization (PSO) and genetic algorithms were used. However, these algorithms tend to fall into local optima and are sensitive to the initial population.</ns0:p><ns0:p>In the proposed algorithm, we introduce a new operator into the JAYA algorithm called 'Levy flight', which produces a random walk following a heavy-tailed probability distribution. We use the proposed approach for module placement in the cloud-fog environment. The 'Levy flight' escapes locally optimal solutions, resulting in an efficient placement of the modules in the fog landscape. The proposed Levy flight based JAYA (LJAYA) approach leads to a fair trade-off between utilization of the fog landscape and the energy consumption of running applications in it.</ns0:p><ns0:p>The following are the major contributions of this research:</ns0:p><ns0:p>• We formulate the service/module placement problem to minimize energy consumption.</ns0:p><ns0:p>• A new Levy flight based JAYA algorithm is proposed to solve the module/service placement problem in the fog landscape.</ns0:p><ns0:p>• Experiments for performance analysis are conducted by varying loads and considering the said metrics. The results show that the proposed placement approach significantly optimizes the module/service placement and reduces energy consumption.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>RELATED WORKS</ns0:head><ns0:p>With the continuous development of fog computing technology, resource management has become a difficult task <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref>. This section presents the existing resource management techniques with their advantages and limitations. A quick overview of some of the proposed module/service placement approaches is provided below. Fog computing deals with computationally intensive applications at the edges of the network, where allocating computing and communication resources under QoS requirements remains challenging. The issue of task scheduling and resource allocation for multiple devices in wireless IoT networks has been widely investigated. Xi Li et al. <ns0:ref type='bibr' target='#b14'>[14]</ns0:ref> proposed a non-orthogonal multiple access approach. Since the choice of computing mode affects energy consumption and average delay, their method selects a suitable computing mode that offers good performance. The optimization issue is formulated as a mixed-integer nonlinear programming problem aimed at reducing energy consumption, and the authors used an Improved Genetic Algorithm (IGA) to solve it.</ns0:p><ns0:p>Zhu et al. <ns0:ref type='bibr' target='#b29'>[28]</ns0:ref> proposed Folo, which aims to reduce latency and comprehensive quality loss while also facilitating the mobility of vehicles; a bi-objective minimization problem for task allocation to fog nodes is introduced. Vehicular networks are widely adopted as a result of emerging technologies in wireless communication, inventive manufacturing, and so on. Lin et al. <ns0:ref type='bibr' target='#b15'>[15]</ns0:ref> investigated resource allocation management in vehicular fog computing with the aim of reducing the energy consumption of the computing nodes and improving the execution time. A utility model is also built that follows two steps: first, sub-optimal solutions based on the Lagrangian algorithm are derived to solve the problem; then, an optimal solution selection procedure is proposed. QoS might degrade for battery-powered mobile devices due to a lack of energy supply.</ns0:p><ns0:p>Chang et al. <ns0:ref type='bibr' target='#b5'>[5]</ns0:ref> proposed an Energy Harvesting (EH) technique that helps devices gain energy from the environment, and reduced the execution cost through a Lyapunov optimization algorithm. Huang et al. <ns0:ref type='bibr' target='#b11'>[11]</ns0:ref> addressed the energy-efficient resource allocation problem in fog computing networks; to increase network energy efficiency, they proposed a Fog Node (FN) based resource allocation algorithm and converted the problem into a Lyapunov optimization. Due to the immense volume of data transmissions, big data has intensified communication issues, and fog computing has been adopted to alleviate them. However, resource management is still limited by the amount of accessible heterogeneous computing that fog computing relies on. Gai et al. <ns0:ref type='bibr' target='#b7'>[7]</ns0:ref> address this problem by proposing an Energy-Aware Fog Resource Optimization (EFRO) approach. EFRO considers three components: the cloud, fog and edge layers.
This approach integrates standardization and smart shift operations, which also reduce energy consumption and scheduling length.</ns0:p><ns0:p>To reduce the delays caused by inefficient task scheduling in fog computing, Potu et al. <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref> proposed an Extended Particle Swarm Optimization (EPSO) that helps optimize the task scheduling problem. Load balancing techniques in fog computing follow two approaches: dynamic load balancing and static load balancing. Singh et al. <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref> compared various load balancing algorithms and found the round-robin load-balancing algorithm to be fundamentally simple. Jamil et al. <ns0:ref type='bibr' target='#b12'>[12]</ns0:ref> proposed a QoS-based load balancing algorithm, the custom load method. This algorithm aims to increase the use of fog devices in a specific area while reducing energy consumption and latency. When it comes to resource optimization, linear programming is a popular approach. Arkian et al. <ns0:ref type='bibr' target='#b9'>[9]</ns0:ref>, in their work, suggested a mixed-integer programming approach that took into account the base station association as well as task distribution.</ns0:p><ns0:p>Skarlat et al. <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref> introduced fog colonies and used a Genetic Algorithm (GA) to decide where the services have to be placed within the colonies. Time Cost Aware Scheduling was proposed by Binh et al. <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. The algorithm distributes jobs to the client as well as the fog layer based on overall response time, data centre costs, and processing time; however, there is no dynamic allocation of resources, and the approach allocates the resources before processing begins. Liu et al. <ns0:ref type='bibr' target='#b16'>[16]</ns0:ref> presented a cooperative game based method to maximize the number of tasks whose deadlines are satisfied in a Multi-User Mobile Edge Computing (MEC) environment. Alelaiwi et al. <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> have taken this a step forward by using deep learning to optimize the response time for critical tasks in the fog landscape.</ns0:p><ns0:p>Chen et al. <ns0:ref type='bibr' target='#b6'>[6]</ns0:ref>, <ns0:ref type='bibr' target='#b27'>[26]</ns0:ref> focused on how a user's independent computing tasks are distributed between their end device, a computing access point and a remote cloud server. To reduce the energy consumption of these components, they employ semi-definite relaxation and a randomized mapping method. Shefali Varshney et al. <ns0:ref type='bibr' target='#b25'>[25]</ns0:ref> proposed an Analytic Hierarchy Process (AHP) based method for distributing applications to a suitable fog layer. The suggested framework assures end-user QoE and is evaluated in terms of storage, CPU cycles, and processing time.</ns0:p><ns0:p>Improving the mapping of application modules/services to fog nodes is a promising research direction. Module placement algorithms have been proposed in the literature, but there is still scope for improving the optimality of the solutions. Most of the existing solutions focus on minimizing latency in the fog landscape. This paper proposes an enhanced module placement algorithm using Levy flight. Our goal is to reduce energy consumption, network utilization and execution time.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>PROBLEM FORMULATION</ns0:head><ns0:p>The fog-cloud design takes advantage of both edge and cloud computing capabilities. Low-latency processing is carried out at lower-level fog nodes that are distributed geographically while leveraging centralized cloud services.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Architecture of Fog computing</ns0:head><ns0:p>Fog computing is a type of computing that takes place between the end nodes and the cloud data centre.</ns0:p><ns0:p>The cloud, fog, and IoT sensors are the three layers of the fog architecture. Sensors capture and emit data but do not have computation or storage capability. Along with sensors, we have actuators to control the system and react to changes in the environment detected by the sensors. Fog nodes are devices with modest computing capability and network-connected devices such as smart gateways and terminal devices. This layer collects data from sensors and performs data processing before sending it to the upper layers. Fog computing is suitable for low-latency applications. As shown in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>, we extend the basic framework of fog computing in <ns0:ref type='bibr' target='#b4'>[4,</ns0:ref><ns0:ref type='bibr' target='#b10'>10]</ns0:ref> by allowing service/module placement in both the fog and the cloud. For this, we introduce two levels of control: i) the cloud-fog controller and ii) the fog orchestration controller (FOC). The cloud-fog controller controls all fog nodes. Fog orchestration controllers are a special kind of fog node used to run the IoT applications without any involvement of the cloud. A fog orchestration controller is responsible for all the nodes connected to it, called a fog colony. Our fog architecture supports a hierarchy with the cloud-fog controller, fog orchestration controllers, fog nodes, and the sensor/IoT devices at the bottom layer.</ns0:p><ns0:p>The controller nodes need to be provided with the information to analyze the IoT application and place the respective modules onto virtualized resources. For example, the fog orchestration controller is provided with complete details about its fog colony and the state of neighbourhood colonies. With this information, the scheduler develops a service placement plan and accordingly places the application modules on particular fog resources. The fog landscape consists of a set of fog nodes ( f 1 , f 2 , ......, f n ). These fog nodes are split into colonies, with a FOC node in charge of each colony. Each fog node f j is equipped with sensors and actuators.</ns0:p><ns0:p>Each fog node f j can be described by a tuple < id, R j , S j , Cu j >, where id is the unique identifier, R j is the RAM capacity, S j is the storage capacity and Cu j is the CPU capacity of the fog node.</ns0:p></ns0:div>
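<ns0:p>To make the notation above concrete, the fog-node tuple < id, R j , S j , Cu j > can be expressed as a small data structure. The sketch below is purely illustrative and is not the implementation used in the paper (the experiments rely on iFogSim in Java); the class name, field names and capacity values are hypothetical.</ns0:p>

```python
from dataclasses import dataclass

@dataclass
class FogNode:
    """A fog node described by the tuple <id, R, S, Cu> from Section 3.1."""
    node_id: int    # unique identifier
    ram: float      # R_j, RAM capacity (e.g., MB)
    storage: float  # S_j, storage capacity (e.g., MB)
    cpu: float      # Cu_j, CPU capacity (e.g., MIPS)

# Example fog colony: one FOC-class node plus two ordinary fog nodes (illustrative values).
colony = [
    FogNode(node_id=1, ram=4000, storage=32000, cpu=2800),
    FogNode(node_id=2, ram=1000, storage=16000, cpu=1000),
    FogNode(node_id=3, ram=1000, storage=16000, cpu=1000),
]
```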
<ns0:div><ns0:head n='3.2'>IoT applications and services</ns0:head><ns0:p>Let W denote a set of different IoT applications. The Distributed Data Flow (DDF) deployment approach is used for the IoT application, as stated in <ns0:ref type='bibr' target='#b8'>[8]</ns0:ref>. Each of these applications (W k ) is made up of several modules, where each module m j ∈ W k is to be executed on the fog/cloud resources. All the modules that belong to an application (W k ) need to be deployed before W k starts execution. Once the application executes, modules will communicate with each other, and data flows between modules. The application response time r A is calculated as shown in equation 1.</ns0:p><ns0:formula xml:id='formula_0'>r A = makespan(W k ) + deployment(W k )<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>Where makespan(W k ) is the sum of the makespan durations of each module m j ∈ W k and the execution delays. The makespan(m j ) is the total time spent by the module from its start to its completion. deployment(W k ) is the sum of the current deployment time deployment t W k and the additional time for propagation of the module to the closest neighbour colony. We assume that the application's deployment time includes administrative tasks such as module placement. Each module m j is defined by a tuple < CPU m j , R m j , S m j , Type >, where the first three elements are the demands for CPU, main memory, and storage. The service type indicates the specific kind of computing resource required by a module m j . Our goal is to utilize the fog landscape to the maximum extent, and the placement of modules must reduce the total energy consumption of the fog landscape. This issue is referred to as the Module Placement Problem (MPP) in the fog landscape. The controllers monitor all the fog nodes. Each fog node f i has fixed processing power CPU i and memory R i . Let m 1 , m 2 , m 3 ....., m p be the modules that need to be placed onto the set of fog nodes ( f 1 , f 2 , ......, f n ). This work addresses the MPP to reduce the delay in application processing and the total energy consumption of the fog landscape. A Levy-based JAYA (LJAYA) algorithm for mapping modules to fog nodes has been developed. In the proposed approach, each solution is modelled by an array. This array consists of integer numbers (unique identifiers of fog nodes) corresponding to the fog node on which the modules m 1 , m 2 , m 3 ....., m p will be placed.</ns0:p><ns0:formula xml:id='formula_1'>Solution i = ( f 3 , f 9 , .... f i , ..., f 6 ).</ns0:formula><ns0:p>This solution places m 1 onto f 3 , m 2 onto f 9 , and so on.</ns0:p></ns0:div>
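<ns0:p>The array encoding of a candidate solution can be illustrated with a short sketch. The following Python fragment is an assumption-laden illustration rather than the authors' code: the module demands, node capacities and population size are invented, and cumulative resource usage when several modules share one node is ignored for brevity.</ns0:p>

```python
import random

# Module demands <CPU, RAM, Storage> from Section 3.2 (illustrative numbers).
modules = [
    {"cpu": 1000, "ram": 512,  "storage": 100},
    {"cpu": 2000, "ram": 1024, "storage": 200},
    {"cpu": 500,  "ram": 256,  "storage": 50},
]

# Candidate fog nodes, keyed by their unique identifiers.
fog_nodes = {
    1: {"cpu": 2800, "ram": 4000, "storage": 32000},
    2: {"cpu": 1000, "ram": 1000, "storage": 16000},
    3: {"cpu": 2800, "ram": 4000, "storage": 32000},
}

def feasible(module, node):
    """A node can host a module only if it satisfies all resource demands."""
    return (node["cpu"] >= module["cpu"]
            and node["ram"] >= module["ram"]
            and node["storage"] >= module["storage"])

def random_solution():
    """One candidate: position j holds the id of the fog node hosting module m_j."""
    solution = []
    for m in modules:
        candidates = [nid for nid, n in fog_nodes.items() if feasible(m, n)]
        solution.append(random.choice(candidates))
    return solution

population = [random_solution() for _ in range(10)]
print(population[0])  # e.g. [3, 1, 2] -> m1 on f3, m2 on f1, m3 on f2
```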
<ns0:div><ns0:head n='3.3'>Energy consumption model</ns0:head><ns0:p>An efficient placement strategy can optimize fog resources and minimize energy consumption. Most of the previous placement algorithms have focused on enhancing the performance of the fog landscape while ignoring the energy consumption. The energy consumption by a fog node/controller can be accurately described as a linear relationship of CPU utilization <ns0:ref type='bibr' target='#b21'>[21,</ns0:ref><ns0:ref type='bibr' target='#b1'>2]</ns0:ref>. We define energy consumption of a computing node (P i ) considering idle energy consumption and CPU utilization (u), given in equation 2:</ns0:p><ns0:formula xml:id='formula_2'>P i (u) = k * P max + (1 − k) * P max * u<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>P max is the energy consumption of a host running with full capacity (100% utilization), k represents the percentage of power drawn by an idle host. The total energy consumption of fog landscape with n nodes can be determined using equation 3 <ns0:ref type='bibr' target='#b13'>[13]</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_3'>E = n ∑ i=1 P i (u)<ns0:label>(3)</ns0:label></ns0:formula></ns0:div>
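<ns0:p>Equations 2 and 3 can be checked with a few lines of code. The sketch below is illustrative only; the busy/idle power figures are taken from the fog-device row of Table 2, while the utilization values are made up.</ns0:p>

```python
def node_power(p_max, k, utilization):
    """Equation 2: P_i(u) = k*P_max + (1 - k)*P_max*u, with u in [0, 1]."""
    return k * p_max + (1.0 - k) * p_max * utilization

def landscape_energy(nodes):
    """Equation 3: total consumption of the fog landscape, summed over all nodes.

    `nodes` is a list of (p_max, k, utilization) triples.
    """
    return sum(node_power(p_max, k, u) for p_max, k, u in nodes)

# Fog device profile from Table 2: busy power 107.3 W, idle power 83.43 W => k = 83.43/107.3.
k = 83.43 / 107.3
print(node_power(107.3, k, 0.0))   # ~83.43 W at idle
print(node_power(107.3, k, 1.0))   # 107.3 W at full utilization
print(landscape_energy([(107.3, k, 0.5), (107.3, k, 0.8)]))
```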
<ns0:div><ns0:head n='3.4'>Module placement using Levy based JAYA algorithm</ns0:head><ns0:p>The wide spectrum of bio-inspired algorithms, notably evolutionary computation and swarm intelligence, is probabilistic. Obtaining high performance with these algorithms depends highly on fine-tuning algorithm-specific parameters. Rao <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref> introduced the JAYA algorithm, which has few algorithm-specific parameters, to tackle this disadvantage. The JAYA algorithm updates each candidate using the global best and worst solutions, moving towards the best and away from the worst particle. This algorithm updates the solution according to equation 4. The population is updated until the optimal solution is found or the maximum number of iterations is reached.</ns0:p><ns0:formula xml:id='formula_4'>Solution i+1 = Solution i + r 1 * (B i − Solution i ) − r 2 * (W i − Solution i )<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>Where Solution i is the value at the i th iteration, and Solution i+1 is the updated value. r 1 , r 2 are random numbers and W i , B i are the worst and best solutions according to the fitness value.</ns0:p><ns0:p>We modified the JAYA algorithm by introducing a new operator that searches the vicinity of each solution using a Levy flight (LF). A Levy flight produces a random walk following a heavy-tailed probability distribution. Levy flight steps are distributed according to the Levy distribution, with many small steps and some rare, very long steps. The long jumps improve the algorithm's global search capability (exploration), while the small steps improve the local search capability (exploitation). The update in our approach is as follows:</ns0:p><ns0:formula xml:id='formula_5'>Solution i+1 = Solution i + LF(Solution i ) + r 1 * (B i − Solution i ) − r 2 * (W i − Solution i )<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>Where</ns0:p><ns0:formula xml:id='formula_6'>LF(Solution i ) = 0.01 * (u / v 1/β ) * (Solution i − B i )<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>Where u and v are two numbers drawn from normal distributions, B i is the best solution and 0 < β < 2 is an index.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref> shows the steps involved in the improved JAYA algorithm for module/service placement; they are described as follows. In the decision step of Figure 2, if the condition is true, the input for the next iteration is the updated particle obtained after applying Equations 5 and 6 to the original particle; if the condition is false, the input for the next iteration is the original particle.</ns0:p><ns0:p>Step 1: Initial solution. Each solution/candidate is a randomized list where each entry specifies the fog node that satisfies the requirement of a given module. For example, the second module request will be placed on the fog node given as the second element of the list. Then the fitness of each solution is calculated using the energy model given in equation 3.</ns0:p><ns0:p>Step 2: Updation. Calculate the fitness of each candidate and select the solutions that lead to the highest and lowest fitness (energy consumption in our case) values as the worst and best candidates. The movement of all the candidates is revised using the global best and worst according to equation 4. This equation changes the candidates' direction so that they move towards better solution areas.</ns0:p></ns0:div>
<ns0:div><ns0:head>Step 3: Spatial dispersion</ns0:head><ns0:p>To improve the exploration and exploitation of the particles, we add the Levy distribution to the updated particles, as shown in equation 5. We keep Solution i+1 if it is a more promising solution than Solution i . In the next iteration, we apply these operations to the updated population. During this process, all candidates move towards optimal solutions while keeping away from the worst candidate.</ns0:p><ns0:p>Step 4: Final selection</ns0:p><ns0:p>All the particles are updated until the global optimum is found or the number of iterations is exhausted. Finally, the solution with the best fitness value is selected, and the modules are placed on the respective fog nodes.</ns0:p></ns0:div>
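<ns0:p>A minimal sketch of one LJAYA update (Equations 4-6) is given below. It is not the authors' implementation: the paper only states that u and v are drawn from normal distributions, so the Mantegna-style scaling of the Levy step and the rounding of each coordinate to the nearest valid fog-node identifier are assumptions made purely for illustration.</ns0:p>

```python
import math
import random

def levy_step(beta=1.5):
    """Draw one Levy-distributed step from two normal draws (Mantegna-style scaling)."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = random.gauss(0, sigma_u)
    v = random.gauss(0, 1)
    return u / abs(v) ** (1 / beta)

def ljaya_update(solution, best, worst, node_ids, beta=1.5):
    """One LJAYA move: JAYA attraction/repulsion (Eq. 4/5) plus a Levy perturbation (Eq. 6),
    followed by snapping each coordinate back to a valid fog-node identifier."""
    new_solution = []
    for x, b, w in zip(solution, best, worst):
        r1, r2 = random.random(), random.random()
        lf = 0.01 * levy_step(beta) * (x - b)          # Eq. 6, Levy flight term
        x_new = x + lf + r1 * (b - x) - r2 * (w - x)   # Eq. 5
        x_new = int(round(x_new))                      # discretise (assumption)
        x_new = min(max(x_new, min(node_ids)), max(node_ids))
        new_solution.append(x_new)
    return new_solution

# Example: 3 modules, fog nodes 1..5; best/worst candidates taken from the current population.
print(ljaya_update([3, 1, 4], best=[2, 1, 5], worst=[5, 5, 1], node_ids=range(1, 6)))
```

<ns0:p>In a full run, the candidate produced by this move would be kept only if its fitness (total energy, equation 3) improves on the current one, as described in Step 3 above.</ns0:p>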
<ns0:div><ns0:head n='4'>PERFORMANCE EVALUATION</ns0:head><ns0:p>We simulated a cloud-fog environment using iFogSim <ns0:ref type='bibr' target='#b10'>[10]</ns0:ref>. It is a generalized and extensible framework for simulating various fog components and real-time applications. iFogSim allows the simulation and evaluation of resource management algorithms on a fog landscape, and it is widely used in academia and industry to evaluate resource allocation algorithms and energy-efficient management of computing resources. Therefore, we also used iFogSim to run our experiments. We analyzed the proposed approach with respect to energy consumption, delays, execution time, network usage, etc. We considered Intelligent Surveillance through Distributed Camera Networks (ISDCN) for our work. Smart camera-based distributed video surveillance has gained popularity as it has a lot of applications such as connected cars, security, smart grids, and healthcare. However, manually monitoring video from multiple sites makes surveillance quite complex. Hence, we need video management software to analyze the camera feeds and provide complex functions such as object detection and tracking. Low-latency connectivity, handling large amounts of data, and extensive long-term processing are all required for such a system <ns0:ref type='bibr' target='#b10'>[10]</ns0:ref>.</ns0:p><ns0:p>When motion is detected in the smart camera's Field Of View (FOV), it begins delivering a video feed to the ISDCN application. The target object is identified by the application and located in each frame. Moving object tracking is accomplished by adjusting the camera parameters from time to time.</ns0:p><ns0:p>The ISDCN application comprises five modules, as shown in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>. The first module is the Object Detector, which identifies an object in a given frame. The second module is for Motion Detection, and the third module tracks the identified object over time by updating the Pan-tilt-zoom (PTZ) Control parameters.</ns0:p><ns0:p>The user interface displays the detected object. A detailed description of these modules is given in <ns0:ref type='bibr' target='#b10'>[10]</ns0:ref>. The application takes the feed from a number of CCTV cameras, and after processing these streams, the PTZ control parameters are adjusted to track the object. The edges connect the modules in the application, and these edges carry tuples. Table <ns0:ref type='table' target='#tab_3'>1</ns0:ref> lists the properties of these tuples.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref> shows the different types of fog devices employed in the topology and their configurations. Here, the cameras serve as sensors and provide input data to the application. On average, the sensors have 5-millisecond inter-arrival times, and their tuples require 1000 MIPS and a bandwidth of 20000 bytes. The physical topology is modelled in iFogSim using the FogDevice, Sensor and Actuator classes.</ns0:p></ns0:div>
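<ns0:p>The simulation itself is built with iFogSim's Java classes, but the configuration in Tables 1 and 2 can also be captured as plain data for scripted analysis. The dictionary names below are hypothetical; the numeric values mirror Tables 1 and 2.</ns0:p>

```python
# Device profiles (Table 2) and tuple types (Table 1) expressed as plain Python data,
# e.g. for parameter sweeps or post-processing of simulator output.
DEVICES = {
    "cloud": {"mips": 44800, "ram": 40000, "up_bw": 100,   "down_bw": 10000,
              "level": 0, "busy_w": 16 * 103.0, "idle_w": 16 * 83.25},
    "proxy": {"mips": 2800,  "ram": 4000,  "up_bw": 10000, "down_bw": 10000,
              "level": 1, "busy_w": 107.3, "idle_w": 83.43},
    "fog":   {"mips": 2800,  "ram": 4000,  "up_bw": 10000, "down_bw": 10000,
              "level": 2, "busy_w": 107.3, "idle_w": 83.43},
}

TUPLES = {
    "OBJECT_LOCATION":  {"mips": 1000, "nw_size": 100},
    "RAW_VIDEO_STREAM": {"mips": 1000, "nw_size": 20000},
    "PTZ_PARAMS":       {"mips": 100,  "nw_size": 100},
    "MOTION_DETECTION": {"mips": 2000, "nw_size": 2000},
    "DETECTED_OBJECT":  {"mips": 500,  "nw_size": 2000},
}
```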
<ns0:div><ns0:head n='4.1'>Results and Discussion</ns0:head><ns0:p>This section presents the results of the proposed module placement algorithm for the ISDCN application and compares them with state-of-the-art approaches in terms of energy, latency, and network utilization.</ns0:p><ns0:p>We compared the proposed module placement approach with approaches like EPSO <ns0:ref type='bibr' target='#b19'>[19]</ns0:ref>, PSO <ns0:ref type='bibr' target='#b18'>[18]</ns0:ref>, JAYA <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref>, and Cloud Only <ns0:ref type='bibr' target='#b10'>[10]</ns0:ref>. To compare the performance of these approaches, we perform several experiments using the same physical topology of the ISDCN application while varying the number of areas.</ns0:p><ns0:p>The proposed approach is evaluated on the ISDCN application by varying the number of areas, each with four cameras. All the cameras are connected to the cloud via a router in the cloud-only approach. Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref> shows the energy consumed by all devices while the cameras detect the objects' motion in frames. Total energy consumption was significantly lower with the LJAYA method than with JAYA, EPSO, PSO, and Cloud Only. For instance, the total energy consumption with EPSO, JAYA, PSO and Cloud Only is 509.12 kJ, 523.39 kJ, 689.48 kJ, and 1915.10 kJ, respectively, whereas with the LJAYA method it is 480.10 kJ for ten areas. When the number of areas is increased, the total energy consumption also increases with all the approaches. The proposed approach can find the optimal solution in all the cases. The analysis of the energy consumption for various configurations demonstrated that the proposed LJAYA approach reduces energy consumption by up to 31% on average compared to modern methods.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1.2'>Execution Time Analysis</ns0:head><ns0:p>Figure <ns0:ref type='figure'>5</ns0:ref> shows the execution time (in milliseconds) for various topologies and input workloads. From Figure <ns0:ref type='figure'>5</ns0:ref>, it is clear that the proposed LJAYA approach completes the execution faster than the other approaches. On average, the proposed approach reduced the execution time by up to 7%, 15%, 22%, and 53% over the EPSO, JAYA, PSO, and Cloud Only approaches, respectively.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 5. Execution time analysis</ns0:head></ns0:div>
<ns0:div><ns0:head n='4.1.3'>Network Usage Analysis</ns0:head><ns0:p>The network usage will increase if traffic is increased toward the cloud. At the same time, the network usage decreases when we have a dedicated fog node in each area. The network usage is calculated using equation 7 <ns0:ref type='bibr' target='#b10'>[10]</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_7'>Networkusage = Latency * δ<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>Where δ = tupleNWSize. Experimental results in terms of the network usage in bytes are shown in Table <ns0:ref type='table' target='#tab_6'>3</ns0:ref>. Network usage is high with the cloud-only approach because all processing happens in a cloud server, whereas with the proposed approach processing occurs at efficient fog nodes, reducing the network usage. Considering 40 areas, the network usage with the proposed LJAYA, EPSO, JAYA, PSO, and CloudOnly is 2483404 bytes, 2485275 bytes, 2485814 bytes, 2487663 bytes, and 2991055 bytes, respectively. We can reduce the network usage by up to 16% using the proposed approach when compared to the CloudOnly approach.</ns0:p></ns0:div><ns0:div><ns0:head n='4.1.4'>Latency Analysis</ns0:head><ns0:p>Real-time IoT applications need high performance and can achieve this only by reducing latency. The latency is computed using equation 8 <ns0:ref type='bibr' target='#b10'>[10]</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_8'>Latency = α + µ + θ<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>Where α is the delay incurred while capturing video streams in the form of tuples, µ is the time to upload and perform motion detection, and θ is the time to display the detected object on the user interface.</ns0:p><ns0:p>The experimental results in terms of latency are shown in Table <ns0:ref type='table' target='#tab_7'>4</ns0:ref>. All application modules are placed in the cloud in the cloud-only placement algorithm, causing a bottleneck in application execution. This bottleneck causes a significant increase (106 ms) in the latency. On the other hand, the proposed placement approach can maintain low latency (1.1 ms) as it places the modules close to the network edge. Compared with the other algorithms, the proposed LJAYA approach shows superior performance in minimizing execution time, latency and energy consumption.</ns0:p></ns0:div>
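<ns0:p>Equations 7 and 8 translate directly into code. The helper names and sample numbers in the sketch below are illustrative assumptions, not values from the experiments; units follow whatever the simulator reports.</ns0:p>

```python
def network_usage(latency, tuple_nw_size):
    """Equation 7: network usage contributed by one tuple transfer (latency * tuple size)."""
    return latency * tuple_nw_size

def end_to_end_latency(alpha, mu, theta):
    """Equation 8: capture delay + upload/motion-detection time + display time."""
    return alpha + mu + theta

# Illustrative only.
print(end_to_end_latency(alpha=0.4, mu=0.5, theta=0.2))
print(network_usage(latency=1.1, tuple_nw_size=20000))
```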
<ns0:div><ns0:head n='5'>CONCLUSION</ns0:head><ns0:p>Cloud and fog computing together offer a model that can provide a solution for delay-sensitive IoT applications. Fog nodes are typically used to store and process data near the end devices, which helps to reduce latency and communication costs. This paper provides an evaluation framework that minimizes energy consumption by optimally placing the modules in a fog landscape. An improved nature-inspired algorithm, LJAYA, was used with Levy flight, and its performance was evaluated in various scenarios.</ns0:p><ns0:p>Experimental results demonstrated that the LJAYA algorithm outperforms the other four algorithms by escaping from locally optimal solutions using Levy flight. With the proposed algorithm, we can reduce the energy consumption by up to 31% on average and the execution time by up to 53%. In the future, we plan to consider different applications and propose an efficient resource provisioning technique that takes the application requirements into account.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Steps involved in the proposed algorithm</ns0:figDesc><ns0:graphic coords='7,162.41,119.61,372.24,530.87' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Modelling of the ISDCN application</ns0:figDesc><ns0:graphic coords='8,203.77,63.78,289.50,122.93' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Energy consumption of all devices in fog landscape</ns0:figDesc><ns0:graphic coords='9,162.41,426.91,372.21,227.05' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='10,162.41,244.22,372.24,260.24' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Details of the edges in the ISDCN application</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Tuple Type</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>MIPS</ns0:cell><ns0:cell>Network Bandwidth</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='4'>OBJECT LOCATION</ns0:cell><ns0:cell>1000</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='5'>RAW VIDEO STREAM 1000</ns0:cell><ns0:cell>20000</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>PTZ PARAMS</ns0:cell><ns0:cell /><ns0:cell>100</ns0:cell><ns0:cell>100</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='5'>MOTION DETECTION 2000</ns0:cell><ns0:cell>2000</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='4'>DETECTED OBJECT</ns0:cell><ns0:cell>500</ns0:cell><ns0:cell>2000</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>CPU MIPS</ns0:cell><ns0:cell>RAM (MB)</ns0:cell><ns0:cell>Uplink Bw (MB)</ns0:cell><ns0:cell>Downlink Bw (MB)</ns0:cell><ns0:cell>Level</ns0:cell><ns0:cell>Rate Per MIPS</ns0:cell><ns0:cell>Busy power (Watt)</ns0:cell><ns0:cell>Idle power (Watt)</ns0:cell></ns0:row><ns0:row><ns0:cell>Cloud</ns0:cell><ns0:cell>44800</ns0:cell><ns0:cell>40000</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>10000</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>0.01</ns0:cell><ns0:cell>16*103</ns0:cell><ns0:cell>16*83.25</ns0:cell></ns0:row><ns0:row><ns0:cell>Proxy</ns0:cell><ns0:cell>2800</ns0:cell><ns0:cell>4000</ns0:cell><ns0:cell>10000</ns0:cell><ns0:cell>10000</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>107.3</ns0:cell><ns0:cell>83.43</ns0:cell></ns0:row><ns0:row><ns0:cell>Fog</ns0:cell><ns0:cell>2800</ns0:cell><ns0:cell>4000</ns0:cell><ns0:cell>10000</ns0:cell><ns0:cell>10000</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>107.3</ns0:cell><ns0:cell>83.43</ns0:cell></ns0:row></ns0:table><ns0:note>7/12 PeerJ Comput. Sci. reviewing PDF | (CS-2022:03:71383:1:2:CHECK 11 May 2022) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Characteristics of the Fog devices used for ISDCN</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Total network usage in bytes</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='2'>1466620 1466806 1466804</ns0:cell><ns0:cell>1467504 1474585</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>1972125 1972196</ns0:cell><ns0:cell cols='2'>1972271 1974304 1980075</ns0:cell></ns0:row><ns0:row><ns0:cell>30</ns0:cell><ns0:cell cols='3'>2478204 2480074 2482234 2482234 2485565</ns0:cell></ns0:row><ns0:row><ns0:cell>40</ns0:cell><ns0:cell cols='3'>2483404 2485275 2485814 2487663 2991055</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>4.1.4 Latency Analysis</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Latency Analysis in ms experimental results in terms of latency are showed in Table</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='4'>LJAYA EPSO JAYA PSO</ns0:cell><ns0:cell>CloudOnly</ns0:cell></ns0:row><ns0:row><ns0:cell>10</ns0:cell><ns0:cell>1.1</ns0:cell><ns0:cell>2.2</ns0:cell><ns0:cell>2.2</ns0:cell><ns0:cell cols='2'>20.899 105.999</ns0:cell></ns0:row><ns0:row><ns0:cell>20</ns0:cell><ns0:cell>2.16</ns0:cell><ns0:cell>3.3</ns0:cell><ns0:cell>4.3</ns0:cell><ns0:cell>30.9</ns0:cell><ns0:cell>105.999</ns0:cell></ns0:row><ns0:row><ns0:cell>30</ns0:cell><ns0:cell>2.89</ns0:cell><ns0:cell>3.3</ns0:cell><ns0:cell cols='2'>7.015 31.7</ns0:cell><ns0:cell>105.999</ns0:cell></ns0:row><ns0:row><ns0:cell>40</ns0:cell><ns0:cell>3.2</ns0:cell><ns0:cell>5.4</ns0:cell><ns0:cell>19.9</ns0:cell><ns0:cell>32.6</ns0:cell><ns0:cell>105.999</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Dear Editorial Team
Thank you for your valuable input. The quality of my research has improved as a result of your feedback.
Basic reporting
1. Authors are required to improve the English language and some sentence structures need to be updated.
The complete paper has been verified by an expert in English, and we also checked it using the Grammarly software.
2. Some cited references are very old of year 2012
We used references ranging from 2012 to 2021.
3. Overall article structure is good.
Thank you for feedback
4. Hypotheses and results are OK and shared with the RAW data.
Thank you for feedback
5. In some sections authors have written too many paragraphs for expressing their views. Need to update the style with some professional writing style.
Modified as per suggestions
6. Provide expanded from of abbreviations wherever used for the first time.
Modified as per suggestion
7. Make your Introduction section more technical than in the current form it just focuses on the basics of fog computing.
Modified
8. What is the worst-case time complexity of the modified JAYA approach and previously published JAYA approach.
9. Kindly refer the following work for further improvement in the literature survey section
10.1109/ISPCC53510.2021.9609479
10.4018/IJKSS.2020100102
Added reference
10. Why authors have not applied any MCDM based approach for the allocation of resources and services in the Fog landscape?
MCDM approaches are used in decision making where conflicting criteria are typical when evaluating options. However, our main focus is a single criterion, i.e., energy consumption. Although we have presented a comparison of other parameters in the paper, in the placement approach we considered only the energy consumption of the data centre as the fitness function.
11. Equation(3) is a general equation and computes the energy consumption. But how the Eq 2 has been derived?
Equation 2 is taken from the following reference.
DOI 10.1007/s11227-010-0421-3
we use SpecPower benchmark [39] measurements to depict the relationship between server power consumption and server utilization. According to this data, a server’s total drawn power increases linearly with the increase in utilization.
39. “Standard performance evaluation corporation. SPECpower” 2008. [Online]. Available: http://www.spec.org/power_ssj2008
- In the figure 2, there is a flaw in the flowchart. As the output from the first decision box irrespective of the condition whether it is true or false is becoming the input for the next level.....Why??
If the condition is true the input for next level is the updated particle, which we got after applying Equation 6 and Equation 7 to the original particle. But if the condition is false then the input for next level is the original particle.
- What is the purpose of Figure 3?
Process of Modelling of the ISDCN application
- Figure 4 doesn't include the unit of energy consumption.
Mentioned at the time of explanation in 4.1.1
Overall summary
The proposed approach is able to reduce energy consumption by up to 9%, 16%, 31%, and 65% compared to EPSO, JAYA, PSO, and Cloud Only, respectively.
The Cloud Only approach places all the modules in the cloud, which leads to higher energy consumption compared to fog nodes. The proposed approach has up to a 9% improvement over EPSO, which we feel is a good improvement.
" | Here is a paper. Please give your review comments after reading it. |
713 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Understanding the complexity of restricted research data is vitally important in the current new era of Open Science. While the FAIR Guiding Principles have been introduced to help researchers to make data Findable, Accessible, Interoperable and Reusable, it is still unclear how the notions of FAIR and Openness can be applied in the context of restricted data. Many methods have been proposed in support of the implementation of the principles, but there is yet no consensus among the scientific community as to the suitable mechanisms of making restricted data FAIR. We present here a systematic literature review to identify the methods applied by scientists when researching restricted data in a FAIR-compliant manner in the context of the FAIR principles. Through the employment of a descriptive and iterative study design, we aim to answer the following three questions: 1) What methods have been proposed to apply the FAIR principles to restricted data?, 2) How can the relevant aspects of the methods proposed be categorized?, 3) What is the maturity of the methods proposed in applying the FAIR principles to restricted data?. After analysis of the 40 included publications, we noticed that the methods found, reflect the stages of the Data Life Cycle, and can be divided into the following Classes: Data Collection, Metadata Representation, Data Processing, Anonymization, Data Publication, Data Usage and Post Data Usage. We observed that a large number of publications used 'Access Control' and 'Usage & License Terms' methods, while others such as 'Embargo on Data Release' and the use of 'Synthetic Data' were used in fewer instances. In conclusion, we are presenting the first extensive literature review on the methods applied to confidential data in the context of FAIR, providing a comprehensive conceptual framework for future research on restricted access data.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION 29</ns0:head><ns0:p>In the last ten years, the role of Open Science in scientific research has received considerable attention 30 across several disciplines, and a growing body of literature has been proposed to implement Openness.</ns0:p></ns0:div>
<ns0:div><ns0:head>31</ns0:head><ns0:p>Evidence suggests that the replication of results, the discovery and exchange of information, and the reuse 32 of research data have emerged as some of the most important reasons for Open Science <ns0:ref type='bibr' target='#b20'>(Hey et al., 2009;</ns0:ref><ns0:ref type='bibr' /> 33 <ns0:ref type='bibr' target='#b50'>Wilkinson et al., 2019)</ns0:ref>. Interestingly, the reuse of 'data created by others', also known as secondary data, 34 described as 'the basis of scholarly knowledge' <ns0:ref type='bibr' target='#b37'>(Pampel and Dallmeier-Tiessen, 2014)</ns0:ref>, is considered one 35 of the key aspects of Open Science <ns0:ref type='bibr' target='#b47'>(Vicente-Sáez and Martínez-Fuentes, 2018)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>36</ns0:head><ns0:p>To facilitate the creation, management and usage of secondary data, several initiatives have been 37 involved in building solutions for Open Science research. For instance, the Center for Open Science 1 has 38 proposed the Open Science Framework (OSF) <ns0:ref type='bibr' target='#b15'>(Foster and Deardorff, 2017)</ns0:ref>, to promote an open tool 39 for the storage, development and usage of secondary data. The LIGO Open Science Center (LOSC) 2 is 40 another initiative, with the intent to facilitate Open Science research in the Astronomy domain by offering 41 a platform to discover and analyse data from the Laser Interferometer Gravitational-wave Observatory <ns0:ref type='bibr' target='#b48'>(Widmann and Thiemann, 2016)</ns0:ref> is an international initiative to help overcome the challenges related to the reuse of data, by offering a Collaborative Data Infrastructure (CDI) for the research community. Other developments, in the effort to render the technical difficulties linked to the use of secondary data, have been in 2016 with the introduction of the FAIR Guiding Principles <ns0:ref type='bibr' target='#b49'>(Wilkinson et al., 2016)</ns0:ref>. The Principles aim to provide guidelines for making data Findable, Accessible, Interoperable and Reusable. The implementation of FAIR has been demonstrated to improve data management and stewardship <ns0:ref type='bibr' target='#b9'>(Boeckhout et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b31'>Mons, 2018)</ns0:ref>, by enabling the reuse of data, promoting collaborations and facilitating resource citation <ns0:ref type='bibr' target='#b23'>(Lamprecht et al., 2020)</ns0:ref>. Ensuring transparency, reproducibility and reusability can also help data owners and publishers to define data sharing plans and to improve the discoverability of resources <ns0:ref type='bibr' target='#b52'>(Wilkinson et al., 2017)</ns0:ref>.</ns0:p><ns0:p>Several studies have shown how to implement Open and FAIR data, but not all data is the same and not all data is suitable for being publicly available. For example, medical records and patients' data contain, by nature, Personal Identifiable Information (PII) if not sanitized. Government data, such as census data and other types of information retrieved by governmental agencies about the population, are often not open to the public because of confidential concerns. Despite recent attempts from the European Union to provide methods for dealing with personal and confidential information, i.e. GDPR, there are still considerable limitations that have not yet been fully investigated. Regulations can often be vague, ambiguous and not well defined. For instance, the GDPR requires data owners and stakeholders to provide a 'reasonable level of protection', without clearly specifying what the word 'reasonable' actually involves. Also, the concept of 'privacy by design' is consistently supported within the regulations, but no clear guidelines on how to achieve it are proposed. Overall, legal compliance in the context of restricted and privacy concerning data is most often a challenge, first by determining what are the regulations to comply with and second by having the technical ability to guarantee such compliance <ns0:ref type='bibr' target='#b35'>(Otto and Antón, 2007)</ns0:ref>. 
To date, it remains unclear how sensitive data should be managed, accessed and analysed <ns0:ref type='bibr' target='#b13'>(Cox et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b24'>Leonelli et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b4'>Bender et al., 2022)</ns0:ref>.</ns0:p><ns0:p>Researchers have claimed that data containing confidential and private information should not be made open, and its access should be tightly regulated <ns0:ref type='bibr' target='#b25'>(Levin and Leonelli, 2017)</ns0:ref>. This notion comes from the fact that data has been seen, so far, to have a binary state of either open or closed. The FAIR Principles, on the other hand, do not provide an all-or-nothing view on the data (either FAIR or not FAIR), but they represent more of a guideline and a continuum between data being less FAIR and more FAIR <ns0:ref type='bibr' target='#b6'>(Betancort Cabrera et al., 2020)</ns0:ref>. Moreover, FAIR data, and more specifically Accessible data, does not necessarily require to be also open <ns0:ref type='bibr' target='#b32'>(Mons et al., 2017)</ns0:ref>. Accessible data can be defined as such when once fulfilled certain requirements, the data can be made either partially or fully accessible. More in detail, data access can be mediated through automated authorization protocols <ns0:ref type='bibr' target='#b49'>(Wilkinson et al., 2016)</ns0:ref> as well as through direct contact with the data owner, but as long as access to the data can, in theory, be achieved, then that data can be considered as accessible <ns0:ref type='bibr' target='#b16'>(Gregory et al., 2019)</ns0:ref>. As mentioned above, the Principles do not define a binary state of either FAIR or non-FAIR data, between accessible and inaccessible, open and closed. Instead they define guidelines for the ' optimal choices to be made for data management and tool development' <ns0:ref type='bibr' target='#b6'>(Betancort Cabrera et al., 2020)</ns0:ref>. The application of the FAIR Guiding Principles to government and confidential data does not have the aim to make it publicly open but, indeed, to make it more Findable, Accessible, Interoperable and Reusable. At this point, it is important to clearly define what the authors of this paper mean when referring to the term 'restricted access data'. In the context of this paper, such a term will be abbreviated to 'restricted data' and refers to any type of datasets, artefacts or digital objects which is not freely available (e.g. medical records, patient data and government data).</ns0:p><ns0:p>The lack of accessibility can either be determined by confidential and privacy protection regulations, as well as usage and license terms.</ns0:p><ns0:p>While the Principles have sparked many international and interdisciplinary initiatives, such as GO FAIR 3 , the Commission on Data of the International Science Council (CODATA) 4 and the Research Data Alliance (RDA) 5 , most of the application of FAIR have been seen in the 'hard' sciences (e.g. biology, astronomy, physics). There is still a lack of understanding and specific recommendations on how FAIR can be implemented more in Social, Behavioural and Economic (SBE) domains, also referred to as 'soft' sciences. SBE scientists are often faced with many domain-specific challenges, often linked to the various data collection methods and data types used. 
In fact, SBE sciences generally require data from questionnaires, interviews or surveys which are usually gathered from public institutions such as official registries and government bodies. Therefore, it is highly likely that the data contain personal and confidential information that can disclose the identity of individuals and institutions. Before the data can be used for analysis and shared with researchers, the data owner is responsible for assuring that the confidentiality of the data subjects is kept intact and is not at risk, and this process is usually performed by anonymizing the data and implementing strict access control policies. Nevertheless, this process is often not completely transparent and can require strenuous bureaucratic steps for the researchers before gaining access. Moreover, there is still no consensus within the scientific community about what are the methods and procedures recommended when dealing with restricted data. The purpose of this investigation is to explore the relationship between FAIR and restricted data and to assess the mechanisms for making restricted data (more) FAIR, to facilitate the reuse and discoverability of secondary data.</ns0:p></ns0:div>
<ns0:div><ns0:head>Problem Statement & Contributions</ns0:head><ns0:p>The primary aim of this review is to investigate the methods employed by data owners and users when dealing with restricted research data, in the context of the FAIR principles. Understanding the complexity of reusing restricted data is crucial in a variety of fields, from the biomedical to the social science domain.</ns0:p><ns0:p>The present review provides the first comprehensive assessment of the relationship between the FAIR Guiding Principles and restricted data, by answering the following questions:</ns0:p><ns0:p>What methods have been proposed to apply the FAIR principles to restricted data? How can the relevant aspects of the methods proposed be categorised? What is the maturity of the methods proposed in applying the FAIR principles to restricted data?</ns0:p><ns0:p>This work contributes to the existing knowledge of FAIR by providing an extensive framework describing the methods employed when researching restricted data. With this review, we are laying the groundwork for future research into making restricted data more Findable, Accessible, Interoperable and Reusable. Moreover, the categorisation of methods and the ontology created based on the results can provide a reusable framework to express the methods used when dealing with restricted data.</ns0:p><ns0:p>The remaining part of the paper proceeds as follows: the next section begins by illustrating the background information about the role of Data Science, Restricted Research Data and the FAIR Guiding Principles within the scope of this review. We will then describe the methods used for the selection and analysis of the included articles and the following sections will present and discuss the results. The last section of the review will summarise the overall findings and provide a final overview of the relationship between FAIR and restricted data.</ns0:p></ns0:div>
<ns0:div><ns0:head>BACKGROUND Data Science</ns0:head><ns0:p>Data Science is fast becoming a key component in nearly all scientific domains and industrial processes, and in the last decade, it has emerged as a research field of its own. As more data is produced, and more analysis techniques are made available, there is a need for specialised skills to embark on this data-dependent world. Industries, as well as scientific fields such as Medicine and Engineering, have now become data-driven and their success is closely related to their ability to explore and analyse complex data sources (National Academies of <ns0:ref type='bibr'>Sciences & Engineering & Medicine, 2018)</ns0:ref>. The application of Data Science has already been seen in known data-intensive fields, for example, Geoscience <ns0:ref type='bibr' target='#b41'>(Singleton and Arribas-Bel, 2021)</ns0:ref>, Biology <ns0:ref type='bibr' target='#b40'>(Shi et al., 2021)</ns0:ref> and Artificial Intelligence <ns0:ref type='bibr' target='#b39'>(Sarker et al., 2021)</ns0:ref>. Nevertheless, less data-driven domains are also adapting to the Data Science wave, creating new fields of studies such as Digital Humanities and Computational Social Sciences.</ns0:p></ns0:div>
<ns0:div><ns0:head>Restricted Access Data</ns0:head><ns0:p>Recently, we have increasingly seen the development of online Open Government Data (OGD) portals, intending to enhance innovation, economic progress and social welfare. Through the creation of OGD, governments have allowed the general public to easily access information that was long thought unattainable, and use them in a variety of fields such as journalism, software development and research (Begany Manuscript to be reviewed <ns0:ref type='bibr'>et al., 2021)</ns0:ref>. The economic value of public records has been expected, by the European Commission, to increase from 52 billion in 2018 to 215 billion in 2028 <ns0:ref type='bibr' target='#b2'>(Barbero et al., 2018)</ns0:ref>, thus emphasizing the economic impact on different public sectors, such as transportation, environmental protection, education and health <ns0:ref type='bibr' target='#b38'>(Quarati, 2021)</ns0:ref>.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Although the beneficial contribution of using public records to the overall well-being of society is clear, there is still a large amount of data that can not be made publicly available due to confidentiality concerns. For example, health data from hospitals and medical centres are often restricted to the data owners and stakeholders, due to patient privacy concerning issues. Moreover, researchers can also decide not to make their data open, and instead, apply limitations to their use through usage terms and licenses.</ns0:p></ns0:div>
<ns0:div><ns0:head>FAIR Guiding Principles</ns0:head><ns0:p>Since the publication of FAIR in 2016, there is a growing number of literature that applies the Guiding Principles and recognises the importance of making data Findable, Accessible, Interoperable and Reusable.</ns0:p><ns0:p>The FAIR initiative is taking up more and more momentum, and the application of the Principles has been seen in nearly all fields of science. For example, a recent paper by <ns0:ref type='bibr'>Kinkade et al.</ns0:ref> proposes practical solutions to address and achieve FAIR in data-driven research in geosciences <ns0:ref type='bibr' target='#b22'>(Kinkade and Shepherd, 2021)</ns0:ref>. Another recent paper discusses the importance of the Principles in expanding epidemiological research within the veterinary medicine domain <ns0:ref type='bibr' target='#b29'>(Meyer et al., 2021)</ns0:ref>. The FAIR guidelines have also been applied in more technical fields such as the scientific research software domain. A community effort to improve the sharing of research software has brought the creation of the FAIR for Research Software (FAIR4RS) Working <ns0:ref type='bibr'>Group (Chue Hong et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b21'>Katz et al., 2021)</ns0:ref> and the 'Top 10 FAIR Data & Software Things' <ns0:ref type='bibr' target='#b28'>(Martinez et al., 2019)</ns0:ref>. Other examples of guidelines that have stemmed from the original FAIR Principles are the 'FAIR Metrics' <ns0:ref type='bibr' target='#b51'>(Wilkinson et al., 2018)</ns0:ref> and the 'FAIR Data Maturity Model' <ns0:ref type='bibr' target='#b17'>(Group et al., 2020)</ns0:ref>.</ns0:p><ns0:p>The FAIR Guiding Principles have also been gaining increasing interest and recognition from international entities such as the European Commission and the National Institute of Health (NIH) 6 . The latter, together with the Department of Health and Human Services (HHS) 7 and the Big Data to Knowledge (BD2K) initiative <ns0:ref type='bibr' target='#b26'>(Margolis et al., 2014)</ns0:ref>, are supporting the application of FAIR in the biomedical domain through the development of innovative approaches to big data and data science. The European Commission has particularly been involved in the application of the Principles through international initiatives, such as the Internet of FAIR Data and Services (IFDS) 8 and the European Open Science Cloud (EOSC) 9 , to implement strategies for the application of FAIR on digital objects, technological protocols, digital data-driven science and the 'Internet of Things' (Directorate General for Research & Innovation, 2020) <ns0:ref type='bibr' target='#b46'>(van Reisen et al., 2020)</ns0:ref>. Moreover, the European Commission is now mandating the use of FAIR in new projects, and it is working towards comprehensive reports and action plans for 'turning FAIR data into reality' <ns0:ref type='bibr' target='#b11'>(Collins et al., 2018)</ns0:ref>.</ns0:p><ns0:p>Nevertheless, the technical implementation of FAIR is still the main challenge faced by many stakeholders. The FAIR principles call for data to be both machine and human-readable to facilitate the retrieval and analysis of resources. This process requires the data stakeholder to generate a machine-readable format of the data, often using the Resource Description Framework (RDF) <ns0:ref type='bibr' target='#b30'>(Miller, 1998)</ns0:ref>. 
Further, another core principle of FAIR is the importance of not only data standards but also metadata standards.</ns0:p><ns0:p>The term 'metadata' refers to the top-level information and attributes describing the data, such as the provenance, the methodology used as well as terms of use of the artefact. More generally, metadata can be thought of as the bibliographic information about the data it describes <ns0:ref type='bibr' target='#b9'>(Boeckhout et al., 2018</ns0:ref>). Yet, the process by which FAIR metadata should be generated and organised to include all relevant information is still unclear. On top of the need for clear technical guidelines for the implementation of the Principles, there is also the need to change the work culture to mirror the core meaning of FAIR. From a business point of view, enterprises need evidence to show how they can generate a long-term return on investment (RoI) through the application of FAIR, and for research centres, management boards are still to be convinced about the benefits brought by the Principles, such as peer-recognition, data accessibility and financial rewards <ns0:ref type='bibr' target='#b53'>(Wise et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b43'>Stall et al., 2019)</ns0:ref>.</ns0:p><ns0:p>The impact of the FAIR Principles is clearly important for the future of Open Science, technological development and scientific research. With the continuously growing number of domains where the Principles are applied and the increasing amount of data generated, it is essential to understand the mechanisms of FAIR in the context of restricted data. In this review, we aim to provide a better understanding of how the FAIRification process can benefit restricted data, by analysing the methods employed by the scientific community to overcome the barriers of confidentiality and to guide research on privacy-sensitive data toward mature FAIR choices.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODS</ns0:head><ns0:p>In the following section, we describe the methods employed in the review, by first describing details of the resources' selection step and the application of inclusion and exclusion criteria. We then provide information concerning the data collection and data analysis processes. We aim to investigate the common methods utilised to overcome issues related to restricted data, in the context of FAIR-driven research.</ns0:p></ns0:div>
<ns0:div><ns0:head>Eligibility Criteria:</ns0:head><ns0:p>Several criteria were considered when selecting the studies to be included in the analysis. Eligibility criteria required articles to:</ns0:p><ns0:p>1. be written in English.</ns0:p><ns0:p>2. be peer-reviewed.</ns0:p><ns0:p>3. be research papers.</ns0:p><ns0:p>4. clearly describe the proposal or application of the FAIR principles in the context of restricted data.</ns0:p><ns0:p>The selection of the eligibility criteria was made based on a few conditions. English is the lingua franca of science communication and was the only language shared by all authors, and therefore only papers written in English were included in the systematic review. Secondly, we decided to exclude papers that were not peer-reviewed, as well as other systematic reviews, therefore only including peer-reviewed research papers. The last eligibility criterion was formulated because the authors wanted to include papers that showed a clear application of the FAIR principles to restricted data, rather than papers that merely mentioned the principles without an apparent use of them.</ns0:p></ns0:div>
<ns0:div><ns0:head>Search Strategy:</ns0:head><ns0:p>The electronic literature search for this study was conducted on the Google Scholar database on the 16th of September 2021, using the following query: 'findable accessible interoperable reusable' AND (copyrighted OR confidential OR sensitive OR restricted OR privacy)</ns0:p><ns0:p>No other filters or limits were set in the search.</ns0:p></ns0:div>
<ns0:div><ns0:head>Selection Process:</ns0:head><ns0:p>The corpus of publications resulting from the query was exported to Rayyan <ns0:ref type='bibr' target='#b36'>(Ouzzani et al., 2016)</ns0:ref>, a Software as a Service (SaaS) web application used for the screening of publication data. The tool was used solely for the management of publications and to help resolve duplicates. The application of inclusion and exclusion criteria, as well as the resolution of duplicated publications, is not dependent on Rayyan, and the same results are to be expected if another citation management tool is used. The first author of this paper (M. Martorana) independently screened each record to evaluate their eligibility. The process of evaluation started with reading the abstract of each publication. If a decision on its eligibility could not be made, the whole paper was read.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data Collection and Analyses:</ns0:head><ns0:p>A qualitative approach was adopted to capture descriptive data from the included publications.</ns0:p><ns0:p>The first step in this process was to determine the Field of Research each paper belonged to. Second, information regarding the suggestion or application of the methods was collected. On completion, an iterative process was carried out to group the outcomes into concrete Classes, which were then recognised to resemble different stages of the Data Life Cycle. By the term 'Data Life Cycle' we refer to the stages the data goes through from the moment of collection to what happens after its usage. It is important here to distinguish between the Data Life Cycle steps recognised in this research and the steps most often identified as the Data Lifecycle Management. The latter represents an overview of the steps the data owners would mostly be faced with during the management and safekeeping of data, and they involve: 1) Creating Data, 2) Data Storage, 3) Data Use, 4) Data Archive and 5) Data Destruction.</ns0:p><ns0:p>In the context of this research, the cycle has been aligned to the Data Lifecycle Management one, but we have also added steps to include the stages when the data is processed, as well as the steps required after the data is used. Next, each publication was annotated concerning the methods proposed. Finally, a Technology Readiness Level (TRL) <ns0:ref type='bibr' target='#b18'>(Héder, 2017)</ns0:ref>, based on the maturity of the technology proposed, was estimated for each publication. The appraisal of TRLs offers an effective way to assess how different technical solutions relate to more advanced research infrastructures, by assigning a score representing the level of maturity of the technology. For practical purposes, as well as to decrease potential miscalculation of the scores, the TRL levels were organised into the following 4 main groups based on the European Union definitions <ns0:ref type='bibr' target='#b12'>(Commission et al., 2017)</ns0:ref>. Table <ns0:ref type='table'>1</ns0:ref>, below, shows how the TRL levels were grouped and defines the level of maturity expressed by each group.</ns0:p><ns0:p>Technology Readiness Levels based on <ns0:ref type='bibr' target='#b12'>(Commission et al., 2017)</ns0:ref> Definition</ns0:p><ns0:p>TRL 1 & 2: This class was assigned to publications where the technology proposed was only conceptually formulated but not implemented.</ns0:p><ns0:p>TRL 3 & 4: This class, instead, was assigned to publications where the technology proposed has gone through some testing but only in limited environments.</ns0:p><ns0:p>TRL 5 & 6: This class was assigned to publications that clearly showed testing and expected performance.</ns0:p><ns0:p>TRL 7, 8 & 9: Lastly, this class was assigned to publications that showed full technical capabilities and that were also available to users.</ns0:p><ns0:p>Table 1. The table shows the grouping of the Technology Readiness Levels (TRLs) based on the work by <ns0:ref type='bibr' target='#b12'>(Commission et al., 2017)</ns0:ref>, and it provides a definition of how each TRL group was assigned to the included publications.</ns0:p></ns0:div>
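To make the grouping described in Table 1 concrete, the following is a minimal sketch in Python of how a raw TRL score from 1 to 9 could be mapped to the four groups used in this review. The function name, the paper identifiers and the example scores are illustrative assumptions, not artefacts taken from the reviewed publications.

```python
# Hypothetical helper: map a raw TRL score (1-9) to the four groups used in the review.
# The group labels mirror Table 1; everything else here is illustrative only.

def trl_group(trl: int) -> str:
    """Return the review's TRL group for a raw Technology Readiness Level (1-9)."""
    if not 1 <= trl <= 9:
        raise ValueError("TRL must be between 1 and 9")
    if trl <= 2:
        return "TRL 1 & 2"     # concept formulated, not implemented
    if trl <= 4:
        return "TRL 3 & 4"     # tested only in limited environments
    if trl <= 6:
        return "TRL 5 & 6"     # testing and expected performance demonstrated
    return "TRL 7, 8 & 9"      # full capability, available to users


# Example: group the TRLs estimated for a small set of publications.
estimated_trls = {"paper_A": 2, "paper_B": 5, "paper_C": 8}
groups = {paper: trl_group(score) for paper, score in estimated_trls.items()}
print(groups)  # {'paper_A': 'TRL 1 & 2', 'paper_B': 'TRL 5 & 6', 'paper_C': 'TRL 7, 8 & 9'}
```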
<ns0:div><ns0:head>Synthesis:</ns0:head><ns0:p>The final stage of the methodology comprised a visual representation of the publications and their relative methods and TRL scores, as well as the creation of an OWL ontology representing the methods.</ns0:p><ns0:p>The Web Ontology Language (OWL) <ns0:ref type='bibr' target='#b0'>(Antoniou and Harmelen, 2004)</ns0:ref> was used to provide a FAIR representation of the results of this systematic review in the form of a human- and machine-readable 'Data Methods' ontology. The ontology we created could be used in the future to help describe the methods employed when researching restricted data in a FAIR manner. The decision to build our ontology in OWL is based on the fact that it is a W3C-approved semantic language, designed to formally define rich meaning and concepts.</ns0:p></ns0:div>
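As an illustration of what such an OWL representation could look like, the sketch below uses the Python rdflib library to declare a small fragment of a 'Data Methods' class hierarchy. The namespace IRI and the class names are assumptions chosen for this example and are not necessarily the identifiers used in the published ontology.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import OWL, RDF, RDFS

# Hypothetical namespace; the published ontology may use a different IRI.
DM = Namespace("https://example.org/data-methods#")

g = Graph()
g.bind("dm", DM)
g.bind("owl", OWL)

# Top-level classes roughly following the data life cycle described in the Results.
for cls in ("DataCollection", "DataProcessing", "DataPublication", "DataUsage", "PostDataUsage"):
    g.add((DM[cls], RDF.type, OWL.Class))

# Example subclasses: Metadata Representation under Data Collection and
# Anonymization under Data Processing (mirroring the 'is subclass of' relations in Figure 2).
g.add((DM.MetadataRepresentation, RDF.type, OWL.Class))
g.add((DM.MetadataRepresentation, RDFS.subClassOf, DM.DataCollection))
g.add((DM.Anonymization, RDF.type, OWL.Class))
g.add((DM.Anonymization, RDFS.subClassOf, DM.DataProcessing))

# A leaf-level method with a human-readable label.
g.add((DM.Pseudonymization, RDF.type, OWL.Class))
g.add((DM.Pseudonymization, RDFS.subClassOf, DM.Anonymization))
g.add((DM.Pseudonymization, RDFS.label, Literal("Pseudonymization")))

print(g.serialize(format="turtle"))
```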
<ns0:div><ns0:head>RESULTS</ns0:head></ns0:div>
<ns0:div><ns0:head>Study Selection</ns0:head><ns0:p>The first set of results concerns the outcome of the search and selection process of papers. Google Scholar returned an overall number of 932 publications based on the query performed. Duplicates were detected and resolved accordingly, resulting in 894 unique publications. Further, publications were excluded based on the following criteria: 78 were not written in English, 9 were not peer-reviewed and another 9 were systematic reviews. The remaining publications were then assessed against the fourth eligibility criterion, and those that did not clearly describe the proposal or application of the FAIR Principles in the context of restricted data were also excluded, leaving 40 publications for data extraction and analysis. A summary of these results can be found below, in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. Details of the 40 publications included in the review can be found below, in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Field of Research</ns0:head><ns0:p>The first set of analyses examined the 'Field of Research' each of the included publications belonged to.</ns0:p><ns0:p>We were able to distinguish 9 different fields, and we found that the 'Biomedical Domain' was the most common Field of Research, with 27 papers (67.5%) linked to it. We also found that 5% of papers were linked to the 'Biodiversity' domain, and another 5% discussed solutions for 'Business' purposes. The 'Social Science', 'Environmental', 'Astronomy' and 'Nanotechnology' domains only had 1 publication each, and 10% of papers did not belong to a specific Field of Research, but involved solutions related to a 'General' use of restricted data.</ns0:p></ns0:div>
<ns0:div><ns0:head>Overview of Data Methods</ns0:head><ns0:p>In the following paragraphs, we report the results of the data methods encountered in the included papers.</ns0:p><ns0:p>During the data analysis, we found that the data methods could reasonably be modelled along with the steps of what we could call the 'data life cycle'. The first step refers to the data collection process and includes methods such as the application of standards and requesting consent from the data subjects.</ns0:p><ns0:p>Once the collection is completed, the second step refers to the processing of the data and includes, for example, methods related to the curation and validation and the creation of synthetic data. Then the data is published, through methods such as the selection of appropriate repositories and federated systems, or the application of an embargo on data release. Finally, the data is used, through methods such as the employment of access control systems and the selection of secure environments. After data usage, there might also be post-usage methods employed, for example, the acknowledgement of the data owners as well as archiving any secondary results. Within the data collection step, an important type of method deals with the aspect of metadata representation, for example, methods describing the licenses and usage terms applicable to the data, the versions available and the provenance. Other important aspects of restricted data are anonymization methods, which happen during the data processing step. Such methods include, for example, the de-identification, the minimization and the pseudonymization of the data. In the sections that follow, we describe in more detail each of the steps of the data life cycle and their related methods.</ns0:p><ns0:p>Also, we propose a graphical representation of the methods in Figure <ns0:ref type='figure' target='#fig_5'>2</ns0:ref>, and an overview of the results can be seen in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data Collection</ns0:head><ns0:p>The first step in any data-related activity is to collect the data. During this step, many methods are relevant to facilitate inter-disciplinary cooperation and data reuse. For example, methods that involve applying standards, common formats and best practices while collecting the data. We have found that 19 publications (47.5%) mentioned such methods, which we have collectively called 'Data Standardization' methods. Other methods to improve the cooperation across disciplines are the ones related to making connections between the data and already available semantic vocabularies, such as the European Language Social Science Thesaurus (ELSST) <ns0:ref type='bibr' target='#b1'>(Balkan et al., 2011)</ns0:ref>. Such methods usually require the data collector to research how concepts about the data, such as variables or descriptors, can be best mapped to semantic vocabularies. This process often necessitates some type of experience with Linked Data, as the exact connection and mapping are not always already available and there might be the need of creating custom links. These methods have been collectively defined as 'Semantic Mapping', and they have been found in a total of 12 publications (30%). Moreover, during the data collection step, we also found methods about the request of the consent of collecting and sharing the data from the data subjects. Both types of methods can be applied to practically all types of data, but they possibly have more impact when the data collected contains Personal Identifiable Information (PII). For instance, when collecting patients' data it is important to clearly request the consent for collecting and sharing with each individual, as well as to define how and for what purposes the data can be shared. The methods that refer to the planning and collection of consent forms from the data subjects are categorised together with the term 'Consent Planning', and we found 9 publications (22.5%) mentioning them. Moreover, methods referring to the request of consent from the data subjects about the sharing of the data are collectively called 'Data Sharing Planning', and they were found in a total of 10 publications (25%).</ns0:p></ns0:div>
<ns0:div><ns0:head>Metadata Representation</ns0:head><ns0:p>Under the Data Collection Class, we also found methods related to the description of the data's top-level information, which have been categorised together as 'Metadata Representation'. For example, we found methods for the process of describing usage and license terms in the metadata, which has a key role in the reusability of secondary data as it outlines how access can be granted and under which conditions. In fact, a clear description of the usage and license terms in the metadata is essential for limiting and setting boundaries for secondary users. We found a total of 20 publications (50%) mentioning such methods, which we have grouped under the term 'Usage & License Terms'. If we turn now to the other methods under the Metadata Representation class, we found that 9 publications (22.5%) mentioned the use of 'Persistent Identifier' in the metadata. The use of a persistent identifier was expected to have a higher rate, as it is a key component for the Findability and Interoperability of data. A possible explanation of why this method was found in less than a quarter of the publications is that often the data is released as a result of a publication, which is usually accompanied by an identifier. Nevertheless, the publication identifier (e.g. DOI) relates to the publication itself and not the data it might contain, and in the instance of the data being reused, assigning a persistent identifier also to the data could greatly improve the Findability and Interoperability of such a resource. Moreover, we found other methods related to the definition of the type of data the metadata is describing, for example, by clearly stating if the metadata is describing survey, questionnaire or tabular data. These methods have been collected under the term 'Data Type Definition', and we found them in 7 publications (17.5%). In the same number of publications, we also found methods related to the description of the provenance of the data, which is an important aspect of making restricted data more FAIR. Data provenance methods have been collectively called 'Provenance Capturing', and they include methods through which the origin of the data is documented. Also included in the Metadata Representation class, we found methods regarding the reporting of other versions of the data ('Versioning', found in 5 publications), and also methods describing the quality of the data ('Data Quality Indicator', found in 3 publications). The latter can indicate a variety of factors referring to the conditions of the data, such as its completeness, uniformity or whether it is free of missing values and outliers.</ns0:p><ns0:p>Lastly, we found 19 publications (47.5%) that mentioned the importance of having detailed metadata but did not specify any of its specific aspects in particular. Nevertheless, this result suggests that almost half of the included publications recognise the positive impact of having comprehensive metadata, even if no specific features were mentioned.</ns0:p></ns0:div>
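To illustrate how several of these Metadata Representation methods could be captured in machine-readable form, the following is a minimal sketch in Python. The field names and values are illustrative assumptions made for this example rather than a schema prescribed by the reviewed publications; in practice a standard vocabulary such as DCAT or Dublin Core would typically be used.

```python
import json

# Hypothetical metadata record for a restricted dataset, covering the methods found
# under Metadata Representation: usage/license terms, persistent identifier,
# data type, provenance, versioning and a data quality indicator.
metadata_record = {
    "identifier": "https://doi.org/10.1234/example-dataset",    # Persistent Identifier
    "title": "Example patient registry (restricted access)",
    "usage_and_license_terms": {
        "license": "custom data use agreement",
        "allowed_use": "non-commercial research only",
        "access_procedure": "apply via the data access committee",
    },
    "data_type": "tabular survey data",                          # Data Type Definition
    "provenance": {                                               # Provenance Capturing
        "creator": "Example Hospital Research Group",
        "collection_period": "2018-2020",
        "methodology": "structured questionnaire",
    },
    "version": "2.1",                                             # Versioning
    "quality_indicator": {                                        # Data Quality Indicator
        "completeness": 0.97,
        "missing_values_handled": True,
    },
}

print(json.dumps(metadata_record, indent=2))
```

Because such a record contains only top-level descriptive information, it can be published openly even when the underlying data itself must remain restricted.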
<ns0:div><ns0:head>Data Processing</ns0:head><ns0:p>We will now present the results from the second step of the data life cycle, which involves the processing of the data after it has been collected. We found a variety of methods related to the processes of transforming, modifying and therefore processing the data before the other steps of the life cycle. Some of the methods found in the included publications involved practices to curate and validate the data before it can be published into databases and cloud services. Overall, such methods are aimed to improve the overall consistency and quality of the data, and they are related to the Quality Indicator method found in the Data</ns0:p><ns0:p>Collection class, as it can either positively or negatively affect the quality of the data. We collectively named these methods 'Curation & Validation', and they were found in 12 of the included publications (30%). We also found methods related to statistical techniques for limiting the accuracy or adding noise to the data to prevent the release of identifiable information. Such methods are potentially more applicable to numerical and tabular data, but they can indeed be applied also to questionnaire or survey data, by deciding for example to disclose only parts or modified versions of the original data. We have grouped these methods as the 'Statistical Disclosure' methods, and they were found in 8 publications (20%). The next set of methods found is related to the process of linking the data to other data sources already available in repositories, such as Google Dataset Search 10 , or to make the data suitable to be linkable by others. The 'Data Linking' method, found in 5 publications (12.5%) is related to the Semantic Mapping under the Data Collection class, in the way that both methods refer to the process of creating links between the data and already available knowledge. However, in the Semantic Mapping case, links are aimed to be created between semantic vocabularies and data concepts instead of between data sources like in the case of Data Linking. We also found methods related to the creation of synthetic data from the original data to eliminate the possibility of identifiable or confidential information being exposed. We have collectively named these methods 'Data Synthetization', and they were found in 4 publications (10%). It is now important to clearly define the relationship between the Data Synthetization method and the Anonymization subclass, and why this method has not been included in the subclass. Some readers may have expected the creation of synthetic data to be aligned with the concept of data anonymization, but in the context of this paper, we have made a distinction between the two. The Anonymization subclass refers to methods that are aimed to sanitize the data and make it free of personally identifiable and confidential information. Nevertheless, such processes are often applied to fragments or sections of the data and maintain the non-identifiable information intact. In contrast, with the Data Synthetization method, the data is used as a template for a completely new, and free of confidential information, set of data. Therefore, Data Synthetization and Anonymization are different in the way that, to achieve the removal of identifiable information, the first method requires completely new data to be generated, and the second one, instead, can be applied only to a section of the original data.</ns0:p></ns0:div>
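As a concrete illustration of one of the Statistical Disclosure techniques mentioned above (limiting accuracy and adding noise to numerical data), the sketch below uses NumPy. The attribute, the noise scale and the rounding step are arbitrary assumptions chosen for the example; real disclosure control would be tuned to the dataset and to the applicable privacy requirements.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical numerical attribute from a restricted dataset (e.g. ages of participants).
ages = np.array([23, 37, 41, 52, 68, 74])

# 1) Limit accuracy: round values to the nearest 5 years.
coarsened = (np.round(ages / 5) * 5).astype(int)

# 2) Add random noise: small Gaussian perturbation of each coarsened value.
noisy = coarsened + rng.normal(loc=0.0, scale=2.0, size=ages.shape)

print("original :", ages)
print("coarsened:", coarsened)
print("noisy    :", np.round(noisy, 1))
```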
<ns0:div><ns0:head>Anonymization</ns0:head><ns0:p>Under the Data Processing Class, we also found methods related to different techniques for anonymizing the data, which represent the 'Anonymization' methods. An important aspect, here, is to clearly describe the differences between the methods, as they can often be confused and misinterpreted. Some of the methods refer to the process of removing Personally Identifiable Information (PII) to eliminate the links between the data subjects and the data itself. These methods have been grouped under the 'De-Identification' method, and they were found in 5 of the included publications (12.5%). Next, we found methods related to the reduction of released information, therefore minimizing the original data to a non-personally identifiable version. We also found other methods related to the process of replacing PII with artificially generated information, also called pseudonyms. These methods follow the same concept as the Data Synthetization ones, in the sense that artificial or synthetic information is created to avoid confidential data being exposed. The difference between the two methods is the fact that, while synthetization is applied to the whole data, pseudonymization is only applied to the personally identifiable information.</ns0:p><ns0:p>The methods related to 'Data Minimization' and 'Pseudonymization' were only found in 3 publications each (7.5%). The last set of methods relates to the process of encrypting the whole or parts of the data to limit access to confidential information and PII. Central to this type of encryption method is that the encrypting key has to be kept secure from undesired use and unauthorised access. The 'Anonymization by Encryption' methods were found in 2 publications (5%). Lastly, we found 7 publications (17.5%) that mentioned the use of anonymization techniques to process restricted and confidential data but did not provide clear details regarding the specific type of anonymization used.</ns0:p></ns0:div>
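The following sketch illustrates, in Python, the difference between two of the anonymization techniques described above: de-identification (dropping PII fields entirely) and pseudonymization (replacing PII with artificially generated identifiers, here salted hashes). The record structure and the salt handling are assumptions made for the example; a production system would use a properly managed secret and a vetted anonymization tool.

```python
import hashlib

records = [
    {"name": "Alice Smith", "national_id": "AB123456", "age": 34, "diagnosis": "asthma"},
    {"name": "Bob Jones", "national_id": "CD789012", "age": 58, "diagnosis": "diabetes"},
]

PII_FIELDS = {"name", "national_id"}
SECRET_SALT = "replace-with-a-securely-stored-secret"  # assumption for the example


def de_identify(record: dict) -> dict:
    """De-identification: remove the PII fields altogether."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}


def pseudonymize(record: dict) -> dict:
    """Pseudonymization: replace PII with a salted hash so records remain linkable."""
    out = dict(record)
    for field in PII_FIELDS:
        token = hashlib.sha256((SECRET_SALT + str(out[field])).encode()).hexdigest()[:12]
        out[field] = token
    return out


print([de_identify(r) for r in records])
print([pseudonymize(r) for r in records])
```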
<ns0:div><ns0:head>Data Publication</ns0:head><ns0:p>In the following section, we will present the results of the methods we found belonging to the third step of the data life cycle, which involves the publication of the collected data after it has been processed. Under the 'Data Publication' step, we found methods related to the description of the data's chain of custody, also called 'Data Governance' methods. Data governance aims to describe the standards by which the data is gathered, stored and processed, as well as to establish the responsibilities and authorities for its conservation. These methods, found in 15 publications (37.5%), are related to a variety of other methods found in different classes, such as the Usage & License Terms and Access Control methods, as they aim to improve the safeguarding of the data and to ensure its appropriate use. Next, we found methods related to the process of publishing the data into federated systems and allowing for the data to be combined with other resources, therefore improving both its Findability and Reusability. The 'System Federation' method was found in 8 publications (20%). Other methods that are relevant to the enhancement of FAIR, involve the decision of selecting the most appropriate repository or database to publish the data in. The 'Repository Selection' method was only found in 5 publications (12.5%), and this low number could be explained by the fact that scientists do not find the selecting of the appropriate repository as a difficult task. Possibly, domain experts have a clear understanding of the most used and most reliable repository, and therefore do not have to go through lengthy deliberation to agree on where the data can be published.</ns0:p><ns0:p>We also found methods referring to the delaying or postponing of the publishing of the data, to minimise the effect of the data with respect to the time it was created. For example, by postponing the release of information by a couple of years, it is possible that identifiable information is not relevant anymore or that the data does not comport confidentiality issues any longer. Such methods have collectively been called 'Embargo on Release' and they were found in 3 publications (7.5%). Lastly, we found only 1 publication referring to the process of making the data available through the adoption of a decentralised and distributed system using blockchain, to track and record data sharing and usage. The 'Publishing using Blockchain' method is often a complex task, and it can require high technical skills and expertise, which could explain why this method was only found in 1 publication.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data Usage</ns0:head><ns0:p>Once the data is published, it can also be used. The fourth step of the data life cycle represents the 'Data Usage' step, and it includes a variety of methods involving the access and use of the data. For example, we found that most publications mentioned techniques for limiting access to restricted data to avoid undesired or unauthorised use. Such methods are also related to the Usage & License Terms method under the Metadata Representation class. In fact, the type of use that is allowed on the data can also influence the type of access requirements. For example, a specific dataset can be allowed to be used only for research purposes by university researchers, and this could be defined as the type of access requirement by only allowing access to the data to, for example, registered university researchers.</ns0:p><ns0:p>This 'Access Control' method was the most common among all the methods and all the life cycle steps, with a total of 29 publications (72.5%) mentioning it. Next, we found methods referring to the process of moving the analysis to where the data is stored. More specifically, in most common cases the data is accessed and stored or downloaded into personal machines or cloud systems, where the data analysis is then performed. Through the 'Algorithm to Data' method, the data is never fully accessed by the user and cannot be downloaded. Instead, the user is only able to send their algorithms to the data to perform data analysis. This method, usually, does not require access control systems to be in place, as the analysis is most often allowed to return only aggregate results and, therefore, the concerns for the unintentional release of confidential and private information are limited. An example of the Algorithm to Data method is the Personal Health Train, a tool that allows the distributed analysis of health data while preserving privacy protection and ensuring a secure infrastructure <ns0:ref type='bibr' target='#b8'>(Beyan et al., 2020)</ns0:ref>. The Algorithm to Data method was found in 7 publications (17.5%). We also found methods involving the establishment or selection of safe infrastructure to allow for data usage. This 'Secure Environment Selection' process, found in 5 publications (12.5%), can often comprise a secure virtual or physical machine that limits the type of use granted, such as not allowing the download or the sharing of the data as well as limiting the information available for analysis. Lastly, we found 1 publication mentioning what we have called the 'Algorithm Predefinition' method, which refers to the process by which the data can only be analysed through a specific set of algorithms or statistical tests, that have been predefined a priori by the data owners.</ns0:p></ns0:div>
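The sketch below gives a highly simplified, conceptual illustration of the 'Algorithm to Data' idea described above: the analyst never receives the raw records, only aggregate results computed on the data holder's side. It is a toy Python example written for this review summary, not a description of the Personal Health Train or any specific infrastructure; the record values and the minimum group size are assumptions.

```python
# Toy illustration of "Algorithm to Data": the raw records stay with the data holder,
# and submitted analyses may only return aggregate results.

SENSITIVE_RECORDS = [
    {"age": 34, "cholesterol": 180},
    {"age": 58, "cholesterol": 240},
    {"age": 45, "cholesterol": 210},
]

MIN_GROUP_SIZE = 3  # assumption: refuse aggregates over very small groups


def run_remote_analysis(aggregate_fn):
    """Execute a user-supplied aggregate function on the data holder's side."""
    if len(SENSITIVE_RECORDS) < MIN_GROUP_SIZE:
        raise PermissionError("Group too small: an aggregate result could be identifying")
    result = aggregate_fn(SENSITIVE_RECORDS)
    if not isinstance(result, (int, float)):
        raise PermissionError("Only scalar aggregate results may leave the environment")
    return result


# The analyst submits code; only the aggregate value is returned.
mean_cholesterol = run_remote_analysis(
    lambda rows: sum(r["cholesterol"] for r in rows) / len(rows)
)
print(mean_cholesterol)  # 210.0
```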
<ns0:div><ns0:head>Post Data Usage</ns0:head><ns0:p>After the data is used and analysed, the last step of the data life cycle illustrates the methods describing what is required to be done for the 'Post Data Usage'. We found methods referring to the process of acknowledging the data owners by, for example, including information about the archive hosting the data or citing the original source. Next, we also found methods referring to the requirement from the data owners to archive the results from the analysis (also called secondary data) into the same repository as the original data. The 'Owner Acknowledgement' method was found in 5 publications (12.5%) and the 'Result Archiving' method was found in only 2 publications (5%).</ns0:p></ns0:div>
<ns0:div><ns0:head>Technology Readiness Level (TRL)</ns0:head><ns0:p>A Technology Readiness Level (TRL) was estimated for each publication based on the maturity of each proposed system. For the year 2016, we found only 1 paper, with TRL 1 & 2, and for the subsequent year (2017) we also found only 1 paper, with TRL 5 & 6. In 2018, instead, we found 2 papers, one with TRL 1 & 2 and the other one with TRL 3 & 4. We decided to focus on the analysis of the methods proposed by papers with the highest TRL levels, 7 to 9. This decision was made based on the assumption that if a method was employed in an infrastructure tested and implemented in the real world, such a method represented a reliable and mature way of dealing with restricted research data.</ns0:p><ns0:p>Table 3. Visual representation of the frequency of each method found in the included publications. If a publication presented the method assigned to the column, then it would show an 'X' coloured cell. The colour of the cell corresponds to the Technology Readiness Level (TRL) assigned to the given publication: from lightest (light blue - TRL 1 & 2) to darkest (dark blue - TRL 7, 8 & 9). At the bottom of the figure, there is a row showing the number of articles each method has been found in, also colour graded from dark (many instances) to light grey (few instances). Overall, this table shows that there are wide variations in the frequency of the methods, but also that the vast majority of methods present TRL scores of 5 and above. We can also see that the coverage of the methods is rather broad and they are approximately evenly distributed among each class.</ns0:p><ns0:p>Moreover, Data Type Definition and Data Quality Indicator methods could give insights to the researcher about the type of data included in the dataset as well as its quality. Overall, each of these methods can be implemented after the data has been released but, of course, it is advisable to have an optimal metadata structure from the very first stage of the data life cycle. Extensive and highly descriptive metadata information is a key component of making restricted data FAIR, as it can be designed not to contain any confidential information, but still benefit the research community.</ns0:p></ns0:div>
<ns0:div><ns0:head>Limitations</ns0:head><ns0:p>Several limitations need to be noted regarding the present study. Despite the review offering some meaningful insights into technical solutions to overcome the barrier to researching restricted data, it has certain limitations in terms of the selection process as well as data analysis and extraction. A potential source of bias in this study lies in the fact that only one author was primarily responsible for the application of the inclusion/exclusion criteria, as well as for the data extraction and analysis. Although the author tried to assess each publication objectively and methodically following the criteria, it is possible that different results would have been generated if more authors were part of the evaluation. Moreover, the vast majority of papers were excluded based on the 4th eligibility criteria, which was to 'clearly describe the proposal or application of the FAIR principles in the context of restricted data'. This suggests that even though the papers mentioned the FAIR principles within their abstracts or full text, we could not find a clear application of the principles regarding restricted data. As a possible extension to this work, it would be interesting to contact the authors of the excluded papers and perform a survey to better understand and verify the intentions and limitations of the application of the principles in this context.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>The present study set out to provide the first systematic account of the relationship between Open Science, restricted data and the FAIR principles. The findings of this research provide insights into different ways restricted research data can be used, shared, stored and analysed, by respecting the privacy concerns in the reality of the Open Science world. With our results, we are providing an overview of the methods used when using restricted data in a FAIR manner, as well as a categorisation of such methods in both human and machine readable formats. The Data Methods framework and ontology we developed, can be used in the future to comply with the FAIR principles and provide information on how research on restricted data has been developed.</ns0:p><ns0:p>If the debate is to be moved forward, a better understanding of how the information resulting from this review can help in further achieving FAIR in restricted research data is needed. More research is required to develop a modelling strategy for improving the Findability, Accessibility, Interoperability and Reusability of restricted data. The FAIR Principles have been widely used in a variety of fields, and many guidelines and frameworks have been proposed, such as the 'Top 10 FAIR Data & Software</ns0:p><ns0:p>Things' <ns0:ref type='bibr' target='#b28'>(Martinez et al., 2019)</ns0:ref>, the 'FAIR Metrics' <ns0:ref type='bibr' target='#b51'>(Wilkinson et al., 2018)</ns0:ref>, the 'FAIR Data Maturity</ns0:p><ns0:p>Model' <ns0:ref type='bibr' target='#b17'>(Group et al., 2020)</ns0:ref> and the Internet of FAIR Data and Services (IFDS) 11 . Nevertheless, no FAIR framework has yet been proposed that directly addresses the issues concerning research with confidential and restricted access data. It would be interesting to assess how the information about the Data Methods found in this review can be introduced in the metadata of restricted data, and investigate whether available metadata models are suitable for such implementation. In fact, metadata has a key role in the development of FAIR workflows and, as discussed previously, we believe that extensive metadata information is also key for the reuse of restricted access data in a FAIR manner. We conclude that with the present systematic Manuscript to be reviewed</ns0:p><ns0:p>Computer Science review we are providing a framework to organise our knowledge about the methods employed in restricted data research, highlighting the importance of Metadata Representation and the FAIR Principles. We hope that our results can find practical applications both for stakeholders and researchers, and the methods found can be implemented in future projects.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2022:04:73043:1:0:NEW 17 Jun 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. PRISMA flow diagram illustrating the results from the search and selection process, performed on the Google Scholar database.</ns0:figDesc><ns0:graphic coords='8,141.73,99.92,413.57,430.06' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>upholding standard data use conditions Lakerveld et al. 2017 Identifying and sharing data for secondary data analysis of physical activity, sedentary behaviour and their determinants across the life course in Europe: general principles and an example from DEDIPAC Bertocco et al. 2018 Cloud access to interoperable IVOA-compliant VOSpace storage Kleemola et al. 2019 A FAIR guide for data providers to maximise sharing of human genomic data Sun et al. 2019 A Privacy-Preserving Infrastructure for Analyzing Personal Health Data in a Vertically Partitioned Scenario. Rockhold et al. 2019 Open science: The open clinical trials data journey Demotes-Mainard et al. 2019 How the new European data protection regulation affects clinical research and recommendations? Van Atteveldt et al. 2019 Computational communication science-toward open computational communication science: A practical road map for reusable data and code Dimper et al. 2019 ESRF Data Policy, Storage, and Services Lahti et al. 2019 'As Open as Possible, as Closed as Necessary'-Managing legal and owner-defined restrictions to openness of biodiversity data. Becker et al. 2019 DAISY: A Data Information System for accountability under the General Data Protection Regulation Kephalopoulos et al. 2020 Indoor air monitoring: sharing and accessing data via the Information Platform for chemical monitoring (IPCHEM) Hoffmann et al. 2020 Guiding principles for the use of knowledge bases and real-world data in clinical decision support systems: report by an international expert workshop at Karolinska Institutet Cullinan et al. 2020 Unlocking the potential of patient data through responsible sharing-has anyone seen my keys? Nicholson et al. 2020 Interoperability of population-based patient registries Paprica et al. 2020 Essential requirements for establishing and operating data trusts: practical guidance co-developed by representatives from fifteen Canadian organizations and initiatives Jaddoe et al. 2020 The LifeCycle Project-EU Child Cohort Network: a federated analysis infrastructure and harmonized data of more than 250,000 children and parents Bader et al. 2020 The International Data Spaces Information Model-An Ontology for Sovereign Exchange of Digital Content Aarestrup et al. 2020 Towards a European health research and innovation cloud (HRIC) Suver et al. 2020 Bringing Code to Data: Do Not Forget Governance Roche et al. 2020 Open government data and environmental science: a federal Canadian perspective Beyan et al. 2020 Distributed analytics on sensitive medical data: The Personal Health Train Choudhury et al. 2020 Personal health train on FHIR: A privacy preserving federated approach for analyzing FAIR data in healthcare Arefolov et al. 2021 Implementation of The FAIR Data Principles for Exploratory Biomarker Data from Clinical Trials Ofili et al. 2021 The Research Centers in Minority Institutions (RCMI) Consortium: A Blueprint for Inclusive Excellence Haendel et al. 2021 The National COVID Cohort Collaborative (N3C): rationale, design, infrastructure, and deployment Kumar et al. 2021 Federated Learning Systems for Healthcare: Perspective and Recent Progress Abuja et al. 2021 Public-Private Partnership in Biobanking: The Model of the BBMRI-ERIC Expert Centre Schulman et al. 2021 The Finnish Biodiversity Information Facility as a best-practice model for biodiversity data infrastructures Cooper et al. 
2021 Perspective: The Power (Dynamics) of Open Data in Citizen Science Øvrelid et al. 2021 TSD: A Research Platform for Sensitive Data Hanisch et al. 2021 Research Data Framework (RDaF): Motivation, Development, and A Preliminary Framework Core Read et al. 2021 Embracing the value of research data: introducing the JCHLA/JABSC Data Sharing Policy Hanke et al. 2021 In defense of decentralized research data management Zegers et al. 2021 Mind Your Data: Privacy and Legal Matters in eHealth Groenen et al. 2021 The de novo FAIRification process of a registry for vascular anomalies Delgado Mercè et al. 2021 Approaches to the integration of TRUST and FAIR principles Jeliazkova et al. 2021 Towards FAIR nanosafety data Demchenko et al. 2021 Future Scientific Data Infrastructure: Towards Platform Research Infrastructure as a Service (PRIaaS)</ns0:figDesc></ns0:figure>
<ns0:note place='foot' n='10'>https://datasetsearch.research.google.com</ns0:note>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>system. As mentioned in the Results section, the TRLs were classified into 4 groups: TRL 1 & 2, TRL 3 & 4, TRL 5 & 6 and TRL 7, 8 & 9. The higher the TRL, the more mature the research infrastructures proposed in the publications are. By grouping the included papers by year, we have found that the ones published between 2019 and 2021 had the full array of TRLs. This means that at least one publication published each year had the lowest level (TRL 1 & 2) and at least one publication had the highest level (TRL 7 to 9). For the year 2016, we found only 1 paper with TRL 1 & 2, and for the subsequent year</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Visual representations of the methods classes found during data analysis. Below each method a short description can be found. Note the 'is subclass of' relations.</ns0:figDesc><ns0:graphic coords='14,141.73,202.03,413.49,354.09' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>represents the top-level information of the data it describes, it can be created, expanded and modified a posteriori, and does not intrinsically impose confidentiality concerns. The methods found in this reviewbelonging to the Metadata Representation Class are clearly related to the FAIR Principles, as they allow for better Findability, Accessibility, Interoperability and Reusability of the resource. In more detail, information about the Usage & License Terms could help researchers to understand exactly what actions are allowed on the data and how to request access, and Provenance Capturing could give important information about the data owners and stakeholders. Details about different available versions of the data (Versioning), as well as the Use of a Personal Unique Identifier, could help with Interoperability, by clearly stating the exact data used for analysis. Moreover, Data Type Definition and Data Quality</ns0:figDesc></ns0:figure>
<ns0:note place='foot' n='11'>https://www.go-fair.org/resources/internet-fair-data-services/</ns0:note>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>List with short Authors, year and title references of the final 40 publications included in the systematic review.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:note place='foot' n='1'>https://www.cos.io 2 https://losc.ligo.org</ns0:note>
<ns0:note place='foot' n='6'>https://www.nih.gov 7 https://www.hhs.gov 8 https://www.go-fair.org/resources/internet-fair-data-services/ 9 https://eosc-portal.eu</ns0:note>
</ns0:body>
" | "Reviewer 1 (Anonymous)
Basic reporting
1. To date, it remains unclear how sensitive data should be managed, accessed, and analysed (Cox et al., 2016).
- 6-year-old statement is given to express today’s situation
The authors have added more current references to the statement. We have now added more references, namely:
(Leonelli et al., 2021) and (Bender et al., 2022)
2. Is there any solid ground behind the selection of Eligibility criteria?
3. Condition four of the eligibility criteria is not present in a vast number of articles?
The eligibility criteria were developed based on the fact that the authors wanted to include articles in the systematic
review which clearly had a connection with the FAIR principles. We only included articles that were written in English,
as it is the only common language between the authors, and we also excluded articles that were not peer-reviewed and
not scientific research papers (such as other systematic reviews). Condition 4 is actually the criterion by which the vast
majority of papers were rejected, as seen in Figure 1.
4. The corpus of publications resulting from the query was exported to Rayyan
- Why this specific tool is being used?
- This tool was presented in 2016. Do we have any latest tool available in this context?
- Comparative analysis is required for the selection of such tool from the list of available tools
- It is seemed that the proposed work is heavily depending on this tool. What if this tool is removed / replaced
with some other tool?
- Which specific version of the tool is used?
There was no specific reason for choosing this tool, only the author's personal preference. As Rayyan is a Software as
a Service tool, we found the web interface more friendly in this specific instance. Rayyan is only a citation
management tool. The process of inclusion and exclusion of the papers is not dependent in any way on Rayyan, and
other tools (such as Zenodo) could have been chosen to arrive at the same results. Moreover, as Rayyan is a SaaS,
no details of the tool version are available on their website. We have made it explicit in the text that the
analysis is not dependent on the tool and that, indeed, other tools could have been used.
5. In the context of this research, the cycle has been slightly modified to better suit our results and to bring
more awareness to also the stages when the data is processed, as well as the steps required after the data is
used.
- If there is no other reason of modifying data lifecycle management then it seems a bit biased to modify the
process.
- Secondly, what alteration has been made?
We have rephrased the highlighted part to better express the authors' intentions, which were not to modify the cycle
but, instead, to add further steps. The steps added are related to the stages where the data is processed and the
stage after the data is used. In the original Data Lifecycle Management these two steps are not made explicit, and we
added these two steps to our own cycle to bring more awareness to the methods we found belonging to these two
stages.
6. TRL levels details are missing.
7. Line 244 to 251 can be better represented in a tabular form.
We have added a table (Table 1) to better describe the TRL details, as well as providing a description for each class.
8. What is OWL ontology and why it is being used here?
9. An RDF ontology was generated to describe the Data Methods subclass hierarchy.
- Is there any alternate to the above representation?
- Why do we need this step?
We have expanded the text, including a description of the OWL language and why this language was chosen. We have
also added an explanation of why we decided to build the ontology, which is to keep the alignment with the FAIR
principles. We believe that the best semantic language for building an ontology, in this instance, is the OWL
language.
Is the review of broad and cross-disciplinary interest and within the scope of the journal?
- Yes
Has the field been reviewed recently? If so, is there a good reason for this review (different point of view,
accessible to a different audience, etc.)?
- No, the authors' claim is that this is the first study of this type.
Does the Introduction introduce the subject and make it clear who the audience is/what the motivation is?
- Yes, up to some extent.
Experimental design
Methods described with sufficient detail & information to replicate.
- No, with the provided details it will be difficult to reproduce the result because it is working on the result of
Google Scholar. At different times, Google Scholar will return different papers. This will affect the result.
We have reported the date on which the query was run (16th of September 2021), and we have specified what the
search query was and the fact that we did not filter any of the results further, therefore exporting the full corpus of
papers returned by Google Scholar. This means that in the future, if the results are to be replicated, the search query
can be run again on Google Scholar and the “time range” can be customised with the end date set to the 16th of September
2021. Running the search query with the same end date as our research would produce the same corpus of
publications.
Is the Survey Methodology consistent with a comprehensive, unbiased coverage of the subject? If not, what is
missing?
Are sources cited? Quoted or paraphrased as appropriate?
- Yes
Is the review organized logically into coherent paragraphs/subsections?
- Yes
Validity of the findings
Is there a well-developed and supported argument that meets the goals set out in the Introduction?
- In this context, goals are not being set out in the introduction.
The authors believe that the goals of the paper are set out in the Introduction under the “Problems Statements
and Contributions” section.
Does the Conclusion identify unresolved questions / gaps / future directions?
- There is a limitation heading found in the article in this regard.
Overall, this article is divided into two parts:
- Domain / problem introduction
- Proposed Solution
First part is being presented very well. Covering subject area well e.g., introduction, problem statement,
background etc. but when it comes to problem solution then some weak technical stuff is being observed e.g.,
Methods, eligibility criteria etc. The Results section contains nothing but the detailed explanation of the
selected dataset. Similarly, the outcome of the paper is heavily depended on the selected dataset. E.g.,
different input articles will have different results.
Reviewer 2 (Anonymous)
Basic reporting
The paper analyzes various practices being followed in the research community to meet the data sharing
proposed by FAIR principles. The manuscript provides a detailed survey of key mechanisms involved in
applying FAIR principles on restricted research data.
Experimental design
The identified research questions look appropriate, however the utility of RQ2 and RQ3 should be further
elaborated, i.e., how research community may take advantage of proposed categorization and whether the
mature practices can be converted into some kind of standards or standard operation procedures or not?
We have extended the “Problem Statement & Contributions” section to better explain RQ2 and RQ3. Moreover, under
the “Synthesis” section under Methods, we have given more details on the potential use of the ontology.
The records show that the majority of the papers are unable to meet the criteria set by the submitted manuscript
and 40 articles are selected for investigation of the research questions. These numbers suggest that the research
community is either not aware of FAIR or the principles are not believed to improve the quality of research. A
survey may be used to contact the authors of the excluded papers to verify the intention of not following FAIR
principles.
We agree with this statement and appreciate the suggestion. We have added this as a potential future work because
we do see the benefit that a survey would have on our research. Nevertheless, we believe that this is out of scope in
the context of this systematic review.
It is suggested to target papers that have received awards in conferences or journals in the study as most of
the conferences consider reproducibility as key condition for such awards. Also it can be found out whether
compliance with FAIR principles is a requirement for such awards or not?
Recently a few journals have appeared that particularly target publication of datasets (Data in Brief). Authors
may consider including journal or conference based criteria to evaluate further the research questions.
We have targeted papers that have been peer-reviewed, but not necessarily awarded in conferences or
similar. Actually, when dealing with restricted data, the reproducibility aspect is not a key requirement as
in other research contexts. This is because if research is presented that uses confidential data for its
analysis, then such data cannot be made public.
Validity of the findings
A comprehensive discussion has been provided based on the collected data along with limitations, however
authors apparently have relied on the reader for developing the inferences related to the research questions. It
is suggested to provide crisp response to the research questions particularly RQ2 and RQ3.
As mentioned above, we have expanded the “Statement and Contribution” and the “Synthesis” sections. We have also
re-iterated in the conclusion how our results can help the community with the compliance of FAIR when researching
restricted data.
Authors have discussed the aspect of anonymization of data however it is suggested to investigate the aspect
keeping finer granularity in mind, i.e., what kind of anonymization principles should be or are already part of FAIR?
The systematic review set out as a goal to understand what methods are used when dealing with restricted/
confidential data in a FAIR manner. Therefore, we have designed it to actually answer such questions, as we did not know
yet what types of anonymisation were included in FAIR.
Table 2 provides fields of research categories and no example can be found that belongs to multiple fields. As
the manuscript is submitted to a computer science journal, it is therefore suggested to make an effort to find a
few papers that belong to either core computer science or its constituent fields, i.e. communication networks,
social network analysis, software project management etc. that usually include research works involving
sharing of data.
The papers found mostly belonged to the biomedical domain, which would explain the connection with
confidential/restricted data. The computer science fields mentioned did not appear in our final set of included
publications, possibly because there is still not much work done that connects the FAIR principles with restricted data
and, for example, the field of “software project management”.
" | Here is a paper. Please give your review comments after reading it. |
714 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>One of the major purposes of building a model is to increase its accuracy within a shorter timeframe through the feature selection process. This is carried out by determining the importance of the available features in a dataset using Information Gain (IG). This process calculates the amount of information contained in each feature, and features with high values are selected to accelerate the performance of an algorithm. In selecting informative features, Information Gain (IG) uses a threshold value (cut-off). Therefore, this research aims to determine the time and accuracy performance needed to improve feature selection by integrating IG, the Fast Fourier Transform (FFT), and SMOTE methods. The feature selection model is then applied to Random Forest, which is a tree-based machine learning algorithm with random feature selection. A total of 8 datasets, consisting of 3 balanced datasets and 5 imbalanced datasets, were used to conduct this research.</ns0:p><ns0:p>Furthermore, the Synthetic Minority Over-sampling Technique (SMOTE) was used to balance the data in the imbalanced datasets. The result showed that feature selection using Information Gain, FFT, and SMOTE improved the performance accuracy of Random Forest.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Higher accuracy and quicker processing time must both be considered in order to build a model. Unfortunately, the two are contradictory, because an effort to increase the accuracy tends to lengthen the processing time, and vice versa. Therefore, this study determined the accuracy performance and the required time when improving feature selection by integrating IG, the Fast Fourier Transform (FFT) and SMOTE methods.</ns0:p><ns0:p>Random Forest is a classification algorithm based on trees built from randomly selected features <ns0:ref type='bibr' target='#b11'>(Gounaridis and Koukoulas, 2016;</ns0:ref><ns0:ref type='bibr' target='#b22'>Prasetiyowati et al., 2020a</ns0:ref><ns0:ref type='bibr' target='#b21'>Prasetiyowati et al., , 2021))</ns0:ref>, so the features used to build a decision tree are not necessarily informative <ns0:ref type='bibr' target='#b5'>(Breiman, 2001;</ns0:ref><ns0:ref type='bibr' target='#b21'>Prasetiyowati et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b24'>Scornet et al., 2015)</ns0:ref>. Because this random process allows uninformative features to be selected, improving the feature selection process is necessary to make it informative while keeping a fast execution time. Several studies have proposed feature selection processes for Random Forest <ns0:ref type='bibr' target='#b0'>(Adnan, 2014;</ns0:ref><ns0:ref type='bibr' target='#b21'>Prasetiyowati et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b29'>Ye et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b31'>Zhang and Suganthan, 2014)</ns0:ref>, including the use of IG with a threshold based on the standard deviation value <ns0:ref type='bibr' target='#b21'>(Prasetiyowati et al., 2021)</ns0:ref>. Zhang proposed a new method for Random Forest that increases tree diversity by combining a different rotation space at the root node <ns0:ref type='bibr' target='#b31'>(Zhang and Suganthan, 2014)</ns0:ref>. Yuming et al. investigated feature selection for Random Forests using a stratified sampling method, and the results showed enhanced Random Forest performance <ns0:ref type='bibr' target='#b29'>(Ye et al., 2013)</ns0:ref>.</ns0:p><ns0:p>The number of features in a dataset varies from a few to more than 100. However, not all features are informative; some are irrelevant or redundant <ns0:ref type='bibr' target='#b17'>(Lin et al., 2018)</ns0:ref>, and this affects performance and accuracy <ns0:ref type='bibr' target='#b5'>(Chandrashekar and Sahin, 2014)</ns0:ref>. One of the methods used to solve this problem is Information Gain (IG), an essential entropy-based feature weighting technique <ns0:ref type='bibr' target='#b5'>(Chandrashekar and Sahin, 2014;</ns0:ref><ns0:ref type='bibr' target='#b9'>Elmaizi et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b12'>Jadhav et al., 2018;</ns0:ref><ns0:ref type='bibr' target='#b19'>Nguyen et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b20'>Odhiambo Omuya et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b26'>Singer et al., 2020)</ns0:ref>. According to preliminary studies, IG measures the reduction in entropy before and after a split and is used to decide whether an attribute should be kept or discarded.
For instance, features with a gain equal to or greater than a predetermined threshold value of 0.05 are selected for the algorithm's classification process <ns0:ref type='bibr' target='#b7'>(Demsˇar and Demsar, 2006;</ns0:ref><ns0:ref type='bibr' target='#b28'>Yang et al., 2020)</ns0:ref>. Several other studies calculate the frequency of each feature to determine the threshold value for the final feature subset <ns0:ref type='bibr' target='#b27'>(Tsai and Sung, 2020)</ns0:ref>. However, some also use the standard deviation to determine the threshold <ns0:ref type='bibr' target='#b21'>(Prasetiyowati et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b25'>Sindhu and Radha, 2020)</ns0:ref>.</ns0:p><ns0:p>Furthermore, the preliminary study shows that the standard-deviation method used to determine the threshold value did not take the class balance of the dataset into account. Several techniques have been developed to deal with class imbalance, one of which is the Synthetic Minority Oversampling Technique, also known as SMOTE <ns0:ref type='bibr' target='#b6'>(Chawla et al., 2002;</ns0:ref><ns0:ref type='bibr'>Feng et al., 2021)</ns0:ref>. SMOTE <ns0:ref type='bibr' target='#b15'>(Juez-Gil et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b16'>Li et al., 2021;</ns0:ref><ns0:ref type='bibr' target='#b18'>Mishra and Singh, 2021;</ns0:ref><ns0:ref type='bibr' target='#b32'>Zhu et al., 2017)</ns0:ref> is an effective oversampling technique that reduces the risk of overfitting <ns0:ref type='bibr' target='#b6'>(Chawla et al., 2002)</ns0:ref>. However, SMOTE tends to cause problems when applied to imbalanced multiclass data, where over-generalization becomes a more severe problem because synthetic samples of one of the minority classes can spread into the region of the majority class <ns0:ref type='bibr' target='#b32'>(Zhu et al., 2017)</ns0:ref>. The SMOTE stages are as follows <ns0:ref type='bibr'>(Feng et al., 2021)</ns0:ref>; steps 2 and 4 are repeated until the desired amount is obtained. This study follows up the previous studies <ns0:ref type='bibr' target='#b21'>(Prasetiyowati et al., 2021</ns0:ref><ns0:ref type='bibr' target='#b22'>(Prasetiyowati et al., , 2020a</ns0:ref><ns0:ref type='bibr' target='#b23'>(Prasetiyowati et al., , 2020b))</ns0:ref>. The researchers began by using Correlation-based Feature Selection (CBF) for feature selection. That study required less time for Random Forest (RF) than running it without feature selection; however, the accuracy was poor <ns0:ref type='bibr' target='#b22'>(Prasetiyowati et al., 2020a)</ns0:ref>. In the second study, the researchers continued to use CBF, but the dataset was first transformed using the Fast Fourier Transform (FFT) and reverted using the IFFT. That study resulted in a better accuracy value than the previous one: the average accuracy for the transformed datasets increased by 0.03 to 0.08% compared to the original datasets <ns0:ref type='bibr' target='#b23'>(Prasetiyowati et al., 2020b)</ns0:ref>. Even though the required time in the second study was shorter than that of RF without feature selection, the total time did not include the time required for transforming the dataset. To improve both the required time and the accuracy value, the third study used Information Gain with a threshold based on the standard deviation <ns0:ref type='bibr' target='#b21'>(Prasetiyowati et al., 2021)</ns0:ref>.
This third study resulted in better accuracy than the previous studies, and the required time was also better. Nonetheless, the accuracy obtained could still not surpass that of RF without feature selection; the third study was only superior in terms of required time. The need for a higher accuracy value prompted the researchers to apply the FFT to the features, since the previous studies showed that the FFT could increase the accuracy value <ns0:ref type='bibr' target='#b23'>(Prasetiyowati et al., 2020b)</ns0:ref>. In addition, this study proposes the integration of the Information Gain, Fast Fourier Transform (FFT), and Synthetic Minority Oversampling Technique (SMOTE) algorithms to improve the accuracy of Random Forest. The FFT is used to transform feature values into complex numbers consisting of real and imaginary parts, while SMOTE is used for class imbalance problems. The real values are taken, and their median is calculated to determine the threshold. The stages, or roadmap, of this study can be seen in Figure <ns0:ref type='figure'>1</ns0:ref>.</ns0:p><ns0:p>This study is organized as follows: sections 2 and 3 describe the related research and the proposed method. The results and comparisons with other methods and studies are described in section 4. Finally, the research conclusion is discussed in section 5.</ns0:p></ns0:div>
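For readers who want to see the gain computation from the preceding paragraphs in concrete form, a minimal Python sketch of the entropy-based Information Gain with a fixed 0.05 cut-off follows; the toy feature values and the variable names are invented for illustration and are not the datasets or code used in this study.

import numpy as np

def entropy(y):
    # Shannon entropy of a label vector.
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def information_gain(x, y):
    # gain(y, A) = E(y) - sum_c |y_c|/|y| * E(y_c) for a discrete feature x.
    weighted = 0.0
    for value in np.unique(x):
        mask = (x == value)
        weighted += mask.mean() * entropy(y[mask])
    return entropy(y) - weighted

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=200)
x_informative = y ^ (rng.random(200) < 0.1)   # mostly copies the label
x_noise = rng.integers(0, 2, size=200)        # unrelated to the label

gains = {"informative": information_gain(x_informative, y),
         "noise": information_gain(x_noise, y)}
threshold = 0.05  # fixed cut-off, as in the works cited above
print(gains, [name for name, g in gains.items() if g >= threshold])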
<ns0:div><ns0:head>Materials & Methods</ns0:head><ns0:p>This study proposed a feature selection method using the median of the Information Gain (IG) values transformed with the Fast Fourier Transform (FFT) to obtain real and imaginary parts. Only the real values were taken to calculate the median of the IG, which is used to determine the threshold (cut-off) for the subsequent processes. The equation used to calculate the IG value is shown in equation (1).</ns0:p><ns0:formula xml:id='formula_0'>gain(y, A) = E(y) - \sum_{c \in values(A)} \frac{|y_c|}{|y|} E(y_c) \quad (1)</ns0:formula><ns0:p>Here c ranges over the values of the attribute A, and y_c is the subset of y for which A takes the value c. E(y) is the entropy of y, and the summation term is the total entropy obtained after splitting the data based on the attribute A.</ns0:p><ns0:p>In the next step, the Information Gain values are transformed using the FFT as in equations (2) and (3).</ns0:p><ns0:formula xml:id='formula_1'>X[k] = \sum_{n=0}^{N-1} x[n] \, W_N^{kn}, \quad k = 0, 1, \ldots, N-1 \quad (2)</ns0:formula><ns0:p>where W_N, referred to as the twiddle factor, has the value e^{-j 2\pi / N}; hence</ns0:p><ns0:formula xml:id='formula_2'>X[k] = \sum_{n=0}^{N-1} x[n] \, e^{-j 2\pi k n / N}, \quad k = 0, 1, \ldots, N-1 \quad (3)</ns0:formula><ns0:p>The IG transformed by the FFT consists of complex numbers with real and imaginary parts. This study used the real values of the transformation results to calculate the median, which is the middle value that divides the data into two halves. The median equation is shown in equation (4).</ns0:p><ns0:formula xml:id='formula_3'>Me = x_{(n+1)/2} \quad (4)</ns0:formula><ns0:p>where n is the number of data points, taken from the real values of the IG. After obtaining the median value, the next step is to apply it as the threshold (cut-off): when the IG value of a feature is greater than or equal to (>=) the median, the feature is selected. Furthermore, this study also applies SMOTE to datasets with only two classes, namely a minority and a majority class, where SMOTE synthesizes only the minority data to balance it with the majority class. For multiclass datasets with more than one minority class, this study proposes a SMOTE repetition technique so that every minority class approaches the same number of instances as the majority class. The flow chart of the proposed method is shown in Fig. <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>.</ns0:p></ns0:div>
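A minimal Python sketch of how equations (1)-(4) could be chained is given below; mutual_info_classif is used only as a stand-in estimator of the per-feature Information Gain, and the placeholder data and the ordering of the gain vector are assumptions of this illustration rather than the authors' implementation.

import numpy as np
from sklearn.feature_selection import mutual_info_classif

def median_real_threshold_selector(X, y):
    # Select feature indices whose gain is >= the median of the real parts
    # of the FFT of the gain vector, following equations (1)-(4) above.
    gains = mutual_info_classif(X, y, random_state=0)  # eq. (1): one gain value per feature
    spectrum = np.fft.fft(gains)                       # eqs. (2)-(3): complex DFT of the gain vector
    threshold = np.median(spectrum.real)               # eq. (4): median of the real parts
    return np.where(gains >= threshold)[0], threshold

rng = np.random.default_rng(1)
X = rng.random((300, 20))                              # placeholder feature matrix
y = (X[:, 0] + 0.2 * rng.random(300) > 0.5).astype(int)
selected, thr = median_real_threshold_selector(X, y)
print("threshold:", round(float(thr), 4), "selected features:", selected)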
<ns0:div><ns0:head>Data preparation</ns0:head><ns0:p>This research was carried out on a computer with an Intel® Core™ i5 processor, a 1.6 GHz CPU, 12 GB of RAM, and a 64-bit Windows 10 Professional operating system. The development environment used Python, Matlab, and Weka 3.9.2. Meanwhile, 8 datasets from the UCI Machine Learning Repository <ns0:ref type='bibr'>(Dua, D. and Graff, C, n.d.)</ns0:ref> were used, including EEG Eye, Cancer ('Breast Cancer Wisconsin (Diagnostic) Data Set Predict whether the cancer is benign or malignant,' n.d.), Contraceptive Method, Dermatology, Divorce <ns0:ref type='bibr' target='#b30'>(Yöntem and Ilhan, 2019)</ns0:ref>, CNAE-9, Urban Land Cover <ns0:ref type='bibr' target='#b13'>(Johnson, 2013;</ns0:ref><ns0:ref type='bibr' target='#b14'>Johnson and Xie, 2013)</ns0:ref>, and Epilepsy <ns0:ref type='bibr' target='#b3'>(Andrzejak et al., 2001)</ns0:ref>. Information and details of each dataset are shown in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 1 Dataset Details</ns0:head><ns0:p>Each dataset was tested 10 times, each run with a different random seed, and 10-fold cross-validation was used to split the data into training and test sets.</ns0:p></ns0:div>
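A sketch of the evaluation protocol just described — ten runs with different random seeds, each evaluated with 10-fold cross-validation of a Random Forest — is shown below; the study itself used Weka 3.9.2, so this scikit-learn version on synthetic data is only an approximation of the setup.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=500, n_features=20, random_state=42)  # stand-in dataset

scores = []
for seed in range(10):  # 10 repetitions with different random seeds
    clf = RandomForestClassifier(random_state=seed)
    cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    scores.append(cross_val_score(clf, X, y, cv=cv, scoring="accuracy").mean())

print("mean accuracy over 10 seeds:", round(float(np.mean(scores)), 4))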
<ns0:div><ns0:head>Results and Discussion</ns0:head><ns0:p>This study conducted the feature selection and SMOTE experiments using the Weka machine learning tools (version 3.9.2) and MATLAB. The performance of the proposed model was compared to other methods, namely Correlation-based Feature Selection (CBF), Information Gain (IG) with a threshold of 0.05, IG with a threshold based on the standard deviation value <ns0:ref type='bibr' target='#b21'>(Prasetiyowati et al., 2021)</ns0:ref>, and the original Random Forest <ns0:ref type='bibr' target='#b5'>(Breiman, 2001)</ns0:ref>. The required time and the accuracy performance are reported in two parts, namely for the proposed feature selection and for the datasets processed with SMOTE.</ns0:p><ns0:p>The proposed feature selection technique is the Information Gain (IG) method with a threshold based on the median value, which was calculated using the FFT; the IG transformed with the FFT was used to obtain the real values. The results of the IG with this threshold were compared with the original Random Forest method. For IG with a threshold based on the real median (threshold median real), one dataset has a superior accuracy value and another has the same accuracy value: the Urban Land Cover and Divorce datasets, respectively. Compared with the proposal in the previous study <ns0:ref type='bibr' target='#b21'>(Prasetiyowati et al., 2021)</ns0:ref>, the threshold median real method increases the accuracy on 3 datasets, namely Cancer, Urban Land Cover, and CNAE-9, while the Divorce dataset has the same accuracy value. Furthermore, when the IG threshold median real is compared to the IG threshold median, the IG threshold median real gives a better accuracy value, as can be seen in Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref>: five datasets increased, namely EEG Eye, Cancer, Dermatology, Urban Land Cover, and Epilepsy. The IG threshold median real increased the accuracy value by 0.0071 to 0.0249. The results of the experiments comparing each method are shown in Table <ns0:ref type='table' target='#tab_0'>2</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref> shows that most datasets produce better accuracy using the median threshold with the transformed IG. Only the Contraceptive Method and Divorce datasets experienced a decrease in accuracy. Meanwhile, in terms of required time, the IG with threshold median real is faster than RF and than the IG with threshold median. The result of the comparison can be seen in Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>.</ns0:p><ns0:p>Furthermore, the Confusion Matrix was used to determine each method's Precision, Recall, and F1-Score, as shown in Tables <ns0:ref type='table' target='#tab_6'>3 and 4</ns0:ref>. The displayed Precision, Recall, and F1-Score are cumulative calculations over the 10 seeds given to each dataset. Precision measures the exactness of the classification, Recall measures the sensitivity, and the F1-Score measures the balance between Precision and Recall.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> Precision, Recall and F1-Score on Random Forest, using CBF and IG Threshold of 0.05 Table <ns0:ref type='table'>5</ns0:ref> Precision, Recall and F1-Score on IG Threshold SD, Median and Median -Real</ns0:p><ns0:p>In the next stage, the researchers conducted tests on the imbalanced datasets. There are five imbalanced datasets, namely EEG Eye, Cancer, Contraceptive Method, Dermatology, and Urban Land Cover, and they were balanced using SMOTE. For the EEG Eye, Cancer, and Contraceptive Method datasets, the balancing was carried out once. Meanwhile, for the Dermatology and Urban Land Cover datasets, the balancing was conducted 6 times, as the researchers had proposed, because there was more than one minority class that needed to be balanced up to the majority class. In general, balancing a dataset with SMOTE is conducted repeatedly if there is more than one minority class; the process is repeated until all minority classes are close to the majority count, and a minority class is never raised above the majority class. The results showed that the datasets balanced using SMOTE had better accuracy, as shown in Fig. <ns0:ref type='figure' target='#fig_6'>4 (A, B and C</ns0:ref>), and similarly for the 2 datasets that were balanced more than once, as shown in Fig. <ns0:ref type='figure' target='#fig_7'>5 (A and B</ns0:ref>).</ns0:p></ns0:div>
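One way the repeated SMOTE balancing described above could be coded is sketched below with imbalanced-learn; the toy class sizes are invented, and the loop that raises the currently smallest class to the majority count on each pass is an assumption about how the repetition might be implemented, not the authors' exact procedure.

from collections import Counter
from imblearn.over_sampling import SMOTE
from sklearn.datasets import make_classification

# Toy imbalanced multiclass dataset (three classes of unequal size).
X, y = make_classification(n_samples=600, n_features=10, n_informative=5,
                           n_classes=3, weights=[0.7, 0.2, 0.1], random_state=0)
majority = max(Counter(y).values())

# Repeat SMOTE, one pass per minority class, until every class reaches the majority count.
while min(Counter(y).values()) < majority:
    counts = Counter(y)
    smallest = min(counts, key=counts.get)
    X, y = SMOTE(sampling_strategy={smallest: majority}, random_state=0).fit_resample(X, y)

print("balanced class counts:", Counter(y))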
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>From the eight datasets used here, only the Divorce dataset has the same accuracy value as that one resulted the Random Forest. This accuracy value can be increased by balancing the dataset using the SMOTE. In Figure <ns0:ref type='figure' target='#fig_2'>4</ns0:ref> and 5, it is seen that the dataset that has been balanced using the SMOTE resulted in a superior accuracy value. In part B of Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>, the IG method using the threshold Median Real results in a poor accuracy value when conducting one-time SMOTE, however, the accuracy increases when conducting multiple-time SMOTE. The researchers conducted the multiple-time SMOTE based on the total majority class in the dataset. As long as the total minority class is below the total majority class, the SMOTE will continue to be conducted. In this study, the multiple-time SMOTE for the Dermatology and Urban Land Cover datasets was conducted 6 times. The decreased accuracy value in the SMOTE for the Urban Land Cover dataset is because the data generated by the SMOTE did not meet the characteristics of minority classes. Besides, the total instance for each class is not much different.</ns0:p><ns0:p>Besides conducting the SMOTE, the accuracy value can be increased by using the feature that has been transformed using the FFT. This accuracy increase can be seen in Table <ns0:ref type='table' target='#tab_0'>2</ns0:ref> on the column for the IG threshold Median and IG Threshold Median Real. In the IG threshold median real method, five datasets saw an increase in the accuracy if it was compared with the IG threshold Median method. Those datasets are EEG Eye, Dermatology, Urban Land Cover, and Epilepsy.</ns0:p><ns0:p>From Table <ns0:ref type='table' target='#tab_2'>2 through Table 5</ns0:ref>, the accuracy value and the F1 score for the datasets, such as the Contraceptive Method and the Epilepsy datasets, decrease. The factor is that the total feature used here is less. In the Contraceptive method, the accuracy decreased since the total feature used here was 5 out of 9 existing features. The Epilepsy dataset also used 97 features out of 178 available features. Meanwhile, all datasets available in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref> and 5 are the datasets that have not been processed using the SMOTE. The SMOTE is not required to be conducted in three datasets, namely Divorce, CNAE-9, and Epilepsy, as those three datasets are balanced already. Even though the aspect of accuracy decreases, the aspect of required time for the IG threshold median real method needs less than the Random Forest without feature selection. The time difference between feature selection with the IG threshold median real and the original Random Forest is between 0.03 and 4.85 seconds</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions and Future Work</ns0:head><ns0:p>Based on the testing, it can conclude that the Information Gain (IG) with a threshold median 3 times superior to the accuracy generated by the Random Forest, especially in the data aggregate of Contraceptive Method, Divorce, and CNAE-9. Nevertheless, the accuracy value for the IG with threshold median real is higher than the threshold accuracy value based on the Median score. 5 datasets have an accuracy value higher than that of the IG Threshold Median; those datasets include EEG Eye, Cancer, Dermatology, Urban Land Cover, and Epilepsy. The increase in this accuracy value applies to both the original dataset and the dataset that has been balanced using the SMOTE. It can be inferred that FFT and SMOTE can increase the accuracy value, particularly if the SMOTE is conducted repeatedly according to what has been proposed by the researchers.</ns0:p><ns0:p>Even though the accuracy value in the feature selection with IG threshold median real is less superior to that of the original Random Forest, this method is superior in the aspect of speed. The time required in this method is less than that of the original random Forest. The next study that needs to be considered is the use of multilevel feature selection based on the roadmap that the researcher suggests in Figure <ns0:ref type='figure'>1</ns0:ref>. In addition, the selection of more informative features also needs to be considered. The next study that needs to be considered is the use of the two-level feature selection based on the roadmap that the researcher suggests in Figure <ns0:ref type='figure'>1</ns0:ref>. In addition, the selection of more informative features also needs to be considered. Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 2</ns0:note><ns0:note type='other'>Computer Science Figure 3</ns0:note><ns0:note type='other'>Computer Science Figure 4</ns0:note><ns0:note type='other'>Computer Science Figure 5</ns0:note><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>: 1. Prepares the number of synthetic minority class instances 2. Selects a minority class instance randomly PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68916:1:0:NEW 30 May 2022) Manuscript to be reviewed Computer Science 3. Uses the K-Nearest Neighbor (KNN) algorithm to get associated neighbors from the selected instance 4. Combines minority and selected neighboring class instances to generate new synthesis by random interpolation.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 Flowchart of The Proposed Method</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 Comparison of imbalanced and balanced dataset (SMOTE) Figure 5 The Comparison between one-time SMOTE and multiple-time SMOTE</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 1 Figure 1</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 Flowchart of The Proposed Method.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Comparison of Accuracy of Median Threshold and Median Threshold -Real.</ns0:figDesc><ns0:graphic coords='14,42.52,178.87,525.00,405.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Comparison of imbalanced and balanced dataset (SMOTE)</ns0:figDesc><ns0:graphic coords='15,42.52,178.87,525.00,405.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 Comparison between one-time SMOTE and several-time SMOTE.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Comparison of Accuracy Values Figure 3 Comparison of Accuracy of Median Threshold and Median Threshold -Real</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68916:1:0:NEW 30 May 2022)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 . Dataset Details Dataset Number of Instance Number of Feature Number of Classess Dataset Status</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Area</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Comparison of Accuracy Values</ns0:figDesc><ns0:table><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>RF</ns0:cell><ns0:cell /><ns0:cell>CBF</ns0:cell><ns0:cell /><ns0:cell cols='2'>IG Threshold 0.05</ns0:cell><ns0:cell cols='2'>IG Threshold SD</ns0:cell><ns0:cell>IG Threshold</ns0:cell><ns0:cell /><ns0:cell>IG Threshold</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Median</ns0:cell><ns0:cell /><ns0:cell>Median Real</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Accuracy Num.Of</ns0:cell><ns0:cell cols='2'>Accuracy Num.Of</ns0:cell><ns0:cell cols='2'>Accuracy Num.Of</ns0:cell><ns0:cell cols='2'>Accuracy Num.Of</ns0:cell><ns0:cell cols='2'>Accuracy Num.Of</ns0:cell><ns0:cell>Accuracy Num.Of</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Feature</ns0:cell><ns0:cell cols='2'>Feature</ns0:cell><ns0:cell /><ns0:cell>Feature</ns0:cell><ns0:cell /><ns0:cell>Feature</ns0:cell><ns0:cell cols='2'>Feature</ns0:cell><ns0:cell>Feature</ns0:cell></ns0:row><ns0:row><ns0:cell>EEG Eye</ns0:cell><ns0:cell>0.9351</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>0.7703</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0.6316</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>0.9015</ns0:cell><ns0:cell>10</ns0:cell><ns0:cell>0.8649</ns0:cell><ns0:cell>7</ns0:cell><ns0:cell>0.8890</ns0:cell></ns0:row><ns0:row><ns0:cell>Cancer</ns0:cell><ns0:cell>0.9633</ns0:cell><ns0:cell>31</ns0:cell><ns0:cell>0.9569</ns0:cell><ns0:cell>12</ns0:cell><ns0:cell>0.9663</ns0:cell><ns0:cell>26</ns0:cell><ns0:cell>0.9439</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>0.9452</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>0.9612</ns0:cell></ns0:row><ns0:row><ns0:cell>Contraceptive</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell>0.5230</ns0:cell><ns0:cell>9</ns0:cell><ns0:cell>0.4874</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>0.4874</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>0.5164</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>0.5274</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>0.4966</ns0:cell></ns0:row><ns0:row><ns0:cell>Dermatology</ns0:cell><ns0:cell>0.9701</ns0:cell><ns0:cell>34</ns0:cell><ns0:cell>0.9492</ns0:cell><ns0:cell>15</ns0:cell><ns0:cell>0.9705</ns0:cell><ns0:cell>33</ns0:cell><ns0:cell>0.9743</ns0:cell><ns0:cell>26</ns0:cell><ns0:cell>0.9352</ns0:cell><ns0:cell>17</ns0:cell><ns0:cell>0.9601</ns0:cell></ns0:row><ns0:row><ns0:cell>Urban Land</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell 
/></ns0:row><ns0:row><ns0:cell>Cover</ns0:cell><ns0:cell>0.8536</ns0:cell><ns0:cell>147</ns0:cell><ns0:cell>0.8730</ns0:cell><ns0:cell>28</ns0:cell><ns0:cell>0.8571</ns0:cell><ns0:cell>110</ns0:cell><ns0:cell>0.8476</ns0:cell><ns0:cell>57</ns0:cell><ns0:cell>0.8494</ns0:cell><ns0:cell>74</ns0:cell><ns0:cell>0.8565</ns0:cell></ns0:row><ns0:row><ns0:cell>Divorce</ns0:cell><ns0:cell>0.9765</ns0:cell><ns0:cell>54</ns0:cell><ns0:cell>0.9653</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>0.9765</ns0:cell><ns0:cell>54</ns0:cell><ns0:cell>0.9765</ns0:cell><ns0:cell>52</ns0:cell><ns0:cell>0.9771</ns0:cell><ns0:cell>27</ns0:cell><ns0:cell>0.9765</ns0:cell></ns0:row><ns0:row><ns0:cell>CNAE-9</ns0:cell><ns0:cell>0.9367</ns0:cell><ns0:cell>856</ns0:cell><ns0:cell>0.8118</ns0:cell><ns0:cell>28</ns0:cell><ns0:cell>0.8756</ns0:cell><ns0:cell>57</ns0:cell><ns0:cell>0.8805</ns0:cell><ns0:cell>65</ns0:cell><ns0:cell>0.9367</ns0:cell><ns0:cell>856</ns0:cell><ns0:cell>0.9150</ns0:cell></ns0:row><ns0:row><ns0:cell>Epilepsy</ns0:cell><ns0:cell>0.6973</ns0:cell><ns0:cell>178</ns0:cell><ns0:cell>0.6951</ns0:cell><ns0:cell>119</ns0:cell><ns0:cell>0.6973</ns0:cell><ns0:cell>178</ns0:cell><ns0:cell>0.6973</ns0:cell><ns0:cell>178</ns0:cell><ns0:cell>0.6759</ns0:cell><ns0:cell>97</ns0:cell><ns0:cell>0.6897</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Comparison of Time Values</ns0:figDesc><ns0:table><ns0:row><ns0:cell>2</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Time</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>RF</ns0:cell><ns0:cell>CBF</ns0:cell><ns0:cell>IG Threhold 0.05</ns0:cell><ns0:cell>IG Threshold SD</ns0:cell><ns0:cell>IG Threshold Median</ns0:cell><ns0:cell>IG Threshold Median with Real</ns0:cell></ns0:row><ns0:row><ns0:cell>EEG Eye</ns0:cell><ns0:cell>4.57</ns0:cell><ns0:cell>3.87</ns0:cell><ns0:cell>0.63</ns0:cell><ns0:cell>4.99</ns0:cell><ns0:cell>3.83</ns0:cell><ns0:cell>3.67</ns0:cell></ns0:row><ns0:row><ns0:cell>Cancer</ns0:cell><ns0:cell>0.10</ns0:cell><ns0:cell>0.06</ns0:cell><ns0:cell>0.97</ns0:cell><ns0:cell>0.06</ns0:cell><ns0:cell>0.08</ns0:cell><ns0:cell>0.07</ns0:cell></ns0:row><ns0:row><ns0:cell>Contraceptive Method</ns0:cell><ns0:cell>0.35</ns0:cell><ns0:cell>0.19</ns0:cell><ns0:cell>0.49</ns0:cell><ns0:cell>0.26</ns0:cell><ns0:cell>0.27</ns0:cell><ns0:cell>0.22</ns0:cell></ns0:row><ns0:row><ns0:cell>Dermatology</ns0:cell><ns0:cell>0.07</ns0:cell><ns0:cell>0.04</ns0:cell><ns0:cell>0.05</ns0:cell><ns0:cell>0.04</ns0:cell><ns0:cell>0.05</ns0:cell><ns0:cell>0.05</ns0:cell></ns0:row><ns0:row><ns0:cell>Urban Land Cover</ns0:cell><ns0:cell>0.17</ns0:cell><ns0:cell>0.05</ns0:cell><ns0:cell>0.07</ns0:cell><ns0:cell>0.06</ns0:cell><ns0:cell>0.06</ns0:cell><ns0:cell>0.07</ns0:cell></ns0:row><ns0:row><ns0:cell>Divorce</ns0:cell><ns0:cell>0.02</ns0:cell><ns0:cell>0.01</ns0:cell><ns0:cell>0.02</ns0:cell><ns0:cell>0.02</ns0:cell><ns0:cell>0.01</ns0:cell><ns0:cell>0.02</ns0:cell></ns0:row><ns0:row><ns0:cell>CNAE-9</ns0:cell><ns0:cell>2.19</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.38</ns0:cell><ns0:cell>0.42</ns0:cell><ns0:cell>2.19</ns0:cell><ns0:cell>1.38</ns0:cell></ns0:row><ns0:row><ns0:cell>Epilepsy</ns0:cell><ns0:cell>20.70</ns0:cell><ns0:cell>17.59</ns0:cell><ns0:cell>20.70</ns0:cell><ns0:cell>20.70</ns0:cell><ns0:cell>15.71</ns0:cell><ns0:cell>15.85</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>S</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>T</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68916:1:0:NEW 30 May 2022)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 (on next page)</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Precision, Recall and F1-Score on Random Forest, using CBF and IG Threshold of 0.05</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68916:1:0:NEW 30 May 2022)Manuscript to be reviewed</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Precision, Recall and F1-Score on Random Forest, using CBF and IG Threshold of 0.05</ns0:figDesc><ns0:table><ns0:row><ns0:cell>3</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Ran ¡ Fop¢£¤</ns0:cell><ns0:cell /><ns0:cell>CBF Best Fip£¤</ns0:cell><ns0:cell /><ns0:cell>s¥ Thp¢£ ¦ § 0.05</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell /><ns0:cell>F1</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Pp¢¨£¨ © Re¦¦</ns0:cell><ns0:cell>p¢</ns0:cell><ns0:cell>Pp¢¨£¨ © Re¦¦</ns0:cell><ns0:cell>p¢</ns0:cell><ns0:cell cols='2'>Pp¢¨£¨ © Re¦¦ F1 p¢</ns0:cell></ns0:row><ns0:row><ns0:cell>EEG Eye</ns0:cell><ns0:cell>0.9351 0.9350</ns0:cell><ns0:cell>0.9353</ns0:cell><ns0:cell>0.7699 0.7702</ns0:cell><ns0:cell>0.7700</ns0:cell><ns0:cell>0.6304 0.6317</ns0:cell><ns0:cell>0.6310</ns0:cell></ns0:row><ns0:row><ns0:cell>Cancer</ns0:cell><ns0:cell>0.9634 0.9632</ns0:cell><ns0:cell>0.9633</ns0:cell><ns0:cell>0.9568 0.9568</ns0:cell><ns0:cell>0.9568</ns0:cell><ns0:cell>0.9664 0.9664</ns0:cell><ns0:cell>0.9664</ns0:cell></ns0:row><ns0:row><ns0:cell>Contraceptive Method</ns0:cell><ns0:cell>0.5192 0.5231</ns0:cell><ns0:cell>0.5211</ns0:cell><ns0:cell>0.4873 0.4875</ns0:cell><ns0:cell>0.4874</ns0:cell><ns0:cell>0.4873 0.4875</ns0:cell><ns0:cell>0.4874</ns0:cell></ns0:row><ns0:row><ns0:cell>Dermatology</ns0:cell><ns0:cell>0.9690 0.9691</ns0:cell><ns0:cell>0.9690</ns0:cell><ns0:cell>0.9493 0.9492</ns0:cell><ns0:cell>0.9492</ns0:cell><ns0:cell>0.9702 0.9704</ns0:cell><ns0:cell>0.9703</ns0:cell></ns0:row><ns0:row><ns0:cell>Urban Land Cover</ns0:cell><ns0:cell>0.8587 0.8534</ns0:cell><ns0:cell>0.8560</ns0:cell><ns0:cell>0.8850 0.8809</ns0:cell><ns0:cell>0.8829</ns0:cell><ns0:cell>0.8606 0.8571</ns0:cell><ns0:cell>0.8588</ns0:cell></ns0:row><ns0:row><ns0:cell>Divorce</ns0:cell><ns0:cell>0.9780 0.9760</ns0:cell><ns0:cell>0.9770</ns0:cell><ns0:cell>0.9656 0.9656</ns0:cell><ns0:cell>0.9656</ns0:cell><ns0:cell>0.9780 0.9760</ns0:cell><ns0:cell>0.9770</ns0:cell></ns0:row><ns0:row><ns0:cell>CNAE-9</ns0:cell><ns0:cell>0.9371 0.9366</ns0:cell><ns0:cell>0.9368</ns0:cell><ns0:cell>0.7804 0.8117</ns0:cell><ns0:cell>0.7852</ns0:cell><ns0:cell>0.8860 0.8756</ns0:cell><ns0:cell>0.8808</ns0:cell></ns0:row><ns0:row><ns0:cell>Epilepsy</ns0:cell><ns0:cell>0.6963 0.6972</ns0:cell><ns0:cell>0.6967</ns0:cell><ns0:cell>0.6949 0.6953</ns0:cell><ns0:cell>0.6951</ns0:cell><ns0:cell>0.6963 0.6972</ns0:cell><ns0:cell>0.6967</ns0:cell></ns0:row></ns0:table><ns0:note>4 U PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68916:1:0:NEW 30 May 2022)</ns0:note></ns0:figure>
<ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68916:1:0:NEW 30 May 2022)</ns0:note>
<ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2021:12:68916:1:0:NEW 30 May 2022) Manuscript to be reviewed Computer Science</ns0:note>
</ns0:body>
" | "
Doctoral Program of Electrical and Informatics, School of Electrical Engineering and Informatics, Institut Teknologi Bandung,
Jl. Ganesha 10, Bandung, 40132,
West Java, Indonesia
May 30th, 2022
Dear Editors
We thank the reviewers for their constructive comments on the manuscript, and we have carefully revised the manuscript to address their concerns.
We are uploading (a) our point-by-point response to the comments (response to reviewers), (b) an updated manuscript with red sentences indicating changes, and (c) a clean updated manuscript without highlights.
The affiliation has been changed in accordance with campus policy.
We believe that the manuscript is now suitable for publication in PeerJ Computer Science.
Maria Irmina Prasetiyowati
On behalf of all authors.
Reviewer 1 (Anonymous)
Basic reporting
No comment
Experimental design
No comment
Validity of the findings
No comment
Additional comments
Concern # 1: According to the Abstract and Introduction sections, the purpose of this paper is to improve the accuracy and processing speed of random forest, but the actual research only reflects the accuracy and does not analyze the processing speed improvement.
Author response and action:
We have carefully revised our manuscript in accordance with the suggestions. We added an explanation of the purpose of this study, which is to improve both the accuracy and the processing time of Random Forest. We updated the manuscript by adding the research purpose to the abstract and to the first paragraph of the Introduction section.
Concern # 2: The parameters or hyperparameters of machine learning algorithm also have a great impact on the accuracy of the model. This work mainly studies the influence and improvement methods of feature screening and unbalanced data on the model accuracy of random forest, but does not study the influence of algorithm parameter or hyperparameter optimization methods (such as Bayesian hyperparameter optimization, refer to sun et al. 2020, 2021, Doi: 10.1016/j.geomorph.2020.107201, Doi: 10.1016/j.enggeo.2020.105972), and the present work cannot cover the subject. It is suggested to add 'feature selection' and 'SMOTE' to modify the title.
Author response and action:
We have carefully revised our manuscript according to the suggestions. This research focuses on the influence of feature filtering methods, their enhancement, and imbalanced data on the accuracy of the Random Forest model; our study explores only these aspects.
We have revised the title of this research according to the suggestion to add the phrases 'feature selection' and 'SMOTE'. The new title is:
“The Accuracy of Random Forest performance can be improved by conducting a feature selection with a balancing strategy”
Concern # 3: Does the features selection methods proposed in this work have comparative advantages with the widely used recursive feature elimination (RFE) methods, such as Zhou et al. (2021) used in DOI: 10.1016 / j.gsf. 2021.101211?
Author response and action:
We have carefully revised our manuscript according to the suggestions. This study does not make a comparison with the RFE method as used in the paper of Zhou et al. (2021), because the dataset we used is not just a single one; we used eight different datasets with different characteristics.
We also use the Confusion Matrix to analyze the accuracy of the model's predictions.
Concern # 4: For the Conceptual method and Epilepsy data sets, the model effect is not good with lower accuracy and F1 score, whether it is improved or not. The possible causes should be analyzed in the discussion section.
Author response and action:
We have carefully revised our manuscript according to the suggestions. The datasets in Tables 4 and 5 are compared using data that have not been processed with SMOTE.
For the Contraceptive Method dataset, the accuracy decreased because we used fewer features (5 features) than RF (9 features), and the selected features are not informative enough compared to the other methods.
Likewise, for the Epilepsy dataset, the proposed method used only 97 features, while RF used 178 features. This dataset was not processed with SMOTE because the data are already balanced.
This information has been mentioned in the Results and Discussion section, lines 182 to 189.
Concern # 5: There are meaningless to use broken lines in figure 2-4. It is suggested to change broken lines to histograms.
Author response and action:
According to the suggestions, we have already changed Figures 2-4 into histograms. File names: Figure3_Rev.Doc, Figure4_Rev.Doc, and Figure5_Rev.Doc.
Concern # 6: The number of factors of each data set by using each feature selection method are recommended to be expressed in a table.
Author response and action:
There is no fixed accuracy or time pattern in our recommended method; the resulting values are random. However, we compare the accuracy and timing of RF without feature selection against the proposed feature selection method. The comparison of the features used can be seen in the revised Table 2.
Concern # 7: Compared with table 2, the expression of conclusion a and b is inaccurate and confusing, which is recommended to be modified.
Author response and action:
We have revised the conclusion according to the suggestions.
Concern # 8: Table cs-68916-7_CNAE in the Supplemental Files seems incomplete, missing feature X1.
Author response and action:
X1 is a categorical class, according to the attribute description of the CNAE-9 Data Set at: https://archive.ics.uci.edu/ml/datasets/CNAE-9
We moved the categorical column to the back, as the class column.
Concern # 9: It is strongly recommended to add the content of the Discussion section.
Author response and action:
According to the suggestions, we have added content to the Discussion section. We broke the section down into 2 sub-chapters: “Results Section” and “Discussion Section”.
Concern # 10: See the Attachment PDF File for other writing errors.
Author response and action:
We have revised the errors mentioned in the PDF file according to the suggestions.
Reviewer 2 (Anonymous)
Basic reporting
This paper proposes a method to improve accuracy performance in random forest. Some important comments:
Concern # 1: The main contribution of this paper is not explained, for example the difference with previous studies.
Author response and action:
According to the suggestions, we have added an explanation of the main contribution of the paper in the Introduction section, lines 92 to 119.
Concern # 2: Explanation of the flow of thought on how IG uses the standard deviation (SD) as a threshold, because the standard deviation is a measure of the spread of data, not the concentration of data. Is the purpose of using SD as a threshold described in the paper the same as the research of Sindhu and Radha, 2020?
Author response and action:
The purpose of using SD in this research is the same as in Sindhu's research: to obtain the threshold value. However, Sindhu's research used the SD statistical method to calculate the threshold value of an image in order to binarize the input image (stage II).
Concern # 3: In line 66, it is explained that the IG threshold using the standard deviation is less successful in determining the threshold when considering data balance, then in this study using the median. How to explain the flow of thought?
Author response and action:
We have revised our manuscript according to the suggestions. The meaning of line 66 is that the IG research with the standard deviation threshold had not yet used dataset balancing (SMOTE).
Concern # 4: It is necessary to explain the flow of thought why IG needs to be transformed using FFT, then to determine the threshold only use the real part. What does the real and imaginary part after the FFT process represent?
Author response and action:
We have carefully revised our manuscript according to the suggestions. This idea emerged from the previous study (Prasetiyowati et al., 2020), in which we found that using FFT can increase the accuracy value. We state this in lines 110-112:
“The need to increase accuracy value, prompts us to implement FFT on features. This thing based on from study previously that FFT can increase accuracy value ( Prasetiyowati et al., 2020)”
Experimental design
Concern # 1: In this study using several unbalanced datasets, for performance measurement why only use accuracy, it is better to add others.
Author response and action:
We have carefully revised our manuscript according to the suggestions. We have already added the measurement of the required computation time to the stated research purpose.
Concern # 2: Why for data separation using k-fold cross validation, not using a certain proportion between training and testing?
Author response and action:
The reason for using K-fold cross-validation is that it can be used to evaluate model performance. Moreover, applying 10-fold cross-validation is already a standard and practical validation method; it reduces computing time while maintaining the accuracy of the estimate.
Concern # 3: In Figure 1, a flow chart that explains the SMOTE process for those with more than two classes, needs to be described in more detail, preferably with an example.
Author response and action:
According to the suggestions, we have revised Figure 1 into Figure 2 with an additional image. We have explained SMOTE and its example in lines 172 to 183.
Validity of the findings
Concern # 1: In Figure 4, the term first-SMOTE appears, what does it mean, where is it explained?
Author response and action:
According to the suggestions, we have changed the term “first-SMOTE” to “one-time SMOTE”, which means conducting data balancing on only one minority class. For instance, if we have 3 classes of different sizes (10, 3, 4), then 'one-time SMOTE' will only balance the smallest minority class (3 becomes 10).
Concern # 2: In the results and discussion, the results obtained need to be explained more comprehensively based on the results of previous studies. For example in Figure 4A, why does the dermatology-first smote accuracy decrease sharply using the median threshold.
Author response and action:
We found a mistake in Figure 4A: we forgot to apply SMOTE to the dataset, which caused the sharp decrease compared to the original dataset. The error has been fixed. The drop in the accuracy value has also been discussed in the Discussion section.
Additional comments
Concern # 1: Split data using K-Fold validation instead of K-Folt validation (typo)
Author response and action:
We have revised the typo.
Concern # 2: It should be added in the background, the difference between the current research and previous research conducted by the researcher (it can be described as a kind of research roadmap).
Author response and action:
According to the suggestions, we have added the research stages to the Introduction, lines 92-119.
Concern # 3: In the conclusion section, write a comprehensive conclusion, not just in the form of points without the connection between points.
Author response and action:
According to the suggestions, the Conclusion section has been revised.
" | Here is a paper. Please give your review comments after reading it. |
716 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. The number of applications prepared for use on mobile devices has increased rapidly with the widespread use of the Android OS. This has resulted in the undesired installation of Android application packages (APKs) that violate user privacy or are malicious. The increasing similarity between Android malware and benign applications makes it difficult to distinguish them from each other and is a cause of concern for users. Methods. In this study, FG-Droid, a machine-learning-based classifier that uses a method of grouping the features obtained by static analysis, was proposed. It was developed through experiments with machine learning (ML), DNN, RNN, LSTM, and GRU-based models using the Drebin, Genome, and Arslan datasets. Results. The experimental results revealed that FG-Droid achieved a 97.7% area under the receiver operating characteristic (ROC) curve (AUC) score with a vector including only 11 static features and the ExtraTree algorithm. While reaching a high classification rate, only 0.063 seconds were needed for analysis per application. This means that the proposed feature selection method is faster than all traditional feature selection methods, and FG-Droid is one of the tools to date with the shortest analysis time per application. As a result, an efficient classifier with few features, low analysis time and high classification success was developed thanks to a unique feature grouping method.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Android OS is a mobile platform that was prepared by a group of developers and has dominated the mobile operating system market for many years. According to 2021 statistics, it is used on more than 70% of mobile devices and, together with the iOS operating system, covers 98% of the entire market share <ns0:ref type='bibr'>[53]</ns0:ref>. Accordingly, the number of application downloads worldwide is increasing rapidly, and this trend is expected to continue <ns0:ref type='bibr'>[61]</ns0:ref>. There are many reasons for the widespread use of the Android operating system: it has the appropriate functional infrastructure to access hardware resources, it is a free and open-source platform, and it is equipped with a security framework that relies on the Linux kernel <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>. However, since its security structure is based on the application layer <ns0:ref type='bibr' target='#b2'>[5]</ns0:ref>, these devices become partially or completely exposed to numerous security attacks, making them a routine target <ns0:ref type='bibr' target='#b1'>[4]</ns0:ref>. In addition, the widespread use of mobile devices with the Android OS, their adoption by end users, and the resulting increase in market share have made the platform a target for cyber attackers, especially through web-based and application-layer-based attacks <ns0:ref type='bibr'>[2,</ns0:ref><ns0:ref type='bibr'>3]</ns0:ref>. When malicious applications gain access to users' mobile devices, they can engage in a series of malicious activities, such as obtaining confidential information, seizing more highly authorized user accounts, and misusing the level of access they have obtained <ns0:ref type='bibr' target='#b1'>[4]</ns0:ref>. Google Play has set up a permission-based system to control applications' access to confidential data: considering the resources an application uses, users are asked to grant its permissions and must approve them before installation in order to use the application. However, this pre-approval mechanism does not provide sufficient protection for users, since users accept these permissions without detailed examination. As a result, users accept all conditions in order to have free access to applications that offer the features they demand <ns0:ref type='bibr' target='#b3'>[6]</ns0:ref>. For these reasons, there is a need to work on the security mechanism that Google Play provides for users. Different types of malware detection mechanisms have been proposed to address this need: the signature-based approach <ns0:ref type='bibr' target='#b5'>[7,</ns0:ref><ns0:ref type='bibr' target='#b6'>8]</ns0:ref>, behavior-based detection software <ns0:ref type='bibr' target='#b7'>[9]</ns0:ref><ns0:ref type='bibr' target='#b8'>[10]</ns0:ref><ns0:ref type='bibr' target='#b9'>[11]</ns0:ref>, and machine learning-based approaches <ns0:ref type='bibr' target='#b10'>[12]</ns0:ref><ns0:ref type='bibr' target='#b11'>[13]</ns0:ref><ns0:ref type='bibr' target='#b12'>[14]</ns0:ref>. Among these, machine learning and deep learning architectures have recently gained more popularity, because they make it possible to obtain both a dynamic learning and development process and good results against zero-day attacks.
It is possible to create ML models that learn and develop at the same speed as the cyber attackers who try to overcome the security mechanism with a new technique every day, and thereby to produce promising solutions for the detection of malware <ns0:ref type='bibr' target='#b13'>[15]</ns0:ref>. In machine learning based approaches, there is a dependency on the features used as input, the classifier, and the learning architecture. This study focuses on the size of the feature set and the effect of the features it contains on the performance and efficiency of Android malware detection mechanisms. There are many manual and automatic feature extraction methods. These methods are basically divided into 3 groups: static features, dynamic features, and hybrid features <ns0:ref type='bibr' target='#b14'>[16]</ns0:ref>. Each of these features can be used to detect different malicious activities, and each feature can provide distinctiveness in detecting malicious applications. For this reason, different feature sets can be used in studies in this field, and problem-specific feature vectors can be produced. The main purpose is to provide the fastest model with the highest classification success using the best feature set. The contributions of this work are as follows:</ns0:p><ns0:p>- The feature grouping based Android malware detection tool (FG-Droid) is a machine learning model with low runtime and high efficiency. - The model groups permission-based features with a unique methodology and obtains only 11 static features for each application. - The model achieved 97.7% classification success in the tests and needs only 0.063 seconds of analysis time per application. This value is one of the best among models with similar classification success, and it was obtained without using a GPU. - The model selects fewer features than traditional feature selection methods (chi2, f_class_if, PCA) and requires less processing time while showing higher classification success than them. - As a result, a model with high classification success, low analysis time and few features has been produced thanks to the proposed unique grouping. The continuation of this study is organized as follows: in Section 2, the Android application development infrastructure and similar studies on this subject are analyzed separately for static and dynamic methods. The methodology of FG-Droid is explained in detail in Section 3, and the experimental results are given comparatively in Section 4. In the last part, a general evaluation of the study is made and suggestions are made for future studies. In addition, the FG-Droid permission grouping table is given as Appendix-A before the reference section. made using Hamming distance and an accuracy value of 91% was achieved. The static analysis method has been used in many other studies, such as DroidSieve <ns0:ref type='bibr' target='#b38'>[40]</ns0:ref> and DroidDet <ns0:ref type='bibr' target='#b39'>[41]</ns0:ref>. Although static analysis is advantageous at certain points, it also has some limitations <ns0:ref type='bibr' target='#b31'>[33]</ns0:ref>: it is not possible to observe an application's behavior at runtime, code analysis of complex software takes time, and it may not be possible to extract static features based on the source code in encrypted and obfuscated applications. A summary of all studies using the static analysis methodology used in this study is shown in Table-1.</ns0:p></ns0:div>
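To make the grouping idea concrete, the sketch below collapses an APK's requested permissions into a few coarse group counts; the group names and their members here are hypothetical placeholders, since the actual 11 FG-Droid groups are defined in Appendix-A of the paper and are not reproduced in this excerpt.

# Hypothetical grouping of AndroidManifest permissions into coarse categories.
PERMISSION_GROUPS = {
    "SMS": {"android.permission.SEND_SMS", "android.permission.RECEIVE_SMS"},
    "LOCATION": {"android.permission.ACCESS_FINE_LOCATION",
                 "android.permission.ACCESS_COARSE_LOCATION"},
    "CONTACTS": {"android.permission.READ_CONTACTS", "android.permission.WRITE_CONTACTS"},
    "NETWORK": {"android.permission.INTERNET", "android.permission.ACCESS_NETWORK_STATE"},
}

def group_feature_vector(requested_permissions):
    # Collapse an APK's requested permissions into one numeric value per group.
    requested = set(requested_permissions)
    return [len(requested & members) for members in PERMISSION_GROUPS.values()]

apk_permissions = ["android.permission.INTERNET",          # example manifest extract
                   "android.permission.SEND_SMS",
                   "android.permission.ACCESS_FINE_LOCATION"]
print(dict(zip(PERMISSION_GROUPS, group_feature_vector(apk_permissions))))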
<ns0:div><ns0:head>Dynamic analysis</ns0:head><ns0:p>Dynamic features are extracted as a result of analyzing the situations in which Android applications communicate with the operating systems or the network. System calls and network usage statistics are the most basic features that give an idea about the APKs. In addition, processor and memory space usage data, the status of the services running instantaneously, some statistical data about the device (battery usage, screen-on time, etc.), and information about the addresses reached by the systems calls and network packets are important features for classification. Droidscope <ns0:ref type='bibr' target='#b28'>[30]</ns0:ref> is an efficient and effective dynamic android malware detection tool that works on Android devices by extracting three-layer (hardware, operating system, and dalvik virtual machine) system calls, and performing semantic analysis at the operating system and code level. MADAM <ns0:ref type='bibr' target='#b29'>[31]</ns0:ref> tracks kernel-level system calls and user-level usage statistics and activities to be able to describe and classify the behavior of a mobile application. ANDLANTIS <ns0:ref type='bibr' target='#b33'>[35]</ns0:ref> is a dynamic analysis tool that runs on a sandbox. It aims to detect malware by analyzing system calls, footprints, and running behaviors. It requires 1 h for the analysis of 3000 applications. Chen et al. <ns0:ref type='bibr' target='#b37'>[39]</ns0:ref> proposed a semi-supervised classifier that works using dynamic API usage logs. They made use of both labeled and unlabeled data to obtain application properties. They showed the results comparatively by classifying with SVM and k-nearest neighbor (KNN). TaintDroid <ns0:ref type='bibr' target='#b54'>[62]</ns0:ref> is a dynamic tracer tool. It uses Dalvik virtual machine to do this tracking. It tracks the usage of sensitive resources such as the location, microphone, and camera. While revealing the traces of the applications due to the virtual machine, it does not pose any danger on the real environment. This ensures that data leaks are prevented and malicious software intentions are revealed. Dynamic analysis-based approaches try to understand the intentions of mobile applications by analyzing their behavior during running. For this reason, in some cases, it can show higher recognition success than static analysis. However, in order to understand a suspicious behavior, it must be run at least once and information must be collected during this time. This means both creating a security problem for the device and additional processing time <ns0:ref type='bibr' target='#b30'>[32]</ns0:ref>. The presence of processor and memory limitations of mobile devices complicates the applicability of dynamic analysis. However, running applications on an emulator/sandbox may be sufficient for dynamic analysis. It was accepted that it is possible to gather more information about the application by simulating the so-called user behavior <ns0:ref type='bibr' target='#b32'>[34]</ns0:ref>. Crowdroid <ns0:ref type='bibr' target='#b49'>[56]</ns0:ref> is a behavior-based android malware detection tool that works in client-server architecture. All system calls from the application are collected via the mobile device and sent to a cloud server for analysis. With Kmeans, this data is processed and the application is classified. A single feature can capture certain aspects of an application. 
However, using more than one feature together can be more advantageous for malware detection. Various studies have been conducted with hybrid feature structures using combinations of both static and dynamic features <ns0:ref type='bibr' target='#b34'>[36]</ns0:ref><ns0:ref type='bibr' target='#b35'>[37]</ns0:ref><ns0:ref type='bibr' target='#b36'>[38]</ns0:ref>. ProfileDroid <ns0:ref type='bibr' target='#b41'>[43]</ns0:ref> evaluates both static and dynamic features, such as Android permissions, features obtained from code analysis, and network usage statistics, with the aim of establishing a systematic, cost-effective and consistent model.</ns0:p></ns0:div>
<ns0:div><ns0:head>Hybrid Analysis</ns0:head><ns0:p>The features obtained from static or dynamic analysis alone can capture only one aspect of an application. For this reason, a more accurate analysis is possible when more than one feature group is used together, which makes it possible to detect malware with higher accuracy. NTPDroid <ns0:ref type='bibr' target='#b50'>[57]</ns0:ref> uses a hybrid feature vector that combines permissions and network traffic; the FP-Growth algorithm is used to obtain the permission and traffic patterns frequently shared among applications. Experimental results showed an accuracy value of 94.25%. Arshad <ns0:ref type='bibr' target='#b51'>[58]</ns0:ref> proposed a model for Android malware detection that uses static and dynamic analysis together. The model first extracts the requested permissions, used permissions, application components and suspicious API calls by examining the APK file. Then, the application's network usage statistics and system calls are obtained as dynamic features. These two feature groups are combined into the feature vector of the application and classified with an SVM, and the experimental results showed high malware detection accuracy. OmniDroid <ns0:ref type='bibr' target='#b52'>[59]</ns0:ref> is a hybrid feature vector and machine learning classification tool that combines static and dynamic features based on voting. In that study, the dependencies between static and dynamic features are taken into account, and the features go through an evaluation mechanism when they are combined into a single vector. AAsandbox <ns0:ref type='bibr' target='#b55'>[63]</ns0:ref> offers a two-stage analysis approach: first, an image of the application is taken in offline mode, and then static and dynamic analysis is performed in the sandbox. The entire system is hosted on a cloud server, where suspicious applications are detected. Tong et al. <ns0:ref type='bibr' target='#b56'>[64]</ns0:ref> proposed a different approach as a solution to the long processing time of static analysis and the high resource consumption of dynamic analysis. The model can detect different types of malware much more efficiently than other studies and achieves an average of 90% classification success. In the hybrid analysis literature, there are models that use static and dynamic features together while creating the feature vector, as well as studies in which hybrid classifiers are used to classify a single feature type. Yerima et al. <ns0:ref type='bibr' target='#b53'>[60]</ns0:ref> proposed a composite classification model to increase classification accuracy, in which rule-based, function-based, tree-based, and probability-based classifiers are used together, achieving an average increase of 5% in success.</ns0:p></ns0:div>
<ns0:div><ns0:head>Methodology</ns0:head><ns0:p>In this section, the method used by the FG-Droid tool is explained in detail. The method includes the stages of extracting features, grouping the extracted features, updating the feature values, and testing on a sample dataset.</ns0:p></ns0:div>
<ns0:div><ns0:head>Android Application Structure</ns0:head><ns0:p>Applications prepared for the Android OS are delivered to users as a compressed package with the APK extension. The package includes files such as the source code, manifest.xml, libraries, resources, the DEX file, and properties, as shown in Figure <ns0:ref type='figure' target='#fig_2'>1</ns0:ref>. Applications are developed in Java using the Android SDK. When the source code is completed, it is converted to Dalvik bytecode (DEX) together with the other required files. The manifest file is an XML file that contains basic metadata about the application, links to external files and libraries, and declarations of components such as activities, receivers, and content providers. In addition, the permissions needed to access device resources and the target platform data are included in the manifest. External resources such as videos, sounds, audio, images, and text files used by the application are packaged in the APK file. As a result, a package is prepared that contains all the files necessary to execute the application's functions <ns0:ref type='bibr' target='#b45'>[51]</ns0:ref>. Prepared APK packages are offered to users via the Google Play Store or third-party application distribution platforms. More details on Android application development are provided on the developer page <ns0:ref type='bibr'>[50]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Automatic Feature Extraction and Pre-processing</ns0:head><ns0:p>The Android operating system is application-based. After applications are prepared, they are packaged as ZIP archives with the APK extension. These applications, which have similar file structures, can all run on the same operating system (Android OS). The application package contains folders and files such as androidmanifest.xml, classes.dex, resources.arsc, lib, and res. Androidmanifest.xml is required for an application to run and contains information such as the version, API level, hardware and software requirements, and the permissions declared by the application. This study adopts a static analysis-based feature extraction approach for Android malware detection. A series of steps was applied to extract the permission-based features from the Androidmanifest.xml file, as shown in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>. First, a dataset consisting of benign and malicious software was created. The applications were opened using the Jadx decompiler, which provided access both to the source code and to the Androidmanifest.xml file needed in this study. A list of all Android permissions is required to determine the permissions requested by the applications and to transfer them to the feature vector; this study used a list of 348 permissions from Android 11 (API level 30). The permissions in the manifest file were compared against the permissions in this list and transferred to the feature vector with a value of 1 for used permissions and 0 for unused ones. Thus, a feature vector of 1x349 dimensions was obtained for each application. It should be noted that not every application can be opened cleanly with reverse engineering; in such cases values such as NaN or blanks may appear in the vector. These were checked, and the related applications were filtered out. Labelling the malicious applications posed no problem, but it is critical to determine whether the benign applications are genuinely benign. For this reason, all benign applications were rechecked on VirusTotal. Applications that passed these tests were marked B, and applications from the Drebin and Genome datasets were marked M. After these processes were completed for all applications, the vectors were combined into a single feature matrix of 7266x349 dimensions, saved as a CSV file, and converted into a usable form in the following steps. All of these processes were performed with an executable prepared in C#, which enabled high-speed and efficient feature extraction and preprocessing. Considering the high processing power such a large vector would require in machine learning models, it is then subjected to the feature grouping described in the next section. Since the grouped values can be greater than 1, normalization is performed with the normal distribution, and all values are scaled to the 0-1 range. A minimal sketch of the permission extraction step is given below.</ns0:p></ns0:div>
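The extraction itself was implemented by the authors as a C# executable working on Jadx output; the following is only a minimal Python sketch of the same idea, building the binary permission vector from a decompiled manifest. The file paths and the permission-list file are hypothetical placeholders.

```python
# Minimal sketch (not the authors' C# tool): build a 0/1 permission vector
# from a decompiled AndroidManifest.xml, assuming the manifest was already
# extracted (e.g. with Jadx) and a reference list of 348 permissions exists.
import xml.etree.ElementTree as ET

ANDROID_NS = "{http://schemas.android.com/apk/res/android}"

def manifest_permissions(manifest_path):
    """Return the set of permission names declared in AndroidManifest.xml."""
    root = ET.parse(manifest_path).getroot()
    return {
        elem.get(ANDROID_NS + "name")
        for elem in root.iter("uses-permission")
        if elem.get(ANDROID_NS + "name")
    }

def permission_vector(manifest_path, all_permissions, label):
    """348 binary permission features plus a class label (B/M) -> 349 columns."""
    declared = manifest_permissions(manifest_path)
    row = [1 if perm in declared else 0 for perm in all_permissions]
    row.append(label)          # 'B' for benign, 'M' for malicious
    return row

# Example usage with hypothetical paths:
# all_perms = open("android11_permissions.txt").read().split()
# row = permission_vector("app1/AndroidManifest.xml", all_perms, "B")
```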
<ns0:div><ns0:head>Proposed Feature Grouping and Feature Selection Algorithm</ns0:head><ns0:p>In the Android operating system, the mechanism for accessing certain components or performing certain functions is based on permissions. Android applications request permission to access and read information about the calendar, location, contacts, storage, camera, microphone, and various sensors. For example, it is possible to access contact information on the device with the Android.permission.READ_CONTACT permission. Permissions vary according to the features and functional capacity of the device. In this study, instead of a feature vector containing all permissions used in the Android architecture, a model called FG-Droid has been developed that can achieve high classification success with very fast training and testing times by using fewer features. For this purpose, a series of operations was carried out according to the flow chart shown in Figure <ns0:ref type='figure' target='#fig_4'>-3</ns0:ref>.</ns0:p><ns0:p>In the first stage of the model, a dataset containing 7622 applications was created, the details of which are given in Section 4. The permission-based features of these applications were extracted, and a training and test vector containing 349 features was created. In order to shorten the training and testing times and to classify using less processing time, the proposed algorithm for grouping the features and reducing the dimensionality was applied. Thus, a tool was created that makes it possible to classify with fewer features. The standardization process was performed on the feature vector obtained after this step using the normal distribution. The resulting dataset was divided into two separate parts to be used in the training and testing phases. During the training phase, oversampling was performed using the SVM-synthetic minority over-sampling technique (SMOTE) algorithm to resolve the sample-number imbalance between classes, and training was carried out on deep neural network (DNN), recurrent neural network (RNN), long short-term memory (LSTM), and gated recurrent unit (GRU) networks together with machine learning algorithms (RF, ExtraTree, gradient boosting, etc.). The resulting trained models were tested with the previously separated test data, and the results are reported with commonly used metrics. As can be seen in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>, the proposed feature grouping and selection method is applied right after the application features are extracted. Thus, it is possible to work with a much lower-dimensional feature vector not only in the training and testing phases but in all processing steps. The details of the feature grouping operations are shown in Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>. The groups were determined on the basis of the basic read/write operations (CRUD) in computer systems and of operations specific to mobile devices (Broadcast, Control, Bind). These groups consist of Access (A), Modify (M), Set (S), Update (U), Write (W), Read (R), Get (G), Manage (Mn), Bind (Bd), Broadcast (B) and Control (C). The Android permissions in each group are listed in Appendix A. For the groups defined in Appendix A, all of the features are first scanned and brought together on a group basis. The values of these features are then aggregated within each group, so that the existence of each permission is reflected in the group total.
For example, instead of using 21 different features separately in model training, they were combined under the Access (A) feature, so that 21 permissions are represented by the single Access (A) property. Although there are many permissions/features in the Android operating system, very few of them are used in a typical application, so it is not necessary to treat every feature separately during training and testing. In the Drebin dataset used in this study, the average number of permissions requested per application was 5; similarly, the average numbers of permission requests in the Genome and Arslan datasets were 6 and 5, respectively. In such a case, the conventional feature vector consists of roughly 5 ones and 344 zeros. Instead, all of the permissions were scanned for each application and combined under 11 feature groups after grouping and aggregation. Thus, the process that starts with 349 features produces a feature vector with only 11 features for each application. The feature grouping process was carried out using the code structure given in Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref>. The feature vector of 7622 × 349 dimensions taken as input was converted into a 7622 × 11 dimensional vector, providing much more efficient learning in the training and testing processes. The effect of the obtained vector on the results is shown in detail in Section 4. A minimal sketch of this grouping step is given below.</ns0:p></ns0:div>
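The exact permission-to-group assignment is given in Appendix A and in the pseudo-code of Figure 5; the sketch below is only an illustrative Python approximation that collapses the binary permission columns into the 11 group counts by matching the action keyword in the permission name.

```python
# Minimal sketch of the grouping step: collapse the 0/1 permission columns
# into 11 group counts. The keyword matching below is a simplification;
# the authoritative mapping is the Appendix A table.
import pandas as pd

GROUP_KEYWORDS = {
    "Access": "ACCESS", "Modify": "MODIFY", "Set": "SET", "Update": "UPDATE",
    "Write": "WRITE", "Read": "READ", "Get": "GET", "Manage": "MANAGE",
    "Bind": "BIND", "Broadcast": "BROADCAST", "Control": "CONTROL",
}

def group_features(perm_df: pd.DataFrame) -> pd.DataFrame:
    """perm_df: one 0/1 column per permission name; returns 11 grouped columns."""
    grouped = pd.DataFrame(index=perm_df.index)
    for group, keyword in GROUP_KEYWORDS.items():
        cols = [c for c in perm_df.columns if keyword in c.upper()]
        # Sum of the binary flags: how many permissions of this group the app requests
        grouped[group] = perm_df[cols].sum(axis=1) if cols else 0
    return grouped

# Example: a 7622 x 348 permission matrix becomes a 7622 x 11 matrix
# grouped = group_features(permissions)   # 'permissions' is a hypothetical DataFrame
```

Because the group value is the sum of the binary flags, no individual permission is discarded; it simply contributes to its group total, which is then normalized as described above.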
<ns0:div><ns0:head>Results</ns0:head><ns0:p>The performance of the proposed model is evaluated in terms of 1) the classification success and 2) the time needed for training and prediction. A binary classification between benign and malicious applications was performed.</ns0:p><ns0:p>The tests were performed separately on machine learning techniques (RF, DT, LDA, etc.) and on DNN, RNN, LSTM, and GRU networks, and the results are given in detail. In this way, the effect of the feature grouping and reduction process used by the FG-Droid application on the results is shown.</ns0:p></ns0:div>
<ns0:div><ns0:head>Evaluation Metrics and Experimental setup</ns0:head><ns0:p>The experimental environment and the evaluation criteria are very important for analyzing the performance of a machine-learning model. For the experiment, 70% of the data was split for training and 30% for testing. The area under the receiver operating characteristic (ROC) curve (AUC), precision, recall, f-score, and accuracy values were calculated to evaluate the performance of the proposed model. These metrics are calculated as follows:</ns0:p><ns0:formula xml:id='formula_0'>Precision = \frac{TP}{TP + FP} (1) \quad Recall = \frac{TP}{TP + FN} (2) \quad F\text{-}score = \frac{2 \times Precision \times Recall}{Precision + Recall} (3) \quad Accuracy = \frac{TP + TN}{TP + TN + FP + FN} (4)</ns0:formula><ns0:p>TP is the number of truly malicious samples among those predicted as malware, and FP is the number of samples predicted as malware that are not truly malicious. FN is the number of samples predicted as benign that are not truly benign, and TN is the number of truly benign samples among those predicted as benign. The computer system architecture used in the development of the FG-Droid tool is shown in Table <ns0:ref type='table'>2</ns0:ref>; it is very important for the calculations regarding training and test times, and all comparative results were obtained using the same infrastructure. A sketch of how these metrics can be computed is given below.</ns0:p></ns0:div>
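As a minimal sketch, assuming scikit-learn is used, the metrics of Eqs. (1)–(4) plus AUC can be computed as follows; y_test, y_pred and y_score are placeholders for the true labels, predicted classes and predicted probabilities of the positive (malware) class.

```python
# Sketch of computing Eqs. (1)-(4) and AUC with scikit-learn.
from sklearn.metrics import (precision_score, recall_score, f1_score,
                             accuracy_score, roc_auc_score)

def report(y_test, y_pred, y_score):
    return {
        "precision": precision_score(y_test, y_pred),   # Eq. (1)
        "recall":    recall_score(y_test, y_pred),      # Eq. (2)
        "f_score":   f1_score(y_test, y_pred),          # Eq. (3)
        "accuracy":  accuracy_score(y_test, y_pred),    # Eq. (4)
        "auc":       roc_auc_score(y_test, y_score),
    }
```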
<ns0:div><ns0:head>Dataset</ns0:head><ns0:p>In studies on Android malware detection, it is often not possible to obtain sufficiently large, homogeneously distributed, and reliable datasets. In this study, the Drebin [44] and Genome [45] datasets were used for malicious applications. The Drebin dataset contains 5560 samples from 179 different application families; a malicious dataset with 6660 samples was created by adding 1000 samples from the Genome dataset. The Arslan <ns0:ref type='bibr' target='#b46'>[52]</ns0:ref> dataset was used for benign applications: it consists of the applications with the highest numbers of downloads in various categories of the Google Play Store, which were subjected to security tests on virustotal.com <ns0:ref type='bibr'>[46]</ns0:ref>, resulting in 960 applications that passed the test. During the labeling phase, Drebin and Genome are distributed already labeled, so no action was taken on the malicious samples; the benign dataset, however, was labeled by us for use in this study. As a result, a dataset containing real-world malicious and benign applications that does not introduce noise was created.</ns0:p></ns0:div>
<ns0:div><ns0:head>Hyper-parameter tuning for best machine learning models</ns0:head><ns0:p>In order to achieve successful classification performance in a test environment where each sample is represented by 11 features, 10 different machine learning techniques and DNN, RNN, LSTM, and GRU networks were designed and tested. At the end of these processes, hyper-parameter tuning was performed for all classifiers. Both GridSearch and RandomSearch algorithms were used to select the best parameters; the search ranges and the selected parameters are shown in Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref>. Thus, it was possible to obtain the best rates for all classifiers in the tests. A sketch of the tuning setup for the Extra Trees classifier is given below.</ns0:p></ns0:div>
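The following is a hedged sketch, assuming scikit-learn, of how the grid search for the Extra Trees classifier can be set up; the grid below is a discretized subset of the ranges reported in Table 3, and X_train/y_train are placeholders for the grouped 11-feature training data.

```python
# Sketch of the Extra Trees grid search over (a subset of) the Table 3 ranges.
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    "n_estimators": [200, 600, 1000, 1400, 1800],
    "max_features": ["sqrt", "log2"],
    "max_depth": [10, 30, 50, 70, 110],
    "min_samples_split": [2, 5, 10],
}

search = GridSearchCV(ExtraTreesClassifier(random_state=42),
                      param_grid, scoring="roc_auc", cv=5, n_jobs=-1)
# search.fit(X_train, y_train)
# print(search.best_params_)  # Table 3 reports n_estimators=1800, max_depth=30, etc.
```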
<ns0:div><ns0:head>Results for Machine Learning</ns0:head><ns0:p>After applying the feature-grouping algorithm used in the development of FG-Droid, the results obtained in the tests with machine-learning techniques are shown in Table <ns0:ref type='table' target='#tab_6'>4</ns0:ref> for 10 different classifiers.</ns0:p><ns0:p>As can be seen in Table <ns0:ref type='table' target='#tab_6'>4</ns0:ref>, the accuracy rate is 90% or above for all but two algorithms, and the highest accuracy was 92.5%. A precision of 94.4% was obtained, while recall and f-score were around 92%. A very high value of 97.9% was achieved in the AUC score, which indicates the classification success for both classes. This shows that, in permission-based Android malware detection, it is not necessary to consider each permission as a separate feature, nor to simply discard the less useful ones: a 97.9% AUC was reached using only 11 features per application. The effect of FG-Droid comes not from feature selection but from grouping the features so that each permission is still represented within its group. In this way, the discriminative power of individual features is not lost, the number of features is greatly reduced, and the contribution of many features to the classification is preserved within the groups. To validate the results, cross-validation was performed using the 'RepeatedStratifiedKFold' function with n_splits, n_repeats and random_state set to 10, 3, and 123, respectively; a sketch of this setup is given below. The results obtained in the repeated tests for all classifiers are shown in Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref>. Accordingly, the XGB, ET, RF and DT algorithms are more successful than the other classifiers in terms of the average classification rate.</ns0:p></ns0:div>
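A minimal sketch of the repeated stratified cross-validation described above is given here (scikit-learn assumed); X and y denote the grouped features and labels, and the classifier parameters are the Table 3 selections.

```python
# Sketch of the RepeatedStratifiedKFold evaluation (n_splits=10, n_repeats=3,
# random_state=123) applied to one of the classifiers.
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=123)
clf = ExtraTreesClassifier(n_estimators=1800, max_depth=30, random_state=42)
# scores = cross_val_score(clf, X, y, scoring="accuracy", cv=cv, n_jobs=-1)
# print(scores.mean(), scores.std())  # per-fold scores feed the boxplots of Figure 6
```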
<ns0:div><ns0:head>Results for Deep Learning Models</ns0:head><ns0:p>During the development of FG-Droid, tests were also carried out using deep learning models. As stated before, the number of features decreases considerably with the proposed grouping algorithm. The effect of the algorithm on deep learning models, which by their nature need more features and more training examples, is therefore important, and these tests were carried out to observe whether the group-based feature vector would have a negative effect on the classification performance of deep learning models. As can be seen in Table <ns0:ref type='table' target='#tab_11'>5</ns0:ref>, a lower classification success was achieved compared to the machine learning models, due to the reduction in the number of features; this decrease occurred for all of the metrics. The highest accuracy, 92.2%, was achieved with the LSTM (100,100) model, whose precision, recall, and f-score values were 94.4%, 91.6% and 93.9%, respectively. The highest AUC score, in which both classes are evaluated together, was 92.5%. The learning curves for the models with the highest classification rate for DNN, RNN, GRU, and LSTM are shown in Figure <ns0:ref type='figure' target='#fig_8'>7</ns0:ref>. For all of the models, learning took place very quickly, with training reaching its peak at around 20 epochs; the test curve ran parallel to the training curve, but the learning rate slowed down after 20 epochs. This slowdown is thought to be due to the need for more data to continue learning, and repeating the tests with a larger dataset should allow FG-Droid to achieve a higher AUC. A minimal sketch of one of these architectures is given below.</ns0:p></ns0:div>
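As a hedged sketch, assuming Keras/TensorFlow, a two-layer LSTM with 100 units per layer for binary classification could look as follows; feeding the 11 grouped features as a length-11 sequence with one value per timestep is an assumption made only for illustration.

```python
# Minimal sketch of an LSTM(100,100) binary classifier over the 11 grouped features.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

model = Sequential([
    LSTM(100, return_sequences=True, input_shape=(11, 1)),
    LSTM(100),
    Dense(1, activation="sigmoid"),   # benign/malicious output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(X_train.reshape(-1, 11, 1), y_train, epochs=20,
#           validation_data=(X_test.reshape(-1, 11, 1), y_test))
```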
<ns0:div><ns0:head>Comparison Results and Best Classifier Results Details</ns0:head><ns0:p>During the development of FG-Droid, the most successful results with the proposed algorithm were obtained with the machine learning models, which generally showed high success. In order to evaluate the results across all classifiers, ROC curves were produced as shown in Figure <ns0:ref type='figure' target='#fig_9'>-8</ns0:ref>; a sketch of how such curves can be generated is given below. Accordingly, AUC values of 97.0% and above were obtained with the Random Forest, ET, DT, and XGBoost algorithms. Obtaining high classification values with different classifiers strengthens the generality of the proposed feature grouping approach. The highest value, 97.7%, was obtained with the ET classifier, which represents a high malware detection capability; achieving such classification success with only 11 features is valuable.</ns0:p></ns0:div>
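The per-classifier ROC curves summarized in Figure 8 can be produced with a sketch like the following (matplotlib and scikit-learn assumed); 'models' is a hypothetical dictionary mapping classifier names to fitted estimators.

```python
# Sketch of plotting one ROC curve per fitted classifier on the held-out test set.
import matplotlib.pyplot as plt
from sklearn.metrics import RocCurveDisplay

def plot_rocs(models, X_test, y_test):
    ax = plt.gca()
    for name, clf in models.items():
        RocCurveDisplay.from_estimator(clf, X_test, y_test, name=name, ax=ax)
    plt.plot([0, 1], [0, 1], "k--", label="chance")
    plt.legend()
    plt.show()
```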
<ns0:div><ns0:head>The effect of proposed feature grouping on learning and testing time</ns0:head><ns0:p>The proposed feature grouping-based algorithm performs a very large reduction of the feature vector, from 349 features to 11 features. Thus, the amount of data is reduced by a factor of about 30 and a much simpler feature vector is obtained. This has a serious impact on the training and test durations as well as on the classification result. The hardware limitations of mobile devices and the need for fast and efficient tools are further reasons for choosing FG-Droid. The training and testing times required for the original feature vector, for well-known feature selection methods (ExtraTree, RandomForest, chi2, f_classif, f_regression, and PCA), and for the proposed feature grouping method are shown in Table <ns0:ref type='table' target='#tab_15'>6</ns0:ref>. The FG-Droid tool reached a 97.7% AUC with only 11 features, whereas the minimum number of selected features required to obtain similar AUC values was 35. The increase in the number of selected features has a serious effect on both the training and prediction times. FG-Droid was on average 700% faster in training than the model without any feature selection, and approximately 80% faster in prediction. Among the traditional feature selection methods, the best results were obtained with chi2, and the proposed model was 45% faster in training and 23% faster in testing than the chi2 method. As a result, by grouping the features with FG-Droid, the effect of the features on the classification is not lost and an efficient classification is achieved; a sketch of the timing comparison is given below.</ns0:p></ns0:div>
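The timing comparison of Table 6 can be reproduced with a sketch like the following (scikit-learn assumed); the variable names X_raw_train, X_grouped_train, etc. are placeholders, and the absolute numbers naturally depend on the hardware of Table 2.

```python
# Sketch of timing training/prediction on chi2-selected features versus the
# 11 grouped features, as in Table 6.
import time
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectKBest, chi2

def timed_fit_predict(X_tr, y_tr, X_te):
    clf = ExtraTreesClassifier(n_estimators=1800, random_state=42)
    t0 = time.perf_counter(); clf.fit(X_tr, y_tr); train_t = time.perf_counter() - t0
    t0 = time.perf_counter(); clf.predict(X_te);  pred_t = time.perf_counter() - t0
    return train_t, pred_t

# chi2-selected features (e.g. k=35, fitted on the training split only)
# versus the 11 grouped features:
# sel = SelectKBest(chi2, k=35).fit(X_raw_train, y_train)
# print(timed_fit_predict(sel.transform(X_raw_train), y_train, sel.transform(X_raw_test)))
# print(timed_fit_predict(X_grouped_train, y_train, X_grouped_test))
```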
<ns0:div><ns0:head>Comparison with similar works and discussion</ns0:head><ns0:p>A great deal of work has been done in recent years on Android malware detection. These studies basically use static analysis, dynamic analysis, or hybrid feature extraction methods. While some of the extracted features contribute positively to classification performance, some may have no effect at all, and some may even degrade it. For this reason, it is beneficial to determine the features that contribute positively to the result and to remove the others from the feature set. In this study, a feature grouping method was proposed for using the features extracted by static analysis in the classification. FG-Droid uses the feature vector consisting of grouped features and performs classification with the ExtraTree algorithm. Instead of selecting features and removing them from the feature vector, the approach of evaluating these features within groups was adopted. The Drebin malicious dataset, which has been widely used for many years, was used in the tests of FG-Droid. The results of recent studies and the test results of FG-Droid are shown in Table <ns0:ref type='table' target='#tab_18'>7</ns0:ref>.</ns0:p><ns0:p>When the studies are examined, it can be seen that Drebin is widely used as a dataset, while some studies work with Androzoo or their own datasets. Permissions have generally been the most widely used feature in static analysis-based methodologies, and these features were used in the training and testing processes of various classifiers. Classification successes vary between 91.0% and 99.0%, and the best reported analysis time per application is 0.008 seconds. When the studies are compared, the tool that combines high classification success with one of the best analysis times per application is the FG-Droid proposed in this study: in the 2021 study with an analysis time of 0.008 seconds, the classification performance remained at the level of 91.0%. The analysis time of FG-Droid, 0.063 seconds, shows that it is a very efficient model. The most important factor behind this efficiency is, of course, working with a low number of features; FG-Droid works with a lower feature count than all the studies given in the table. In almost all studies, feature selection is avoided and processing time is sacrificed in order to achieve high classification success; however, the limited resources of mobile devices require efficiency to be taken into account. In this study, the feature selection problem was handled from a different perspective: instead of selecting features, a joint evaluation approach was adopted.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>The increase in mobile devices using the Android operating system has made them a target of cyber attackers. New types of malware emerge every day, and new methods have been proposed as a precaution. FG-Droid uses a permission grouping-based approach to Android malware analysis and achieves an AUC of 97.7% with 11 features for binary classification. Using the newly proposed algorithm, 349 features extracted from Android applications were grouped and reduced to 11 features, yielding a much more efficient feature vector. The Drebin and Genome malware datasets were used to observe the effect of the model. Together with its classification success, the shortened training and prediction times make it a very efficient model; the analysis time per application is just 0.063 seconds, one of the best analysis times reported. In the future, tests will be performed with datasets containing more samples to further increase the classification success, and since a very fast method has been developed, the aim is to present it through a platform that will serve online. As a result, FG-Droid is expected to contribute positively to the security of Android smart devices.</ns0:p><ns0:note type='other'>Figure 1</ns0:note>
<ns0:note type='other'>Figure 2</ns0:note>
<ns0:note type='other'>Figure 3</ns0:note>
<ns0:note type='other'>Figure 4</ns0:note>
<ns0:note type='other'>Figure 5</ns0:note>
<ns0:note type='other'>Figure 6</ns0:note>
<ns0:note type='other'>Figure 7</ns0:note>
<ns0:note type='other'>Figure 8</ns0:note></ns0:div>
<ns0:div><ns0:head>Table 3 (hyper-parameter tuning ranges and selected values)</ns0:head><ns0:p>KNeighbors: n_neighbors (1,20,1), p (1,5,1), weights ['uniform', 'distance']; selected: n_neighbors=5, p=3, weights='uniform'. SVC: C [0.1, 1, 10, 100, 1000], kernel ['rbf', 'linear']; selected: C=1, kernel='rbf'. GradientBoosting: loss ['log_loss', 'deviance', 'exponential'], learning_rate [0.01, 0.025, 0.2]; selected: loss='log_loss', learning_rate=1.0. Random Forest: n_estimators [100, 200, 800, 2000], criterion ['gini', 'entropy', 'log_loss']; selected: n_estimators=100, criterion='gini'. XG Boost: max_depth [3, 5], learning_rate [0.01, 0.1, 1, 10]; selected: max_depth=5, learning_rate=1.0. Extra Tree: n_estimators [200-2000], max_features ['auto', 'sqrt', 'log2'], max_depth [10-110], min_samples_split [2, 5, 10]; selected: n_estimators=1800, max_features='sqrt', max_depth=30, min_samples_split=10. Ada Boost: learning_rate [0.01, 0.1, 1, 10], n_estimators [50, 500, 2000]; selected: learning_rate=1, n_estimators=50. Decision Tree: criterion ['gini', 'entropy']; selected: criterion='gini'. Logistic Regression: C [1e-03 to 1e+03], penalty ['l1', 'l2']; selected: C=1.0, penalty='l2'. Linear Discriminant Analysis: solver ['svd', 'lsqr', 'eigen']; selected: solver='svd'.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2022:05:73473:1:1:NEW 23 Jun 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure1Structure of Android Apk<ns0:ref type='bibr' target='#b45'>[51]</ns0:ref> </ns0:figDesc><ns0:graphic coords='17,42.52,178.87,525.00,181.50' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2 Automatic Feature Extraction and Pre-processing flowchart</ns0:figDesc><ns0:graphic coords='18,42.52,178.87,525.00,300.75' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 Flow-chart of proposed model</ns0:figDesc><ns0:graphic coords='19,42.52,178.87,525.00,348.00' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4 Proposed Feature Grouping and Selection Algorithm</ns0:figDesc><ns0:graphic coords='20,42.52,178.87,525.00,253.50' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5 Pseudo-code of feature grouping and selection algorithm</ns0:figDesc><ns0:graphic coords='21,42.52,178.87,525.00,203.25' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6 Cross-validation graph for all ML classifiers</ns0:figDesc><ns0:graphic coords='22,42.52,178.87,525.00,215.25' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7 Learning Curves of Deep Learning Models</ns0:figDesc><ns0:graphic coords='23,42.52,178.87,525.00,465.75' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 8 ROC</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8 ROC Curve of all ML Algorithms with Feature Grouping</ns0:figDesc><ns0:graphic coords='24,42.52,178.87,525.00,278.25' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 (on next page)</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Summary of current works on static analysis-based Android malware detection</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Summary of current works on static analysis-based Android malware detection</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 1 Summary of current works on static analysis-based Android malware detection</ns0:head><ns0:table><ns0:row><ns0:cell>Ref</ns0:cell><ns0:cell>Dataset</ns0:cell><ns0:cell>Feature Extraction</ns0:cell><ns0:cell>Classification</ns0:cell><ns0:cell>Classification Rate</ns0:cell><ns0:cell>Sec. for Identification each app</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 Details of the hyper-parameters of all classifiers</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table /><ns0:note>Classifier, Hyper-parameter</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Tuning Range Selected Values</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>n_neighbors (1,20,1)</ns0:cell></ns0:row><ns0:row><ns0:cell>p (1,5,1)</ns0:cell></ns0:row><ns0:row><ns0:cell>KNeighbors</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 (on next page)</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 4 ML</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Classification Results with Proposed Feature Grouping Algorithm</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 4 ML</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Classification Results with Proposed Feature Grouping Algorithm</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2022:05:73473:1:1:NEW 23 Jun 2022)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 4 ML</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Classification Results with Proposed Feature Grouping Algorithm</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 4 ML Classification Results with Proposed Feature Grouping Algorithm</ns0:head><ns0:label /><ns0:figDesc>ML Classifier Algorithm | AUC Score | Precision | Recall | F-score | Accuracy
KNeighbors | 0.965 | 0.905 | 0.913 | 0.909 | 0.910
SVC | 0.950 | 0.844 | 0.931 | 0.885 | 0.881
GradientBoosting | 0.965 | 0.928 | 0.899 | 0.913 | 0.916
Random Forest | 0.976 | 0.942 | 0.904 | 0.923 | 0.925
XG Boost | 0.976 | 0.941 | 0.905 | 0.923 | 0.925
Extra Tree | 0.979 | 0.944 | 0.909 | 0.926 | 0.929
Ada Boost | 0.955 | 0.917 | 0.872 | 0.894 | 0.898
Decision Tree | 0.970 | 0.939 | 0.901 | 0.920 | 0.922
Logistic Regression | 0.881 | 0.790 | 0.816 | 0.803 | 0.803
Linear Discriminant | 0.856 | 0.769 | 0.814 | 0.791 | 0.788</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 5 (on next page)</ns0:head><ns0:label>5</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Deep Learning Models Classification Results with Proposed Feature Grouping Algorithm</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 5</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Deep Learning Models Classification Results with Proposed Feature Grouping Algorithm</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2022:05:73473:1:1:NEW 23 Jun 2022)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_14'><ns0:head>Table 5 Deep Learning Models Classification Results with Proposed Feature Grouping Algorithm</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Deep Learning Models | AUC Score | Precision | Recall | F-score | Accuracy | # of processed parameters
DNN(30,30) | 0.918 | 0.936 | 0.906 | 0.921 | 0.925 | 1352
DNN(30,30,30) | 0.920 | 0.927 | 0.916 | 0.921 | 0.921 | 2282
DNN(30,30,30,30) | 0.935 | 0.942 | 0.908 | 0.925 | 0.930 | 3212
DNN(100,100,100) | 0.925 | 0.944 | 0.891 | 0.917 | 0.924 | 31702
DNN(300,300,300) | 0.928 | 0.940 | 0.904 | 0.921 | 0.926 | 275102
RNN(10,10) | 0.900 | 0.910 | 0.910 | 0.920 | 0.902 | 242
RNN(30,30) | 0.916 | 0.928 | 0.891 | 0.939 | 0.918 | 1322
RNN(100,100) | 0.867 | 0.899 | 0.830 | 0.863 | 0.875 | 11402
RNN(300,300) | 0.887 | 0.905 | 0.865 | 0.885 | 0.895 | 94202
GRU(10,10) | 0.910 | 0.918 | 0.897 | 0.907 | 0.908 | 712
GRU(30,30,30) | 0.902 | 0.915 | 0.887 | 0.901 | 0.905 | 3932
GRU(100,100) | 0.915 | 0.836 | 0.891 | 0.913 | 0.913 | 34102
GRU(300,300) | 0.914 | 0.934 | 0.893 | 0.913 | 0.915 | 282302
LSTM(10,10) | 0.920 | 0.939 | 0.891 | 0.914 | 0.918 | 902
LSTM(30,30) | 0.909 | 0.890 | 0.908 | 0.908 | 0.910 | 5102
LSTM(100,100) | 0.916 | 0.940 | 0.890 | 0.914 | 0.920 | 45002
LSTM(300,300) | 0.916 | 0.937 | 0.892 | 0.914 | 0.920 | 375002</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_15'><ns0:head>Table 6 (on next page)</ns0:head><ns0:label>6</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_16'><ns0:head>Table 6</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Training and testing time with different feature selection algorithms and proposed model</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_17'><ns0:head>Table 6 Training and testing time with different feature selection algorithms and the proposed model</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Feature Selection Method | Selected Feature Size | Training time | Prediction time</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_18'><ns0:head>Table 7 (on next page)</ns0:head><ns0:label>7</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_19'><ns0:head>Table 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Similar works with proposed model</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_20'><ns0:head>Table 7</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Similar works with proposed model</ns0:figDesc><ns0:table /><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2022:05:73473:1:1:NEW 23 Jun 2022)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_22'><ns0:head>Table 7 Similar works compared with the proposed model</ns0:head><ns0:figDesc>Paper | Dataset | Feature Extraction | Feature selection or grouping algorithms and classification methods | Classification Performance | Sec. for identification of each app
Mat et al. [6] | Androzoo, Drebin | Permissions | Naive Bayesian | 91.10% | -
Arif et al. [47] | Androzoo, Drebin | Permissions | MCDM | 90.54% (4 classification levels) | -
Millar et al. [48] | Drebin, Genome | Opcode and permissions | No feature selection, classification with multi-view deep learning | 91.00% | 0.008
et al. [49] | Drebin | Text se of apps | No feature selection. Classification with Text CNN | 95.20% | 0.28
Arp [55] | Drebin | Used permissions, sys. Api calls, network address | Machine learning | 94% | 10
Anastasia [25] | Own dataset | Api calls, network address | ML (NB, RF, KNN) | 96% | 0.29
MamaDroid [28] | Drebin | Api calls, call graphs | SVM, RF, 1-NN, 3-NN | 87% | 0.7 ±1.5
Taheri [42] | Drebin, Genome | Api calls, intents, permissions (21492 features) | FNN, ANN, WANN, KMNN | 90%-99% | Very high
Apkauditor [54] | Own dataset | Permissions, services, receivers | Signature based | 92.5% | -
Syrris et al. [24] | Drebin | Static features | ML (6 classifiers) | 99% | -
Droidmat [27] | Own dataset | Intents, Api calls | Signature based | 91.83% | -
Alazab [23] | Own dataset | Api calls | ML (RF, J48, KNN, NB) | 94.30% | 0.2 - 0.92
Pektaş [21] | Drebin, AMD, Androzoo | Api calls | SDNE (DNN model) | 98.5% | -
Saleh [18] | Own dataset | Activities, services, receivers, providers, permissions | RF | 97.1% | -
Janani [19] | Androzoo | Permissions (113) -> PCA (10) | DT with PCA feature selection | 94.3% | -
Proposed Model (FG-Droid) | Drebin, Genome, Arslan | Permission groups | ML, DNN | 97.7% | 0.063</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "RESPONSE TO THE REVIEWER AND EDITOR COMMENTS
The authors thank the Editors and Reviewers for their valuable comments and for their voluntary work to improve the quality of the proposed manuscript.
In line with the comments of the editor and reviewers, the necessary revisions to the manuscript have been made and are briefly explained here.
EDITOR COMMENTS 1
The detailed comments are as follows, please carefully address all the comments before re-submission.
[# PeerJ Staff Note: The review process has identified that the English language must be improved. PeerJ can provide language editing services - please contact us at [email protected] for pricing (be sure to provide your manuscript number and title) #]
Response to the Editor Comments 1
The authors thank the editor for this comment. In response to the criticisms from the editors and referees, the whole study was re-read from beginning to end. Native English speakers then found and corrected typographical errors and mistakes in grammar, style, and spelling; approximately 1000 different issues were corrected.
We could not list all of the fixes here because there are too many, but they are shown in the tracked-changes version.
REVIEWER #1 COMMENTS 1 – Basic Reporting
The English language used in the paper is not up to publishable academic standards. The abstract is written in the past tense. Apart from numerous grammatical mistakes, there are many spelling mistakes in the paper. What makes it worst, the text written from lines 174 to 177 is in some other language (possibly Turkish).
The above-mentioned mistakes prove that the author did not bother to proofread the paper before submission. This is unacceptable.
Response to the Reviewer #1 Comments 1
The authors thank the reviewer for this comment. In response to the criticisms from the editors and referees, the whole study was re-read from beginning to end. Native English speakers then found and corrected typographical errors and mistakes in grammar, style, and spelling; approximately 1000 different issues were corrected.
We could not list all of the fixes here because there are too many, but they are shown in the tracked-changes version.
REVIEWER #1 COMMENTS 2 – Basic Reporting
The literature review skeleton is good. However, a lot of language mistakes make it difficult to follow. Moreover, a comparison table of the reviewed papers should be included.
Response to the Reviewer #1 Comments 2
The authors thank the reviewer for this comment. In response to the criticisms from the editors and referees, the whole study was re-read from beginning to end. Native English speakers then found and corrected typographical errors and mistakes in grammar, style, and spelling; approximately 1000 different issues were corrected.
A detailed table containing static analysis-based studies has been prepared and added to the related works section.
Table 1 Summary of current works on static analysis based in Android malware detection
Ref | Dataset | Feature Extraction | Classification | Classification Rate | Sec. for Identification each app
Arp [55] | Drebin | Used permissions, sys. Api calls, network address | Machine learning | 94% | 10
Anastasia [25] | Own dataset | Api calls, network address | ML (NB, RF, KNN) | 96% | 0.29
MamaDroid [28] | Drebin | Api calls, call graphs | SVM, RF, 1-NN, 3-NN | 87% | 0.7 ±1.5
Taheri [42] | Drebin, Genome | Api calls, intents, permissions (21492 features) | FNN, ANN, WANN, KMNN | 90%-99% | Very high
Apkauditor [54] | Own dataset | Permissions, services, receivers | Signature based | 92.5% | -
Syrris et al. [24] | Drebin | Static features | ML (6 classifiers) | 99% | -
Droidmat [27] | Own dataset | Intents, Api calls | Signature based | 91.83% | -
Alazab [23] | Own dataset | Api calls | ML (RF, J48, KNN, NB) | 94.30% | 0.2 - 0.92
Pektaş [21] | Drebin, AMD, Androzoo | Api calls | SDNE (DNN model) | 98.5% | -
Saleh [18] | Own dataset | Activities, services, receivers, providers, permissions | RF | 97.1% | -
Janani [19] | Androzoo | Permissions (113) -> PCA (10) | DT with PCA feature selection | 94.3% | -
Proposed Model | Drebin, Genome, Arslan | Permission groups | ML, DNN | 97.7% | 0.063
REVIEWER #1 COMMENTS 3 – Basic Reporting
Figure 5 (Pseudo Code): It appears that the Pseudocode is generated through some automated tool. Convert it to write it in standard algorithm notations. The current form is unacceptable.
Response to the Reviewer #1 Comments 3
The pseudocode showing the algorithm of the original methodology proposed in this study has been rewritten considering the criticism.
Figure 5 Pseudo-code of feature grouping and selection algorithm_updated
REVIEWER #1 COMMENTS 4 – Experimental design
Needs to be reconsidered.
Response to the Reviewer #1 Comments 4
In the experimental results section, I repeated the hyperparameter tuning process for all classifiers and updated Table 3.
In the experimental results section, I performed repeated tests with the KFold procedure for all classifiers and prepared and interpreted boxplot graphs for all of them.
I performed the ROC-curve evaluation for all ML classifiers instead of only the ExtraTree algorithm. Thus, I show that the proposed model produces a successful feature vector for all classifiers.
Table 3 Details of the hyper-parameters of all classifiers
Classifier | Hyper-parameter Tuning Range | Selected Values
KNeighbors | n_neighbors: (1, 20, 1); p: (1, 5, 1); weights: ('uniform', 'distance') | n_neighbors: 5; p: 3; weights: 'uniform'
SVC | C: [0.1, 1, 10, 100, 1000]; kernel: ['rbf', 'linear'] | C: 1; kernel: 'rbf'
GradientBoosting | loss: ['log_loss', 'deviance', 'exponential']; learning_rate: [0.01, 0.025, 0.2] | loss: 'log_loss'; learning_rate: 1.0
Random Forest | n_estimators: [100, 200, 800, 2000]; criterion: ['gini', 'entropy', 'log_loss'] | n_estimators: 100; criterion: 'gini'
XG Boost | max_depth: [3, 5]; learning_rate: [0.01, 0.1, 1, 10] | max_depth: 5; learning_rate: 1.0
Extra Tree | n_estimators: [200-2000]; max_features: ['auto', 'sqrt', 'log2']; max_depth: [10-110]; min_samples_split: [2, 5, 10] | n_estimators: 1800; max_features: 'sqrt'; max_depth: 30; min_samples_split: 10
Ada Boost | learning_rate: [0.01, 0.1, 1, 10]; n_estimators: [50, 500, 2000] | learning_rate: 1; n_estimators: 50
Decision Tree | criterion: ['gini', 'entropy'] | criterion: 'gini'
Logistic Regression | C: [1e-03, 1e-02, 1e-01, 1e+00, 1e+01, 1e+02, 1e+03]; penalty: ['l1', 'l2'] | C: 1.0; penalty: 'l2'
Linear Discriminant Analysis | solver: ['svd', 'lsqr', 'eigen'] | solver: 'svd'
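As an illustration of how this tuning step could be reproduced, the following minimal scikit-learn sketch runs a grid search for the Extra Tree classifier over the ranges listed in Table 3. It is a sketch under assumptions rather than the exact script used in the study: the grouped feature matrix X and the 0/1 label vector y are assumed to be already loaded, and AUC is used as the scoring metric because of the class imbalance discussed later in this letter.

from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import GridSearchCV, RepeatedStratifiedKFold

# Search ranges taken from the Extra Tree row of Table 3.
param_grid = {
    "n_estimators": [200, 600, 1000, 1400, 1800],
    "max_features": ["sqrt", "log2"],
    "max_depth": [10, 30, 60, 110],
    "min_samples_split": [2, 5, 10],
}

# Stratified, repeated folds keep the benign/malicious ratio in every split.
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=42)

search = GridSearchCV(
    ExtraTreesClassifier(random_state=42),
    param_grid,
    scoring="roc_auc",
    cv=cv,
    n_jobs=-1,
)
search.fit(X, y)  # X: grouped feature matrix, y: labels (0 = benign, 1 = malicious)
print(search.best_params_, search.best_score_)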
Figure 6 Cross-validation graph for all ML classifiers
Figure 8 ROC Curve of all ML Algorithms with Feature Grouping
REVIEWER #1 COMMENTS 5 – Validity of the findings
The author proposes a reduced feature set (using just 11 static features) that is enough for classification tasks for Android malware and Benign apps. Moreover, the author claims that there is no need for an extended feature set for the classification task. I have some concerns in this regard:
1) Most of the literature published in the Android malware detection domain use significantly big feature sets for malware detection and classification tasks. The proposed techniques (I can not mention a single here because there are loads of them) have claimed up to 99% detection accuracy by using larger feature sets. Moreover, a lot of them are using fairly big datasets as compared to this study. How would you compare your results with them?
Response to the Reviewer #1 Comments 5
The main contributions of the study have been rearranged in the introduction. In this study, the capacity of the proposed feature grouping methodology to provide efficiency in terms of processing time has been demonstrated. A new table has been prepared for comparison with similar studies, and the advantageous points of the proposed methodology have been re-emphasized.
The main contributions of the study to the literature have been rearranged and added to the introduction.
The contributions of this work are as follows:
• The feature-grouping-based Android malware detection tool (FG-Droid) is a low-runtime, highly efficient machine learning model.
• The model groups permission-based features with a unique methodology and obtains only 11 static features for each application.
• The model achieves 97.7% classification success in the tests and needs only 0.063 seconds of analysis time per application. This is one of the best values among models with similar classification success, and it was obtained without using a GPU.
• The model selects fewer features than traditional feature selection methods (chi2, f_classif, PCA) and requires less processing time while achieving higher classification success.
• As a result, the proposed unique grouping yields a model with high classification success, low analysis time and few features.
REVIEWER #1 COMMENTS 6 – Validity of the findings
The author proposes a reduced feature set (using just 11 static features) that is enough for classification tasks for Android malware and Benign apps. Moreover, the author claims that there is no need for an extended feature set for the classification task. I have some concerns in this regard:
2) Another concern is about adversarial evasion attacks. A small feature set is more vulnerable to evasion attacks than compared to a diverse feature set. Have you considered it?
Response to the Reviewer #1 Comments 6
This study uses the Drebin, Genome and Arslan datasets, which are also used in the other studies cited in the related works and discussion sections. These are long-established malicious and benign datasets whose results have been widely reported. Since the aim of this study is to compare the benefits of feature grouping against other studies on the same data, the tests were repeated on these datasets and the results were shared. Testing with datasets containing different malware families would certainly increase the reliability of the study; however, because our aim is a like-for-like comparison with studies that use the same data, the Drebin, Genome and Arslan datasets were preferred in the tests.
To increase the objectivity of the tests, repeated stratified k-fold cross-validation (RepeatedStratifiedKFold) was also performed and the results were added to the study.
Figure 6 Cross-validation graph for all ML classifiers
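To make the repeated cross-validation procedure concrete, the sketch below shows one way the per-classifier score distributions behind a graph such as Figure 6 could be produced. It is an illustrative example only: the classifier settings reuse the selected values from Table 3, and X and y are again assumed to be the grouped feature matrix and the 0/1 labels.

import matplotlib.pyplot as plt
from sklearn.ensemble import AdaBoostClassifier, ExtraTreesClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

# A subset of the evaluated classifiers, configured with the selected values from Table 3.
models = {
    "ExtraTree": ExtraTreesClassifier(n_estimators=1800, max_depth=30,
                                      max_features="sqrt", min_samples_split=10),
    "RandomForest": RandomForestClassifier(n_estimators=100, criterion="gini"),
    "AdaBoost": AdaBoostClassifier(n_estimators=50, learning_rate=1.0),
    "DecisionTree": DecisionTreeClassifier(criterion="gini"),
    "LogisticRegression": LogisticRegression(C=1.0, penalty="l2", max_iter=1000),
    "KNN": KNeighborsClassifier(n_neighbors=5, p=3, weights="uniform"),
}

cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=42)
scores = {name: cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
          for name, model in models.items()}

# One box per classifier, analogous to the cross-validation graph.
plt.boxplot(list(scores.values()), labels=list(scores.keys()))
plt.ylabel("ROC AUC")
plt.xticks(rotation=45)
plt.tight_layout()
plt.show()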
REVIEWER #1 COMMENTS 7 – Validity of the findings
The author proposes a reduced feature set (using just 11 static features) that is enough for classification tasks for Android malware and Benign apps. Moreover, the author claims that there is no need for an extended feature set for the classification task. I have some concerns in this regard:
3) I can see that this study uses around 6k malware and 900+ benign samples. How can you justify this distribution? Normally, balanced datasets are employed. However, in real-world situations, malware samples are always less than benign, therefore, an unbalanced dataset (more benign and less malware still makes sense (As used in Drebin)).
Response to the Reviewer #1 Comments 7
In the real world, benign apps do of course outnumber malicious ones many times over. However, when preparing a dataset, verifying that benign apps are genuinely benign is not as easy as labelling malicious applications. For the Arslan dataset, which is my own and is used in this study, I prepared the applications by downloading them and testing them one by one on VirusTotal. Malicious samples, in contrast, are available in ready-made distributions such as Drebin, Genome and Androzoo. For this reason, the number of benign APKs is comparatively small. So that this imbalance does not bias the results, the results for each class were not evaluated separately; instead, classification success was evaluated on the basis of the AUC value.
The AUC results were previously reported only for the ExtraTree classifier. The related graph has been updated to display the AUC value for all classifiers, and the results show that the model is successful with almost all of them.
Figure 8 ROC Curve of all ML Algorithms with Feature Grouping
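The short sketch below shows how per-classifier ROC curves and AUC values of this kind can be generated on a stratified hold-out split. It is a hedged example that reuses the hypothetical models dictionary from the earlier sketch and assumes the same X and y.

import matplotlib.pyplot as plt
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.model_selection import train_test_split

# A stratified split preserves the benign/malicious ratio despite the imbalance.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

plt.figure()
for name, model in models.items():
    model.fit(X_train, y_train)
    proba = model.predict_proba(X_test)[:, 1]
    fpr, tpr, _ = roc_curve(y_test, proba)
    auc = roc_auc_score(y_test, proba)
    plt.plot(fpr, tpr, label=f"{name} (AUC = {auc:.3f})")

plt.plot([0, 1], [0, 1], linestyle="--", color="grey")  # chance level
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()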
REVIEWER #1 COMMENTS 8 – Validity of the findings
The author proposes a reduced feature set (using just 11 static features) that is enough for classification tasks for Android malware and Benign apps. Moreover, the author claims that there is no need for an extended feature set for the classification task. I have some concerns in this regard:
4) You have used Drebin and Genome datasets. Just to let you know that all the malware samples in Genome are already a part of the Drebin dataset. Therefore no need to use the Genome as a separate dataset.
Response to the Reviewer #1 Comments 8
This is the first time I have heard of this. I have used the Drebin and Genome datasets as separate datasets in my previous publications, in this journal and elsewhere, and was unaware of the overlap because no such criticism had been raised before. After your comment, I examined recent studies and saw that they also use Drebin and Genome as separate datasets. In addition, I compared the hashes of the Drebin and Genome APK files in my copy of the data and found that they differ. For these reasons, I treated the Genome and Drebin datasets as separate datasets. I apologize for this mistake and will pay attention to it in my future work.
REVIEWER #1 COMMENTS 9 – Validity of the findings
The author proposes a reduced feature set (using just 11 static features) that is enough for classification tasks for Android malware and Benign apps. Moreover, the author claims that there is no need for an extended feature set for the classification task. I have some concerns in this regard:
5) Drebin dataset is now obsolete. The apps in Drebin were gathered from 2010-to 2012. That's a decade old now. Please consider new datasets. A good option would be to use KronoDroid (2021).
Response to the Reviewer #1 Comments 9
As explained above, my aim in using the Drebin and Genome datasets is to test whether the proposed model is superior to similar studies. After your criticism, I submitted a request to access the Androzoo dataset but have not yet received the necessary permissions; I will obtain access to this dataset for use in my future work. For KronoDroid, I was only able to access the CSV file, whereas I need the APK files, and it does not seem feasible to repeat all of these operations with KronoDroid within this revision. For these reasons, I will try to obtain the KronoDroid and Androzoo datasets for use in my future studies.
REVIEWER #1 COMMENTS 10 – Validity of the findings
The author proposes a reduced feature set (using just 11 static features) that is enough for classification tasks for Android malware and Benign apps. Moreover, the author claims that there is no need for an extended feature set for the classification task. I have some concerns in this regard:
6) Figure 8 is an example of poor presentation of results.
Response to the Reviewer #1 Comments 10
The original Figure 8 has been excluded from the study. The experimental results section has been reorganized from top to bottom to show the results of all 10 classifiers, so that figure is no longer needed. In addition, Figure 6 has been added, and Figure 7 and the new Figure 8 have been updated.
Figure 6 Cross-validation graph for all ML classifiers
Figure 8 ROC Curve of all ML Algorithms with Feature Grouping
REVIEWER #1 COMMENTS 11 – Additional comments
Although the paper should be rejected, however, I am suggesting a major revision at this stage point. All my concerns should be addressed one by one in rebuttal for further considereation.
Response to the Reviewer #1 Comments 11
Thank you for not rejecting the manuscript.
• I have redrawn Figures 1 to 8 to improve the quality of the work.
• I have included Table 1, which summarizes similar studies.
• I have updated Tables 3 and 7.
• I have completely reworked the entire manuscript.
• I have rewritten the contributions of the study.
• I have prepared the whole study over 10 different classifiers rather than only the ExtraTree classifier.
• In Table 3, I have given the parameters selected by the hyper-parameter tuning process for all classifiers.
• To support the machine learning model results, I performed the RepeatedStratifiedKFold operation for all classifiers and presented the repeated results in a new graph.
• I have given the ROC curve for all classifiers, which was previously shared for the ExtraTree classifier only.
• I have expanded the comparison table with similar studies.
• I have compared the results with my own previous work.
Most importantly, I reviewed the entire study from start to finish and had it read by a native English speaker. More than 1,000 proofreading fixes have been made.
The result was an extensive editing, drawing and updating process.
REVIEWER #2 COMMENTS 1 – Basic reporting
The English language should be improved to ensure that an international audience can clearly understand your text. Their are too many grammatical mistakes, the article need to go through proofreading
Introduction should be rewrite again
Missing sentences in Line 16
This has resulted in the undesired installation of Android apks that violate user privacy or malicious ???
Line 32 it -> It, Also, too many use of 'it' in paragraph
Line 45 ->When malicious applications Access user mobil devices - correct it
Line 51 ->suffiecient -> correct it sufficient
Line 55 -> Malware detection mechanism in different types have been proposed to address this (WHAT IS THIS?? IN THIS SENTENCE)need in the security mechanism. These (?) are signature-based approach ...
Line 79 wit -> with
Response to the Reviewer #2 Comments 1
The authors thank the reviewer for this comment. In response to the criticisms from the editors and referees, the whole study was re-read from beginning to end. Native English speakers then found and corrected typographical errors and mistakes in grammar, style, and spelling. In total, roughly 1,000 individual corrections were made.
I could not list all of the fixes here because there are too many, but they are shown in the tracked-changes version.
REVIEWER #2 COMMENTS 2 – Experimental design
Line 238 While adding to the csv file, malicious and benign applications were labeled. Please explain How the data is labeled in Figure 2
Response to the Reviewer #2 Comments 2
I have redrawn Figure 2 to make all of the preprocessing steps more understandable, and I have completely rewritten the explanation in the relevant section.
Figure 2 Automatic Feature Extraction and Pre-processing flowchart
The Android operating system is application-based. After applications are built, they are packaged as zip archives with the APK extension. These applications, which share a similar file structure, can all run on the same operating system (Android OS). An application package contains folders and files such as AndroidManifest.xml, classes.dex, resources.arsc, lib and res. AndroidManifest.xml is required for an application to run; it contains basic information (version, API level, hardware and software requirements, etc.) and the permissions declared by the application.
This study adopts a static analysis-based feature extraction approach for Android malware detection. A series of steps were applied to extract the permission-based features from the AndroidManifest.xml file, as shown in Figure 2. First, a dataset consisting of benign and malicious software was created. Applications were opened using the Jadx decompiler, providing access to both the source code and the AndroidManifest.xml file needed for this study. To determine the permissions requested by each application and transfer them to the feature vector, a reference list of all Android permissions is required; this study used a list of 348 permissions from Android 11 (API level 30). The permissions declared in the manifest file were compared against this list and written to the feature vector with a value of 1 for used permissions and 0 for unused ones, yielding a 1x349 feature vector for each application. It should be noted that not every application can be opened cleanly by reverse engineering; in such cases values such as NaN or blanks may appear in the vector, so these were checked and the affected applications were filtered out. Labelling the malicious applications posed no problem, but it is critical to verify that benign apps are genuinely benign, so all benign applications were rechecked on VirusTotal. Applications that passed these tests were labelled B, and applications from the Drebin and Genome datasets were labelled M. In this way, fast and efficient feature extraction and preprocessing were carried out. After these steps were completed for all applications, the individual vectors were combined into a single feature matrix of 7266x349 dimensions, which was saved to a CSV file and converted into a usable form in the following steps. All of these operations were performed by an executable prepared in the C# language.
Because such a large vector would require high processing power in the machine learning models, it is subjected to feature grouping, the details of which are given in the next section. Since the grouped values can be greater than 1, they are normalized using the normal distribution and all values are scaled to the 0-1 range.
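The extraction tool itself is an executable written in C#; purely as an illustration of the logic described above, the Python sketch below builds the same kind of 0/1 permission vector from a decompiled AndroidManifest.xml. File names such as permissions_api30.txt are hypothetical placeholders, not artefacts of the actual tool.

import csv
import xml.etree.ElementTree as ET

ANDROID_NS = "http://schemas.android.com/apk/res/android"

def load_permission_list(path="permissions_api30.txt"):
    # Reference list of the 348 permissions in Android 11 (API level 30), one per line.
    with open(path) as f:
        return [line.strip() for line in f if line.strip()]

def extract_permission_vector(manifest_path, permission_list):
    # Collect the permissions declared in the decompiled AndroidManifest.xml.
    tree = ET.parse(manifest_path)
    declared = {node.get(f"{{{ANDROID_NS}}}name")
                for node in tree.getroot().iter("uses-permission")}
    # 1 if the permission is requested by the application, 0 otherwise.
    return [1 if perm in declared else 0 for perm in permission_list]

def build_dataset(apps, permission_list, out_csv="features.csv"):
    # apps: list of (manifest_path, label) pairs, with label "B" (benign) or "M" (malicious).
    with open(out_csv, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(permission_list + ["label"])
        for manifest_path, label in apps:
            writer.writerow(extract_permission_vector(manifest_path, permission_list) + [label])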
REVIEWER #2 COMMENTS 3 – Experimental design
Line 241 The processing steps shown in Figure-2 were carried out by means of an automatic code, and it is a very fast feature extraction process. How this process is fast? please explain in detail
Response to the Reviewer #2 Comments 3
I have redrawn Figure 2 to make all of the preprocessing steps more understandable, and I have completely rewritten the explanation in the relevant section.
Figure 2 Automatic Feature Extraction and Pre-processing flowchart
The rewritten explanation is the same as the one reproduced in the response to Comment 2 above. In brief, the extraction process is fast because, for each application, only the AndroidManifest.xml file is parsed: the permissions it declares are compared against a fixed list of 348 permissions to produce a 1x349 binary vector, and all of these steps are carried out automatically by a single executable prepared in C#, without any manual intervention.
REVIEWER #2 COMMENTS 4 – Experimental design
Automatic Feature Extraction and Pre-processing section need to be explain properly. Things are overlapping and explained shortly.
Response to the Reviewer #2 Comments 4
I have redrawn Figure 2 to make all of the preprocessing steps more understandable, and I have completely rewritten the explanation in the relevant section.
Figure 2 Automatic Feature Extraction and Pre-processing flowchart
The Automatic Feature Extraction and Pre-processing section has been rewritten in full; the revised text is identical to the explanation reproduced in the response to Comment 2 above, and the redrawn Figure 2 now shows each step of the process separately so that the stages no longer overlap.
REVIEWER #2 COMMENTS 5 – Validity of the findings
Explain how FG-droid is efficient as mentioned in contribution?
Response to the Reviewer #2 Comments 5
The main contributions of the study to the literature have been rearranged and added to the introduction.
The contributions of this work are as follows:
• The feature-grouping-based Android malware detection tool (FG-Droid) is a low-runtime, highly efficient machine learning model.
• The model groups permission-based features with a unique methodology and obtains only 11 static features for each application.
• The model achieves 97.7% classification success in the tests and needs only 0.063 seconds of analysis time per application. This is one of the best values among models with similar classification success, and it was obtained without using a GPU.
• The model selects fewer features than traditional feature selection methods (chi2, f_classif, PCA) and requires less processing time while achieving higher classification success.
• As a result, the proposed unique grouping yields a model with high classification success, low analysis time and few features.
REVIEWER #2 COMMENTS 6 – Validity of the findings
Line 427 -> While some of these obtained features contribute positively to the classification performance, some may have no effect at all, and some may have a deteriorating effect. Explain How? What are their limitations? What have you achieved?
Response to the Reviewer #2 Comments 6
There are many permissions defined for Android applications, and most of them are not used by any given app. In the ungrouped feature vector produced in this study, the number of zero (unused-permission) entries is 2,584,200, while the number of one (used-permission) entries is only 89,055. It therefore becomes necessary to identify the permissions that contribute most to the classification. It is not practical to inspect the weights of all 348 features individually, but the feature importance graph obtained for 10 of them shows that the features do not all contribute to the classification at the same level. The main idea of this study starts from this point: since there are so many zero-valued features, it was considered more beneficial to group the features than to process them separately. Accordingly, instead of reducing the number of features by selection, this study groups them. In this way, as shown in the results, we achieved higher classification success than feature selection methods such as chi2, f_classif and PCA.
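To make this comparison concrete, the sketch below contrasts a chi2-based feature selection baseline with a simple group-and-scale transformation of the permission matrix. It is an assumption-laden illustration rather than the exact FG-Droid implementation: the groups variable (a list of column-index arrays, one per permission group) is a hypothetical placeholder, and X and y are the binary permission matrix and labels as before.

import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import cross_val_score

def group_features(X, groups):
    # Sum the permissions requested within each group, then scale each column to [0, 1].
    G = np.column_stack([X[:, idx].sum(axis=1) for idx in groups]).astype(float)
    return G / np.maximum(G.max(axis=0), 1.0)

def compare_selection_and_grouping(X, y, groups, k=11):
    clf = ExtraTreesClassifier(n_estimators=1800, max_depth=30, random_state=0)
    X_chi2 = SelectKBest(chi2, k=k).fit_transform(X, y)   # selection baseline
    X_grouped = group_features(X, groups)                  # grouping approach
    for name, data in (("chi2 selection", X_chi2), ("feature grouping", X_grouped)):
        auc = cross_val_score(clf, data, y, cv=5, scoring="roc_auc").mean()
        print(f"{name}: mean AUC = {auc:.3f}")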
REVIEWER #2 COMMENTS 7 – Additional comments
In this study, FG-Droid, a machine-learning based classifier, using the method of grouping the features obtained by static analysis, was proposed. It was created because of experiments with Machine learning (ML), DNN, RNN, LSTM and GRU based models using Drebin, Genome and Arslan datasets.
However, Clear, unambiguous, professional English language is missing throughout the paper. Overall the structure of the paper need to be modified again. Experimental evaluation with other studies are missing in terms of novelty.
Response to the Reviewer #2 Comments 7
The authors thank the reviewer for this comment. In response to the criticisms from the editors and referees, the whole study was re-read from beginning to end. Native English speakers then found and corrected typographical errors and mistakes in grammar, style, and spelling. In total, roughly 1,000 individual corrections were made.
• I have redrawn Figures 1 to 8 to improve the quality of the work.
• I have included Table 1, which summarizes similar studies.
• I have updated Tables 3 and 7.
• I have completely reworked the entire manuscript.
• I have rewritten the contributions of the study.
• I have prepared the whole study over 10 different classifiers rather than only the ExtraTree classifier.
• In Table 3, I have given the parameters selected by the hyper-parameter tuning process for all classifiers.
• To support the machine learning model results, I performed the RepeatedStratifiedKFold operation for all classifiers and presented the repeated results in a new graph.
• I have given the ROC curve for all classifiers, which was previously shared for the ExtraTree classifier only.
• I have expanded the comparison table with similar studies.
• I have compared the results with my own previous work.
Most importantly, I reviewed the entire study from start to finish and had it read by a native English speaker. More than 1,000 proofreading fixes have been made.
The result was an extensive editing, drawing and updating process.
" | Here is a paper. Please give your review comments after reading it. |
717 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Considering that the road short-term traffic flow has strong time series correlation characteristics, a new long-term and short-term memory neural network (LSTM)-based prediction model optimized by the improved genetic algorithm (IGA) is proposed to improve the prediction accuracy of road traffic flow. First, an improved genetic algorithm (IGA) is proposed by dynamically adjusting the mutation rate and crossover rate of standard GA. Then, the parameters of the LSTM, such as the number of hidden units, training times, gradient threshold and learning rate, are optimized by the IGA. In the analysis stage, 5-minute short-term traffic flow data are used to demonstrate the superiority of the proposed method over the existing neural network algorithms. The results show that the root mean square error achieved by the proposed algorithm is lower than that achieved by the other neural network methods in both the weekday and weekend data sets. This verifies that the algorithm can adapt well to different kinds of data and achieve high prediction accuracy.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>Traffic congestion is a hotspot in the field of smart city and intelligent transportation <ns0:ref type='bibr' target='#b0'>[1,</ns0:ref><ns0:ref type='bibr' target='#b2'>2]</ns0:ref>.</ns0:p><ns0:p>Intelligent transportation system (ITS) can effectively strengthen the coordination between people, vehicles and roads, and forms a safe, clean and environment-friendly comprehensive transportation system by comprehensively applying advanced information technology, computer technology, sensing technology and artificial intelligence to transportation and service control <ns0:ref type='bibr' target='#b5'>[3]</ns0:ref>. Short-term traffic flow prediction is the key technology of ITS to solve traffic congestion and guidance. By collecting the temporal and spatial characteristics of historical traffic flow data, the short-term Manuscript to be reviewed Computer Science traffic flow in the future can be accurately predicted to obtain real-time traffic status, so as to provide decision support for traffic dredging and traffic control <ns0:ref type='bibr' target='#b7'>[4,</ns0:ref><ns0:ref type='bibr' target='#b8'>5]</ns0:ref>.</ns0:p><ns0:p>There are three common model prediction methods: parameter based prediction, shallow machine learning prediction and deep learning prediction. The typical parameter-based prediction model of ARIMA was applied to traffic flow prediction in <ns0:ref type='bibr' target='#b10'>[6]</ns0:ref>. This method lacks of consideration of the temporal and spatial sequence characteristics of traffic flow. The prediction methods of shallow machine learning model including support vector machine (SVM), BP neural network, etc. have the defects of slow processing speed and low accuracy. In contrast, deep learning models, such as deep belief network (DBN), convolutional neural network (CNN), stacked automatic encoder neural network (SAE), cyclic neural network (RNN), long-term and short-term memory neural network (LSTM), gated cyclic unit neural network (GRU), etc., have unique advantages in the processing of time series data. With the development of vehicle networking technology and artificial intelligence, deep learning model has become the hot spot of current research and been widely used in the field of traffic flow prediction. The pure deep learning model also has some inherent defects. For example, although the LSTM can fully handle to the time series characteristics of traffic flow, it is difficult to balance the depth and operation time complexity of its model; The dependence of the RNN model on long time series is difficult to deal with, and even the gradient vanishes. By combining the optimization algorithm with the deep learning algorithm, the output root mean square error can be significantly improved <ns0:ref type='bibr' target='#b12'>[7]</ns0:ref><ns0:ref type='bibr' target='#b13'>[8]</ns0:ref><ns0:ref type='bibr' target='#b15'>[9]</ns0:ref><ns0:ref type='bibr' target='#b16'>[10]</ns0:ref><ns0:ref type='bibr' target='#b18'>[11]</ns0:ref><ns0:ref type='bibr' target='#b19'>[12]</ns0:ref>.</ns0:p><ns0:p>The parameter selection of the LSTM algorithm has a great impact on the results. The traditional methods use traversal search and control parameter adjustment, which has a large time complexity.</ns0:p><ns0:p>To deal with the large amount of data and strong time dependence of road short-term traffic flow time series data. 
The main contributions of this paper are summarized as follows:</ns0:p><ns0:p>(1) We improved and optimized the genetic algorithm model (IGA), obtaining an IGA model with higher convergence efficiency;</ns0:p><ns0:p>(2) The IGA is applied to optimize the LSTM model, and the optimized algorithm is named the IGA-LSTM. The IGA-LSTM can be used to predict 5-minute short-term traffic flow;
<ns0:div><ns0:head n='2'>Related works</ns0:head><ns0:p>In this section, we will review related works on deep learning method and short-term traffic flow. Yuankai Wu et al. proposed a traffic flow prediction model based on deep neural network, which established an attention model, and obtained the correlation of historical traffic flow data through the model, so as to predict the future traffic flow <ns0:ref type='bibr' target='#b21'>[13]</ns0:ref>; T. Bogaerts et al. proposed a CNN-LSTM neural network prediction model for short-term and long-term traffic flow, which can extract the temporal and spatial characteristics of traffic flow data at the same time <ns0:ref type='bibr' target='#b23'>[14]</ns0:ref>.</ns0:p><ns0:p>Considering the problem that the accuracy of data-driven prediction model is not high when the amount of training data is small or the noise is large, Yunyuan et al. proposed a PRGP model to strengthen the estimation of traffic flow through shadow GP, and the prediction result is better than that of simple machine learning algorithm <ns0:ref type='bibr' target='#b27'>[15]</ns0:ref> Manuscript to be reviewed Computer Science mechanism <ns0:ref type='bibr' target='#b33'>[18]</ns0:ref>. F. Ali et al proposed a social network-based, real-time monitoring framework for traffic accident detection and condition analysis using ontology and latent Dirichlet allocation (OLDA) and bidirectional long short-term memory (Bi-LSTM) <ns0:ref type='bibr' target='#b35'>[19]</ns0:ref>. The results achieves accuracy of 97 %. In the mean while, his research on Traffic accident detection and sentiment Analysis of transportation in <ns0:ref type='bibr' target='#b37'>[20]</ns0:ref><ns0:ref type='bibr' target='#b39'>[21]</ns0:ref><ns0:ref type='bibr' target='#b42'>[22]</ns0:ref>, which improved the task of transportation features extraction and text classification using the Bi-directional Long Short-Term Memory (Bi-LSTM) approach.</ns0:p><ns0:p>However, most of these systems use empirical values to initialize the deep learning algorithm models, which are sensitive to initial values. That is not scientifically rigorous enough. The parameters of model LSTM mostly uses traversal multi-grid search algorithm, which has high computational complexity. And also the adjustment is complicated. In addition, traffic flow prediction is a complex system engineering, which needs to comprehensively consider spatial information and time information. Therefore, this study uses evolutionary algorithm to optimize the initial parameters of the deep learning model so as to improve the convergence efficiency and prediction accuracy of the algorithm.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>Description of Traffic Flow Time Series Model</ns0:head><ns0:p>Traffic flow can be expressed as a historical time series at a given observation location. For a certain time range, short-term traffic flow series can be predicted in the corresponding time interval. Assuming that n detectors are set in a certain road section, the historical observation data of traffic flow (vehicle / t) of the S detector in period t is defined as follows:</ns0:p><ns0:p>(1)</ns0:p><ns0:formula xml:id='formula_0'>  N 1,2,3..., P)/s (t Q 2),......, (t Q 1), (t Q (t), Q X x x x s     </ns0:formula><ns0:p>where t represents the current period, P represents time lag, and the number of all variables is expressed as equation ( <ns0:ref type='formula'>2</ns0:ref>).</ns0:p><ns0:p>(2) 𝐿 = 𝑁(𝑃 + 1) X represents the traffic state of the whole road network at the current and P historical times, and Q S (t) depends on time and spatial layout, which is called Spatio-temporal variable. Output Y can be expressed by equation <ns0:ref type='bibr' target='#b5'>(3)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_1'>(3) 1) (t Q Y x  </ns0:formula><ns0:p>The prediction model is the mapping relationship between input A and output Y, which can be expressed by equation ( <ns0:ref type='formula'>4</ns0:ref>). ( <ns0:ref type='formula'>4</ns0:ref>) <ns0:ref type='table' target='#tab_3'>2021:09:66130:1:2:NEW 20 Apr 2022)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_2'>Y(t)) → f(X(t) F(t)  PeerJ Comput. Sci. reviewing PDF | (CS-</ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The overall framework of the system model is shown in Figure <ns0:ref type='figure' target='#fig_18'>1</ns0:ref>. The traffic flow detector collects the original traffic flow data, divides and preprocesses the collected data. The missing data in the collection process is supplemented by the method of front and rear mean and normalized with the abnormal data eliminated.</ns0:p><ns0:p>The operation process of the model can be described as follows:</ns0:p><ns0:p>(1) Data preparation and data preprocessing, and establishment of predictive models;</ns0:p><ns0:p>(2) Optimize the crossover operator and mutation operator of the genetic algorithm to obtain the IGA algorithm;</ns0:p><ns0:p>(3) The IGA algorithm is used to optimize the LSTM parameters, and the target problem is transformed into a biological evolution process. New populations are generated through operations such as crossover, mutation, and replication, and solutions with low fitness are eliminated;</ns0:p><ns0:p>(4) The population is initialized and decoded, and the mean square error of the LSTM neural network is used as the fitness function. The individual of the solution is subjected to selection crossover mutation operation;</ns0:p><ns0:p>(5) If the target value of the fitness function reaches the optimal value, go to the next step; otherwise, go back to step ( <ns0:ref type='formula'>4</ns0:ref>); <ns0:ref type='bibr' target='#b10'>(6)</ns0:ref> Obtain the fitness target value and optimal parameters. Calculate the mean square error of prediction based on the best parameters; <ns0:ref type='bibr' target='#b12'>(7)</ns0:ref> Judge the terminate conditions, if the number of iterations of the population is satisfied, stop the calculation. Now, the global optimal parameter combination of the LSTM network; otherwise, return to step ( <ns0:ref type='formula'>6</ns0:ref>); <ns0:ref type='bibr' target='#b13'>(8)</ns0:ref> Output the normalized data, perform error analysis, obtain the final prediction result and compare it with several other shallow layers and their learning algorithms such as the GA-BP, the PSO-BP and the LSTM, and give the final conclusion.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>The optimization of Genetic Algorithm</ns0:head><ns0:p>Genetic Algorithm (GA) is an adaptive global optimization probabilistic search algorithm tool. Just as described in the <ns0:ref type='bibr'>[23][24]</ns0:ref>, GA is a global search algorithm. By improving it, it can avoid local optimum.</ns0:p><ns0:p>We have analyzed the crossover rate and the mutation rate of standard GA in <ns0:ref type='bibr' target='#b48'>[25]</ns0:ref>. And we have obtained the result that a too large crossover rate or a too small crossover rate will affect the search efficiency of the GA. An optimized GA has been proposed in <ns0:ref type='bibr' target='#b50'>[26]</ns0:ref>, in which the mutation rate and crossover rate can be adjusted appropriately. And also we introduced the variable crossover process and mutation rate parameters in <ns0:ref type='bibr' target='#b48'>[25]</ns0:ref>, The adaptive crossover rate P C based on the IGA can be expressed as equation ( <ns0:ref type='formula'>5</ns0:ref>), as follows:</ns0:p><ns0:formula xml:id='formula_3'>(5)              avg C2 avg min max max C1 C F F , P F F , F F ) F (F P P</ns0:formula><ns0:p>The adaptive mutation rate P V can be calculated as equation ( <ns0:ref type='formula'>6</ns0:ref>):</ns0:p><ns0:formula xml:id='formula_4'>(6)           avg V2 avg min max max V1 V F F , P F F , F F F) (F P P</ns0:formula><ns0:p>In equation( <ns0:ref type='formula'>5</ns0:ref>) and ( <ns0:ref type='formula'>6</ns0:ref> The equation ( <ns0:ref type='formula'>7</ns0:ref>) is used to calculate the larger fitness value of individuals. In this process, the individuals are randomly paired and the equation is as follows:</ns0:p><ns0:formula xml:id='formula_5'>(7)               avg C2 avg lC C1 Cm F F , P F F , e 1 P P</ns0:formula><ns0:p>Then, k c can be expressed as follows in equation ( <ns0:ref type='formula'>8</ns0:ref>), which refers to adaptive crossover rate. <ns0:ref type='bibr' target='#b13'>(8)</ns0:ref> avg max</ns0:p><ns0:formula xml:id='formula_6'>avg C F - F F - F K    </ns0:formula><ns0:p>In the equation( <ns0:ref type='formula'>7</ns0:ref>)and( <ns0:ref type='formula'>8</ns0:ref>), F′ refers to the maximum fitness value of the parent individual based on the crossover operation; F max represents the maximum fitness of individuals; represents the average fitness of individuals whose fitness is higher than F avg ; and Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_7'>′ avg</ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The follow equation ( <ns0:ref type='formula'>9</ns0:ref>) is the mutation operation. P m represents the fitness of each individual in the population which can be calculated with below equation ( <ns0:ref type='formula'>9</ns0:ref>): K m represents the adaptive mutation rate. The number of mutation points and the mutation rate can be calculated with equation ( <ns0:ref type='formula'>9</ns0:ref>) and equation <ns0:ref type='bibr' target='#b16'>(10)</ns0:ref>, respectively. <ns0:ref type='bibr' target='#b16'>(10)</ns0:ref> avg max</ns0:p><ns0:formula xml:id='formula_8'>(9)             avg m2</ns0:formula><ns0:formula xml:id='formula_9'>avg m F - F F - F K    (11) P * num NUM </ns0:formula><ns0:p>F represents the fitness of individuals during the mutation operation process; F max refers to the maximum fitness of individuals during the same process; The average fitness value of the individuals can be expressed as whose fitness is higher than F avg ; and are two constant</ns0:p><ns0:formula xml:id='formula_10'>′ avg F 1 m P 2 m P</ns0:formula><ns0:p>parameters. The largest number of mutation points can be calculated by equation <ns0:ref type='bibr' target='#b18'>(11)</ns0:ref> which is named NUM.</ns0:p><ns0:p>The above operations improved the fitness value of the population, so that the fitness value can be raised to a higher level faster. The excellent genes will not be destroyed in the evolution, so it has a certain contribution to the improvement of the GA algorithm. This improved GA algorithm(IGA) will be applied to the optimization of parameters of the deep learning algorithm in this paper.</ns0:p></ns0:div>
<ns0:div><ns0:head>5Long-term and Short-term Memory Neural Network(LSTM)</ns0:head><ns0:p>The long-term and short-term memory neural network is also called gate recurrent neural network (gate RNN), also known as a Recurrent Neural Network, including the network based on Long-term and Short-term Memory (LSTM) and the network based on gate recurrent unit. The recurrent neural network can store the relationship between the input of the neuron at the current time and the output of the neuron at the previous time. In recent years, it has been used in nonlinear time series data prediction. Due to the limited depth of RNN in the time dimension, it is easy to produce gradient disappearance or gradient explosion when processing samples with many parameters and long-term time dependence. The LSTM well solves the problem of the long-term dependence of samples. Its network structure adds one input and one output based on the RNN.</ns0:p><ns0:p>The gating unit can flexibly change the connection weight. It also improves the structure of repeated modules. Its prominent feature is to strengthen the control ability of information by using the threshold composed of a sigmoid neural network layer and point-by-point multiplication. As shown in Fig. <ns0:ref type='figure'>2</ns0:ref>, the forgetting threshold layer f t provides the ability to selectively learn, cancel learning or retain information from each unit, which determines whether it should be retained or discarded in the process of information transmission; the unit status to be updated is determined by the input the threshold layer i t . The output threshold layer o t will filter the output based on the unit state, and the update of each threshold layer is shown in equation <ns0:ref type='bibr' target='#b19'>(12)</ns0:ref>. Where W f 、b f 、W i 、 b i 、W o and b o are the weight and offset of each threshold layer respectively, σ represents sigmoid activation function.</ns0:p><ns0:formula xml:id='formula_11'>(12)       ) , ( ) , ( ) , ( 1 1 1       b x h W b x h W i b x h W f t t t i t t i t f t t f t            </ns0:formula><ns0:p>The unique unit state and threshold layer of the LSTM extend the memory ability of the RNN model. In the training process of the neural network, the weights and offsets of each threshold layer are learned from the historical data set to identify and remember the characteristics of the historical state. In the real-time prediction stage, the prediction value of the time series can be obtained by calculating the input data based on the trained model <ns0:ref type='bibr' target='#b52'>[27,</ns0:ref><ns0:ref type='bibr' target='#b54'>28]</ns0:ref>.</ns0:p><ns0:p>6 The IGA-LSTM Model</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.1'>Model Establishment</ns0:head><ns0:p>The parameter selection of long-term and short-term memory neural network models has a great impact on the results. The improved genetic algorithm is introduced to select the parameters of the LSTM, and the IGA-LSTM traffic flow prediction model is established. The optimized parameters can reduce the impact of the initial parameter setting on the prediction results. The overall framework is shown in Figure <ns0:ref type='figure' target='#fig_7'>3</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head n='6.2'>Data Reprocessing</ns0:head><ns0:p>The measured traffic flow data is a nonlinear time series. The data is normalized after being imported by the IGA-LSTM model. The formula adopted is Formula <ns0:ref type='bibr' target='#b21'>(13)</ns0:ref>.</ns0:p><ns0:formula xml:id='formula_12'>(13)       dj dj dj di i f min - f max f min - f d  n j ≤ ≤ 1</ns0:formula><ns0:p>Where refers to the traffic flow data. and are the maximum and </ns0:p><ns0:formula xml:id='formula_13'>di f { } di f max { } di f</ns0:formula></ns0:div>
<ns0:div><ns0:head n='6.3'>Experiment platform</ns0:head><ns0:p>The computer configuration and software environment used in the experiment are as follows:</ns0:p><ns0:p>the processor is Intel i5-6500, and the memory is 8. 0GB; the system is Windows10 (64-bit); the programming language version is Python3.7; the IGA-LSTM model, the GA-BP model ,the PSO-BP model and the LSTM model are implemented in the Keras library with Tensorflow as the backend.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.4'>Model Training and Prediction Results</ns0:head><ns0:p>The processed normalized data sequence is input into the IGA-LSTM model for training, set the population individuals to 50, the number of iterations to 200, the variation rate and crossover probability to be automatically adjusted, set the number of initially hidden unit layers of the LSTM To cooperate with the algorithm calculation, formula ( <ns0:ref type='formula'>6</ns0:ref>) is used to normalize the original data to reduce the data proportion after difference to the range of <ns0:ref type='bibr'>[-1,1]</ns0:ref>. The IGA-LSTM proposed in this paper is used to train the data, and the seventh-day data is used for prediction. The prediction results are shown in Figure <ns0:ref type='figure'>5</ns0:ref> </ns0:p><ns0:formula xml:id='formula_14'>(a) (b) (c) (d):</ns0:formula><ns0:p>Before and after using the IGA model to optimize and update the weight and threshold of the LSTM, the prediction results of normalized traffic flow data are compared. The error is reduced by 50%, and the prediction accuracy is greatly improved. After optimizing and fine adjusting the parameters of the LSTM in the process of model evolution, the final prediction results of the four algorithms are compared as shown in Figure <ns0:ref type='figure'>6</ns0:ref>.</ns0:p><ns0:p>Compared with LSTM algorithm, the GA-BP algorithm and the PSO-BP algorithm, the IGA-LSTM algorithm proposed in this paper can better fit the real data and achieve the prediction effect.</ns0:p><ns0:p>Compared with the LSTM algorithm, the GA-BP algorithm and the PSO-BP algorithm, the IGA-LSTM algorithm has smaller output root mean square error, stronger model adaptability and better prediction accuracy in traffic flow data prediction.</ns0:p><ns0:p>In order to further verify the effectiveness of the algorithm, weekend data in the same period Manuscript to be reviewed Computer Science</ns0:p></ns0:div>
<ns0:div><ns0:head n='7.'>Simulation Results and Analysis</ns0:head><ns0:p>For weekday data, because the periodicity of weekday traffic flow is obvious and the peak time of road traffic flow is relatively stable, the prediction results are better. By the simulation of four different models, the RMSE and Maximum Square Error(MMSE) of the data are shown in Table <ns0:ref type='table' target='#tab_1'>1</ns0:ref>. The optimal model is the IGA-LSTM, followed by the LSTM, then the GA-BP and the PSO-BP. Because the LSTM has strong adaptability to time series data, the performance of a deep learning algorithm is generally better than the traditional machine learning algorithm. Select the one-hour data from 8:00 to 9:00 in the morning peak of the test data on weekdays for analysis, intercept each 5-minute data node respectively, and compare the root mean square error of the four algorithms, as shown in Figure <ns0:ref type='figure' target='#fig_17'>9</ns0:ref>. It can be seen that the IGA-LSTM algorithm shows the smallest RMSE at the other 11 time nodes except the 7 th -time and 9 th -time nodes, and the advantages of the algorithm are very obvious. The RMSE of the four algorithms in time series is compared. The initial RMSE and maximum RMSE of the IGA-LSTM algorithm are the smallest, but the fluctuation is relatively large, and finally the average RMSE is the smallest; The RMSE characteristics of the GA-BP algorithm change smoothly, which shows that the error of the GA-BP algorithm changes little in the prediction process, and its average value is lower than that of the IGA-LSTM and the LSTM. The RMSE of the PSO-BP algorithm fluctuates greatly and the stability of the algorithm is poor.</ns0:p><ns0:p>For weekend data, the evaluation of the prediction results of the four models is shown in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>. From table 2, it can be seen that the prediction accuracy of several algorithms in the prediction results of weekend data is lower than that of working days. From the analysis of the original data set, it can be seen that the periodicity of the weekend data set is less obvious than that of working days. The performance of the model in predicting such data is slightly poor. However, compared with similar data, the IGA-LSTM model still shows the best prediction accuracy and the lowest root means square error. The second is still the LSTM model, the GA-BP model and the PSO-BP model. Similarly, select the one-hour data of weekend between 8:00-9:00 AM for analysis, intercept each 5-minute data node respectively, and compare the root mean square error of the four algorithms. As shown in Figure <ns0:ref type='figure' target='#fig_18'>10</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The test results of integrating weekday and weekend data show that the IGA-LSTM algorithm model constructed in this paper has better advantages in time dependence in all kinds of traffic flow data, high prediction accuracy and small output root mean square error, indicating that the algorithm has good prediction performance and good applicability <ns0:ref type='bibr' target='#b42'>[22]</ns0:ref>.</ns0:p><ns0:p>All experiment results in this paper are averaged through multiple runs. In order to show the comparison between the algorithms proposed in this paper and other algorithms, the average Standard Error(SE) of the 4 algorithms are calculated more than 10 times. And the results are displayed as follows:</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_18'>11</ns0:ref> shows the calculation results of the weekday data samples. The standard error of all four algorithms are calculated. It can be seen that the IGA-LSTM algorithm still has considerable advantages compared with the other three algorithms. Among the selected samples, the standard error of the IGA-LSTM algorithm is the smallest in the proportion of 76.9%.</ns0:p><ns0:p>Especially in the test of weekday data samples. Due to the strong periodic characteristics of traffic flow in weekdays, it can better reflect the superiority of the algorithm proposed in this paper.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_18'>12</ns0:ref> shows the calculation results of the weekend data samples. Still the average standard error is multiple runs. But only about 53.8% of the selected samples have the smallest standard error by the IGA-LSTM algorithm. Therefore, in general, the algorithm proposed in this paper has relatively higher prediction accuracy for weekday data, and relatively poor prediction accuracy for weekend data. Because the data in weekend has no obvious periodic characteristics.</ns0:p></ns0:div>
<ns0:div><ns0:head n='8.'>Conclusions and Recommendation on Future Works</ns0:head><ns0:p>According to the spatiotemporal correlation of traffic flow series, a prediction method based on improved GA-LSTM model has been applied in this paper. This method is based on long-term and short-term memory neural network, and adaptively adjusts the crossover rate and mutation rate of the GA through the improved genetic algorithm, so that the population can retain excellent genes in the process of evolution and will not be destroyed in evolution, to continuously improve the maximum fitness value of the population. The improved IGA model can optimize and adjust the number of hidden units, training times, gradient threshold, learning rate and other parameters of the LSTM model, so as to make the prediction accuracy of the LSTM model better and the mean square error smaller. The time complexity of the algorithm is increased Manuscript to be reviewed Computer Science prediction accuracy is improved by more than 10%. All abbreviated symbols and their meanings are shown in Table <ns0:ref type='table'>3</ns0:ref>.</ns0:p><ns0:p>The experimental results show that the IGA algorithm can avoid population precocity and improve population fitness. By searching the optimal value of spatial parameters, the optimization efficiency is high. The training times have a great impact on the prediction results in the optimization process, followed by the number of hidden units. Too large hidden units will reduce the operation efficiency, and the accuracy value will not be improved, but too small number of hidden units will reduce the accuracy. The value of the dropout layer has little impact on the output error, and the adjustment of learning rate parameters has a great impact on the results. On the whole, compared with several commonly used machine learning algorithms, the IGA-LSTM model has higher operation efficiency, better accuracy and fast fitness convergence. Through empirical simulation in this paper, it can be used for short-term traffic flow prediction of roads. This algorithm mainly carries out targeted simulation experiments for short-term traffic flow data, but it can also be used for medium and long-term traffic flow prediction. Because of the strong feature extraction ability and big data processing ability of deep learning, it can further consider the traffic flow prediction of multi-state cross data at the upstream and downstream of the road section.</ns0:p><ns0:p>In the future, we will conduct data correlation analysis on the upstream and downstream traffic flow of the tested road. The impacts of the upstream and downstream road will be fully considered on the traffic flow of the target road section. We will reconstruct the training data to obtain better data samples. That can improves prediction accuracy. At the same time, we also plan to consider the medium and long-term traffic flow forecast of the target road so as to provide support for the decision-making of the traffic management department. Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 4</ns0:note><ns0:note type='other'>Computer Science Figure 5</ns0:note><ns0:p>Prediction results of the 4 models(weekday) This is the prediction results of 4 models on weekdays. The red curve represents the real value on weekdays and the blue curve represents the predicted value on weekdays. 
Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 6</ns0:note><ns0:note type='other'>Computer Science Figure 7</ns0:note><ns0:p>Prediction results of the 4 models(weekend) This is the prediction results of 4 models on weekends. The red curve represents the real value on weekends and the blue curve represents the predicted value on weekends. Manuscript to be reviewed Manuscript to be reviewed</ns0:p><ns0:note type='other'>Computer Science Figure 10</ns0:note><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2021:09:66130:1:2:NEW 20 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>( 3 )( 4 )( 5 )</ns0:head><ns0:label>345</ns0:label><ns0:figDesc>The improved genetic algorithm (IGA) is used to optimize the training times, gradient threshold, hidden layers, learning rate and other parameters of the LSTM, aiming at minimizing the output root mean square error; PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66130:1:2:NEW 20 Apr 2022) Manuscript to be reviewed Computer Science The traffic flow data of California Highway Administration PEMS system are used to demonstrate the performance of the proposed design. Both workday and weekday data prediction results are simulated; The experiment results show that the proposed IGA-LSTM model has higher prediction accuracy and faster convergence speed than the shallow machine learning algorithms like GA-BP model, PSO-BP model and pure LSTM model.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>; Aiming at the problem of missing data in the process of short-term traffic flow prediction, Jinming Yang et al. proposed a spatio-temporal prediction model based on original incomplete data, This method realized the interpolation of missing values, and captured time series and spatial features through long-term memory (LSTM) network and Laplace matrix (GL) model, Compared with the deep learning model of multivariable spatiotemporal traffic data set, the proposed LSTM-GL model has better robustness and prediction accuracy [16]. Jie Su et al. predicted the next position and travel time by capturing the correlation between position and time series in trajectory data through hybrid LSTM and sequential LSTM deep learning models. This work obtained higher prediction accuracy compared with hidden Markov model and LSTM model [17]. Wen Huiying et al. used genetic algorithm to optimize the number of hidden layers, training times and dropout parameters of model LSTM model to predict the long-term and short-term traffic flow data of expressway after obtaining the optimal parameter. Compared with the pure machine learning and deep learning model, the prediction mean square error obtained in [17] is smaller. L. Hou, et al. proposed a CNN+BiLSTM model, which is the first to combine rule embedding, a parallel structure of Convolutional Neural Network (CNN) and a two-layer Bi-directional Long Short-Term Memory (BiLSTM) with the selfattention PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66130:1:2:NEW 20 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>), F′ represents the maximum fitness value of the parent individual in the process of the crossover operation; F represents the fitness value of the individual in the process of the mutation operation; F max corresponds to the maximum fitness value of the population individual, and F min represents the minimum fitness value of a individual. F avg represents the average fitness value of the population, and ,</ns0:figDesc></ns0:figure>
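The figure description above names the fitness quantities (F', F, F_max, F_min, F_avg) that drive the adaptive crossover and mutation rates, but the exact improved formulas are cut off in this fragment. The following Python sketch shows one standard fitness-adaptive scheme built from those quantities; the constants, floors, and functional form are assumptions for illustration only, not the paper's own formulas.

```python
def adaptive_rates(f_prime, f, f_max, f_avg,
                   pc_max=0.9, pc_min=0.4, pm_max=0.1, pm_min=0.01):
    """Fitness-adaptive crossover/mutation probabilities (illustrative sketch).

    f_prime : larger fitness of the two parents chosen for crossover
    f       : fitness of the individual considered for mutation
    f_max   : maximum fitness in the current population
    f_avg   : average fitness of the current population
    """
    denom = max(f_max - f_avg, 1e-12)                 # guard against a fully converged population
    if f_prime >= f_avg:                              # good parents: lower the crossover rate
        pc = pc_min + (pc_max - pc_min) * (f_max - f_prime) / denom
    else:                                             # below-average parents: full crossover rate
        pc = pc_max
    if f >= f_avg:                                    # good individual: mutate gently
        pm = pm_min + (pm_max - pm_min) * (f_max - f) / denom
    else:                                             # below-average individual: mutate strongly
        pm = pm_max
    return min(pc, pc_max), min(pm, pm_max)

# toy usage: an above-average parent pair and a below-average mutation candidate
print(adaptive_rates(f_prime=0.8, f=0.5, f_max=0.9, f_avg=0.55))
```

The intent mirrors the description in the conclusions: individuals with above-average fitness are disturbed less, so excellent genes survive, while below-average individuals keep high crossover and mutation rates so the population keeps exploring.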
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>parameters.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>, including three modules: data processing, model training and data prediction. The improved genetic algorithm optimizes the parameters of the LSTM, including the number of hidden units, training times, gradient threshold and learning rate. The weight of the model is adaptively adjusted through the LSTM iteration to form the IGA-LSTM model. After determining the optimal parameter combination, the traffic flow time series is input into the IGA-LSTM model, and the output value is the traffic flow prediction value. Through repeated experiments, the training data set is input into the model for operation, and the error between the prediction results and the test data set is compared and output.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>model to 3 -</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>200, set the number of training epochs to 250 rounds, set the initial value of the dropout layer to 0.25, and take time steps of 1-10 with a step of 1. At each time step of the input sequence, the LSTM network learns to predict the value of the next time step. Specify an initial learning rate of 0.005 and reduce the learning rate by multiplying by a factor of 0.2 after 125 rounds of training. For each prediction, the previous prediction is used as the input of the function. The IGA-LSTM model can store better individuals and avoid the degradation of the population in the process of evolution. Through the analysis of the average fitness convergence curve of the model, we can see that as the number of iterations increases, the fitness value tends to converge steadily and the error is in a downward trend, thus achieving the optimal solution of the search space. The model fitness convergence curve is shown in Figure 4 and tends to be stable after 40 generations of iteration. In this paper, the measured data of three sections from the official PeMS data in California are selected for analysis. The data contain the flow data set of 3 road sections in the road network. The daily flow data set of the detector from June 1 to June 10, 2019 is selected, the sampling interval is 5 minutes, and the data set has a total of 3 * 7 * 288 data points. Record fields include: date, flow, number of lanes, occupancy and average speed. Zero data in the original data have been repaired by taking the mean value of the data before and after the missing point. The first six days of the selected data are the training data set, and the seventh day is the test data set. The Root Mean Square Error (RMSE), Standard Error (SE) and Maximum Square Error (MMSE) are used as the evaluation indices of the prediction results.</ns0:figDesc></ns0:figure>
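The description above names the Root Mean Square Error (RMSE), Standard Error (SE) and Maximum Square Error (MMSE) as the evaluation indices but does not give their formulas. Below is a minimal Python sketch using common textbook definitions; treat the SE and MMSE formulas in particular as assumptions, since the manuscript may define them differently (for example on unnormalized flow values).

```python
import numpy as np

def evaluate(y_true, y_pred):
    """Compute the three indices reported in the result tables (assumed definitions)."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_pred - y_true
    rmse = float(np.sqrt(np.mean(err ** 2)))   # root mean square error
    se = float(np.std(err, ddof=1))            # standard error, read here as std. dev. of the errors
    mmse = float(np.max(err ** 2))             # maximum squared error over the test horizon
    return {"RMSE": rmse, "SE": se, "MMSE": mmse}

# toy usage with a short 5-minute-interval flow series
truth = [120, 135, 150, 160, 155]
pred = [118, 138, 149, 163, 150]
print(evaluate(truth, pred))
```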
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>are used for simulation analysis. A total of 3 * 4 * 288 data sets are selected. 90% of the data are used for training the model and 10% of the data are used for testing. The prediction results of the four algorithm models are shown in Figure 7 (a) (b) (c) (d). The final prediction results for weekend data of the four algorithms are compared as shown in Figure 8. PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66130:1:2:NEW 20 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>, the IGA-LSTM algorithm shows the smallest RMSE feature at 8-time nodes, but it is still dominant. The root means square error characteristics of the PSO-BP algorithm are more optimized than the realization of working day data. The RMSE of the GA-BP algorithm is the largest at each time node. PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66130:1:2:NEW 20 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 1 OverallFigure 2 Figure 3 Framework</ns0:head><ns0:label>123</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Fitness</ns0:head><ns0:label /><ns0:figDesc>convergence curve of the IGA-LSTM model The red curve represents the fitness convergence curve of the algorithm, and the average fitness is taken. PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66130:1:2:NEW 20 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>FIG. D represents the results of the PSO-BP model on weekdays.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head /><ns0:label /><ns0:figDesc>Comparison of prediction results of four algorithms(weekday) In the comparison curve of prediction results, red represents the test data set, green curve represents the prediction results of the GA-BP model, and blue represents the PSO-BP model prediction results, yellow represents the IGA-LSTM model prediction results, and black represents the LSTM model prediction results. PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66130:1:2:NEW 20 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head /><ns0:label /><ns0:figDesc>Fig. A represents the results of the IGA-LSTM model on weekends. Fig. B represents the results of the LSTM model on weekends. Fig. C represents the results of the GA-BP model on weekends.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head>FigFigure 8</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Fig. D represents the results of the PSO-BP model on weekends.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head>Figure 9 comparison</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head>comparison of 1 -</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 11</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,42.52,229.87,525.00,156.00' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>min minimum values of the sample. The obtained data sequence is divided into a training data set and a test data set, expressed as d_tr = {d_1, d_2, ..., d_m} and d_te = {d_{m+1}, d_{m+2}, ..., d_n}.</ns0:figDesc><ns0:table /></ns0:figure>
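The fragment above describes scaling the series with its maximum and minimum values and splitting it into a training set d_tr and a test set d_te. The Python sketch below illustrates that preprocessing step; the exact min-max formula and the 90/10 split ratio are assumptions inferred from the surrounding text (the experiments use roughly the first six of seven days, or 90% of the points, for training).

```python
import numpy as np

def prepare_series(d, train_ratio=0.9):
    """Min-max scale a traffic-flow series and split it into training/test parts.

    The (d - d_min) / (d_max - d_min) form and the train_ratio value are assumptions.
    """
    d = np.asarray(d, dtype=float)
    d_min, d_max = d.min(), d.max()
    scale = max(d_max - d_min, 1e-12)          # guard against a constant series
    scaled = (d - d_min) / scale               # values now lie in [0, 1]
    m = int(len(scaled) * train_ratio)         # first m points train, the rest test
    return scaled[:m], scaled[m:], (d_min, d_max)

# toy usage: 12 five-minute flow counts
train, test, bounds = prepare_series([110, 125, 140, 160, 180, 170, 150, 130, 120, 115, 118, 122])
print(len(train), len(test), bounds)
```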
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Evaluation of model prediction results (weekdays)</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>RMSE</ns0:cell><ns0:cell>MMSE</ns0:cell><ns0:cell>SE</ns0:cell></ns0:row><ns0:row><ns0:cell>IGA-LSTM</ns0:cell><ns0:cell>0.004466</ns0:cell><ns0:cell>0.00459</ns0:cell><ns0:cell>3.38076</ns0:cell></ns0:row><ns0:row><ns0:cell>LSTM</ns0:cell><ns0:cell>0.004538</ns0:cell><ns0:cell>0.1262</ns0:cell><ns0:cell>4.95769</ns0:cell></ns0:row><ns0:row><ns0:cell>GA-BP</ns0:cell><ns0:cell>0.00458</ns0:cell><ns0:cell>0.00773</ns0:cell><ns0:cell>4.13900</ns0:cell></ns0:row><ns0:row><ns0:cell>PSO-BP</ns0:cell><ns0:cell>0.00512</ns0:cell><ns0:cell>0.01407</ns0:cell><ns0:cell>3.94184</ns0:cell></ns0:row></ns0:table><ns0:note>PeerJ Comput. Sci. reviewing PDF | (CS-2021:09:66130:1:2:NEW 20 Apr 2022)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 (on next page)</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Evaluation of model prediction results (weekend). It can be seen that in the weekend traffic flow data prediction, the Root Mean Square Error of the IGA-LSTM model is the smallest, followed by the PSO-BP model, then the LSTM model and the GA-BP model. This result is different from the traffic flow prediction results on weekdays. The main reason is that the traffic flow peak data on weekdays have strong repeatability, while the repeatability of the weekend data is not obvious.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Evaluation of model prediction results (weekend)</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>RMSE</ns0:cell><ns0:cell>MMSE</ns0:cell><ns0:cell>SE</ns0:cell></ns0:row><ns0:row><ns0:cell>IGA-LSTM</ns0:cell><ns0:cell>0.004675</ns0:cell><ns0:cell>0.00669</ns0:cell><ns0:cell>4.98384</ns0:cell></ns0:row><ns0:row><ns0:cell>LSTM</ns0:cell><ns0:cell>0.004988</ns0:cell><ns0:cell>0.00567</ns0:cell><ns0:cell>4.93000</ns0:cell></ns0:row><ns0:row><ns0:cell>GA-BP</ns0:cell><ns0:cell>0.005424</ns0:cell><ns0:cell>0.0109</ns0:cell><ns0:cell>6.33076</ns0:cell></ns0:row><ns0:row><ns0:cell>PSO-BP</ns0:cell><ns0:cell>0.00493</ns0:cell><ns0:cell>0.0133</ns0:cell><ns0:cell>5.04615</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Rebuttal letter
Dear Editor,
Thank you for your suggested changes and the reviewers' comments, which have really helped us improve this paper. In response to the review comments, we reply to the main revisions as follows:
Reviewer 1 (Anonymous)
Basic reporting
In this paper, author presented an improved genetic algorithm (OGA) to optimize long-term and short-term memory neural network (LSTM). However, there are some limitations that must be addressed as follows.
1. Some sentences in abstract are very lengthy (for example see sentence 1). These sentences should be changed to make the abstract more attractive for readers.
The Abstract has been updated, and longer sentences have been revised into shorter sentences to make it easier for readers to understand. The specific changes are shown in the revised manuscript from line 12 to line 23.
2. In Introduction section, it is difficult to understand the novelty of the presented research work. This section should be modified carefully. In addition, the main contribution should be presented in the form of bullets.
The introduction has been rearranged and its English expression improved. It is now divided into two parts, namely Introduction and Related Works; please see lines 24 to 107. At the same time, a summary of the main literature reviewed in this paper has been added, the main research content and the problems to be solved are introduced, and five research objectives are listed so that readers can understand the main contributions of this paper. Please see the additional content in line 51 to line 67 in the revised manuscript.
3. Different existing application of traffic events or traffic flow should be discussed. In addition, the most recent work about Deep learning-based traffic events or traffic flow should be discussed as follows (‘Traffic accident detection and condition analysis based on social networking data’, ‘Fuzzy Ontology and LSTM-Based Text Mining: A Transportation Network Monitoring System for Assisting Travel’, and ‘Transportation sentiment analysis using word embedding and ontology-based topic modeling’, and ‘Fuzzy ontology-based sentiment analysis of transportation and city feature reviews for safe traveling’.).
We added an analysis of relevant literature in Sec. 2, Related Works. In particular, we added a discussion of the traffic flow and traffic incident studies in the above literature of Farman Ali and added the relevant works to the references. In the following paragraphs, we summarized the current status and problems of applying deep learning algorithms to traffic flow and led into the main content of this study. Please see the changes in line 68 to line 107 in the revised manuscript.
4. Equations should be discussed deeply.
All the equations and formulas in the text have been re-interpreted and comprehensively explained, which will help readers understand their meaning. The missing explanatory information was supplemented. Please see the changes in lines 114, 117, 121, and 124 in the revised manuscript.
5. LSTM is not properly discussed in Section 4. What about Bi-LSTM?
We added references [18] [19] [20] [21] [22]. In these cited documents, the current application and existing problems of BiLSTM are discussed in Sec.2 Related works. For details, please refer to line 91 to line 107 in the revised manuscript.
6. Captions of the Figures not self-explanatory. The caption of figures should be self-explanatory, and clearly explaining the figure. Extend the description of the mentioned figures to make them self-explanatory.
We added Table 3 to explain all the abbreviated symbols, which helps readers understand the article. For the figures, we added detailed explanation and analysis. For details, please refer to line 340 to line 352 in the revised manuscript.
7. The whole manuscript should be thoroughly revised in order to improve its English.
After consulting experts and teachers in the field, we thoroughly revised the English expression of the entire manuscript.
8. More details should be included in future work.
In the Sec.8. Conclusions and Recommendation on Future Works, we have added a paragraph about future work plans. For details, please refer to line 368 to line 373 in the revised manuscript.
Reviewer 2 (Anonymous)
Basic reporting
The work is technically sound and has potential merit for publication. However, major revisions are needed to make it worth publishable.
Experimental design
The experimental section, in particular, the setup and parameters being used, needs further explanation.
Validity of the findings
The experiments seem inline, valid, and consistent with previous findings. However, the evaluation section is written well. However, it is not clear how the simulations were performed? Which optimization tool was used? Furthermore, the evaluation metrics should be briefly described.
Additional comments
I have the following comments and suggestions to further improve the quality of the work:
1. The abstract is not concise enough to sketch the entire theme, in particular, the results of the manuscript. Also, there were some serious grammar issues that should be corrected.
The Abstract has been updated, and longer sentences have been revised into shorter sentences to make it easier for readers to understand. The specific changes are shown in the revised manuscript from line 12 to line 23.
2. Furthermore, the introduction section needs considerable effort (concise and brief). The problem being investigated should be described clearly.
The introduction, e.g., should lead the way throughout the paper. In addition, the benefits coming from this paper should be made clearer in the introduction and throughout the paper.
- The entire introduction section is written in two paragraphs, while the first paragraph has been ended with a semi-colon (full stop needed)
- Furthermore, briefly describe the major contributions in bullet form, just before the organization paragraph.
- The final paragraph of the introduction section should be the organization flow.
The introduction has been rearranged and its English expression improved. It is now divided into two parts, namely Introduction and Related Works; please see lines 24 to 107. At the same time, a summary of the main literature reviewed in this paper has been added, the main research content and the problems to be solved are introduced, and five research objectives are listed so that readers can understand the main contributions of this paper. Please see the additional content in line 51 to line 67 in the revised manuscript.
3. I suggest adding a separate section that illustrates the related work. Various sections in the paper should be moved to this section. Moreover, a summary of the related work should be sketched into a table with respect to their characteristics. The authors should put their proposal into this table for easy comparison.
This will make it clearer to readers and they will be able to see what was missing in the literature; and how this is addressed in this paper.
A separate Section 2, Related Works, has been added, and all related works are listed in this section. In addition, a summary has been added that compares the characteristics of all related works and introduces the main work of this paper. Please see the details in line 68 to line 107 in the revised manuscript.
4. Moreover, It would be better to add all notations in a Table for easy understanding.
We added Tab. 3, a notation table that gives all abbreviated symbols and their meanings, to explain all the symbols used in the article. This should make the paper easier to follow.
Tab. 3 A notation table gives all abbreviated symbols and their meanings.
Abbreviation: Full name
ITS: Intelligent Transportation System
LSTM: Long-term and short-term memory neural network
IGA: Improved genetic algorithm
IGA-LSTM: Long-term and short-term memory neural network optimized by the improved genetic algorithm
BP: Back propagation neural network
GA-BP: Back propagation neural network optimized by the genetic algorithm
PSO-BP: Back propagation neural network optimized by the particle swarm optimization algorithm
SVM: Support vector machine
DBN: Deep belief network
CNN: Convolutional neural network
RNN: Recurrent neural network
GRU: Gated recurrent unit neural network
SAE: Stacked autoencoder neural network
CNN-LSTM: Convolutional neural network combined with the long-term and short-term memory neural network
BiLSTM: Bi-directional long short-term memory neural network
RMSE: Root mean square error
MMSE: Maximum square error
SE: Standard error
5. The organization of the paper should be improved. For example, Sec. 5 has the proposed technique as well as results. I suggest putting them into two separate sections.
Sec. 5 of the old manuscript has been divided into two separate sections: Sec. 6, IGA-LSTM Model, and Sec. 7, Simulation Results and Analysis.
6. Does the results shown in various tables and figures refer to multiple runs (average)? In the latter case, I will suggest adding standard deviation bars. The reason behind this is to ensure that whether the results overlap with the closest rivals or not.
The results of the methods used in this paper are average values obtained over multiple runs. The standard deviation (SD) has been added, as shown in Figure 11 and Figure 12, so the advantages of the proposed algorithm over the other algorithms can be clearly seen.
7. How about the time and space complexity of the proposed algorithms?
We added a paragraph to describe the complexity of the algorithm in this paper in Sec.8 Conclusions and Recommendation on Future Works. Please see the additional content in line 349 to line 352 in the revised manuscript.
8. The evaluation section is written well. However, it is not clear how the simulations were performed? Which optimization tool was used? The experimental section, in particular, the setup and parameters being used, needs further explanation. Furthermore, the evaluation metrics should be briefly described.
In Section 6.3 Experiment platform, an introduction to the programming language, simulation tools, simulation environment, data sources, and optimization tools used in this research has been added. Please refer to line 243 to line 247 in the revised manuscript.
9. I suggest adding pseudo-code for the proposed algorithm. Various procedures in the algorithm needs explanation. I suggest step-by-step details of the algorithm.
In Sec.3 Description of Traffic Flow Time Series Model, the description of the algorithm process is added. Please refer to line 124 to line 147 in the revised manuscript.
10.English can be improved. Proofreading should ensure the appropriate use of grammar, tenses, and punctuations. Longer sentences should be converted into smaller ones. Many words are missing their articles (the, a, an). Many punctuation characters are missing, and some are redundant.
After consulting experts and teachers in the field, we thoroughly revised the English expression of the entire manuscript. We apologize for the earlier issues; there should now be no more grammatical errors in the full text.
That’s all the revisions of the manuscript. Thanks a lot!
Kind regards.
Junxi Zhang.
" | Here is a paper. Please give your review comments after reading it. |
719 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Gender-Based Violence (GBV) has been plaguing our society for long back. The severity of GBV has spurred research around understanding the causes and factors leading to GBV.</ns0:p><ns0:p>Understanding factors and causes leading to GBV is helpful in planning and executing efficient policies to curb GBV. Past researches have claimed a country's culture to be one of the driving reasons behind GBV. The culture of a country consists of cultural norms, societal rules, gender-based stereotypes, and social taboos which provoke GBV. These claims are supported by theoretical or small-scale survey-based research that suffers from under-representation and biases. With the advent of social media and, more importantly, location-tagged social media, huge ethnographic data are available, creating a platform for many sociological research. In this paper, we also utilize huge social media data to verify the claim of confluence between GBV and the culture of a country. We first curate GBV content from different countries by collecting a large amount of data from Twitter. In order to explore the relationship between a country's culture and GBV content, we perform correlation analyses between a country's culture and its GBV content. The correlation results are further re-validated using graph-based methods. Through the findings of this research, we observe that countries with similar cultures also show similarity in GBV content, thus reconfirming the relationship between GBV and culture.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Gender-Based Violence (GBV) is one of the most heinous and age-old violations of human rights 1 . GBV is evident across all parts of the globe 2 , and it has been plaguing our society for a long back. The condition is so severe that one in three women is reported to have faced GBV 3 . With alarming instances of GBV around the world, social and governmental organisations are taking rigorous preventive measures. The quest to deliver effective preventive measures has triggered research to understand the causes and factors of GBV to provide effective preventive measures. Research in this field have found that cultural norms which comprise of societal stigma, gender-based rules, and societal prejudices are major factors that contribute to <ns0:ref type='bibr'>GBV Jewkes et al. (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b18'>Elischberger et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b59'>Raj and Silverman (2002)</ns0:ref>; <ns0:ref type='bibr' target='#b33'>Jewkes et al. (2002)</ns0:ref>; <ns0:ref type='bibr' target='#b6'>Bishwajit et al. (2016)</ns0:ref>. GBV is pervasive across all social, economic, and national strata <ns0:ref type='bibr' target='#b13'>Dartnall and Jewkes (2013)</ns0:ref>, but the type of GBV, the intensity of GBV, people's reactions, and opinions for any GBV event is not the same across the globe. For example, acid attacks are a form of revenge in developing countries arising because of refusal of a marriage proposal or a love proposal, or land disputes <ns0:ref type='bibr' target='#b5'>Bahl (2003)</ns0:ref>. However, in South America, the same acid attack results from poor relationships and domestic intolerance toward women <ns0:ref type='bibr' target='#b25'>Guerrero (2013)</ns0:ref>. The context of GBV changes with the country, and this change is known to be an outcome of persisting culture in a country <ns0:ref type='bibr' target='#b1'>Abrahams et al. (2014)</ns0:ref>; <ns0:ref type='bibr' target='#b21'>Fulu and Miedema (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b3'>Alesina et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b53'>Perrin et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b69'>Stubbs-Richardson et al. (2018)</ns0:ref>. WHO 1 https://www.who.int/news-room/fact-sheets/detail/violence-against-women 2 https://www.undp.org/content/undp/en/home/blog/2018/violence-against-women-cause-consequence-inequality.html 3 https://www.who.int/news/item/09-03-2021-devastatingly-pervasive-1-in-3-women-globally-experience-violence (World Health Organization) has studied cultural norms of many countries leading to various forms of GBV 4 . The global organization World Bank also pronounced to work on such cultural and social norms to curb GBV 5 . However, these researches claiming cultural norms as a driving factor behind GBV are based on cognitive studies which require significant intervention from social and cultural experts. The claims presented in these works are based upon long-term manual discerning of GBV events occurring in countries of different cultures. These researches are dependent upon survey/questionnaire-based data which can be collected only in a limited amount and can also suffer from several biases. Thus, past research lacks a large-scale, data-driven empirical research to verify the confluences between culture and GBV.</ns0:p><ns0:p>In this paper, we take a step to answer the research question 'Is gender-based violence a confluence of culture?' 
by experimenting with large-scale social network data. The use of social network data for research around GBV is a non-conventional way to dive into the finer details of GBV. Our research analyses GBV from the lens of culture. This research is useful for social workers, policy-makers, governments, and other organizations working for the welfare of women and society <ns0:ref type='bibr' target='#b38'>Kim (2021)</ns0:ref>. Additionally, the findings of this research can help in planning more efficient and targeted GBV policies and awareness campaigns. Social network data has already become a substitute for survey data for numerous applications.</ns0:p><ns0:p>Recently, social network data has also gained much utility for research related to <ns0:ref type='bibr'>GBV Hansson et al. (2021)</ns0:ref>; <ns0:ref type='bibr' target='#b42'>Liu et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b12'>Chowdhury et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b27'>Hassan et al. (2020)</ns0:ref>. Online content contains a rich spectrum of information pertaining to user opinions/reactions, ongoing news/events <ns0:ref type='bibr' target='#b8'>Blake et al. (2021)</ns0:ref>, and many more <ns0:ref type='bibr' target='#b48'>Nikolov et al. (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b50'>Pal et al. (2018)</ns0:ref>. Thus, online content is not only a mere content but a real-time proxy for user behaviour. For this research, we consider online content related to GBV from different countries as a representative of user reactions and perspectives towards GBV. We design experiments to check for the content similarity between countries with similar cultures. Towards this goal, we perform the correlation analysis between content distance and cultural distance between countries.</ns0:p><ns0:p>Further, to validate results from the correlation analysis, we also performed graph analyses. In graph analyses, we create graphs with countries as nodes and different types of distances (content distance and cultural distance) between countries is used for building edges. These graphs are compared using various graph comparison metrics.</ns0:p><ns0:p>On experimentation with Twitter content from different countries, we find a statistically significant positive correlation between GBV content distance and cultural distance. We also observed a higher similarity between the GBV content graph and the culture graph. Thus, through the findings of this research, we observe that the countries which are similar in culture also show higher similarity in GBV content. This observation is consistent with correlation analyses and graph analyses. From this observation, we can conclude that there are traces of culture in GBV content which justifies the claim of confluences of culture on GBV. The contributions of the current research can be summarized as follows:</ns0:p><ns0:p>• In this research, we explore evidence of confluence between GBV and the culture by means of an empirical study conducted over a large dataset created naturally over a long period of time on social media.</ns0:p><ns0:p>• The results obtained from this research justify the hypothesis that GBV is a confluence of culture. This hypothesis has not been tested in past literature using uncensored and unbiased social media data.</ns0:p><ns0:p>• All the experiments conducted in this research are extended to different categories of GBV and generic online content. Further, all the six dimensions of culture are also investigated. 
Thus, we provide a holistic analysis.</ns0:p><ns0:p>• The findings in this research are supported by correlation analyses as well as graph-based analyses.</ns0:p><ns0:p>Thus, making our claims more robust.</ns0:p><ns0:p>The rest of the paper is organized as follows. Section 2 details relevant past literature related to this research. Section 3.1 describes the collected data. Section 3 elaborates on the methodology of our experiments, and section 4 shows all the results and analyses. Section 5 discusses the implications and limitations of the work. Finally, section 6 concludes our work with possible future works. </ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>RELATED WORKS</ns0:head><ns0:p>This research is based upon three broad areas of related works i. The relation between GBV and culture ii.</ns0:p><ns0:p>Social Media Content as a source of Data and iii. GBV through social media GBV and Culture GBV is a social ill evident across all the countries irrespective of their economy, language, and demography. However, with country, the type of GBV, its intensity, and the reaction of people vary <ns0:ref type='bibr' target='#b20'>Fakunmoju et al. (2017)</ns0:ref>. For example, in the USA, dating violence is more common than in Africa where there are comparatively lesser instances of dating violence <ns0:ref type='bibr' target='#b34'>Johnson et al. (2015)</ns0:ref>. On the other hand, in Africa, intimate partner violence is more prominent as compared to North America 6 . This implies that the same GBV is represented differently in a different country. This implies that GBV is a global evil but the context of GBV changes with the country. There have been many research to understand the causes and factors leading to <ns0:ref type='bibr'>GBV Jewkes et al. (2002</ns0:ref><ns0:ref type='bibr'>, 2017)</ns0:ref>; <ns0:ref type='bibr' target='#b44'>Marine and Lewis (2020)</ns0:ref>. These works have claimed that a country's culture can characterize the persisting GBV in the country. Every culture has norms, prejudices, and societal rules that design the behaviour of people towards GBV. For example, in Malawi, the concept of polygamy and dowry is evident in the culture, and these perpetuate GBV in Malawi <ns0:ref type='bibr' target='#b7'>Bisika (2008)</ns0:ref>. Similar research in many other countries like UK Aldridge (2021), Ethiopia <ns0:ref type='bibr' target='#b41'>Le Mat et al. (2019)</ns0:ref>, Cambodia <ns0:ref type='bibr' target='#b51'>Palmer and Williams (2017)</ns0:ref>, and many other countries <ns0:ref type='bibr' target='#b16'>Djamba and Kimuna (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b59'>Raj and Silverman (2002)</ns0:ref> have highlighted cultural norms which lead to one or other form of GBV.</ns0:p><ns0:p>Not only in research but global organisations like WHO 7 , World Bank 8 have also highlighted the cultural norms of many countries that influence GBV. The socio-cultural impact is so intense that people even justify instances of GBV as a form of the social norm which cannot be questioned <ns0:ref type='bibr' target='#b54'>Piedalue et al. (2020)</ns0:ref>.</ns0:p><ns0:p>However, these claims are supported by mere examples and small-scale interview-based data. Thus, the research community lags a data-driven research that justifies the claim with sufficient empirical results.</ns0:p><ns0:p>In this research, we do a large-scale analysis of social network content to find evidence of confluence between the culture of a country and GBV. Next, we present the role of social media content in bridging the gap of data for various research. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science <ns0:ref type='bibr' target='#b55'>Prakash and Majumdar (2021)</ns0:ref>. For this research, we also use Hofstede's dimensions which have been used in a huge number of research to measure culture.</ns0:p><ns0:p>GBV through the lens of social media Social media provides an uncensored and user-friendly medium for expressing views and opinions <ns0:ref type='bibr' target='#b56'>Puente et al. (2021)</ns0:ref>. 
With this, social media has become a platform for self-expression as well as for conducting online campaigns <ns0:ref type='bibr' target='#b45'>Martínez et al. (2021)</ns0:ref>. There have been many campaigns on social media related to GBV like the #metoo, #Notokay, #StateOfWomen, #HeForShe and many more <ns0:ref type='bibr' target='#b36'>Karuna et al. (2016)</ns0:ref>. These campaigns and freedom of expression on social media have generated huge data related to GBV. The recent campaign of #Metoo observed an unprecedented response from all around the globe, thus, generating huge data related to GBV. And the event was followed by a sudden upsurge in research related to GBV using the generated data.</ns0:p><ns0:p>Thus, the data availability of social media has helped in many recent works related to GBV, which have delivered a multitude of interesting findings <ns0:ref type='bibr' target='#b47'>Moitra et al. (2021)</ns0:ref>; Razi (2020); <ns0:ref type='bibr' target='#b52'>Pandey et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b37'>Khatua et al. (2018)</ns0:ref>. Moreover, location-tagged social media data also assist in several cross-cultural studies related to <ns0:ref type='bibr'>GBV Purohit et al. (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b67'>Starkey et al. (2019)</ns0:ref>. In this paper, we also use social media</ns0:p><ns0:p>Twitter data from different countries of the world as a source of data.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>METHODOLOGY</ns0:head><ns0:p>In this section, we first give details of the collected dataset and its processing. Further, in the section, we elaborate on the methodology used to understand the relation between GBV and culture. We process the country-level tweets from different categories for correlation analysis and graph analysis w.r.t culture of different countries. The flow of methodology is represented in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. Next, we explain the details of data collection and its processing to obtain country-wise GBV tweets.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Dataset and Processing</ns0:head><ns0:p>We use public streams of Twitter data collected using the Twitter Streaming API. We procured 1% of public tweets provided by the API for a period of two years and five months (1 Manuscript to be reviewed <ns0:ref type='bibr'>2018)</ns0:ref>. We remove all the duplicate tweets and retweets from the collected data as these do not add any new information <ns0:ref type='bibr' target='#b9'>Cheke et al. (2020)</ns0:ref>. From the collected tweets, we extract GBV related tweets using a keyword matching approach as described next.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>GBV Tweet Extraction UNFPA (United Nations Population Fund) domain experts have proposed three categories of GBV, namely sexual violence, physical violence, and harmful practices. They have also provided unique keywords related to each category of GBV, which have been used frequently in past literature for GBV related research <ns0:ref type='bibr' target='#b58'>Purohit et al. (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b19'>ElSherief et al. (2017)</ns0:ref>. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> shows a total of 81 keywords constituting 29, 25, and 27 keywords from sexual violence, physical violence, and harmful practices respectively. We use the same keywords to extract relevant tweets from all three categories.</ns0:p><ns0:p>The keyword set provided by UNFPA is very precise and can contain multi-words. Our methodology for extracting tweets for a particular keyword is based on the presence of the keyword in a tweet. If all the words of a multi-word keyword are present in a tweet regardless of order, we consider it a match.</ns0:p><ns0:p>For example, for the category sexual violence, sexual assault is a related keyword with two words. If a tweet contains both the words sexual and assault, we consider it a match. For the cases where a tweet matches more than one category, we consider the tweet in both categories of GBV. This approach has been used in previous works in order to deliver high-precision data <ns0:ref type='bibr' target='#b58'>Purohit et al. (2015)</ns0:ref>. From the keywords related to each category of GBV, we extract tweets and create a tweet dataset from three categories, namely the sexual violence dataset, physical violence dataset, and harmful practices dataset, with a total of 0.83 million, 0.53 million, and 0.66 million tweets respectively. Further, we combined all three category tweets to create a GBV tweet dataset containing more than 2 million tweets.</ns0:p></ns0:div>
<ns0:div><ns0:head>Generic Tweet Dataset</ns0:head><ns0:p>We create another dataset namely generic tweet dataset to provide a better context of comparison with other categories of the dataset. This dataset is used for drawing inferences from GBV categories dataset w.r.t a generic dataset. For creating this dataset, we borrow the methodology of <ns0:ref type='bibr' target='#b19'>ElSherief et al. (2017)</ns0:ref>. Our collected data is for a very long period, resulting in around 4 billion tweets.</ns0:p><ns0:p>We extract a random 1% sub-sample of total collected tweets as a generic tweet dataset. To eliminate duplicate content here as well, we remove tweets which are duplicates and retweets. Details of generic tweets data are given in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>. We also make the required data for this research and the associated codes publicly available 9 .</ns0:p></ns0:div>
<ns0:div><ns0:head>Country-level Location Tagging</ns0:head><ns0:p>There are many indicators of location in a tweet, such as geotags, time zone, and profile location.</ns0:p><ns0:p>Adopting the location indicators of <ns0:ref type='bibr' target='#b4'>Annamoradnejad et al. (2019)</ns0:ref> for tagging each tweet to a location, we use a 3-level hierarchy of location indicative according to their accuracy levels <ns0:ref type='bibr' target='#b40'>Kulshrestha et al. (2012)</ns0:ref>.</ns0:p><ns0:p>The first one is geotag, which gives the most accurate location information. If a geotag is available, then we use it for location tagging, and if it is not present, we look for the time zone data. Time zone is also an accurate way to tag country-level locations. A time zone data directly contains the user's time zone in the form of the corresponding country name. For the cases where even time zone information is not available, then we look for the next location information in the hierarchy, i.e. location field mentioned in the user profile. Geotags and time zone contain exact country names, which can be directly mapped to a country. User profile location is an unstructured text location field that requires further processing to get country information. For this, we use the approach used by <ns0:ref type='bibr' target='#b17'>Dredze et al. (2016)</ns0:ref> where city names present in the user profile location are mapped to corresponding country names based on the Geoname 10 world gazetteer. We borrow the list of required countries from <ns0:ref type='bibr' target='#b4'>Annamoradnejad et al. (2019)</ns0:ref> where authors have used a list of 22 countries, namely Arab countries, Argentina, Australia, Brazil, Canada, China, Colombia, France, Germany, India, Indonesia, Iran, Italy, Japan, Korea, Philippines, Russia, Spain, Thailand, Turkey, UK (United Kingdom), USA (United States of America). Arab Countries is a group of countries with a similar culture, so we merged tweets from all Arab Countries. There were very few tweets from the country Korea (170), so we discarded Korea from the list of considered countries and limited our research to the remaining 21 countries, each having more than 3000 tweets. We apply the same location tagging scheme to all the GBV tweets and generic tweets. The complete data statistics are shown in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref> for all the categories of tweets.</ns0:p><ns0:p>We present the evaluation methodology and evaluation results of GBV tweets extraction and location tagging in section 4. In this research, we want to explore the relationship between GBV online content and culture of a country. For this, we perform two analyses i. Correlation Analyses and ii. Graph Analyses. In correlation analysis, we correlate culture and its dimensions with different categories of online content in order to understand their relationship. In graph analyses, we create country graphs on the basis of parameters correlated in correlation analyses like content, and culture, which are compared using multiple graph comparison metrics in order to re-assure the observed relationships from correlation analyses. Next, we discuss the methodology used for these analyses.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Correlation Analysis</ns0:head><ns0:p>In order to find a relation between the culture of a country and GBV, we calculate cultural distance and content distance between each pair of countries as detailed next.</ns0:p><ns0:p>Cultural Distance: We quantify the cultural distance between two countries using cultural dimensions proposed by Geert Hofstede <ns0:ref type='bibr' target='#b30'>Hofstede et al. (2005)</ns0:ref>. Geert Hofstede administered a huge survey among people from different countries to measure the difference in the way they behave. He has quantified six dimensions of culture (power distance 11 , individualism 12 , masculinity 13 , uncertainty avoidance 14 , long-term orientation 15 , indulgence 16 ) for different countries in values ranging between 0 − 120. In order to measure cultural distance between two countries, we adopt the formulation of <ns0:ref type='bibr' target='#b4'>Annamoradnejad et al. (2019)</ns0:ref> where authors use the euclidean distance between two countries to measure the cultural distance. The cultural distance can be formulated as shown in equation 1, where |D| is the total number of dimensions, d i c 1 and d i c 2 are the values of dimension d i for countries c 1 , c 2 respectively.</ns0:p><ns0:formula xml:id='formula_0'>CulturalDistance(C 1 ,C 2 ) = |D| ∑ i=1 (d i c 1 − d i c 2 ) 2 (1)</ns0:formula><ns0:p>We also calculate the distance between countries on the basis of each dimension of culture proposed by Hofstede. For example, power distance is one of the dimensions of culture, and we need to calculate the distance between two countries according to power distance. For this also, we use euclidean distance, but since there is only one parameter, this becomes equivalent to |d c 1 − d c 2 |. For further analyses, we calculate the cultural distance for each pair of countries on the basis of culture and six dimensions of culture.</ns0:p><ns0:p>Content Distance: Online content related to a particular topic from a particular country captures country-level user comments and discussions on that topic <ns0:ref type='bibr' target='#b9'>Cheke et al. (2020)</ns0:ref>. In order to measure the difference between contents from two countries, we measure the content distance between two countries using Jaccard Similarity. First, all the tweets from each country are pre-processed to generate country-wise tweet tokens, details of which are given next.</ns0:p></ns0:div>
<ns0:div><ns0:head>Tweet Preprocessing</ns0:head><ns0:p>11 This is a measure of the level of acceptance of unequal power in society.</ns0:p><ns0:p>12 This is a measure of rights and concerns of each person rather than for a group or community. 13 This is a measure of the distribution of gender-based roles in society.</ns0:p><ns0:p>14 This is a measure of likeliness that people avoid uncertainty. 15 This parameter measures the characteristics of perseverance and futuristic mindset among people. 16 This measures the degree of fun and enjoyment a society allows.</ns0:p></ns0:div>
<ns0:div><ns0:head>6/19</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2022:02:70572:1:1:NEW 11 Jun 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>We adopt the pre-processing settings of <ns0:ref type='bibr' target='#b9'>Cheke et al. (2020)</ns0:ref> to generate tweet tokens from each country.</ns0:p><ns0:p>We first remove URLs, mentions, punctuation, extra spaces, stop words, and emoticons. Online acronyms and short forms are expanded using NetLingo 17 . For hashtags, we removed the symbol # and kept the remaining word. Spelling and typos are corrected using Textblob 18 . We also transliterated non-English words to English to reduce inconsistencies in language. Lastly, we tokenized each tweet using NLTK (Natural Language Toolkit). Extracted tokens from all the tweets of a country are merged to create country-wise tweet tokens. Next, for each pair of countries, we calculate content distance using the formula shown in equation 2, where C1, C2 are the set of all the tweet tokens of countries C1 and C2, respectively, and |C1∩C2| |C1∪C2| is the Jaccard Similarity 19 .</ns0:p><ns0:formula xml:id='formula_1'>ContentDistance(C1, C2) = 1 − |C1 ∩C2| |C1 ∪C2|<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>We have a total of 5 categories of online content i.e. sexual violence, physical violence, harmful practices, GBV, and generic content. We apply the same methodology to extract country-wise tweet tokens from each category of online content.</ns0:p><ns0:p>Correlation: A correlation helps in understanding the relationship between two variables. Pearson correlation and Spearman correlation are two popular metrics for correlation. To establish robustness in our findings, we use both Pearson correlation, and Spearman correlation for calculating the association between content distance and cultural distance. Pearson correlation captures the linear relationship between two variables and Spearman correlation captures the monotonic relationship between two variables. Both the correlation metrics give correlation values in the range of (−1 to +1). A positive correlation value indicates that content similarity is higher for countries having higher cultural similarity and a negative correlation indicates vice versa. For calculating the correlation, we calculate content distance and corresponding cultural distance for each country pair (a total of n C 2 pairs, if there are n countries). For exhaustive correlation analysis, we measure multiple correlations by keeping one correlation variable as different types of content (sexual, physical, harmful, GBV, and generic tweets) and another variable as six dimensions of culture.</ns0:p></ns0:div>
<ns0:div><ns0:head>P-value:</ns0:head><ns0:p>For measuring the fitness of a correlation, we calculate the p-values for each correlation using the python library SciPy 20 . A p-value represents the measure of occurrence of the correlation between two data samples by chance.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3'>Graph Analysis</ns0:head><ns0:p>Our objective is to compare GBV related content to a country's culture. To this end, we create country graphs where edge weights are decided on the basis of different distances in terms of GBV content and culture, as mentioned in section 3.2. For detailed analyses, we create multiple weighted graphs among countries with a different edge parameters. Finally, we compare created graphs using multiple graph distance metrics and graph clustering. GBV content Graph: This graph captures the relationship between countries according to GBV content distance. In GBV content graph G gbv = (C, E), the weights of the set of edges E are decided on the basis of the content distance score between two countries on the basis of GBV tweets. Here, GBV tweets are used for calculating content distance. We also create sexual violence graph, physical violence graph, and harmful practices graph where for assigning edge weights, we calculate content distance on sexual violence tweets, physical violence tweets, and harmful practices tweets, respectively. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Generic content Graph: This graph captures the relationship between countries and generic content.</ns0:p><ns0:p>In the generic content graph G rand = (C, E), the weights of the edges are assigned according to the content distance score between two countries on the basis of generic tweet data.</ns0:p><ns0:p>Cultural Graph: This graph captures the cultural relationship between countries. In the cultural graph G cult = (C, E), the weights of the edges are decided by the value of cultural distance calculated using equation 1.</ns0:p><ns0:p>Graph Pre-processing: For all the graphs G = (C, E), there is an edge between any pair of countries with a weight creating a complete graph. Further, all the created graphs have a different range of values of edge weight. For example, for GBV tweets graphs, edge weights will lie in the range (0,1), but for the culture graph, the values of weights can range from (0-120). To ensure consistency, we upscale edge weights in the range of (0,1) to a range of (0-120). Next, we pruned edges that are unimportant, i.e. whose weight is lesser than the median of all the edge weights. Thus, keeping only the important, i.e. higher edge weight edges in the graph. Before pre-processing, each graph is a complete graph with the same edges in all the graphs, but after pre-processing, each graph is a non-complete graph with only important edges resulting in different edges in each graph. The same is applied to all the graphs, and the final pre-processed graph is a weighted, un-directed, and non-complete graph.</ns0:p><ns0:p>We also mention that all the distances (content/culture) used to decide edge weight in all the graphs follow the axioms of distance Kosub (2019).</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3.1'>Graph Comparison Metrics</ns0:head><ns0:p>For comparing two graphs, past literature has proposed a number of metrics depending upon the type of graphs <ns0:ref type='bibr' target='#b70'>Tantardini et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b46'>McCabe et al. (2020)</ns0:ref>. In this research, we have graphs with node correspondence, i.e. same nodes in every graph. Additionally, our graphs are un-directed and weighted.</ns0:p><ns0:p>For the purpose of graph comparison, we use multiple graph distance metrics to calculate the distance between two graphs. Graph distance shows how dissimilar the two graphs are. For calculating graph distance, we use the python library netrd 21 . Next, we describe metrics used in our research to calculate graph distance.</ns0:p></ns0:div>
<ns0:div><ns0:head>Distances:</ns0:head><ns0:p>• Quantum JSD: Quantum Jensen-Shannon Divergence De Domenico and Biamonte (2016) compares two weighted and undirected graphs by finding the distance between spectral entropy of density matrices.</ns0:p><ns0:p>• Degree Divergence: This method Hébert-Dufresne et al. ( <ns0:ref type='formula'>2016</ns0:ref>) compares the degree distribution of two graphs. This methodology is applicable to weighted as well as unweighted graphs but only undirected graphs.</ns0:p><ns0:p>• Jaccard Distance: Jaccard distance Oggier and Datta ( <ns0:ref type='formula'>2021</ns0:ref>) is applicable to only unweighted graphs, and its value depends on the number of common edges in the two compared graphs. For applying to our graphs, we coerced weighted graphs into unweighted ones by removing weights from all the graphs.</ns0:p><ns0:p>• Hamming Distance: Hamming distance is one of the popular techniques for measuring the distance between two unweighted graphs. This is a measure of element-wise disagreement between the two adjacency matrices of the graphs. We apply Hamming distance to our graphs by coercing weighted graphs to unweighted ones by simply removing the weights.</ns0:p><ns0:p>• HammingIpsenMikhailov: This method is an enhanced version of Hamming Distance which takes into account the disagreement between adjacency matrices and associated laplacian matrices. This is applicable to weighted and undirected graphs.</ns0:p><ns0:p>• Frobenius: This is an adjacency matrix level distance that calculates L2-norms of the adjacency matrices.</ns0:p><ns0:p>• NetLSD: A metric for measuring graph distance based on spectral node signature distributions for unweighted graphs. For this, we coerced our graphs to unweighted ones by removing the weights. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>RESULTS</ns0:head><ns0:p>In this section, we first provide validation results for our proposed methodology of GBV tweet filtering and location tagging. Then we present the results and insights of correlation analyses and graph analyses in order to understand the correspondence between GBV and culture.</ns0:p><ns0:p>GBV tweet extraction and error analysis GBV tweet extraction is accomplished by tagging tweets using a keyword matching process. Following the keyword match verification methodology of <ns0:ref type='bibr' target='#b9'>Cheke et al. (2020)</ns0:ref>, we employed three graduate annotators to manually tag the GBV category. Annotators were provided with a sample of tweets without any category information and were asked to manually tag each tweet to one or more categories of GBV (sexual violence, physical violence, harmful practices) with their own understanding and external online resources. Annotators were provided with a basic definition of GBV and its categories. For the purpose of validation, we created a balanced and shuffled sample of 6000 tweets with 2000 tweets from each category of GBV. For each tweet annotated by the three annotators, we select the majority category as the final category. Tweets that do not have any majority category are discarded. Considering the category tagged by annotators as the actual categories, we calculate the precision value of our keyword matching methodology for each category of GBV. The precision value for sexual violence is found to be 0.97, for physical violence 0.96, and for harmful practices to be 0.98.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_6'>3</ns0:ref> shows a few example tweets and tagged GBV categories. Examples 1 − 9 shows matching keywords and the tagged GBV category of the tweets from all three categories of GBV. Example 10 − 11 show tweets that contain keywords from more than a category of GBV. These tweets are kept in all the matching categories. In examples 12 − 13, keyword matching results in the wrong tagging of tweets because of contextual differences in tweets. As we can see in example tweet 12, the keywords woman and attacked belong to physical violence keywords, and hence the tweet is wrongly classified in the physical violence category. There are only a few such errors in GBV tweet category tagging arising because of changes in the context of tweets.</ns0:p></ns0:div>
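A small sketch of the keyword-matching, majority-vote, and precision computation described in this subsection; the keyword sets are placeholders (the complete lists are in Table 1) and the function names are ours, not the authors'.

```python
# Illustrative sketch of keyword-based GBV category tagging and of computing
# precision against annotator majority labels. Keyword sets are placeholders.
from collections import Counter

KEYWORDS = {
    "sexual violence": {"gang rape", "sex predator", "victim blame"},
    "physical violence": {"domestic violence", "woman attacked"},
    "harmful practices": {"child bride", "child abuse"},
}

def tag_categories(tweet):
    """Tag a tweet with every GBV category whose keywords it contains."""
    text = tweet.lower()
    return {cat for cat, kws in KEYWORDS.items() if any(kw in text for kw in kws)}

def majority_label(annotations):
    """Majority category among three annotators, or None when there is no majority."""
    label, votes = Counter(annotations).most_common(1)[0]
    return label if votes >= 2 else None

def precision(pairs, category):
    """Precision for one category over (predicted categories, gold label) pairs."""
    predicted = [(p, gold) for p, gold in pairs if category in p]
    correct = sum(1 for _, gold in predicted if gold == category)
    return correct / len(predicted) if predicted else 0.0

# Toy example:
pairs = [(tag_categories("A 12 year old child bride taking photos ..."), "harmful practices")]
print(precision(pairs, "harmful practices"))  # 1.0 for this toy example
```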
<ns0:div><ns0:head>Evaluation of Location Tagging</ns0:head><ns0:p>We have a three-level hierarchy(time zone, geotags, profile location) of location tagging. Location tagging from time zone and geotags is completely accurate. For evaluating location tagging from profile location a random sample of 10, 000 tweets are given to three independent graduate annotators who were asked to manually tag a country-level location from their own understanding using online gazetteers and searches. The majority country name is selected as the final tagged country name. The profile location field with no majority among annotators is discarded. Considering the country tagged by manual annotation as the actual country, we obtained a precision score of 0.94. </ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Correlation Analysis</ns0:head><ns0:p>Table <ns0:ref type='table'>4</ns0:ref> and Table <ns0:ref type='table'>5</ns0:ref> show the results of pearson correlation and spearman correlation of different types of online content with culture and its parameters. From the tables, we can draw the following observations.</ns0:p><ns0:p>1. GBV content and all categories of GBV content show a positive correlation with culture and all its parameters by both the correlation metrics. For example, the correlation between GBV content and culture is 0.55, with a significant p-value of 0.001. Similarly, the correlation between culture and other categories of GBV i.e. sexual violence, physical violence, and harmful practices content is 0.51, 0.53, and 0.53, respectively with significant p-values.</ns0:p><ns0:p>2. The three parameters of culture uncertainty avoidance, indulgence, and individualism show comparatively higher correlation values as compared to other parameters of culture power distance, masculinity, and long-term orientation. This observation is consistent with all the categories of GBV content and with both the correlation metrics. The pearson correlation of uncertainty avoidance, indulgence, and individualism with GBV tweet content is 0.36, 0.34, and 0.45 (Table <ns0:ref type='table'>4</ns0:ref>). On the other hand, the pearson correlation of power distance, masculinity, and long-term orientation with GBV tweets content is 0.27, 0.16, and 0.17 respectively.</ns0:p><ns0:p>3. We also observe that the same content analyses performed for GBV content did not show a similar strong and consistent correlation with generic content. The pearson correlation between generic content and culture is 0.33, which is much lower than the pearson correlation between GBV content and culture, i.e. 0.55. Additionally, generic content fails to show any correlation with culture and its parameters from spearman correlation.</ns0:p><ns0:p>Observation 1 indicates that GBV content has an influence of culture and all six parameters of culture.</ns0:p><ns0:p>The observation is consistent with all the categories of GBV content, i.e. sexual violence, physical violence, and harmful practices. Additionally, we also show the scatter plots of the culture and different GBV content types in Figure <ns0:ref type='figure'>.</ns0:ref> 2 to reconfirm the findings. These all results implies that countries with similar culture also show higher similarity in GBV content. GBV content is composed of discussions, news, comments generated by users on the topic related to GBV. The reason for the similarity in GBV content for similar culture countries is the similarity in their discussions, news, and comments. For further understanding, we manually discern the content of similar culture countries. According to Hofstede's is more similar to Arab countries as compared to the USA. Table <ns0:ref type='table' target='#tab_8'>6</ns0:ref> shows the scores of cultural distance between different countries using Hofstede's dimensions. In order to show the content differences of different culture countries, we plot the word clouds of common frequent words of USA-Canada and Arab countries-Iran in Figures <ns0:ref type='figure' target='#fig_4'>3(a</ns0:ref>) and 3(b), respectively. For the countries USA and Canada, we find keywords like gfriend, whitesupremacist, objectifying as common frequent keywords. 
For the countries Arab countries and Iran, we find keywords like veiled, hijab, attacked, predator as common frequent keywords.</ns0:p><ns0:p>The highlights in observation 2 suggest that a few parameters of culture also play an important role in shaping the content related to GBV. Interestingly, Hofstede's parameters uncertainty avoidance, indulgence, and individualism are found to show more impact on GBV related content than other parameters like power distance, masculinity, and long term orientation. For further reconfirming the connection between these parameters and different types of GBV content, we also show the scatter plots of these parameters and different content in </ns0:p></ns0:div>
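For illustration, the common frequent keywords behind the word clouds of Figure 3 could be extracted with a short sketch like the following; the token lists, the top-k cutoff, and the optional wordcloud rendering are illustrative assumptions rather than the authors' code.

```python
# Sketch: common frequent keywords shared by two countries' GBV tweets,
# optionally rendered as a word cloud (pip install wordcloud).
from collections import Counter

def common_frequent_words(tokens_a, tokens_b, top_k=200):
    top_a = {w for w, _ in Counter(tokens_a).most_common(top_k)}
    top_b = {w for w, _ in Counter(tokens_b).most_common(top_k)}
    freq_a, freq_b = Counter(tokens_a), Counter(tokens_b)
    return {w: freq_a[w] + freq_b[w] for w in top_a & top_b}

usa_tokens = ["objectifying", "harassment", "whitesupremacist", "objectifying"]
canada_tokens = ["objectifying", "gfriend", "harassment"]
freqs = common_frequent_words(usa_tokens, canada_tokens)
print(freqs)  # e.g. {'objectifying': 3, 'harassment': 2}

# Optional rendering:
# from wordcloud import WordCloud
# WordCloud(width=600, height=300).generate_from_frequencies(freqs).to_file("common.png")
```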
<ns0:div><ns0:head n='4.2'>Graph Analysis</ns0:head><ns0:p>We first summarize the statistical characteristics of the created graphs in our research in Table <ns0:ref type='table' target='#tab_9'>7</ns0:ref>. All the graphs show similar basic properties because there is node correspondence in all the graphs. The edges Table <ns0:ref type='table' target='#tab_11'>8</ns0:ref> shows the distance between different created graphs from various metrics. For each distance metric, if the distance value between a pair of graphs (G1, G2) is smaller than the distance between another pair of graphs (G3, G2), it means that the graph G2 is more similar to G1 as compared to G3. From the table, we observe that for all the metrics, the distance between the generic tweet graph and the culture graph is consistently higher than the distance between other graphs (GBV-culture, sexual violence-culture, physical violence-culture, and harmful practices-culture). For example, the metric QuantumJSD, the distance between generic graph and culture graph is 0.27 while for GBV graph and culture graph is 0.21.</ns0:p><ns0:p>For the same metric, the distance between the sexual graph-culture graph, the physical graph-culture graph, and the harmful graph-culture graph is 0.21, 0.20, and 0.22, respectively. This shows that the graph created using content distance by GBV and its categories are more similar among themselves and to the graph created using cultural distance. The graph created using the generic content is consistently more distant from the cultural graph as compared to other graphs. This observation re-validates the observation from correlation analyses showing a higher degree of similarity between GBV content for similar culture the same cluster have a larger overlap with the GBV graph rather than the generic graph. For example, the countries USA, UK, and Australia belong to the same cluster in the culture graph, and the same is also true for GBV graph. However, for the generic graph, all three countries belong to different clusters.</ns0:p><ns0:p>This observation again shows a higher similarity between the culture graph and the GBV graph than the generic graph and the culture graph. Thus, we observe that the created clusters are also congruous to all other findings stating a higher level of relation between culture and GBV content, which is not the same for generic content.</ns0:p></ns0:div>
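The node-clustering step discussed above can be sketched with the Louvain community detection shipped in recent networkx releases (the python-louvain package's best_partition is an alternative); the toy graph and its weights below are illustrative only.

```python
# Sketch: Louvain community detection on a weighted country graph.
# Requires networkx >= 2.8, which provides louvain_communities.
import networkx as nx
from networkx.algorithms.community import louvain_communities

g = nx.Graph()
g.add_weighted_edges_from([
    ("USA", "UK", 90), ("USA", "Australia", 95), ("UK", "Australia", 88),
    ("Iran", "Arab Countries", 85), ("Iran", "Thailand", 70),
    ("USA", "Iran", 5),   # weak tie across the two groups (illustrative weights)
])

clusters = louvain_communities(g, weight="weight", seed=42)
for i, nodes in enumerate(clusters):
    print(f"cluster {i}: {sorted(nodes)}")
```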
<ns0:div><ns0:head n='5'>DISCUSSIONS</ns0:head><ns0:p>Implications: In this research, we use social media data to verify connections between the culture of a country and GBV. Our findings suggest that real-world hypotheses are also evident in social media data, and their verification is no longer dependent on survey-based data. We believe that this research not only validates the hypothesis of confluence between culture and GBV but also points to the possibility of verification of other hypotheses related to GBV.</ns0:p><ns0:p>A finer analysis can also reveal culture-specific traits of GBV, which can further enhance understanding of GBV across cultures. We argue that these analyses are vital for designing culture-aware policies and strategies to curb GBV. There is a huge possibility of discovery of many more cultural norms like those pronounced by the World Bank 23 , which can promote GBV. Thus, this research paves a path for understanding culture-specific GBV using online social network data.</ns0:p><ns0:p>Limitations and Critiques: In this section, we show a few possible limitations and how our research overcomes those. In this paper, we have performed cross-cultural research using online content from Twitter. Here, we limit our research to English tweets for two reasons. First, the GBV keywords are in English, resulting in a collection of English GBV content. Second, English has become the new lingua franca on Twitter <ns0:ref type='bibr' target='#b11'>Choudhary et al. (2018)</ns0:ref>, which delivers sufficient tweets for this research.</ns0:p><ns0:p>GBV data collection is based upon GBV keywords provided by UNFPA, which is a global organization. The provided keywords can be incomplete and non-exhaustive. There might be scope for increasing these keywords; however, GBV is a sensitive issue, and extending keywords without the intervention of social experts may introduce errors. So, we limit this research to globally available keywords only.</ns0:p><ns0:p>Online content is much inflected by a flux of ongoing news and events, which can lead to differences in data patterns in certain time periods. However, our research is based upon data from a long temporal span which diffuses such temporal inflections <ns0:ref type='bibr' target='#b24'>Grieve et al. (2018)</ns0:ref>.</ns0:p><ns0:p>There can be many more ways to capture the distance between countries in terms of GBV, but we have limited this to content distance using two common metrics (cosine similarity and jaccard similarity). The content distance used in this research captures the basic difference between tweet tokens of the two countries. However, the same methodology can be easily adapted to other twitter features and metrics.</ns0:p></ns0:div>
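For completeness, a minimal sketch of the two content-distance metrics mentioned in the limitations above (cosine and Jaccard similarity over tweet tokens, converted to distances); tokenisation is assumed to happen upstream and the example tokens are illustrative.

```python
# Sketch: token-level content distance between two countries' tweet corpora.
# Distance = 1 - similarity.
from collections import Counter
import math

def cosine_distance(tokens_a, tokens_b):
    a, b = Counter(tokens_a), Counter(tokens_b)
    dot = sum(a[t] * b[t] for t in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return 1.0 - (dot / norm if norm else 0.0)

def jaccard_distance(tokens_a, tokens_b):
    a, b = set(tokens_a), set(tokens_b)
    union = a | b
    return 1.0 - (len(a & b) / len(union) if union else 0.0)

print(cosine_distance(["hijab", "veiled", "attacked"], ["hijab", "predator", "attacked"]))
print(jaccard_distance(["hijab", "veiled", "attacked"], ["hijab", "predator", "attacked"]))
```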
<ns0:div><ns0:head n='6'>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>The article investigates evidence of the confluence between culture and GBV with the help of social media content. Social media content is explored by means of correlation analyses and graph-based analyses to find the traces of culture in GBV related social media content. In this research, we find a noteworthy influence of culture on GBV related content which is not apparent in generic content. The observation is consistent with different analyses and metrics. This research not only claims higher confluence between GBV and culture but also paves a path for effective policy-making and research related to GBV by means of social media content. Social media content captures behavioral aspects related to GBV, which can be used for other investigations related to GBV. As a future work of this research, we would like to understand the role of other factors like economy, unemployment,and crises in GBV. Moreover, this research is a global analysis of different countries of the world. We would also like to extend the research to a finer scale of states or counties within a country.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. An overview of our proposed methodology.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Country</ns0:head><ns0:label /><ns0:figDesc>Graph: A country graph created in this research is an un-directed, weighted graph G = (C, E), where C denotes the nodes of the graph, which are countries, and E denotes the set of edges between countries. For all the graphs in this research, the set of nodes C and the set of edges E are the same. The only difference is in the weights of the edges. Next, we describe the creation of edges in the required graphs.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Figure. 4, Figure. 5 and Figure. 6. From the scatter plots, we can evidently observe that Hofstede's parameters uncertainty avoidance, indulgence, and individualism consistently show a close association (points are closer to the fitted line) with all types of GBV content. The same is not true for generic content. This shows a connection between these parameters of culture and different GBV categories content. The reasons for more influence of these parameters require further exploration which is outside the scope of this work. However, this observation again recommends a role of culture on GBV.Observation 3 further strengthens the findings of observations 1 and 2. The pattern of correlation showing a connection between culture and different categories of GBV is not the same for generic content.The lower and inconsistent correlation values of the generic content as compared to GBV content reinforce a stronger relationship between GBV content and the culture of a country. Further, the scatter plots shown inFigure. 2, Figure. 4, Figure. 5 and Figure. 6 also show that the points in all the plots of generic content are more scattered from the fitted line as compared to points in GBV and its category content plots.For example, the points in the GBV-culture plot(Figure. 2(a)) are closer to the fitted line, while in the generic-culture plot (Figure.2(e)), the points are farther to the fitted lines showing a comparatively lower correlation. Other categories of content (sexual, physical, and harmful) also show a stronger correlation with culture as compared to generic content.Generic content is composed of content from different topics, a few of which can be highly correlated to culture<ns0:ref type='bibr' target='#b9'>Cheke et al. (2020)</ns0:ref>, such as food, and a few can hardly show any correlation<ns0:ref type='bibr' target='#b4'>Annamoradnejad et al. (2019)</ns0:ref>, such as technology. These characteristics of generic content can be the most probable reason for showing weak correlations. Here we show results of generic content just to provide a broader background for understanding. Next, we describe the results from graph analyses in order to validate findings from correlation analyses.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Scatter plots between different categories of content distance (GBV, sexual, physical, harmful practices, generic) and cultural distance.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Word Clouds showing common frequent keywords of culturally similar countries (USA-Canada and Arab Countries-Iran)</ns0:figDesc><ns0:graphic coords='13,155.98,371.67,180.09,90.19' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .Figure 5 .Figure 6 .</ns0:head><ns0:label>456</ns0:label><ns0:figDesc>Figure 4. Scatter plots between different categories of content distance (GBV, sexual, physical, harmful practices, generic) and cultural distance (uncertainty avoidance).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Clustering nodes (countries) in the culture graph, GBV graph, and generic graph.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Set of Keywords to identify tweets from sexual violence, physical violence, and harmful practices categories of GBV. /girl/female attacked, boyfriend/boy-friend assault, stalking woman/women/girl/female, groping woman/women/girl/female, sexual/rape victim, gang rape, victim blame, sex predator, woman/women/girl/female forced</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Keywords</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Social Media Content Social media has now become the new language of people, and this has generated a massive amount of data for various research. Social media data has removed the bottleneck</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>of data requirements in numerous applications such as urban computing Silva et al. (2019), cultural</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>computing Wang et al. (2017), personality computing Samani et al. (2018) and many more. Social</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>media content also plays a huge role in understanding people's views and sentiments during the Covid-19</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>pandemic Malagoli et al. (2021). Social media content has already substituted the tedious, time consuming,</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>biased and under-represented survey-based data and has unlocked possibilities for research in many other</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>directions. The utility of social media content increases with the availability of location-tagged data.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>The location-tagged online content has been used in numerous ethnographic research Abdullah et al.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>(2015), cultural research Cheke et al. (2020), and sociological research Stewart et al. (2017) in recent</ns0:cell></ns0:row><ns0:row><ns0:cell>6 https://apps.who.int/iris/bitstream/handle/10665/85239/9789241564625 eng.pdf</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>7 https://www.who.int/violence injury prevention/violence/norms.pdf</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>8 https://thedocs.worldbank.org/en/doc/656271571686555789-0090022019/original/ShiftingCulturalNormstoAddressGBV.pdf</ns0:cell></ns0:row><ns0:row><ns0:cell>PeerJ Comput. Sci. reviewing PDF | (CS-2022:02:70572:1:1:NEW 11 Jun 2022)</ns0:cell><ns0:cell>3/19</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Data Statistics</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='2'>Before Location Tagging After Location Tagging</ns0:cell></ns0:row><ns0:row><ns0:cell>Sexual Violence</ns0:cell><ns0:cell>836,497</ns0:cell><ns0:cell>681,537</ns0:cell></ns0:row><ns0:row><ns0:cell>Physical Violence</ns0:cell><ns0:cell>534,707</ns0:cell><ns0:cell>433,560</ns0:cell></ns0:row><ns0:row><ns0:cell>Harmful Practices</ns0:cell><ns0:cell>659,666</ns0:cell><ns0:cell>522,844</ns0:cell></ns0:row><ns0:row><ns0:cell>GBV tweets</ns0:cell><ns0:cell>2,030,874</ns0:cell><ns0:cell>1,637,941</ns0:cell></ns0:row><ns0:row><ns0:cell>Generic tweets</ns0:cell><ns0:cell>42,445,234</ns0:cell><ns0:cell>36,689,133</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>days. Social media content is not mere data, but it captures several dimensions of human interests,</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>psychology, and behavior. There have been works which found that the social media content reflects the</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>real-world properties as well. García-Gavilanes et al. (2014); Garcia-Gavilanes et al. (2013) have found</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>that interaction and usage of social networks are dependent on social, economic, and cultural aspects</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>of users. Thus, the real-world behavior of people is also mirrored in social networks. This utility of</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>social media content motivates us to examine the relationship between GBV and a country's culture</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>through analysis of social media data. For this research, we use Twitter data related to GBV from different</ns0:cell></ns0:row><ns0:row><ns0:cell>countries.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>Measuring Culture Culture is an amalgamation of thoughts, beliefs, potential acts, and a lot more.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>A number of definitions of culture are available from previous works Hofstede et al. (2005). One of</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>many definitions of culture is 'a fuzzy set of assumptions and values, orientations to life, beliefs, policies,</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>procedures and behavioural conventions that are shared by a group of people' Spencer-Oatey and</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Franklin (2012). Culture plays a vital role in many spheres of life, such as behaviour Huang et al. (2016)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>economy Herrero et al. (2021), language Sazzed (2021), attitude Shin et al. (2022). In order to ease out</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>research based on culture, there have been several quantification of culture. Hofstede has done one such</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>comprehensive quantification. Hofstede Hofstede et al. (2005) defines culture in terms of six parameters</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(Power Distance, Uncertainty Avoidance, Individualism, Masculinity, Long Term Orientation, Indulgence)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>and quantifies each one for different countries of the world. An extensive set of previous works use</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Hofstede Dimensions to quantify culture García-Gavilanes et al. 
(2014); Garcia-Gavilanes et al. (2013);</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Example tweets with their matching keywords and tagged GBV category. graph clustering algorithm clusters similar nodes in different groups. If two graphs are similar, then their clusters will also be similar. In order to compare the GBV graph and the generic graph with the culture graph, we use the Louvain community detection algorithm De<ns0:ref type='bibr' target='#b15'>Meo et al. (2011)</ns0:ref>. Louvain community detection algorithm is a clustering algorithm for nodes of a weighted graph where nodes are clustered on the basis of modularity between the nodes. Here the number of clusters is decided by the algorithm only.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Sexual (Error)</ns0:cell></ns0:row></ns0:table><ns0:note>S.No. Example Tweets Category 1 Why is harassment an automatic career hazard for a woman receiving any amount of professional attention? Sexual Violence 2 Girls are forced to sleep and authorities are POWERLESS. Europe is dead. Sexual Violence 3 Yesterday I asked my daughter's schools to stop slut-shaming and victim-blaming girls. It went viral. Sexual Violence 4 #Berlin metro attacker who kicked woman down stairs in random act of violence detained Physical Violence 5 laws don't cause divorce, domestic violence does Physical Violence 6 Some girls are beaten up by their boyfriends and stick around saying Ï see something in him. Physical Violence 7 Gather round children, I'm doing a thread on how this society sexualizes underage girls. Leggo. Harmful Practices 8 If you suspect a child is being abused You have a moral duty to report it Harmful Practices 9 A 12 year old child bride taking photos in her wedding dress. Can you imagine it? Harmful Practices 10 For most of these women, a history of sexual abuse and childhood trauma dragged them into prostitution Multiple 11 She has written books on sexual abuse, child molestation, domestic violence. Multiple 12 Woman With Too Much Makeup Mistaken As Clown; Attacked By Angry Mob Physical (Error) 13 all I want for my children is happiness! I don't care what their sexual preference Is.. No one can force them A</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Arab Countries Argentina Australia Brazil Canada China Colombia France Germany India Indonesia Iran Italy Japan Philippines Russia Spain Thailand Turkey UK USA Cultural Distance by Hofstede's Dimensions dimensions, USA and Canada are more similar 22 in culture as compared to USA and Iran. Similarly, Iran</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Arab Countries</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>46.3</ns0:cell><ns0:cell>78.9</ns0:cell><ns0:cell>35.6</ns0:cell><ns0:cell>72.1</ns0:cell><ns0:cell>78.7</ns0:cell><ns0:cell>59.7</ns0:cell><ns0:cell>59</ns0:cell><ns0:cell>81.7</ns0:cell><ns0:cell>41.7</ns0:cell><ns0:cell>50.6</ns0:cell><ns0:cell cols='2'>28.3 64.7</ns0:cell><ns0:cell>85.8</ns0:cell><ns0:cell>31.8</ns0:cell><ns0:cell>69.2</ns0:cell><ns0:cell>42.9</ns0:cell><ns0:cell>34</ns0:cell><ns0:cell>35.9</ns0:cell><ns0:cell cols='2'>89.1 78.4</ns0:cell></ns0:row><ns0:row><ns0:cell>Argentina</ns0:cell><ns0:cell>46.3</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>58.3</ns0:cell><ns0:cell>34.2</ns0:cell><ns0:cell>54.8</ns0:cell><ns0:cell>104</ns0:cell><ns0:cell>44.9</ns0:cell><ns0:cell>56.6</ns0:cell><ns0:cell>74.4</ns0:cell><ns0:cell>71.5</ns0:cell><ns0:cell>75.6</ns0:cell><ns0:cell cols='2'>38.9 62.7</ns0:cell><ns0:cell>80.9</ns0:cell><ns0:cell>67</ns0:cell><ns0:cell>89.1</ns0:cell><ns0:cell>36.9</ns0:cell><ns0:cell>47.7</ns0:cell><ns0:cell>35.9</ns0:cell><ns0:cell cols='2'>75.8 61.7</ns0:cell></ns0:row><ns0:row><ns0:cell>Australia</ns0:cell><ns0:cell>78.9</ns0:cell><ns0:cell>58.3</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>71.4</ns0:cell><ns0:cell>20.5</ns0:cell><ns0:cell>117</ns0:cell><ns0:cell>88.4</ns0:cell><ns0:cell>71.8</ns0:cell><ns0:cell>74.4</ns0:cell><ns0:cell>79.8</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell cols='2'>64.9 66.1</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell>86.4</ns0:cell><ns0:cell>120</ns0:cell><ns0:cell>70.3</ns0:cell><ns0:cell>85.3</ns0:cell><ns0:cell>78.1</ns0:cell><ns0:cell>34.5</ns0:cell><ns0:cell>7.9</ns0:cell></ns0:row><ns0:row><ns0:cell>Brazil</ns0:cell><ns0:cell>35.6</ns0:cell><ns0:cell>34.2</ns0:cell><ns0:cell>71.4</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>77.5</ns0:cell><ns0:cell>48.8</ns0:cell><ns0:cell>41.7</ns0:cell><ns0:cell>65.4</ns0:cell><ns0:cell>51.5</ns0:cell><ns0:cell>47.3</ns0:cell><ns0:cell cols='2'>41.5 58.5</ns0:cell><ns0:cell>70.1</ns0:cell><ns0:cell>49.7</ns0:cell><ns0:cell>63.7</ns0:cell><ns0:cell>26.8</ns0:cell><ns0:cell>32.6</ns0:cell><ns0:cell>14.5</ns0:cell><ns0:cell cols='2'>76.7 71.6</ns0:cell></ns0:row><ns0:row><ns0:cell>Canada</ns0:cell><ns0:cell>72.1</ns0:cell><ns0:cell>54.8</ns0:cell><ns0:cell>20.5</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell>84.8</ns0:cell><ns0:cell>60.2</ns0:cell><ns0:cell>60.4</ns0:cell><ns0:cell>67.4</ns0:cell><ns0:cell>86.7</ns0:cell><ns0:cell>58</ns0:cell><ns0:cell>57.7</ns0:cell><ns0:cell>92.5</ns0:cell><ns0:cell>79.1</ns0:cell><ns0:cell>107</ns0:cell><ns0:cell>58.8</ns0:cell><ns0:cell>73.2</ns0:cell><ns0:cell>66.8</ns0:cell><ns0:cell cols='2'>26.3 18.2</ns0:cell></ns0:row><ns0:row><ns0:cell>China</ns0:cell><ns0:cell>78.7</ns0:cell><ns0:cell>104</ns0:cell><ns0:cell>117</ns0:cell><ns0:cell>77.5</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>108</ns0:cell><ns0:cell>87</ns0:cell><ns0:cell>75.9</ns0:cell><ns0:cell>48.3</ns0:cell><ns0:cell>40.1</ns0:cell><ns0:cell cols='2'>89.6 
82.4</ns0:cell><ns0:cell>79.8</ns0:cell><ns0:cell>66.9</ns0:cell><ns0:cell>75.5</ns0:cell><ns0:cell>84.7</ns0:cell><ns0:cell>77.4</ns0:cell><ns0:cell>79.7</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell>112</ns0:cell></ns0:row><ns0:row><ns0:cell>Colombia</ns0:cell><ns0:cell>59.7</ns0:cell><ns0:cell>44.9</ns0:cell><ns0:cell>88.4</ns0:cell><ns0:cell>48.8</ns0:cell><ns0:cell>84.8</ns0:cell><ns0:cell>108</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>87.3</ns0:cell><ns0:cell>104</ns0:cell><ns0:cell>87.5</ns0:cell><ns0:cell>76.9</ns0:cell><ns0:cell cols='2'>59.7 97.5</ns0:cell><ns0:cell>98.3</ns0:cell><ns0:cell>65.4</ns0:cell><ns0:cell>104</ns0:cell><ns0:cell>69.4</ns0:cell><ns0:cell>54.8</ns0:cell><ns0:cell>56.3</ns0:cell><ns0:cell cols='2'>102 91.4</ns0:cell></ns0:row><ns0:row><ns0:cell>France</ns0:cell><ns0:cell>59</ns0:cell><ns0:cell>56.6</ns0:cell><ns0:cell>71.8</ns0:cell><ns0:cell>41.7</ns0:cell><ns0:cell>60.2</ns0:cell><ns0:cell>87</ns0:cell><ns0:cell>87.3</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>50.1</ns0:cell><ns0:cell>59.3</ns0:cell><ns0:cell>70</ns0:cell><ns0:cell cols='2'>65.4 39.1</ns0:cell><ns0:cell>64.8</ns0:cell><ns0:cell>75.7</ns0:cell><ns0:cell>53.6</ns0:cell><ns0:cell>28.1</ns0:cell><ns0:cell>64.8</ns0:cell><ns0:cell>38.6</ns0:cell><ns0:cell cols='2'>71.9 70.6</ns0:cell></ns0:row><ns0:row><ns0:cell>Germany</ns0:cell><ns0:cell>81.7</ns0:cell><ns0:cell>74.4</ns0:cell><ns0:cell>74.4</ns0:cell><ns0:cell>65.4</ns0:cell><ns0:cell>60.4</ns0:cell><ns0:cell>75.9</ns0:cell><ns0:cell>104</ns0:cell><ns0:cell>50.1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>63.9</ns0:cell><ns0:cell>76.1</ns0:cell><ns0:cell>81</ns0:cell><ns0:cell>31.5</ns0:cell><ns0:cell>49</ns0:cell><ns0:cell>90.7</ns0:cell><ns0:cell>79.8</ns0:cell><ns0:cell>54.9</ns0:cell><ns0:cell>81.9</ns0:cell><ns0:cell>64.6</ns0:cell><ns0:cell cols='2'>56.9 70.8</ns0:cell></ns0:row><ns0:row><ns0:cell>India</ns0:cell><ns0:cell>41.7</ns0:cell><ns0:cell>71.5</ns0:cell><ns0:cell>79.8</ns0:cell><ns0:cell>51.5</ns0:cell><ns0:cell>67.4</ns0:cell><ns0:cell>48.3</ns0:cell><ns0:cell>87.5</ns0:cell><ns0:cell>59.3</ns0:cell><ns0:cell>63.9</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>39.7</ns0:cell><ns0:cell cols='2'>50.3 55.3</ns0:cell><ns0:cell>79.8</ns0:cell><ns0:cell>37.7</ns0:cell><ns0:cell>68.8</ns0:cell><ns0:cell>55.1</ns0:cell><ns0:cell>52.3</ns0:cell><ns0:cell>54.3</ns0:cell><ns0:cell cols='2'>73.8 75.4</ns0:cell></ns0:row><ns0:row><ns0:cell>Indonesia</ns0:cell><ns0:cell>50.6</ns0:cell><ns0:cell>75.6</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell>47.3</ns0:cell><ns0:cell>86.7</ns0:cell><ns0:cell>40.1</ns0:cell><ns0:cell>76.9</ns0:cell><ns0:cell>70</ns0:cell><ns0:cell>76.1</ns0:cell><ns0:cell>39.7</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>77.4</ns0:cell><ns0:cell>81.4</ns0:cell><ns0:cell>46.1</ns0:cell><ns0:cell>62.1</ns0:cell><ns0:cell>59.2</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell>49.4</ns0:cell><ns0:cell cols='2'>95.7 99.3</ns0:cell></ns0:row><ns0:row><ns0:cell>Iran</ns0:cell><ns0:cell>28.3</ns0:cell><ns0:cell>38.9</ns0:cell><ns0:cell>64.9</ns0:cell><ns0:cell>41.5</ns0:cell><ns0:cell>58</ns0:cell><ns0:cell>89.6</ns0:cell><ns0:cell>59.7</ns0:cell><ns0:cell>65.4</ns0:cell><ns0:cell>81</ns0:cell><ns0:cell>50.3</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>68.4</ns0:cell><ns0:cell>96.7</ns0:cell><ns0:cell>47.3</ns0:cell><ns0:cell>87.1</ns0:cell><ns0:cell>44.7</ns0:cell><ns0:cell>30.6</ns0:cell><ns0:cell>43.1</ns0:cell><ns0:cell cols='2'>78.7 
65.3</ns0:cell></ns0:row><ns0:row><ns0:cell>Italy</ns0:cell><ns0:cell>64.7</ns0:cell><ns0:cell>62.7</ns0:cell><ns0:cell>66.1</ns0:cell><ns0:cell>58.5</ns0:cell><ns0:cell>57.7</ns0:cell><ns0:cell>82.4</ns0:cell><ns0:cell>97.5</ns0:cell><ns0:cell>39.1</ns0:cell><ns0:cell>31.5</ns0:cell><ns0:cell>55.3</ns0:cell><ns0:cell>77.4</ns0:cell><ns0:cell>68.4</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>51.7</ns0:cell><ns0:cell>78.6</ns0:cell><ns0:cell>72.6</ns0:cell><ns0:cell>44.3</ns0:cell><ns0:cell>76.6</ns0:cell><ns0:cell>56</ns0:cell><ns0:cell cols='2'>60.8 63.1</ns0:cell></ns0:row><ns0:row><ns0:cell>Japan</ns0:cell><ns0:cell>85.8</ns0:cell><ns0:cell>80.9</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell>70.1</ns0:cell><ns0:cell>92.5</ns0:cell><ns0:cell>79.8</ns0:cell><ns0:cell>98.3</ns0:cell><ns0:cell>64.8</ns0:cell><ns0:cell>49</ns0:cell><ns0:cell>79.8</ns0:cell><ns0:cell>81.4</ns0:cell><ns0:cell cols='2'>96.7 51.7</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>93.4</ns0:cell><ns0:cell>74.7</ns0:cell><ns0:cell>67.1</ns0:cell><ns0:cell>91.9</ns0:cell><ns0:cell>68</ns0:cell><ns0:cell cols='2'>91.8 100</ns0:cell></ns0:row><ns0:row><ns0:cell>Philippines</ns0:cell><ns0:cell>31.8</ns0:cell><ns0:cell>67</ns0:cell><ns0:cell>86.4</ns0:cell><ns0:cell>49.7</ns0:cell><ns0:cell>79.1</ns0:cell><ns0:cell>66.9</ns0:cell><ns0:cell>65.4</ns0:cell><ns0:cell>75.7</ns0:cell><ns0:cell>90.7</ns0:cell><ns0:cell>37.7</ns0:cell><ns0:cell>46.1</ns0:cell><ns0:cell cols='2'>47.3 78.6</ns0:cell><ns0:cell>93.4</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>82.6</ns0:cell><ns0:cell>66.2</ns0:cell><ns0:cell>48.7</ns0:cell><ns0:cell>56.8</ns0:cell><ns0:cell cols='2'>90.2 84.2</ns0:cell></ns0:row><ns0:row><ns0:cell>Russia</ns0:cell><ns0:cell>69.2</ns0:cell><ns0:cell>89.1</ns0:cell><ns0:cell>120</ns0:cell><ns0:cell>63.7</ns0:cell><ns0:cell>107</ns0:cell><ns0:cell>75.5</ns0:cell><ns0:cell>104</ns0:cell><ns0:cell>53.6</ns0:cell><ns0:cell>79.8</ns0:cell><ns0:cell>68.8</ns0:cell><ns0:cell>62.1</ns0:cell><ns0:cell cols='2'>87.1 72.6</ns0:cell><ns0:cell>74.7</ns0:cell><ns0:cell>82.6</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>57.1</ns0:cell><ns0:cell>72.6</ns0:cell><ns0:cell>55.2</ns0:cell><ns0:cell>117</ns0:cell><ns0:cell>118</ns0:cell></ns0:row><ns0:row><ns0:cell>Spain</ns0:cell><ns0:cell>42.9</ns0:cell><ns0:cell>36.9</ns0:cell><ns0:cell>70.3</ns0:cell><ns0:cell>26.8</ns0:cell><ns0:cell>58.8</ns0:cell><ns0:cell>84.7</ns0:cell><ns0:cell>69.4</ns0:cell><ns0:cell>28.1</ns0:cell><ns0:cell>54.9</ns0:cell><ns0:cell>55.1</ns0:cell><ns0:cell>59.2</ns0:cell><ns0:cell cols='2'>44.7 44.3</ns0:cell><ns0:cell>67.1</ns0:cell><ns0:cell>66.2</ns0:cell><ns0:cell>57.1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>42.6</ns0:cell><ns0:cell>17.9</ns0:cell><ns0:cell cols='2'>76.1 70.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Thailand</ns0:cell><ns0:cell>34</ns0:cell><ns0:cell>47.7</ns0:cell><ns0:cell>85.3</ns0:cell><ns0:cell>32.6</ns0:cell><ns0:cell>73.2</ns0:cell><ns0:cell>77.4</ns0:cell><ns0:cell>54.8</ns0:cell><ns0:cell>64.8</ns0:cell><ns0:cell>81.9</ns0:cell><ns0:cell>52.3</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell cols='2'>30.6 76.6</ns0:cell><ns0:cell>91.9</ns0:cell><ns0:cell>48.7</ns0:cell><ns0:cell>72.6</ns0:cell><ns0:cell>42.6</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>32.6</ns0:cell><ns0:cell cols='2'>91.8 
85.4</ns0:cell></ns0:row><ns0:row><ns0:cell>Turkey</ns0:cell><ns0:cell>35.9</ns0:cell><ns0:cell>35.9</ns0:cell><ns0:cell>78.1</ns0:cell><ns0:cell>14.5</ns0:cell><ns0:cell>66.8</ns0:cell><ns0:cell>79.7</ns0:cell><ns0:cell>56.3</ns0:cell><ns0:cell>38.6</ns0:cell><ns0:cell>64.6</ns0:cell><ns0:cell>54.3</ns0:cell><ns0:cell>49.4</ns0:cell><ns0:cell>43.1</ns0:cell><ns0:cell>56</ns0:cell><ns0:cell>68</ns0:cell><ns0:cell>56.8</ns0:cell><ns0:cell>55.2</ns0:cell><ns0:cell>17.9</ns0:cell><ns0:cell>32.6</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>84</ns0:cell><ns0:cell>78.5</ns0:cell></ns0:row><ns0:row><ns0:cell>UK</ns0:cell><ns0:cell>89.1</ns0:cell><ns0:cell>75.8</ns0:cell><ns0:cell>34.5</ns0:cell><ns0:cell>76.7</ns0:cell><ns0:cell>26.3</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell>71.9</ns0:cell><ns0:cell>56.9</ns0:cell><ns0:cell>73.8</ns0:cell><ns0:cell>95.7</ns0:cell><ns0:cell cols='2'>78.7 60.8</ns0:cell><ns0:cell>91.8</ns0:cell><ns0:cell>90.2</ns0:cell><ns0:cell>117</ns0:cell><ns0:cell>76.1</ns0:cell><ns0:cell>91.8</ns0:cell><ns0:cell>84</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>28.5</ns0:cell></ns0:row><ns0:row><ns0:cell>USA</ns0:cell><ns0:cell>78.4</ns0:cell><ns0:cell>61.7</ns0:cell><ns0:cell>7.9</ns0:cell><ns0:cell>71.6</ns0:cell><ns0:cell>18.2</ns0:cell><ns0:cell>112</ns0:cell><ns0:cell>91.4</ns0:cell><ns0:cell>70.6</ns0:cell><ns0:cell>70.8</ns0:cell><ns0:cell>75.4</ns0:cell><ns0:cell>99.3</ns0:cell><ns0:cell cols='2'>65.3 63.1</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>84.2</ns0:cell><ns0:cell>118</ns0:cell><ns0:cell>70.5</ns0:cell><ns0:cell>85.4</ns0:cell><ns0:cell>78.5</ns0:cell><ns0:cell>28.5</ns0:cell><ns0:cell>0</ns0:cell></ns0:row></ns0:table><ns0:note>10/19 PeerJ Comput. Sci. reviewing PDF | (CS-2022:02:70572:1:1:NEW 11 Jun 2022) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Statistical Inferences from the Graphs</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='6'>GBV Culture Sexual Physical Harmful Generic</ns0:cell></ns0:row><ns0:row><ns0:cell>Edges</ns0:cell><ns0:cell>105</ns0:cell><ns0:cell>105</ns0:cell><ns0:cell>105</ns0:cell><ns0:cell>105</ns0:cell><ns0:cell>105</ns0:cell><ns0:cell>105</ns0:cell></ns0:row><ns0:row><ns0:cell>Nodes</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>21</ns0:cell></ns0:row><ns0:row><ns0:cell>Clustering Coefficient</ns0:cell><ns0:cell>0.71</ns0:cell><ns0:cell>0.74</ns0:cell><ns0:cell>0.73</ns0:cell><ns0:cell>0.69</ns0:cell><ns0:cell>0.71</ns0:cell><ns0:cell>0.80</ns0:cell></ns0:row><ns0:row><ns0:cell>No of nodes in largest connected component</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell>19</ns0:cell></ns0:row><ns0:row><ns0:cell>No of Connected components</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>3</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Comparative results of distances between different graphs</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Distance Metric</ns0:cell><ns0:cell cols='5'>GBV-Culture Sexual-Culture Physical-Culture Harmful-Culture Generic-Culture</ns0:cell></ns0:row><ns0:row><ns0:cell>QuantumJSD</ns0:cell><ns0:cell>0.21</ns0:cell><ns0:cell>0.21</ns0:cell><ns0:cell>0.20</ns0:cell><ns0:cell>0.22</ns0:cell><ns0:cell>0.27</ns0:cell></ns0:row><ns0:row><ns0:cell>DegreeDivergence</ns0:cell><ns0:cell>0.19</ns0:cell><ns0:cell>0.26</ns0:cell><ns0:cell>0.34</ns0:cell><ns0:cell>0.37</ns0:cell><ns0:cell>0.38</ns0:cell></ns0:row><ns0:row><ns0:cell>JaccardDistance</ns0:cell><ns0:cell>0.70</ns0:cell><ns0:cell>0.70</ns0:cell><ns0:cell>0.72</ns0:cell><ns0:cell>0.71</ns0:cell><ns0:cell>0.76</ns0:cell></ns0:row><ns0:row><ns0:cell>Hamming</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.29</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.41</ns0:cell></ns0:row><ns0:row><ns0:cell>HammingIpsenMikhailov</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.24</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.26</ns0:cell><ns0:cell>0.32</ns0:cell></ns0:row><ns0:row><ns0:cell>Frobenius</ns0:cell><ns0:cell>12.16</ns0:cell><ns0:cell>11.66</ns0:cell><ns0:cell>12.0</ns0:cell><ns0:cell>12.0</ns0:cell><ns0:cell>13.71</ns0:cell></ns0:row><ns0:row><ns0:cell>NetLSD</ns0:cell><ns0:cell>2.71</ns0:cell><ns0:cell>11.62</ns0:cell><ns0:cell>11.18</ns0:cell><ns0:cell>11.97</ns0:cell><ns0:cell>21.78</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='4'>https://www.who.int/violence injury prevention/violence/norms.pdf 5 https://thedocs.worldbank.org/en/doc/656271571686555789-0090022019/original/ShiftingCulturalNormstoAddressGBV.pdf</ns0:note>
</ns0:body>
" | "Authors’ Response to the review of
CS-2022:02:70572
June 10, 2022
We would like to thank all the reviewers for their intriguing and insightful
comments that we feel have definitely helped in improving the quality of the
paper. We have tried our best to address all the comments. We next outline
each of the comments made by the reviewers along with our response and the
corresponding changes made in the paper. The response to each reviewer starts
with a new section titled “Response to Comments from Reviewer <reviewer
number>”. The comments by the reviewer appear in bold font as subsections, and
our responses are written in regular font. All the references used for addressing
the reviewers’ comments are listed at the end of the response sheet.
1 Response to Comments from Reviewer 1

1.1 The author didn’t mention the flow chart of the proposed work. Proposed work must explain step by step according to the flow chart.
Response: We sincerely apologize for missing this important part. We thank
the reviewer for pointing this out. As per your suggestion, we now add the flow
chart of the proposed methodology in Figure 1 (page 6). We also explain each
component of the flow chart step by step in the Methodology section (section 3,
pages 4 to 7). To summarize, the first step is to categorize the collected
geotagged tweets into different categories, namely sexual violence tweets,
physical violence tweets, harmful practices tweets, Gender-Based Violence (GBV)
tweets, and generic tweets. These tweets are tagged with country-level locations
to perform country-wise analyses. We use Hofstede’s dimensions to calculate
country-wise cultural distance. Finally, we perform correlation analyses and
graph analyses on all the different categories of tweets with respect to cultural
distance.
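As an illustration of the cultural-distance step mentioned in this response, the sketch below computes the distance between two countries as the Euclidean distance over their six Hofstede dimension scores. The scores shown are example values, and whether this Euclidean form matches equation 1 of the manuscript should be checked against the paper.

```python
# Illustrative sketch: cultural distance from Hofstede's six dimensions
# (power distance, individualism, masculinity, uncertainty avoidance,
# long-term orientation, indulgence). Scores are example values only, and
# the Euclidean form is an assumption to verify against equation 1.
import math

HOFSTEDE = {
    "USA":    (40, 91, 62, 46, 26, 68),
    "Canada": (39, 80, 52, 48, 36, 68),
    "Iran":   (58, 41, 43, 59, 14, 40),
}

def cultural_distance(c1, c2):
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(HOFSTEDE[c1], HOFSTEDE[c2])))

print(round(cultural_distance("USA", "Canada"), 1))  # small value: similar cultures
print(round(cultural_distance("USA", "Iran"), 1))    # larger value: more distant cultures
```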
1.2 What is the novelty of this work
Response: We apologize that our manuscript could not explain the novelty of
our work. The novelty of the work falls into three categories, namely (i) Novelty
in Data Usage for Research, (ii) Novelty in Analysis, and (iii) Novelty in
Findings. Details of each category are given as follows.
• Novelty in Data Usage for Research: Past works claiming the relationship between GBV and culture are dependent on small-scale data
created using questionnaires and surveys. Such data is prone to biases.
Additionally, there is always a quest to explore GBV using non-conventional
methods [1]. Our research is based on social media data, which is created
naturally by its users and can be collected at large scale. The use of social
media data for finding the confluence between culture and GBV has not been
done before.
• Novelty in Analysis: The methodologies used in our research for finding the relation between GBV and culture are based upon correlation
analyses [2] as well as graph-based analyses. Past works exploring the relationship between GBV and other factors have utilized only correlation.
The use of graph-based analyses has not been done before.
• Novelty in Findings: Along with establishing a relationship between
culture and GBV, our research also gives other interesting findings. For
example, our research also finds that the three parameters of culture uncertainty avoidance, indulgence, and individualism show a comparatively
stronger relation with GBV as compared to other parameters of culture
power distance, masculinity, and long-term orientation.
All the above-mentioned points are now included in the Introduction section
(pages 1 to 2) of the revised manuscript.
1.3 Whether this entire work is study or research work. Somewhere author mentions it as a study and somewhere he mentions it as research. Kindly go with a strong proofread of your paper.
Response: We really apologize for the confusion created by the inconsistent
terminology and thank the reviewer for pointing this out. To correct the issue,
we now refer to the proposed work as research throughout the revised paper.
Moreover, we also did a careful proofread of the paper. We hereby enumerate a
few changes made during proofreading.
1. Prepositions: We found a few instances of incorrect preposition usage. For example,
in line number 125, it was written “possibilities of research” which is now
updated to “possibilities for research”.
2. Punctuation: In a few sentences there were missing punctuation marks.
For example, in line number 511, it was written “aspects related to GBV
which can be” is now updated to “aspects related to GBV, which can be”
3. Full Forms: There were a few missing full forms at the first use of an
abbreviation; for example, in line number 258, the full form of NLTK was
missing. In the revised manuscript, we have added all such missing full forms.
4. Character cases: There were a few mistakes in character cases. For example, in line number 347, it was written “A Graph clustering algorithm”
which is now updated to “A graph clustering algorithm”.
5. Number formats: In the previous version of the manuscript, a few numbers
were not represented in a standard format. For example, in Table 4, 0.01
was written as .01. We have now standardized all such numbers in the
revised paper.
The same concern was raised by Reviewer 3. We have given a few more details of the proofread updates made in the revised manuscript in Table 1 and
Table 2. We thank the reviewer once again for suggesting the proofread. This
has improved the quality of the manuscript.
2 Response to Comments from Reviewer 2
Response: We really appreciate the time and effort of the reviewer for reviewing our manuscript. There were no comments from Reviewer 2. Thank you
again for your efforts.
3 Response to Comments from Reviewer 3

3.1 Authors are required to improve the English language and some sentence structures need to be updated.
Response: We are very sorry for the inconvenience caused by the English
language. As per the reviewer's suggestion, all the authors have now thoroughly
revised the manuscript and checked the English language. We have made the
following changes to improve the English language of the manuscript.
Table 1: Few examples of changes in sentence structure

Old Sentence: However, for this study, we use Hofstede’s dimensions which have been used in a huge number of studies to measure culture.
Updated Sentence: For this research, we also use Hofstede’s dimensions which have been used in a huge number of studies to measure culture.

Old Sentence: In order to deliver effective preventive measures, researches are being conducted to understand the causes and factors leading to GBV.
Updated Sentence: The quest to deliver effective preventive measures, has triggered research to understand causes and factors of GBV.

Old Sentence: To establish a comprehensive context for drawing inferences, we also create a generic tweet dataset from the collected data.
Updated Sentence: We create another dataset namely generic tweet dataset to provide a better context of comparison with other categories of the dataset.
• Sentence Structure: We have updated the structure of some sentences. We
show a few examples of these changes in Table 1.
• Punctuation: There were a few missing punctuation marks in the older version
of the manuscript. We have now added all the missing punctuation; a few
examples are as follows.
1. In line number 495, it was written as “these keywords, however,
GBV” which is now updated as “these keywords; however, GBV”
in the revised manuscript.
2. In line number 409, it was written as “discussions, news and comments” which is now updated as “discussions, news, and comments”.
• Use of articles: There were a few missing articles in the manuscript. A
few examples of missing articles are shown as follows. All these changes
are added in the revised manuscript.
1. In line number 408, it was written “The reason for similarity” which
is updated to “The reason for the similarity”.
2. In line number 398, it was written as “GBV content did not show
similar strong” which is updated to “GBV content did not show a
similar strong”.
• Prepositions: There were a few wrong and missing prepositions in the
older version of the manuscript. We have now corrected all the preposition
mistakes and added missing prepositions wherever required. A few examples
are shown next.
1. In line number 453, the phrase“is smaller to the distance” is updated
as ”is smaller than the distance”
2. In line number 333, the phrase “coerced the graphs to unweighted
ones” is updated as “coerced the weighted graphs into unweighted
ones”
3.2 In some sections authors have written too many paragraphs for expressing their views. Need to update the style with some professional writing style.
Response: Please accept our apology for the writing style of the manuscript.
We have tried our level best to improve the writing style. We have made the
following changes.
1. In the Related Works section, under the sub-section “GBV and Culture”
(page 3), there were three paragraphs; we merged them into one paragraph.
2. In the Methodology section, under the sub-section “Correlation Analysis”,
under the heading “Correlation” (page 7), there were three paragraphs; we
merged them into one.
3. In the Results section, under the sub-section “Graph Analysis” (page 14),
there were three paragraphs; we reduced them to two.
3.3 Provide expanded form of abbreviations wherever used for the first time.
Response: We are very sorry for the missing expanded forms at the first use of
the abbreviations. As per your suggestion, we have now thoroughly re-checked
the manuscript and expanded all abbreviations wherever they are used for the
first time. For example, for WHO (page 1, line 42), we have now added World
Health Organization, and for NLTK (page 7, line 258), we have now added
Natural Language Toolkit.
We thank the reviewer for pointing this out.
3.4 Make your Introduction section more technical than in the current form it just focuses on the basics of fog computing.
Response: We thank the reviewer for the suggestion. To make the Introduction
section more technical, we add the details of the contribution of our research
along with the possible applications and usefulness of this research in the revised manuscript. Further, we also add the research question answered in this
research. We have made these changes in the Introduction section (paragraphs
4 and 5, page 2) of the revised manuscript.
3.5 The calculation of the fitness value is not clarified in the proposed method.
Response: We apologize for the missing explanation of the calculation of the
fitness value in the proposed methodology. We hereby clarify that we have used
correlation analyses in our proposed approach, where p-values are taken as a
measure of fitness. For measuring the fitness of a correlation, we calculate the
p-value of each correlation using the Python library SciPy1. A p-value measures
the probability of observing such a correlation by chance when the two data
samples are actually uncorrelated. We have now incorporated these details of
the p-value calculation in the Methodology section (section 3.2, Correlation
Analysis) on page 7 of the revised manuscript.
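A short sketch of the fitness computation described in this response, i.e. Pearson and Spearman correlations with their p-values via SciPy; the two distance lists below are illustrative placeholders, not values from the paper.

```python
# Sketch of the correlation + p-value (fitness) computation using SciPy.
from scipy.stats import pearsonr, spearmanr

content_distance  = [0.12, 0.35, 0.41, 0.58, 0.66, 0.71]   # illustrative values
cultural_distance = [18.2, 35.6, 46.3, 59.7, 72.1, 89.1]   # illustrative values

r, p = pearsonr(content_distance, cultural_distance)
rho, p_s = spearmanr(content_distance, cultural_distance)
print(f"Pearson r = {r:.2f} (p = {p:.4f}); Spearman rho = {rho:.2f} (p = {p_s:.4f})")
```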
3.6 More details are needed to clarify the proposed method flow chart.
Response: We thank the reviewer for pointing this out. We have now added
the flow chart of the proposed work in the revised manuscript on page 6.
Each component of the flow chart is also explained subsequently. To summarize,
the first step is to categorize the collected geotagged tweets into different
categories, namely sexual violence tweets, physical violence tweets, harmful
practices tweets, Gender-Based Violence (GBV) tweets, and generic tweets. These tweets
are tagged with country-level locations to perform country-wise analyses. We
use Hofstede’s dimensions to calculate country-wise cultural distance. Finally,
we perform correlation analyses and graph analyses on all the different categories
of tweets with respect to cultural distance.
3.7 Please check the proofreading of the paper
Response: We thank the reviewer for the suggestion. All the authors of the paper
have now thoroughly proofread the revised manuscript. All the mistakes found
during proofreading have now been corrected in the revised manuscript.
We show a few examples of the corrected mistakes in Table 2.

1 https://docs.scipy.org/doc/scipy-0.14.0/reference/stats.html
Table 2: Few examples of proofread updates. The “Older” column shows the errors
and the “Revised” column shows the updated changes in the revised manuscript.

Line No. | Older                                        | Revised
28       | since a long time                            | for a long time
40       | with country                                 | with the country
49       | questionnaire based                          | questionnaire-based
68       | from correlation analysis                    | from the correlation analysis
95       | GBV and Culture                              | GBV and culture
108      | Similar researches in many other countries   | Similar research in many other countries
127      | location tagged                              | location-tagged
185      | it as a match                                | it a match
242      | We also calculate distance                   | We also calculate the distance
371      | keyword matching results into wrong tagging  | keyword matching results in the wrong tagging
443      | This characteristics of generic content      | These characteristics of generic content

3.8 The quality of the article should be improved by giving more details about the contribution
Response: We thank the reviewer for the suggestion. As per the suggestion, we
have now included the main contributions in the Introduction section of the revised
paper. For your reference, we are presenting the contributions as follows.
• In this research, we explore evidence of confluence between GBV and
culture by means of empirical studies conducted over a large dataset
created naturally over a long period of time on social media.
• The results obtained from this research justify the hypothesis that GBV
is a confluence of culture. This hypothesis has not been tested in past
literature using uncensored and unbiased social media data.
• All the experiments conducted in this research are extended to different categories of GBV and generic online content. Further, all the six dimensions
of culture are also investigated. Thus, we provide a holistic analysis.
• The findings in this research are supported by correlation analyses as well
as graph-based analyses, making our claims more robust.
3.9 The problem formulation part is not well-organized and the added value must be well described
Response: We thank the reviewer for pointing this out. We mention that in
this research paper, we are trying to answer the research question, “Is GBV
a confluence of culture?” To answer this question, we have performed empirical
analyses on large-scale social network data. The previous version of the
manuscript could not clarify this. In the revised manuscript, we add the research
question in the Introduction section. We also add the possible applications of
our research, which represent the added value of the research.
3.10 The references are applicable but are not sufficient
Response: We thank the reviewer for pointing this out. As per your suggestion,
we have now included a few more recent references in the revised manuscript.
For example, we have added references highlighting the role of culture [3], the
use of social media for GBV-related studies [4], and the use of social media for
real-world events [2].
References
[1] M. E. Kim, “Shifting the lens: An implementation study of a community-based and social network intervention to gender-based violence,” Violence Against Women, vol. 27, no. 2, pp. 222–254, 2021.
[2] K. R. Blake, S. M. O’Dean, J. Lian, and T. F. Denson, “Misogynistic tweets correlate with violence against women,” Psychological Science, vol. 32, no. 3, pp. 315–325, 2021.
[3] Á. Herrero, A. Jiménez, and R. Alcalde, “Advanced feature selection to study the internationalization strategy of enterprises,” PeerJ Computer Science, vol. 7, p. e403, 2021.
[4] F. Martínez, C. Pacheco, and M. Galicia, “The #MeToo movement in Twitter: Fighting gender-based violence,” in International Conference on Information Technology & Systems. Springer, 2021, pp. 36–44.
" | Here is a paper. Please give your review comments after reading it. |
720 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Gender-Based Violence (GBV) has been plaguing our society for a long time. The severity of GBV has spurred research around understanding the causes and factors leading to GBV.</ns0:p><ns0:p>Understanding the factors and causes leading to GBV is helpful in planning and executing efficient policies to curb GBV. Past research has claimed a country's culture to be one of the driving reasons behind GBV. The culture of a country consists of cultural norms, societal rules, gender-based stereotypes, and social taboos which provoke GBV. These claims are supported by theoretical or small-scale survey-based research that suffers from under-representation and biases. With the advent of social media and, more importantly, location-tagged social media, huge ethnographic data are available, creating a platform for much sociological research. In this paper, we also utilize large-scale social media data to verify the claim of confluence between GBV and the culture of a country. We first curate GBV content from different countries by collecting a large amount of data from Twitter. In order to explore the relationship between a country's culture and GBV content, we perform correlation analyses between a country's culture and its GBV content. The correlation results are further re-validated using graph-based methods. Through the findings of this research, we observe that countries with similar cultures also show similarity in GBV content, thus reconfirming the relationship between GBV and culture.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Gender-Based Violence (GBV) is one of the most heinous and age-old violations of human rights 1 . GBV is evident across all parts of the globe 2 , and it has been plaguing our society for a long back. The condition is so severe that one in three women is reported to have faced GBV 3 . With alarming instances of GBV around the world, social and governmental organisations are taking rigorous preventive measures. The quest to deliver effective preventive measures has triggered research to understand the causes and factors of GBV to provide effective preventive measures. Research in this field have found that cultural norms which comprise of societal stigma, gender-based rules, and societal prejudices are major factors that contribute to <ns0:ref type='bibr'>GBV Jewkes et al. (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b18'>Elischberger et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b59'>Raj and Silverman (2002)</ns0:ref>; <ns0:ref type='bibr' target='#b33'>Jewkes et al. (2002)</ns0:ref>; <ns0:ref type='bibr' target='#b6'>Bishwajit et al. (2016)</ns0:ref>. GBV is pervasive across all social, economic, and national strata <ns0:ref type='bibr' target='#b13'>Dartnall and Jewkes (2013)</ns0:ref>, but the type of GBV, the intensity of GBV, people's reactions, and opinions for any GBV event is not the same across the globe. For example, acid attacks are a form of revenge in developing countries arising because of refusal of a marriage proposal or a love proposal, or land disputes <ns0:ref type='bibr' target='#b5'>Bahl (2003)</ns0:ref>. However, in South America, the same acid attack results from poor relationships and domestic intolerance toward women <ns0:ref type='bibr' target='#b25'>Guerrero (2013)</ns0:ref>. The context of GBV changes with the country, and this change is known to be an outcome of persisting culture in a country <ns0:ref type='bibr' target='#b1'>Abrahams et al. (2014)</ns0:ref>; <ns0:ref type='bibr' target='#b21'>Fulu and Miedema (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b3'>Alesina et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b53'>Perrin et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b69'>Stubbs-Richardson et al. (2018)</ns0:ref>. WHO 1 https://www.who.int/news-room/fact-sheets/detail/violence-against-women 2 https://www.undp.org/content/undp/en/home/blog/2018/violence-against-women-cause-consequence-inequality.html 3 https://www.who.int/news/item/09-03-2021-devastatingly-pervasive-1-in-3-women-globally-experience-violence (World Health Organization) has studied cultural norms of many countries leading to various forms of GBV 4 . The global organization World Bank also pronounced to work on such cultural and social norms to curb GBV 5 . However, these researches claiming cultural norms as a driving factor behind GBV are based on cognitive studies which require significant intervention from social and cultural experts. The claims presented in these works are based upon long-term manual discerning of GBV events occurring in countries of different cultures. These researches are dependent upon survey/questionnaire-based data which can be collected only in a limited amount and can also suffer from several biases. Thus, past research lacks a large-scale, data-driven empirical research to verify the confluences between culture and GBV.</ns0:p><ns0:p>In this paper, we take a step to answer the research question 'Is gender-based violence a confluence of culture?' 
by experimenting with large-scale social network data. The use of social network data for research around GBV is a non-conventional way to dive into the finer details of GBV. Our research analyses GBV from the lens of culture. This research is useful for social workers, policy-makers, governments, and other organizations working for the welfare of women and society <ns0:ref type='bibr' target='#b38'>Kim (2021)</ns0:ref>. Additionally, the findings of this research can help in planning more efficient and targeted GBV policies and awareness campaigns. Social network data has already become a substitute for survey data for numerous applications.</ns0:p><ns0:p>Recently, social network data has also gained much utility for research related to <ns0:ref type='bibr'>GBV Hansson et al. (2021)</ns0:ref>; <ns0:ref type='bibr' target='#b42'>Liu et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b12'>Chowdhury et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b27'>Hassan et al. (2020)</ns0:ref>. Online content contains a rich spectrum of information pertaining to user opinions/reactions, ongoing news/events <ns0:ref type='bibr' target='#b8'>Blake et al. (2021)</ns0:ref>, and many more <ns0:ref type='bibr' target='#b48'>Nikolov et al. (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b50'>Pal et al. (2018)</ns0:ref>. Thus, online content is not only a mere content but a real-time proxy for user behaviour. For this research, we consider online content related to GBV from different countries as a representative of user reactions and perspectives towards GBV. We design experiments to check for the content similarity between countries with similar cultures. Towards this goal, we perform the correlation analysis between content distance and cultural distance between countries.</ns0:p><ns0:p>Further, to validate results from the correlation analysis, we also performed graph analyses. In graph analyses, we create graphs with countries as nodes and different types of distances (content distance and cultural distance) between countries is used for building edges. These graphs are compared using various graph comparison metrics.</ns0:p><ns0:p>On experimentation with Twitter content from different countries, we find a statistically significant positive correlation between GBV content distance and cultural distance. We also observed a higher similarity between the GBV content graph and the culture graph. Thus, through the findings of this research, we observe that the countries which are similar in culture also show higher similarity in GBV content. This observation is consistent with correlation analyses and graph analyses. From this observation, we can conclude that there are traces of culture in GBV content which justifies the claim of confluences of culture on GBV. The contributions of the current research can be summarized as follows:</ns0:p><ns0:p>• In this research, we explore evidence of confluence between GBV and the culture by means of an empirical study conducted over a large dataset created naturally over a long period of time on social media.</ns0:p><ns0:p>• The results obtained from this research justify the hypothesis that GBV is a confluence of culture. This hypothesis has not been tested in past literature using uncensored and unbiased social media data.</ns0:p><ns0:p>• All the experiments conducted in this research are extended to different categories of GBV and generic online content. Further, all the six dimensions of culture are also investigated. 
Thus, we provide a holistic analysis.</ns0:p><ns0:p>• The findings in this research are supported by correlation analyses as well as graph-based analyses.</ns0:p><ns0:p>Thus, making our claims more robust.</ns0:p><ns0:p>The rest of the paper is organized as follows. Section 2 details relevant past literature related to this research. Section 3.1 describes the collected data. Section 3 elaborates on the methodology of our experiments, and section 4 shows all the results and analyses. Section 5 discusses the implications and limitations of the work. Finally, section 6 concludes our work with possible future works. </ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>RELATED WORKS</ns0:head><ns0:p>This research is based upon three broad areas of related works i. The relation between GBV and culture ii.</ns0:p><ns0:p>Social Media Content as a source of Data and iii. GBV through social media GBV and Culture GBV is a social ill evident across all the countries irrespective of their economy, language, and demography. However, with country, the type of GBV, its intensity, and the reaction of people vary <ns0:ref type='bibr' target='#b20'>Fakunmoju et al. (2017)</ns0:ref>. For example, in the USA, dating violence is more common than in Africa where there are comparatively lesser instances of dating violence <ns0:ref type='bibr' target='#b34'>Johnson et al. (2015)</ns0:ref>. On the other hand, in Africa, intimate partner violence is more prominent as compared to North America 6 . This implies that the same GBV is represented differently in a different country. This implies that GBV is a global evil but the context of GBV changes with the country. There have been many research to understand the causes and factors leading to <ns0:ref type='bibr'>GBV Jewkes et al. (2002</ns0:ref><ns0:ref type='bibr'>, 2017)</ns0:ref>; <ns0:ref type='bibr' target='#b44'>Marine and Lewis (2020)</ns0:ref>. These works have claimed that a country's culture can characterize the persisting GBV in the country. Every culture has norms, prejudices, and societal rules that design the behaviour of people towards GBV. For example, in Malawi, the concept of polygamy and dowry is evident in the culture, and these perpetuate GBV in Malawi <ns0:ref type='bibr' target='#b7'>Bisika (2008)</ns0:ref>. Similar research in many other countries like UK Aldridge (2021), Ethiopia <ns0:ref type='bibr' target='#b41'>Le Mat et al. (2019)</ns0:ref>, Cambodia <ns0:ref type='bibr' target='#b51'>Palmer and Williams (2017)</ns0:ref>, and many other countries <ns0:ref type='bibr' target='#b16'>Djamba and Kimuna (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b59'>Raj and Silverman (2002)</ns0:ref> have highlighted cultural norms which lead to one or other form of GBV.</ns0:p><ns0:p>Not only in research but global organisations like WHO 7 , World Bank 8 have also highlighted the cultural norms of many countries that influence GBV. The socio-cultural impact is so intense that people even justify instances of GBV as a form of the social norm which cannot be questioned <ns0:ref type='bibr' target='#b54'>Piedalue et al. (2020)</ns0:ref>.</ns0:p><ns0:p>However, these claims are supported by mere examples and small-scale interview-based data. Thus, the research community lags a data-driven research that justifies the claim with sufficient empirical results.</ns0:p><ns0:p>In this research, we do a large-scale analysis of social network content to find evidence of confluence between the culture of a country and GBV. Next, we present the role of social media content in bridging the gap of data for various research. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science <ns0:ref type='bibr' target='#b55'>Prakash and Majumdar (2021)</ns0:ref>. For this research, we also use Hofstede's dimensions which have been used in a huge number of research to measure culture.</ns0:p><ns0:p>GBV through the lens of social media Social media provides an uncensored and user-friendly medium for expressing views and opinions <ns0:ref type='bibr' target='#b56'>Puente et al. (2021)</ns0:ref>. 
With this, social media has become a platform for self-expression as well as for conducting online campaigns <ns0:ref type='bibr' target='#b45'>Martínez et al. (2021)</ns0:ref>. There have been many campaigns on social media related to GBV like the #metoo, #Notokay, #StateOfWomen, #HeForShe and many more <ns0:ref type='bibr' target='#b36'>Karuna et al. (2016)</ns0:ref>. These campaigns and freedom of expression on social media have generated huge data related to GBV. The recent campaign of #Metoo observed an unprecedented response from all around the globe, thus, generating huge data related to GBV. And the event was followed by a sudden upsurge in research related to GBV using the generated data.</ns0:p><ns0:p>Thus, the data availability of social media has helped in many recent works related to GBV, which have delivered a multitude of interesting findings <ns0:ref type='bibr' target='#b47'>Moitra et al. (2021)</ns0:ref>; Razi (2020); <ns0:ref type='bibr' target='#b52'>Pandey et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b37'>Khatua et al. (2018)</ns0:ref>. Moreover, location-tagged social media data also assist in several cross-cultural studies related to <ns0:ref type='bibr'>GBV Purohit et al. (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b67'>Starkey et al. (2019)</ns0:ref>. In this paper, we also use social media</ns0:p><ns0:p>Twitter data from different countries of the world as a source of data.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>METHODOLOGY</ns0:head><ns0:p>In this section, we first give details of the collected dataset and its processing. Further, in the section, we elaborate on the methodology used to understand the relation between GBV and culture. We process the country-level tweets from different categories for correlation analysis and graph analysis w.r.t culture of different countries. The flow of methodology is represented in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. Next, we explain the details of data collection and its processing to obtain country-wise GBV tweets.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Dataset and Processing</ns0:head><ns0:p>We use public streams of Twitter data collected using the Twitter Streaming API. We procured 1% of public tweets provided by the API for a period of two years and five months (1 st July 2016 -25 th Nov 2018). We remove all the duplicate tweets and retweets from the collected data as these do not add any Manuscript to be reviewed</ns0:p><ns0:p>Computer Science new information <ns0:ref type='bibr' target='#b9'>Cheke et al. (2020)</ns0:ref>. From the collected tweets, we extract GBV related tweets using a keyword matching approach as described next.</ns0:p><ns0:p>GBV Tweet Extraction UNFPA (United Nations Population Fund) domain experts have proposed three categories of GBV, namely sexual violence, physical violence, and harmful practices. They have also provided unique keywords related to each category of GBV, which have been used frequently in past literature for GBV related research <ns0:ref type='bibr' target='#b58'>Purohit et al. (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b19'>ElSherief et al. (2017)</ns0:ref>. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> shows a total of 81 keywords constituting 29, 25, and 27 keywords from sexual violence, physical violence, and harmful practices respectively. We use the same keywords to extract relevant tweets from all three categories.</ns0:p><ns0:p>The keyword set provided by UNFPA is very precise and can contain multi-words. Our methodology for extracting tweets for a particular keyword is based on the presence of the keyword in a tweet. If all the words of a multi-word keyword are present in a tweet regardless of order, we consider it a match.</ns0:p><ns0:p>For example, for the category sexual violence, sexual assault is a related keyword with two words. If a tweet contains both the words sexual and assault, we consider it a match. For the cases where a tweet matches more than one category, we consider the tweet in both categories of GBV. This approach has been used in previous works in order to deliver high-precision data <ns0:ref type='bibr' target='#b58'>Purohit et al. (2015)</ns0:ref>. From the keywords related to each category of GBV, we extract tweets and create a tweet dataset from three categories, namely the sexual violence dataset, physical violence dataset, and harmful practices dataset, with a total of 0.83 million, 0.53 million, and 0.66 million tweets respectively. Further, we combined all three category tweets to create a GBV tweet dataset containing more than 2 million tweets.</ns0:p></ns0:div>
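For illustration, a minimal Python sketch of the order-independent keyword matching described above; the keyword lists shown are abbreviated stand-ins for the full UNFPA keyword set, and the function names are ours, not the authors' released code.

# Illustrative sketch of the unordered multi-word keyword matching: a tweet
# matches a GBV category if every word of any of that category's keywords
# appears in the tweet, regardless of word order.
CATEGORY_KEYWORDS = {                       # abbreviated keyword sets (assumption)
    "sexual_violence":   ["sexual assault", "gang rape", "victim blame"],
    "physical_violence": ["domestic violence", "acid attack"],
    "harmful_practices": ["child marriage", "human trafficking", "fgm"],
}

def match_categories(tweet_text):
    """Return every GBV category whose keywords all occur in the tweet."""
    tokens = set(tweet_text.lower().split())
    matched = []
    for category, keywords in CATEGORY_KEYWORDS.items():
        for keyword in keywords:
            # order-independent match: all words of the keyword must be present
            if all(word in tokens for word in keyword.split()):
                matched.append(category)
                break
    return matched

print(match_categories("Another assault case, the sexual predator walked free"))
# -> ['sexual_violence']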
<ns0:div><ns0:head>Generic Tweet Dataset</ns0:head><ns0:p>We create another dataset namely generic tweet dataset to provide a better context of comparison with other categories of the dataset. This dataset is used for drawing inferences from GBV categories dataset w.r.t a generic dataset. For creating this dataset, we borrow the methodology of <ns0:ref type='bibr' target='#b19'>ElSherief et al. (2017)</ns0:ref>. Our collected data is for a very long period, resulting in around 4 billion tweets.</ns0:p><ns0:p>We extract a random 1% sub-sample of total collected tweets as a generic tweet dataset. To eliminate duplicate content here as well, we remove tweets which are duplicates and retweets. Details of generic tweets data are given in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>. We also make the required data for this research and the associated codes publicly available 9 .</ns0:p></ns0:div>
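A rough sketch, under the assumption of standard Twitter API field names ('text', 'retweeted_status') rather than the authors' schema, of how the generic dataset could be drawn as a de-duplicated 1% random sub-sample.

import random

def generic_sample(tweets, fraction=0.01, seed=42):
    # Drop retweets and duplicate texts, then keep a random sub-sample.
    seen_texts = set()
    unique = []
    for tw in tweets:
        if "retweeted_status" in tw:          # skip retweets
            continue
        text = tw["text"].strip()
        if text in seen_texts:                # skip duplicate content
            continue
        seen_texts.add(text)
        unique.append(tw)
    if not unique:
        return []
    random.seed(seed)
    k = max(1, int(len(unique) * fraction))
    return random.sample(unique, k)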
<ns0:div><ns0:head>Country-level Location Tagging</ns0:head><ns0:p>There are many indicators of location in a tweet, such as geotags, time zone, and profile location.</ns0:p><ns0:p>Adopting the location indicators of <ns0:ref type='bibr' target='#b4'>Annamoradnejad et al. (2019)</ns0:ref> for tagging each tweet to a location, we use a 3-level hierarchy of location indicative according to their accuracy levels <ns0:ref type='bibr' target='#b40'>Kulshrestha et al. (2012)</ns0:ref>.</ns0:p><ns0:p>The first one is geotag, which gives the most accurate location information. If a geotag is available, then we use it for location tagging, and if it is not present, we look for the time zone data. Time zone is also an accurate way to tag country-level locations. A time zone data directly contains the user's time zone in the form of the corresponding country name. For the cases where even time zone information is not available, then we look for the next location information in the hierarchy, i.e. location field mentioned in the user profile. Geotags and time zone contain exact country names, which can be directly mapped to a country. User profile location is an unstructured text location field that requires further processing to get country information. For this, we use the approach used by <ns0:ref type='bibr' target='#b17'>Dredze et al. (2016)</ns0:ref> where city names present in the user profile location are mapped to corresponding country names based on the Geoname 10 world gazetteer. We borrow the list of required countries from <ns0:ref type='bibr' target='#b4'>Annamoradnejad et al. (2019)</ns0:ref> where authors have used a list of 22 countries, namely Arab countries, Argentina, Australia, Brazil, Canada, China, Colombia, France, Germany, India, Indonesia, Iran, Italy, Japan, Korea, Philippines, Russia, Spain, Thailand, Turkey, UK (United Kingdom), USA (United States of America). Arab Countries is a group of countries with a similar culture, so we merged tweets from all Arab Countries. There were very few tweets from the country Korea (170), so we discarded Korea from the list of considered countries and limited our research to the remaining 21 countries, each having more than 3000 tweets. We apply the same location tagging scheme to all the GBV tweets and generic tweets. The complete data statistics are shown in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref> for all the categories of tweets.</ns0:p><ns0:p>We present the evaluation methodology and evaluation results of GBV tweets extraction and location tagging in section 4. In this research, we want to explore the relationship between GBV online content and culture of a country. For this, we perform two analyses i. Correlation Analyses and ii. Graph Analyses. In correlation analysis, we correlate culture and its dimensions with different categories of online content in order to understand their relationship. In graph analyses, we create country graphs on the basis of parameters correlated in correlation analyses like content, and culture, which are compared using multiple graph comparison metrics in order to re-assure the observed relationships from correlation analyses. Next, we discuss the methodology used for these analyses.</ns0:p></ns0:div>
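A minimal sketch of the three-level location hierarchy (geotag, then time zone, then profile location); the small CITY_TO_COUNTRY dictionary stands in for the GeoNames gazetteer, and the tweet field names are assumptions based on the Twitter API rather than the authors' code.

# Sketch of the three-level location hierarchy described above.
CITY_TO_COUNTRY = {"mumbai": "India", "toronto": "Canada", "lyon": "France"}

def tag_country(tweet):
    place = tweet.get("place") or {}
    if place.get("country"):                       # 1) geotag / place object
        return place["country"]
    user = tweet.get("user") or {}
    if user.get("time_zone"):                      # 2) time zone field
        return user["time_zone"]
    profile_loc = (user.get("location") or "").lower()
    for city, country in CITY_TO_COUNTRY.items():  # 3) profile location via gazetteer
        if city in profile_loc:
            return country
    return None                                    # no location indicator: tweet is discarded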
<ns0:div><ns0:head n='3.2'>Correlation Analysis</ns0:head><ns0:p>In order to find a relation between the culture of a country and GBV, we calculate cultural distance and content distance between each pair of countries as detailed next.</ns0:p><ns0:p>Cultural Distance: We quantify the cultural distance between two countries using cultural dimensions proposed by Geert Hofstede <ns0:ref type='bibr' target='#b30'>Hofstede et al. (2005)</ns0:ref>. Geert Hofstede administered a huge survey among people from different countries to measure the difference in the way they behave. He has quantified six dimensions of culture (power distance 11 , individualism 12 , masculinity 13 , uncertainty avoidance 14 , long-term orientation 15 , indulgence 16 ) for different countries in values ranging between 0 − 120. In order to measure cultural distance between two countries, we adopt the formulation of <ns0:ref type='bibr' target='#b4'>Annamoradnejad et al. (2019)</ns0:ref> where authors use the euclidean distance between two countries to measure the cultural distance. The cultural distance can be formulated as shown in equation 1, where |D| is the total number of dimensions, d i c 1 and d i c 2 are the values of dimension d i for countries c 1 , c 2 respectively.</ns0:p><ns0:formula xml:id='formula_0'>CulturalDistance(C 1 ,C 2 ) = |D| ∑ i=1 (d i c 1 − d i c 2 ) 2 (1)</ns0:formula><ns0:p>We also calculate the distance between countries on the basis of each dimension of culture proposed by Hofstede. For example, power distance is one of the dimensions of culture, and we need to calculate the distance between two countries according to power distance. For this also, we use euclidean distance, but since there is only one parameter, this becomes equivalent to |d c 1 − d c 2 |. For further analyses, we calculate the cultural distance for each pair of countries on the basis of culture and six dimensions of culture.</ns0:p><ns0:p>Content Distance: Online content related to a particular topic from a particular country captures country-level user comments and discussions on that topic <ns0:ref type='bibr' target='#b9'>Cheke et al. (2020)</ns0:ref>. In order to measure the difference between contents from two countries, we measure the content distance between two countries using Jaccard Similarity. First, all the tweets from each country are pre-processed to generate country-wise tweet tokens, details of which are given next.</ns0:p></ns0:div>
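A minimal Python sketch of the cultural distance of Equation (1), i.e. the Euclidean distance over the six Hofstede dimensions, together with the single-dimension case; the Hofstede scores below are illustrative placeholders, not the published values.

import math

# Placeholder Hofstede scores for two countries (illustrative only).
HOFSTEDE = {
    "USA":    {"pdi": 40, "idv": 91, "mas": 62, "uai": 46, "lto": 26, "ind": 68},
    "Canada": {"pdi": 39, "idv": 80, "mas": 52, "uai": 48, "lto": 36, "ind": 68},
}

def cultural_distance(c1, c2, dims=("pdi", "idv", "mas", "uai", "lto", "ind")):
    # Euclidean distance over all six dimensions (Equation 1).
    return math.sqrt(sum((HOFSTEDE[c1][d] - HOFSTEDE[c2][d]) ** 2 for d in dims))

def dimension_distance(c1, c2, dim):
    # Single-dimension case reduces to an absolute difference.
    return abs(HOFSTEDE[c1][dim] - HOFSTEDE[c2][dim])

print(round(cultural_distance("USA", "Canada"), 2))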
<ns0:div><ns0:head>Tweet Preprocessing</ns0:head><ns0:p>We adopt the pre-processing settings of <ns0:ref type='bibr' target='#b9'>Cheke et al. (2020)</ns0:ref> to generate tweet tokens from each country.</ns0:p><ns0:p>We first remove URLs, mentions, punctuation, extra spaces, stop words, and emoticons. Online acronyms 11 This is a measure of the level of acceptance of unequal power in society.</ns0:p><ns0:p>12 This is a measure of rights and concerns of each person rather than for a group or community. 13 This is a measure of the distribution of gender-based roles in society.</ns0:p><ns0:p>14 This is a measure of likeliness that people avoid uncertainty. 15 This parameter measures the characteristics of perseverance and futuristic mindset among people. 16 This measures the degree of fun and enjoyment a society allows.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>and short forms are expanded using NetLingo 17 . For hashtags, we removed the symbol # and kept the remaining word. Spelling and typos are corrected using Textblob 18 . We also transliterated non-English words to English to reduce inconsistencies in language. Lastly, we tokenized each tweet using NLTK (Natural Language Toolkit). Extracted tokens from all the tweets of a country are merged to create country-wise tweet tokens. Next, for each pair of countries, we calculate content distance using the formula shown in equation 2, where C1, C2 are the set of all the tweet tokens of countries C1 and C2, respectively, and |C1∩C2| |C1∪C2| is the Jaccard Similarity 19 .</ns0:p><ns0:formula xml:id='formula_1'>ContentDistance(C1, C2) = 1 − |C1 ∩C2| |C1 ∪C2| (2)</ns0:formula><ns0:p>We have a total of 5 categories of online content i.e. sexual violence, physical violence, harmful practices, GBV, and generic content. We apply the same methodology to extract country-wise tweet tokens from each category of online content.</ns0:p><ns0:p>Correlation: A correlation helps in understanding the relationship between two variables. Pearson correlation and Spearman correlation are two popular metrics for correlation. To establish robustness in our findings, we use both Pearson correlation, and Spearman correlation for calculating the association between content distance and cultural distance. Pearson correlation captures the linear relationship between two variables and Spearman correlation captures the monotonic relationship between two variables. Both the correlation metrics give correlation values in the range of (−1 to +1). A positive correlation value indicates that content similarity is higher for countries having higher cultural similarity and a negative correlation indicates vice versa. For calculating the correlation, we calculate content distance and corresponding cultural distance for each country pair (a total of n C 2 pairs, if there are n countries). For exhaustive correlation analysis, we measure multiple correlations by keeping one correlation variable as different types of content (sexual, physical, harmful, GBV, and generic tweets) and another variable as six dimensions of culture.</ns0:p></ns0:div>
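A minimal sketch of the content distance of Equation (2), one minus the Jaccard similarity of two countries' token sets; the token sets here are toy examples, not the actual country-wise tweet tokens.

def content_distance(tokens_c1, tokens_c2):
    # One minus the Jaccard similarity between the two token sets (Equation 2).
    c1, c2 = set(tokens_c1), set(tokens_c2)
    jaccard = len(c1 & c2) / len(c1 | c2)
    return 1 - jaccard

usa    = {"assault", "victim", "harassment", "predator"}
canada = {"assault", "victim", "abuse", "predator"}
print(content_distance(usa, canada))   # 0.4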
<ns0:div><ns0:head>P-value:</ns0:head><ns0:p>For measuring the fitness of a correlation, we calculate the p-value for each correlation using the Python library SciPy 20 . A p-value measures the probability that the observed correlation between two data samples occurred by chance.</ns0:p></ns0:div>
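A minimal sketch of the correlation step with p-values computed via scipy.stats; the distance arrays are synthetic and only illustrate the call pattern, one value per country pair.

from scipy.stats import pearsonr, spearmanr

# Synthetic per-country-pair distances (illustrative only).
cultural_dist = [12.0, 45.5, 30.2, 80.1, 22.3, 64.7]
content_dist  = [0.41, 0.66, 0.52, 0.83, 0.47, 0.71]

r, p_r = pearsonr(cultural_dist, content_dist)       # linear relationship
rho, p_rho = spearmanr(cultural_dist, content_dist)  # monotonic relationship
print(f"Pearson r={r:.2f} (p={p_r:.3f}); Spearman rho={rho:.2f} (p={p_rho:.3f})")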
<ns0:div><ns0:head n='3.3'>Graph Analysis</ns0:head><ns0:p>Our objective is to compare GBV related content to a country's culture. To this end, we create country graphs where edge weights are decided on the basis of different distances in terms of GBV content and culture, as mentioned in section 3.2. For detailed analyses, we create multiple weighted graphs among countries with a different edge parameters. Finally, we compare created graphs using multiple graph distance metrics and graph clustering. GBV content Graph: This graph captures the relationship between countries according to GBV content distance. In GBV content graph G gbv = (C, E), the weights of the set of edges E are decided on the basis of the content distance score between two countries on the basis of GBV tweets. Here, GBV tweets are used for calculating content distance. We also create sexual violence graph, physical violence graph, and harmful practices graph where for assigning edge weights, we calculate content distance on sexual violence tweets, physical violence tweets, and harmful practices tweets, respectively.</ns0:p><ns0:p>Generic content Graph: This graph captures the relationship between countries and generic content.</ns0:p><ns0:p>In the generic content graph G rand = (C, E), the weights of the edges are assigned according to the content distance score between two countries on the basis of generic tweet data. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Cultural Graph: This graph captures the cultural relationship between countries. In the cultural graph G cult = (C, E), the weights of the edges are decided by the value of cultural distance calculated using equation 1.</ns0:p><ns0:p>Graph Pre-processing: For all the graphs G = (C, E), there is an edge between any pair of countries with a weight creating a complete graph. Further, all the created graphs have a different range of values of edge weight. For example, for GBV tweets graphs, edge weights will lie in the range (0,1), but for the culture graph, the values of weights can range from (0-120). To ensure consistency, we upscale edge weights in the range of (0,1) to a range of (0-120). Next, we pruned edges that are unimportant, i.e. whose weight is lesser than the median of all the edge weights. Thus, keeping only the important, i.e. higher edge weight edges in the graph. Before pre-processing, each graph is a complete graph with the same edges in all the graphs, but after pre-processing, each graph is a non-complete graph with only important edges resulting in different edges in each graph. The same is applied to all the graphs, and the final pre-processed graph is a weighted, un-directed, and non-complete graph.</ns0:p><ns0:p>We also mention that all the distances (content/culture) used to decide edge weight in all the graphs follow the axioms of distance <ns0:ref type='bibr' target='#b39'>Kosub (2019)</ns0:ref>.</ns0:p></ns0:div>
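A minimal networkx sketch of the country-graph construction and pre-processing described above (complete weighted graph, rescaling of (0,1) weights to the (0,120) range, and pruning of below-median edges); pairwise_distance is a placeholder for either the content-distance or the cultural-distance function.

import itertools
import statistics
import networkx as nx

def build_country_graph(countries, pairwise_distance, rescale=True):
    # Start from a complete weighted graph over the countries.
    g = nx.Graph()
    g.add_nodes_from(countries)
    for c1, c2 in itertools.combinations(countries, 2):
        w = pairwise_distance(c1, c2)
        # Content distances in (0,1) are upscaled to the (0,120) range; cultural
        # distances are already on that scale, so rescale=False would be passed.
        g.add_edge(c1, c2, weight=w * 120 if rescale else w)
    # Prune unimportant edges: those whose weight is below the median weight.
    median_w = statistics.median(w for _, _, w in g.edges(data="weight"))
    below = [(u, v) for u, v, w in g.edges(data="weight") if w < median_w]
    g.remove_edges_from(below)
    return g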
<ns0:div><ns0:head n='3.3.1'>Graph Comparison Metrics</ns0:head><ns0:p>For comparing two graphs, past literature has proposed a number of metrics depending upon the type of graphs. We compare the created graphs using the following distance metrics.</ns0:p></ns0:div>
<ns0:div><ns0:head>Distances:</ns0:head><ns0:p>• Quantum JSD: Quantum Jensen-Shannon Divergence De Domenico and Biamonte (2016) compares two weighted and undirected graphs by finding the distance between spectral entropy of density matrices.</ns0:p><ns0:p>• Degree Divergence: This method Hébert-Dufresne et al. ( <ns0:ref type='formula'>2016</ns0:ref>) compares the degree distribution of two graphs. This methodology is applicable to weighted as well as unweighted graphs but only undirected graphs.</ns0:p><ns0:p>• Jaccard Distance: Jaccard distance Oggier and Datta ( <ns0:ref type='formula'>2021</ns0:ref>) is applicable to only unweighted graphs, and its value depends on the number of common edges in the two compared graphs. For applying to our graphs, we coerced weighted graphs into unweighted ones by removing weights from all the graphs.</ns0:p><ns0:p>• Hamming Distance: Hamming distance is one of the popular techniques for measuring the distance between two unweighted graphs. This is a measure of element-wise disagreement between the two adjacency matrices of the graphs. We apply Hamming distance to our graphs by coercing weighted graphs to unweighted ones by simply removing the weights.</ns0:p><ns0:p>• HammingIpsenMikhailov: This method is an enhanced version of Hamming Distance which takes into account the disagreement between adjacency matrices and associated laplacian matrices. This is applicable to weighted and undirected graphs.</ns0:p><ns0:p>• Frobenius: This is an adjacency matrix level distance that calculates L2-norms of the adjacency matrices.</ns0:p><ns0:p>• NetLSD: A metric for measuring graph distance based on spectral node signature distributions for unweighted graphs. For this, we coerced our graphs to unweighted ones by removing the weights. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div>
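For illustration, a minimal sketch of two of the simpler distances listed above (Hamming and Jaccard), computed on unweighted versions of the country graphs with identical node sets; this is our own re-implementation, not the exact code used in the paper.

import networkx as nx

def hamming_distance(g1, g2):
    """Fraction of ordered node pairs on which the two edge sets disagree."""
    nodes = sorted(g1.nodes())
    a1 = nx.to_numpy_array(g1, nodelist=nodes) > 0
    a2 = nx.to_numpy_array(g2, nodelist=nodes) > 0
    n = len(nodes)
    return (a1 != a2).sum() / (n * (n - 1))

def jaccard_distance(g1, g2):
    """One minus the Jaccard similarity of the two edge sets."""
    e1 = {frozenset(e) for e in g1.edges()}
    e2 = {frozenset(e) for e in g2.edges()}
    return 1 - len(e1 & e2) / len(e1 | e2)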
<ns0:div><ns0:head n='4'>RESULTS</ns0:head><ns0:p>In this section, we first provide validation results for our proposed methodology of GBV tweet filtering and location tagging. Then we present the results and insights of correlation analyses and graph analyses in order to understand the correspondence between GBV and culture.</ns0:p><ns0:p>GBV tweet extraction and error analysis GBV tweet extraction is accomplished by tagging tweets using a keyword matching process. Following the keyword match verification methodology of Cheke et al. (2020), we employed three graduate annotators to manually tag the GBV category. Annotators were provided with a sample of tweets without any category information and were asked to manually tag each tweet to one or more categories of GBV (sexual violence, physical violence, harmful practices) with their own understanding and external online resources. Annotators were provided with a basic definition of GBV and its categories. For the purpose of validation, we created a balanced and shuffled sample of 6000 tweets with 2000 tweets from each category of GBV. For each tweet annotated by the three annotators, we select the majority category as the final category. Tweets that do not have any majority category are discarded. Considering the category tagged by annotators as the actual categories, we calculate the precision value of our keyword matching methodology for each category of GBV. The precision value for sexual violence is found to be 0.97, for physical violence 0.96, and for harmful practices to be 0.98.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_7'>3</ns0:ref> shows a few example tweets and tagged GBV categories. Examples 1 − 9 shows matching keywords and the tagged GBV category of the tweets from all three categories of GBV. Example 10 − 11 show tweets that contain keywords from more than a category of GBV. These tweets are kept in all the matching categories. In examples 12 − 13, keyword matching results in the wrong tagging of tweets because of contextual differences in tweets. As we can see in example tweet 12, the keywords woman and attacked belong to physical violence keywords, and hence the tweet is wrongly classified in the physical violence category. There are only a few such errors in GBV tweet category tagging arising because of changes in the context of tweets.</ns0:p></ns0:div>
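A minimal sketch of the precision computation against the annotators' majority vote, with toy labels; as described above, tweets with no majority among the three annotators are discarded.

from collections import Counter

def precision_against_majority(predicted, annotations):
    correct = total = 0
    for pred, votes in zip(predicted, annotations):
        label, count = Counter(votes).most_common(1)[0]
        if count < 2:            # no majority among the three annotators
            continue
        total += 1
        correct += int(pred == label)
    return correct / total if total else 0.0

predicted   = ["sexual", "physical", "harmful"]
annotations = [("sexual", "sexual", "physical"),
               ("physical", "physical", "physical"),
               ("harmful", "sexual", "harmful")]
print(precision_against_majority(predicted, annotations))  # 1.0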
<ns0:div><ns0:head>Evaluation of Location Tagging</ns0:head><ns0:p>We have a three-level hierarchy (geotags, time zone, profile location) of location tagging. Location tagging from time zone and geotags is completely accurate. For evaluating location tagging from profile location, a random sample of 10,000 tweets is given to three independent graduate annotators who were asked to manually tag a country-level location from their own understanding using online gazetteers and searches. The majority country name is selected as the final tagged country name. The profile location field with no majority among annotators is discarded. Considering the country tagged by manual annotation as the actual country, we obtained a precision score of 0.94.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Correlation Analysis</ns0:head><ns0:p>Table <ns0:ref type='table'>4</ns0:ref> and Table <ns0:ref type='table' target='#tab_8'>5</ns0:ref> show the results of pearson correlation and spearman correlation of different types of online content with culture and its parameters. From the tables, we can draw the following observations.</ns0:p><ns0:p>1. GBV content and all categories of GBV content show a positive correlation with culture and all its parameters by both the correlation metrics. For example, the correlation between GBV content and culture is 0.55, with a significant p-value of 0.001. Similarly, the correlation between culture and other categories of GBV i.e. sexual violence, physical violence, and harmful practices content is 0.51, 0.53, and 0.53, respectively with significant p-values.</ns0:p><ns0:p>2. The three parameters of culture uncertainty avoidance, indulgence, and individualism show comparatively higher correlation values as compared to other parameters of culture power distance, masculinity, and long-term orientation. This observation is consistent with all the categories of GBV content and with both the correlation metrics. The pearson correlation of uncertainty avoidance, indulgence, and individualism with GBV tweet content is 0.36, 0.34, and 0.45 (Table <ns0:ref type='table'>4</ns0:ref>). On the other hand, the pearson correlation of power distance, masculinity, and long-term orientation with GBV tweets content is 0.27, 0.16, and 0.17 respectively.</ns0:p><ns0:p>3. We also observe that the same content analyses performed for GBV content did not show a similar strong and consistent correlation with generic content. The pearson correlation between generic content and culture is 0.33, which is much lower than the pearson correlation between GBV content and culture, i.e. 0.55. Additionally, generic content fails to show any correlation with culture and its parameters from spearman correlation.</ns0:p><ns0:p>Observation 1 indicates that GBV content has an influence of culture and all six parameters of culture.</ns0:p><ns0:p>The observation is consistent with all the categories of GBV content, i.e. sexual violence, physical violence, and harmful practices. Additionally, we also show the scatter plots of the culture and different GBV content types in Figure <ns0:ref type='figure'>.</ns0:ref> 2 to reconfirm the findings. These all results implies that countries with similar culture also show higher similarity in GBV content. GBV content is composed of discussions, news, comments generated by users on the topic related to GBV. The reason for the similarity in GBV content for similar culture countries is the similarity in their discussions, news, and comments. For further Manuscript to be reviewed</ns0:p><ns0:p>Computer Science understanding, we manually discern the content of similar culture countries. According to Hofstede's dimensions, USA and Canada are more similar 22 in culture as compared to USA and Iran. Similarly, Iran is more similar to Arab countries as compared to the USA. Table <ns0:ref type='table' target='#tab_10'>6</ns0:ref> shows the scores of cultural distance between different countries using Hofstede's dimensions. In order to show the content differences of different culture countries, we plot the word clouds of common frequent words of USA-Canada and Arab countries-Iran in Figures <ns0:ref type='figure' target='#fig_4'>3(a</ns0:ref>) and 3(b), respectively. 
For the countries USA and Canada, we find keywords like gfriend, whitesupremacist, objectifying as common frequent keywords. For the countries Arab countries and Iran, we find keywords like veiled, hijab, attacked, predator as common frequent keywords.</ns0:p><ns0:p>The highlights in observation 2 suggest that a few parameters of culture also play an important role in shaping the content related to GBV. Interestingly, Hofstede's parameters uncertainty avoidance, indulgence, and individualism are found to show more impact on GBV related content than other parameters like power distance, masculinity, and long term orientation. For further reconfirming the connection between these parameters and different types of GBV content, we also show the scatter plots of these parameters and different content in </ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>Graph Analysis</ns0:head><ns0:p>We first summarize the statistical characteristics of the created graphs in our research in Table <ns0:ref type='table' target='#tab_11'>7</ns0:ref>. All the graphs show similar basic properties because there is node correspondence in all the graphs. The edges Table <ns0:ref type='table' target='#tab_13'>8</ns0:ref> shows the distance between different created graphs from various metrics. For each distance metric, if the distance value between a pair of graphs (G1, G2) is smaller than the distance between another pair of graphs (G3, G2), it means that the graph G2 is more similar to G1 as compared to G3. From the table, we observe that for all the metrics, the distance between the generic tweet graph and the culture graph is consistently higher than the distance between other graphs (GBV-culture, sexual violence-culture, physical violence-culture, and harmful practices-culture). For example, the metric QuantumJSD, the distance between generic graph and culture graph is 0.27 while for GBV graph and culture graph is 0.21.</ns0:p><ns0:p>For the same metric, the distance between the sexual graph-culture graph, the physical graph-culture graph, and the harmful graph-culture graph is 0.21, 0.20, and 0.22, respectively. This shows that the graph created using content distance by GBV and its categories are more similar among themselves and to the graph created using cultural distance. The graph created using the generic content is consistently more distant from the cultural graph as compared to other graphs. This observation re-validates the observation from correlation analyses showing a higher degree of similarity between GBV content for similar culture the same cluster have a larger overlap with the GBV graph rather than the generic graph. For example, the countries USA, UK, and Australia belong to the same cluster in the culture graph, and the same is also true for GBV graph. However, for the generic graph, all three countries belong to different clusters.</ns0:p><ns0:p>This observation again shows a higher similarity between the culture graph and the GBV graph than the generic graph and the culture graph. Thus, we observe that the created clusters are also congruous to all other findings stating a higher level of relation between culture and GBV content, which is not the same for generic content.</ns0:p></ns0:div>
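A minimal sketch of the clustering comparison, assuming Louvain community detection as available in networkx (version 2.8 or later) rather than the specific implementation cited in the paper; it measures how consistently pairs of countries are grouped across two of the graphs built earlier (e.g. the culture graph versus the GBV graph).

import networkx as nx

def cluster_overlap(graph_a, graph_b, seed=0):
    # Louvain communities on each weighted country graph.
    part_a = nx.community.louvain_communities(graph_a, weight="weight", seed=seed)
    part_b = nx.community.louvain_communities(graph_b, weight="weight", seed=seed)
    label_a = {n: i for i, com in enumerate(part_a) for n in com}
    label_b = {n: i for i, com in enumerate(part_b) for n in com}
    nodes = sorted(graph_a.nodes())
    agree = total = 0
    for i, u in enumerate(nodes):
        for v in nodes[i + 1:]:
            total += 1
            same_a = label_a[u] == label_a[v]
            same_b = label_b[u] == label_b[v]
            agree += int(same_a == same_b)
    return agree / total   # fraction of country pairs grouped consistently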
<ns0:div><ns0:head n='5'>DISCUSSIONS</ns0:head><ns0:p>Implications: In this research, we use social media data to verify connections between the culture of a country and GBV. Our findings suggest that real-world hypotheses are also evident in social media data, and their verification is no longer dependent on survey-based data. We believe that this research not only validates the hypothesis of confluence between culture and GBV but also points to the possibility of verification of other hypotheses related to GBV.</ns0:p><ns0:p>A finer analysis can also reveal culture-specific traits of GBV, which can further enhance understanding of GBV across cultures. We argue that these analyses are vital for designing culture-aware policies and strategies to curb GBV. There is a huge possibility of discovery of many more cultural norms like those pronounced by the World Bank 23 , which can promote GBV. Thus, this research paves a path for Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>understanding culture-specific GBV using online social network data.</ns0:p><ns0:p>Limitations and Critiques: In this section, we show a few possible limitations and how our research overcomes those. In this paper, we have performed cross-cultural research using online content from Twitter. Here, we limit our research to English tweets only pertaining to two reasons. First, the GBV keywords are in English, resulting in a collection of English GBV content. Second, English has become the new lingua franca on Twitter <ns0:ref type='bibr' target='#b11'>Choudhary et al. (2018)</ns0:ref>, which delivers sufficient tweets for this research.</ns0:p><ns0:p>GBV data collection is based upon GBV keywords provided by UNFPA, which is a global organization.</ns0:p><ns0:p>The provided keywords can be incomplete and non-exhaustive. There might have scope for increasing these keywords; however, GBV is a sensitive issue, and extending keywords without the intervention of social experts may introduce errors. So, we limit this research to globally available keywords only.</ns0:p><ns0:p>Online content is much inflected by a flux of ongoing news and events, which can lead to differences in data patterns in certain time periods. However, our research is based upon data from a long temporal span which diffuses such temporal inflections <ns0:ref type='bibr' target='#b24'>Grieve et al. (2018)</ns0:ref>.</ns0:p><ns0:p>There can be many more ways to capture the distance between countries in terms of GBV, but we have limited this to content distance using two common metrics (cosine similarity and jaccard similarity).</ns0:p><ns0:p>The content distance used in this research captures the basic difference between tweet tokens of the two countries. However, the same methodology can be easily adapted to other twitter features and metrics.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>CONCLUSION AND FUTURE WORK</ns0:head><ns0:p>The article investigates evidence of the confluence between culture and GBV with the help of social media content. Social media content is explored by means of correlation analyses and graph-based analyses to find the traces of culture in GBV related social media content. In this research, we find a noteworthy influence of culture on GBV related content which is not apparent in generic content. The observation is consistent with different analyses and metrics. This research not only claims higher confluence between GBV and culture but also paves a path for effective policy-making and research related to GBV by means of social media content. Social media content captures behavioral aspects related to GBV, which can be used for other investigations related to GBV. As a future work of this research, we would like to understand the role of other factors like economy, unemployment,and crises in GBV. Moreover, this research is a global analysis of different countries of the world. We would also like to extend the research to a finer scale of states or counties within a country.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. An overview of our proposed methodology.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Country Graph: A country graph created in this research is an un-directed, weighted graph G = (C, E), where C denotes the nodes of the graph, which are countries, and E denotes the set of edges between countries. For all the graphs in this research, the set of nodes C and the set of edges E are the same. The only difference is in the weights of the edges. Next, we describe the creation of edges in the required graphs.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Figure. 4, Figure. 5 and Figure. 6. From the scatter plots, we can evidently observe that Hofstede's parameters uncertainty avoidance, indulgence, and individualism consistently show a close association (points are closer to the fitted line) with all types of GBV content. The same is not true for generic content. This shows a connection between these parameters of culture and different GBV categories content. The reasons for more influence of these parameters require further exploration which is outside the scope of this work. However, this observation again recommends a role of culture on GBV.Observation 3 further strengthens the findings of observations 1 and 2. The pattern of correlation showing a connection between culture and different categories of GBV is not the same for generic content.The lower and inconsistent correlation values of the generic content as compared to GBV content reinforce a stronger relationship between GBV content and the culture of a country. Further, the scatter plots shown in Figure. 2, Figure. 4, Figure. 5 and Figure. 6 also show that the points in all the plots of generic content are more scattered from the fitted line as compared to points in GBV and its category content plots. For example, the points in the GBV-culture plot (Figure. 2(a)) are closer to the fitted line, while in the generic-culture plot (Figure. 2(e)), the points are farther to the fitted lines showing a comparatively lower correlation. Other categories of content (sexual, physical, and harmful) also show a stronger correlation with culture as compared to generic content. Generic content is composed of content from different topics, a few of which can be highly correlated to culture Cheke et al. (2020), such as food, and a few can hardly show any correlation Annamoradnejad et al. (2019), such as technology. These characteristics of generic content can be the most probable reason for showing weak correlations. Here we show results of generic content just to provide a broader background for understanding. Next, we describe the results from graph analyses in order to validate findings from correlation analyses.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>22Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Scatter plots between different categories of content distance (GBV, sexual, physical, harmful practices, generic) and cultural distance.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Word Clouds showing common frequent keywords of culturally similar countries (USA-Canada and Arab Countries-Iran)</ns0:figDesc><ns0:graphic coords='13,155.98,371.67,180.09,90.19' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>Generic-Uncer. Avoid.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .Figure 5 .Figure 6 .</ns0:head><ns0:label>456</ns0:label><ns0:figDesc>Figure 4. Scatter plots between different categories of content distance (GBV, sexual, physical, harmful practices, generic) and cultural distance (uncertainty avoidance).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Clustering nodes (countries) in the culture graph, GBV graph, and generic graph.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Set of keywords to identify tweets from different categories. /girl/female attacked, boyfriend/boy-friend assault, stalking woman/women/girl/female, groping woman/women/girl/female, sexual/rape victim, gang rape, victim blame, sex predator, woman/women/girl/female forced</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Category</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Relevant Keywords</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>sexual</ns0:cell><ns0:cell>assault,</ns0:cell><ns0:cell>sexual</ns0:cell><ns0:cell>violence,</ns0:cell><ns0:cell>woman/women/girl/female</ns0:cell><ns0:cell>harass,</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>Sexual Violence woman/womenPhysical Violence woman/women/girl/female beat up, woman/women/girl/female acid attack, woman/women/girl/female violence, woman/women/girl/female punched, woman/women/girl/female attacked, gender/domestic violence, intimate partner</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>violence, physical abuse/violence</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='3'>child/children/underage/forced</ns0:cell><ns0:cell>marriage,</ns0:cell><ns0:cell>sex/child/children</ns0:cell><ns0:cell>trafficking,</ns0:cell></ns0:row><ns0:row><ns0:cell>Harmful Practices</ns0:cell><ns0:cell cols='6'>woman/women/girl/female trafficking, child molestation/bride/sex, child vio-lence/abuse/bullying/beat, spouse abuse, sex/women/forced slave, female genital</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='6'>mutilation (fgm), early marriage, pedophilia, human trafficking, woman abuse</ns0:cell></ns0:row><ns0:row><ns0:cell>GBV</ns0:cell><ns0:cell cols='6'>All the keywords from sexual violence, physical violence, and harmful practices</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>Social Media Content Social media has now become the new language of people, and this has generated a massive amount of data for various research. Social media data has removed the bottleneck of data requirements in numerous applications such as urban computingSilva et al. (2019), cultural </ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='2'>computing Wang et al. (2017), personality computing Samani et al. (2018) and many more. Social</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>media content also plays a huge role in understanding people's views and sentiments during the Covid-19</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>pandemic Malagoli et al. (2021). Social media content has already substituted the tedious, time consuming,</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>biased and under-represented survey-based data and has unlocked possibilities for research in many other</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>directions. The utility of social media content increases with the availability of location-tagged data.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>The location-tagged online content has been used in numerous ethnographic research Abdullah et al.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>(2015), cultural research Cheke et al. (2020), and sociological research Stewart et al. (2017) in recent</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>days. Social media content is not mere data, but it captures several dimensions of human interests,</ns0:cell></ns0:row><ns0:row><ns0:cell>6 https://apps.who.int/iris/bitstream/handle/10665/85239/9789241564625 eng.pdf</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>7 https://www.who.int/violence injury prevention/violence/norms.pdf</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='2'>8 https://thedocs.worldbank.org/en/doc/656271571686555789-0090022019/original/ShiftingCulturalNormstoAddressGBV.pdf</ns0:cell></ns0:row><ns0:row><ns0:cell>PeerJ Comput. Sci. reviewing PDF | (CS-2022:02:70572:2:0:NEW 29 Jun 2022)</ns0:cell><ns0:cell>3/19</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Descriptive statistics of collected data</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Data Category</ns0:cell><ns0:cell cols='2'>Before Location Tagging After Location Tagging</ns0:cell></ns0:row><ns0:row><ns0:cell>Sexual Violence</ns0:cell><ns0:cell>836,497</ns0:cell><ns0:cell>681,537</ns0:cell></ns0:row><ns0:row><ns0:cell>Physical Violence</ns0:cell><ns0:cell>534,707</ns0:cell><ns0:cell>433,560</ns0:cell></ns0:row><ns0:row><ns0:cell>Harmful Practices</ns0:cell><ns0:cell>659,666</ns0:cell><ns0:cell>522,844</ns0:cell></ns0:row><ns0:row><ns0:cell>GBV tweets</ns0:cell><ns0:cell>2,030,874</ns0:cell><ns0:cell>1,637,941</ns0:cell></ns0:row><ns0:row><ns0:cell>Generic tweets</ns0:cell><ns0:cell>42,445,234</ns0:cell><ns0:cell>36,689,133</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>psychology, and behavior. There have been works which found that the social media content reflects the</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>real-world properties as well. García-Gavilanes et al. (2014); Garcia-Gavilanes et al. (2013) have found</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>that interaction and usage of social networks are dependent on social, economic, and cultural aspects</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>of users. Thus, the real-world behavior of people is also mirrored in social networks. This utility of</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>social media content motivates us to examine the relationship between GBV and a country's culture</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>through analysis of social media data. For this research, we use Twitter data related to GBV from different</ns0:cell></ns0:row><ns0:row><ns0:cell>countries.</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>Measuring Culture Culture is an amalgamation of thoughts, beliefs, potential acts, and a lot more.</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>A number of definitions of culture are available from previous works Hofstede et al. (2005). One of</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>many definitions of culture is 'a fuzzy set of assumptions and values, orientations to life, beliefs, policies,</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>procedures and behavioural conventions that are shared by a group of people' Spencer-Oatey and</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Franklin (2012). Culture plays a vital role in many spheres of life, such as behaviour Huang et al. (2016)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>economy Herrero et al. (2021), language Sazzed (2021), attitude Shin et al. (2022). In order to ease out</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>research based on culture, there have been several quantification of culture. Hofstede has done one such</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>comprehensive quantification. Hofstede Hofstede et al. (2005) defines culture in terms of six parameters</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>(Power Distance, Uncertainty Avoidance, Individualism, Masculinity, Long Term Orientation, Indulgence)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>and quantifies each one for different countries of the world. An extensive set of previous works use</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>Hofstede Dimensions to quantify culture García-Gavilanes et al. (2014); Garcia-Gavilanes et al. (2013);</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Example tweets with their matching keywords and tagged GBV category. graph clustering algorithm clusters similar nodes in different groups. If two graphs are similar, then their clusters will also be similar. In order to compare the GBV graph and the generic graph with the culture graph, we use the Louvain community detection algorithm De<ns0:ref type='bibr' target='#b15'>Meo et al. (2011)</ns0:ref>. Louvain community detection algorithm is a clustering algorithm for nodes of a weighted graph where nodes are clustered on the basis of modularity between the nodes. Here the number of clusters is decided by the algorithm only.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Sexual (Error)</ns0:cell></ns0:row></ns0:table><ns0:note>S.No. Example Tweets Category 1 Why is harassment an automatic career hazard for a woman receiving any amount of professional attention? Sexual Violence 2 Girls are forced to sleep and authorities are POWERLESS. Europe is dead. Sexual Violence 3 Yesterday I asked my daughter's schools to stop slut-shaming and victim-blaming girls. It went viral. Sexual Violence 4 #Berlin metro attacker who kicked woman down stairs in random act of violence detained Physical Violence 5 laws don't cause divorce, domestic violence does Physical Violence 6 Some girls are beaten up by their boyfriends and stick around saying Ï see something in him. Physical Violence 7 Gather round children, I'm doing a thread on how this society sexualizes underage girls. Leggo. Harmful Practices 8 If you suspect a child is being abused You have a moral duty to report it Harmful Practices 9 A 12 year old child bride taking photos in her wedding dress. Can you imagine it? Harmful Practices 10 For most of these women, a history of sexual abuse and childhood trauma dragged them into prostitution Multiple 11 She has written books on sexual abuse, child molestation, domestic violence. Multiple 12 Woman With Too Much Makeup Mistaken As Clown; Attacked By Angry Mob Physical (Error) 13 all I want for my children is happiness! I don't care what their sexual preference Is.. No one can force them A</ns0:note></ns0:figure>
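To make the graph-clustering step described in the caption above concrete, here is a minimal sketch of Louvain community detection on a weighted country graph. It assumes NetworkX 2.8 or later (which provides louvain_communities); the country pairs and edge weights shown are illustrative placeholders, not values from the study.

```python
import networkx as nx
from networkx.algorithms.community import louvain_communities

def cluster_countries(weighted_edges):
    """Cluster the nodes of a weighted similarity graph with the Louvain
    modularity method; the algorithm chooses the number of communities."""
    g = nx.Graph()
    g.add_weighted_edges_from(weighted_edges)
    return louvain_communities(g, weight="weight", seed=42)

# Placeholder edges; in the study these would come from the GBV, generic,
# and culture graphs built over the 21 countries.
edges = [("India", "Indonesia", 0.8), ("India", "Iran", 0.7),
         ("UK", "USA", 0.9), ("UK", "Australia", 0.85),
         ("USA", "Australia", 0.9), ("Iran", "Turkey", 0.75)]
print(cluster_countries(edges))
```

Running the same clustering on two graphs over the same node set and comparing the resulting communities is one way to judge how similar the graphs are.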
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Spearman correlation coefficient and p-values between different cultural distances and different content distances. All the correlations all calculated on a sample of 210 country pairs, since there are 21 countries. Degree of freedom for the correlation analyses is 208.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='2'>GBV Tweets</ns0:cell><ns0:cell cols='5'>Sexual Violence Physical Violence Harmful Practices Generic Tweets</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='4'>Corr p-value Corr p-value Corr</ns0:cell><ns0:cell>p-value</ns0:cell><ns0:cell>Corr</ns0:cell><ns0:cell>p-value</ns0:cell><ns0:cell>Corr p-value</ns0:cell></ns0:row><ns0:row><ns0:cell>Cultural Distance</ns0:cell><ns0:cell>0.28</ns0:cell><ns0:cell cols='2'>0.001 0.24</ns0:cell><ns0:cell>0.001 0.25</ns0:cell><ns0:cell cols='2'>0.001 0.28</ns0:cell><ns0:cell>0.001 0.03</ns0:cell><ns0:cell>0.43</ns0:cell></ns0:row><ns0:row><ns0:cell>Power Distance</ns0:cell><ns0:cell>0.17</ns0:cell><ns0:cell cols='2'>0.001 0.23</ns0:cell><ns0:cell>0.001 0.21</ns0:cell><ns0:cell cols='2'>0.001 0.25</ns0:cell><ns0:cell>0.001 0.01</ns0:cell><ns0:cell>0.82</ns0:cell></ns0:row><ns0:row><ns0:cell>Masculinity</ns0:cell><ns0:cell>0.07</ns0:cell><ns0:cell cols='2'>0.11 0.07</ns0:cell><ns0:cell>0.13 0.07</ns0:cell><ns0:cell cols='2'>0.11 0.05</ns0:cell><ns0:cell>0.26 0.06</ns0:cell><ns0:cell>0.14</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Uncertainty Avoidance 0.27</ns0:cell><ns0:cell cols='2'>0.001 0.28</ns0:cell><ns0:cell>0.001 0.22</ns0:cell><ns0:cell cols='2'>0.001 0.27</ns0:cell><ns0:cell>0.001 0.09</ns0:cell><ns0:cell>0.03</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Long-term Orientation 0.08</ns0:cell><ns0:cell cols='2'>0.001 0.04</ns0:cell><ns0:cell>0.38 0.07</ns0:cell><ns0:cell cols='2'>0.11 0.07</ns0:cell><ns0:cell>0.11 0.12</ns0:cell><ns0:cell>0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Indulgence</ns0:cell><ns0:cell>0.30</ns0:cell><ns0:cell cols='2'>0.001 0.26</ns0:cell><ns0:cell>0.001 0.27</ns0:cell><ns0:cell cols='2'>0.001 0.26</ns0:cell><ns0:cell>0.001 0.01</ns0:cell><ns0:cell>0.74</ns0:cell></ns0:row><ns0:row><ns0:cell>Individualism</ns0:cell><ns0:cell>0.35</ns0:cell><ns0:cell cols='2'>0.001 0.33</ns0:cell><ns0:cell>0.001 0.36</ns0:cell><ns0:cell cols='2'>0.001 0.37</ns0:cell><ns0:cell>0.001 0.02</ns0:cell><ns0:cell>0.73</ns0:cell></ns0:row></ns0:table></ns0:figure>
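The correlations summarized in Table 5 can be reproduced in outline with SciPy's Spearman rank correlation; this is a hedged sketch, and the distance values below are invented placeholders standing in for the 210 country-pair distances.

```python
from scipy.stats import spearmanr

# Placeholder pairwise distances for a handful of country pairs; the paper
# computes this over 210 pairs (21 countries taken two at a time).
cultural_distance = [46.3, 78.9, 35.6, 72.1, 78.7, 59.7]
gbv_content_distance = [0.41, 0.62, 0.38, 0.55, 0.64, 0.47]

rho, p_value = spearmanr(cultural_distance, gbv_content_distance)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```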
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Cultural distance between different countries by Hofstede's dimensions.Arab Countries Argentina Australia Brazil Canada China Colombia France Germany India Indonesia Iran Italy Japan Philippines Russia Spain Thailand Turkey UK USA</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Arab Countries</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>46.3</ns0:cell><ns0:cell>78.9</ns0:cell><ns0:cell>35.6</ns0:cell><ns0:cell>72.1</ns0:cell><ns0:cell>78.7</ns0:cell><ns0:cell>59.7</ns0:cell><ns0:cell>59</ns0:cell><ns0:cell>81.7</ns0:cell><ns0:cell>41.7</ns0:cell><ns0:cell>50.6</ns0:cell><ns0:cell cols='2'>28.3 64.7</ns0:cell><ns0:cell>85.8</ns0:cell><ns0:cell>31.8</ns0:cell><ns0:cell>69.2</ns0:cell><ns0:cell>42.9</ns0:cell><ns0:cell>34</ns0:cell><ns0:cell>35.9</ns0:cell><ns0:cell cols='2'>89.1 78.4</ns0:cell></ns0:row><ns0:row><ns0:cell>Argentina</ns0:cell><ns0:cell>46.3</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>58.3</ns0:cell><ns0:cell>34.2</ns0:cell><ns0:cell>54.8</ns0:cell><ns0:cell>104</ns0:cell><ns0:cell>44.9</ns0:cell><ns0:cell>56.6</ns0:cell><ns0:cell>74.4</ns0:cell><ns0:cell>71.5</ns0:cell><ns0:cell>75.6</ns0:cell><ns0:cell cols='2'>38.9 62.7</ns0:cell><ns0:cell>80.9</ns0:cell><ns0:cell>67</ns0:cell><ns0:cell>89.1</ns0:cell><ns0:cell>36.9</ns0:cell><ns0:cell>47.7</ns0:cell><ns0:cell>35.9</ns0:cell><ns0:cell cols='2'>75.8 61.7</ns0:cell></ns0:row><ns0:row><ns0:cell>Australia</ns0:cell><ns0:cell>78.9</ns0:cell><ns0:cell>58.3</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>71.4</ns0:cell><ns0:cell>20.5</ns0:cell><ns0:cell>117</ns0:cell><ns0:cell>88.4</ns0:cell><ns0:cell>71.8</ns0:cell><ns0:cell>74.4</ns0:cell><ns0:cell>79.8</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell cols='2'>64.9 66.1</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell>86.4</ns0:cell><ns0:cell>120</ns0:cell><ns0:cell>70.3</ns0:cell><ns0:cell>85.3</ns0:cell><ns0:cell>78.1</ns0:cell><ns0:cell>34.5</ns0:cell><ns0:cell>7.9</ns0:cell></ns0:row><ns0:row><ns0:cell>Brazil</ns0:cell><ns0:cell>35.6</ns0:cell><ns0:cell>34.2</ns0:cell><ns0:cell>71.4</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>77.5</ns0:cell><ns0:cell>48.8</ns0:cell><ns0:cell>41.7</ns0:cell><ns0:cell>65.4</ns0:cell><ns0:cell>51.5</ns0:cell><ns0:cell>47.3</ns0:cell><ns0:cell cols='2'>41.5 58.5</ns0:cell><ns0:cell>70.1</ns0:cell><ns0:cell>49.7</ns0:cell><ns0:cell>63.7</ns0:cell><ns0:cell>26.8</ns0:cell><ns0:cell>32.6</ns0:cell><ns0:cell>14.5</ns0:cell><ns0:cell cols='2'>76.7 71.6</ns0:cell></ns0:row><ns0:row><ns0:cell>Canada</ns0:cell><ns0:cell>72.1</ns0:cell><ns0:cell>54.8</ns0:cell><ns0:cell>20.5</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell>84.8</ns0:cell><ns0:cell>60.2</ns0:cell><ns0:cell>60.4</ns0:cell><ns0:cell>67.4</ns0:cell><ns0:cell>86.7</ns0:cell><ns0:cell>58</ns0:cell><ns0:cell>57.7</ns0:cell><ns0:cell>92.5</ns0:cell><ns0:cell>79.1</ns0:cell><ns0:cell>107</ns0:cell><ns0:cell>58.8</ns0:cell><ns0:cell>73.2</ns0:cell><ns0:cell>66.8</ns0:cell><ns0:cell cols='2'>26.3 18.2</ns0:cell></ns0:row><ns0:row><ns0:cell>China</ns0:cell><ns0:cell>78.7</ns0:cell><ns0:cell>104</ns0:cell><ns0:cell>117</ns0:cell><ns0:cell>77.5</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>108</ns0:cell><ns0:cell>87</ns0:cell><ns0:cell>75.9</ns0:cell><ns0:cell>48.3</ns0:cell><ns0:cell>40.1</ns0:cell><ns0:cell cols='2'>89.6 
82.4</ns0:cell><ns0:cell>79.8</ns0:cell><ns0:cell>66.9</ns0:cell><ns0:cell>75.5</ns0:cell><ns0:cell>84.7</ns0:cell><ns0:cell>77.4</ns0:cell><ns0:cell>79.7</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell>112</ns0:cell></ns0:row><ns0:row><ns0:cell>Colombia</ns0:cell><ns0:cell>59.7</ns0:cell><ns0:cell>44.9</ns0:cell><ns0:cell>88.4</ns0:cell><ns0:cell>48.8</ns0:cell><ns0:cell>84.8</ns0:cell><ns0:cell>108</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>87.3</ns0:cell><ns0:cell>104</ns0:cell><ns0:cell>87.5</ns0:cell><ns0:cell>76.9</ns0:cell><ns0:cell cols='2'>59.7 97.5</ns0:cell><ns0:cell>98.3</ns0:cell><ns0:cell>65.4</ns0:cell><ns0:cell>104</ns0:cell><ns0:cell>69.4</ns0:cell><ns0:cell>54.8</ns0:cell><ns0:cell>56.3</ns0:cell><ns0:cell cols='2'>102 91.4</ns0:cell></ns0:row><ns0:row><ns0:cell>France</ns0:cell><ns0:cell>59</ns0:cell><ns0:cell>56.6</ns0:cell><ns0:cell>71.8</ns0:cell><ns0:cell>41.7</ns0:cell><ns0:cell>60.2</ns0:cell><ns0:cell>87</ns0:cell><ns0:cell>87.3</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>50.1</ns0:cell><ns0:cell>59.3</ns0:cell><ns0:cell>70</ns0:cell><ns0:cell cols='2'>65.4 39.1</ns0:cell><ns0:cell>64.8</ns0:cell><ns0:cell>75.7</ns0:cell><ns0:cell>53.6</ns0:cell><ns0:cell>28.1</ns0:cell><ns0:cell>64.8</ns0:cell><ns0:cell>38.6</ns0:cell><ns0:cell cols='2'>71.9 70.6</ns0:cell></ns0:row><ns0:row><ns0:cell>Germany</ns0:cell><ns0:cell>81.7</ns0:cell><ns0:cell>74.4</ns0:cell><ns0:cell>74.4</ns0:cell><ns0:cell>65.4</ns0:cell><ns0:cell>60.4</ns0:cell><ns0:cell>75.9</ns0:cell><ns0:cell>104</ns0:cell><ns0:cell>50.1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>63.9</ns0:cell><ns0:cell>76.1</ns0:cell><ns0:cell>81</ns0:cell><ns0:cell>31.5</ns0:cell><ns0:cell>49</ns0:cell><ns0:cell>90.7</ns0:cell><ns0:cell>79.8</ns0:cell><ns0:cell>54.9</ns0:cell><ns0:cell>81.9</ns0:cell><ns0:cell>64.6</ns0:cell><ns0:cell cols='2'>56.9 70.8</ns0:cell></ns0:row><ns0:row><ns0:cell>India</ns0:cell><ns0:cell>41.7</ns0:cell><ns0:cell>71.5</ns0:cell><ns0:cell>79.8</ns0:cell><ns0:cell>51.5</ns0:cell><ns0:cell>67.4</ns0:cell><ns0:cell>48.3</ns0:cell><ns0:cell>87.5</ns0:cell><ns0:cell>59.3</ns0:cell><ns0:cell>63.9</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>39.7</ns0:cell><ns0:cell cols='2'>50.3 55.3</ns0:cell><ns0:cell>79.8</ns0:cell><ns0:cell>37.7</ns0:cell><ns0:cell>68.8</ns0:cell><ns0:cell>55.1</ns0:cell><ns0:cell>52.3</ns0:cell><ns0:cell>54.3</ns0:cell><ns0:cell cols='2'>73.8 75.4</ns0:cell></ns0:row><ns0:row><ns0:cell>Indonesia</ns0:cell><ns0:cell>50.6</ns0:cell><ns0:cell>75.6</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell>47.3</ns0:cell><ns0:cell>86.7</ns0:cell><ns0:cell>40.1</ns0:cell><ns0:cell>76.9</ns0:cell><ns0:cell>70</ns0:cell><ns0:cell>76.1</ns0:cell><ns0:cell>39.7</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>77.4</ns0:cell><ns0:cell>81.4</ns0:cell><ns0:cell>46.1</ns0:cell><ns0:cell>62.1</ns0:cell><ns0:cell>59.2</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell>49.4</ns0:cell><ns0:cell cols='2'>95.7 99.3</ns0:cell></ns0:row><ns0:row><ns0:cell>Iran</ns0:cell><ns0:cell>28.3</ns0:cell><ns0:cell>38.9</ns0:cell><ns0:cell>64.9</ns0:cell><ns0:cell>41.5</ns0:cell><ns0:cell>58</ns0:cell><ns0:cell>89.6</ns0:cell><ns0:cell>59.7</ns0:cell><ns0:cell>65.4</ns0:cell><ns0:cell>81</ns0:cell><ns0:cell>50.3</ns0:cell><ns0:cell>60</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>68.4</ns0:cell><ns0:cell>96.7</ns0:cell><ns0:cell>47.3</ns0:cell><ns0:cell>87.1</ns0:cell><ns0:cell>44.7</ns0:cell><ns0:cell>30.6</ns0:cell><ns0:cell>43.1</ns0:cell><ns0:cell cols='2'>78.7 
65.3</ns0:cell></ns0:row><ns0:row><ns0:cell>Italy</ns0:cell><ns0:cell>64.7</ns0:cell><ns0:cell>62.7</ns0:cell><ns0:cell>66.1</ns0:cell><ns0:cell>58.5</ns0:cell><ns0:cell>57.7</ns0:cell><ns0:cell>82.4</ns0:cell><ns0:cell>97.5</ns0:cell><ns0:cell>39.1</ns0:cell><ns0:cell>31.5</ns0:cell><ns0:cell>55.3</ns0:cell><ns0:cell>77.4</ns0:cell><ns0:cell>68.4</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>51.7</ns0:cell><ns0:cell>78.6</ns0:cell><ns0:cell>72.6</ns0:cell><ns0:cell>44.3</ns0:cell><ns0:cell>76.6</ns0:cell><ns0:cell>56</ns0:cell><ns0:cell cols='2'>60.8 63.1</ns0:cell></ns0:row><ns0:row><ns0:cell>Japan</ns0:cell><ns0:cell>85.8</ns0:cell><ns0:cell>80.9</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell>70.1</ns0:cell><ns0:cell>92.5</ns0:cell><ns0:cell>79.8</ns0:cell><ns0:cell>98.3</ns0:cell><ns0:cell>64.8</ns0:cell><ns0:cell>49</ns0:cell><ns0:cell>79.8</ns0:cell><ns0:cell>81.4</ns0:cell><ns0:cell cols='2'>96.7 51.7</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>93.4</ns0:cell><ns0:cell>74.7</ns0:cell><ns0:cell>67.1</ns0:cell><ns0:cell>91.9</ns0:cell><ns0:cell>68</ns0:cell><ns0:cell cols='2'>91.8 100</ns0:cell></ns0:row><ns0:row><ns0:cell>Philippines</ns0:cell><ns0:cell>31.8</ns0:cell><ns0:cell>67</ns0:cell><ns0:cell>86.4</ns0:cell><ns0:cell>49.7</ns0:cell><ns0:cell>79.1</ns0:cell><ns0:cell>66.9</ns0:cell><ns0:cell>65.4</ns0:cell><ns0:cell>75.7</ns0:cell><ns0:cell>90.7</ns0:cell><ns0:cell>37.7</ns0:cell><ns0:cell>46.1</ns0:cell><ns0:cell cols='2'>47.3 78.6</ns0:cell><ns0:cell>93.4</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>82.6</ns0:cell><ns0:cell>66.2</ns0:cell><ns0:cell>48.7</ns0:cell><ns0:cell>56.8</ns0:cell><ns0:cell cols='2'>90.2 84.2</ns0:cell></ns0:row><ns0:row><ns0:cell>Russia</ns0:cell><ns0:cell>69.2</ns0:cell><ns0:cell>89.1</ns0:cell><ns0:cell>120</ns0:cell><ns0:cell>63.7</ns0:cell><ns0:cell>107</ns0:cell><ns0:cell>75.5</ns0:cell><ns0:cell>104</ns0:cell><ns0:cell>53.6</ns0:cell><ns0:cell>79.8</ns0:cell><ns0:cell>68.8</ns0:cell><ns0:cell>62.1</ns0:cell><ns0:cell cols='2'>87.1 72.6</ns0:cell><ns0:cell>74.7</ns0:cell><ns0:cell>82.6</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>57.1</ns0:cell><ns0:cell>72.6</ns0:cell><ns0:cell>55.2</ns0:cell><ns0:cell>117</ns0:cell><ns0:cell>118</ns0:cell></ns0:row><ns0:row><ns0:cell>Spain</ns0:cell><ns0:cell>42.9</ns0:cell><ns0:cell>36.9</ns0:cell><ns0:cell>70.3</ns0:cell><ns0:cell>26.8</ns0:cell><ns0:cell>58.8</ns0:cell><ns0:cell>84.7</ns0:cell><ns0:cell>69.4</ns0:cell><ns0:cell>28.1</ns0:cell><ns0:cell>54.9</ns0:cell><ns0:cell>55.1</ns0:cell><ns0:cell>59.2</ns0:cell><ns0:cell cols='2'>44.7 44.3</ns0:cell><ns0:cell>67.1</ns0:cell><ns0:cell>66.2</ns0:cell><ns0:cell>57.1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>42.6</ns0:cell><ns0:cell>17.9</ns0:cell><ns0:cell cols='2'>76.1 70.5</ns0:cell></ns0:row><ns0:row><ns0:cell>Thailand</ns0:cell><ns0:cell>34</ns0:cell><ns0:cell>47.7</ns0:cell><ns0:cell>85.3</ns0:cell><ns0:cell>32.6</ns0:cell><ns0:cell>73.2</ns0:cell><ns0:cell>77.4</ns0:cell><ns0:cell>54.8</ns0:cell><ns0:cell>64.8</ns0:cell><ns0:cell>81.9</ns0:cell><ns0:cell>52.3</ns0:cell><ns0:cell>40</ns0:cell><ns0:cell cols='2'>30.6 76.6</ns0:cell><ns0:cell>91.9</ns0:cell><ns0:cell>48.7</ns0:cell><ns0:cell>72.6</ns0:cell><ns0:cell>42.6</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>32.6</ns0:cell><ns0:cell cols='2'>91.8 
85.4</ns0:cell></ns0:row><ns0:row><ns0:cell>Turkey</ns0:cell><ns0:cell>35.9</ns0:cell><ns0:cell>35.9</ns0:cell><ns0:cell>78.1</ns0:cell><ns0:cell>14.5</ns0:cell><ns0:cell>66.8</ns0:cell><ns0:cell>79.7</ns0:cell><ns0:cell>56.3</ns0:cell><ns0:cell>38.6</ns0:cell><ns0:cell>64.6</ns0:cell><ns0:cell>54.3</ns0:cell><ns0:cell>49.4</ns0:cell><ns0:cell>43.1</ns0:cell><ns0:cell>56</ns0:cell><ns0:cell>68</ns0:cell><ns0:cell>56.8</ns0:cell><ns0:cell>55.2</ns0:cell><ns0:cell>17.9</ns0:cell><ns0:cell>32.6</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>84</ns0:cell><ns0:cell>78.5</ns0:cell></ns0:row><ns0:row><ns0:cell>UK</ns0:cell><ns0:cell>89.1</ns0:cell><ns0:cell>75.8</ns0:cell><ns0:cell>34.5</ns0:cell><ns0:cell>76.7</ns0:cell><ns0:cell>26.3</ns0:cell><ns0:cell>101</ns0:cell><ns0:cell>102</ns0:cell><ns0:cell>71.9</ns0:cell><ns0:cell>56.9</ns0:cell><ns0:cell>73.8</ns0:cell><ns0:cell>95.7</ns0:cell><ns0:cell cols='2'>78.7 60.8</ns0:cell><ns0:cell>91.8</ns0:cell><ns0:cell>90.2</ns0:cell><ns0:cell>117</ns0:cell><ns0:cell>76.1</ns0:cell><ns0:cell>91.8</ns0:cell><ns0:cell>84</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>28.5</ns0:cell></ns0:row><ns0:row><ns0:cell>USA</ns0:cell><ns0:cell>78.4</ns0:cell><ns0:cell>61.7</ns0:cell><ns0:cell>7.9</ns0:cell><ns0:cell>71.6</ns0:cell><ns0:cell>18.2</ns0:cell><ns0:cell>112</ns0:cell><ns0:cell>91.4</ns0:cell><ns0:cell>70.6</ns0:cell><ns0:cell>70.8</ns0:cell><ns0:cell>75.4</ns0:cell><ns0:cell>99.3</ns0:cell><ns0:cell cols='2'>65.3 63.1</ns0:cell><ns0:cell>100</ns0:cell><ns0:cell>84.2</ns0:cell><ns0:cell>118</ns0:cell><ns0:cell>70.5</ns0:cell><ns0:cell>85.4</ns0:cell><ns0:cell>78.5</ns0:cell><ns0:cell>28.5</ns0:cell><ns0:cell>0</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Statistical inferences from the created graphs.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Graph Attributes</ns0:cell><ns0:cell cols='6'>GBV Culture Sexual Physical Harmful Generic</ns0:cell></ns0:row><ns0:row><ns0:cell>Edges</ns0:cell><ns0:cell>105</ns0:cell><ns0:cell>105</ns0:cell><ns0:cell>105</ns0:cell><ns0:cell>105</ns0:cell><ns0:cell>105</ns0:cell><ns0:cell>105</ns0:cell></ns0:row><ns0:row><ns0:cell>Nodes</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>21</ns0:cell></ns0:row><ns0:row><ns0:cell>Clustering Coefficient</ns0:cell><ns0:cell>0.71</ns0:cell><ns0:cell>0.74</ns0:cell><ns0:cell>0.73</ns0:cell><ns0:cell>0.69</ns0:cell><ns0:cell>0.71</ns0:cell><ns0:cell>0.80</ns0:cell></ns0:row><ns0:row><ns0:cell>No of nodes in largest connected component</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>21</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>20</ns0:cell><ns0:cell>19</ns0:cell><ns0:cell>19</ns0:cell></ns0:row><ns0:row><ns0:cell>No of Connected components</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>3</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_13'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Comparative results of distances between different graphs</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Distance Metric</ns0:cell><ns0:cell cols='5'>GBV-Culture Sexual-Culture Physical-Culture Harmful-Culture Generic-Culture</ns0:cell></ns0:row><ns0:row><ns0:cell>QuantumJSD</ns0:cell><ns0:cell>0.21</ns0:cell><ns0:cell>0.21</ns0:cell><ns0:cell>0.20</ns0:cell><ns0:cell>0.22</ns0:cell><ns0:cell>0.27</ns0:cell></ns0:row><ns0:row><ns0:cell>DegreeDivergence</ns0:cell><ns0:cell>0.19</ns0:cell><ns0:cell>0.26</ns0:cell><ns0:cell>0.34</ns0:cell><ns0:cell>0.37</ns0:cell><ns0:cell>0.38</ns0:cell></ns0:row><ns0:row><ns0:cell>JaccardDistance</ns0:cell><ns0:cell>0.70</ns0:cell><ns0:cell>0.70</ns0:cell><ns0:cell>0.72</ns0:cell><ns0:cell>0.71</ns0:cell><ns0:cell>0.76</ns0:cell></ns0:row><ns0:row><ns0:cell>Hamming</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.29</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>0.41</ns0:cell></ns0:row><ns0:row><ns0:cell>HammingIpsenMikhailov</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.24</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>0.26</ns0:cell><ns0:cell>0.32</ns0:cell></ns0:row><ns0:row><ns0:cell>Frobenius</ns0:cell><ns0:cell>12.16</ns0:cell><ns0:cell>11.66</ns0:cell><ns0:cell>12.0</ns0:cell><ns0:cell>12.0</ns0:cell><ns0:cell>13.71</ns0:cell></ns0:row><ns0:row><ns0:cell>NetLSD</ns0:cell><ns0:cell>2.71</ns0:cell><ns0:cell>11.62</ns0:cell><ns0:cell>11.18</ns0:cell><ns0:cell>11.97</ns0:cell><ns0:cell>21.78</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='4'>https://www.who.int/violence injury prevention/violence/norms.pdf</ns0:note> <ns0:note place='foot' n='5'>https://thedocs.worldbank.org/en/doc/656271571686555789-0090022019/original/ShiftingCulturalNormstoAddressGBV.pdf</ns0:note>
</ns0:body>
" | "Authors’ Response to the review of
CS-2022:02:70572
June 29, 2022
We would like to thank all the reviewers for their constructive and insightful
comments, which we feel have substantially improved the overall quality of the
manuscript. We have tried our best to address all the comments.
Next, we outline each of the comments made by the reviewers along with our
response and the corresponding changes made in the paper. The responses to
each reviewer begin with a new section titled “Response to Comments from
Reviewer <reviewer number>”. Each comment by the reviewer appears in bold
font as a subsection, and our response is written in regular font.
1 Response to Comments from Reviewer 3
1. Given header keywords in Table 1, together with any heading, and
all table parameter names.
Response:
We thank the reviewer for drawing attention to the table headers. Following
this suggestion, we have updated and rechecked all the table headers. All the
changes are incorporated in the revised manuscript.
2. Authors have improved the language.
Response: We thank the reviewer for providing feedback regarding the
language of the manuscript. The manuscript has improved substantially with
your suggestions.
3. References have been updated to include some more recently
published work.
Response: We thank the reviewer for investing time in reviewing our manuscript
and helping to improve it.
4. Figures are OK.
Response: We are grateful to the reviewer for the helpful feedback and
suggestions.
5. They have updated the methodology section accordingly.
Response: We thank the reviewer again for all the suggestions made throughout
the revision of the manuscript.
" | Here is a paper. Please give your review comments after reading it. |
721 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Speech emotion recognition (SER) systems have evolved into an important method for recognizing a person in several applications, including e-commerce, everyday interactions, law enforcement, and forensics. However, an SER system's efficiency depends on the length of the audio samples used for testing and training.</ns0:p><ns0:p>Although the different suggested models successfully obtained relatively high accuracy, the degree of SER efficiency is not yet optimal because of the limited database, which results in overfitting and skewed samples. Therefore, the proposed approach presents a data augmentation method that shifts the pitch, uses multiple window sizes, stretches the time, and adds white noise to the original audio. In addition, a deep model is further evaluated to generate a new paradigm for SER. The data augmentation approach increased the limited amount of data from the Pakistani racial speaker speech dataset in the proposed system. The seven-layer framework was employed because it provides the best accuracy compared with other multilayer approaches and has been used in existing work to achieve a very high level of accuracy. The suggested system achieved 97.32% accuracy with a 0.032% loss at the 75%:25% splitting ratio. In addition, more than 500 augmented data samples were added. Therefore, the results show that deep neural networks with data augmentation can enhance SER performance on the Pakistani racial speech dataset.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Speaker emotion recognition (SER) is an attractive study since there are still many issues to address and many research gaps that need to be filled. However, deep learning (DL) and machine learning <ns0:ref type='bibr'>(ML)</ns0:ref> approaches have tackled SER challenges. Particularly in research that employs speech datasets with enormous volumes of data. The amount of data is increasing at the moment. Consequently, an expansion in the amount of data worldwide is inevitable. Social websites, personal archives, sensors, mobile devices, cameras, webcams, financial market data, and health data create hundreds of petabytes of data <ns0:ref type='bibr' target='#b20'>(Gupta and Rani (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b25'>Khan et al. (2022a))</ns0:ref>. By 2025, the World Economic Forum predicts that the world will create 463 exabytes of data every day. However, it is not easy to find the appropriate method to convert such a large volume of data into useful information.</ns0:p><ns0:p>, Therefore, artificial intelligence (AI) has been used in numerous fields of the latest studies. Previously, speech recognition studies utilizing ML achieved a high degree of precision by using the Gaussian Mixture Model (GMM) technique <ns0:ref type='bibr' target='#b40'>(Marufo da Silva et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b38'>Maghsoodi et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b45'>Mouaz et al. (2019)</ns0:ref>), and the Hidden Markov Model (HMM) technique <ns0:ref type='bibr' target='#b80'>(Veena and Mathew (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b10'>Bao and Shen (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b11'>Chakroun et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b42'>Maurya et al. (2018)</ns0:ref>). However, as the data increases, the level of accuracy with PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:70479:1:2:NEW 24 May 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science these techniques drops rapidly, to the point where this traditional ML approach suffers from low accuracy and generalization issues <ns0:ref type='bibr' target='#b86'>(Xie et al. (2018)</ns0:ref>). Nevertheless, this technique provides a reliable strategy for addressing data groupings, making it appropriate for various situations.</ns0:p><ns0:p>Several studies have been conducted regarding SER based on deep learning using different methods, such as the Deep Neural Network (DNN) <ns0:ref type='bibr' target='#b72'>(Seki et al. (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b48'>Najafian et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b41'>Matjka et al. (2016)</ns0:ref>; <ns0:ref type='bibr' target='#b17'>Dumpala and Kopparapu (2017)</ns0:ref>; <ns0:ref type='bibr' target='#b75'>Snyder et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b47'>Najafian and Russell (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b64'>Rohdin et al. (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b27'>Khan et al. (2021)</ns0:ref>; <ns0:ref type='bibr'>Amjad et al. (2021b,a,b)</ns0:ref>; <ns0:ref type='bibr' target='#b26'>Khan et al. (2022b)</ns0:ref>) and Convolutional Neural Network (CNN) methodologies used in the study <ns0:ref type='bibr' target='#b61'>(Ravanelli and Bengio (2019)</ns0:ref>) attained an overall accuracy of 85% with the TIMIT database and 96% with LibriSpeech. Using the deep learning technique, <ns0:ref type='bibr' target='#b5'>(An et al. 
(2019)</ns0:ref>) obtained 96.5 percent accuracy and significantly improved the ability to handle multiple issues in SER. However, DL requires a lot of training datasets, which are challenging to gather and expensive.</ns0:p><ns0:p>Therefore, this approach is not suitable for utilization with SER because it will yield overfitting problems and may lead to skewed data. The use of data augmentation (DA) is one solution to the problem of small data in the SER study. A DA approach is a technique that can be used to create additional training datasets by altering the shape of a training dataset. DA is helpful in many investigations, such as digital signal processing, object identification, and image classification <ns0:ref type='bibr' target='#b85'>(Wu et al. (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b32'>Li et al. (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b2'>Amjad et al. (2022)</ns0:ref>).</ns0:p><ns0:p>The DA technique has been extensively used in various fields of study because a few samples in many different DA classes can help solve a problem more effectively <ns0:ref type='bibr' target='#b91'>(Zheng et al. (2020)</ns0:ref>). For example, multiple SER studies using DA <ns0:ref type='bibr' target='#b69'>(Schlüter and Grill (2015)</ns0:ref>; <ns0:ref type='bibr'>Salamon and Bello (2017a,b)</ns0:ref>; <ns0:ref type='bibr' target='#b54'>Pandeya et al. (2018)</ns0:ref>) showed a reduction of up to 30% in classification errors and obtained 86.194% accuracy. Data augmentation includes several approaches that have been effectively used in various research, including Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) approaches <ns0:ref type='bibr' target='#b44'>(Moreno-Barea et al. (2020)</ns0:ref>). The suggested approach obtained accuracy using limited data, with 87.7 percent. In another investigation, scientists employed an auditory DA strategy to achieve an 82.6 percent accuracy for Mandarin-English code flipping <ns0:ref type='bibr' target='#b36'>(Long et al. (2020)</ns0:ref>). Pitch shifting is frequently utilized in DA, as presented in <ns0:ref type='bibr' target='#b87'>(Ye et al. (2020)</ns0:ref>), and achieved 90% accuracy. In addition, <ns0:ref type='bibr' target='#b16'>(Damskgg and Vlimki (2017))</ns0:ref> employs the time-stretched data augmentation approach when performing DA-based fuzzy identification on a variety of audio signals. <ns0:ref type='bibr' target='#b1'>(Aguiar et al. (2018)</ns0:ref>) incorporates Latin music's noise usage, shifting the pitch, loudness variation, and stretching the time to further enhance genre categorization. As a result, <ns0:ref type='bibr' target='#b63'>(Rituerto-Gonzlez et al. (2019)</ns0:ref>) reports an 89.45 percent accuracy using the database (LMD). We propose DA because it is proven to increase the quantity of the dataset so that it can help improve speaker recognition performance with an accuracy rate of 99.76</ns0:p><ns0:p>The proposed study presents a data augmentation method based on a seven-layer DNN for recognizing racial speakers in Pakistan by utilizing 400 audio samples from multiple classes of racial speakers in Pakistan. However, this kind of study may easily lead to multiclass difficulties due to the many classes it includes. On the other hand, DNN approaches are often utilized in SER <ns0:ref type='bibr' target='#b49'>(Nassif et al. (2019)</ns0:ref>). 
In addition, DNN is also a powerful model capable of achieving excellent performance in pattern recognition <ns0:ref type='bibr' target='#b52'>(Nurhaida et al. (2020)</ns0:ref>). The study undertaken by <ns0:ref type='bibr' target='#b51'>(Novotny et al. (2018)</ns0:ref>) in conjunction with Melfrequency cepstral coefficients (MFCC) has shown the effectiveness of DNN in SER and improved network efficiency in busy and echo conditions. Furthermore, DNN with Mel-frequency cepstral coefficients has outperformed numerous other research approaches on SER single networks <ns0:ref type='bibr' target='#b67'>(Saleem and Irfan Khattak (2020)</ns0:ref>). Additionally, DNN has been effectively fusing with augmented datasets. The presented approach employs a seven-layer neural network because the seven-layer technique yields the highest efficiency and accuracy when used in previous works with an average precision above 90% <ns0:ref type='bibr' target='#b34'>(Liu et al. (2016);</ns0:ref><ns0:ref type='bibr' target='#b90'>Zhang et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b33'>Li et al. (2019)</ns0:ref>). Furthermore, including the Pakistani speakers with many classes employing DNN with DA would improve the identification efficiency of multiple emotional classes. This paper is divided into three sections: Part 1 is an introduction that describes the significant issue and the studies done by the speaker; Part 2 is Related Works, which includes many existing works that support the proposed study; and Part 3 is DA, which describes DA and several methodologies that will be used in the research. Part 4 discusses DNNs, and the deep learning techniques employed. Next, the methodology is covered in Chapter 5. Then, Part 6 includes the research outcomes and a discussion.</ns0:p><ns0:p>Finally, section 7 is the conclusion, which covers various significant things about the conclusion of the research outcomes.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>RELATED WORKS</ns0:head><ns0:p>The proposed study on multi-racial voice recognition was carried out in many nations, like China <ns0:ref type='bibr' target='#b49'>(Nassif et al. (2019)</ns0:ref>), Africa <ns0:ref type='bibr' target='#b53'>(Oyo and Kalema (2014)</ns0:ref>), Italy <ns0:ref type='bibr' target='#b47'>(Najafian and Russell (2020)</ns0:ref>), Pakistan <ns0:ref type='bibr' target='#b76'>(Syed et al. (2020)</ns0:ref>; <ns0:ref type='bibr' target='#b55'>Qasim et al. (2016)</ns0:ref>), United States (Upadhyay and Lui (2018)), and India, through CNN and MFCC <ns0:ref type='bibr' target='#b6'>(Ashar et al. (2020)</ns0:ref>. It is a vital technique that many researchers have chosen to enhance SER efficacy <ns0:ref type='bibr' target='#b14'>(Chowdhury and Ross (2020)</ns0:ref>).</ns0:p><ns0:p>, In contrast, the limitations of multi-racial SER systems investigated in some studies included limited speech data and a lack of emotional classes. Therefore, weak data training methods may result from inaccurate outcomes. Nevertheless, some research in SER and multi-racial SER systems, such as automatic Urdu speech recognition using HMM, involves a ten-speaker category consisting of eight male and two female speakers with 78.2 percent accuracy. In addition, the study of multilingual, multi-speaker involves three classes, namely Javanese, Indonesian, and Sundanese <ns0:ref type='bibr' target='#b9'>(Azizah et al. (2020)</ns0:ref>). However, this investigation has limits regarding the number of emotional categories. Various types of SER studies have been conducted. For example, <ns0:ref type='bibr' target='#b18'>(Durrani and Arshad (2021)</ns0:ref>) used Deep Residual Network (DRN) with a 74.7 percent accuracy rate. Another study employing MFCC and Fuzzy Vector Quantization Modeling on hundred categories from the TIMIT database gives 98% accuracy, higher than other approaches such as Fuzzy Vector Quantization 2 and Fuzzy C-Means <ns0:ref type='bibr' target='#b74'>(Singh (2018)</ns0:ref>). The ML technique is still utilized in conjunction. The classic approaches, such as the HMM, recognize four Moroccan dialect speakers using 20 speakers; this research achieved a 90% accuracy rate for speaker recognition <ns0:ref type='bibr' target='#b45'>(Mouaz et al. (2019)</ns0:ref>).</ns0:p><ns0:p>A single-layer DNN with a data augmentation approach is also utilized to investigate the impact of stress on the performance of SER systems, obtaining an accuracy of 99.46% with the VOCE database (Rituerto-Gonzlez et al. ( <ns0:ref type='formula'>2019</ns0:ref>)). The VOCE database comprises 135 utterances from forty-five speakers.</ns0:p><ns0:p>In addition, the GMM and MFCC with the TIMIT database were utilized to recognize short utterances from 64 different regions and obtained 98.44% accuracy <ns0:ref type='bibr' target='#b12'>(Chakroun and Frikha (2020)</ns0:ref>). This accuracy is higher than the traditional GMM. Another approach was employed in a study <ns0:ref type='bibr' target='#b22'>(Hanifa et al. (2020)</ns0:ref>) that used 52 recordings of Malaysian recorded samples utilizing the MFCC in the feature extraction, with obtained an accuracy of 57%. Along with machine learning, numerous works in SER and multi-racial utilize the DL technique, regarded as a rigorous approach to SER. The Deep Learning technique with a deep neural network is used with different techniques, one of which is DA, as demonstrated in a study presented by <ns0:ref type='bibr' target='#b36'>(Long et al. 
(2020)</ns0:ref>) on the OC16-CE80 dataset. This Mandarin-English mixlingual speech corpus successfully produced an effective model for SER with an 86% accuracy. The above research has several similarities with the proposed study: the dataset containing speakers from multi-racial backgrounds, DA, and the MFCC feature extraction method. However, some preceding studies differed from the proposed study in many ways, including the number of speech categories, the length of the utterance, and the identification techniques utilized. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> explains the evolution of work on SER in further detail:</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>DATA AUGMENTATION</ns0:head><ns0:p>Researchers employ a method known as data augmentation to enhance the number of dataset samples.</ns0:p><ns0:p>DA is an approach for increasing the number of training datasets useful for neural network training <ns0:ref type='bibr' target='#b62'>(Rebai et al. (2017)</ns0:ref>) and has a major influence on deep learning with limited datasets <ns0:ref type='bibr' target='#b37'>(Ma et al. (2019)</ns0:ref>).</ns0:p><ns0:p>Furthermore, DA is a useful method for overcoming overfitting problems, enhancing model dependability, and increasing generalization <ns0:ref type='bibr' target='#b82'>(Wang et al. (2019)</ns0:ref>), which are common issues in machine learning.</ns0:p><ns0:p>Research-based on deep learning with data augmentation techniques is critical for improving prediction accuracy while dealing with massive volumes of data <ns0:ref type='bibr' target='#b44'>(Moreno-Barea et al. (2020)</ns0:ref>). There are a few data augmentation methods, including adding white noise into an original sample, shifting the pitch, loudness variation, multiple window sizes, and stretching the time. The small size of the dataset is a problem when utilizing deep learning approaches. The proposed approach used to overcome this issue is to induce noise into the training data.</ns0:p><ns0:p>Adding white noise: Adding white noise to a speaker's data enhancements recognition effectively <ns0:ref type='bibr' target='#b30'>(Ko et al. (2017)</ns0:ref>). This approach involves the addition of random sound samples with similar amplitude but various frequencies <ns0:ref type='bibr' target='#b43'>(Mohammed et al. (2020)</ns0:ref>). Using white noise in a speech signal increases the performance of SER <ns0:ref type='bibr' target='#b69'>(Schlüter and Grill (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b1'>Aguiar et al. (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b23'>Hu et al. (2018)</ns0:ref>). Furthermore, when white noise is added to an original sound gives a distinct sound effect, which increases the performance of Pitch shifting: is a commonly used method in an audio sample to increase or decrease the original tone of voice. Pitch variations are performed by using this technique without affecting playback speed <ns0:ref type='bibr' target='#b46'>(Mousa (2010)</ns0:ref>). In addition, a method is utilized in pitch shifting to increase the pitch of the original sound without changing the duration of the recorded sound clip <ns0:ref type='bibr' target='#b57'>(Rai and Barkana (2019)</ns0:ref>). For example, various studies on Singing Voice Detection (SVD) <ns0:ref type='bibr' target='#b19'>(Gui et al. (2021)</ns0:ref>), Environmental Sound Classification (ESC) <ns0:ref type='bibr' target='#b66'>(Salamon and Bello (2017b)</ns0:ref>), and domestic cat classification have shown that pitch shifting may be highly effective for DA <ns0:ref type='bibr' target='#b54'>(Pandeya et al. (2018)</ns0:ref>).</ns0:p><ns0:p>Time Stretching: is a way to change the speed or length of an audio signal without changing the tone.</ns0:p><ns0:p>Instead, it is used to manipulate audio signals <ns0:ref type='bibr' target='#b16'>(Damskgg and Vlimki (2017)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science width of the window, offset, and shape. 
To extract a part of a signal, the signal's value at time 't', signal[t], is multiplied by the value of the Hamming window at time 't', window[t], which is expressed as:</ns0:p><ns0:formula xml:id='formula_0'>windowed_signal[t] = window[t] * signal[t]</ns0:formula><ns0:p>A windowed signal is utilized to create characteristics for emotion recognition. For SER, a standard window size of 25 ms is employed to extract features with a 10 ms overlap <ns0:ref type='bibr' target='#b88'>(Yoon et al. (2018);</ns0:ref><ns0:ref type='bibr' target='#b77'>Tarantino et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b59'>Ramet et al. (2018)</ns0:ref>). On the other hand, some research has indicated that a larger window size improves emotion identification performance <ns0:ref type='bibr' target='#b13'>(Chernykh and Prikhodko (2018)</ns0:ref>; <ns0:ref type='bibr' target='#b78'>Tripathi et al. (2019)</ns0:ref>).</ns0:p><ns0:p>In addition, other studies have assessed the significance of step size (overlap window size), although SER analysis is usually conducted using a single window size <ns0:ref type='bibr' target='#b77'>(Tarantino et al. (2019)</ns0:ref>; <ns0:ref type='bibr' target='#b13'>Chernykh and Prikhodko (2018)</ns0:ref>).</ns0:p><ns0:p><ns0:ref type='bibr' target='#b77'>(Tarantino et al. (2019)</ns0:ref>) investigated the influence of overlap window size on SER and discovered that a small step size leads to a lower test loss. <ns0:ref type='bibr' target='#b13'>(Chernykh and Prikhodko (2018)</ns0:ref>) explored multiple window widths ranging from 30 ms to 200 ms before settling on a unique 200 ms window for the SER study.</ns0:p></ns0:div>
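As an illustration of the augmentation and windowing operations described in this section, the sketch below uses librosa and NumPy. It is a minimal example rather than the authors' exact implementation: the noise amplitude, pitch step, stretch rate, and the file name are assumed values chosen for demonstration.

```python
import numpy as np
import librosa

def augment(y, sr):
    """Return augmented variants of one utterance: additive white noise,
    pitch shifting, and time stretching (illustrative parameter values)."""
    noisy = y + 0.005 * np.random.randn(len(y))                  # white noise
    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)   # +2 semitones
    stretched = librosa.effects.time_stretch(y, rate=1.1)        # 10% faster
    return noisy, shifted, stretched

def hamming_frames(y, sr, win_ms=25, hop_ms=10):
    """Cut the signal into Hamming-windowed frames, i.e.
    windowed_signal[t] = window[t] * signal[t] for each frame."""
    win = int(sr * win_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    window = np.hamming(win)
    return [window * y[start:start + win]
            for start in range(0, len(y) - win + 1, hop)]

y, sr = librosa.load("speaker_001.wav", sr=16000)  # hypothetical file name
variants = augment(y, sr)
frames = hamming_frames(y, sr)
```

Each augmented variant is saved alongside the original clip so that the training set grows by several samples per utterance.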
<ns0:div><ns0:head n='4'>METHODOLOGY</ns0:head><ns0:p>Deep Learning has been used to create a variety of solid approaches for SER. The DNN is one of the most widely utilized deep learning approaches. In many SER studies, deep neural networks are employed because they have several benefits over conventional machine learning approaches. There are several benefits to using the DNN approach in many scientific domains, including object detection, geographic Manuscript to be reviewed </ns0:p><ns0:note type='other'>Computer Science</ns0:note></ns0:div>
<ns0:div><ns0:head n='4.1'>Input Layer</ns0:head><ns0:p>The input layer comprises nodes that receive the input data from variable A. These nodes are directly connected to the hidden units. Eleven input-layer features are generated after a preprocessing step utilizing the Principal Component Analysis (PCA) algorithm.</ns0:p></ns0:div>
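For illustration, the PCA reduction to eleven input features could be sketched as follows with scikit-learn; the raw feature matrix here is a random placeholder for the per-sample features extracted from the audio.

```python
import numpy as np
from sklearn.decomposition import PCA

# Placeholder: one row per audio sample, one column per raw feature.
raw_features = np.random.rand(400, 40)

pca = PCA(n_components=11)   # keep the eleven components fed to the input layer
input_features = pca.fit_transform(raw_features)
print(input_features.shape)  # (400, 11)
```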
<ns0:div><ns0:head n='4.2'>Hidden layer</ns0:head><ns0:p>The hidden layer is composed of nodes that obtain data from the first layer. Previous studies have suggested that the number of nodes in the hidden layer may be influenced by the dimensions of the input and output layers. For example, in Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>, the hidden layers contain 24, 12, and 12 neurons, which is the optimal configuration for the deep neural network based on previous studies.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>Dropout (DO)</ns0:head><ns0:p>A dropout is a single approach utilized to generate a range of system designs that may be used to address overfitting issues in the model. The dropout value ranges between 0 and 1. Dropout is set to a size of 0.2 for each layer in figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>, since DNN obtains the highest efficiency with this value.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.4'>Output layer</ns0:head><ns0:p>The output layer comprises nodes that access data directly from the hidden or input layer. The output value provides the computation outcome of the mapping from A to B. For example, the two output layer nodes in Fig. 1 represent the number of groups. The proposed technique improved the Pakistani racial speaker recognition accuracy. It was based on the seven-layer DNN architecture with a data augmentation approach. Fig. <ns0:ref type='figure' target='#fig_2'>2</ns0:ref> illustrates the proposed method's architecture. The proposed SER using a seven-layer DNN-DA approach on the multi-language dataset, as shown in Fig. <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>, is a robust approach. First, the dataset is divided into training data (75% of the dataset) and testing data (25% of the dataset). Then, the training data is preprocessed by trimming audio signals to identical temporal lengths and generating samples with similar shapes and sizes. Moreover, four data augmentation techniques are applied to the dataset to enhance the audio data. Finally, the MFCC extracts the features, which are processed with the seven-layer DNN-DA for classification. The testing dataset undergoes the same preprocessing steps, data augmentation, and MFCC feature extraction. Furthermore, the proposed approach is evaluated on the testing data to measure its speaker recognition accuracy.</ns0:p></ns0:div>
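A minimal sketch of the 75%:25% split at the start of the pipeline, assuming scikit-learn; the feature matrix and labels are placeholders, and stratifying by class is our assumption rather than a detail stated in the paper.

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Placeholders: MFCC-derived feature vectors and labels for five racial groups.
X = np.random.rand(400, 11)
y = np.random.randint(0, 5, size=400)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=42)  # 75%:25% split
```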
<ns0:div><ns0:head n='5'>DATASET AND PREPROCESSING</ns0:head><ns0:p>This study utilized a dataset of the six most spoken local languages in Pakistan. The information was obtained to adjust for the numerous ethnicities. A variety of online resources were used to compile this dataset <ns0:ref type='bibr' target='#b84'>(Wang and Guan (2008)</ns0:ref>; <ns0:ref type='bibr' target='#b76'>Syed et al. (2020)</ns0:ref>) . This study aims to gather data from areas of Pakistan where Urdu and its five primary ethnicities (Punjabi, Sindhi, Urdu, Saraiki, and Pashto) are spoken. The audio samples were processed using PRAAT software. The dataset for the Urdu language is summarized in Table <ns0:ref type='table' target='#tab_3'>2</ns0:ref>. The dataset is utilized only to recognize Urdu racials. The dataset contains 80 distinct utterances for each ethnicity type with different levels of education, ranging from semi-literate to literate. Each audio file is from an individual speaker, resulting in 80 distinct speakers per ethnic group.</ns0:p><ns0:p>Each clip is 30 seconds long, in mono channel WAV format, and sampled at 16 kHz of Sindhi, Saraiki, and Pashto languages. Additionally, each utterance is distinct from others in the dataset. The dataset includes sounds from 80 speakers with five racials, for 1240 clips.</ns0:p><ns0:p>The dataset processing uses a segmentation process similar to that used for the dataset of The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS). This multimodal recording dataset takes the form of emotional speech and songs recorded in audio and video formats <ns0:ref type='bibr' target='#b7'>Atmaja and Akagi (2020a)</ns0:ref>. Experiments on RAVDESS were carried out by <ns0:ref type='bibr' target='#b35'>Livingstone and Russo (2018)</ns0:ref>, and they involved the participation of 24 professional actors with North American accents. The research included speech and songs with various facial expressions, including neutral, calm, happy, sad, angry, fearful, surprised, and disgusted. In the data of Pakistani racial speakers, the complete audio utterances are segmented once again using the approach that is described below:</ns0:p><ns0:p>• Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>Feature Extraction</ns0:head><ns0:p>We employed MFCC in the proposed study since it is one of the most robust approaches to extracting features for SER. MFCC is the most widely used approach for obtaining spectral information from speech by processing the Fourier Transform (FT) signal with a perception-based Mel-space filter bank. Additionally, in the proposed study, Librosa is used to extract MFCC features. This Python library has functionality for reading sound data and assisting in the MFCC feature extraction method. According to <ns0:ref type='bibr' target='#b21'>Hamidi et al. (2020)</ns0:ref>, the MFCC technique is shown in Fig. <ns0:ref type='figure'>3</ns0:ref>. The MFCC approach enhances the audio input during the preemphasis phase and increases the signal-to-noise ratio (SNR) enough to ensure that the voice is not influenced by noise. The framing mechanism divides the audio signal into many frames with the same number of samples. Windowing is the technique of employing the window function to weight the output frame. The following procedure is the DFT (Discrete Fourier Transform), which examines the frequency signal derived from the discrete-time signal. Then, the MFCC obtained from the original utterances is determined using the filter bank (FB). The wrapping of Mel Frequency is often used in conjunction with a FB. A FB is a kind of filter used to determine the amount of energy contained within a certain frequency range <ns0:ref type='bibr' target='#b0'>Afrillia et al. (2017)</ns0:ref>. Finally, the logarithmic (LOG) value is obtained by converting the DFT result to a single value. Inverse DFT is a technique for obtaining a perceptual autocorrelation sequence based on the linear prediction (LP) coefficient computation. The MFCC technique was employed in this study by setting the frame length to 25 ms with a Hamming window, 13 spectral and 22 lifter coefficients, and a frame shift of 10 ms.</ns0:p></ns0:div>
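The MFCC settings reported above (25 ms Hamming-windowed frames, 10 ms frame shift, 13 coefficients, liftering of 22) map roughly onto the librosa call below. This is a hedged sketch; the exact parameter mapping used by the authors is an assumption, and the file name is hypothetical.

```python
import librosa

def extract_mfcc(path, sr=16000, n_mfcc=13):
    """Load one utterance and compute MFCCs with 25 ms Hamming frames,
    a 10 ms hop, and a lifter of 22."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(
        y=y, sr=sr, n_mfcc=n_mfcc,
        n_fft=int(0.025 * sr),       # 25 ms frame length
        hop_length=int(0.010 * sr),  # 10 ms frame shift
        window="hamming",
        lifter=22)
    return mfcc.T                    # one 13-dimensional row per frame

features = extract_mfcc("speaker_001.wav")  # hypothetical file name
```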
<ns0:div><ns0:head n='5.2'>Seven Layer DNN</ns0:head><ns0:p>In this study, the Rectified Linear Unit (ReLU) activation function is utilized in conjunction with the Adam optimizer (AO), which is used to improve the learning speed of deep neural networks and was introduced at a renowned conference by deep learning experts <ns0:ref type='bibr' target='#b28'>Kingma and Ba (2017)</ns0:ref>. A dropout rate of 0.2 is applied. The deep neural network comprises seven layers, with the structure shown in Fig.</ns0:p></ns0:div>
<ns0:div><ns0:head>4.</ns0:head><ns0:p>As seen in Fig. <ns0:ref type='figure'>4</ns0:ref>, the seven-layer architecture of the DNN consists of one fully connected layer with 400 neurons on layer 2, which is the expected volume of neurons identified in our investigation. The following layer has just half of the neurons from the preceding layer. Layer one is composed of dense functions that create a fully connected layer. The second layer comprises 400 neurons composed of the dense and dropout functions used in the neural network to avoid overfitting and accelerate the learning Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed <ns0:ref type='table' target='#tab_5'>4</ns0:ref>, when the split ratio is 75:25, the trained model achieves the highest accuracy and the lowest loss level. As shown in Table <ns0:ref type='table' target='#tab_6'>5</ns0:ref>, the accuracy of the results decreases when the split ratio is 80:20. At the same time, the loss increases. Finally, when the split ratio is 90:10, the accuracy results increase while the loss rate decreases. test was conducted with the addition of 100 to 500 data samples to the original 400 wav data using the split ratio approach.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>In the suggested method, a dataset with a data augmentation of 500 samples and a split ratio of 75:25 obtained the highest performance with a low total loss. However, as the sample of DA decreases, the SER model's performance decreases. In another comparison, accuracy improves when a large DA and a significant amount of training data are used. Additionally, as seen in Table <ns0:ref type='table' target='#tab_10'>7</ns0:ref>, the study has the highest accuracy performance compared to numerous methodologies using ML and DL algorithms. The study performance on SER in Table <ns0:ref type='table' target='#tab_10'>7</ns0:ref> demonstrates that the seven-layer approach we presented is practical. DNN-DA is a robust approach for usage in SER that has achieved a high degree of accuracy. It is not straightforward to get accurate prediction findings while researching several classes. Certain aspects of multi-classes will be more challenging since they must discriminate between many classes while generating predictions </ns0:p></ns0:div>
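The seven-layer architecture described above (dense layers of 400, 200, 100, 50, 25, and 10 units, each with 0.2 dropout, ReLU activations, the Adam optimizer, and a softmax output) can be sketched in Keras as below. The framework choice, input dimension, and loss function are assumptions made for illustration, not details confirmed by the paper.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_seven_layer_dnn(input_dim, n_classes=5):
    """Sketch of the seven-layer DNN with dropout after each dense layer."""
    model = keras.Sequential()
    model.add(keras.Input(shape=(input_dim,)))
    for units in (400, 200, 100, 50, 25, 10):
        model.add(layers.Dense(units, activation="relu"))
        model.add(layers.Dropout(0.2))
    model.add(layers.Dense(n_classes, activation="softmax"))
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

model = build_seven_layer_dnn(input_dim=11)
model.summary()
```

Training would then follow the usual pattern, for example model.fit(X_train, y_train, validation_data=(X_test, y_test)) with a chosen number of epochs and batch size, using the 75%:25% partitions described earlier.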
<ns0:div><ns0:head n='7'>CONCLUSION</ns0:head><ns0:p>A study in SER that includes significant data is a challenging research issue; the Pakistani racial speech dataset is comprised of utterance groups. Therefore, seven-layer DNN-DA is the approach presented in this report, which combines the data augmentation technique with a DNN to improve performance and minimize overfitting issues. Finally, some of the contributions to our work include using a Pakistani racial speech dataset in this study. Furthermore, DA can increase the amount of data by using white noise, variable window widths, pitch-shifting, and temporal stretching methods to generate new audio data for the segments. Furthermore, classification with deep neural networks of seven layers is beneficial for improving the performance of the SER system when used with all Pakistani racial speech datasets. In addition, the proposed model with the seven-layer DNN-DA technique also has an accuracy advantage.</ns0:p><ns0:p>Similar to some approaches using conventional ML and DL methods that also produce high accuracy performance.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figure 1</ns0:head><ns0:p>Structure of a Deep Neural Network</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Structure of a Deep Neural Network</ns0:figDesc><ns0:graphic coords='6,255.47,63.78,186.11,220.10' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>information retrieval, and voice classification Seifert et al. (2017). The DNN-based acoustic model was used in previous work to achieve high-level performance Seki et al. (2015); Snyder et al. (2018); Novotny et al. (2018); Saleem and Irfan Khattak (2020).The structure of a DNN approach is composed of input, hidden, dropout, and output layers<ns0:ref type='bibr' target='#b58'>Rajyaguru et al. (2020)</ns0:ref> . The deep neural network is an evolution of the neural network (see Fig.1), which is essentially a function in a mathematical measure R: A ⇒ B that may be stated as follows:5/16PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:70479:1:2:NEW 24 May 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Structure of Proposed Approach</ns0:figDesc><ns0:graphic coords='7,141.73,63.78,413.59,425.23' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .Figure 4 .</ns0:head><ns0:label>34</ns0:label><ns0:figDesc>Figure 3. Block Diagram of the computation steps of MFCC</ns0:figDesc><ns0:graphic coords='8,255.47,63.78,186.11,148.79' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Modality 001 = only-audio , 002 = only-video, 003 = audio-video • Classes: 001 = disgust, 002 = neutral, 003 = fearful, 004 = angry, 005 = happy, 006 = surprised, 007 = sad, 008 = calm • Vocal: 001 = song , 002 = speech • Intensity: 001 = strong, 002 = normal • The racial of the speakers as a class from 01 to 5 • Repetition: 001 = First, 002 = second • Speaker sequence number per tribe/region from 01 to 10 8/16 PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:70479:1:2:NEW 24 May 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>process. The third layer comprises 200 neurons. The fourth layer comprises 100 neurons, the fifth layer comprises 50 neurons, and the sixth layer comprises 25 neurons. It is also composed of dense and dropout functions. Finally, the seventh layer comprises ten neurons with dense and dropout functions. At the same time, softmax activation is used as the output layer. The seven-layer DNN architecture is employed in this work because it provides the maximum level of accuracy compared to the three-layer DNN and five-layer DNN.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>Multiple factors determine the split ratios, namely the compute costs associated with the model training, the computational costs associated with testing the model, and data analysis. Accuracy is a commonly used metric for assessing the extent of incorrectly identified items in balanced and approximately balanced datasets<ns0:ref type='bibr' target='#b8'>Atmaja and Akagi (2020b)</ns0:ref>. It is one of the model performance assessment methodologies often used in ML.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Proposed model performance on training dataset</ns0:figDesc><ns0:graphic coords='11,224.45,381.44,248.15,149.67' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Proposed model performance on testing dataset</ns0:figDesc><ns0:graphic coords='12,224.45,63.78,248.15,137.99' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>et al. (2017). However, seven layer DNN-DA outperforms conventional machine learning methods such as k-nearest Neighbors(KNN), Random Forest(RF), Multilayer Perceptron, Decision Tree, and DL approaches using three-layer DNN layer and five-layer DNN, as demonstrated by the highest accuracy performance compared to other approaches using three-layer DNN and five-layer layers DNN layer.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='22,42.52,178.87,525.00,259.50' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='23,42.52,178.87,525.00,315.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='24,42.52,178.87,525.00,291.75' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Detailed description of datasets</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Reference</ns0:cell><ns0:cell cols='2'>Approach</ns0:cell><ns0:cell cols='2'>Database</ns0:cell><ns0:cell /><ns0:cell>Classes</ns0:cell><ns0:cell>Accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>Wang et al. (2014)</ns0:cell><ns0:cell>HMM</ns0:cell><ns0:cell>and</ns0:cell><ns0:cell>S-PTH</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>4</ns0:cell><ns0:cell>13.8%</ns0:cell><ns0:cell>and</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>GMM</ns0:cell><ns0:cell /><ns0:cell cols='2'>database</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>24.6% Error</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Rate</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Najafian et al. (2016) DNNs</ns0:cell><ns0:cell /><ns0:cell>The</ns0:cell><ns0:cell cols='2'>First</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>3.91%</ns0:cell><ns0:cell>and</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Accents</ns0:cell><ns0:cell>of</ns0:cell><ns0:cell /><ns0:cell>10.5% error</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>the</ns0:cell><ns0:cell cols='2'>British</ns0:cell><ns0:cell /><ns0:cell>rate</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>Isles Speech</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Corpus</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Qasim et al. (2016)</ns0:cell><ns0:cell>Support</ns0:cell><ns0:cell /><ns0:cell cols='3'>Recorded Pak-</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>92.55</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Vector</ns0:cell><ns0:cell>Ma-</ns0:cell><ns0:cell cols='3'>istan ethnic</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>chine,Random</ns0:cell><ns0:cell cols='2'>speaker</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Forest</ns0:cell><ns0:cell>and</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Gaussian Mix-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>ture Model</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Salamon and Bello</ns0:cell><ns0:cell>SB-CNN</ns0:cell><ns0:cell /><ns0:cell>Urban-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>10</ns0:cell><ns0:cell>94</ns0:cell></ns0:row><ns0:row><ns0:cell>(2017b)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Sound8K</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Upadhyay and Lui</ns0:cell><ns0:cell cols='2'>Deep Belief</ns0:cell><ns0:cell cols='4'>FAS Database 6</ns0:cell><ns0:cell>90.2</ns0:cell></ns0:row><ns0:row><ns0:cell>(2018)</ns0:cell><ns0:cell>Network</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Singh (2018)</ns0:cell><ns0:cell cols='2'>Fuzzy Vector</ns0:cell><ns0:cell>TIMIT</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>100</ns0:cell><ns0:cell>98.8</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Quantization</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Mouaz et al. 
(2019)</ns0:cell><ns0:cell cols='2'>HMM One</ns0:cell><ns0:cell cols='3'>VOCE Corpus</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>90</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>layer Deep</ns0:cell><ns0:cell cols='2'>Dataset</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Neural Net-</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>work</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Ashar et al. (2020)</ns0:cell><ns0:cell>CNN</ns0:cell><ns0:cell /><ns0:cell cols='3'>Spontaneous</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>87.5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>Urdu dataset</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Azizah et al. (2020)</ns0:cell><ns0:cell>DNNs</ns0:cell><ns0:cell /><ns0:cell cols='2'>Indonesian</ns0:cell><ns0:cell /><ns0:cell>4</ns0:cell><ns0:cell>98.96</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='3'>speech corpus</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Chakroun et al.</ns0:cell><ns0:cell>GMM</ns0:cell><ns0:cell /><ns0:cell>TIMIT</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>8</ns0:cell><ns0:cell>98.44</ns0:cell></ns0:row><ns0:row><ns0:cell>(2016)</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Hanifa et al. (2020)</ns0:cell><ns0:cell cols='2'>Support Vec-</ns0:cell><ns0:cell cols='3'>speaker eth-</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>56.96</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>tor Machine</ns0:cell><ns0:cell>nicity</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Hanifa et al. (2020)</ns0:cell><ns0:cell>DNN</ns0:cell><ns0:cell /><ns0:cell>OC16</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>2</ns0:cell><ns0:cell>86.10</ns0:cell></ns0:row><ns0:row><ns0:cell>SER.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>3/16PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:70479:1:2:NEW 24 May 2022)Manuscript to be reviewed Computer Science</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Duration of audio speech data in hours</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Racial</ns0:cell><ns0:cell cols='2'>Number</ns0:cell><ns0:cell>Duration per</ns0:cell><ns0:cell>Number of</ns0:cell><ns0:cell>Nature of samples</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>of</ns0:cell><ns0:cell>Male</ns0:cell><ns0:cell>sample</ns0:cell><ns0:cell>Samples</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>and Female</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Speakers</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Punjabi (Wang and</ns0:cell><ns0:cell cols='2'>4 males and 4</ns0:cell><ns0:cell>42 seconds</ns0:cell><ns0:cell>500 samples</ns0:cell><ns0:cell>Speaker and text inde-</ns0:cell></ns0:row><ns0:row><ns0:cell>Guan (2008))</ns0:cell><ns0:cell>females</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>pendent</ns0:cell></ns0:row><ns0:row><ns0:cell>Urdu (Wang and</ns0:cell><ns0:cell cols='2'>4 males and 4</ns0:cell><ns0:cell>42 seconds</ns0:cell><ns0:cell>500 samples</ns0:cell><ns0:cell>Speaker and text inde-</ns0:cell></ns0:row><ns0:row><ns0:cell>Guan (2008); Syed</ns0:cell><ns0:cell>females</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>pendent</ns0:cell></ns0:row><ns0:row><ns0:cell>et al. (2020))</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Sindhi(Syed et al.</ns0:cell><ns0:cell cols='2'>32 males and</ns0:cell><ns0:cell>30 seconds</ns0:cell><ns0:cell>80 samples</ns0:cell><ns0:cell>Speaker and text inde-</ns0:cell></ns0:row><ns0:row><ns0:cell>(2020))</ns0:cell><ns0:cell cols='2'>38 females</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>pendent</ns0:cell></ns0:row><ns0:row><ns0:cell>Saraiki</ns0:cell><ns0:cell cols='2'>42 males and</ns0:cell><ns0:cell>30 seconds</ns0:cell><ns0:cell>80 samples</ns0:cell><ns0:cell>Speaker and text inde-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>28 females</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>pendent</ns0:cell></ns0:row><ns0:row><ns0:cell>Pashto</ns0:cell><ns0:cell cols='2'>35 males and</ns0:cell><ns0:cell>30 seconds</ns0:cell><ns0:cell>80 samples</ns0:cell><ns0:cell>Speaker and text inde-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>35 females</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell>pendent</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Comparison table of loss at dividing ratio with accuracyActed, semi-natural, and spontaneous datasets were employed in the proposed study. In addition, the split ratio method with train test split assessment was used to evaluate performance in ML. The proposed</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dividing Ratio</ns0:cell><ns0:cell>Classification Accuracy</ns0:cell><ns0:cell>Total Loss</ns0:cell></ns0:row><ns0:row><ns0:cell>90 : 10</ns0:cell><ns0:cell>93.55</ns0:cell><ns0:cell>0.105</ns0:cell></ns0:row><ns0:row><ns0:cell>80 : 20</ns0:cell><ns0:cell>95.767</ns0:cell><ns0:cell>0.093</ns0:cell></ns0:row><ns0:row><ns0:cell>75 : 25</ns0:cell><ns0:cell>97.32</ns0:cell><ns0:cell>0.032</ns0:cell></ns0:row><ns0:row><ns0:cell>5.3 Evaluation</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>9/16PeerJ Comput. Sci. reviewing PDF | (CS-2022:01:70479:1:2:NEW 24 May 2022)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>The accuracy and loss comparison table includes augmentation data with 75:25 ratio</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Data augmentation</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Loss</ns0:cell></ns0:row><ns0:row><ns0:cell>100</ns0:cell><ns0:cell>96.57</ns0:cell><ns0:cell>1.33</ns0:cell></ns0:row><ns0:row><ns0:cell>200</ns0:cell><ns0:cell>96.21</ns0:cell><ns0:cell>0.05</ns0:cell></ns0:row><ns0:row><ns0:cell>300</ns0:cell><ns0:cell>96.83</ns0:cell><ns0:cell>2.77</ns0:cell></ns0:row><ns0:row><ns0:cell>400</ns0:cell><ns0:cell>96.45</ns0:cell><ns0:cell>0.035</ns0:cell></ns0:row><ns0:row><ns0:cell>500</ns0:cell><ns0:cell>97.32</ns0:cell><ns0:cell>0.031</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>The accuracy and loss comparison table includes augmentation data with 80:20 ratio data into training for matching the ML architecture and testing the ML architecture.The most utilized ratio is splitting training and testing data by 70%: 30%, 80%: 20%, or 90%: 10%.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Data augmentation</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Loss</ns0:cell></ns0:row><ns0:row><ns0:cell>100</ns0:cell><ns0:cell>95.12</ns0:cell><ns0:cell>6.33</ns0:cell></ns0:row><ns0:row><ns0:cell>200</ns0:cell><ns0:cell>95.99</ns0:cell><ns0:cell>0.04</ns0:cell></ns0:row><ns0:row><ns0:cell>300</ns0:cell><ns0:cell>96.13</ns0:cell><ns0:cell>0.19</ns0:cell></ns0:row><ns0:row><ns0:cell>400</ns0:cell><ns0:cell>96.29</ns0:cell><ns0:cell>0.66</ns0:cell></ns0:row><ns0:row><ns0:cell>500</ns0:cell><ns0:cell>97.09</ns0:cell><ns0:cell>2.77</ns0:cell></ns0:row><ns0:row><ns0:cell>approach separates the</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>The accuracy and loss comparison table includes augmentation data with 90:10 ratio</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Data augmentation</ns0:cell><ns0:cell>Accuracy</ns0:cell><ns0:cell>Loss</ns0:cell></ns0:row><ns0:row><ns0:cell>100</ns0:cell><ns0:cell>95.21</ns0:cell><ns0:cell>0.13</ns0:cell></ns0:row><ns0:row><ns0:cell>200</ns0:cell><ns0:cell>96.90</ns0:cell><ns0:cell>0.28</ns0:cell></ns0:row><ns0:row><ns0:cell>300</ns0:cell><ns0:cell>96.34</ns0:cell><ns0:cell>3.22</ns0:cell></ns0:row><ns0:row><ns0:cell>400</ns0:cell><ns0:cell>96.99</ns0:cell><ns0:cell>6.23</ns0:cell></ns0:row><ns0:row><ns0:cell>500</ns0:cell><ns0:cell>97.01</ns0:cell><ns0:cell>5.232</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Comparison of outcomes with different ML and DL algorithms</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Dataset</ns0:cell><ns0:cell>Classification Accuracy</ns0:cell><ns0:cell>Accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>KNN</ns0:cell><ns0:cell>81.99</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Random Forest</ns0:cell><ns0:cell>71.56</ns0:cell></ns0:row><ns0:row><ns0:cell>Pakistani Racial Speaker Classification</ns0:cell><ns0:cell cols='2'>Multilayer Perceptron (MLP) Decision Tree Three layers Deep Neural Network 92.56 91.45 67.45</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Five layers Deep Neural Network</ns0:cell><ns0:cell>94.78</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Seven Layer DNN-DA (Proposed)</ns0:cell><ns0:cell>97.732</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>On the other hand, if we utilize an insufficient training dataset, the model will lack expertise, resulting</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>in inferior output during testing. The proposed approach will gain a more profound understanding and</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>increase the model's generalizability by including many testing datasets. As shown in table 4-6, another</ns0:cell></ns0:row></ns0:table></ns0:figure>
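For illustration, a minimal sketch of the conventional-ML baseline comparison summarized in Table 7, assuming scikit-learn and an already-extracted feature matrix X with labels y; the model names mirror the table, but the hyperparameters and the 75:25 split are illustrative rather than the manuscript's exact settings.

# Sketch: compare the baseline classifiers listed in Table 7 on a 75:25 split.
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

def compare_baselines(X, y):
    # 75:25 train/test split, the ratio reported as best above
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    models = {
        "KNN": KNeighborsClassifier(),
        "Random Forest": RandomForestClassifier(),
        "MLP": MLPClassifier(max_iter=500),
        "Decision Tree": DecisionTreeClassifier(),
    }
    # Fit each baseline and report test-set accuracy
    return {name: accuracy_score(y_te, m.fit(X_tr, y_tr).predict(X_te))
            for name, m in models.items()}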
</ns0:body>
" | "
Original Article Title: “Data Augmentation and Deep Neural Networks for the Classification of Pakistani Racial Speakers Recognition'
Dear Editor:
Thank you very much for allowing a resubmission of our manuscript “Data Augmentation and Deep Neural Networks for the Classification of Pakistani Racial Speakers Recognition”. We are very happy to have received a positive evaluation, and we would like to express our appreciation to you and all Reviewers for the thoughtful comments and helpful suggestions. Reviewers raised several concerns, which we have carefully considered and made every effort to address. We fundamentally agree with all the comments made by the Reviewers, and we have incorporated corresponding revisions into the manuscript.
Our detailed, point-by-point responses to the editorial and reviewer comments are given below, whereas the corresponding revisions are marked in colored text in the manuscript file. Specifically, bold text indicates changes made in response to the suggestions of Reviewers. Additionally, we have carefully revised the manuscript to ensure that the text is optimally phrased and free from typographical and grammatical errors. We believe that our manuscript has been considerably improved as a result of these revisions, and hope that our revised manuscript “Data Augmentation and Deep Neural Networks For the Classification of Multiple Urdu Accent Speakers” is acceptable for publication in PeerJ.
We would like to thank you once again for your consideration of our work and for inviting us to submit the revised manuscript. We look forward to hearing from you.
Best regards
Hsien-Tsung Chang
Chang Gung University
Department of Computer Science and Information Engineering Taoyuan, Taiwan
E-mail: [email protected]
Reviewer#1, Concern # 1: In this study, the authors present a data augmentation method that shifts the pitch, uses multiple window sizes, stretches the time, and adds white noise to the original audio. The presented subject is undoubtedly attractive. Just the presented study might be somewhat weak in innovation. Frankly, the proposed approach might not be technically sound
Author response: We thank you for your concern.
Author action: To the best of our knowledge, this is the first work to apply the data augmentation technique with a seven-layer DNN to extract emotions from the Urdu (multi-language) speech dataset. It is also important to mention that previous studies did not combine multi-racial speaker classification with data augmentation and a deep neural network.
Reviewer#1, Concern # 2: Transfer from previous research to deep neural networks, more like an MLP (multilayer perceptron), which stacks too many “Dropout” and “ReLU” layers, which might be somewhat unintelligible.
Author response: Thanks for the concern.
Author action: As seen in Fig. $\ref{label4}$, the seven-layer architecture of the DNN consists of one fully connected layer with 400 neurons on layer 2, which is the expected volume of neurons identified in our investigation. The following layer has just half of the neurons from the preceding layer. Layer one is composed of dense functions that create a fully connected layer. The second layer comprises 400 neurons composed of the dense and dropout functions used in the neural network to avoid overfitting and accelerate the learning process. The third layer comprises 200 neurons. The fourth layer comprises 100 neurons, the fifth layer comprises 50 neurons, and the sixth layer comprises 25 neurons. It is also composed of dense and dropout functions. Finally, the seventh layer comprises ten neurons with dense and dropout functions. At the same time, softmax activation is used as the output layer. The seven-layer DNN architecture is employed in this work because it provides the maximum level of accuracy compared to the three-layer DNN and five-layer DNN. (page#8 line#246)
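For illustration, a minimal sketch of the seven-layer stack described above, assuming Keras; the input feature size, number of output classes, activation functions, and dropout rates are not stated in this response and are placeholders, not the manuscript's exact configuration.

# Sketch: seven dense layers (400-200-100-50-25-10) with dropout and a softmax output.
import tensorflow as tf
from tensorflow.keras import layers

NUM_FEATURES = 40   # assumed MFCC feature dimension
NUM_CLASSES = 5     # assumed number of racial speaker classes

model = tf.keras.Sequential([
    layers.Input(shape=(NUM_FEATURES,)),
    layers.Dense(400, activation="relu"),  # layer 2: 400 neurons
    layers.Dropout(0.3),
    layers.Dense(200, activation="relu"),  # layer 3
    layers.Dropout(0.3),
    layers.Dense(100, activation="relu"),  # layer 4
    layers.Dropout(0.3),
    layers.Dense(50, activation="relu"),   # layer 5
    layers.Dropout(0.3),
    layers.Dense(25, activation="relu"),   # layer 6
    layers.Dropout(0.3),
    layers.Dense(10, activation="relu"),   # layer 7
    layers.Dropout(0.3),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # output layer
])
model.compile(optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"])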
Reviewer#1, Concern # 3: Data augmentation for fitting deep learning has been well-known efficacy most of the time.
Author response: Thank for the comment.
Reviewer#1, Concern # 4: Problem formation of this study might not be credible. Therefore, the authors are required to address the potential of the presented study based on speech emotion recognition.
Particularly, making necessary efforts to let the presented novelties much more recognizable by other scholars.
Author response: We thank you for your concern.
Author action: Several studies have been conducted regarding SER based on deep learning using different methods, such as the Deep Neural Network (DNN) (\cite{7415467,7738854,7472649,7965997,8461375,NAJAFIAN202044,ROHDIN202022,9466841,pr9122286,10.7717/peerj-cs.766,pr9122286}) and Convolutional Neural Network (CNN) methodologies used in the study (\cite{ravanelli2019speaker}) attained an overall accuracy of 85\% with the TIMIT database and 96\% with LibriSpeech. Using the deep learning technique, (\cite{8721628}) obtained 96.5 percent accuracy and significantly improved the ability to handle multiple issues in SER. However, DL requires a lot of training datasets, which are challenging to gather and expensive. Therefore, this approach is not suitable for utilization with SER because it will yield overfitting problems and may lead to skewed data. The use of data augmentation (DA) is one solution to the problem of small data in the SER study. A DA approach is a technique that can be used to create additional training datasets by altering the shape of a training dataset. DA is helpful in many investigations, such as digital signal processing, object identification, and image classification (\cite{Li2020}).
The proposed study presents a data augmentation method based on a seven-layer DNN for recognizing racial speakers in Pakistan by utilizing 400 audio samples from multiple classes of racial speakers in Pakistan. However, this kind of study may easily lead to multiclass difficulties due to the many classes it includes. On the other hand, DNN approaches are often utilized in SER (\cite{8632885}). In addition, DNN is also a powerful model capable of achieving excellent performance in pattern recognition (\cite{reading95532}). The study undertaken by (\cite{novotny2018analysis}) in conjunction with Mel-frequency cepstral coefficients (MFCC) has shown the effectiveness of DNN in SER and improved network efficiency in busy and echo conditions. Furthermore, DNN with Mel-frequency cepstral coefficients has outperformed numerous other research approaches on SER single networks (\cite{SALEEM2020107385}). Additionally, DNN has been effectively fusing with augmented datasets. The presented approach employs a seven-layer neural network because the seven-layer technique yields the highest efficiency and accuracy when used in previous works with an average precision above 90\% (\cite{Liu2016,Zhang2018,https://doi.org/10.1002/ima.22337}). Furthermore, including the Pakistani speakers with many classes employing DNN with DA would improve the identification efficiency of multiple emotional classes.
Reviewer#2, Concern # 1: The lack of structure, use of confusing language, and absence of sign-posting made this paper a bit harder to read. For instance, in the abstract, you first need to clearly define the problem and mention why it is important to solve it. You also need to sign-post your take on related work and the need for an improved, efficient mechanism to solve the problem. You then need to structure and briefly present your technique and salient features (i.e., some results). At the end you need to highlight the key takeaway(s) from your research.
Author response: Thank you for pointing this out.
Author action: We updated the manuscript by updating the abstract and also we update the whole manuscript according to the comments.
Reviewer#2, Concern # 2: From the abstract, it is not clear how and why SER is a complex issue? For clearer readability, perhaps, you may define SER before terming it as a complex issue. Once defined, you can then pin-point the complexity and attribute it to the main component(s) in SER.
Author response: Thank you for pointing this out.
Author action: We updated the manuscript by defining SER before terming it as a complex issue and also pointing out the complexity.
Speech emotion recognition (SER) systems have evolved into an important method for recognizing a person in several applications, including e-commerce, everyday interactions, law enforcement, and forensics. However, an SER system's efficiency depends on the length of the audio samples used for testing and training. Even so, the different suggested models successfully obtained relatively high accuracy in this study. Moreover, the degree of SER efficiency is not yet optimum due to the limited database, resulting in overfitting and skewed samples. (page#1, line#15)
Reviewer#2, Concern # 3: The paper addresses an interesting problem of Speech Emotion Recognition. Leveraging Deep Neural Networks, it proposes a technique to classify multiple Urdu language accent speakers. While the evaluation and analysis of the proposal show promising results, the language and structure (elaborated in **Additional Comment**) of the paper makes it extremely hard to read thus hindering fluency and comprehension.
Author response: Thank you for pointing this out.
Author action: We updated the manuscript by rewriting the whole manuscript and changing the language and structure of the manuscript.
Reviewer#2, Concern #4: Please consider restructuring the last paragraph of Introduction. For instance, 'This report (--> paper) is divided .... about the conclusion of the research outcomes.' could be a separate paragraph.
Author response: Thanks a lot for the comments.
Author action: : We updated the manuscript by dividing the suggested paragraph into two parts.(page#2 line#97)
Reviewer#2, Concern #5: I really appreciate the authors' efforts to share the code with the research community. I believe this will help further research and reproducibility of the proposed technique. It would be more beneficial if the author share the dataset (**Augmented Dataset**) for end-to-end reproducibility of the results.
Author response: Thanks a lot for the comments.
Author action: Thanks, Unfortunately, we did not receive consent from subjects to release the audio recordings publicly.
Reviewer#2, Concern #6: Having too many references without proper discussion or structure of the reference is concerning. Please avoid tangential references and cite the key, related work.
Author response: Thanks a lot for the comments.
Author action: Thanks
Reviewer#2, Concern 7: You need to clarify why **seven**-layer framework was employed? Why not less or greater than **seven** a layer framework was used?
Author response: Thanks a lot for the kind comments.
Author action: The proposed study presents a data augmentation method based on a seven-layer DNN for recognizing racial speakers in Urdu by utilizing 400 audio samples from multiple classes of racial speakers in Pakistan. However, this kind of study may easily lead to multiclass difficulties due to the many classes it includes. DNN approaches are often utilized in SER (\cite{8632885}). In addition, DNN is also a powerful model capable of achieving excellent performance in pattern recognition (\cite{reading95532}).
The study undertaken by (\cite{novotny2018analysis}) in conjunction with Mel-frequency cepstral coefficients (MFCC) has shown the effectiveness of DNN in SER and improved network efficiency in busy and echo conditions. Furthermore, DNN with Mel-frequency cepstral coefficients has outperformed numerous other research approaches on SER single networks (\cite{SALEEM2020107385}). Additionally, DNN has been effectively fusing with augmented datasets. The presented approach employs a seven-layer neural network because the seven-layer technique yields the highest efficiency and accuracy when used in several investigations with an average accuracy above 90\% (\cite{Liu2016,Zhang2018,https://doi.org/10.1002/ima.22337}). Furthermore, including the Urdu language into a few classes and datasets employing DNN with DA would improve the identification efficiency of multiple emotional classes
Reviewer#2, Concern # 8: Data Augmentation I think this section should be renamed as Data Manipulation as some techniques such as 'Pitch Shifting' can not be termed as data augmentation technique. 'Pitch Shifting' is a technique rather than data augmentation.
Please duplicate references e.g.,:
Please consider restructuring this section as the first paragraph is way too long and hard to read.
Author response: Thanks for the suggestion.
Author action: We updated the manuscript by restructuring the data augmentation section (page#4 Line#132). Numerous investigations used time stretching as a data augmentation technique together with other approaches such as synchronous overlap, fuzzy, and CNN to increase the efficiency of the suggested framework (Sasaki et al. (2010); Kupryjanow and Czyewski (2011); Salamon and Bello (2017a)). These studies used different techniques, such as the Synchronous Overlap algorithm, fuzzy logic, and CNN, to improve the performance of the proposed model. We also used multiple window sizes in the proposed approach.
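For illustration, a minimal sketch of the augmentation operations named above (additive white noise, pitch shifting, time stretching) plus MFCC extraction, assuming the librosa library; the noise level, pitch steps, stretch rate, and sample rate are illustrative values, not the manuscript's settings.

# Sketch: generate augmented variants of one clip and compute fixed-length MFCC features.
import numpy as np
import librosa

def augment(path, sr=16000):
    y, sr = librosa.load(path, sr=sr)
    noisy = y + 0.005 * np.random.randn(len(y))                   # additive white noise
    shifted = librosa.effects.pitch_shift(y, sr=sr, n_steps=2)    # pitch shifting
    stretched = librosa.effects.time_stretch(y, rate=0.9)         # time stretching
    return [y, noisy, shifted, stretched]

def mfcc_features(y, sr=16000, n_mfcc=40):
    # Mean MFCC vector per clip: one simple way to obtain a fixed-length input
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).mean(axis=1)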
Reviewer#2, Concern #9:
Salamon, J. and Bello, J. P. (2016). Deep convolutional neural networks and data augmentation for
environmental sound classification. CoRR, abs/1608.04363.
Salamon, J. and Bello, J. P. (2017a). Deep convolutional neural networks and data augmentation for environmental sound classification. IEEE Signal Processing Letters, 24(3):279–283.
Salamon, J. and Bello, J. P. (2017b). Deep convolutional neural networks and data augmentation for
environmental sound classification. IEEE Signal Processing Letters, 24(3):279283.
- And the following
Damskgg, E.-P. and Vlimki, V. (2017a). Audio time stretching using fuzzy classification of spectral bins. Applied Sciences, 7(12):1293.
Damskgg, E.-P. and Vlimki, V. (2017b). Audio time stretching using fuzzy classification of spectral bins. Applied Sciences, 7(12)
Author response: Thanks a lot for the kind comments. We have removed duplicate references.
Author action: We updated the manuscript by removing the duplicate references.
Reviewer#2, Concern # 10: - 'Many countries ...' need to be correctly sign-posted. Countries do no research. Rather researchers investigate problems.
Author's response: Thank you for pointing this out. The reviewer is correct.
Author action: We updated the manuscript by changing the whole sentence. The new sentence is
' The proposed study on multi-accented voice recognition was carried out in many nations, like China (\cite{8632885}), Africa (\cite{6998900}), Italy (\cite{NAJAFIAN202044}), Pakistan (\cite{Syed2020,7918979}), United States (\cite{8334476}), and India, through CNN and MFCC (\cite{9080730}. It is a vital technique that many researchers have chosen to enhance SER efficacy (\cite{8839817}).'. (page#3 Line#102)
Reviewer#2, Concern # 11: What do you mean by 'numerous investigations'? Is numerous equals 10, 100, 1,000, etc?
Author response: We thank you for your concern.
Author action: We updated the manuscript by changing ' previous works' to 'several investigations.' (page#2 Line#86)
Reviewer#2, Concern #12: What does **0:42* represented in Table 2 (in Column, 'Duration per sample')? Please keep consistency when reporting numbers or data in tables.
Author response: : Thanks for your suggestion.
Author action: We updated the manuscript by updating Table 2. (page#8 Line#226)
Reviewer#2, Concern # 13: It would be great if you could minimize the use of **passive voice**. In most of well-written, readable CS paper use **active voice** for improved communication.
Author response: Thanks for your suggestion.
Author action: We updated the manuscript by changing the passive voice to active voice sentences.
Reviewer#2, Concern # 14: Please align Table 7 with text-width size. You may reduce the width of column # 1 and column # 3.
Author response: Thanks for the comments.
Author action: We updated the manuscript by updating the table size. (page#11 Line#308)
" | Here is a paper. Please give your review comments after reading it. |
722 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Metastatic cutaneous melanoma is an aggressive skin cancer with some progression-slowing treatments but no known cure. The omics data explosion has created many possible drug candidates, however filtering criteria remain challenging, and systems biology approaches have become fragmented with many disconnected databases. Using drug, protein, and disease interactions, we built an evidence-weighted knowledge graph of integrated interactions. Our knowledge graph-based system, ReDrugS, can be used via an API or web interface, and has generated 25 high quality melanoma drug candidates. We show that probabilistic analysis of systems biology graphs increases drug candidate quality compared to non-probabilistic methods. Four of the 25 candidates are novel therapies, three of which have been tested with other cancers. All other candidates have current or completed clinical trials, or have been studied in vivo or in vitro. This approach can be used to identify candidate therapies for use in research or personalized medicine.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Metastatic cutaneous melanoma is an aggressive cancer of the skin with low prevalence but a very high mortality rate, with an estimated 5 year survival rate of 6 percent <ns0:ref type='bibr' target='#b1'>(Barth et al., 1995)</ns0:ref>. There are currently no known therapies that can consistently cure metastatic melanoma. Vemurafenib is effective against BRAF mutant melanomas <ns0:ref type='bibr' target='#b6'>(Chapman et al., 2011)</ns0:ref> but resistant cells often result in recurrence of metastases <ns0:ref type='bibr' target='#b44'>(Le et al., 2013)</ns0:ref>. Melanoma itself may be best approached based on the individual genetics of the tumor, as it has been shown to involve mutations in many different genes to produce the same disease <ns0:ref type='bibr' target='#b42'>(Krauthammer et al., 2015)</ns0:ref>. Because of this, an individualized approach may be necessary to find effective treatments.</ns0:p><ns0:p>Drug repurposing, or the discovery of new uses for existing approved drugs, can often lead to effective new treatments for diseases. A wide range of computational methods have been developed in support of drug repositioning. Computational approaches <ns0:ref type='bibr' target='#b70'>(Sanseau and Koehler, 2011)</ns0:ref> include topic modeling <ns0:ref type='bibr' target='#b3'>(Bisgin et al., 2012</ns0:ref><ns0:ref type='bibr' target='#b2'>, 2014)</ns0:ref>, side effect similarity <ns0:ref type='bibr' target='#b86'>(Yang and Agarwal, 2011;</ns0:ref><ns0:ref type='bibr' target='#b87'>Ye et al., 2014)</ns0:ref>, drug and/or disease similarity <ns0:ref type='bibr' target='#b10'>(Chiang and Butte, 2009;</ns0:ref><ns0:ref type='bibr' target='#b19'>Gottlieb et al., 2011)</ns0:ref>, genome-wide association studies <ns0:ref type='bibr' target='#b39'>(Kingsmore et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b23'>Grover et al., 2014)</ns0:ref>, and gene expression <ns0:ref type='bibr' target='#b43'>(Lamb et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b74'>Sirota et al., 2011)</ns0:ref>. Systems biology has also provided a number of network analysis approaches <ns0:ref type='bibr' target='#b86'>(Yang and Agarwal, 2011;</ns0:ref><ns0:ref type='bibr' target='#b84'>Wu et al., 2013b;</ns0:ref><ns0:ref type='bibr' target='#b9'>Cheng et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b15'>Emig et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b27'>Harrold et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b83'>Wu et al., 2013a;</ns0:ref><ns0:ref type='bibr' target='#b78'>Vogt et al., 2014)</ns0:ref>, but the field has been limited by a fragmentation of databases. Most systems biology databases are not aligned with each other, and typically leave out crucial information about how other biological entities, like drugs and diseases, interact with the systems biology graph. Further, while some interaction databases provide human curation and validation of pathway interactions, and others provide experimental evidence for the recorded interactions, there has not yet been, to our knowledge, a resource that combines the two approaches and quantifies the reliability of the evidence used to assert the interactions.</ns0:p><ns0:p>A knowledge graph is a compilation of facts and figures that can be used to provide contextual meaning to searches. Google is using knowledge graphs to improve its search and to analyze the information graph of the web; Facebook is using them to analyze the social graph.
We built our knowledge graph with the goal of unifying large parts of biomedical domain knowledge for both mining and interactive exploration related to drugs, diseases, and proteins. Our knowledge graph is enhanced by the provenance of each fragment of knowledge captured, which is used to compute the confidence probabilities for each of those fragments.</ns0:p><ns0:p>Further, we use open standards from the World Wide Web Consortium (W3C), including the Resource Description Framework (RDF) <ns0:ref type='bibr' target='#b40'>(Klyne and Carroll, 2005)</ns0:ref>, Web Ontology Language (OWL) <ns0:ref type='bibr' target='#b57'>(Motik et al., 2009)</ns0:ref>, and SPARQL <ns0:ref type='bibr' target='#b26'>(Harris et al., 2013)</ns0:ref>. The representation of the knowledge in our knowledge graph is aligned with best practice vocabularies and ontologies from the W3C and the biomedical community, including the PROV Ontology <ns0:ref type='bibr' target='#b45'>(Lebo et al., 2013)</ns0:ref>, the HUPO Proteomics Standards Initiative Molecular Interactions (PSI-MI) Ontology <ns0:ref type='bibr' target='#b30'>(Hermjakob et al., 2004)</ns0:ref>, and the Semanticscience Integrated Ontology (SIO) <ns0:ref type='bibr' target='#b14'>(Dumontier et al., 2014)</ns0:ref>. Use of these standards, vocabularies, and ontologies make it simple for ReDrugS to integrate with other similar efforts in the future with minimal effort.</ns0:p><ns0:p>We proposed and built a novel computational drug repositioning platform, that we refer to as ReDrugS, that applies probabilistic filtering over individually-supported assertions drawn from multiple databases pertaining to systems biology, pharmacology, disease association, and gene expression data. We use our platform to identify novel and known drugs for melanoma.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>RESULTS</ns0:head><ns0:p>We used ReDrugS to examine the drug-target-disease network and identify known, novel, and well supported melanoma drugs. The ReDrugS knowledge base contained 6,180 drugs, 3,820 diseases, 69,279 proteins, and 899,198 interactions. We examined drug and gene connections that were 3 or less interaction steps from melanoma, and additionally filtered interactions with a joint probability greater or equal to 0.93. We identified 25 drugs in the resulting drug-gene-disease network surrounding melanoma as illustrated in Figure <ns0:ref type='figure'>2</ns0:ref>.</ns0:p><ns0:p>We then validated the set of 25 drugs by determining their position in the drug discovery pipeline for melanoma. Table <ns0:ref type='table'>1</ns0:ref> shows that nearly all drugs uncovered by ReDrugS had previously been identified as potential melanoma therapies either in clinical trials or in vivo or in vitro. Of the 25 drugs, 12 have been in Phase I, II, or III clinical trials, 5 have been studied in vitro, 4 in vivo, 1 was investigated as a case study, and 3 are novel.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref>. The interaction graph of predicted melanoma drugs with a probability of 0.93 or higher and three or fewer intervening interactions between drug and disease. The 'Explore' tab contains the controls to expand the network in various ways, including the filtering parameters. Node and edge detail tabs provide additional information about the selected node or edge, including the probabilities of the edges selected. Users can control the layout algorithm and related options using the 'Options' tab.</ns0:p><ns0:p>To further evaluate our system, we examined the impact of decreasing the joint probability or increasing the number of interaction steps. Figures <ns0:ref type='figure' target='#fig_2'>3 A and B</ns0:ref> show precision, recall, and f-measure curves while varying each parameter. Using these information retrieval performance curves we found that using a joint probability of 0.93 or greater with 3 or less interaction steps maximizes the precision and recall as shown in Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>.</ns0:p><ns0:p>By performing a sampled literature search on hypothesis candidates with a joint probability of 0.5 or higher and 6 or fewer interaction steps, we were able to generate precision, recall, and f-measure curves for both cutoffs, which led to our cutoff of 0.93 with 3 or fewer interaction steps. The precision, recall, and f-measure curves are shown for varying joint probability thresholds in Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref> A and for varying interaction step counts in Figure <ns0:ref type='figure' target='#fig_2'>3 B</ns0:ref>.</ns0:p></ns0:div>
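For illustration, the filtering described above can be sketched as a path search over the integrated graph; a minimal sketch assuming NetworkX, where the node 'type' attribute and per-edge 'prob' value are stand-ins for the actual ReDrugS data model rather than its API.

# Sketch: keep drugs connected to the disease by a path of at most 3 interactions
# whose joint (product) probability is at least 0.93.
import networkx as nx

def candidate_drugs(G, disease, max_steps=3, min_joint_p=0.93):
    hits = {}
    drugs = [n for n, d in G.nodes(data=True) if d.get("type") == "Drug"]
    for drug in drugs:
        for path in nx.all_simple_paths(G, drug, disease, cutoff=max_steps):
            joint_p = 1.0
            for a, b in zip(path, path[1:]):
                joint_p *= G[a][b]["prob"]      # multiply per-edge assertion probabilities
            if joint_p >= min_joint_p:
                hits[drug] = max(joint_p, hits.get(drug, 0.0))
    return hits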
<ns0:div><ns0:head n='3'>DISCUSSION</ns0:head><ns0:p>We designed ReDrugS to quickly and automatically integrate and filter a heterogeneous biomedical knowledge graph to generate high-confidence drug repositioning candidates. Our results indicate that ReDrugS generates clinically plausible drug candidates, of which half are in various stages of clinical trials, while others are novel or are being investigated in pre-clinical studies. By helping to consolidate the three main data types (drug targets, protein interactions, and disease genes), ReDrugS can amplify the ability of researchers to filter the vast amount of information down to what is relevant for drug discovery.</ns0:p></ns0:div>
<ns0:div><ns0:head>3/16</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:04:10366:1:1:NEW 9 Dec 2016)</ns0:p><ns0:p>Manuscript to be reviewed Table <ns0:ref type='table'>1</ns0:ref>. Drug discovery status for 25 drug candidates identified using ReDrugS. 'Pathway' refers to the target or pathway that the drug acts on. 'Steps' is distance in number of interactions between the drug and the disease, and 'Joint p' is the joint probability that all of those interactions occur. <ns0:ref type='bibr' target='#b50'>(Luikart et al., 1984)</ns0:ref> MAP kinase 0.93 Phase II Zidovudine <ns0:ref type='bibr' target='#b32'>(Humer et al., 2008</ns0:ref>) TERT 0.98 Trametinib <ns0:ref type='bibr' target='#b36'>(Kim et al., 2012)</ns0:ref> MAP kinase 0.98 Regorafenib (Istituto Clinico Humanitas, 2015) BRAF 0.98 Nadroparin <ns0:ref type='bibr' target='#b58'>(Nagy et al., 2009)</ns0:ref> MYC 0.97 Vinorelbine <ns0:ref type='bibr' target='#b79'>(Whitehead et al., 2004)</ns0:ref> MAP kinase 0.93 Irinotecan <ns0:ref type='bibr' target='#b16'>(Fiorentini et al., 2009)</ns0:ref> CDKN2A 0.93 Topotecan <ns0:ref type='bibr' target='#b41'>(Kraut et al., 1997</ns0:ref> <ns0:ref type='bibr' target='#b75'>(Smalley et al., 2007)</ns0:ref> MAP kinase/TP53 0.97 Ellagic Acid <ns0:ref type='bibr' target='#b38'>(Kim et al., 2008)</ns0:ref> PRKCA/BRAF 0.95 Albendazole <ns0:ref type='bibr' target='#b66'>(Patel et al., 2011)</ns0:ref> CDKN2A 0.93 Colchicine <ns0:ref type='bibr' target='#b48'>(Lemontt et al., 1988)</ns0:ref> MAP kinase 0.93 In Vivo</ns0:p></ns0:div>
<ns0:div><ns0:head>Status</ns0:head><ns0:p>Plerixafor (D'Alterio et al., 2012) CXCR4 0.97 Vincristine <ns0:ref type='bibr' target='#b71'>(Sawada et al., 2004)</ns0:ref> MAP kinase 0.93 L-Methionine <ns0:ref type='bibr' target='#b11'>(Clavo and Wahl, 1996)</ns0:ref> CDKN2A 0.93 Mebendazole <ns0:ref type='bibr' target='#b13'>(Doudican et al., 2008</ns0:ref> varying number of interaction steps. Precision is the percentage of returned candidates that have been validated experimentally or have been in a clinical trial (a 'hit') versus all candidates returned. Recall is the percentage of all known validated 'hits'. F-measure is the geometric mean of precision and recall that provides a balanced evaluation of the quality and completeness of the results.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Candidate Significance</ns0:head><ns0:p>Three drugs were identified that have not previously been studied for melanoma treatment. Framycetin, a CXCR4 inhibitor, has not previously been considered for melanoma treatment. While it is nephrotoxic when administered orally <ns0:ref type='bibr' target='#b20'>(Greenberg, 1965)</ns0:ref>, it is used topically as an antibacterial treatment. While it may not be of use for metastasis, it might serve as a simple, inexpensive prophylactic treatment after excision of primary tumors. Additionally, Lucanthone and Podofilox were identified as having potential effects on melanoma through CDKN2A and MAP kinase, respectively.</ns0:p><ns0:p>One drug we identified, Vemurafenib, is approved for treatment of late stage melanoma and has been shown to inhibit the BRAF protein in BRAF-V600 mutant melanomas <ns0:ref type='bibr' target='#b6'>(Chapman et al., 2011)</ns0:ref>. However, cells can become resistant to Vemurafenib, thereby leading to metastasis <ns0:ref type='bibr' target='#b44'>(Le et al., 2013)</ns0:ref>.</ns0:p><ns0:p>A number of the drugs we identified are in clinical trials for treatment of melanoma. We identified BRAF-oriented drugs, Dabrafenib <ns0:ref type='bibr' target='#b28'>(Hauschild et al., 2012</ns0:ref><ns0:ref type='bibr'>), Sorafenib (National Cancer Institute, 2005)</ns0:ref>, and Regorafenib (Istituto Clinico Humanitas, 2015), that have been evaluated in clinical trials, but have not yet been approved. Zidovudine, or Azidothymidine (AZT), is a TERT inhibitor that has shown significant melanoma tumor reductions in mouse models <ns0:ref type='bibr' target='#b32'>(Humer et al., 2008)</ns0:ref>. Three MAP kinase-related compounds, Vinblastine <ns0:ref type='bibr' target='#b50'>(Luikart et al., 1984)</ns0:ref>, Trametinib <ns0:ref type='bibr' target='#b36'>(Kim et al., 2012)</ns0:ref>, and Vinorelbine <ns0:ref type='bibr' target='#b79'>(Whitehead et al., 2004)</ns0:ref>, were identified that are in clinical trials for melanoma treatment. CDKN2A was another popular target, as Irinotecan <ns0:ref type='bibr' target='#b16'>(Fiorentini et al., 2009)</ns0:ref>, Topotecan <ns0:ref type='bibr' target='#b41'>(Kraut et al., 1997)</ns0:ref>, and Sodium stibogluconate <ns0:ref type='bibr' target='#b59'>(Naing, 2011)</ns0:ref> are all drugs in clinical trials that we identified as potential therapies.</ns0:p><ns0:p>Many other drugs were identified that are being studied in the lab. Additional drugs were identified that target the MAP kinase pathway, including Bosutinib <ns0:ref type='bibr' target='#b31'>(Homsi et al., 2009)</ns0:ref>, Purvalanol <ns0:ref type='bibr' target='#b75'>(Smalley et al., 2007)</ns0:ref>, Colchicine <ns0:ref type='bibr' target='#b48'>(Lemontt et al., 1988)</ns0:ref>, and Vincristine <ns0:ref type='bibr' target='#b71'>(Sawada et al., 2004</ns0:ref>). Podofilox has not yet been investigated in melanoma treatments, but preliminary investigations have focused on treating Chronic Lymphocytic Leukemia (CLL) <ns0:ref type='bibr' target='#b73'>(Shen et al., 2013)</ns0:ref> and Non-Small Cell Lung Cancer (NSCLC) <ns0:ref type='bibr' target='#b67'>(Peng et al., 2014)</ns0:ref>. Since these drugs attack MAPK2 and related proteins rather than BRAF or NRAS, they can potentially synergize with other treatments <ns0:ref type='bibr' target='#b31'>(Homsi et al., 2009)</ns0:ref>. Bosutinib in particular has been investigated as a synergistic treatment for melanoma <ns0:ref type='bibr' target='#b29'>(Held et al., 2012)</ns0:ref>.
Another possible treatment pathway is CXCR4 inhibition. Mouse models suggest that CXCR4 inhibitors like Plerixafor can reduce tumor metastasis and primary tumor growth <ns0:ref type='bibr' target='#b12'>(D'Alterio et al., 2012)</ns0:ref>. We identify both Plerixafor and Framycetin (Neomycin B) as useful CXCR4 inhibitors. Two PKRCA activators, Ingenol Mebutate and Ellagic Acid, were also identified. PKRCA binds with BRAF <ns0:ref type='bibr' target='#b64'>(Pardo et al., 2006)</ns0:ref>, but it is mechanistically unclear how PKRCA activation would result in treatment of melanoma. A number of other therapies are also notable. Purvalenol can inhibit GSK3β , which in turn activates TP53. Some, but not all, melanomas have TP53 deactivation <ns0:ref type='bibr' target='#b75'>(Smalley et al., 2007)</ns0:ref>. Nadroparin, a MYC inhibitor, may inhibit tumor progression <ns0:ref type='bibr' target='#b58'>(Nagy et al., 2009)</ns0:ref>. More broadly, heparins can potentially inhibit the metastatic process in melanoma and other cancers <ns0:ref type='bibr' target='#b54'>(Maraveyas et al., 2010)</ns0:ref>.</ns0:p><ns0:p>The approach that we present here offers a novel, mechanism-focused exploration to identify and examine drugs and targets related to cancer. This approach filters our noisy or poorly supported parts of the knowledge graph to identify more confident mechanisms between drugs, targets and diseases.</ns0:p><ns0:p>Thus, our approach can be used to explore high confidence associations that are produced as a result of large scale computational screens that use network connectivity <ns0:ref type='bibr' target='#b86'>(Yang and Agarwal, 2011;</ns0:ref><ns0:ref type='bibr' target='#b84'>Wu et al., 2013b;</ns0:ref><ns0:ref type='bibr' target='#b9'>Cheng et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b15'>Emig et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b27'>Harrold et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b83'>Wu et al., 2013a;</ns0:ref><ns0:ref type='bibr' target='#b78'>Vogt et al., 2014)</ns0:ref>, the complementarity in drug-disease gene expression, and the similarity of chemical fingerprints, side-effects, targets, or indications <ns0:ref type='bibr' target='#b86'>(Yang and Agarwal, 2011;</ns0:ref><ns0:ref type='bibr' target='#b87'>Ye et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b10'>Chiang and Butte, 2009;</ns0:ref><ns0:ref type='bibr' target='#b19'>Gottlieb et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b43'>Lamb et al., 2006;</ns0:ref><ns0:ref type='bibr' target='#b74'>Sirota et al., 2011)</ns0:ref>. Importantly, since we focus on protein networks that are strongly linked with diseases, we believe that our mechanism focused approach will also aid in the identification of disease-modifying drug candidates, rather than solely those that would be useful for the treatment of symptomatic phenotypes or related co-morbid conditions.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Architecture</ns0:head><ns0:p>ReDrugS uses a fairly straightforward web architecture, as shown in the accompanying architecture diagram, built around a Python-based TurboGears framework hosted using the Web Server Gateway Interface (WSGI) standard via an Apache HTTP server. TurboGears in turn hosts the SADI web services that drive the application and access the database.</ns0:p><ns0:p>It also serves up the static HTML and supporting files.</ns0:p></ns0:div>
<ns0:div><ns0:head>RDF Store</ns0:head><ns0:p>Python + Apache Web Server</ns0:p></ns0:div>
<ns0:div><ns0:head>ReDrugS API</ns0:head><ns0:p>Figure <ns0:ref type='figure'>6</ns0:ref>. The ReDrugS data flow. Data is selected from external databases and converted using scripts into nanopublication graphs, which are loaded into the ReDrugS data store. This is combined with experimental method assessments, expressed in OWL, and public ontologies into the RDF store. The web service layer queries the store and produces aggregate analyses of those nanopublications, which is consumed and displayed by the rich web client. The same APIs can be used by other tools for further analysis.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.3'>Limitations and Future Work</ns0:head><ns0:p>Our study has a some limitations. First, our study is limited by the sources of data used. We used 3 databases (DrugBank, iRefIndex, and OMIM) to construct the initial knowledge graph. These databases are continuously changing and necessarily incomplete with respect to the total number of drugs, targets, protein interactions, diseases, and disease genes. For instance, as of 8/15/2016 there are over 2000 additional FDA approved drugs in DrugBank than in the version that was initially used. Second, the focus of our work is on the potential repositioning of FDA approved drugs, which means that tens of thousands of chemical compounds with protein binding activity cannot be considered as candidates in the current study. Third, our path expansion is currently limited to pairwise protein-protein interactions, which excludes interactions as a result of protein complexes or regulatory pathways. Having a more sophisticated understanding of non direct interactions will help identify candidate drugs that can regulate entire pathways in a more rational manner. Additionally, we aim to incorporate knowledge of the complementarity of drug and disease gene expression patterns as evidenced by the Connectivity Map <ns0:ref type='bibr' target='#b43'>(Lamb et al., 2006)</ns0:ref>, which could suggest therapeutic and adverse interactions. Finally, as we develop new hypotheses about potential new drug effects, we plan to test them using a new three-dimensional cellular microarray to perform high throughput drug screening <ns0:ref type='bibr' target='#b46'>(Lee et al., 2008)</ns0:ref> with reference samples.</ns0:p><ns0:p>The integration of computational predictions and high throughput screening platform will enable the systematic evaluation of any drug or mechanism of action against any disease or adverse event.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>MATERIALS AND METHODS</ns0:head><ns0:p>This research project did not involve human subjects. The ReDrugS platform consists of a graphical web application, an application programming interface (API), and a knowledge base. The graphical web application enables users to initiate a search using drug, gene, and disease names and synonyms. Users can then interact with the application to expand the network at an arbitrary number of interactions away from</ns0:p></ns0:div>
<ns0:div><ns0:head>7/16</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:04:10366:1:1:NEW 9 Dec 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science the entity of interest, and to filter the network based on a joint probability between the source and target entities. Drug-protein, protein-protein, and gene-disease interactions were obtained from several datasets and integrated into ontology-annotated and provenance and evidence bearing representations called nanopublications. The web application obtains information from the knowledge base using semantic web services. Finally, we evaluated our approach by examining the mechanistic plausibility of the drug in having melanoma-specific disease modifying ability. We evaluated a large number of possible drug/disease associations with varying joint probabilities and interaction steps to determine the thresholds with the highest F-Measure, resulting in our thresholds of three or less interactions and a joint probability of 0.93 or higher.</ns0:p><ns0:p>Using the ReDrugS application page 1 , we initiate our search for 'melanoma', and select the first suggestion obtained from the Experimental Factor Ontology (EFO). 2 The application then provides immediate neighborhood of drugs and genes that are associated with melanoma. We expanded the network by first selecting the melanoma node and expanding the link distance to |I| ≤ 3 and the changing the minimum joint probability to p ≥ 0.93 in the search options. Importantly, we also limit the node type to 'Drug'. Finally, we click on the 'find incoming links' button (two left-facing arrows). When finished the network will show all drugs interacting with melanoma that meet the above criteria, as well as any intervening entities and their interactions. The resulting network can be downloaded as an image, or a summary CSV file. We used the CSV file to validate the links by searching Google Scholar and ClinicalTrials.gov for each proposed drug/disease combination. We consider a 'hit' to be a pairing with a published positive experiment in vivo or in vitro or any pairing that has been tested in a clinical trial.</ns0:p><ns0:p>While this level of validation does not guarantee efficacy, it does determine if the resulting connection is a plausible hypothesis that might be tested.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Data Fusion</ns0:head><ns0:p>We developed a structured knowledge base containing data pertaining to drugs, targets, interactions, and diseases. We used five data sources: iRefIndex <ns0:ref type='bibr' target='#b68'>(Razick et al., 2008)</ns0:ref> DrugBank <ns0:ref type='bibr' target='#b81'>(Wishart et al., 2006)</ns0:ref>, UniProt Gene Ontology Annotations (GOA) <ns0:ref type='bibr' target='#b5'>(Camon et al., 2004)</ns0:ref>, the Online Mendelian Inheritance in Man (OMIM) <ns0:ref type='bibr' target='#b25'>(Hamosh et al., 2005)</ns0:ref>, and the COSMIC Gene Census <ns0:ref type='bibr' target='#b17'>(Futreal et al., 2004)</ns0:ref>.</ns0:p><ns0:p>iRefIndex contains protein-protein interactions and protein complexes and is an amalgam of the Biomolecular Interaction Network Database (BIND) <ns0:ref type='bibr' target='#b0'>(Bader et al., 2003)</ns0:ref>, BioGRID <ns0:ref type='bibr' target='#b77'>(Stark et al., 2006)</ns0:ref>, the Comprehensive Resource of Mammalian protein complexes (CORUM) <ns0:ref type='bibr' target='#b69'>(Ruepp et al., 2010)</ns0:ref>, Database of Interacting Proteins (DIP), <ns0:ref type='bibr' target='#b85'>(Xenarios et al., 2002)</ns0:ref>, Human Protein Reference Database (HPRD), <ns0:ref type='bibr' target='#b35'>(Keshava Prasad et al., 2009)</ns0:ref>, InnateDB <ns0:ref type='bibr'>(Lynn et al., 2008)</ns0:ref>, IntAct <ns0:ref type='bibr' target='#b34'>(Kerrien et al., 2011)</ns0:ref>, MatrixDB <ns0:ref type='bibr' target='#b8'>(Chautard et al., 2011)</ns0:ref>, Molecular INTeraction database (MINT) <ns0:ref type='bibr' target='#b7'>(Chatr-aryamontri et al., 2008)</ns0:ref>, MPact <ns0:ref type='bibr' target='#b24'>(Güldener et al., 2006)</ns0:ref>, microbial protein interaction database (MPIDB) <ns0:ref type='bibr' target='#b18'>(Goll et al., 2008)</ns0:ref>, MIPS mammalian protein-protein interaction database (MPPI) <ns0:ref type='bibr' target='#b63'>(Pagel et al., 2005)</ns0:ref>, and Online Predicted Human Interaction Database (OPHID) <ns0:ref type='bibr' target='#b4'>(Brown and Jurisica, 2005)</ns0:ref>. DrugBank provides information about experimental/approved drugs and their targets, and UniProt GOA describes proteins in terms of their biological processes, cellular locations, and molecular functions. OMIM provides associations between genes and inherited or genetically-driven diseases. The COSMIC Gene Census is a curated list of genes that have causal associations with one or more cancer types.</ns0:p><ns0:p>Each association (e.g. drug-target, protein-protein, disease-gene) was captured using the nanopublication <ns0:ref type='bibr' target='#b22'>(Groth et al., 2010)</ns0:ref> scheme. A nanopublication is a digital artifact that consists of an assertion, its provenance, and information about the digital publication. Our nanopublications are represented as Linked Data: Each data item is identified using an dereferenceable HTTP Uniform Resource Identifier (URI) and statements are represented using the Resource Description Framework (RDF). Each nanopublication corresponds to a single interaction assertion from one of the databases. We used a number of automated scripts to produce the nanopublications and load them into the SPARQL endpoint. An example nanopublication is shown in Figure <ns0:ref type='figure'>7</ns0:ref>. 
We used the Semanticscience Integrated Ontology (SIO) <ns0:ref type='bibr' target='#b14'>(Dumontier et al., 2014)</ns0:ref> as a global schema to describe the nature and components of the associations, and coupled this with the PSI-MI Ontology <ns0:ref type='bibr' target='#b30'>(Hermjakob et al., 2004)</ns0:ref> to denote the types of interactions. We used the World Wide Web Consortium's Provenance Ontology (PROV-O) <ns0:ref type='bibr' target='#b45'>(Lebo et al., 2013)</ns0:ref> to capture provenance of the assertion (which data source it originated from). We loaded our nanopublications into</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Blazegraph, an RDF nanopublication compatible database. The data is accessed using its native SPARQL endpoint by the web application.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>7</ns0:ref>. Representation of a protein/protein interaction within a nanopublication. Three graphs are represented. The assertion graph (NanoPub 501799 Assertion), states that an interaction (X) is of type sio:DirectInteraction, and has the target of SLC4A8, and a participant of CA2. The supporting graph (NanoPub 501799 Supporting), states that the assertion graph was generated by a pull down experiment (one of many encoded experiment types used in , a subclass of prov:Activity. The attribution graph (NanoPub 501799 Attribution), in turn, states that the assertion had a primary source of <ns0:ref type='bibr' target='#b49'>(Loiselle et al., 2004)</ns0:ref> and that the interaction was quoted from BioGrid.</ns0:p></ns0:div>
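To make the "accessed using its native SPARQL endpoint" step concrete, here is an illustrative client-side query. The endpoint URL and the triple patterns are assumptions for the example, not the application's actual queries.

```python
# Illustrative SPARQL lookup of interaction assertions stored as named graphs.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://redrugs.tw.rpi.edu/sparql")  # assumed endpoint location
sparql.setQuery("""
    PREFIX sio: <http://semanticscience.org/resource/>
    SELECT ?interaction ?participant WHERE {
      GRAPH ?assertion {
        ?interaction a sio:DirectInteraction ;
                     sio:has-participant ?participant .
      }
    } LIMIT 10
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for row in results["results"]["bindings"]:
    print(row["interaction"]["value"], row["participant"]["value"])
```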
<ns0:div><ns0:head n='4.2'>Assertion Probability</ns0:head><ns0:p>Each knowledge graph fragment, enclosed in a nanopublication, is assigned a probability based on the quality of the methods used to create the assertions in the fragment. We compute probabilities based on two different methods. Manually curated assertions, from DrugBank, OMIM, and COSMIC Gene Census, are directly given a probability p = 0.999. Assertions that have been derived from a specific experimental method are given probabilities appropriate for that method. These probabilities are derived from an expert-driven measure of the reliability of the experimental method used to derive the association.</ns0:p><ns0:p>Factors involved in the assessment of confidence include the degree of indirection in the assay, the sensitivity and specificity of the approach, and reproducibility of results under different conditions based on the comparative analyses of techniques <ns0:ref type='bibr' target='#b62'>(Obenauer and Yaffe, 2004;</ns0:ref><ns0:ref type='bibr' target='#b76'>Sprinzak et al., 2003)</ns0:ref>. Two expert bioinformaticians rated the reliability of each method and assigned a score of 1-3, where 1 corresponds to low confidence and 3 to high confidence. After their initial assessment, they conferred on their reasoning for each score to resolve differences where possible. The experts considered level 1 to correspond to weak evidence that needs independent verification. Level 2 methods are generally reliable, but should have additional biological evidence. Level 3 methods are high-quality methods that produce few false positives. We calculated inter-annotator agreement between the two annotators over the three categories using Scott's Pi. Scott's Pi is similar to Cohen's kappa in that it improves on simple observed agreement by factoring in the extent of agreement that might be expected by chance. We determined the agreement following <ns0:ref type='bibr'>(Scott, 1955)</ns0:ref>.</ns0:p><ns0:p>The scores of 1, 2, and 3 were then assigned provisional probabilities of p = 0.8, p = 0.95, and p = 0.99 respectively. We chose these probabilities as approximations of the conceptual levels of probability for each rating by the experts, and feel that those probabilities correspond to how often an experiment at that confidence level can be expected to be accurate. We plan to provide a more rigorous assessment of the accuracy of each method against gold standards in future work. These confidence values were encoded into an OWL ontology along with the evidence codes. The full inferences were extracted using Pellet 3 and loaded into the SPARQL endpoint, where they were used to apply the probabilities to each assertion in the knowledge graph that had experimental evidence.</ns0:p></ns0:div>
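A small sketch of the two probability assignments described above, together with Scott's Pi for the two annotators, is given below. The mapping of confidence levels to probabilities follows the text; the example ratings are hypothetical and the code is not the authors' script.

```python
# Confidence levels from the expert assessment mapped to the provisional probabilities
# given in the text; manually curated assertions receive a fixed probability.
CONFIDENCE_TO_PROB = {1: 0.80, 2: 0.95, 3: 0.99}
CURATED_PROB = 0.999

def scotts_pi(ratings_a, ratings_b):
    """Chance-corrected agreement of two annotators rating the same items."""
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    categories = set(ratings_a) | set(ratings_b)
    expected = sum(((ratings_a.count(c) + ratings_b.count(c)) / (2 * n)) ** 2
                   for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of ten experimental methods by the two annotators.
annotator_1 = [3, 2, 2, 1, 3, 3, 2, 1, 2, 3]
annotator_2 = [3, 2, 1, 1, 3, 2, 2, 1, 2, 3]
print("Scott's Pi:", round(scotts_pi(annotator_1, annotator_2), 3))
print("probability assigned to a level-2 method:", CONFIDENCE_TO_PROB[2])
```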
<ns0:div><ns0:head n='4.3'>Semantic Web Services</ns0:head><ns0:p>We developed four Semantic Automated Discovery and Integration (SADI) web services <ns0:ref type='bibr' target='#b80'>(Wilkinson et al., 2009)</ns0:ref> in Python 4 to support easy access to the nanopublications (see Table <ns0:ref type='table' target='#tab_6'>2</ns0:ref>) in ReDrugS. The four services are enumerated in Table <ns0:ref type='table' target='#tab_6'>2</ns0:ref>.</ns0:p><ns0:p>The first service is a simple free-text lookup that takes a pml:Query 5 <ns0:ref type='bibr'>(McGuinness et al., 2007)</ns0:ref> with a prov:value as a query and produces a set of entities whose labels contain the substring. This is used for interactive typeahead completion of search terms so users can look up URIs and entities without needing to know the details.</ns0:p><ns0:p>The other three SADI services look up interactions that contain a named entity. Two of them look at the entity to find upstream and downstream connections, and the third service assumes that the entity is a biological process and finds all interactions related to that process. The services return only one interaction for each triple (source, interaction type, target). There are often multiple probabilities per interaction, and more than one interaction per interaction type. This is because the interaction may have been recorded in multiple databases, based on different experimental methods. To provide a single probability score for each interaction of a source and target, the interactions are combined. A single probability is generated per identified interaction by taking the geometric mean of the probabilities for that interaction. However, this method is undesirable when combining multiple interaction records of the same type. We instead combine the interaction records using a form of probabilistic voting using composite</ns0:p></ns0:div>
<ns0:div><ns0:p>Z-Scores. This is done to reflect that multiple experiments producing the same results reinforce each other, and should therefore give a higher overall probability than would be indicated by taking their mean or even by Bayes' Theorem. We do this by converting each probability into a Z Score (aka Standard Score) using the Quantile Function (Q()), summing the values, and applying the Cumulative Distribution Function (CDF()) to compute the corresponding probability:</ns0:p><ns0:formula xml:id='formula_0'>P(x_{1 \ldots n}) = \mathrm{CDF}\left(\sum_{i=1}^{n} Q\left(P(x_i)\right)\right)</ns0:formula><ns0:p>These composite Z Scores, which we transform back into probabilities, are frequently used to combine multiple indicators of the same underlying phenomena, as in <ns0:ref type='bibr' target='#b56'>(Moller et al., 1998)</ns0:ref>. It has a drawback, however. One concern is that the strategy does not account for multiple databases recording the same non-independent experiment. This can inflate the probabilities of interactions described by experiments that are published in more than one database.</ns0:p></ns0:div>
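The combination rule above maps directly onto a few lines of scipy. The sketch below assumes probabilities strictly between 0 and 1 (values of exactly 0 or 1 would need clamping before the quantile function is applied) and is only an illustration of the stated formula.

```python
# Composite Z-score combination of repeated observations of the same interaction:
# probabilities -> Z scores (quantile function) -> sum -> back to a probability (CDF).
from scipy.stats import norm

def combine_probabilities(probs):
    """probs: probabilities in (0, 1) for the same interaction from different experiments."""
    z_sum = sum(norm.ppf(p) for p in probs)   # Q(P(x_i)), summed
    return float(norm.cdf(z_sum))             # CDF of the composite Z score

# Two generally reliable experiments (p = 0.95 each) reinforcing each other yield a
# combined probability of about 0.9995, higher than either input on its own.
print(combine_probabilities([0.95, 0.95]))
```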
<ns0:div><ns0:head n='4.4'>Graph Expansion Using Joint Probability</ns0:head><ns0:p>In order to compute the probability that a given entity affects another, we compute the joint probability that each of the intervening interactions is true. Joint probability is the probability that every assertion in the set is true. This is computed by taking the product of the probabilities of each interaction:</ns0:p><ns0:formula xml:id='formula_1'>P(x_1 \wedge \ldots \wedge x_n) = \prod_{i=1}^{n} P(x_i)</ns0:formula><ns0:p>This joint probability is used as a threshold that users can set to stop graph expansion. We also provide expansion limits using the number of interaction steps that are needed to connect the two entities.</ns0:p></ns0:div>
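As a rough sketch of how the two limits could drive expansion, the toy graph and function below (both illustrative, not the ReDrugS implementation) stop exploring a path once its joint probability drops below the threshold or the step limit is reached.

```python
# Toy expansion: grow outward from a seed entity, multiplying edge probabilities,
# and prune paths whose joint probability falls below the threshold or that exceed
# the maximum number of interaction steps.
def expand(graph, seed, min_joint=0.93, max_steps=3):
    """graph: {entity: [(neighbor, probability), ...]}.
    Returns entities reachable within the limits, with the best joint probability found."""
    best = {seed: 1.0}
    frontier = [(seed, 1.0, 0)]
    while frontier:
        node, joint, steps = frontier.pop()
        if steps == max_steps:
            continue
        for neighbor, p in graph.get(node, []):
            new_joint = joint * p
            if new_joint >= min_joint and new_joint > best.get(neighbor, 0.0):
                best[neighbor] = new_joint
                frontier.append((neighbor, new_joint, steps + 1))
    return best

toy_graph = {
    "melanoma": [("CDKN2A", 0.999)],
    "CDKN2A": [("lucanthone", 0.95)],
}
print(expand(toy_graph, "melanoma"))
# {'melanoma': 1.0, 'CDKN2A': 0.999, 'lucanthone': 0.94905}
```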
<ns0:div><ns0:head n='4.5'>User Interface</ns0:head><ns0:p>The user interface was developed using the above SADI web services and uses Cytoscape.js, 6 angular.js, 7 and Bootstrap 3. 8 An example network is shown in Figure <ns0:ref type='figure'>2</ns0:ref>. Users can search for biological entities and processes, which can then be autocompleted to specific entities that are in the ReDrugS graph. Users can then add those entities and processes to the displayed graph, retrieve upstream and downstream connections, and link out to more details for every entity. Cytoscape.js is used as the main rendering and network visualization tool; it provides node and edge rendering, layout, and network analysis capabilities, and has been integrated into a customized rich web client.</ns0:p><ns0:p>In order to evaluate this knowledge graph, we developed a demonstration web interface 9 based on the Cytoscape.js 10 JavaScript library. The interface lets users enter biological entity names. As the user types, the text is resolved to a list of entities. The user finishes by selecting from the list, and submitting the search. The search returns interactions and nodes associated with the entity selected, which are added to the Cytoscape.js graph. Users are also able to select nodes and populate upstream or downstream connections. Figure <ns0:ref type='figure'>2</ns0:ref> is an example output of this process.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>ACKNOWLEDGEMENTS</ns0:head><ns0:p>A special thanks to Pascale Gaudet, who, with Michel Dumontier, evaluated the experimental methods and evidence codes listed in the Protein/Protein Interaction Ontology and Gene Ontology. Thank you also to Kusum Solanki and John Erickson for evaluation, feedback, and planning in the initial stages of this project.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Percentage of approved drugs in each of the categories of the Anatomic Therapeutic Classification (ATC) system.</ns0:figDesc><ns0:graphic coords='3,141.73,422.17,413.59,203.68' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Precision, recall, and f-measure by (A) varying thresholds for joint probability and (B) varying number of interaction steps. Precision is the percentage of returned candidates that have been validated experimentally or have been in a clinical trial (a 'hit') versus all candidates returned. Recall is the percentage of all known validated 'hits'. F-measure is the harmonic mean of precision and recall that provides a balanced evaluation of the quality and completeness of the results.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>Figure 4. It uses the Blazegraph RDF database backend. The database layer is interchangeable except that the full text search service needs to use Blazegraph-only properties to perform text searches as text indexing is not yet standardized in the SPARQL query language. All other aspects are standardized and should work with other RDF databases without modification. ReDrugs currently uses the Python-based TurboGears web application</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The ReDrugS software architecture. Using web standards and a three layer architecture (RDF store, web server, and rich web client), we were able to build a complete knowledge graph analysis platform.</ns0:figDesc><ns0:graphic coords='7,428.84,251.69,110.45,54.02' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. The authors demonstrate the ReDrugS user interface in the Collaborative-Research Augmented Immersive Virtual Environment (CRAIVE) Lab at RPI.</ns0:figDesc><ns0:graphic coords='7,141.73,520.16,413.53,151.84' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='4,141.73,63.78,413.56,285.56' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='10,141.73,100.25,413.58,286.83' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>(A) Information Retrieval by Probability Threshold</ns0:head><ns0:figDesc>Recoverable axis names: precision, recall, fmeasure over probability thresholds 0.9 to 0.6. Recoverable novel candidate rows (drug / target / probability): Framycetin / CXCR4 / 0.97; Lucanthone / CDKN2A / 0.93; Podofilox / MAP kinase / 0.93; a further candidate / CDKN2A / 0.93.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>ReDrugS components</ns0:head><ns0:figDesc>Components shown: iRefIndex; ReDrugS RDF Store; Analytical Tools; ReDrugS Cytoscape.js App; Ontological Resources (Protein/Protein Interaction Ontology, Semanticscience Integrated Ontology, Gene Ontology vocabularies and relationships); Experimental Method Assessment (confidence scores of experimental methods); interaction network search and expansion queries against the graph.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>The API endpoint prefix is http://redrugs.tw.rpi.edu/api/.</ns0:figDesc></ns0:figure>
</ns0:body>
" | "Tetherless World Constellation
James P. McCusker III, Ph.D.
[email protected]
http://tw.rpi.edu
Telephone: 860-255-8445
December 7, 2016
Dear Editors:
We thank the reviewers for their detailed feedback on our manuscript and have edited and
expanded the manuscript in response to their questions and comments.
We have expanded the methods section to document our methods in detail, and our discussion
section to present our software architecture and data loading methods. We also reformulated the
'Candidates Significance' subsection of Discussion to better highlight positive controls and the
novel drug candidates. We have also added data processing and architecture diagrams to the
paper to better explain how the system works, as well as one with summary statistics (Figure 1)
for the types of drugs included in the knowledge graph.
We believe that the manuscript is now suitable for publication in PeerJ Computer Science.
Sincerely,
James P. McCusker III, Ph.D.
Tetherless World Director, Data Operations
Editor's Decision
The work itself is very interesting, but the paper can be much improved. Please extend the paper
considerably by providing the details on the methodology, and a deep discussion on the results
(computationally and biologically). Also provide justification for the selection of parameters in
the data analytics - please refer to the reviewer's comments / requirements on this.
We have expanded the methodology from data ingestion and representation to how the
queries are post-processed and what frameworks are used to build the software. We have
expanded the discussion of the results and have reorganized them to reflect the most
interesting search results and their evaluation.
Reviewer 1
Basic reporting
This manuscript proposes an integrated network analysis system to detect novel drug-disease
associations. Various drug and disease related data sources were integrated and a probabilistic
measure to evaluate the reliability of medical entity links was utilized. Several high quality
melanoma drugs were generated and evaluated against recent literature.
Due to the complicated underlying network mechanisms of disease phenotypes and the
perturbation of drug treatment, it is difficult to find and confirm novel drugs for disease
treatment. This manuscript does provide an interesting computing approach to find novel drug
candidates for complex diseases.
Experimental design
This manuscript focuses on proposing a complex network analysis system for novel drug
detection for disease treatment. The authors have introduced the related data sources, data
fusion and network visualization functionalities. An instance of novel melanoma drug detection
is introduced and evaluated for reliability and novelty.
Validity of the findings
The performance of the generated results from the network analysis system has been evaluated
by the measures like precision, recall and F1 measure and the results showed that the system has
obtained acceptable performance. Furthermore, 25 detected melanoma drugs were validated by
related literature as well as a search for registered clinical trials in ClinicalTrials.gov.
Comments for the author
This manuscript describes an integrated network analysis system on drug-target-disease
associations to predict novel repositioning drugs. It integrated various drug and disease related
data sources, such as drug-target associations, genotype-phenotype associations and
protein-protein interactions and proposed a probabilistic approach to define the direct and
indirect associations between these entities. The platform consists of a graphical web
application, an application programming interface (API), and a knowledge base. The knowledge
graph was stored in semantic web schema and it can be visualized as interactive networks using
social network packages. The system has acceptable performance on predicting novel drugs for
diseases and, in particular, the platform has generated 25 high-quality melanoma drugs, which
have been cross-evaluated by literature query and the ClinicalTrials.gov database. Therefore, this is a
very interesting study that has delivered meaningful and biologically useful results on novel drug
indication prediction. Overall, an excellent work on network pharmacology.
The minor improvement would be that a clear description of the calculation of 'joint probability'
should be provided possibly in Section 4.2. Currently, there is only introduction of the principle
to set the weights on direct links. However, the weight (or probability) of the indirect links (i.e.
drug-disease) is a key development of the proposed system and thus should be described.
We have added additional methods discussion in Section 4.2 on direct link weight
determinations, and added a description of how we combine weightings from multiple
experiments in Section 4.3. We also added Section 4.4 that describes how we use joint
probability to limit automated graph expansion.
Reviewer 2
Basic reporting
The structure of the paper looks reasonable. The paper is easy to follow.
Experimental design
The goal of the paper is to use an evidence-based (in the form of probability) knowledge graph
to help identify novel and known drugs for melanoma. The probability is computed based on
so-called expert-driven measures of the reliability of experimental methods that come from existing
studies. In order for others to reproduce the results, it is necessary to provide the detailed formula
for computing and combining the mentioned measures in the final draft of the article. Also, little
justification was provided for the rather high probability (0.95) for the second confidence level.
We have added a deeper discussion in Section 4.2 on what the different confidence levels
were interpreted as by our experts, and how we in turn interpret those confidence levels
into probabilities.
Validity of the findings
The presented knowledge graph uses 5 data sources: iRefIndex, DrugBank, UniProt Gene
Ontology Annotations, the Online Mendelian Inheritance in Man, and the COSMIC Gene
Census. Associations are represented as ontology-based linked data. However, it is unclear how
such associations are captured. Some important justifications are still missing, including the two
thresholds of interaction steps (3) and joint probability value (0.93).
We have included an expanded discussion of how we arrived at our two thresholds in
Results, paragraphs 4-5, where we also reference information retrieval curves that vary
each threshold and report the resulting precision, recall, and f-measure.
Comments for the author
The paper presents a seemingly very promising approach; however, some key details are not
clarified in the current draft of the paper.
We have clarified the details in question and expanded both the discussion and methods
to make it clear how the experiments were performed, the knowledge graph was created,
and how searches and analysis are done.
" | Here is a paper. Please give your review comments after reading it. |
724 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Combinatorial search problems are found in many different areas. The optimizer needs to scan a large portion of the available solutions in the search space during the search process. Such problems are usually classified as NP-Hard and require an efficient search algorithm to maintain solution quality within a reasonable execution time.</ns0:p><ns0:p>Team selection is one typical example. The problem becomes more complicated when multiple goals must be achieved in the decision-making process. This study introduces a model for selecting multiple cross-functional teams with different skill requirements from candidates, so that the selected groups best satisfy the required skills in both the deep and the wide aspects.</ns0:p><ns0:p>Compromise programming is used to approach the formulated multi-objective optimization problem. We designed metaheuristic algorithms to solve the proposed model, including a genetic algorithm (GA) and ant colony optimization (ACO). We also compare the developed algorithms with the MIQP-CPLEX solver on a dataset of 500 programming contestants with 37 skills and on several datasets with randomized distributions to evaluate their effectiveness.</ns0:p><ns0:p>Experimental results show that the proposed algorithms outperform CPLEX in several aspects of the assessment, including solution quality and execution time. The proposed approach, based on the combination of compromise programming and metaheuristics, also demonstrates its ability to support decisions for different scenarios on the evaluation datasets when compared with an MOEA algorithm.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head></ns0:div>
<ns0:div><ns0:head>A. Background</ns0:head><ns0:p>In management, operational research and other fields, cross-functional team (CFT) selection <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref> is an area of interest. Selecting the right teams for the jobs brings success to the organization. A cross-functional team is defined as a group of suitable candidates who have excellent personal skills and can collaborate and support each other in their work. This team's skills are multidisciplinary from many different fields such as biology, math, mechanics, technology, and more. This study is follow-up work from 2020 <ns0:ref type='bibr' target='#b4'>[2]</ns0:ref>to develop a methodology for selecting CFTs from available candidates in the organization. The team selection problem is classified into NP-Hard and combinatorial optimization <ns0:ref type='bibr'>[1...5]</ns0:ref>. In a single team selection problem, the solution is represented as where stands for the number of candidates, 𝑋 = {𝑥 𝑖 |𝑥 𝑖 = (0,1), 𝑖 = 1…𝐾} 𝐾 𝑥 𝑖 = 1 means that the student is assigned to the team and otherwise. To select a team with 𝑖 𝑡ℎ 𝑥 𝑖 = 0 ℎ members from candidates. The number of available solutions is up to <ns0:ref type='bibr' target='#b4'>[2]</ns0:ref>. 𝐶 In practice, the number of groups to be selected is usually more than one, corresponding to different tasks. An increase in the number of candidates or team size or the number of formulated teams significantly increases the search space. In this study, our goal is to choose teams simultaneously 𝐺 to satisfy multiple objectives of business requirements from limited resources. The number of available solutions in the search space is:</ns0:p><ns0:p>(</ns0:p><ns0:formula xml:id='formula_0'>ℎ 1 𝐾 )( ℎ 2</ns0:formula><ns0:p>𝐾 -ℎ 1 )(</ns0:p><ns0:formula xml:id='formula_1'>ℎ 3 𝐾 -ℎ 1 -ℎ 2 ) … ( ℎ 𝐺 𝐾 - 𝐺 -1 ∑ 𝑖 = 0 ℎ 𝑖 ) = 𝐺 ∏ 𝑔 = 1 ( ℎ 𝑔 𝐾 - 𝑔 -1 ∑ 𝑖 = 0 ℎ 𝑖 )</ns0:formula><ns0:p>where denotes the team size of the group and . The solution is represented as a graph ℎ 𝑔 𝑔 𝑡ℎ ℎ 0 = 0</ns0:p><ns0:p>, where V represents the set of candidates and Groups. Each existing edge in E ℋ = (𝑉, 𝐸) 𝐶 𝐺 illustrates the assignment of the corresponding candidate to the group. Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> shows an example of the CFTs selection problem.</ns0:p></ns0:div>
<ns0:div><ns0:head>B. Related Research</ns0:head><ns0:p>The optimization problems in modern team selection are usually considered to achieve many goals following business requirements. Therefore, the considered problem encounters the difficulties of the classical problem and the multi-objective optimization (MOP) problem. The desired goals are often performance, cost, or benefit aspects. The selected members/ teams must cooperate to solve common problems to achieve a specific purpose. Employers need to maximize profits when selecting team members from available candidates <ns0:ref type='bibr' target='#b5'>[3]</ns0:ref>. Ahmed et al. provide a MOP to choose their cricket team. The model uses objective functions to access three aspects of team performance: batting, bowling, and fielding <ns0:ref type='bibr' target='#b6'>[4]</ns0:ref>. In <ns0:ref type='bibr' target='#b8'>[5]</ns0:ref>, Chand et al. also uses the MOP model to select a cricket team with similar objective functions as <ns0:ref type='bibr' target='#b6'>[4]</ns0:ref> but with a different goal of minimizing cost. Toledano et al. introduced a bi-objectives optimization to their optimizer, including team valuation and cost <ns0:ref type='bibr' target='#b9'>[6]</ns0:ref>. Another research also introduced the bi-objectives problem is <ns0:ref type='bibr' target='#b4'>[2]</ns0:ref>. The model aims to access both aspects of team performance to formulate a new team with many skills and mastering skills. Their approaches are different, while <ns0:ref type='bibr' target='#b9'>[6]</ns0:ref> uses a dominance-based algorithm to find solutions on praetor-frontier, <ns0:ref type='bibr' target='#b4'>[2]</ns0:ref> uses Compromise programming to define a compromise solution close to the most referential point. In <ns0:ref type='bibr' target='#b12'>[7]</ns0:ref>, an optimization model is designed to form a team of players for football clubs maximizes the profits from transferring players in degrees. It consists of maximizing the expected net present value of the group, which includes the value of the players owned minus the money spent for buying and borrowing players and paying salaries, plus the income generated by selling and lending players. The problem with these studies is that they were used to select one group. Selecting multiple teams leaves no choice but to resolve the issues repeatedly by eliminating the selected candidates. It leads to the chosen later groups not being treated fairly. There are numerous existing techniques for MOP. There are numerous existing techniques for MOP. The most prevalent strategy for categorizing the methods is separating them into nopreference, preference, posteriority and interactive <ns0:ref type='bibr' target='#b13'>[8]</ns0:ref>. The comparison between them is listed in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. The no-preference approach assumes that there is no existing decision-maker. A compromised solution is identified without preference information. Preference is the opposite of non-preference. It is applicable in cases where the decision-maker has predefined information available to provide a trade-off between objective functions. The posteriority method provides set of Pareto optimal solutions is first found and then allows the decision-maker to select one. The interactive methods involve different types of preference information at each iteration of the algorithm. Psychological convergence is often applied in interactive methods instead of mathematical convergence. 
There is a need for a suitable approach for the context when the decision-maker may/may not have predefined information for the trade-off between multiple objectives <ns0:ref type='bibr' target='#b14'>[9]</ns0:ref> in the decision-making process. The resolution techniques for MOP and combinatorial optimization includes mathematical approach and Metaheuristics. In practice, metaheuristics are more suitable for large-scale applications. Chand et al. build an algorithm based on determining the upper and lower bounds of the constraints <ns0:ref type='bibr' target='#b8'>[5]</ns0:ref>. The solutions are then determined based on the exhaustive method. This prosed algorithm has fewer available solutions to scan than pure exhaust, but it may not be feasible for large-scale problems. Evolutionary algorithms (EA) and their version for MOP (MOEA) is often used to address the MOP and combinatorial optimization, including team selection problem. Ahmed et al. designed an NSGA-II algorithm to search for a Pareto-frontier in 3D space <ns0:ref type='bibr' target='#b6'>[4]</ns0:ref>. Zhao and Zhang developed some metaheuristic algorithms for team foundations <ns0:ref type='bibr' target='#b15'>[10]</ns0:ref>. They conclude D-PSO produced better results compared to GA and original PSO in the significant data context. Bello et al. Build an ACO algorithm to select a team based on the preferences of the two decisionmakers <ns0:ref type='bibr' target='#b16'>[11]</ns0:ref>. They perform experiments on small data sets (20 candidates). State-of-the-art allows EAs to solve the traditional combinatorial optimization problems with high-quality solutions and reasonable execution time. The MOEA approach yields results that lie on the Pareto frontier. It allows decision-makers to work with different scenarios. However, the execution time of these algorithms is often high in practical problems. Both EA and MOEA have been widely used to solve MOPs because of their advantages by obtaining a set of solutions present in a solution process. They can deal with non-convex Pareto fronts and different types of variables. When designing EA algorithms, the designers do not have to consider assumptions about the convexity and/or separability distinction on the objective function and constraints. Besides these advantages, EAs do not ensure the convergence of optimum solutions. They may use a lot of costly function evaluation, which increases execution time of the algorithm. This limitation is particularly crucial when tackling computationally expensive tasks. Therefore, it is vital to design an EA scheme to acquire solutions at accepted execution time without affecting the quality of the solutions <ns0:ref type='bibr' target='#b17'>[12]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>C. Contributions</ns0:head><ns0:p>This study presents an optimizer for selecting multiple groups from candidates who match the criteria in skills, which is an improvement from previous research <ns0:ref type='bibr' target='#b4'>[2]</ns0:ref>. The proposed multiobjective optimization model allows the selection of groups. Various aspects of team members' 𝐺 skills are set as goals to be achieved by the optimizer. We use the compromise programming (CP) approach for MOP. We propose GA and ACO schemes to solve the proposed model. To evaluate the efficiency of the algorithms, we compare them with CPLEX's MIQP-solver. Our study suggests a new variant of team selection problems. It benefits researchers in the field of management, as well as in empirical research on combinatorial search. This research also contributes to our proposed methodology for Multi objectives scheduling and planning problems <ns0:ref type='bibr' target='#b18'>[13]</ns0:ref>. The rest of this paper is organized as follows. The proposed model and algorithm are described in Sections 2 and 3. To evaluate the proposed approach, we show the experiments and discussion in Section 4. Finally, Section 5 offers a conclusion. In <ns0:ref type='bibr' target='#b4'>[2]</ns0:ref>, Ngo et al. proposed a two-objective model for selecting their team: 'deep' -candidates who are well-versed in the skills they know, and 'wide' -the selected candidates to have the number of skills, the more, the better. This is still suitable in the context of choosing a good team. However, we adjust accommodate the multi-groups selection model. The objective functions can be defined as follows:</ns0:p></ns0:div>
<ns0:div><ns0:head>Proposed Optimization Model</ns0:head></ns0:div>
<ns0:div><ns0:head>A. MOP-TS Model</ns0:head><ns0:p> To select candidates who are fluent in required skills by group:</ns0:p><ns0:formula xml:id='formula_2'>𝑚𝑎𝑥 ( 𝑓 𝑑𝑒𝑒𝑝 𝑔 (𝑋) = 𝐾 ∑ 𝑖 = 1 ( 𝑆 ∑ 𝑠 = 1</ns0:formula><ns0:p>𝑥 𝑖,𝑔 * 𝑁 𝑔,𝑠 * 𝑉 𝑖,𝑠 ) )</ns0:p><ns0:formula xml:id='formula_3'>∀ 𝑔 = 1,…,𝐺</ns0:formula><ns0:p> To select candidates who know many required skills by the group:</ns0:p><ns0:formula xml:id='formula_4'>𝑚𝑎𝑥 ( 𝑓 𝑤𝑖𝑑𝑒 𝑔 (𝑋) = 𝐾 ∑ 𝑖 = 1 𝑆 ∑ 𝑠 = 1 min (1,𝑥 𝑖,𝑔 * 𝑉 𝑖,𝑠 * 𝑁 𝑔,𝑠 ) ) ∀ 𝑔 = 1,…,𝐺</ns0:formula></ns0:div>
<ns0:div><ns0:head>Subject to:</ns0:head><ns0:p> No candidate can join more than one group:</ns0:p><ns0:formula xml:id='formula_5'>𝐺 ∑ 𝑔 = 1 𝑥 𝑖,𝑔 ≤ 1 ∀ 𝑖 = 1,…,𝐾<ns0:label>(𝐶1)</ns0:label></ns0:formula><ns0:p> No group is over team size:</ns0:p><ns0:formula xml:id='formula_6'>𝐾 ∑ 𝑖 = 1 𝑥 𝑖,𝑔 = 𝑍 𝑔 ∀ 𝑔 = 1,…,𝐺<ns0:label>(𝐶2)</ns0:label></ns0:formula><ns0:p> Selected groups must respect the minimum requirement on the skills:</ns0:p><ns0:formula xml:id='formula_7'>𝐾 ∑ 𝑘 = 1</ns0:formula><ns0:p>𝑥 𝑘,𝑔 * 𝑉 𝑘,𝑠 ≥ 𝐿 𝑔,𝑠 ∀ 𝑔 = 1,…,𝐺; 𝑠 = 1,..,𝑆 (𝐶3)</ns0:p></ns0:div>
<ns0:div><ns0:head>B. Compromise Programming to MOP-TS</ns0:head><ns0:p>The idea of Compromise programming (CP) <ns0:ref type='bibr' target='#b20'>[14]</ns0:ref> is based on not utilizing any preference information or depending on assumptions about the relevance of objectives. The approach does not strive to discover numerous Pareto optimum solutions. Instead, the distance between a reference point and the feasible objective region is reduced to identify a single optimal solution. There are many research that have used CP to their MOPs such as university timetabling that including examination timetabling <ns0:ref type='bibr' target='#b21'>[15]</ns0:ref>, teaching task assignment <ns0:ref type='bibr' target='#b22'>[16]</ns0:ref>, student enrollment timetabling <ns0:ref type='bibr' target='#b24'>[17]</ns0:ref>, knowledge-based recommender <ns0:ref type='bibr' target='#b25'>[18]</ns0:ref>, project task assignment <ns0:ref type='bibr' target='#b26'>[19]</ns0:ref>. There are several methods used to select the preferred point <ns0:ref type='bibr' target='#b27'>[20]</ns0:ref> or normalize the distance function <ns0:ref type='bibr' target='#b28'>[21]</ns0:ref>. Ngo et al <ns0:ref type='bibr' target='#b4'>[2]</ns0:ref>. did not standardize their compromise objective function. This causes deep or wide targets to be biased in distance calculation. The method of selecting a referential point or normalizing the distance function has many variations. The literature suggests that the weighted metrics are used for measuring the 𝐿 𝑝 distance of any solution from the reference point. The ideal objective vector is often used as the reference point: </ns0:p><ns0:formula xml:id='formula_8'>𝑤 𝑖 = 1</ns0:formula><ns0:p>In our case, the and are easy to pre-calculated as:</ns0:p><ns0:formula xml:id='formula_9'>𝑧 * 𝑧 𝑤𝑜𝑟𝑠𝑡  𝑧 𝑤𝑜𝑟𝑠𝑡 = {𝑑 𝑚𝑖𝑛 |𝑐 𝑚𝑖𝑛 }  𝑧 * = {𝑑 𝑚𝑎𝑥 |𝑐 𝑚𝑎𝑥 }  𝐹 = {𝑓 𝑑𝑒𝑒𝑝 |𝑓 𝑤𝑖𝑑𝑒 }</ns0:formula><ns0:p>Where: The number dimensional space of the solution is where two paired consecutive elements 𝐿 = 2 * 𝐺 represent the interesting of the particular team.</ns0:p><ns0:formula xml:id='formula_10'> 𝑑 𝑚𝑖𝑛 = {𝑑 𝑚𝑖𝑛 𝑔 |𝑑 𝑚𝑖𝑛 𝑔 = ∑ 𝑆 𝑠 = 1 ( ∑ 𝑍 𝑔 𝑖 = 1 𝑁 𝑔,𝑠 * 𝑃 𝑠,𝑖 ) ,𝑔 = 1,…,𝐺}  𝑐 𝑚𝑖𝑛 = {𝑐 𝑚𝑖𝑛 𝑔 |𝑐 𝑚𝑖𝑛 𝑔 = ∑ 𝑆 𝑠 = 1 min (1,𝑃 𝑠,1 * 𝑁 𝑔,𝑠 ) ,𝑔 = 1,…,𝐺}  𝑑 𝑚𝑎𝑥 = {𝑑 𝑚𝑎𝑥 𝑔 |𝑑 𝑚𝑎𝑥 𝑔 = ∑ 𝑆 𝑠 = 1 ( ∑ 𝑍 𝑔 𝑖 = 1 𝑁 𝑔,𝑠 * 𝑅 𝑠,𝑖 ) ,𝑔 = 1,…,𝐺}  𝑐 𝑚𝑎𝑥 = { 𝑐 𝑚𝑎𝑥 𝑔 │ 𝑐 𝑚𝑎𝑥 𝑔 = ∑ 𝑆 𝑠 = 1 ∑ 𝑍 𝑔 𝑖 = 1 min (1,𝑅 𝑠,1 * 𝑁 𝑔,𝑠 ) , 𝑔 = 1,…,</ns0:formula></ns0:div>
<ns0:div><ns0:head>Proposed Algorithms</ns0:head><ns0:p>The Metaheuristic algorithms are often used to solve Combinatorial Optimization and NP-Hard problems. This section describes the design of two evolutionary algorithms <ns0:ref type='bibr' target='#b29'>[22]</ns0:ref>, including the Genetic algorithm (TS-GA) and ant colony optimization (TS-ACO), to solve the proposed model.</ns0:p></ns0:div>
<ns0:div><ns0:head>A. Genetic Algorithm</ns0:head><ns0:p>GA is inspired by natural evolution. Its operations include selection, crossover, and mutation. The algorithm begins with a random population, with each representing a solution for the problem. The optimal solution is obtained through the adaptation of the new generations. The solutions quality improvement is evaluated by their fitness values. The GA scheme is illustrated in Figure <ns0:ref type='figure'>2</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>List<List<Integer>> of items each of stores the indexes of selected candidates as Figure <ns0:ref type='figure'>3</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>𝐺 𝑝 𝑔</ns0:head><ns0:p>This mechanism allows to reduce the size of original . 𝑋 The detail of the steps, as shown in Figure <ns0:ref type='figure'>2</ns0:ref>, is described as follow:  To stop algorithm, we defined a condition that if after generations, the system cannot find 𝛼 any better solutions. Otherwise, continue to selection phase.</ns0:p></ns0:div>
<ns0:div><ns0:head>B. Ant Colony Optimization for Multiple Team Selection</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66364:2:0:NEW 6 Apr 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The ant colony optimization algorithm is a technique to solve optimization problems. Using the multi artificial ants can find the right paths on the graphs. The behavior of real ants inspires these ants. They communicate with each other using the pheromone. We apply the similar data structure to represent the artificial ant (solution) and fitness function are represented as the same as GA. Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref> shows the flow chart of proposed ACO algorithm. The detail of each step of the ACO is described as following:</ns0:p><ns0:p> Initialize the cost matrix: we create matrix where represented for the cost</ns0:p><ns0:formula xml:id='formula_11'>𝐶 ∈ ℝ 𝐾 × 𝐺 𝐶 𝑘,𝑔</ns0:formula><ns0:p>if candidate chosen to group as following:</ns0:p><ns0:formula xml:id='formula_12'>𝑘 𝑡ℎ 𝑔 𝑡ℎ 𝐶 𝑘,𝑔 = 2 ∑ 𝑧 = 1 𝑤 𝑔 * 𝑧 | 𝑀 𝑧(𝑘,𝑔) -𝑧 * 𝑔 𝑧 𝑤𝑜𝑟𝑠𝑡 𝑔 -𝑧 * 𝑔 | 2</ns0:formula><ns0:p>Where:</ns0:p><ns0:formula xml:id='formula_13'>𝑀 1 (𝑘,𝑔) = 𝑆 ∑ 𝑠 = 1 (𝑁 𝑔,𝑠 * 𝑉 𝑘,𝑠 )</ns0:formula><ns0:p>and</ns0:p><ns0:formula xml:id='formula_14'>𝑀 2 (𝑘,𝑔) = 𝑆 ∑ 1 (min (1,𝑁 𝑔,𝑠 * 𝑉 𝑘,𝑠 ))</ns0:formula><ns0:p> Initialize the pheromone matrix: we create matrix where . Predictably, the computation time of GA will be better than ACO. In the next demonstration, we will use execution time (CPU time) to illustrate this. </ns0:p><ns0:formula xml:id='formula_15'>𝑇 ∈ ℝ 𝐾 × 𝐺 𝑇 𝑘,𝑔 = 1 𝐶 𝑖,𝑗 𝐶 𝑘,𝑔 = 2 ∑ 𝑧 = 1 𝑤 𝑔 * 𝑧 | 𝑀 𝑧(𝑘,𝑔) -𝑧 * 𝑔 𝑧 𝑤𝑜𝑟𝑠𝑡 𝑔 -𝑧 * 𝑔 | 2 𝑀 1 (𝑘,𝑔) = 𝑆 ∑ 𝑠 = 1 (𝑁 𝑔,𝑠 * 𝑉 𝑘,𝑠 )𝑀 2 (𝑘,𝑔) = 𝑆 ∑ 1 (min (</ns0:formula></ns0:div>
<ns0:div><ns0:head>Experiments and Results</ns0:head></ns0:div>
<ns0:div><ns0:head>A. Experimental Design</ns0:head><ns0:p>To evaluate the performance of the proposed algorithms on real-world dataset. We use the dataset of 500 programming contestants provided by <ns0:ref type='bibr' target='#b4'>[2]</ns0:ref>. Contestants tackle programming exercises, each of which is tagged into different types of exercises in 37 categories. Figure <ns0:ref type='figure' target='#fig_6'>6</ns0:ref> shows the statistical numbers corresponding to the available skills in the dataset. Each skill has a minimum score of 0, and a max score is not more significant than 160,000 except 'special,' which is plotted to the rightvertical axis. It has a maximum score of above 1,000,000, and few candidates got above 160,000 scores in this skill. Some skills have many high-score members like 'implementation' with the most extraordinary median is 8354 -it means half of the candidates achieved more than that-and right below is 6834 median score of 'math'. In opposition, many skills limit the number of applicants, ex: '2-stat' have not more than 100 candidates scored in this skill before. Moreover, the maximum score of 'schedules' is 714, and only 93 of 500 candidates have experience. The' special' is a particular skill because it has the highest maximum score, and the gap between the best and the worst is huge when not more than 250 candidates have scored in this skill. We display the dataset with a 2-dimensional space using t-SNE to find a similar probability distribution of the contestants in low-dimensional space, as shown in The quality of the solution of the Metaheuristics depends a lot on factors such as the distribution of the data parameters. Since the algorithms are designed based on stochastic operations, it is not easy to evaluate the algorithmic complexity. Besides using the benchmarking dataset, we also generated 50 random datasets based on different distributions to compare the solution quality based on the statistical method. The 50 datasets include 300 candidates, and 30 skills are randomized based on one of the six random distributions Hypergeometric, Poisson, Exponential, Gamma, Student, and Binomial using python scipy library. In more detail, we randomize the scores of 300 candidates in 1 skill according to the selected distribution and take turns with the remaining skills to form a data set.</ns0:p><ns0:p>For distributions with parameters like Poisson's, this parameter is also random in the interval (1,10) by python's random. Experiments are performed on computers configured according to Table <ns0:ref type='table' target='#tab_6'>3</ns0:ref>. Metaheuristics algorithms operate according to user customization through parameters. They significantly affect the performance of algorithms. For example, one can bulk order search agents to increase the likelihood of finding a better-quality solution. However, this increases the execution time. We calibrate the parameter values shown in Table <ns0:ref type='table' target='#tab_7'>4</ns0:ref> used in this experiment by re-executing the algorithm several times. </ns0:p></ns0:div>
<ns0:div><ns0:head>B. Results</ns0:head><ns0:p>The results are grouped into three subsections. The first part compares the algorithm designed with the designed GA by Ngo et al. <ns0:ref type='bibr' target='#b4'>[2]</ns0:ref> for a single team selection problem. The results of multiple team selection are displayed in the second subsection. To assess the performance of the compared algorithms, we consider the aspects of the quality of the optimal solutions, processing time, and dealing with different decision scenarios. The final subsection compares metaheuristics and the exact algorithm on randomly generated datasets.</ns0:p></ns0:div>
<ns0:div><ns0:head>1) Single Team Selection</ns0:head><ns0:p>In <ns0:ref type='bibr' target='#b4'>[2]</ns0:ref>, the author compared GA algorithms (called GA-1), and DCA where their GA-1 showed superior results when selecting three members from the tested dataset. To evaluate the quality of proposed algorithms, we compare our designs to GA-1 in solving the single team selection problem using the same objective function. The results in Table <ns0:ref type='table'>5</ns0:ref> show that CPLEX could not find a solution from 500 candidates on the tested machine due to the out-of-memory error. Even when the number of candidates is 300, the objective value obtained by CPLEX was the worst. GA-1 is designed to solve the single team selection problem, so there are many differences compared to GA. GA-1 is quite different from the popular versions of the GA algorithm. When creating new generations, GA-1 creates the individual by selecting each candidate until there are enough teams from the parent pair or the entire data (in the case of mutations). The candidates are selected based on a ratio depending on the parameters (dom, rec, mut), respectively, selecting suitable candidates, choosing bad candidates, and randomly selecting any one candidate. This results in a pair of parents being less likely to produce children as the best and worst candidates are prioritized. This method requires evaluating the goodness of a candidate for the team based on each candidate's skills. To apply to the problem of multiple team selection will need to add a way to evaluate the candidate's goodness for each group -because of the team's requirements. Each group is different. This idea is applied to develop both proposed algorithms. Another point that makes the population diversity of GA-1 lower than GA is that GA-1 selects parents from a proportion of individuals with the best fitness. So, the results of GA-1 tend to converge very soon, especially when the mutation rate is low, like in the experiment (The author chooses the mutation rate of 0.1). Meanwhile, in GA-1, candidates are selected randomly from the candidates of a pair of parents selected from the entire population. In contrast, the proposed operation involves the whole candidates to assign them to the groups, so the diversity of the population in GA is much higher than in GA-1. This mechanism is vital when individuals of GA are G times of combinatorics more complex than GA-1, and the search space of GA is also much superior. For mutations, GA changed an individual to an entirely new individual at random, not just changing a few candidates like GA-1. With such large randomness, GA also needs techniques to keep genes as good as GA-1, which keeps the best number of individuals, not allowing them to be changed when coming to new generations. Another big difference between the two algorithms is that GA-1 removed all solutions that violated the constraints. GA applied a penalty to their fitness value (keep the violated solutions but assign them the worst fitness of 1). This difference contributed to the increase in the diversity of GA compared with GA-1. Both proposed GA and ACO can find the quality solution as GA-1 for single team selection. Table <ns0:ref type='table'>5</ns0:ref>: The results of different algorithms to select a single group that requires 37 skills on the tested dataset.</ns0:p></ns0:div>
<ns0:div><ns0:head>2) Multiple Team Selection</ns0:head><ns0:p>As mentioned in section 2.2, the original objective functions are transformed into a distance function from the actual solution point to the ideal point. In this experiment, we use the query in Figure <ns0:ref type='figure' target='#fig_9'>8</ns0:ref>. It shows the query that is used in the experiment. The number of selected groups is 3, along with the required skills required for each group. We use indexes of skills in arrays instead of listing their names. This query is used in next parts of the section. The heat map illustrates the minimum required scores that need to archive by the selected team on the skills. No required skills are displayed in white. Figure <ns0:ref type='figure' target='#fig_10'>9</ns0:ref> represents the results of 15 times execution of each algorithm's GA, ACO, and old GA, GA using the previous search operations in <ns0:ref type='bibr' target='#b4'>[2]</ns0:ref> to eliminate individuals violating the constraint. (A) and (C) is the fitness values of final solutions on scales of 300 and 500 candidates. As we can see, GA always has a better median and best solution than Old GA and ACO. Old GA has a similar distribution of results compared to GA because the difference in the two algorithms is not significant but enough to make the results of GA much better than Old GA. Regarding ACO, although finding the best solution is better than Old GA, just equal to GA, in all 15 runs, the results of ACO are not as stable as Old GA and GA, so the median of ACO is not as good as Old GA. In terms of time execution, Old GA, and GA easily outperformed ACO, and at the same time, due to the mechanism of removing solutions that violate constraints, Old GA ran slower than GA. CPLEX has results in both fitness value and timely execution that are overwhelmed by the three algorithms above. So we can conclude that GA is better than Old GA and the algorithms we design are superior to CPLEX based on the this statistical result. Figure <ns0:ref type='figure' target='#fig_11'>10</ns0:ref> illustrates the performances of tested algorithms in different sizes of 50,100,300, and 500 candidates. The ACO is configured to achieve similar search results as GA. Calculations and updates on cost and pheromone matrices are increasingly expensive, significantly as the size of the search space expands. This setting leads the processing time of ACO to be a few times higher than GA, and this cost is increasing, corresponding to the increment of the candidate's size (Respectively 2 and 6 times for the system size of 300 and 500 candidates, although ACO took less computation time for smaller scales of the system). The search operations of ACO require very ant chooses its path by calculating the probability of each member to be selected for the groups in each iteration. Meanwhile, GA does not have to scan the whole candidates for mutation or crossover. The mechanism makes ACO takes a longer time to finish an iteration, but it allows the ACO to have more search capabilities over a larger space provided the computational resources are expanded. Both GA and ACO are popular metaheuristic algorithms. Like many other metaheuristic algorithms, they have plenty of ways to implementations. Our ACO had some improvement compared with the normal one, which usually. We were concerned about some best solutions in one iteration, not all solutions like others. 
We had used this instead of the vaporization mechanism when wrong solutions can not affect the pheromone matrix. The Cost and Pheromone matrix was designed to build a solution based on an ant's path to initialize every possible solution in search space. Moreover, we calculated the cost PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66364:2:0:NEW 6 Apr 2022)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science matrix developed on the influence of candidates chosen to one group quite like GA-1. To increase the diversity of GA, we had changed the whole chosen solution in the mutation step rather than changing some genes only. Besides, the solutions preserved from the selection step are also maintained from the mutation step, while other algorithms still have the rate at which these solutions mutate. CPLEX is not capable of handling the size of 500 candidates. Both solution quality and computation time of CPLEX show that it is comprehensively inferior to the proposed algorithms. We only use a single core to execute the algorithms. The executions can be speeded up using the parallel mechanism for search agents in both ACO and GA proposed by Ngo et al. <ns0:ref type='bibr' target='#b25'>[18]</ns0:ref>. As described in section 3.A, search agents use fitness values to assess the quality of solutions. The use of stochastic operations allows agents to explore the search space. The more population diversity is ensured, the better the population's ability to discover. However, this does not include a solution to enforce solutions that prevent violation of constraints. Unlike Ngo et al. <ns0:ref type='bibr' target='#b4'>[2]</ns0:ref>, we only punish the violated resolutions. We then use the self-improvement ability of the algorithm to eliminate these solutions instead of using the repair mechanism to sterilize the individuals. As long as we find a solution that does not violate the constraint at any generation, the algorithm ensures that the solution is valid. We display the number of violated solutions over generations for the best fitness values of the proposed algorithm with 500 candidates in Figure <ns0:ref type='figure' target='#fig_12'>11</ns0:ref>. It shows that the number of these solutions is decreased when new generations are generated. However, the mutation/random selection process still produces a certain number of these solutions. Figure <ns0:ref type='figure' target='#fig_13'>12</ns0:ref> shows the value of the fitness functions and the objective functions returned through iterations of the search processes for the best-obtained fitness values to the proposed algorithms. By observing the shape of the graphs of fitness functions produced by GA and ACO, we can see that the convergence of GA is slightly better than that of ACO. ACO's search agents need more time to complete an iteration, which leads to costlier total processing time. GA achieved the optimal solution at the 55th generation. Meanwhile, ACO took 101 iterations to achieve the same result. The objective function values do not always increase even though the fitness function values always decrease through the loops. This phenomenon is due to normalizing the range of the objective functions to [0,1]. The data distribution affects the reduction of values in the distance-based fitness function. If the standard deviation is significant, the objective value has a higher impact on the fitness value calculation when solutions are projected down to a specific objective function in the objective space. 
Manuscript to be reviewed Computer Science</ns0:p></ns0:div>
<ns0:div><ns0:head>3) Different Decision Scenarios</ns0:head><ns0:p>Approaches to the MOP problem based on the decomposition of multi-objective functions to a ranking function have many advantages. Compromise Programming is more suitable for the decision-maker who cannot indicate the preferences to trade-off the specific goals. The combination of compromise programming and evolutionary algorithms allows a seamless transition between model and algorithm design by using compromised-objective and fitness functions that are both distance-based. In contrast, it is challenging to find optimal solutions on the Pareto frontier as MOEA <ns0:ref type='bibr' target='#b30'>[23]</ns0:ref>. However, decision-makers use the weight parameters to manipulate the optimizer for different decision scenarios. We executed three algorithms several times using ten different sets of values of the weight parameters on the dataset of 300 contestants. MIQP-CPLEX does not generate any solutions on the system scale of 500 contestants; therefore, we only tested the proposed GA and ACO at this scale. The obtained solutions are displayed in Figure <ns0:ref type='figure' target='#fig_13'>12</ns0:ref> with corresponding values of the weight parameters. The left-hand side diagrams in the figure illustrate the solutions on the scale of 300 contestants, and the right-hand side shows the solutions of GA and ACO on the scale of 500 contestants. We can see that the user can direct the solver to generate the optimal solutions proportional to the corresponding parameter values. However, it isn't easy to estimate appropriate values for specific scenarios without relying on the experience of decision-makers. The obtained solutions that are shown in Figure <ns0:ref type='figure' target='#fig_14'>13</ns0:ref> may not totally dominate each other. Therefore, it is not easy to indicate how each algorithm can deal with different decision-making scenarios. To evaluate that we measure the hypervolume <ns0:ref type='bibr' target='#b31'>[24]</ns0:ref> covered by the obtained solutions. The greater 𝐻𝑉𝐶 indicates the algorithm can produce better Pareto frontier. The hypervolume can be computed 𝐻𝑉𝐶 as:</ns0:p><ns0:formula xml:id='formula_16'>𝐻𝑉𝐶 = 𝑣𝑜𝑙𝑢𝑚𝑒( ⋃ 𝑠 ∈ 𝑆 (𝑠,𝑧 𝑤𝑜𝑟𝑠𝑡 )) 𝑣𝑜𝑙𝑢𝑚𝑒(𝑐𝑢𝑏𝑒(𝑧 * ,𝑧 𝑤𝑜𝑟𝑠𝑡 ))</ns0:formula><ns0:p>where:</ns0:p></ns0:div>
<ns0:div><ns0:head></ns0:head><ns0:p>denotes the solution in the Pareto solution set that is generated by the algorithm. 𝑠 𝑆  denotes the oriented axes hypercube that is formulated by points and in the 𝑐𝑢𝑏𝑒(𝑎,𝑏) 𝑎 𝑏 objective space.</ns0:p><ns0:p> denotes the volume of the hypercube in the objective space.</ns0:p></ns0:div>
<ns0:div><ns0:head></ns0:head><ns0:p>To evaluate the capabilities of the proposed CP-based method, we used the genetic operations we designed to implement a version of the NSGA-2 algorithm <ns0:ref type='bibr' target='#b30'>[23]</ns0:ref>. The parameters used to execute the algorithm are shown in Table <ns0:ref type='table' target='#tab_9'>7</ns0:ref>. It offers the capability to search for a Pareto front with more than 500 solutions for both scales of 300 and 500 candidates after 2123.6 and 3021.3 seconds, respectively. We use a similar computational budget to execute the evaluated CP-based algorithms several times using different parameters. The hypervolume obtained from these solutions is displayed in Table <ns0:ref type='table'>8</ns0:ref>. The results show that the proposed evolutionary algorithms provide solutions superior to CPLEX in different decision-making scenarios. Although the solutions generated by the CP-based algorithms produce better hypervolume values than NSGA-2, it is hard to conclude that the CP-based method is better than MOEA. Still, one can say that the obtained Pareto frontier may contain lower-quality solutions than the proposed approach. During the Pareto frontier search, the searchers do not focus on achieving their goal as a single-objective optimization problem. This approach requires significant computational overhead and is therefore difficult to adapt to a real-world environment. In addition, users have no choice but to use only one solution in practice. Other factors in the decision problem, such as user experience, do not contribute to the effort to improve the solution quality.</ns0:p></ns0:div>
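Since the solutions returned for different weight vectors may not dominate each other, a small dominance filter (sketched below, assuming maximized objectives) can be used to keep only the mutually non-dominated solutions before measuring the hypervolume.

```python
import numpy as np

def pareto_front(points):
    """Keep only mutually non-dominated points (maximization assumed)."""
    points = np.asarray(points, dtype=float)
    keep = np.ones(len(points), dtype=bool)
    for i, p in enumerate(points):
        # p is dominated if some other point is >= in every objective
        # and strictly > in at least one objective.
        dominators = np.all(points >= p, axis=1) & np.any(points > p, axis=1)
        if dominators.any():
            keep[i] = False
    return points[keep]

print(pareto_front([[0.9, 0.4], [0.5, 0.8], [0.4, 0.3]]))
```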
<ns0:div><ns0:head>4) Metaheuristics vs. Exact Algorithm</ns0:head><ns0:p>CPLEX relaxes the original problem within bounds and solves the relaxation for mixed-integer linear programming (MILP) and mixed-integer quadratic programming (MIQP). When CPLEX tries to solve a nonconvex MIQP to global optimality, it is possible that a given relaxation of the original problem is not bounded. The proposed optimization problem is NP-hard. It can undoubtedly be solved in exponential time by branch and bound, and this may require exploring all possible permutations in the worst case <ns0:ref type='bibr' target='#b32'>[25]</ns0:ref>. Meanwhile, evolutionary algorithms (metaheuristics, in general) use stochastic operations to discover possible solutions: with luck, the best solution can be found very early (even in the first iteration), but in the worst case, the solutions found in the first and last iterations have the same quality. Therefore, determining the computational complexity of the two approaches from a theoretical perspective is a challenging problem and beyond the scope of this study <ns0:ref type='bibr' target='#b33'>[26,</ns0:ref><ns0:ref type='bibr' target='#b35'>27]</ns0:ref>. Instead of giving theoretical calculations, we evaluate the performance of these algorithms through statistics. Table <ns0:ref type='table' target='#tab_10'>9</ns0:ref> shows the results of the three algorithms, GA, ACO, and CPLEX, when solving the problem on 50 generated datasets. It is easy to see that the metaheuristic algorithms entirely outclass CPLEX in both performance and fitness. Out of the 50 datasets, there are only 4 on which CPLEX finds a solution of the same quality as ACO or GA; on the rest, its results are worse. The execution time of CPLEX is also many times longer: both GA and ACO almost always solve the problem in less than 1 minute, while CPLEX takes 3-5 minutes to execute. When comparing GA and ACO, ACO is superior when the dataset is distributed according to the Poisson and binomial distributions: in the 12 Poisson datasets, ACO gives better results on 8, and on the exponential distribution, ACO is better on 7 out of 7 datasets. In contrast, for the hypergeometric distribution, GA performed better than ACO on 8 of 9 runs. The results are not much different in terms of fitness, and in up to 18 of the runs executing both algorithms, the solution is the same. On the other hand, although the proposed algorithms use stochastic operations in their search process, both find their results in less than 60 seconds most of the time, with ACO running faster than GA in 26 of the 50 runs in total. The above results show that the proposed metaheuristic algorithms are more efficient than the tested exact method in terms of solution quality and processing time by using an effective heuristic mechanism.</ns0:p></ns0:div>
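As an illustration of how such benchmark instances could be generated, the sketch below draws candidate-skill rating matrices from the four distribution families used in the comparison; the distribution parameters, the 0-5 rating scale, and the matrix sizes are hypothetical, since the exact generator settings are not reported here.

```python
import numpy as np

rng = np.random.default_rng(42)
K, S = 300, 37  # illustrative numbers of candidates and skills

generators = {
    "poisson":        lambda: rng.poisson(lam=2.0, size=(K, S)),
    "binomial":       lambda: rng.binomial(n=5, p=0.4, size=(K, S)),
    "hypergeometric": lambda: rng.hypergeometric(ngood=10, nbad=15, nsample=5, size=(K, S)),
    "exponential":    lambda: rng.exponential(scale=1.5, size=(K, S)),
}

# Clip every matrix to a 0-5 rating scale before feeding it to the solvers.
datasets = {name: np.clip(gen(), 0, 5) for name, gen in generators.items()}
print({name: mat.shape for name, mat in datasets.items()})
```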
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>This study presents an adaptive method to solve the MOP-TS problem. A MOP model was proposed for a new variant of the TS problem that requires selecting multiple teams from a set of candidates. The proposed problem requires its optimizer to explore a larger space than the single TS introduced by Ngo et al. <ns0:ref type='bibr' target='#b4'>[2]</ns0:ref>. Non-trivial MOPs always need a trade-off between the objective functions in the decision-making process. In team selection problems, the decision-makers may not have enough suitable candidates for the teams as expected, which requires involving higher-level information to assign preferences to each goal. We use the compromise programming approach to solve this problem: the solver needs to find the solution closest to the pre-assigned compromise solution instead of solving the original MOP. The designed mathematical optimization models and evolutionary algorithms can be integrated using compromise programming, with the compromised-objective function serving as the evaluation function of the search agents. We developed GA and ACO algorithms to solve the proposed model. To evaluate the efficiency of the proposed algorithms, we conducted many experiments to assess their performance in different decision-making contexts. The results show that even though the algorithms are designed to select multiple groups from the candidates, the quality of the solution obtained when applied to a single objective is entirely equivalent to the previous study that aimed to solve the single team selection problem. Compared with the exact method, the proposed EA algorithms also show outstanding ability in dealing with large-scale systems in terms of solution quality, execution time, and efficiency in different decision-making scenarios. The CP-based approach is beneficial when the decision-maker cannot specify priorities in the decision-making process. Even with a lower computational cost than the MOEA approach, the decision-maker can execute the algorithm multiple times and determine the values of the weighting parameters based on their experience when deciding on the final solution. In different decision-making strategies, the decision-makers need to use their expertise to choose the values of the parameters that motivate the search agents. A limitation of the recommendation model is that it is concerned only with technical skills, not with team communication and other soft skills. Therefore, our future research surrounds the development of a generic model for MOP-TS. Improving the performance of the algorithms is also one of the priorities.</ns0:p>
<ns0:note type='other'>Figure 2</ns0:note>
<ns0:note type='other'>Figure 3</ns0:note>
<ns0:note type='other'>Figure 4</ns0:note>
<ns0:note type='other'>Figure 5</ns0:note>
<ns0:note type='other'>Figure 6</ns0:note>
<ns0:note type='other'>Figure 7</ns0:note>
<ns0:p>The candidates are projected to 2-dimensional space using t-SNE.</ns0:p>
<ns0:note type='other'>Figure 8</ns0:note>
<ns0:p>The query for multiple-team's selection with G=3.</ns0:p>
<ns0:note type='other'>Figure 9</ns0:note>
<ns0:note type='other'>Figure 10</ns0:note>
<ns0:note type='other'>Figure 11</ns0:note>
<ns0:p>The number of solutions that violate the constraints generated by A) GA; B) ACO.</ns0:p>
<ns0:note type='other'>Figure 12</ns0:note>
<ns0:note type='other'>Figure 13</ns0:note>
</ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>where the chromosome is represented like the decision variables but here we use list as 𝑋 𝑝 PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66364:2:0:NEW 6 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 : 3 :</ns0:head><ns0:label>23</ns0:label><ns0:figDesc>Figure 2: Basic flow of Genetic Algorithm Figure 3: Example of the solution in GA with G=2 candidates with id {1,2,3} selected for team 1, candidates with id {3,5} selected for team 2.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head> 1 .Figure 4</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Figure 4 illustrates an example of the three steps of the crossover phase.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 :</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: An example of the crossover phase</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5 :</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5: Basic flow of Ant Colony Optimization</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: Statistical numbers on 37 skills in the dataset of 500 programming contestants</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Each point represents a contestant in the original space. The locations of the points indicate that the class difference between the contestants is very distinct. This allocation affects the search results because some PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66364:2:0:NEW 6 Apr 2022) Manuscript to be reviewed Computer Science members often appear in most solutions because they dominate in several skills. A dataset of the normal distribution can have many Pareto solutions.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7: The candidates are projected to 2-dimensional space using t-SNE</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8: The query for multiple-team's selection with G=3.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 9 :</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9: Obtained results of 15 executions of the GA, ACO, Old GA and CPLEX on different scales of the tested dataset. A) and B) show Fitness values and Execution time on 300 candidates; C) and D) show Fitness values and Execution time on 500 candidates.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10: Obtained fitness values and time of computation by the algorithms on different system scales.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 11 :</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11: The number solutions that violate the constraints generated by A) GA; B) ACO.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 12 :</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12: returned values of different functions over iterations/generations on the tested dataset of 500 candidates; A) Fitness values; B) f_1^deep; C) f_1^wide; D) f_2^deep; E) f_2^wide; F) f_3^deep; G) f_3^wide</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_14'><ns0:head>Figure 13 :</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 13: 10 optimal solutions corresponding to different weight parameters for the datasets of 300 contestants and 500 contestants. A) Weight parameters values; Obtained solutions on dataset of 300 candidates by B)GA, D)ACO and F) CPLEX; Obtained solutions on dataset of 500 candidates by C)GA, E)ACO and F) CPLEX;</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_15'><ns0:head /><ns0:label /><ns0:figDesc>3 seconds. We use a similar cost to PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66364:2:0:NEW 6 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_16'><ns0:head /><ns0:label /><ns0:figDesc>times of PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66364:2:0:NEW 6 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_17'><ns0:head>Figure 1 Example</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_18'><ns0:head /><ns0:label /><ns0:figDesc>Basic flow of Genetic AlgorithmPeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66364:2:0:NEW 6 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_19'><ns0:head /><ns0:label /><ns0:figDesc>Example of the solution in GA with G=2 candidates with id {1,2,3} selected for team 1, candidates with id {3,5} selected for team 2PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66364:2:0:NEW 6 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_20'><ns0:head /><ns0:label /><ns0:figDesc>An example of the crossover phasePeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66364:2:0:NEW 6 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_21'><ns0:head /><ns0:label /><ns0:figDesc>Basic flow of Ant Colony OptimizationPeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66364:2:0:NEW 6 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_22'><ns0:head /><ns0:label /><ns0:figDesc>Statistical numbers on 37 skills in the dataset of 500 programming contestants PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66364:2:0:NEW 6 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_23'><ns0:head /><ns0:label /><ns0:figDesc>Obtained results of 15 executions of the GA, ACO, Old GA and CPLEX on different scales of the tested dataset. A) and B) show Fitness values and Execution time on 300 candidates; C) and D) show Fitness values and Execution time on 500 candidates. PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66364:2:0:NEW 6 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_24'><ns0:head /><ns0:label /><ns0:figDesc>Obtained fitness values and time of computation by the algorithms on different system scales.PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66364:2:0:NEW 6 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_25'><ns0:head /><ns0:label /><ns0:figDesc>returned values of different functions over iterations/generations on the tested dataset of 500 candidates; A) Fitness values; B) f_1^deep; C) f_1^wide; D) f_2^deep; E) f_2^wide; F) f_3^deep; G) f_3^wide PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66364:2:0:NEW 6 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_26'><ns0:head>1</ns0:head><ns0:label /><ns0:figDesc>10 optimal solutions corresponding to different weight parameters for the datasets of 300 contestants and 500 contestants. A) Weight parameters values; Obtained solutions on dataset of 300 candidates by B)GA, D)ACO and F) CPLEX; Obtained solutions on data PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66364:2:0:NEW 6 Apr 2022)</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='27,42.52,178.87,525.00,348.00' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='28,42.52,178.87,525.00,132.75' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='31,42.52,178.87,525.00,218.25' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 : Different approaches to MOP.</ns0:head><ns0:label>1</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>There are several variables are used for the model as following:</ns0:figDesc><ns0:table><ns0:row><ns0:cell></ns0:cell><ns0:cell>𝐾</ns0:cell><ns0:cell cols='4'>is the number of candidates.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell></ns0:cell><ns0:cell>𝐺</ns0:cell><ns0:cell cols='5'>denotes the number of groups.</ns0:cell></ns0:row><ns0:row><ns0:cell></ns0:cell><ns0:cell>𝑆</ns0:cell><ns0:cell cols='6'>represents the number of skills in the skillset.</ns0:cell></ns0:row><ns0:row><ns0:cell></ns0:cell><ns0:cell cols='2'>𝑍 𝑔</ns0:cell><ns0:cell cols='5'>stands for the number of members in group</ns0:cell><ns0:cell>. 𝑔 𝑡ℎ ∀ 𝑔 = 1,…,𝐺</ns0:cell></ns0:row><ns0:row><ns0:cell cols='5'> The decision variables</ns0:cell><ns0:cell cols='3'>𝑋 ∈ ℝ 𝐾 × 𝐺 = {𝑥 𝑘,𝑔 |𝑥 𝑘,𝑔 ∈ (0,1);𝑘 = 1…𝐾, 𝑔 = 1…𝐺}</ns0:cell><ns0:cell>where:</ns0:cell></ns0:row><ns0:row><ns0:cell cols='8'> 𝑥 𝑘,𝑔 = { 1 𝑖𝑓 𝑚𝑒𝑚𝑏𝑒𝑟 𝑘 𝑡ℎ 𝑠𝑒𝑙𝑒𝑐𝑡𝑒𝑑 𝑡𝑜 𝑔𝑟𝑜𝑢𝑝 𝑔 𝑡ℎ 0 𝑜𝑟𝑡ℎ𝑒𝑟𝑤𝑖𝑠𝑒</ns0:cell></ns0:row><ns0:row><ns0:cell></ns0:cell><ns0:cell cols='7'>𝑁 𝑔,𝑠 = { 1 𝑖𝑓 𝑠𝑘𝑖𝑙𝑙 𝑠 𝑡ℎ 𝑖𝑠 𝑟𝑒𝑞𝑢𝑖𝑟𝑒𝑑 𝑡𝑜 𝑔𝑟𝑜𝑢𝑝 𝑔 𝑡ℎ 0 𝑜𝑟𝑡ℎ𝑒𝑟𝑤𝑖𝑠𝑒</ns0:cell><ns0:cell>. ∀ 𝑠 = 1,…,𝑆;𝑔 = 1,…,𝐺</ns0:cell></ns0:row><ns0:row><ns0:cell></ns0:cell><ns0:cell cols='3'>𝑉 𝑘,𝑠</ns0:cell><ns0:cell cols='2'>is the rating score for skill</ns0:cell><ns0:cell>𝑠 𝑡ℎ</ns0:cell><ns0:cell>of the candidate</ns0:cell><ns0:cell>𝑖 𝑡ℎ ∀ 𝑠 = 1,…,𝑆;𝑘 = 1,…,𝐾</ns0:cell><ns0:cell>.</ns0:cell></ns0:row><ns0:row><ns0:cell></ns0:cell><ns0:cell cols='3'>𝐿 𝑔,𝑠</ns0:cell><ns0:cell cols='4'>is the minimum required score for skill</ns0:cell><ns0:cell>𝑠 𝑡ℎ</ns0:cell><ns0:cell>for group</ns0:cell><ns0:cell>𝑔 𝑡ℎ</ns0:cell><ns0:cell>.</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc>𝑉 2,𝑠 …,𝑉 𝐾,𝑠 )</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>𝐺}</ns0:cell></ns0:row><ns0:row><ns0:cell>and</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell> (𝑉 1,𝑠 ,  is sorted vector of 𝑅 𝑠 is sorted vector of 𝑃 𝑠</ns0:cell><ns0:cell>by descending order. by ascending order.</ns0:cell></ns0:row></ns0:table><ns0:note>(𝑉 1,𝑠 , 𝑉 2,𝑠 …,𝑉 𝐾,𝑠 )</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head /><ns0:label /><ns0:figDesc>Metaheuristics operations are stochastic. It is difficult to determine the exact complexity of these algorithms. Although We can calculate the cost of each iteration as shown in Table2. However, it is difficult to predict in advance the number of iterations the algorithm will need to perform.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Where</ns0:cell></ns0:row><ns0:row><ns0:cell>GA is</ns0:cell><ns0:cell cols='3'>𝑂(𝐾 * 𝐻 * 𝑆)</ns0:cell><ns0:cell cols='2'>, ACO is</ns0:cell><ns0:cell>𝑂(𝐾 * 𝐻 * 𝑆 * 𝐺)</ns0:cell><ns0:cell>. But the number of Iterations is not predictable. Thus,</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>if we assign</ns0:cell><ns0:cell>𝑁 𝐺𝐴</ns0:cell><ns0:cell cols='2'>and</ns0:cell><ns0:cell>𝑁 𝐴𝐶𝑂</ns0:cell><ns0:cell>as the ended number of iterations for each algorithm, we can say that</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'>complexity of GA and ACO are respectively</ns0:cell><ns0:cell>𝑂(𝑁 𝐺𝐴 * 𝐾 * 𝐻 * 𝑆)</ns0:cell><ns0:cell>and</ns0:cell><ns0:cell>𝑂(𝑁 𝐴𝐶𝑂 * 𝐾 * 𝐻 * 𝐺 * 𝑆)</ns0:cell><ns0:cell>.</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>𝑃</ns0:cell><ns0:cell>𝜋</ns0:cell></ns0:row><ns0:row><ns0:cell cols='6'> Update trail and compute fitness: the population is constructed as: 𝑃</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='5'>𝑟 𝑧 = 𝑟𝑎𝑛𝑑(𝑣 ' 𝑔 ), ∀ 𝑧 = 1…𝑍 𝑔</ns0:cell><ns0:cell>are selected candidate to the group</ns0:cell><ns0:cell>𝑔 𝑡ℎ ,∀ 𝑔 = 1…G</ns0:cell><ns0:cell>of</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='5'>the ant , 𝑝 ∀ 𝑝 ∈ 𝑃 , where:</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='4'>---𝑣 ' 𝑟𝑎𝑛𝑑 denotes vector column represents the cumulative distribution function. of matrix . 𝑔 𝑡ℎ 𝑇 𝑣 𝑔 𝑔 = { 𝑣 𝑔,1 ∑ 𝐾 𝑖 = 1 𝑣 𝑔,𝑘 , 𝑣 𝑔,2 ∑ 𝐾 𝑖 = 1 𝑣 𝑔,𝑘 𝑣 𝑔,𝐾 ,…, ∑ 𝐾 𝑖 = 1 𝑣 𝑔,𝑘</ns0:cell></ns0:row></ns0:table><ns0:note>1,𝑁 𝑔,𝑠 * 𝑉 𝑘,𝑠 ))  Generate ant colony: we create new population is the set of artificial ants.}  Update pheromone matrix: Select top of the best solution from to list then the is 𝛷 𝑃 𝐸 𝑇 updated as following: PeerJ Comput. Sci. reviewing PDF | (CS-2021:10:66364:2:0:NEW 6 Apr 2022) Manuscript to be reviewed Computer Science 𝑇 𝑝 𝑧,𝑔 ,𝑔 = 1 𝐶 𝑝 𝑧,𝑔 ,𝑔 ∀ 𝑝 ∈ 𝐸,𝑔 = 1…𝐺,𝑧 = 1…𝑍 𝑔  To Stop algorithm, we also defined a condition that if after generations the best fitness 𝛼 value of the population did not change. Otherwise, comeback the step of generating new ant colony.C. Computational Complexity</ns0:note></ns0:figure>
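As a rough illustration of the construction and elite-only pheromone update outlined above, the fragment below selects group members by roulette wheel over a per-group desirability vector and deposits pheromone proportional to 1/cost for the best solutions only; the product form of the desirability and every parameter value are assumptions made for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_group(pheromone_g, heuristic_g, group_size, alpha=1.0, beta=1.0):
    """Roulette-wheel selection of group members for one group g."""
    desirability = (pheromone_g ** alpha) * (heuristic_g ** beta)
    probs = desirability / desirability.sum()
    return rng.choice(len(pheromone_g), size=group_size, replace=False, p=probs)

def deposit_pheromone(pheromone_g, elite_groups, elite_costs):
    """Elite-only update: members of the best solutions receive 1 / cost."""
    for members, cost in zip(elite_groups, elite_costs):
        pheromone_g[members] = 1.0 / cost
    return pheromone_g

# Toy example with 10 candidates for a single group of size 3.
pher = np.ones(10)
heur = rng.uniform(0.1, 1.0, size=10)
team = build_group(pher, heur, group_size=3)
pher = deposit_pheromone(pher, [team], [5.0])
print(team, pher)
```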
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 2 : Computational Complexity of each iteration in GA and ACO algorithms.</ns0:head><ns0:label>2</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3 : System configurations for experiments.</ns0:head><ns0:label>3</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 4 : Parameters to conduct the experiments.</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 6 : The ideal points and worst points in different scales of the system.</ns0:head><ns0:label>6</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>It requires the use of</ns0:cell><ns0:cell>𝑧 *</ns0:cell><ns0:cell>and</ns0:cell><ns0:cell>𝑧 𝑤𝑜𝑟𝑠𝑡</ns0:cell><ns0:cell>. The values of</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 7 : Parameters to execute and obtained results of the NSGA-2. Table 8: Hypervolume obtained by CP-based algorithms and NSGA-2.</ns0:head><ns0:label>7</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 9 : Obtained Results from 50 randomized generated datasets on different distribution by GA, ACO, and CPLEX.</ns0:head><ns0:label>9</ns0:label><ns0:figDesc /><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 7 (on next page)</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Parameters to execute and obtained results of the NSGA-2</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Parameter</ns0:cell><ns0:cell>Value</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of Generations</ns0:cell><ns0:cell>1000</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of Populations</ns0:cell><ns0:cell>1000</ns0:cell></ns0:row><ns0:row><ns0:cell>Selection Rate</ns0:cell><ns0:cell>0.1</ns0:cell></ns0:row><ns0:row><ns0:cell>Crossover Rate</ns0:cell><ns0:cell>0.9</ns0:cell></ns0:row><ns0:row><ns0:cell>Mutation Rate</ns0:cell><ns0:cell>0.1</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Cover Letter
We are grateful to the editor and reviewers for their valuable comments and suggestions. All raised questions have been carefully considered, and step-by-step improvements have been made throughout the manuscript in light of the reviewers' comments. Major rework has been done to improve the whole manuscript and bring more clarity to the research work.
The authors would like to thank the anonymous reviewer for their effort in reviewing the manuscript and for their valuable and constructive comments and fruitful observations that helped in improving the quality of the manuscript to a publishable standard. Detailed below are some responses to the reviewers’ comments and suggestions. Reviewers’ questions and comments are in BLACK and the authors’ answers and comments are in RED.
Reviewer 1
Basic reporting
The manuscript under the title ' Some Metaheuristic Algorithms for Multiple Cross-Functional Teams Selection Problem' seems a mature contribution. However, some minor concerns still exist in revised version. Furthermore, grammatical ambiguities exist in the revised version.
Experimental design
1. In Introduction, the background section requires more previous contributions that will be help for readers to understand the domain. Moreover, highlighted the existing problems in the literature is not. satisfactory
We have edited the related research section to highlight the research problems.
2. Authors must write the proposed algorithm in standard logical form and design the flow diagram for actual execution of the proposed algorithm.
We have revised the algorithm descriptions. We based them on the flow chart of each algorithm and described each component in the diagram.
3. The Claim of least computational cost is not properly incorporated in revised version . Authors must perform experiments for supporting this claim
In the process of learning about computational complexity/cost for metaheuristic algorithms, we found ourselves using inexact terminology. You are right. What we show in the experiments is execution time and CPU time, which are commonly used for algorithms based on stochastic operations. We have corrected all mentions in the manuscript.
4. In context of computational cost, authors must define the cost factor in terms of time of execution and space consumed by executed part. Moreover, authors needs to calculate the BIG O notion if authors claim the time factor in computation cost
We added a big-O analysis for each designed algorithm and stated that the operations of metaheuristic algorithms are stochastic. Their cost depends on the parameters, the implementation, and the application. Therefore, using execution time to evaluate performance is more common.
Validity of the findings
1.The validation of results is not satisfactory , even no single pictorial representation of the results, authors need to improve the presentation of results section.
We edited the results section to visualize some of the results as figures.
2. The state of the art comparison still confusing , authors need to improve the state of the art comparison portion.
We have added some comparisons of the approaches to MOP, as well as highlighted the research problems.
Additional comments
Authors need to improve the manuscript by fixing the above concerns
Reviewer 2
Basic reporting
The authors revised all comments well. I think that the work can be published.
Experimental design
All experiments are conducted well.
Validity of the findings
no comment
Thank you
" | Here is a paper. Please give your review comments after reading it. |
727 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Causal graph extraction from news has the potential to aid in the understanding of complex scenarios. In particular, it can help explain and predict events, as well as conjecture about possible cause-effect connections. However, limited work has addressed the problem of large-scale extraction of causal graphs from news articles. This article presents a novel framework for extracting causal graphs from digital text media. The framework relies on topic-relevant variables representing terms and ongoing events that are selected from a domain under analysis by applying specially developed information retrieval and natural language processing methods. Events are represented as eventphrase embeddings, which make it possible to group similar events into semantically cohesive clusters. A time series of the selected variables is given as input to a causal structure learning techniques to learn a causal graph associated with the topic that is being examined. The complete framework is applied to the New York Times dataset, which covers news for a period of 246 months (roughly 20 years), and is illustrated through a case study. An initial evaluation based on synthetic data is carried out to gain insight into the most effective time-series causality learning techniques. This evaluation comprises a systematic analysis of nine state-of-the-art causal structure learning techniques and two novel ensemble methods derived from the most effective techniques. Subsequently, the complete framework based on the most promising causal structure learning technique is evaluated with domain experts in a real-world scenario through the use of the presented case study. The proposed analysis offers valuable insights into the problems of identifying topic-relevant variables from large volumes of news and learning causal graphs from time series.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Causal modeling aims to determine the cause-effect relations among a set of variables. A variable is the basic building block of causal models and represents a property or descriptor that can take multiple values <ns0:ref type='bibr' target='#b9'>(Glymour et al., 2016)</ns0:ref>. The extraction of variables and their causal relations from news has the potential to aid in the understanding of complex scenarios. In particular, it can help explain and predict events, as well as conjecture about possible causality associations. Although the problem of causal modeling has attracted increasing attention in the Computer Science discipline <ns0:ref type='bibr' target='#b27'>(Pearl, 2009;</ns0:ref><ns0:ref type='bibr' target='#b1'>Bareinboim and Pearl, 2015;</ns0:ref><ns0:ref type='bibr' target='#b28'>Peters et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b23'>Meinshausen et al., 2020)</ns0:ref>, limited work has been devoted to the problem of large-scale extraction of causal graphs from news articles. Causality can provide tools to better understand machine learning models and their applicability. However, black-box predictive models have typically dominated machine learning-based decision making, with a lack of understanding of cause-effect connections <ns0:ref type='bibr' target='#b30'>(Rudin and Radin, 2019)</ns0:ref>. On the other hand, causality has been central to Econometrics, PeerJ Comput. Sci. reviewing PDF | (CS-2022:05:73895:1:1:NEW 30 Jun 2022)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>where most methods rely either on the analysis of structural models <ns0:ref type='bibr' target='#b11'>(Heckman and Vytlacil, 2007)</ns0:ref> or on the application of Granger's idea of causation based on determining whether past values of a time series provide unique information to forecast future values of another <ns0:ref type='bibr' target='#b10'>(Granger, 1969)</ns0:ref>.</ns0:p><ns0:p>To discover causal relations, interventions and manipulations are typically necessary as part of a randomized experiment. However, undertaking such experiments is usually impractical or even impossible.</ns0:p><ns0:p>As a consequence, to address these limitations, many methods for causal discovery usually rely on observational data only and a set of (strong) assumptions. The relatively recent availability of large volumes of data compensates to a certain degree for the infeasibility of experimentation, offering an opportunity to collect and exploit observational data for causal modeling and analysis <ns0:ref type='bibr' target='#b47'>(Varian, 2014)</ns0:ref>.</ns0:p><ns0:p>Causality extraction from text has been previously explored mostly as a relation extraction problem, which can be addressed as a specific information extraction task. Existing approaches typically rely on the use of lexico-syntactic patterns <ns0:ref type='bibr' target='#b14'>(Joshi et al., 2010)</ns0:ref>, supervised learning <ns0:ref type='bibr' target='#b16'>(Khetan et al., 2022)</ns0:ref>, and bootstrapping <ns0:ref type='bibr' target='#b12'>(Heindorf et al., 2020)</ns0:ref>. These approaches apply local analysis methods to extract explicit causal relations from text by adopting an intra-or inter-sentence scheme. However, these methods are unable to detect implicit causal relations that can be inferred from the analysis of time series data built from sentences coming from several documents. Also, due to the limited availability of ground truth for causal discovery, few studies have been carried out in the context of a real-world application. Finally, another limitation of previous approaches to causal extraction from text is the absence of a clear semantics associated with the variables that represent the cause-effect relations. In other words, variables are usually terms identified in text, with no distinction between general terms and variables built from event mentions.</ns0:p><ns0:p>The work presented in this article attempts to overcome these limitations. It proposes a methodology for causal graph extraction from news and presents comparative studies that allow to address the following research questions:</ns0:p><ns0:p>• RQ1. What state-of-the-art methods for time-series causality learning are effective in generalized synthetic data?</ns0:p><ns0:p>• RQ2. Which of the most promising methods for time-series causality learning identified through RQ1 are also effective in real-world data extracted from news?</ns0:p><ns0:p>• RQ3. What type of variables extracted from a large corpus of news is effective for building interpretable causal graphs on a topic under analysis?</ns0:p><ns0:p>The proposed approach combines methods coming from information retrieval, natural language processing, machine learning, and Econometrics into a framework that extracts variables from large volumes of text to build highly interpretable causal models. The extracted variables represent terms (unigrams, bigrams and trigrams) and ongoing event clusters. 
The terms are selected from topic-relevant sentences using a supervised term-weighting scheme proposed and evaluated in our previous work <ns0:ref type='bibr' target='#b22'>(Maisonnave et al., 2019</ns0:ref><ns0:ref type='bibr' target='#b21'>(Maisonnave et al., , 2021b))</ns0:ref>. In the meantime, the ongoing event clusters are computed by clustering event phrase embeddings, where the task of detecting ongoing events is defined and evaluated by the authors in <ns0:ref type='bibr' target='#b20'>(Maisonnave et al., 2021a)</ns0:ref>. A time series of the selected variables is used to learn a causal graph associated with the topic that is being examined. The framework is applied to a case study using real-world data extracted from a 246-month period (roughly 20 years) of news from the New York Times (NYT) corpus <ns0:ref type='bibr' target='#b36'>(Sandhaus, 2008</ns0:ref>).</ns0:p><ns0:p>To answer research question RQ1 an evaluation based on synthetic data from TETRAD <ns0:ref type='bibr' target='#b37'>Scheines et al. (1998)</ns0:ref> and CauseMe <ns0:ref type='bibr'>Runge et al. (2019a)</ns0:ref> is carried out to gain insight into the selection of time-series causality learning techniques. This evaluation comprises a systematic analysis of nine state-of-the-art causal structure learning techniques and two ensemble methods derived from the most effective techniques.</ns0:p><ns0:p>To address RQ2 the proposed framework applies the most promising methods identified through RQ1 to extract causal relations from news on a topic under analysis. Then, a comparative study of the candidate causal learning methods is conducted based on assessments provided by domain experts. Finally, to answer RQ3 the two types of variables extracted by the framework are analyzed, namely general terms (unigrams, bigrams, and trigrams) and ongoing event clusters. Then, an evaluation of causal relations containing each type of variables is performed based on assessments derived from experts. This allows investigating whether there tends to be more agreement among experts when the variables representing potential causes and effects are of a specific type. It also allows determining if the evaluated causal extraction methods are more effective if the analysis is restricted to a certain type of variables.</ns0:p><ns0:p>Overall, the contributions of this work can be summarized as follows:</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>• A framework that combines term selection and event detection to build a time series that is used to learn causal models from large volumes of text. The framework introduces a novel method for building event-phrase embeddings, which groups events extracted from news into semantically cohesive event clusters.</ns0:p><ns0:p>• An extensive evaluation on synthetic data of nine state-of-the-art causal structure learning techniques and two novel ensemble techniques derived from the most effective ones.</ns0:p><ns0:p>• The application of the presented framework to a case study that allows to illustrate the proposed causal graph extraction methodology and to further evaluate the analyzed causality learning techniques in a real-world scenario. Also, as a byproduct of the evaluation, we offer a dataset consisting of domain expert causality assessments on pairs of variables extracted from real-world data.</ns0:p><ns0:p>The data and full code of the methods used by the framework and to carry out the experiments are made available to allow reproducibility.</ns0:p></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>In Computer Science, approaches to causality have been mostly centered around probabilistic graphical models <ns0:ref type='bibr' target='#b17'>(Koller and Friedman, 2009)</ns0:ref>, which are graphical representations of data and their dependency relationships. Bayesian networks <ns0:ref type='bibr' target='#b27'>(Pearl, 2009)</ns0:ref> are a kind of probabilistic graphical model used for causal inference by capturing both conditionally dependent and conditionally independent relations between random variables by means of a directed acyclic graph (DAG). Meanwhile, the study of the concept of causality is a central and long-standing issue in the field of Econometrics, where it has been addressed mainly by methods derived either from the analysis of structural models <ns0:ref type='bibr' target='#b11'>(Heckman and Vytlacil, 2007)</ns0:ref> or the application of the Granger Causality test <ns0:ref type='bibr' target='#b10'>(Granger, 1969)</ns0:ref>. Both approaches are based on two principles: (1) a cause precedes the effect, and (2) the cause produces unique changes in the effect, so past values of the cause help predict future values of the effect. In the case of causal structure models, different techniques have been developed, which are typically classified into three main categories, namely (1) independence-based causal structure learning <ns0:ref type='bibr' target='#b45'>(Spirtes and Glymour, 1991;</ns0:ref><ns0:ref type='bibr'>Runge et al., 2019b), (2)</ns0:ref> restricted structural causal models <ns0:ref type='bibr' target='#b40'>(Shimizu et al., 2006</ns0:ref><ns0:ref type='bibr' target='#b41'>(Shimizu et al., , 2011))</ns0:ref>, and (3) autoregressive models <ns0:ref type='bibr' target='#b10'>(Granger, 1969;</ns0:ref><ns0:ref type='bibr' target='#b38'>Schreiber, 2000;</ns0:ref><ns0:ref type='bibr' target='#b44'>Sims, 1980;</ns0:ref><ns0:ref type='bibr' target='#b26'>Nicholson et al., 2017;</ns0:ref><ns0:ref type='bibr' target='#b4'>Chiquet et al., 2008)</ns0:ref>. Autoregressive models are defined exclusively for time series, where lagged variables (i.e., dependent variables that are lagged in time) play a key role. It is worth mentioning that if we combine time-lagged and non-time-lagged variables, the autoregressive approach can be seen also as an independence-based approach with respect to the non-time-lagged variables.</ns0:p><ns0:p>Several previous works have addressed the problem of causal structure learning from text. <ns0:ref type='bibr' target='#b43'>Silverstein et al. (2000)</ns0:ref> propose a series of algorithms that combine different heuristics to identify causal relationships from heterogeneous datasets. In particular, the algorithms were run on large volumes of text from news that cover different topics and have demonstrated to have the capability of efficiently returning a number of causal relationships and not-directly-causal relationships. Another approach for extracting causal relations from text is presented by <ns0:ref type='bibr' target='#b8'>Girju and Moldovan (2002)</ns0:ref>, where a semi-automatic method is proposed to identify lexico-syntactic patterns referring to causation. A system for acquiring causal knowledge from text is proposed by <ns0:ref type='bibr' target='#b35'>Sanchez-Graillet and Poesio (2004)</ns0:ref>. The system identifies sentences that specify causal relations and builds Bayesian networks by extracting causal patterns from the sentences. 
<ns0:ref type='bibr' target='#b5'>Dehkharghani et al. (2014)</ns0:ref> propose a method for causal rule discovery that combines sentiment analysis and association rule mining. Another work proposed in <ns0:ref type='bibr' target='#b12'>(Heindorf et al., 2020)</ns0:ref> extracts claimed causal relations from the Web to induce the CauseNet causality graph, containing approximately 200,000 relations. The above works take linguistic or data mining approaches, where certain syntactic regularities that are manually crafted or automatically generated using machine learning techniques allow to detect pairs of terms potentially related by a causal relation. A recent survey that reviews techniques for the extraction of explicit and implicit inter-and intra-sentential causality from natural language text is presented by <ns0:ref type='bibr' target='#b48'>Yang et al. (2021)</ns0:ref>.</ns0:p><ns0:p>The framework described in this article is closely related to the one presented by <ns0:ref type='bibr' target='#b29'>Radinsky et al. (2012)</ns0:ref>, where semantic natural language modeling, machine learning, and data mining techniques are applied to 150 years of news articles to identify causal predictors of events. However, different from the approach described in this article, their focus is not the identification and extraction of causality but the Manuscript to be reviewed Computer Science prediction of future events caused by a given event. Also, rather than relying on a machine learning-driven natural language processing approach to detect events, they apply a semantic approach aimed at extracting canonic representations of events that rely on world knowledge ontologies mined from Linked Data <ns0:ref type='bibr' target='#b2'>(Bizer et al., 2011)</ns0:ref>. Another related work is presented by <ns0:ref type='bibr' target='#b0'>Balashankar et al. (2019)</ns0:ref>, where the authors describe a framework that allows to build a predictive causal graph by measuring how the occurrence of a word in the news influences the occurrence of other words in the future. Here, we extend the application of causal structure learning techniques to uncover relations among variables representing terms and events extracted from digital media, aiming to detect a network of causal links among these variables.</ns0:p></ns0:div>
<ns0:div><ns0:head>AN OVERVIEW OF CAUSAL STRUCTURE LEARNING</ns0:head><ns0:p>Causal learning is the process of inferring a causal model from data <ns0:ref type='bibr' target='#b28'>(Peters et al., 2017)</ns0:ref> while causal structure learning is the process of learning the causal graph or certain aspects of it <ns0:ref type='bibr' target='#b13'>(Heinze-Deml et al., 2018)</ns0:ref>. In this work, we address the causal structure learning problem, where data are presented as a time series of variables that stand for terms or events. A variety of techniques have been proposed in the literature to address causal structure learning. This section outlines and evaluates nine state-of-the-art and two ensemble techniques for causal structure learning from time series of independent and identically distributed random variables. The goal of this analysis is to identify the most promising techniques with the purpose of incorporating them into the proposed causal learning framework.</ns0:p><ns0:p>The analyzed techniques for causal structure learning are classified into three main categories, namely</ns0:p><ns0:p>(1) independence-based causal structure learning, (2) restricted structural causal models, and (3) autoregressive models. A general overview of the analyzed causal structure learning techniques is presented in Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. Independence-based causal structure learning relies on two main assumptions: the Markov property for directed graphs and faithfulness <ns0:ref type='bibr' target='#b17'>(Koller and Friedman, 2009)</ns0:ref>. These assumptions allow estimating the Markov equivalence classes of the DAG from the observational data. All DAGs in an equivalence class have the same skeleton (i.e., the causal graph with undirected edges) and the same v-structures (i.e., the same induced subgraphs of the form X → Y ← Z). However, the direction of some edges may not be determined. Since this work analyzes non-contemporary causalities, the direction of time can be used to determine the direction of the edges that remain undirected. That is, since the cause has to happen before the effect, it is known that the arrows cannot go back in time. Using this criterion, a fully directed graph is obtained from the independence-based techniques used. For the present work, we consider PC <ns0:ref type='bibr' target='#b45'>(Spirtes and Glymour, 1991)</ns0:ref> and PCMCI <ns0:ref type='bibr' target='#b34'>(Runge et al., 2019b)</ns0:ref> The techniques based on restricted structural causal models incorporate additional assumptions to obtain identifiability. For instance, in the case of non-Gaussian linear models (LiNGAM), it is possible to analyze the asymmetry between cause and effect to distinguish cause from effect. This is possible because the regression residuals are independent of the predictor only for the correct causal direction. This work analyzes two techniques based on restricted structural models, namely ICA-LiNGAM <ns0:ref type='bibr' target='#b40'>(Shimizu et al., 2006)</ns0:ref> and Direct-LiNGAM <ns0:ref type='bibr' target='#b41'>(Shimizu et al., 2011)</ns0:ref>.</ns0:p><ns0:p>The techniques based on autoregressive models are defined exclusively for time series and are based on determining whether past values of a variable X offer unique information (i.e. not provided by other variables) to predict or explain future values of another variable Y . If this is the case, it is possible to hypothesize that X → Y . 
This idea gives rise to the statistical concept of causality known as Granger Causality <ns0:ref type='bibr' target='#b10'>(Granger, 1969)</ns0:ref>. This work analyzes five techniques for inferring causal structures in time series based on these principles: (1) Lasso-Granger <ns0:ref type='bibr' target='#b10'>(Granger, 1969)</ns0:ref>, (2) Transfer Entropy <ns0:ref type='bibr' target='#b38'>(Schreiber, 2000)</ns0:ref>, ( <ns0:ref type='formula'>3</ns0:ref>) VAR <ns0:ref type='bibr' target='#b44'>(Sims, 1980)</ns0:ref>, ( <ns0:ref type='formula'>4</ns0:ref>) BigVAR <ns0:ref type='bibr' target='#b26'>(Nicholson et al., 2017)</ns0:ref>, and ( <ns0:ref type='formula'>5</ns0:ref>) SIMoNe <ns0:ref type='bibr' target='#b4'>(Chiquet et al., 2008)</ns0:ref>.</ns0:p><ns0:p>Two ensemble techniques are also implemented by combining the four most effective state-of-the-art causal structure learning techniques (Direct-LiNGAM, PCMCI, VAR, and PC) based on the evaluations carried out on synthetic data (to be presented in the next section). The first ensemble technique, referred to as ensemble ∩ , adds a causal relation only when the four best techniques agree on including it. On the other hand, the second ensemble technique, called ensemble ∪ , adds a causal relation when any of the four techniques includes it. Finally, for the sake of comparison, a baseline model, referred to as Random is also considered. The Random technique decides on a random basis with probability 0.5 whether to add or not each potential edge to the graph.</ns0:p></ns0:div>
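A minimal sketch of the two ensemble rules is shown below; it assumes that each causal structure learning technique returns its result as a binary adjacency matrix, which is an implementation choice made only for this illustration.

```python
import numpy as np

def ensemble(adjacencies, mode="intersection"):
    """Combine causal graphs produced by several techniques.

    mode="intersection" keeps an edge only when all techniques agree (ensemble_∩);
    mode="union" keeps an edge when any technique reports it (ensemble_∪).
    """
    stacked = np.stack([np.asarray(a, dtype=bool) for a in adjacencies])
    return stacked.all(axis=0) if mode == "intersection" else stacked.any(axis=0)

# Toy example with three 2-variable graphs (entry [i, j] = 1 means i -> j).
g1 = [[0, 1], [0, 0]]
g2 = [[0, 1], [1, 0]]
g3 = [[0, 1], [0, 0]]
print(ensemble([g1, g2, g3], "intersection").astype(int))
print(ensemble([g1, g2, g3], "union").astype(int))
```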
<ns0:div><ns0:head>A FRAMEWORK FOR CAUSAL LEARNING FROM NEWS</ns0:head><ns0:p>This section describes the proposed framework, which provides support to experts while trying to analyze a specific topic by semi-automatically identifying relevant variables associated with a given topic and suggesting potential causal relations among these variables to build a causal graph. A diagram of the framework for building a causal graph from digital text media is presented in Figure <ns0:ref type='figure' target='#fig_4'>2</ns0:ref>. The framework completes the following steps:</ns0:p><ns0:p>• Step 1. Filter topic-relevant sentences from a collection of news. Given a topic description the framework selects those sentences that match the topic. A topic can be represented in a variety of ways. Examples of simple topic representations are n-grams or sets of n-grams. However, more complex schemes for representing topics can be naturally adopted by the framework, including machine-centered representations, such as vector space models or multimodal mixture models, and human-centered representations, such as concept maps.</ns0:p><ns0:p>• Step 2. Select terms from the topic-relevant sentences. The selection of relevant terms (unigrams, bigrams, and trigrams) from the given sentences relies on FDD β , a supervised term-weighting scheme proposed and evaluated by the authors <ns0:ref type='bibr' target='#b22'>(Maisonnave et al., 2019</ns0:ref><ns0:ref type='bibr' target='#b21'>(Maisonnave et al., , 2021b))</ns0:ref>. FDD β weights terms based on two relevancy scores. The first score is referred to as descriptive relevance (DESCR) and represents the importance of a term to describe the topic. Given a term t i and a topic T k the DESCR score is expressed as:</ns0:p><ns0:formula xml:id='formula_0'>DESCR(t i , T k ) = |d j : t i ∈ d j ' d j ∈ T k | |d j : d j ∈ T k | .</ns0:formula><ns0:p>In the above formula t i ∈ d j stands for the term t i occurring in the document d j , while d j ∈ T k stands for the document d j being relevant to the topic T k . The second score represents the discriminative Manuscript to be reviewed</ns0:p><ns0:p>Computer Science relevance (DISCR). This score is global to the collection and is computed for a term t i and a topic T k as follows:</ns0:p><ns0:formula xml:id='formula_1'>DISCR(t i , T k ) = |d j : t i ∈ d j ' d j ∈ T k | |d j : t i ∈ d j | .</ns0:formula><ns0:p>The FDD β score combines the DESCR and DISCR scores as follows:</ns0:p><ns0:formula xml:id='formula_2'>FDD β (t i , T k ) = (1 + β 2 ) DISCR(t i , T k ) × DESCR(t i , T k ) (β 2 × DISCR(t i , T k )) + DESCR(t i , T k ) .</ns0:formula><ns0:p>The tunable parameter β is a positive real factor that offers a means to favor descriptive relevance over discriminative relevance (by using a β value higher than 1) or the other way around (by using a β value smaller than 1). Human-subject studies reported by <ns0:ref type='bibr' target='#b22'>Maisonnave et al. (2019)</ns0:ref> indicate that a β = 0.477 offers a good balance between descriptive and discriminative power, with a Pearson correlation of 0.798 between relevance values assigned by domain experts and those assigned by FDD β . Note that FDD β is derived from the F β formula, known as F-score or F-measure, traditionally used in information retrieval, where β is chosen such that β > 1 assigns more weight to recall, while β < 1 favors precision. 
While we adopt F β as the term-weighting scheme in our framework, other supervised or unsupervised weighting schemes such as those investigated in <ns0:ref type='bibr' target='#b24'>(Moreo et al., 2018)</ns0:ref> can be naturally used to guide the selection of terms from topic-relevant sentences.</ns0:p><ns0:p>• Step 3. Detect ongoing events from the topic-relevant sentences. Event Detection (ED) is the task of automatically identifying event mentions in text <ns0:ref type='bibr' target='#b49'>(Zhang et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b25'>Nguyen and Grishman, 2018</ns0:ref>). An event mention is represented by an event trigger, which is the word that most clearly expresses the occurrence of the event. A specific ED task is Ongoing Event Detection (OED),</ns0:p><ns0:p>where the goal is to detect ongoing event mentions as opposed to historical, future, hypothetical, or other forms or events that are neither fresh nor current. The rationale behind focusing on ongoing events only is based on the need of building time series of events with the ultimate goal of learning a causal graph. Therefore, it is required that the detected events are ongoing events at the moment they are reported in the news. In previous work, we defined and extensively evaluated the OED task <ns0:ref type='bibr' target='#b20'>(Maisonnave et al., 2021a)</ns0:ref>. Also, we publicly released a dataset consisting of 2,000 news extracts from the NYT corpus containing ongoing event annotations <ns0:ref type='bibr' target='#b19'>(Maisonnave et al., 2020)</ns0:ref>. A model based on a recurrent neural network architecture that uses contextual word and sentence BERT embeddings <ns0:ref type='bibr' target='#b7'>(Devlin et al., 2018)</ns0:ref> demonstrated to be highly effective in the OED task, achieving an F1-score of 0.704 on the testing set. In that previous work, we used a pre-trained BERT (Bidirectional Encoder Representations from Transformers) model to build the word and sentence embeddings. BERT is a transformer-based deep language model used for NLP. We built the BERT word embeddings using the sum of the last four layers of the BERT pre-trained model.</ns0:p><ns0:p>Similarly, the BERT sentence embeddings were built by adding the BERT word embedding for all the words in the sentence.</ns0:p><ns0:p>• Step 4. Construct event phrase embeddings. An event-phrase embedding representation based on GloVe vectors 2 (with dimension 300) is built for each event trigger e k in each sentences or phrase P=w 1 , w 2 , . . . , w n as follows:</ns0:p><ns0:p>EPER(e k , P) = ∑</ns0:p><ns0:formula xml:id='formula_3'>w i ∈P 1 (|k − i| + 1) 2 • GloVe(w i ),<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where the event trigger e k is equal to the word w k for some k, 1 f k f n. The EPER representation allows to create a phrase embedding that accounts for the GloVe representation of each word w i in P with a quadratic penalization based on the distance of w i to e k .</ns0:p><ns0:p>• Step 5. Group events into semantically cohesive clusters. Clustering is applied to group similar event-phrase embeddings. Clustering event-phrase embeddings rather than clustering event triggers</ns0:p><ns0:p>2 GloVe vectors pretrained on an English corpus (https://spacy.io/models/en#en core web lg)</ns0:p></ns0:div>
<ns0:div><ns0:head>APPLYING THE FRAMEWORK</ns0:head><ns0:p>This section describes the application of the proposed framework to a case study and evaluates its performance through a user study with domain experts.</ns0:p></ns0:div>
<ns0:div><ns0:head>Case Study</ns0:head><ns0:p>The full NYT corpus, covering a period of 246 months (roughly 20 years), is used as a source of news articles. The topic 'Iraq' is chosen to illustrate the application of the framework. Iraq was selected for the case study because it represents the geopolitical entity (GPE) outside the United States with the highest number of mentions in the analyzed corpus, based on spaCy's named entity recognizer. 3 The rationale for choosing a GPE outside the United States is that it allows for a more focused and coherent analysis. Note, however, that any other topic, including another GPE, organization name, person name, or economic, social, political, or natural phenomenon with a sufficiently large number of mentions in the corpus could be chosen as a case study. The application of the framework is presented next.</ns0:p><ns0:p>• Step 1. For the sake of simplicity, we assume that a topic is characterized by an n-gram or set of n-grams. A sentence is said to be relevant to a topic if it contains a mention of any of the n-grams associated with the topic. Since in this case study we use the GPE 'Iraq' as the description of the topic of interest, all the sentences containing the term 'Iraq' are selected from the NYT corpus, resulting in 180,206 mentions in 170,497 unique sentences.</ns0:p><ns0:p>• Step 2. The FDD β score was used to weight, rank, and select terms from the set of topic-relevant sentences. Note that since FDD β is a supervised term-weighting technique, a sample of sentences non-relevant to the given topic is also needed. Consequently, 170,497 non-relevant sentences (the same number as relevant sentences) were randomly collected from the NYT corpus. Finally, ten topic-relevant terms are selected by applying the FDD β scheme with β = 0.477 to the set of relevant and non-relevant sentences, resulting in the list of terms presented in Table <ns0:ref type='table'>1</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>. Terms identified during Step 2: 'weapons mass destruction', 'Persian Gulf war', 'United Nations Security', 'Iraq invasion Kuwait', 'chemical biological weapons', 'military action Iraq', 'United States', 'war Iraq', 'Saddam Hussein', and 'Bush administration'.</ns0:p><ns0:p>• Step 3. A total of 498,560 ongoing event mentions are detected by applying the OED task on the 170,497 sentences related to 'Iraq' selected in Step 1.</ns0:p><ns0:p>• Step 4. An event-phrase embedding representation is built for each event mention detected in step 3.</ns0:p><ns0:p>• Step 5. MiniBatch KMeans is applied to group the 498,560 event-phrase embedding representations built in step 4 into 1000 clusters. The value K=1000 is selected by applying the Elbow method <ns0:ref type='bibr' target='#b46'>(Thorndike, 1953)</ns0:ref>. Only six highly cohesive clusters with clear semantics and containing a large number of event mentions are selected to define event cluster variables. The event clusters selected for this analysis are described in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>.</ns0:p><ns0:p>• Step 6. Measurements of the observations of the sixteen selected variables (ten terms and six event clusters) across time are collected in a dataset.
<ns0:p>• Step 7. A monthly time series of length 246 (January 1987-June 2007) of the sixteen selected variables (ten terms and six event clusters) is built. Note that a different granularity (e.g., weekly or daily) or a different number of variables could be used to build the time series. As an example, we present in Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref>.a the resulting time series built by the framework for the events Military Action (C249) and Death Reports (C109). Another example of time series, for the terms 'military action Iraq', 'Iraq invasion Kuwait', and 'chemical weapons', is presented in Figure <ns0:ref type='figure' target='#fig_5'>3</ns0:ref>.b.</ns0:p><ns0:p>• Step 8. Before learning the causal structure, we analyzed the stationarity of the variables. In the first place, we performed Augmented Dickey-Fuller (ADF) unit root tests for stationarity on each time series. From the ADF tests, we conclude that all the analyzed series are stationary, except for the one corresponding to the variable 'bush administration'.</ns0:p><ns0:p>To look further into this non-stationary time series we applied the Zivot-Andrews (ZA) unit root test and concluded that the series is stationary with a structural break (a minimal sketch of both tests is shown at the end of this subsection). Note that the variable 'bush administration' is ambiguous since it refers to the administrations of both the 41st and the 43rd presidents of the US, but the structural break clearly distinguishes them. Moreover, the number of observations corresponding to the first Bush presidency is almost null. That difference in the behavior of the time series before and after 2001 is likely the reason why the series is not stationary unless the structural change that happened around that time is taken into account. The causal graph resulting from applying the described framework to the NYT corpus on the topic 'Iraq' is presented in Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>. The ensemble ∩ technique was used in this example because high precision is desired so that only causal relations with high confidence are included. Note, however, that any of the previously described causal learning techniques or other ensemble approaches could be adopted (e.g., based on a weighted voting scheme).</ns0:p><ns0:p>By analyzing the resulting causal graph it is possible to identify several causal relations with clear semantics, for instance, the causal link 'weapons mass destruction' → 'military action Iraq'. While not all the causal links identified by the framework are necessarily correct, they provide useful information on potential causal relations in a domain. The next section presents an evaluation by domain experts of different causal relations inferred by the framework by applying the most promising causal structure learning techniques.</ns0:p></ns0:div>
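As a rough illustration of Steps 6 and 7 of the case study, the following sketch builds monthly term counts from the topic-relevant sentences; the variable relevant (a list of (date, sentence) pairs produced in Step 1) and the selected terms are assumptions made only for this example.

import pandas as pd

terms = ["weapons mass destruction", "military action Iraq", "Saddam Hussein"]
rows = [{"month": pd.Period(date, freq="M"),
         **{t: int(t.lower() in sent.lower()) for t in terms}}
        for date, sent in relevant]
ts = pd.DataFrame(rows).groupby("month").sum().sort_index()

The stationarity checks can be reproduced with statsmodels, assuming the monthly series are the columns of a DataFrame named ts; this is a sketch of the two tests rather than the exact analysis script used in the study.

from statsmodels.tsa.stattools import adfuller, zivot_andrews

for name in ts.columns:
    adf_stat, adf_p, *_ = adfuller(ts[name], autolag="AIC")
    print(f"{name}: ADF p-value = {adf_p:.3f}")

# For the series flagged as non-stationary, test for stationarity
# with a single structural break (Zivot-Andrews).
za_stat, za_p, _, _, break_idx = zivot_andrews(ts["bush administration"], regression="c")
print(f"ZA p-value = {za_p:.3f}, estimated break at observation {break_idx}")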
<ns0:div><ns0:head>Evaluation</ns0:head><ns0:p>The case study presented in the previous section offers initial evidence on the utility of the proposed framework. However, a systematic evaluation is required to provide stronger evidence of the effectiveness of the proposed approach. The evaluation methodology adopted in this work consists of two major evaluation tasks outlined in Figure <ns0:ref type='figure' target='#fig_8'>6</ns0:ref>: (1) an evaluation with synthetic data from two well-known datasets (TETRAD and CauseMe) and (2) an evaluation with real-world data generated by the proposed framework (based on the case study on the topic 'Iraq' described earlier). The following two sections describe each of the evaluation tasks. The first evaluation task addresses the first research question (i.e., RQ1. What methods for time-series causality learning are effective in generalized synthetic data?). The second evaluation task offers evidence to answer the second research question (i.e., RQ2. Which of the most promising methods for time-series causality learning identified through RQ1 are also effective on real-world data extracted from news?). Finally, the second evaluation task also addresses the third research question (i.e., RQ3. What type of variables extracted from a large corpus of news is effective for building interpretable causal graphs on a topic under analysis?).</ns0:p></ns0:div>
<ns0:div><ns0:head>Evaluation on Synthetic Data</ns0:head><ns0:p>For the experiments carried out with synthetic data, two different sources are used: (1) TETRAD <ns0:ref type='bibr' target='#b37'>(Scheines et al., 1998)</ns0:ref> and (2) CauseMe <ns0:ref type='bibr'>(Runge et al., 2019a)</ns0:ref>. The simulation tool TETRAD was used to generate 56 synthetic datasets with different characteristics. In addition, the eight datasets corresponding to the eight experiments of the nonlinear-VAR datasets 4 were selected from the CauseMe benchmarking platform 5 , resulting in a total of 64 synthetic datasets.</ns0:p><ns0:p>The TETRAD datasets were generated by varying configuration parameters such as the time series length (T ∈ {100, 500, 1000, 2000, 3000, 4000, 5000}), the number of observed variables (N ∈ {6, 9, 12, 15, 18, 21, 24, 27, 30}), the number of hidden variables (H ∈ {0, 2, 4, 6, 8, 10, 12}), and the number of lags (L ∈ {1, 2, 3, 4, 5}). In addition, two settings were used to build the DAG, namely scale-free DAG (SFDAG) and random forward DAG (RFDAG). The average performance (across all the evaluated configuration parameters) of the analyzed state-of-the-art, ensemble, and baseline techniques in terms of precision, recall, and F1-score is presented in Figure <ns0:ref type='figure'>7</ns0:ref>.</ns0:p><ns0:p>The results show that the five state-of-the-art techniques that achieved the best precision both for RFDAG and SFDAG are (from best to worst) BigVAR, Direct-LiNGAM, PCMCI, VAR, and PC. The evaluation in terms of recall shows that the best four state-of-the-art techniques for RFDAG are VAR, PCMCI, Direct-LiNGAM, and PC. In the case of SFDAG, the four best state-of-the-art techniques are the same, but in a slightly different order, namely VAR, PCMCI, PC, and Direct-LiNGAM. Finally, the four state-of-the-art techniques that achieved the best F1-score for both RFDAG and SFDAG are Direct-LiNGAM, PCMCI, VAR, and PC. As mentioned earlier, the best four state-of-the-art techniques are combined into two ensemble techniques called ensemble ∩ and ensemble ∪ . Note that ensemble ∩ adds a causal relation only when all the combined techniques agree on including it, and therefore it tends to favor precision. On the other hand, ensemble ∪ adds a causal relation when any of the combined techniques includes it, and as a consequence, it tends to favor recall. Note that ensemble ∩ is the technique achieving the best F1-score for both RFDAG and SFDAG.</ns0:p><ns0:p>The analysis on CauseMe shows a decrease in precision as the number of nodes increases, with Direct-LiNGAM and ensemble ∩ being the techniques least affected by this loss of performance. On the other hand, the number of nodes does not have a noticeable impact on recall. It is worth mentioning that the high recall values achieved by Random are due to the fact that the ground truth causal graph is sparse and Random adds edges with a probability of 0.5. It is possible to observe that as the number of nodes increases, the analysis based on F1-score ranks the state-of-the-art techniques (from best to worst) as follows: Direct-LiNGAM, VAR, PC, PCMCI, and Random.</ns0:p><ns0:p>The evaluation carried out on the TETRAD and CauseMe datasets provides evidence to address RQ1, pointing to the effectiveness of Direct-LiNGAM, VAR, PC, and PCMCI for time-series causality learning in generalized synthetic data.
We also observe that ensemble ∩ tends to achieve high precision while ensemble ∪ tends to achieve high recall.</ns0:p></ns0:div>
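A minimal sketch of how the two ensembles can be built and scored is given below, assuming each technique outputs its learned causal graph as a set of directed (cause, effect) edges; the edge sets shown are illustrative, not actual results.

def ensembles(edge_sets):
    e_inter = set.intersection(*edge_sets)  # ensemble_∩: all techniques agree
    e_union = set.union(*edge_sets)         # ensemble_∪: any technique proposes it
    return e_inter, e_union

def edge_scores(predicted, truth):
    # Precision, recall, and F1 over directed edges.
    tp = len(predicted & truth)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(truth) if truth else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

var_edges   = {("A", "B"), ("B", "C")}
pcmci_edges = {("A", "B"), ("C", "B")}
truth       = {("A", "B"), ("C", "B")}
e_inter, e_union = ensembles([var_edges, pcmci_edges])
print(edge_scores(e_inter, truth), edge_scores(e_union, truth))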
<ns0:div><ns0:head>Evaluation with Real-World Data</ns0:head><ns0:p>An evaluation is carried out using real-world data generated by the framework based on the case study on the topic 'Iraq' described earlier. Three volunteer domain experts (annotators from now on) were recruited for an experiment aimed at assessing the existence of causal relations between pairs of variables extracted by the framework. Two of the annotators had a Ph.D. in History while the third had a Ph.D. in Political Science. Let T and E be the sets of terms and event clusters from Tables <ns0:ref type='table' target='#tab_2'>1 and 2</ns0:ref>, respectively.</ns0:p><ns0:p>Three sets of unordered pairs of variables of different types (terms and event clusters) were built as follows:</ns0:p><ns0:p>• P_{E,E} = {{e_1, e_2} : e_1 ∈ E ∧ e_2 ∈ E ∧ e_1 ≠ e_2}.</ns0:p><ns0:p>• P_{E,T} = {{e, t} : e ∈ E ∧ t ∈ T}.</ns0:p><ns0:p>• P_{T,T} = {{t_1, t_2} : t_1 ∈ T ∧ t_2 ∈ T ∧ t_1 ≠ t_2}.</ns0:p><ns0:p>We randomly selected 15 pairs from each of the sets P_{E,E} (event-event), P_{E,T} (event-term) and P_{T,T} (term-term), resulting in a total of 45 pairs. Based on each annotator's own understanding of causality, and on the meaning of the variables as described in the annotation guidelines, the annotators were requested to select (to the best of their understanding) one of the following options for each pair of variables v_1 and v_2:</ns0:p><ns0:p>1. The variables v_1 and v_2 are causally unrelated (i.e., v_1 ↛ v_2 and v_2 ↛ v_1).</ns0:p><ns0:p>2. The variables v_1 and v_2 are causally related in both directions (i.e., v_1 → v_2 and v_2 → v_1).</ns0:p><ns0:p>3. The variables v_1 and v_2 are causally related in one direction (i.e., v_1 → v_2 but v_2 ↛ v_1).</ns0:p><ns0:p>4. The variables v_1 and v_2 are causally related in the other direction (i.e., v_2 → v_1 but v_1 ↛ v_2).</ns0:p><ns0:p>Note that for each of the 45 evaluated pairs {v_1, v_2} it was possible to derive two Boolean assessments: (1) v_1 causes v_2 or v_1 does not cause v_2, and (2) v_2 causes v_1 or v_2 does not cause v_1. As a result, we obtained a total of 90 Boolean labels from each annotator. After collecting the list of labels from each annotator, we measured the inter-annotator agreement by computing Cohen's Kappa coefficient between each pair of annotators. Since we were interested in investigating whether different types of variables (events vs. terms) had a different effect on the analysis, we computed separate coefficients for event-event, event-term, and term-term pairs. The resulting Cohen's Kappa coefficients for each type of pair and the average value are reported in Table <ns0:ref type='table' target='#tab_6'>3</ns0:ref>. We observe that there tends to be no agreement for term-term pairs, while the agreement for event-event pairs is considerably higher than the agreement observed for the other types of pairs. This points to the important fact that ongoing events offer a better ground for causality analysis than terms do.
However, we observe little agreement among annotators in general, which indicates that the identification of causal relations is a subjective and difficult problem even for domain experts.</ns0:p><ns0:p>Due to the lack of a reliable gold-standard ground truth derived from domain experts to carry out a conclusive evaluation of the analyzed causal structure learning techniques, we built three ground truth approximations as follows:</ns0:p><ns0:p>• Bold Ground Truth: variable v_1 causes v_2 if and only if at least one annotator indicates the existence of a causal relation from v_1 to v_2.</ns0:p><ns0:p>• Moderate Ground Truth: variable v_1 causes v_2 if and only if the majority of the annotators (i.e., at least two annotators) agree on the existence of a causal relation from v_1 to v_2.</ns0:p><ns0:p>• Conservative Ground Truth: variable v_1 causes v_2 if and only if the three annotators agree on the existence of a causal relation from v_1 to v_2.</ns0:p><ns0:p>The effectiveness of each of the analyzed causal discovery methods based on Bold Ground Truth, Moderate Ground Truth, and Conservative Ground Truth is reported in Tables <ns0:ref type='table' target='#tab_8'>4, 5, and 6</ns0:ref>, respectively. As expected, the precision tends to increase as the ground truth becomes less conservative, while the recall is higher for a more conservative ground truth. Also, in the same way as in the evaluations carried out with synthetic data, the highest precision is usually achieved by ensemble ∩ (except for Bold Ground Truth), while the highest recall is always achieved by ensemble ∪ . This analysis provides evidence to answer RQ2, indicating that ensemble ∩ and ensemble ∪ are effective for learning causal relations, depending on whether the goal is to achieve high precision or high recall, respectively.</ns0:p><ns0:p>Since the inter-annotator agreement for the assessed causal relations reported in Table <ns0:ref type='table' target='#tab_6'>3</ns0:ref> indicates that event-event relations offer a better ground for causality analysis, we looked into the performance of each of the analyzed causal structure learning techniques restricted only to pairs from P_{E,E} (i.e., both variables represent event clusters). The results of this analysis based on Bold Ground Truth, Moderate Ground Truth, and Conservative Ground Truth are reported in Tables <ns0:ref type='table' target='#tab_11'>7, 8</ns0:ref>, and 9, respectively. It is interesting to note that by restricting the analysis to variables representing events only, the performance achieved by most methods tends to be superior when evaluated on Moderate Ground Truth and Conservative Ground Truth.</ns0:p><ns0:p>The evaluation carried out with real-world data and domain experts points to two important findings. In the first place, inter-annotator agreement on causal relations significantly increases when the variables represent ongoing events rather than general terms. In the second place, the performance of most methods tends to improve when the analysis is restricted to ongoing events. Hence, as an answer to RQ3, we conclude that variables representing ongoing events extracted from a large corpus of news are more effective for building interpretable causal graphs than variables representing terms.
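A minimal sketch of the agreement and ground-truth computations is shown below, assuming the annotators' Boolean judgments are available as parallel lists and that votes counts, for each ordered pair of variables, how many annotators asserted the relation; all values shown are illustrative rather than the actual annotations.

from itertools import combinations
from sklearn.metrics import cohen_kappa_score

labels = {"annotator_1": [1, 0, 1, 1, 0, 0],
          "annotator_2": [1, 0, 0, 1, 0, 0],
          "annotator_3": [0, 0, 1, 1, 0, 1]}
for a, b in combinations(labels, 2):
    print(a, "vs", b, "kappa =", round(cohen_kappa_score(labels[a], labels[b]), 2))

votes = {("C249", "C109"): 3, ("C109", "C249"): 1,
         ("weapons mass destruction", "war Iraq"): 2}
bold         = {pair for pair, n in votes.items() if n >= 1}  # at least one annotator
moderate     = {pair for pair, n in votes.items() if n >= 2}  # majority of annotators
conservative = {pair for pair, n in votes.items() if n == 3}  # unanimous agreement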
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>This article looked into the problem of extracting a causal graph from a news corpus. An initial evaluation of nine state-of-the-art causal structure learning techniques using synthetic data allowed us to address RQ1, offering insight into which are the most promising methods for time-series causality learning. The evaluation with domain experts helped us answer RQ2, by making it possible to further compare the analyzed methods and to assess the overall performance of the proposed framework using real-world data. The labeling task carried out by experts offered interesting insights into the problem of building a ground truth derived from annotators' assessments. In the first place, we learned from the evaluation that there tends to be little agreement among annotators in general, which points to the high subjectivity in causality analysis. However, we also noticed that the inter-annotator agreement significantly increases when the variables representing potential causes and effects refer to ongoing events rather than general terms (n-grams). We contend that this results from the fact that assessing causal relations between events is a better-defined problem than assessing causal relations between other variables with unclear semantics, such as general terms. This finding provides an initial answer to RQ3.</ns0:p><ns0:p>The lack of ground truth for causal discovery is a limitation recurrently discussed in the literature <ns0:ref type='bibr' target='#b18'>(Li et al., 2019;</ns0:ref><ns0:ref type='bibr' target='#b3'>Cheng et al., 2022)</ns0:ref>, and hence the tendency to use synthetic datasets to evaluate new causal discovery techniques. In this work, we took a further step, building a ground truth dataset for causal analysis in a real-world domain which, despite its limitations, provides a new instrument for measuring the performance of causal discovery techniques.</ns0:p><ns0:p>The practical implications of this framework can be understood in terms of the analyses that can be derived from causal graphs obtained empirically. In particular, this approach helps identify new or unknown relationships associated with a topic or variable of interest that can offer a new perspective on the problem. Constructing a causal graph could be one of the first steps in building causal models. By moving from purely predictive models to causal modeling, we are enriching the level of analysis that could be performed over the variables and relationships of interest, allowing analysts not only to reason over existing data but to evaluate the effect of possible interventions or counterfactuals that did not occur in the observed data. Such analysis is possible because causal modeling allows us to model the generative process of the data, which leads to more robust and complete models. These practical applications of this framework can be highly relevant for public policy makers and social researchers aiming to evaluate cause and effect relations reported in large text corpora.</ns0:p><ns0:p>While we have evaluated the most salient causal discovery methods from the literature and merged the most effective ones into two ensemble methods, as part of our future work we plan to develop a novel causal discovery method from observational data that combines ideas from machine learning and econometrics.
The proposed transformation of data from news into time series of relevant variables makes it possible to combine data coming from news with other variables that are typically available as time series (e.g., stock market data, socioeconomic indicators, among others), enriching the domain and providing experts with additional valuable information. This will be explored as part of our future work.</ns0:p><ns0:p>Also, we plan to integrate the complete framework into a visual tool that will assist users in identifying relevant variables and exploring potential causal relations from digital media. The tool interface will allow the user to adjust different parameters to explore the data in more detail. For instance, the user could decide on the number and type of variables and causal links displayed in the causal graph, the time granularity of the time series (monthly, weekly, daily, etc.), and the number of event clusters, among other options. Finally, we plan to conduct additional user studies to further evaluate whether the developed tool facilitates sense-making in complex scenarios by domain experts.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2022:05:73895:1:1:NEW 30 Jun 2022)</ns0:figDesc></ns0:figure>
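As a minimal sketch of this kind of integration, the following example aligns a news-derived monthly series with an external indicator on a common monthly index; both DataFrames are hypothetical and serve only to illustrate the alignment.

import pandas as pd

news_series = pd.DataFrame(
    {"C249 - Military Action": [3, 7, 12]},
    index=pd.period_range("2003-01", periods=3, freq="M"),
)
external_indicator = pd.DataFrame(
    {"indicator_value": [32.9, 35.8, 29.5]},
    index=pd.period_range("2003-01", periods=3, freq="M"),
)

# Align on the monthly index so both variables can enter the same causal analysis.
combined = news_series.join(external_indicator, how="inner")
print(combined)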
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Overview of time series causal structure learning techniques analyzed in this work. These techniques take a time series of variables and generate a causal graph. The analyzed techniques are divided into independence-based models, restricted structural models, autoregressive models, and ensemble models. The ensemble models combine Direct-LiNGAM, PCMCI, VAR, and PC (which proved to be the four most effective techniques according to evaluations carried out on synthetic data).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>as two representative techniques based on independence. The TIGRAMITE package 1 (Runge et al., 2019b) is used to evaluate both techniques and to analyze the conditional independencies in the observed data through a partial correlation test (ParCor). This test estimates the partial correlations by means of a linear regression computed with ordinary least squares and a non-zero Pearson linear correlation test in the residuals. Other non-linear conditional independence tests are not included because of their prohibitive computation time.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>1 https://github.com/jakobrunge/tigramite</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Framework for causal graph extraction from digital media. The framework takes as input a topic description and a corpus of news articles. It then applies eight steps aimed at building a causal graph associated with the topic of interest.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Example of ongoing event time series (a) and term time series (b) associated with the topic 'Iraq' extracted from the NYT corpus by the proposed framework.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Unit root tests. According to the critical values for the ADF test all the series are stationary, except for 'bush administration'. The ZA unit root test indicates that the series is stationary with a structural break in December 2000.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Causal graph resulting from applying the ensemble ∩ technique over the time series built from the NYT corpus on the topic 'Iraq'.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Evaluation methodology applied to answer the posed research questions. The evaluations are conducted on synthetic data (from TETRAD and CauseMe) and real-world data (generated by the framework and assessed by domain experts). The experiments with synthetic data attempt to identify the most promising time-series causality learning techniques (RQ1). The experiments with real-world data look into the question of which of the most promising methods for time-series causality learning identified through the experiments with synthetic data are also effective on real-world variables extracted from news (RQ2). The real-world data experiments also investigate what type of variables (terms or events) extracted from news are the most effective ones for building interpretable causal graphs (RQ3).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>Direct-LiNGAM, PCMCI, VAR, and PC) are further analyzed on the CauseMe datasets. Although BigVAR achieves high precision, its confidence intervals for the other metrics are very large, pointing to inconsistent performance, and therefore it was omitted from the rest of the analysis. The performance achieved by the Random technique on the CauseMe datasets is also reported for comparison purposes. The eight datasets selected from the CauseMe benchmarking platform are built in a similar way with different time series lengths (T ∈ {300, 600}) and numbers of nodes (N ∈ {3, 5, 10, 20}). The precision, recall, and F1-score values achieved by the evaluated state-of-the-art, ensemble, and baseline techniques on the eight datasets are reported in Figure 8. The charts on the left- and right-hand sides present the results for T=300 and T=600, respectively. Each chart displays the results for the four analyzed values of N.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure8. Performance in terms of precision, recall and F1-score on the CauseMe datasets for the evaluated state-of-the-art, ensemble, and baseline techniques. Results are reported both for time series of length 300 (left) and 600 (right), and for graphs with 3, 5, 10, and 20 nodes.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell cols='2'>Cluster Salient terms</ns0:cell><ns0:cell>Description</ns0:cell></ns0:row><ns0:row><ns0:cell>C109</ns0:cell><ns0:cell cols='2'>killed, iraq, american, soldiers, civilians Death reports</ns0:cell></ns0:row><ns0:row><ns0:cell>C165</ns0:cell><ns0:cell>against, war, iraq, opposed, threat</ns0:cell><ns0:cell>Negative connotation reports</ns0:cell></ns0:row><ns0:row><ns0:cell>C201</ns0:cell><ns0:cell>attacks, terrorist, iraq, missile, suicide</ns0:cell><ns0:cell>Terrorist attack reports</ns0:cell></ns0:row><ns0:row><ns0:cell>C249</ns0:cell><ns0:cell>attack, iraq, military, missile, against</ns0:cell><ns0:cell>Military actions</ns0:cell></ns0:row><ns0:row><ns0:cell>C269</ns0:cell><ns0:cell>invasion, iraq, kuwait, american</ns0:cell><ns0:cell>Kuwait invasion</ns0:cell></ns0:row><ns0:row><ns0:cell>C550</ns0:cell><ns0:cell>war, iraq, led, 2003</ns0:cell><ns0:cell>Iraq war</ns0:cell></ns0:row></ns0:table><ns0:note>3 https://spacy.io/</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Event clusters identified during Step 5.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head /><ns0:label /><ns0:figDesc /><ns0:table /><ns0:note>4 https://causeme.uv.es/model/nonlinear-VAR/ 5 causeme.net</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head /><ns0:label /><ns0:figDesc>The four state-of-the-art techniques that consistently achieved the best performance on TETRAD</ns0:figDesc><ns0:table /><ns0:note>Figure 7. Averaged performance in terms of precision, recall and F1-score on the TETRAD datasets for the evaluated state-of-the-art, ensemble, and baseline techniques. Results are reported both for RFDAG (left) and SFDAG (right). Confidence intervals are reported at the 95% level.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Cohen's Kappa coefficients among annotators.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Methods' effectiveness on Bold Ground Truth.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Methods' effectiveness on Moderate Ground Truth.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell cols='4'>Accuracy Precision Recall F1-score</ns0:cell></ns0:row><ns0:row><ns0:cell>Direct-LiNGAM</ns0:cell><ns0:cell>0.6889</ns0:cell><ns0:cell>0.3333</ns0:cell><ns0:cell>0.1667</ns0:cell><ns0:cell>0.2222</ns0:cell></ns0:row><ns0:row><ns0:cell>PC</ns0:cell><ns0:cell>0.7000</ns0:cell><ns0:cell>0.4400</ns0:cell><ns0:cell>0.4583</ns0:cell><ns0:cell>0.4490</ns0:cell></ns0:row><ns0:row><ns0:cell>PCMCI</ns0:cell><ns0:cell>0.7111</ns0:cell><ns0:cell>0.4545</ns0:cell><ns0:cell>0.4167</ns0:cell><ns0:cell>0.4348</ns0:cell></ns0:row><ns0:row><ns0:cell>VAR</ns0:cell><ns0:cell>0.6000</ns0:cell><ns0:cell>0.3235</ns0:cell><ns0:cell>0.4583</ns0:cell><ns0:cell>0.3793</ns0:cell></ns0:row><ns0:row><ns0:cell>ensemble ∩</ns0:cell><ns0:cell>0.7333</ns0:cell><ns0:cell>0.5000</ns0:cell><ns0:cell>0.0833</ns0:cell><ns0:cell>0.1429</ns0:cell></ns0:row><ns0:row><ns0:cell>ensemble ∪</ns0:cell><ns0:cell>0.5667</ns0:cell><ns0:cell>0.3256</ns0:cell><ns0:cell>0.5833</ns0:cell><ns0:cell>0.4179</ns0:cell></ns0:row></ns0:table><ns0:note>16/20PeerJ Comput. Sci. reviewing PDF | (CS-2022:05:73895:1:1:NEW 30 Jun 2022)Manuscript to be reviewed Computer Science</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Methods' effectiveness on Conservative Ground Truth.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell cols='4'>Accuracy Precision Recall F1-score</ns0:cell></ns0:row><ns0:row><ns0:cell>Direct-LiNGAM</ns0:cell><ns0:cell>0.2333</ns0:cell><ns0:cell>0.6000</ns0:cell><ns0:cell>0.1250</ns0:cell><ns0:cell>0.2069</ns0:cell></ns0:row><ns0:row><ns0:cell>PC</ns0:cell><ns0:cell>0.3667</ns0:cell><ns0:cell>0.7778</ns0:cell><ns0:cell>0.2917</ns0:cell><ns0:cell>0.4242</ns0:cell></ns0:row><ns0:row><ns0:cell>PCMCI</ns0:cell><ns0:cell>0.4667</ns0:cell><ns0:cell>0.9000</ns0:cell><ns0:cell>0.3750</ns0:cell><ns0:cell>0.5294</ns0:cell></ns0:row><ns0:row><ns0:cell>VAR</ns0:cell><ns0:cell>0.4000</ns0:cell><ns0:cell>0.7500</ns0:cell><ns0:cell>0.3750</ns0:cell><ns0:cell>0.5000</ns0:cell></ns0:row><ns0:row><ns0:cell>ensemble ∩</ns0:cell><ns0:cell>0.2333</ns0:cell><ns0:cell>0.6667</ns0:cell><ns0:cell>0.0833</ns0:cell><ns0:cell>0.1481</ns0:cell></ns0:row><ns0:row><ns0:cell>ensemble ∪</ns0:cell><ns0:cell>0.5000</ns0:cell><ns0:cell>0.8000</ns0:cell><ns0:cell>0.5000</ns0:cell><ns0:cell>0.6154</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_10'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Methods' effectiveness for event-event causal relations on Bold Ground Truth.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell cols='4'>Accuracy Precision Recall F1-score</ns0:cell></ns0:row><ns0:row><ns0:cell>Direct-LiNGAM</ns0:cell><ns0:cell>0.5333</ns0:cell><ns0:cell>0.4000</ns0:cell><ns0:cell>0.1538</ns0:cell><ns0:cell>0.2222</ns0:cell></ns0:row><ns0:row><ns0:cell>PC</ns0:cell><ns0:cell>0.6667</ns0:cell><ns0:cell>0.6667</ns0:cell><ns0:cell>0.4615</ns0:cell><ns0:cell>0.5455</ns0:cell></ns0:row><ns0:row><ns0:cell>PCMCI</ns0:cell><ns0:cell>0.6333</ns0:cell><ns0:cell>0.6000</ns0:cell><ns0:cell>0.4615</ns0:cell><ns0:cell>0.5217</ns0:cell></ns0:row><ns0:row><ns0:cell>VAR</ns0:cell><ns0:cell>0.5667</ns0:cell><ns0:cell>0.5000</ns0:cell><ns0:cell>0.4615</ns0:cell><ns0:cell>0.4800</ns0:cell></ns0:row><ns0:row><ns0:cell>ensemble ∩</ns0:cell><ns0:cell>0.6000</ns0:cell><ns0:cell>0.6667</ns0:cell><ns0:cell>0.1538</ns0:cell><ns0:cell>0.2500</ns0:cell></ns0:row><ns0:row><ns0:cell>ensemble ∪</ns0:cell><ns0:cell>0.6000</ns0:cell><ns0:cell>0.5333</ns0:cell><ns0:cell>0.6154</ns0:cell><ns0:cell>0.5714</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_11'><ns0:head>Table 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Methods' effectiveness for event-event causal relations on Moderate Ground Truth.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_12'><ns0:head>Table 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Methods' effectiveness for event-event causal relations on Conservative Ground Truth.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Method</ns0:cell><ns0:cell cols='4'>Accuracy Precision Recall F1-score</ns0:cell></ns0:row><ns0:row><ns0:cell>Direct-LiNGAM</ns0:cell><ns0:cell>0.6333</ns0:cell><ns0:cell>0.2000</ns0:cell><ns0:cell>0.1250</ns0:cell><ns0:cell>0.1538</ns0:cell></ns0:row><ns0:row><ns0:cell>PC</ns0:cell><ns0:cell>0.6333</ns0:cell><ns0:cell>0.3333</ns0:cell><ns0:cell>0.3750</ns0:cell><ns0:cell>0.3529</ns0:cell></ns0:row><ns0:row><ns0:cell>PCMCI</ns0:cell><ns0:cell>0.6667</ns0:cell><ns0:cell>0.4000</ns0:cell><ns0:cell>0.5000</ns0:cell><ns0:cell>0.4444</ns0:cell></ns0:row><ns0:row><ns0:cell>VAR</ns0:cell><ns0:cell>0.6000</ns0:cell><ns0:cell>0.3333</ns0:cell><ns0:cell>0.5000</ns0:cell><ns0:cell>0.4000</ns0:cell></ns0:row><ns0:row><ns0:cell>ensemble ∩</ns0:cell><ns0:cell>0.7000</ns0:cell><ns0:cell>0.3333</ns0:cell><ns0:cell>0.1250</ns0:cell><ns0:cell>0.1818</ns0:cell></ns0:row><ns0:row><ns0:cell>ensemble ∪</ns0:cell><ns0:cell>0.5667</ns0:cell><ns0:cell>0.3333</ns0:cell><ns0:cell>0.6250</ns0:cell><ns0:cell>0.4348</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "June 25 2022
Dear Muhammad Aleem,
We would like to thank you for considering our manuscript 'Causal graph extraction
from news: a comparative study of time-series causality learning techniques' as a candidate
for publication in the PeerJ Computer Science journal. Likewise, we would like to express
our gratitude to you and the reviewers for all the time and effort you invested in reading and
reviewing our manuscript. We have systematically addressed the received comments and
believe that this has significantly strengthened the manuscript.
Please find below the comments made by the Reviewers and our Point-to-Point
responses to each of them.
Sincerely,
Ana Maguitman
(on behalf of the authors)
Note: Reviewers’ and Editor’s comments are in black; responses are in blue italics.
Editor comments (Muhammad Aleem)
MAJOR REVISIONS
Based on reviewers’ comments, you may resubmit the revised manuscript for further
consideration. Please consider the reviewers’ comments carefully and submit a list of
responses to the comments along with the revised manuscript.
Reviewer 1 (Anonymous)
Basic reporting
A good article which presents a novel framework for extracting causal graphs from digital
text media that are selected from a domain under analysis by applying specially developed
information retrieval and natural language processing methods. The framework is applied to
the New York Times dataset, which covers news for a period of 246 months. The proposed
analysis offers valuable insights into the problems of identifying topic-relevant variables from
large volumes of news and learning causal graphs from time series.
(Q1) A. Normally, the Abstract and beginning of an introduction section contain the problems
in the existing approaches followed by the solution but in this article, the problem statement
is somehow not discussed.
(R1) We added the following text to the beginning of the abstract: “Causal graph extraction
from news has the potential to aid in the understanding of complex scenarios. In particular, it
can help explain and predict events, as well as conjecture about possible cause-effect
connections. However, limited work has explored the problem of large-scale extraction of
causal graphs from news articles. “ Also, we extended the first paragraph of the introduction
section to briefly discuss the main limitations of existing approaches.
(Q2) B. In the introduction, the proposed work starts from line 48 then in between (lines
53-55) the works of the other authors came.
- It will be nice to have if existing work is discussed in one place followed by proposed work.
In this way, the reader will have a good understanding of the work.
(R2) The work cited in lines 53-55 is our own previous work. The methods proposed in the
cited articles are used to extract relevant terms and ongoing events from news articles. We
have modified the text to make this clear.
(Q3) C. The employed dataset is consisting of 246 month period.
- Why not this period is mentioned in years. So that each reader doesn’t have to calculate it.
(R3) We now included both the number of months and years when discussing the size of the
dataset.
(Q4) D. How research questions have been formulated/reached? Normally, this is done after
an extensive literature review but in this article, it seems some sort of reverse engineering is
being done e.g., research questions are mapped to literature.
(R4) We agree with the reviewer in that the research gap that leads to the research
questions was not sufficiently emphasized before the research questions were posed. The
structure of our manuscript has been modified to follow a widely adopted organization of
scientific articles. The Introduction section of the revised version of our manuscript presents
and motivates the topic, outlines current approaches, identifies the research gap, formulates
the research questions, summarizes the proposal and how the research questions are
addressed, and finally presents the contributions. A detailed Related Work section is
presented after the Introduction section, with a discussion on how our proposal distinguishes
itself from previous work.
(Q5) E. In the related work section, existing research is being discussed but shortcomings of
each work are not highlighted. These shortcomings ultimately lead to research questions.
(R5) As mentioned in (R4), the structure of the paper has been modified to address this
issue.
(Q6) F. Figure 2 can be improved. At the moment, it is a bit congested. E.g., step 2 and step
5 converge at a point that is not very clear to the reader.
(R6) We reorganized the content of Figure 2 in such a way that each step is delimited by a
box. The new figure clearly reflects the fact that some steps are sequential while others can
be parallelized.
(Q7) G. Line # 200, a topic is said to be relevant if it contains mention
- This is a big assumption because it is not necessary to have mentioned it in all cases.
(R7) Finding all text fragments related to a given topic is a task so complex that it is a field of
study in itself (information retrieval (IR) or high-recall information retrieval). Due to the
framework complexity, we presented in our case study a simplified version of a topic
description based on GPE mentions. Nonetheless, the framework without any modifications
can be applied using other schemes to represent topics (which could be built using IR
strategies or generated by users). Hence, we modified the formulation of Step 1 in the
framework description to reflect the fact that topics can be represented in a variety of ways.
Accordingly, we mention that examples of simple topic representations are n-grams or sets
of n-grams. However, more complex schemes for representing topics can be naturally
adopted by the framework, including machine-centered representations, such as vector
space models or multimodal mixture models, and human-centered representations, such as
concept maps.
It is worth mentioning that although we could be missing potentially relevant text fragments
by using our strategy in the case study, we expect a high precision due to the straightforward
methodology by which we selected the topic-relevant text fragments (i.e., if a text fragment
mentions a GPE, then that text fragment is related to that GPE).
(Q8) H. Line # 221, what is BERT embeddings?
(R8) In the cited work (Maisonnave et al., 2021a), we used a pre-trained BERT (Bidirectional
Encoder Representations from Transformers) model to build the word and sentence
embeddings. BERT is a transformer-based deep language model used for NLP. In the
mentioned work, we built the BERT word embeddings using the sum of the last four layers
of the BERT pre-trained model. Similarly, we built the BERT sentence embeddings by adding
the BERT word embedding for all the words in the sentence.
To clarify, we added the previously mentioned explanation to our manuscript.
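For illustration, a minimal sketch of this construction (assuming the HuggingFace transformers library and the bert-base-uncased checkpoint; the cited work may differ in model and preprocessing details) is:

import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

sentence = "Oil prices rose after the announcement."
inputs = tokenizer(sentence, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

hidden_states = outputs.hidden_states                              # embeddings layer + 12 encoder layers
word_embeddings = torch.stack(hidden_states[-4:]).sum(dim=0)[0]    # sum of the last four layers, shape (num_tokens, 768)
sentence_embedding = word_embeddings.sum(dim=0)                    # sum of the word embeddings, shape (768,)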
(Q9) I. Line # 248, the selected topic has the highest number of mentions in the corpus.
- Is this a biased input to the proposed framework? What if the selected topic has the lowest
number of mentions? How proposed framework will behave in this scenario?
(R9) As is the case for many other data-driven approaches, the amount of data could
potentially impact the performance of the estimations. Therefore, we could have a high
variance in the estimations for small data cases. It could also happen that the number of
relevant variables found is not as high as the user might need. However, we have no reason
to think the results would be biased. The intended use of our proposed framework is as a
means for an expert to analyze a relatively large corpus of news articles, especially in those
cases where manual analysis is not possible.
(Q10) J. Why are scale-free and random forward DAGs used?
(R10) As we mentioned in the paper, the use of synthetic data was motivated by the need to
assess the methods’ performance in a controlled scenario where the ground truth is known.
In order to achieve a thorough assessment, we tried out several different configurations
available in the tool used for building synthetic data (different number of nodes, time series
length, etc.). The tool used, TETRAD, offers two settings for the type of graph generated,
namely scale-free and random forward DAG. Since (1) we do not find any reason to prefer
any of those two options, (2) we do not have any reason to discard any of those, and (3) we
want to assess the methods as thoroughly as possible, we included both options in our
analysis.
(Q11) K. The conclusion section contains the summary of the paper in the first two
paragraphs which should not be there because this is repeating stuff again and again.
(R11) We removed the summary of the paper from the conclusion section and streamlined
the main findings with a focus on the three research questions.
(Q12) L. In future work, integration of this work with some visual tools is mentioned.
- For example?
(R12) As part of our future work, we plan to incorporate a human-in-the-loop approach into
the framework. We don't have any particular visualization in mind. Instead, we plan to
provide interactivity through a graphical/visual user interface. With a human-in-the-loop
approach, the user will have the opportunity to explore the data in more detail. By adjusting
the different parameters of the framework, the user will be able to vary the number and type
of variables and causal links displayed in the graph. For example, by adjusting the beta
value in the FDD term weighting technique, the user could select to see more descriptive or
more discriminative variables. Also, the user could change the value of K in the KMeans
algorithm or the time granularity of the time series (monthly, daily, weekly, etc.). This
interaction would allow the user a richer exploration of the data, variables, and causal links
that the framework is able to find. These interactions could be supported with classical visualizations. For example, we
could display the Elbow plot to the user to select the value for K. Similarly, we could allow
the user to pick the variables to display from a set of candidate variables that are displayed
using visualizations (frequency histograms of N-grams, word clouds of events, etc.). We
mention some of these options in the revised version of the paper.
Experimental design
Experimental design is aligned with the proposed methodology with enough details to
reproduce the results.
For example, the code, written in Python with sufficient comments, has been made publicly available:
https://cs.uns.edu.ar/~mmaisonnave/resources/causality/code/
Maisonnave et al 2020 - Event Detection.ipynb - Colaboratory (google.com)
Use case evaluation.ipynb - Colaboratory (google.com)
Validity of the findings
All underlying data have been provided with a well-stated discussion.
Economic Relevant News from The Guardian - Mendeley Data
Maisonnave et al 2020 - FDD paper.ipynb - Colaboratory (google.com)
Reviewer 2 (Anonymous)
Basic reporting
The paper presents a framework for extracting causal graphs from digital text media. It consists of eight steps that go from analyzing the text of news articles to filtering topic-relevant sentences, discovering events, and constructing time series in order to learn a causal structure among variables. The framework is a valuable contribution to the area, as it describes the whole process from raw text to an actual graph structure of events, providing a concrete technique for each step so that the framework can be effectively operationalized.
The paper is, in general, well written and clearly explained. There are some issues regarding organization that could be improved.
(Q13) - First, the placement of the section “Causal structure learning” is a little misleading, since such learning is just a part of the framework and its last step (not yet introduced at that point); it also only mentions existing state-of-the-art algorithms. I think it could be placed in the description of step 8 or later.
(R13) - The “Causal Structure Learning’’ section intends to provide a general overview of
causal structure learning techniques rather than present specific learning techniques for the
proposed framework. Hence, to avoid confusion, we renamed the section “An Overview of
Causal Structure Learning”.
(Q14) - Second, in each step of the framework, although a technique is chosen and described, it would be interesting to mention other alternatives that could take its place in the framework and to justify the selection.
(R14) - Following the reviewer's advice we mention some alternative techniques that could
be adopted in certain steps (e.g., other topic description schemes, term-weighting schemes
and clustering algorithms) and briefly justify the choice (e.g., simplicity, feasibility, etc.).
Other details:
(Q15) - in step 1 the use of n-grams is mentioned, but the examples only contain single terms (e.g., United, States). Why not 2-grams, for instance, or NER entities?
(R15) We use 1-gram, 2-gram, and 3-gram as term variables in the case study used to
illustrate the application of the proposed framework. Some examples of these n-grams are:
'war Iraq,' 'Bush administration,' and 'weapons mass destruction.' Although we didn’t use
other types of n-grams, such as named entities, the framework could be naturally extended
to account for them.
In the first submission of our manuscript we used the python tuple notation to represent
n-grams. For example, the 2-gram 'war Iraq' was represented as ('war', 'Iraq'). We
understand that this notation was confusing to the reviewer and might be confusing to other
potential readers, so we adopted a simpler representation of n-grams (e.g., “war Iraq” rather
than (‘war’, ‘Iraq’)).
(Q16) - the role of Beta within the framework in step 2 should be clarified: what is the rationale for its value in the framework? In this setting, are descriptive or discriminative terms preferred?
(R16) There is no ideal beta value that is useful for all users. Ideally, each user should be
able to explore different sets of variables obtained using different beta values. For our case
study, we use a previous result in which we studied the relevance of terms to topics
assigned by human subjects. Using those results, we selected 0.477 as the beta value,
which is the value that maximizes the correlation between FDD and the user estimates of
topic relevance for each term. However, for different use cases, the user/researcher could try
different values according to their needs. This is explained in the description of step 2 of the
proposed framework.
(Q17) - in step 3, does the event trigger consist of a single word? What is the effect of using a single word in this context of a huge volume of text?
(R17) The information contained in a single term is not enough to fully characterize an event.
Therefore, to represent (and compare events), we use the whole context surrounding the
event trigger (see Event Phrase Embeddings Representation (EPER) described in step 4 of
the proposed framework).
Defining the task of event detection as classifying event triggers (the most salient words
indicating the existence of events) is a practical way of framing the problem and is one of the
most commonly used approaches (see ACE 2005). In some approaches, there is an event
argument extraction phase that follows trigger detection. In our case, arguments are not
required but the event context is captured through the use of EPER.
(Q18) - step 5, the elbow method provides a suggestion for the number of clusters; is this done automatically? Are all clusters considered, or can some small ones be discarded?
(R18) Elbow is a visual strategy for selecting an appropriate value for K. The method
consists of plotting the explained variation as a function of the number of clusters and then
looking for the curve inflection point. In our case, we plotted the elbow curve for different
values of K, and we identified that the best number of clusters for our use case is
approximately K=1000 (inflection point). Each user or researcher that uses our framework
should ideally perform this analysis to find the best value of K, as it will not be the same for
all applications or use cases.
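As an aside, the elbow analysis could be sketched as follows (illustrative only; the use of scikit-learn and the variable names are our assumptions, not the code used in the paper):

import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

def elbow_curve(X, k_values):
    # Plot within-cluster sum of squares (inertia) against K and look for the inflection point.
    inertias = []
    for k in k_values:
        inertias.append(KMeans(n_clusters=k, random_state=0).fit(X).inertia_)
    plt.plot(list(k_values), inertias, marker="o")
    plt.xlabel("Number of clusters K")
    plt.ylabel("Inertia")
    plt.show()

# elbow_curve(event_embeddings, range(100, 2001, 100))  # event_embeddings: matrix of EPER vectors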
After selecting K=1000, we had to narrow down the number of clusters to only those relevant
to our use case. We use two criteria for choosing which clusters to include in the causal
graph: size and cohesivity. We kept only the twenty clusters with the highest values for both metrics and, from those, we manually inspected and selected the six that we found most interesting to include in the causal graph. Step 5 of our case study describes this
process.
(Q19) - in the case study, it said “sixteen variables”, should be 6?
(R19) Throughout the paper, we talk about two types of extracted variables, namely, event
and term variables. In the presented case study we selected six event variables and ten
term variables, which give us a total of sixteen variables. We added to the manuscript more
detail about what we mean when we mention 'sixteen variables.'
Experimental design
The experimentation with a framework such as the one proposed by the authors is difficult
because of the lack of ground truth. Therefore, the evaluation with the Iran case study, which was previously used as the illustrative example, is valuable. I think the data generated for the case study should also be considered a contribution of the work.
(Q20) - The evaluation of the causal learning technique with synthetic data is again a little out of the focus of the paper, as it only shows which learning algorithms work well with the generated synthetic data, which is not really the type of data used in the framework. Therefore, it looks more like a general evaluation of algorithms unrelated to the framework itself.
(R20) As mentioned in the manuscript, the lack of ground truth for causal discovery is a
limitation recurrently discussed in the literature (see (Li et al., 2019; Cheng et al., 2022)) and,
hence the tendency to use synthetic datasets to evaluate new causal discovery techniques.
In our approach, the experiments on synthetic data allowed us to have simplified scenarios
that enabled us to discard causal discovery techniques with poor performances. The
synthetic data consisted of time series generated by linear functions with uniformly sampled
coefficients, and even in those settings, some techniques performed no better than random.
Although we know that the results from synthetic data do not allow us to draw general
conclusions about the performance of the techniques on real-world data, they help to
discard methods. To assess the method's performance on real-world data, we perform
evaluations using user studies, as described in the manuscript.
Validity of the findings
The findings are interesting and demonstrate that the framework can be operationalized and
used to analyze real data. Some additional comments to help highlight the paper's contribution:
(Q21) - the practical implications of the framework can be discussed, probably in the
conclusions. As it can be a valuable tool for analysis, potential scenarios and uses can be
identified.
(R21) We added a discussion of the practical implications in the conclusions.
“The practical implications of the framework can be understood in terms of the analyses that
can be derived from a causal graph obtained empirically. In particular, the approach helps
identify new or unknown relationships associated with a topic or variable of interest that can
offer a new perspective to the problem. Constructing a causal graph could be one of the first
steps in building causal models. By moving from purely predictive models to causal
modeling, we are enriching the level of analysis that could be performed over the variables
and relationships of interest, allowing analysts not only to reason over existing data but to
calculate the effect of possible interventions or counterfactuals that did not occur in the
observed data. Such analysis is possible because causal modeling allows us to model the
generative process of the data, which leads to more robust and complete models. These
practical implications of the framework can be thought of for a public policy maker, a social
researcher, or anyone interested in evaluating cause and effect through large text corpora.”
(Q22) - another aspect to discuss is the level of user intervention needed for using the framework, the iterations required, and the setting of parameters; also, some guidelines for their application could be provided
(R22) Following the reviewer’s suggestion, the following paragraph was added to the
manuscript:
“It is worth mentioning that the proposed framework has several parameters that potential
users could adjust to tailor it to the specific user needs. In step 1, the method adopted to
filter sentences relevant to a topic is up to the user (e.g., by querying a search engine,
string-match filtering, etc.). In step 2, the user can configure the $\beta$ value according to
the specific needs. In step 5, the user should analyze different $K$ values for the KMeans
algorithm to choose the one that better suits the use case under analysis. Steps 1 through 5
provide the user with candidate variables to include in the causal graph. The user can
manually inspect them and include all of them or only a subset. Lastly, in step 7, the user
might want to choose the level of granularity for the time series (i.e., monthly, weekly, daily,
etc.).”
The above summary discusses the adjustable parameters of the framework. However, we do
not provide any guideline on which values to use for each parameter because that is highly
dependent on the use case and the user needs.
(Q23) - as the paper introduced some research questions at the beginning it would be
important to go back to them in the discussion/conclusions to summarize the findings
(R23) We included a paragraph in the conclusions to summarize the findings framed
according to our research questions.
" | Here is a paper. Please give your review comments after reading it. |
728 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Online Social Networks (OSNs) have been widely adopted as a means of news dissemination, event reporting, opinion expression and discussion. As a result, news and events are being constantly reported and discussed online through OSNs such as Twitter.</ns0:p><ns0:p>However, the variety and scale of all the information renders manual analysis extremely cumbersome, and therefore, creating a storyline for an event or news story is an effortintensive task. The main challenge pertains to the magnitude of data to be analyzed. To this end we propose a framework for ranking the resulting communities and their metadata on the basis of structural, contextual and evolutionary characteristics such as community centrality, textual entropy, persistence and stability. We apply the proposed framework on three Twitter datasets and demonstrate that the analysis that followed enables the extraction of new insights with respect to influential user accounts, topics of discussion and emerging trends. These insights could primarily assist the work of social and political analysis scientists and the work of journalists in their own story telling, but also highlight the limitations of existing analysis methods and pose new research questions. To our knowledge, this study is the first to investigate the ranking of dynamic communities. In addition, our findings suggest future work regarding the determination of the general context of the communities based on structure and evolutionary behavior alone.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>OSNs have become influential means of disseminating news, reporting events and posting ideas as well as a medium for opinion formation <ns0:ref type='bibr' target='#b47'>Topirceanu et al. (2016)</ns0:ref>. Such networks combined with advanced statistical tools are often seen as the best sources of real-time information about global phenomena <ns0:ref type='bibr' target='#b25'>Lazer et al. (2009)</ns0:ref>; <ns0:ref type='bibr' target='#b0'>Aiello et al. (2013)</ns0:ref>. Numerous studies of OSNs in relation to a variety of events have been conducted based on data from Twitter, a micro-blogging service that allows users to rapidly disseminate and receive information within the limit of 140 characters in a direct, grouped or global manner <ns0:ref type='bibr' target='#b50'>Williams et al. (2013)</ns0:ref>; <ns0:ref type='bibr' target='#b34'>Nikolov et al. (2015)</ns0:ref>. Twitter is currently one of the largest OSN platforms, with 313 million monthly active users 1 and as such the vast amounts of information shared through it cannot be accessed or made use of unless this information is somehow organized. Thus, appropriate means of filtering and sorting are necessary to support efficient browsing, influential user discovery, and searching and gaining an overall view of the fluctuating nature of online discussions. Existing information browsing facilities, such as simple text queries typically result in immense amounts of posts rendering the inquirer clueless with respect to the online topics of discussion. Since online social networks exhibit the property of community structure, one of the more implicit manners of grouping information and thus facilitating the browsing process is by detecting the communities formed within the network. Research on community detection on static networks can be found in the <ns0:ref type='bibr' target='#b24'>Lancichinetti and Fortunato (2009)</ns0:ref> survey, 1 According to company statistics: about.twitter.com/company (last accessed on August 2016).</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12420:1:0:REVIEW 4 Jan 2017)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science as well as in the works of <ns0:ref type='bibr' target='#b18'>Granell et al. (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b32'>Newman (2006)</ns0:ref>; <ns0:ref type='bibr' target='#b26'>Leskovec et al. (2010)</ns0:ref>; <ns0:ref type='bibr' target='#b37'>Papadopoulos et al. (2012)</ns0:ref>. Real-world OSNs, however, are definitely not static. The networks formed in services such as Twitter undergo major and rapid changes over time, which places them in the field of dynamic networks <ns0:ref type='bibr' target='#b5'>Asur et al. (2007)</ns0:ref>; <ns0:ref type='bibr'>Giatsoglou and Vakali (2013)</ns0:ref>; <ns0:ref type='bibr' target='#b36'>Palla et al. (2007)</ns0:ref>; <ns0:ref type='bibr' target='#b45'>Takaffoli et al. (2011)</ns0:ref>; <ns0:ref type='bibr' target='#b46'>Tantipathananandh et al. (2007)</ns0:ref>; Roy <ns0:ref type='bibr' target='#b42'>Chowdhury and Sukumar (2014)</ns0:ref>; <ns0:ref type='bibr' target='#b15'>Gauvin et al. (2014)</ns0:ref>; <ns0:ref type='bibr' target='#b19'>Greene et al. (2010)</ns0:ref>; <ns0:ref type='bibr' target='#b1'>Aktunc et al. (2015)</ns0:ref>; <ns0:ref type='bibr' target='#b2'>Albano et al. (2014)</ns0:ref>. 
These changes are manifested as users join in or leave one or more communities, by friends mentioning each other to attract attention or by new users referencing a total stranger. These trivial interactions seem to have a minor effect on the local structure of a social network. However, the network dynamics could lead to a non-trivial transformation of the entire community structure over time, and consequently create a need for reidentification. Particularly in OSNs, the immensely fast and unpredictably fluctuating topological structure of the resulting dynamic networks renders them an extremely complicated and challenging problem. Additionally, important questions related to the origin and spread of online messages posted within these networks, as well as the dynamics of interactions among online users and their corresponding communities remain unanswered.</ns0:p><ns0:p>To this end, we present a framework for analyzing and ranking the community structure, interaction and evolution in graphs. We also define a set of different evolution scenarios, which our method can successfully identify. A community here is essentially a subgraph which represents a set of interacting users as they tweet and mention one another. The edges of the subgraph represent the mentions made between users. A dynamic community is formed by a temporal array of the aforementioned communities with the condition that they share common users <ns0:ref type='bibr' target='#b9'>Cazabet and Amblard (2014)</ns0:ref>; <ns0:ref type='bibr' target='#b33'>Nguyen et al. (2014)</ns0:ref>.</ns0:p><ns0:p>Community evolution detection has been previously used to study the temporal structure of a network <ns0:ref type='bibr' target='#b15'>Gauvin et al. (2014)</ns0:ref>; <ns0:ref type='bibr' target='#b19'>Greene et al. (2010)</ns0:ref>; <ns0:ref type='bibr' target='#b36'>Palla et al. (2007)</ns0:ref>; <ns0:ref type='bibr' target='#b45'>Takaffoli et al. (2011)</ns0:ref>. However, even by establishing only the communities that sustain interest over time, the amount of communities and thus metadata that a user has to scan through is immense. In our previous work <ns0:ref type='bibr' target='#b23'>Konstantinidis et al. (2013)</ns0:ref>, we proposed an adaptive approach to discover communities at points in time of increasing interest, but also a size-based varying threshold for use in community evolution detection. Both were introduced in an attempt to discard trivial information and to implicitly reduce the available content. Although the amount of information was somewhat reduced, the extraction of information still remained a tedious task. Hence, to further facilitate the browsing of information that a user has to scan through in order to discover items of interest, we present a sorted version of the data similarly to a search engine. The sorting of the data is performed via the ranking of dynamic communities on the basis of seven distinct features which represent the notions of Time, Importance, Structure, Context and Integrity (TISCI). Nonetheless, the sorting of textual information and thus some kind of summarization is only a side-effect of the dynamic community ranking. 
The main impact lies in the identification and monitoring of persistent, consistent and diverse groups of people who are bound by a specific matter of discussion.</ns0:p><ns0:p>The closest work to dynamic community ranking was recently presented by <ns0:ref type='bibr' target='#b28'>Lu and Brelsford (2014)</ns0:ref> in a behavioral analysis case study and as such it is used here as a baseline for comparison purposes.</ns0:p><ns0:p>However, it should be mentioned that the ranking was not the primary aim of their research and that the communities were separately sorted by size thus missing the notions of importance, temporal stability and content diversity which are employed in the proposed framework. To the best of our knowledge this is the first time that structural, temporal, importance and contextual features are fused in a dynamic community ranking algorithm for a modern online social network application.</ns0:p><ns0:p>Although the overall problem is covered by the more general field of evolving network mining, it actually breaks down in many smaller issues that need to be faced. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> presents the decomposition of the problem into these issues, together with popular available methods which can be used to overcome them, along with the techniques employed by <ns0:ref type='bibr' target='#b28'>Lu and Brelsford (2014)</ns0:ref> and the ones proposed by the TISCI framework which is presented here.</ns0:p><ns0:p>In this work, we consider the user activity in the form of mentioning posts, the communities to which the users belong, and most importantly, the evolutionary and significance characteristics of these communities and use them in the ranking process. The proposed analysis is carried out in three steps. In the first step, the Twitter API is used to extract mentioning messages that contain interactions between users and a sequence of time periods is determined based on the level of activity. Then, for each of these periods, graph snapshots are created based on user interactions and the communities of highly interacting users are extracted using the Infomap community detection method <ns0:ref type='bibr' target='#b41'>Rosvall and Bergstrom (2008)</ns0:ref>. During the second step, the community evolution detection process inspects whether any communities have persisted in time over a long enough period (eight snapshots). In the last and featured step, the evolution is studied in order to rank the communities and their metadata (i.e. tweeted text, hashtags, URLs, etc) with respect to the communities' persistence, stability, centrality, size, diversity and integrity characteristics, thus creating dynamic community containers which provide structured access to information. The temporal (evolutional) and contextual features are also the main reason why a static community detection method was employed instead of a dynamic method which would aggregate the information such as the one proposed by <ns0:ref type='bibr' target='#b33'>Nguyen et al. (2014)</ns0:ref>.</ns0:p><ns0:p>In order to evaluate the proposed framework it is applied on three datasets extracted from Twitter to demonstrate that it can serve as a means of discovering newsworthy pieces of information and real-world incidents around topics of interest. 
The first dataset was collected by monitoring discussions around tweets containing vocabulary on the 2014 season of BBC's Sherlock, the second and third contain discussions on Greece's 2015 January and September elections, and the last one contains vocabulary regarding the 2012 presidential elections in the US <ns0:ref type='bibr' target='#b0'>Aiello et al. (2013)</ns0:ref>. Three community detection methods are also employed to demonstrate that Infomap is the preferable scheme.
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>Mining OSN interactions is a topic that has attracted considerable interest in recent years. Interaction analysis provides insights and solutions to a wide range of problems such as cyber-attacks Wei et al.</ns0:p></ns0:div>
<ns0:div><ns0:head>3/27</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12420:1:0:REVIEW 4 Jan 2017)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science These elements include an interactive layout of the communication network shared among the most retweeted users of a meme and detailed user-level metrics on activity volume, sentiment, inferred ideology, language, and communication channel choices.</ns0:p><ns0:p>TwitInfo is another system that provides network analysis and visualizations of Twitter data. Its content is collected by automatically identifying 'bursts' of tweets <ns0:ref type='bibr' target='#b29'>Marcus et al. (2011)</ns0:ref>. After calculating the top tweeted URLs in each burst, it plots each tweet on a map, colored according to sentiment. TwitInfo focuses on specific memes, identified by the researchers, and is thus limited in cases when arbitrary topics are of interest. Both of the aforementioned frameworks present an abundance of statistics for individual users but contrary to our method, they do not take into account the communities created by these users or the evolution of these communities. <ns0:ref type='bibr' target='#b19'>Greene et al. (2010)</ns0:ref> presented a method in which they use regular fortnight time intervals to sample a mobile phone network in a two month period and extract the communities created between the users of the network. Although the network selected is quite large and the method is also very fast; the system was created in order to be applied on a mobile phone network which renders it quite different to the networks studied in this paper. The collected data lack the topic of discussion and the content of the messages between users, so there is no way to discover the reason for which a community was transformed or the effect that the transformation really had on the topic of that community. Moreover, although the features of persistence and stability are mentioned in the paper, no effort was made in ranking the communities.</ns0:p><ns0:p>Nonetheless, due to its speed and scalability advantages, in this paper we decided to employ and extend their method by introducing a couple of optimization tweaks which render it suitable for large scale applications such as the analysis of an OSN.</ns0:p><ns0:p>Finding the optimal match between communities in different timeslots was proposed in <ns0:ref type='bibr' target='#b46'>Tantipathananandh et al. (2007)</ns0:ref>, where the dynamic community detection approach was framed as a graph-coloring problem.</ns0:p><ns0:p>Since the problem is NP-hard, the authors employed a heuristic technique that involved greedily matching pairs of node-sets in between timeslots, in descending order of similarity. Although this technique has shown to perform well on a number of small well-known social network datasets such as the Southern Women dataset, as the authors state in the paper, it does not support the identification of dynamic events such as community merging and splitting thus losing significant information which is of utmost importance in the proposed ranking framework which heavily relies on the content and context of the tweets. <ns0:ref type='bibr' target='#b45'>Takaffoli et al. (2011)</ns0:ref> considered the context of the Enron (250 email addresses in the last year from the original dataset) and DBLP (three conferences from 2004 to 2009) datasets for evaluation purposes, but similar to Greene et al., they also studied the context independently of community evolution. 
They focused on the changes in community evolution and the average topic continuation with respect to changes in the similarity threshold. The analyzed data presented valuable information as to how to select the similarity threshold but no insight as to important communities, their users or specific topics.</ns0:p><ns0:p>Another dynamic community detection method used to extract trends was introduced by Cazabet et al.</ns0:p><ns0:p>(2012). They created an evolving network of terms, which is an abstraction of the complete network, and then applied a dynamic community detection algorithm on this evolving network in order to discover emerging trends. Although the algorithm is very effective for locating trends, it does not consider the interactions made between various users or the evolution of the communities.</ns0:p><ns0:p>The work by <ns0:ref type='bibr' target='#b27'>Lin et al. (2008)</ns0:ref> bears some similarities in terms of motivation as they also want to gain insights into large-scale involving networks. They do this via extracting themes (concepts) and associating them with users and activities (e.g. commenting) and then try to study their evolution. However, they provide no way of ranking the extracted themes, which is the focus of our work.</ns0:p><ns0:p>One of the main problems in detecting influential communities in temporal networks is that most of the time they are populated with a large amount of outliers. While tackling the problem of dynamic network summarization, <ns0:ref type='bibr' target='#b38'>Qu et al. (2014)</ns0:ref> capture only the few most interesting nodes or edges over time, and they address the summarization problem by finding interestingness-driven diffusion processes. <ns0:ref type='bibr' target='#b13'>Ferlez et al. (2008)</ns0:ref> proposed TimeFall which performs time segmentation using cut-points, community detection and community matching across segments. Despite the fact that they do not rank the</ns0:p></ns0:div>
<ns0:div><ns0:head>4/27</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12420:1:0:REVIEW 4 Jan 2017)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science communities, the proposed scheme could be employed to extract and detect evolving communities which would in turn be ranked by the TISCI framework.</ns0:p><ns0:p>In <ns0:ref type='bibr' target='#b31'>Mucha et al. (2010)</ns0:ref> the concept of multiplex networks is introduced via the extension of the popular modularity function and by adapting its implicit null model to fit a layered network. Here, each layer is represented with a slice. Each slice has an adjacency matrix describing connections between nodes which belong to the previously considered slice. Essentially they perform community detection on a network of networks. Although these frameworks technically require a network to be node-aligned (all nodes appear in all layers/timeslots), they have been used explicitly to consider relatively small networks in which that is not the case by using zero-padding. However, this creates a huge overhead in OSNs since the majority of users does not appear in every timeslot. In addition, <ns0:ref type='bibr' target='#b31'>Mucha et al. (2010)</ns0:ref> do not provide any method for the ranking of the extracted communities, which is the focus of this paper.</ns0:p><ns0:p>A method for ranking communities, specifically quasi-cliques, was proposed by <ns0:ref type='bibr' target='#b51'>Xiao et al. (2007)</ns0:ref> in which they rank the respective cliques in respect to their betweeness centrality. However, they also do not take temporal measures into consideration and apply their method on a call graph from a telecom carrier and a collaboration network of co-authors thus excluding the context of the messages.</ns0:p><ns0:p>The most recent work regarding the extraction of information using evolving communities was presented in <ns0:ref type='bibr' target='#b28'>Lu and Brelsford (2014)</ns0:ref> which studied the behavior of people discussing the 2011 Japanese earthquake and Tsunami. Although they did rank the static communities by size, the evolution regarded only the before and after periods, so no actual dynamic community ranking was performed.</ns0:p></ns0:div>
<ns0:div><ns0:head>OSN ANALYSIS FRAMEWORK</ns0:head><ns0:p>OSN applications comprise a large number of users that can be associated to each other through numerous types of interaction. Graphs provide an elegant representation of data, containing the users as their vertices and their interactions (e.g. mentions, citations) as edges. Edges can be of different types, such as simple, weighted, directed and multiway (i.e. connecting more than two entities) depending on the network creation process.</ns0:p></ns0:div>
<ns0:div><ns0:head>Notation</ns0:head><ns0:p>In this paper, we employ the standard graph notation G = (V, E, w), where G stands for the whole network;</ns0:p><ns0:p>V stands for the set of all vertices and E for the set of all edges. In particular, we use lowercase letters (x)</ns0:p><ns0:p>to represent scalars, bold lowercase letters (x) to represent vectors, and uppercase letters (X) to represent matrices. A subscript n on a variable (X n ) indicates the value of that variable at discrete time n. We use a snapshot graph to model interactions at a discrete time interval n. In G n , each node v i ∈ V n represents a user and each edge e i j ∈ E n is associated with a directed weight w i j corresponding to the frequency of mentions between v i and v j . The interaction history is represented by a sequence of graph snapshots</ns0:p><ns0:formula xml:id='formula_0'>G 1 , G 2 , ..., G n , ... . A community C i,n which belongs to the set of communities C = {C 1,1 , ...,C i,n , ...} is</ns0:formula><ns0:p>defined here as a subgraph comprising a subset V comm ⊆ V of nodes such that connections between the nodes are denser than connections with the rest of the network. A dynamic community T i,n which belongs to the set T = {T 1,1 , ..., T i,n , ..., T i,N−1 } of time-evolving communities, is defined as a series of subgraphs that consist of subsets of all the nodes in V and the set of interactions among the nodes in these subsets that occur within a set of N timeslots.</ns0:p></ns0:div>
<ns0:div><ns0:head>Framework Description</ns0:head><ns0:p>This section describes the proposed framework in three parts: community detection, community evolution detection and ranking. Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref> illustrates all the steps of the proposed framework.</ns0:p></ns0:div>
<ns0:div><ns0:head>Interaction Data Discretization and Graph Creation</ns0:head><ns0:p>The tweet timestamp and a corresponding sampling frequency are used to group the interactions into timeslots. The selection of time granularity (inverse sampling frequency) for each network is based on the change in activity. The aim is to create clear sequential graph snapshots of the network as presented in Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref>. The sampling time should be meaningful on its own (hours, days) but individually for each case as well. For example, for the Greek election and Sherlock series datasets, a 24-hour period was selected to detect day-by-day changes to the flourishing discussions during the anticipation period of the corresponding events (election day, episode broadcasting) and the post-event reactions. The 24-hour period in conjunction with the deep search performed during the evolution detection process, allows the framework to discover persistent communities over a one week time-frame.</ns0:p></ns0:div>
<ns0:div><ns0:head>5/27</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12420:1:0:REVIEW 4 Jan 2017)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Every node in the resulting graphs represents a Twitter user who communicated tweets in the datasets by mentioning or being mentioned. A mention, and thus a directed edge between two users is formed when one of the two creates a reference to the other in his/her posted content via use of the @ symbol.</ns0:p><ns0:p>The number of mentions between them forms the edge weight.</ns0:p></ns0:div>
<ns0:div><ns0:head>Community Detection</ns0:head><ns0:p>Given a social network, a community can be defined as a subgraph comprising a set of users that are typically associated through a common element of interest. This element can be as varied as a topic, a real-world person, a place, an event, an activity or a cause <ns0:ref type='bibr' target='#b37'>Papadopoulos et al. (2012)</ns0:ref>. We expect to discover such communities by analyzing mention networks on Twitter. There is considerable work on the topic and a host of different community detection approaches appear in literature Fortunato (2010); <ns0:ref type='bibr' target='#b37'>Papadopoulos et al. (2012)</ns0:ref>. Due to the nature of Twitter mention networks, notably their sparsity and size, in this paper we apply the Infomap method <ns0:ref type='bibr' target='#b41'>Rosvall and Bergstrom (2008)</ns0:ref> It should be noted that the focus of the framework is to rank the dynamic communities independently from the method used for their detection and not to perform an exhaustive comparison of algorithms able to process dynamic networks. Nonetheless, Figure <ns0:ref type='figure'>3</ns0:ref> as well as Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref> provide support in favor of Infomap in comparison to the Louvain and Newman methods, as Infomap detects the most communities and significantly more from the middle region. Figures <ns0:ref type='figure'>3 and 4c</ns0:ref>) and d) show the performance of the Louvain and the modularity optimization technique. They both seem to detect either very large or relatively small communities, which are out of the middle section. The middle section poses the most interest for this study as it contains reasonably populated groups for the purposes of a discussion. In the future, it may be interesting to thoroughly investigate the sensitivity of results with respect to the employed community detection method.</ns0:p></ns0:div>
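For concreteness, applying Infomap to a single weighted snapshot could be sketched as follows (a minimal example using python-igraph, which is our tooling assumption; the paper does not prescribe an implementation):

import igraph as ig

def detect_communities(snapshot_edges):
    # snapshot_edges: list of (source, target, weight) tuples for one timeslot.
    g = ig.Graph.TupleList(snapshot_edges, directed=True, weights=True)
    clustering = g.community_infomap(edge_weights="weight")
    return [[g.vs[i]["name"] for i in community] for community in clustering]

communities = detect_communities([("alice", "bob", 3), ("bob", "carol", 1), ("dave", "erin", 2)])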
<ns0:div><ns0:head>Community Evolution Detection</ns0:head><ns0:p>The problem of finding communities in static graphs has been addressed by researchers for several years Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science metadata such as popular hashtags and URLs, influential users and the posted text. A dynamic community is represented by the timeline of the communities of users that it comprises. The difference between sets C and T is that the former contains every static community in every available timeslot, whereas the latter contains sequences of communities that evolve through time. In both C i,n and T i,n i is a counter of communities and dynamic communities respectively, while particularly in T i,n n represents the timeslot of birth of the dynamic community. Figure <ns0:ref type='figure'>5</ns0:ref> presents an example of the most frequent conditions that communities might experience: birth, death, irregular occurrences, merging and splitting, as well as growth and decay that register when a significant percentage of the community population is affected.</ns0:p><ns0:p>In the example of Figure <ns0:ref type='figure'>5</ns0:ref>, the behavior of six potential dynamic communities is studied over a period of three timeslots (n − 1, n, n + 1). Dynamic community T 1,n−x originated from a previous timeslot n − x which then split up into a fork formation in timeslot n. In T 1,n−x , x is an integer valued variable representing the timeslot delay which can acquire a maximum value of</ns0:p><ns0:formula xml:id='formula_1'>D (1 ≤ x ≤ D ≤ N). The split</ns0:formula><ns0:p>indicates that for some reason the members of C 1,n−1 broke up into two separate smaller groups in timeslot n, which also explains the change in size. In our case it could be that a large group of users engaged in conversation during n − 1 but split up and are not cross mentioning each other in n and n + 1. Moreover, although the second group (C 2,n ) instigated a new dynamic community T 7,n , it continued its decaying activity for one more timeslot and then dispersed (community death). Nonetheless, both are obviously important to the evolution of the community and the separation poses a great interest from a content point of view as to the ongoing user discussion and as to why they actually split up. Both can be answered by using the metadata stored in the container corresponding to the dynamic community. A dual example is that of T 2,n−1 and T 3,n−x in which two communities started up as weak and small but evolved through a merger into one very strong, large community that continues on to n + 2. In this case it could be that two different groups of people witnessed the same event and began conversing on it separately. As time went by, connections were made between the two groups and in the n timeslot they finally merged into one.</ns0:p><ns0:p>Actually, the community continued to grow as shown on the n + 1 timeslot. T 4,n−1 and T n/a were both created (community birth) in n − 1 and both disappeared in n differentiating in that T 4,n−1 reappears in n + 1 (irregular occurrence) while T n/a does not and thus a dynamic community is not registered. 
This is the main reason why a timeslot delay is introduced in the system as will be described later; a search strictly between communities of consecutive timeslots would result in missing such re-occurrences.</ns0:p><ns0:p>To study the various lifecycle stages of a community, the main challenge pertains to the computational process used to identify and follow the evolution of any given community. On the one hand, it should be able to effectively map every community to its corresponding timeline, and on the other hand it should be as less of a computational burden as possible to be applicable to massive networks. However, community matching techniques presume a zero-to-one or one-to-one mapping between users in two communities, thus not supporting the identification of the above conditions in the lifecycle of a dynamic community. In order to overcome this predicament, we employ a heuristic by <ns0:ref type='bibr' target='#b19'>Greene et al. (2010)</ns0:ref> relying on a user-defined threshold to determine the matching between communities across different timeslots.</ns0:p><ns0:p>The algorithm steps are presented in more detail as follows. Initially, the first set of communities {C 11 ,C 21 , ...,C i1 , ...,C k1 } (i.e. the first snapshot) is extracted by applying the Infomap community detection algorithm <ns0:ref type='bibr' target='#b41'>Rosvall and Bergstrom (2008)</ns0:ref> to the G 1 graph. A dynamic community marker T i,1 (where</ns0:p><ns0:formula xml:id='formula_2'>i = [1, k]</ns0:formula><ns0:p>) is assigned to each community from this snapshot. Next, the second set of communities is extracted from the G 2 graph and a matching process is performed between all the community combinations from the two consecutive snapshots in order to determine any possible evolution from the first snapshot to the next. The dynamic communities T (1,2,...,k),1 are then updated based on that evolution. For example, if C a1 does not appear in the second snapshot, T a,1 is not updated; a split is registered if the community appears twice in the new timeslot, and a merger marker is assigned if two or more communities seem to have merged into one.</ns0:p><ns0:p>One of the problems community evolution detection processes face is the lack of consistency in the users' behavior. The lack of consistent and sequential behavior results in communities being labeled dead when in fact they could just be delayed. In order to avoid potential false positives of community deaths, a trail of several snapshots is retained; meaning that the search covers a wider range of timeslots in total instead of just the immediate previous one. The length of the trail depends on the selected granularity of the discretization process, in a manner that a meaningful period is covered (i.e. if the sampling is performed on a daily basis, the trail will consist of seven timeslots in order to provide a week's depth).</ns0:p><ns0:p>Hence, if the evolution of a community is not detected in the immediate to last timeslot, the system queries 10/27</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12420:1:0:REVIEW 4 Jan 2017)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Figure <ns0:ref type='figure'>5</ns0:ref>. A study of six potential dynamic communities tracked over three timeslots. The seven most frequent conditions that communities might experience are birth (concentric circles), death (no exiting arrow), irregular occurrences (skipped timeslot), merging and splitting, as well as growth and decay.</ns0:p><ns0:p>the D previous ones in a 'last come, first served' order. This means that the search progresses through the trail until a match is found, in which case the search is terminated and the dynamic community is observed to have skipped a timeslot. If no matching community is detected, the community is considered dead. The proof of necessity for such a delay is shown on table 2. The evolution detection procedure is repeated until all graphs have been processed. It should be noted that the decision for the delay being set to only a few timeslots instead for the whole trail, was made by considering the computational burden of the system in conjunction to the fact that people lose interest. If the users comprising the community do not engage in the discussion for a significant period of time, it would be safe to say, that the community has been dismantled.</ns0:p><ns0:p>In order to determine the matching between communities, the Jaccard coefficient is employed Jaccard <ns0:ref type='bibr'>(1912)</ns0:ref>. Following comparative preliminary results between the Jaccard and the Sorensen index (dice coefficient) <ns0:ref type='bibr' target='#b44'>Sørensen (1948)</ns0:ref>, the former was selected due to its efficiency. In fact, the Jaccard similarity is still one of the most popular similarity measures in community matching <ns0:ref type='bibr' target='#b52'>Yang et al. (2013)</ns0:ref>; <ns0:ref type='bibr' target='#b3'>Alvarez et al. (2015)</ns0:ref>. The similarity between a pair of consecutive communities C in and C (i(n−td))) is calculated by use of the following formula, where timeslot delay td ∈ [1, 7]:</ns0:p><ns0:formula xml:id='formula_3'>J C in ,C i(n−td) = C in ∩C i(n−td) C in ∪C i(n−td) (1)</ns0:formula><ns0:p>If the similarity exceeds a matching threshold φ , the pair is matched and C in is added to the timeline of the T i,n dynamic community. As in <ns0:ref type='bibr' target='#b19'>Greene et al. (2010)</ns0:ref>; <ns0:ref type='bibr' target='#b45'>Takaffoli et al. (2011)</ns0:ref>; <ns0:ref type='bibr' target='#b28'>Lu and Brelsford (2014)</ns0:ref>, the similarity threshold φ is a constant threshold. Following a more extensive analysis on the impact of the threshold selection, Greene suggested the use of 0.3 which concurs with our own results. Figure <ns0:ref type='figure'>4</ns0:ref> illustrates that 0.2 allows the creation of many strings of small communities, whereas 0.5 suppresses a lot of communities from the middle region which holds most of the information required for a fuller investigation.</ns0:p></ns0:div>
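To illustrate Eq. (1) and the threshold-based matching with timeslot delay, a small sketch (our own naming; not the authors' implementation) is:

def jaccard(a, b):
    a, b = set(a), set(b)
    return len(a & b) / len(a | b) if a | b else 0.0

def match_community(c_in, previous_slots, phi=0.3):
    # previous_slots: list of community lists, most recent timeslot first (up to D slots deep).
    for delay, communities in enumerate(previous_slots, start=1):
        for c_prev in communities:
            if jaccard(c_in, c_prev) >= phi:
                return c_prev, delay        # match found: the dynamic community continues
    return None, None                       # no match within the trail: community birth or death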
<ns0:div><ns0:head>Dynamic Community Ranking using TISCI</ns0:head><ns0:p>Although the evolution detection algorithm is efficient enough to identify which communities are resilient to the passing of time, it does not provide a measure as to which communities are worth looking into and which are not. A solution to this shortcoming is presented here via the TISCI score that ranks the evolving communities on the basis of seven distinct features which represent the notions of Time, Importance, Structure, Context and Integrity. Specifically, we employ persistence and stability which are temporal measures, normalized-community-centrality which is a relational importance measure, community-size which is a structural measure, mean-textual-entropy and unique-URL average which are contextual measures, and an integrity coefficient inspired by the 'ship of Theseus' paradox.</ns0:p><ns0:p>Persistence is defined as the characteristic of a dynamic community to make an appearance in as many timeslots as possible (i.e. overall appearances / total number of timeslots), and stability as the ability to appear in as many consecutive timeslots as possible disregarding the total number of appearances (i.e.</ns0:p><ns0:p>overall consecutive appearances / total number of timeslots).</ns0:p><ns0:formula xml:id='formula_4'>Persistence(T x,y ) = ∑ m n=1 δ [a[n]] m (2)</ns0:formula><ns0:formula xml:id='formula_5'>Stability(T x,y ) = ∑ m n=2 δ [a[n] − a[n − 1]] m − 1 (3)</ns0:formula><ns0:p>where δ is the impulse function, m represents the total number of timeslots, x, y are the labels of the oldest community in T x,y and</ns0:p><ns0:formula xml:id='formula_6'>a[n] = 1 ∀C i,n ∈ T x,y 0 otherwise (4)</ns0:formula><ns0:p>We expect consistent dynamic communities to be both persistent and stable as the combination of these features shows either a continuous interest in a subject or its evolution to something new. As such we combine the two features into one via multiplication. Figure <ns0:ref type='figure'>4</ns0:ref> shows how stable and how persistent the communities are with respect to the actual number of persistent users. Moreover, it shows the number of people who persist in time within a community in respect with the community's persistence and stability for the Infomap as well as for the Louvain and Newman methods.</ns0:p><ns0:p>Google's PageRank Brin and Page (1998) is used as the centrality feature which measures the number and quality of links to a community in order to determine an estimate of how important that community is.</ns0:p><ns0:p>The same measure is also applied to the users from every dynamic community, ranking them according to their own centrality and thus providing the most influential users per timeslot. There is however a difference between the two in how the final centrality values are extracted, since different timeslots create different populations between the static graphs. Although this does not affect the users' centralities as they are compared to each other within the timeslot, it does however influence the communities' centrality measures due to the difference in populations. In order to compare centrality measures from different timeslots, we employ the normalized PageRank solution as proposed in <ns0:ref type='bibr' target='#b6'>Berberich et al. (2007)</ns0:ref>. 
The Mean Centrality as it is used here is defined as:</ns0:p><ns0:formula xml:id='formula_7'>MC(T_{x,y}) = \frac{\sum_{n=1}^{m} \sum_{i=1}^{k} normPR(C_{i,n} \in T_{x,y})}{\sum_{n=1}^{m} \sum_{i=1}^{k} a[i,n]} \quad (5)</ns0:formula><ns0:p>where k is the number of communities per timeslot.</ns0:p><ns0:p>One of the measures that provides a sense of popularity is virality, which in the case of Twitter datasets translates into multiple bursts of mentions in a short time. This can happen either due to an event or because an influential user (e.g. a major newspaper account) posted something of interest. On the other hand, Lu and Brelsford used the lack of increased community size as an indication of disruption in the telecommunication services. For this reason we consider the increased size of a dynamic community as a feature that requires attention. Here, the feature of size is defined as the average size of the static communities that comprise it.</ns0:p>
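As a small illustration of the temporal features (our sketch, following the verbal definitions of persistence and stability given above rather than the authors' code):

def persistence(appearances):
    # appearances: a[n] = 1 if the dynamic community appears in timeslot n, else 0.
    return sum(appearances) / len(appearances)

def stability(appearances):
    m = len(appearances)
    consecutive = sum(1 for n in range(1, m) if appearances[n] == 1 and appearances[n - 1] == 1)
    return consecutive / (m - 1)

a = [1, 1, 0, 1, 1, 1, 0, 0]                # example appearance vector over m = 8 timeslots
print(persistence(a), stability(a))         # 0.625 and roughly 0.43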
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The integrity measure employed is an extension of the ship of Theseus coefficient. The ship of Theseus, also known as Theseus's paradox, is a thought experiment that raises the question of whether an object which has had all its components replaced remains fundamentally the same object <ns0:ref type='bibr' target='#b39'>Rea (1995)</ns0:ref>. We apply this theory to find out the transformation sustained by the dynamic community by calculating the number of consistent nodes within the communities which represents the integrity and consistency of the dynamic community.</ns0:p><ns0:p>Twitter datasets differ quite a lot to other online social networks since the user is restricted to 140 characters of text. Given this restriction, we assume that it is safe to say that there is a connection between the entropy of tweeted words used in a community (discarding URLs, mentions, hashtags and stopwords), the effort the users put into posting those tweets, and the diversity of its content. Whether there is a discussion between the users or a presentation of different events, high textual entropy implies a broader range of information and therefore more useful results. An added bonus to this feature is that spam and empty tweets containing only hashtags or mentions, as is the case in URL attention seeking tweets, rank even lower than communities containing normal retweets. For the ranking we employ the mean textual diversity of the dynamic community. The textual diversity in a community C i is measured by Shannon's entropy H of the text resulting from the tweets that appear in that community as follows:</ns0:p><ns0:formula xml:id='formula_9'>H(C i ) = k ∑ m=1 −p(W m )log 2 (p(W m ))<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>where p(W m ) is the probability of a word W m appearing in a community containing M words and is computed as follows:</ns0:p><ns0:formula xml:id='formula_10'>p(W m ) = ( f req(W m ))/(M)<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>The second contextual feature to be employed regards the references cited by the users via URLs in order for them to point out something they consider important or interesting. In fact, the URLs hold a lot more information than the single tweet and as such we also consider it useful for discovering content-rich communities. The ranking in this case is performed by simply computing the average of unique URLs in each dynamic community over time.</ns0:p><ns0:p>Since we have an array of six features, we have to combine them into a single value in order to rank the dynamic communities. The final ranking measure for every dynamic community is extracted by employing the Reciprocal Rank Fusion (RRF) <ns0:ref type='bibr' target='#b11'>Cormack et al. (2009)</ns0:ref> method; a preference aggregation method which essentially provides the sum of the reciprocals ranks of all extracted aforementioned features Q:</ns0:p><ns0:formula xml:id='formula_11'>RRF = |Q| ∑ q=1 1 α + rank q (8)</ns0:formula><ns0:p>where α is a constant which is used to mitigate the impact of high rankings by outlier systems. <ns0:ref type='bibr' target='#b11'>Cormack et al. 
(2009)</ns0:ref> set the constant to 60 according to their needs although the choice, as they state, is not critical and thus we prefer a lower score equal to the number of dynamic communities to be considered.</ns0:p><ns0:p>Despite its simplicity, the RRF has proven to perform better than many other methods such as the Condorset Fuse or the well established <ns0:ref type='bibr'>CombMNZ Cormack et al. (2009)</ns0:ref> and is considered one of the best baseline consensus methods <ns0:ref type='bibr' target='#b48'>Volkovs et al. (2012)</ns0:ref>. In addition, it requires no special voting algorithm or global information and the ranks may be computed and summed one system at a time, thus avoiding the necessity of keeping all the rankings in memory. However, this preference aggregation method is not without flaws, as it could potentially hide correlations between feature ranks. Although, in other applications this could pose a problem, as shown in Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref> the lack of correlation between the features' ranks encourages us to employ this simple but useful method. The correlation was measured using the Spearman rank-order correlation coefficient.</ns0:p></ns0:div>
<ns0:div><ns0:head>13/27</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12420:1:0:REVIEW 4 Jan 2017)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div>
<ns0:div><ns0:head>Complexity and scalability</ns0:head><ns0:p>When it comes to temporal interaction analysis, scalability is always an issue. The cost of the TISCI</ns0:p><ns0:formula xml:id='formula_12'>framework is O(m + k 2 + c • w)</ns0:formula><ns0:p>where m are the number of edges for the Infomap method, k is the number of communities of each row in the evolution detection scheme, and c and w are the numbers of dynamic communities and words in each community in the ranking stage. Although currently, the framework would not scale well due to the squared complexity of the evolution detection process, future work will involve the use of Local Sensitivity Hashing to speed up the operation.</ns0:p></ns0:div>
<ns0:div><ns0:head>EXPERIMENTAL STUDY</ns0:head><ns0:p>Despite the proliferation of dynamic community detection methods, there is still a lack of benchmark ground truth datasets that could be used for the framework's testing purposes. Instead, the results presented in this paper were attained by applying our framework on three Twitter interaction network datasets as described in the following section.</ns0:p></ns0:div>
<ns0:div><ns0:head>Datasets</ns0:head><ns0:p>The tweets related to the US election of 2012 were collected using a set of filter keywords and hashtags chosen by experts <ns0:ref type='bibr' target='#b0'>Aiello et al. (2013)</ns0:ref>. Keywords and hashtags in Greek and English containing all the Greek party names as well as their leaders' were used for the Greek elections of 2015. Last, variations of the names 'Sherlock' and 'Watson' were used for the Sherlock series dataset. We chose these three datasets as they all share a number of useful features but also have significant differences as one may deduce from the data description and the basic network characteristics which are presented in Table <ns0:ref type='table' target='#tab_4'>4</ns0:ref>.</ns0:p><ns0:p>On the one hand all of the datasets regard major events that generate a large volume of communication and are mostly dominated by English-language contributors, making analysis simpler for the majority of researchers. On the other hand, the US election (including voting, vote counting, speculation about results and subsequent analysis) lasted two days, whereas the Greek elections and the Sherlock frenzy lasted 10 days and 2 weeks respectively. Similarly, in an event focused discussion such as the US election, almost all the focus is either on specific events/announcements or specific people associated with the events, whereas topics in a general discussion regarding a fictitious character tend to be more spread out over time and to overlap with other topics while becoming more active when specific events take place. These differences help us understand the ways that social networks are used in very different circumstances as well as that the variation in temporal structure depends heavily on the query itself.</ns0:p></ns0:div>
<ns0:div><ns0:head>Sherlock Holmes dataset</ns0:head><ns0:p>This real-world dataset is a collection of mentioning posts acquired by a crawler that collected tweets containing keywords and hashtags which are variations of the names 'Sherlock' and 'Watson'. The crawler ran over a period of 2 weeks; from the 31st of December 2013 to the 14th of January 2014, extracting messages containing mentions. The evolution detection process discovered 9,211 dynamic communities comprising 178,361 snapshot communities. The information we sought pertained to the various communities created between people who interact via mentions and are interested in the BBC's Sherlock series, people who are influenced by these communities and any news that might give us more insight on this world wide phenomenon.</ns0:p><ns0:p>The dynamic community structure resulting from this dataset is totally different from the two election ones in more ways than one. Initially, there are many diverse and smaller communities which persist over</ns0:p></ns0:div>
<ns0:div><ns0:head>14/27</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12420:1:0:REVIEW 4 Jan 2017)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science time that seem to be fairly detached from the rest. This means that the interest here is widely spread and the groups of people discussing the imminent or last episode of the series are smaller indicating that in most cases we are looking into friends chatting online. Nonetheless, the user can still acquire a variety of information such as the general feeling of the viewers, several spoilers regarding each episode, a reminder about when each episode starts and on what day, and also statistics on the viewership of the series. The latter was extracted from one of the largest communities which informed us that not only was the first episode viewed by an average of 9.2 million viewers but also that Chinese fans relish the series and especially the love theory between the two main characters 2 . Other typical topics of conversation include the anticipation of the series, commentary regarding the quality of the episode, commentary regarding the character and many more typical lines of discussion. A short list of findings is presented on Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref>. What is interesting enough is that conversations and opinions regarding the bad habits or the sexuality of the character are pretty high in the rankings and that there are plenty of non-English speaking communities.</ns0:p><ns0:p>One last remark concerns the DyCCo (ranked #9) which contains a plethora of shopping labels. Usually, consecutive shop labels are an indication of spam as they are usually consistent, stable and contain URLs with the sole purpose to lure a potential customer. However, in this case the shopping labeled communities contain references to books, movies and the original series of Sherlock Holmes sold by major retailers, thus classifying the DyCCo as one that a Sherlock enthusiast would actually be interested in.</ns0:p></ns0:div>
<ns0:div><ns0:head>US Election dataset</ns0:head><ns0:p>The United States presidential election of 2012 was held on November the 6th in 2012. The respective dataset was collected by using a set of filter keywords and hashtags chosen by experts <ns0:ref type='bibr' target='#b0'>Aiello et al. (2013)</ns0:ref>.</ns0:p><ns0:p>Despite retrieving tweets for specific keywords only, the data collected was still too large for a single user to organize and index in order to extract useful information.</ns0:p><ns0:p>Here, the granularity selection of three hours was made based on the fact that there is a discrete but not wild change in activity. By employing a coarser granularity instead of an hourly one serves to reduce the time zone effect. Moreover, since all four political debates in 2012 lasted for an hour and a half, then twice the span of a debate seemed like a reasonable selection for Twitter users to engage in a political discussion.</ns0:p><ns0:p>Studying the top 20 DyCCos provides a variety of stories which does not only contain mainstream Table <ns0:ref type='table'>6</ns0:ref>. Key findings from the US election dataset news but other smaller pieces of information that journalists seek out. The first one for example, which is also the most heavily populated, regards a movement of motivating women into voicing their opinion by urging them to post photos of their 'best voting looks' but also pleading for Tony Rocha (radio producer)</ns0:p><ns0:p>to use his influence for one of the nominees. The first static community alone includes 2,774 people some of which are @KaliHawk, @lenadunham, @AmmaAsante, @marieclaire and others.</ns0:p><ns0:p>Overall, during election day people are mostly trying to collect votes for their candidate of choice by either trying to inspire themselves or ask from a celebrity to do so; whereas after the fact, everyone is either posting victory or hate posts, or commenting on what the election will bring the following day, whether or not the election was falsified/rigged. At all times people are referencing a number of journalists, bloggers and politicians (Herman Cain, Pat Dollard) as well as various newspapers, TV channels and famous blogs. Other examples include comments on racism stopping due to the reelection, a book on President Obama, posts by a number of parishes and many more which unfortunately cannot be illustrated in this manuscript due to space restrictions. However, a short list of non-mainstream findings is presented on Table <ns0:ref type='table'>6</ns0:ref>.</ns0:p><ns0:p>One of the main anticipated characteristics of this particular set is that the news media, political analysts, politicians, even celebrities are heavily mentioned in the event of an election.</ns0:p></ns0:div>
<ns0:div><ns0:head>Greek Election datasets</ns0:head><ns0:p>The two Greek elections of 2015 were held on January the 25th and on September the 20th and the collection of corresponding tweets was made using Greek and English keywords, hashtags and user accounts of all major running parties and their leaders.</ns0:p><ns0:p>Although participation in the second election was almost cut in half with regard to the first, there are a few similarities in the dynamic communities that are of interest. The top 20 DyCCos of both datasets surfaced groups of an extremely wide and diverse background. Groups from Turkey, Italy, Spain and England anticipated Syriza's (center-left anti-austerity party) potential wins in both elections and joined in to comment on all matters such as the Grexit, the future of the Greek people, how the Euro hit an 11 year low following the victory of the anti-austerity party but also that the markets managed to shake off the initial tremors created by it. Conspiracy tweets were also posted within a community mentioning operation Gladio; a NATO stay-behind operation during the Cold War that sought to ensure Communism was never able to gain a foothold in Europe, which then evolved into sending warnings to the Syriza party as Greece was supposedly being framed as an emerging hub for terrorists. A short list of non-mainstream findings is presented on Table <ns0:ref type='table' target='#tab_6'>7</ns0:ref>.</ns0:p><ns0:p>Although there were a lot of interesting international pieces of commentary such as the above, the framework did not miss the local communities where a strong presence was achieved by the far left supporters, the Independent Greeks supporters, and a slightly milder presence from the right wing and extreme-right wing supporters all of whom were rooting for their party and pointing out the mistakes of Manuscript to be reviewed</ns0:p><ns0:p>Computer Science the opposing ones.</ns0:p><ns0:p>One of the similarities between the two election datasets which is rather impressive lies in the almost identical structure of the two evolutions as shown by the respective heatmaps in the Evaluation section. It is also worth mentioning that many influential users (e.g. @avgerinosx, @freedybruna) and politicians (e.g. @panoskammenos, @niknikolopoulos) who were extremely active in the first election, were also present in the second one.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data Processing</ns0:head><ns0:p>Prior to the framework application, the network data is preprocessed as follows. Initially, all interaction data is filtered by discarding any corrupt messages, tweets which do not contain any interaction information and all self-loops (self-mentions) since they most frequently correspond to accounts who are trying to manipulate their influence score on Twitter. The filtered data is then sampled resulting in a sequence of activity-based snapshots. Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref> displays the mentioning activity of the four twitter networks.</ns0:p><ns0:p>The process which puts the greatest computational burden on the framework involves the evolution detection. In order to speed up the searching operation, instead of using strings, the users' names are hashed resulting in communities comprising sets of hashes. Moreover, we discard all the users' hashes which appear strictly in a single timeslot since they are considered temporal outliers. However, they are not discarded from the metadata container as they may provide useful information. Two additional acceleration measures are: the discarding of communities with a population of less than three users, and, similarly to the scheme proposed by <ns0:ref type='bibr' target='#b4'>Arasu et al. (2006)</ns0:ref>, a size comparison check prior to the Jaccard similarity calculation (i.e. if the size difference between the communities disallows a threshold overcome, there is no point in measuring their similarity).</ns0:p><ns0:p>Every community in every timeslot is used as a query in a maximum of D older timeslots, essentially searching for similar communities in a D + 1 timeslot window. Whenever a similar community is found the search is halted and two possible scenarios take place. Either a new dynamic community is initiated or the query is added to an already active dynamic community. Each of these dynamic communities contains information such as text, hashtags, URLs, user centralities and edges which are all stored in a Dynamic Community Container (DyCCo).</ns0:p><ns0:p>Following the formation of these DyCCos, a TF-IDF procedure is applied in order to extract valuable keywords and bigrams that will pose as a DyCCo guideline for the potential user that might interest him/er more. The corpus of the dataset (IDF) is created by using the unique set of words contained in each timeslot of every available dynamic community and the term frequency is computed by using the unique set of tweets within the community (i.e. only one of the repetitions is used in the case that the sentence was retweeted). For the purposes of better illustration and easier browsing, the products of the framework as seen in Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref> consist of:</ns0:p><ns0:p>• a word cloud per dynamic community containing the most popular: a) hashtags, b) keywords, c) bigrams, d) domains and e) text items;</ns0:p><ns0:p>• the ten most influential users from each community which could provide the potential journalist/analyst with new users who are worth following</ns0:p><ns0:p>The color heatmap in the figure represents community size but can be adjusted to also give a comparative measure of centrality or density. By using this DyCCo containing framework, the user is provided with a more meaningful view of the most important communities as well as an insight to the evolving reaction of the public with respect to various events. 
The respective color heatmaps for the US election and the Greek election datasets are presented in Figure <ns0:ref type='figure' target='#fig_8'>7</ns0:ref>, 8 and 9.</ns0:p></ns0:div>
<ns0:div><ns0:head>Evaluation</ns0:head><ns0:p>While executing preliminary experiments it was concluded that the framework can undoubtedly provide the user with some useful information whether the query regarded politics, television series, specific events or even specific people/celebrities. Unfortunately, there is no known method to which we can compare the performance of our framework.</ns0:p><ns0:p>Due to this predicament, we took the opportunity to introduce a content-based evaluation scheme through which we may compare the effectiveness of the proposed framework to the size-grounded ranking baseline method used in <ns0:ref type='bibr' target='#b28'>Lu and Brelsford (2014)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>17/27</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12420:1:0:REVIEW 4 Jan 2017)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed Manuscript to be reviewed Since it is immensely difficult to evaluate community importance based on the tweets themselves, we employed Amazon's Alexa service and the contained URLs of each static community to extract the category to which it belonged. Alexa requires a domain as input and returns a string of categories in a variety of languages to which it belongs. In order to avoid duplicates, categories in a language other than English were translated automatically using Google's translating service. Unfortunately, most of the domains, even popular ones, either returned a vary vague category (e.g. internet) or none at all.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Hence, manual domain categorization was also necessary in order to include the most popular of domains.</ns0:p><ns0:p>Specifically, the URLs we categorized using the following labels: television, video, photo, news, social networking, blog, conspiracy, personal sites, politics, shop, arts and spam. The dynamic communities of the three Twitter datasets combined contained 78,499 URLs which were reduced to 8,761 unique domains.</ns0:p><ns0:p>A mere 2,987 of these domains were categorized either by Alexa or manually, but the overall sample of categorized URLs was significant enough to be used in the categorization process. The result of the most popular category for each community is shown in the color heatmaps displayed in Figures 6a), 7, 8 and 9 for each dataset. Besides the labeling, the heatmap also provides information regarding the size of the community. The darker the colors get, the larger the community. By combining the two, the user is provided with a relatively good idea of where to begin his/er search.</ns0:p><ns0:p>The premise on which the evaluation is based is that the content of the top 10 to 20 dynamic communities' content should match the category of the query similarly to a search engine, since most users will not go past these many results. For example, if the queried event concerns an election, the</ns0:p></ns0:div>
<ns0:div><ns0:head>22/27</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12420:1:0:REVIEW 4 Jan 2017)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>2015), recommendation systems Kim and Shim (2014); Gupta et al. (2013), summarization Schinas et al. (2015); Lin et al. (2008) and information diffusion Yang et al. (2013). One of the most recent attempts comes from McKelvey et al. (2012) who presented the Truthy system for collecting and analyzing political discourse on Twitter, providing real-time, interactive visualizations of information diffusion processes. They created interfaces containing several key analytical components.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Block diagram of the proposed framework.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Referencing frequency activity for all four networks: a) January 2015 Greek elections, b) September 2015 Greek elections, c) 2012 US elections and d) 2014 BBC's Sherlock series.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>to detect the underlying community structures. Infomap optimizes the map equation<ns0:ref type='bibr' target='#b40'>Rosvall et al. (2009)</ns0:ref>, which exploits the information-theoretic duality between the problem of compressing data, and the problem of detecting and extracting significant patterns or structures within those data. The Infomap method is essentially built based on optimizing the minimum description length of the random walk on the network. The Infomap scheme was selected for three reasons; first, according to<ns0:ref type='bibr' target='#b24'>Lancichinetti and Fortunato (2009)</ns0:ref>;<ns0:ref type='bibr' target='#b18'>Granell et al. (2015)</ns0:ref> and our own preliminary results, Infomap is very fast and outperforms even the most popular of community detection methods such as Louvain<ns0:ref type='bibr' target='#b7'>Blondel et al. (2008)</ns0:ref> and Newman's modularity optimization technique<ns0:ref type='bibr' target='#b32'>Newman (2006)</ns0:ref>. Further, the low computational complexity of O(m) (m signifies the number of edges in a graph) encourages its use on graphs of large real networks. Last,<ns0:ref type='bibr' target='#b28'>Lu and Brelsford (2014)</ns0:ref> used it in their experimental setup and as such it is suitable for comparative purposes.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc><ns0:ref type='bibr' target='#b7'>Blondel et al. (2008)</ns0:ref>;<ns0:ref type='bibr' target='#b32'>Newman (2006)</ns0:ref>;Giatsoglou et al. (2013). However, the highly dynamic nature of OSNs has moved the spotlight to the subject of dynamic graph analysis<ns0:ref type='bibr' target='#b33'>Nguyen et al. (2014)</ns0:ref>;<ns0:ref type='bibr' target='#b5'>Asur et al. (2007)</ns0:ref>;Giatsoglou and Vakali (2013);<ns0:ref type='bibr' target='#b36'>Palla et al. (2007)</ns0:ref>;<ns0:ref type='bibr' target='#b45'>Takaffoli et al. (2011)</ns0:ref>; Roy<ns0:ref type='bibr' target='#b42'>Chowdhury and Sukumar (2014)</ns0:ref>.In this paper, we represent a dynamic network as a sequence of graph snapshots G 1 , G 2 , ..., G n , .... The objective is to detect and extract dynamic communities T by identifying the communities C that are present in the network across a set of N timeslots and create a container which includes a variety of 7/27 PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12420:1:0:REVIEW 4 Jan 2017)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .Figure 4 .</ns0:head><ns0:label>34</ns0:label><ns0:figDesc>Figure 3. Community Jaccard distance (similarity) over community size using the Infomap (top row: a&b), Newman (second row:c&d) and Louvain (bottom row) community detection algorithms for the 2014 BBC's Sherlock series (left column) and the 2012 US elections (right column). The red dots signify the undetected communities which were missed due to the absence of the time-delay search, whereas the blue-ones signify commonly detected communities.</ns0:figDesc><ns0:graphic coords='9,141.73,120.79,392.90,480.69' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>2 www.bbc.com/news/blogs-china-blog-25550426 15/27 PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12420:1:0:REVIEW 4 Jan 2017) Manuscript to be reviewed Computer Science Finding URL (if available) 1 Women's voting motivational movement on instagram https://goo.gl/OKs17u 2 President Obama hoops with S. Pippen on election day https://goo.gl/Ybg83Z 3 Iran and Russia among countries with messages for Obama https://goo.gl/hgiaaN 4 Mainstream media tipped the scales in favor of Obama foxNews(removed) 5 Anti Obama protests escalate at university WashPost(removed)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. DyCCo structure and content illustration for the Sherlock Twitter dataset. a) A color heatmap displaying the evolution of the top 20 dynamic communities of BBC's 2014 Sherlock TV series dataset. Each evolution series contains a characterization of the community based on the contained URLs. The background color varies between shades of blue and is an indication of the logged size of the specific community. The population increases as the shade darkens (The numbers on the right represent the label of each dynamic community). b) Wordclouds of the most popular keywords, bigrams, hashtags, tweets and URLs acquired from community6, timeslot 3. c) The corresponding graphical illustration of community6, timeslot 3 under study.</ns0:figDesc><ns0:graphic coords='19,270.16,412.16,361.53,184.41' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Color heatmap displaying the evolution of the top 20 dynamic communities of the 2012 US election dataset. Each evolution series contains a characterization of the community based on the contained URLs. The background color varies between shades of blue and is an indication of the logged size of the specific community. The population increases as the shade darkens.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Color heatmap displaying the evolution of the top 20 dynamic communities of the January, 2015 Greek election dataset. Each evolution series contains a characterization of the community based on the contained URLs. The background color varies between shades of blue and is an indication of the logged size of the specific community. The population increases as the shade darkens.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 9 .Figure 10 .</ns0:head><ns0:label>910</ns0:label><ns0:figDesc>Figure 9. Color heatmap displaying the evolution of the top 20 dynamic communities of the September, 2015 Greek election dataset. Each evolution series contains a characterization of the community based on the contained URLs. The background color varies between shades of blue and is an indication of the logged size of the specific community. The population increases as the shade darkens.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Deconstruction of the evolving network mining problem</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Subproblem</ns0:cell><ns0:cell>Available methods</ns0:cell><ns0:cell cols='2'>Lu and Brelsford (2014) Proposed framework</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Event-based</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Granularity selection</ns0:cell><ns0:cell>Time-based</ns0:cell><ns0:cell>Event-based</ns0:cell><ns0:cell>Time-based</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Activity-based</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Louvain</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Community detection</ns0:cell><ns0:cell>Infomap</ns0:cell><ns0:cell>Infomap</ns0:cell><ns0:cell>Infomap</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Modularity Optimization</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Jaccard</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Set similarity</ns0:cell><ns0:cell>Sorensen Euclidean</ns0:cell><ns0:cell>Jaccard</ns0:cell><ns0:cell>Jaccard</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Cosine</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Reciprocal Rank</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Feature fusion</ns0:cell><ns0:cell>Multi-criteria analysis</ns0:cell><ns0:cell>None</ns0:cell><ns0:cell>Reciprocal Rank Fusion</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>Condorcet</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>Size</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Information ranking</ns0:cell><ns0:cell>Centrality</ns0:cell><ns0:cell>Size</ns0:cell><ns0:cell>TISCI</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>TISCI</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Number of detected dynamic communities with and without the timeslot delay.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='3'>Louvain Newman Infomap</ns0:cell></ns0:row><ns0:row><ns0:cell>US election</ns0:cell><ns0:cell cols='2'>delay no delay 985 1696</ns0:cell><ns0:cell>1021 579</ns0:cell><ns0:cell>4422 2646</ns0:cell></ns0:row><ns0:row><ns0:cell>Sherlock</ns0:cell><ns0:cell cols='2'>delay no delay 638 1369</ns0:cell><ns0:cell>1175 544</ns0:cell><ns0:cell>3374 1684</ns0:cell></ns0:row><ns0:row><ns0:cell>Greek election</ns0:cell><ns0:cell>delay</ns0:cell><ns0:cell>322</ns0:cell><ns0:cell>266</ns0:cell><ns0:cell>1120</ns0:cell></ns0:row><ns0:row><ns0:cell>January</ns0:cell><ns0:cell cols='2'>no delay 235</ns0:cell><ns0:cell>191</ns0:cell><ns0:cell>763</ns0:cell></ns0:row><ns0:row><ns0:cell>Greek election</ns0:cell><ns0:cell>delay</ns0:cell><ns0:cell>219</ns0:cell><ns0:cell>198</ns0:cell><ns0:cell>639</ns0:cell></ns0:row><ns0:row><ns0:cell>September</ns0:cell><ns0:cell cols='2'>no delay 144</ns0:cell><ns0:cell>127</ns0:cell><ns0:cell>403</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Feature ranking similarity comparison using Spearman's rank correlation coefficient averaged across all three datasets.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='7'>Centrality Perstability Size Textdiversity Theseus Urldiversity TISCI</ns0:cell></ns0:row><ns0:row><ns0:cell>Centrality</ns0:cell><ns0:cell>1.0</ns0:cell><ns0:cell>0.046</ns0:cell><ns0:cell>0.051</ns0:cell><ns0:cell>0.032</ns0:cell><ns0:cell>-0.021</ns0:cell><ns0:cell>0.006</ns0:cell><ns0:cell>0.021</ns0:cell></ns0:row><ns0:row><ns0:cell>Perstability</ns0:cell><ns0:cell>0.046</ns0:cell><ns0:cell>1.0</ns0:cell><ns0:cell>0.032</ns0:cell><ns0:cell>0.154</ns0:cell><ns0:cell>0.015</ns0:cell><ns0:cell>0.047</ns0:cell><ns0:cell>0.029</ns0:cell></ns0:row><ns0:row><ns0:cell>Size</ns0:cell><ns0:cell>0.051</ns0:cell><ns0:cell>0.032</ns0:cell><ns0:cell>1.0</ns0:cell><ns0:cell>0.156</ns0:cell><ns0:cell>0.005</ns0:cell><ns0:cell>0.002</ns0:cell><ns0:cell>0.011</ns0:cell></ns0:row><ns0:row><ns0:cell>Textdiversity</ns0:cell><ns0:cell>0.032</ns0:cell><ns0:cell>0.154</ns0:cell><ns0:cell>0.156</ns0:cell><ns0:cell>1.0</ns0:cell><ns0:cell>0.034</ns0:cell><ns0:cell>0.055</ns0:cell><ns0:cell>0.029</ns0:cell></ns0:row><ns0:row><ns0:cell>Theseus</ns0:cell><ns0:cell>-0.021</ns0:cell><ns0:cell>0.015</ns0:cell><ns0:cell>0.005</ns0:cell><ns0:cell>0.034</ns0:cell><ns0:cell>1.0</ns0:cell><ns0:cell>-0.008</ns0:cell><ns0:cell>0.016</ns0:cell></ns0:row><ns0:row><ns0:cell>Urldiversity</ns0:cell><ns0:cell>0.006</ns0:cell><ns0:cell>0.047</ns0:cell><ns0:cell>0.002</ns0:cell><ns0:cell>0.055</ns0:cell><ns0:cell>-0.008</ns0:cell><ns0:cell>1.0</ns0:cell><ns0:cell>-0.01</ns0:cell></ns0:row><ns0:row><ns0:cell>TISCI</ns0:cell><ns0:cell>0.021</ns0:cell><ns0:cell>0.029</ns0:cell><ns0:cell>0.011</ns0:cell><ns0:cell>0.029</ns0:cell><ns0:cell>0.016</ns0:cell><ns0:cell>-0.01</ns0:cell><ns0:cell>1.0</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Dataset Statistics</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell cols='4'>Sherlock US Election GR Election Jan GR Election Sep</ns0:cell></ns0:row><ns0:row><ns0:cell># of tweets</ns0:cell><ns0:cell>2,904,321</ns0:cell><ns0:cell>4,148,782</ns0:cell><ns0:cell>2,748,613</ns0:cell><ns0:cell>1,084,304</ns0:cell></ns0:row><ns0:row><ns0:cell># of tweets with mentions</ns0:cell><ns0:cell>1,412,358</ns0:cell><ns0:cell>2,967,779</ns0:cell><ns0:cell>1,836,296</ns0:cell><ns0:cell>622,590</ns0:cell></ns0:row><ns0:row><ns0:cell># of hashtags</ns0:cell><ns0:cell>643,132</ns0:cell><ns0:cell>4,286,418</ns0:cell><ns0:cell>1,399,109</ns0:cell><ns0:cell>482,376</ns0:cell></ns0:row><ns0:row><ns0:cell># of URLs</ns0:cell><ns0:cell>139,663</ns0:cell><ns0:cell>675,862</ns0:cell><ns0:cell>581,823</ns0:cell><ns0:cell>215,530</ns0:cell></ns0:row><ns0:row><ns0:cell># of unique users</ns0:cell><ns0:cell>542,254</ns0:cell><ns0:cell>2,013,301</ns0:cell><ns0:cell>555,859</ns0:cell><ns0:cell>166,807</ns0:cell></ns0:row><ns0:row><ns0:cell># of edges</ns0:cell><ns0:cell>1,595,435</ns0:cell><ns0:cell>4,190,883</ns0:cell><ns0:cell>2,368,396</ns0:cell><ns0:cell>744,824</ns0:cell></ns0:row><ns0:row><ns0:cell># of communities</ns0:cell><ns0:cell>186,045</ns0:cell><ns0:cell>416,181</ns0:cell><ns0:cell>108,532</ns0:cell><ns0:cell>31,518</ns0:cell></ns0:row><ns0:row><ns0:cell># reduced communities</ns0:cell><ns0:cell>37,106</ns0:cell><ns0:cell>73,170</ns0:cell><ns0:cell>32,231</ns0:cell><ns0:cell>11,681</ns0:cell></ns0:row><ns0:row><ns0:cell># of evolution steps</ns0:cell><ns0:cell>9,211</ns0:cell><ns0:cell>8,843</ns0:cell><ns0:cell>4,336</ns0:cell><ns0:cell>1,610</ns0:cell></ns0:row><ns0:row><ns0:cell># of dynamic communities</ns0:cell><ns0:cell>3,457</ns0:cell><ns0:cell>3,936</ns0:cell><ns0:cell>1,758</ns0:cell><ns0:cell>639</ns0:cell></ns0:row><ns0:row><ns0:cell>Sampling time</ns0:cell><ns0:cell>24 hours</ns0:cell><ns0:cell>3 hours</ns0:cell><ns0:cell>24 hours</ns0:cell><ns0:cell>24 hours</ns0:cell></ns0:row><ns0:row><ns0:cell>Finding</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>URL (if available)</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>1 Creators reveal they already have series four mapped out</ns0:cell><ns0:cell /><ns0:cell>https://goo.gl/5ZDCMt</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>2 Cumberbatch's parents make Sherlock cameo</ns0:cell><ns0:cell /><ns0:cell>https://goo.gl/qPgtAj</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>3 Chinese fans relish new sherlock gay love theory as fans relish new series</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='4'>4 Episode scores highest timeshifted audience in UK TV history</ns0:cell><ns0:cell>https://goo.gl/PkfHJs</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>5 January 6th is sherlock's birthday</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Interesting findings from the Sherlock dataset</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Key findings from the Greek elections datasets</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Finding</ns0:cell></ns0:row></ns0:table><ns0:note>16/27PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12420:1:0:REVIEW 4 Jan 2017)</ns0:note></ns0:figure>
<ns0:note place='foot' n='2'>/27 PeerJ Comput. Sci. reviewing PDF | (CS-2016:07:12420:1:0:REVIEW 4 Jan 2017)Manuscript to be reviewed</ns0:note>
</ns0:body>
" | "Exploring Twitter Communication Dynamics
with Evolving Community Analysis
Response Letter
Konstantinos Konstantinidis, Symeon Papadopoulos,
Yiannis Kompatsiaris
January 4, 2017
First, we would like to sincerely thank the reviewers for taking the time to
carefully read our submission and to provide constructive feedback, as well as
the editor for efficiently managing the reviewing process. In our revision, we
have worked thoroughly to address the reviewers’ comments by considerably
reworking the writing and presentation of the manuscript and by making a
number of corrections on issues that were pointed to us. In the following, we
provide concrete responses to the individual comments of the reviewers.
Track changes note: In the manuscript copy with track changes, we denote new blocks of text with blue letters. The deletion of large blocks of text
is denoted by strikethrough. New or changed figures or tables are denoted
by blue letters on the caption title.
Response to Resubmission requirements
Comment 1: # Funding Statement: Please provide an amended statement
that declares *all* the funding or sources of support (whether external or internal to your organization) received during this study. Please also include the
statement There was no additional external funding received for this study.
in your updated Funding Statement.
1
Response: The funding statement has been amended according to the editor’s suggestions.
Comment 2: # References: In the reference section, please provide the
full author name lists for any references with et al.
Response: Reference ‘Lazer, D. et al.’, which was only the problematic
reference we could spot, has been corrected according to the editor’s suggestions.
Comment 3: # Figures: 1) Please use numbers to name your files, example: Fig1.eps, Fig2.png.
Response: Numbers are now used to name the figure files according to the
editor’s suggestions.
Comment 4: # Figures: 2) Please combine any figures with multiple parts
into single, labeled, figure files. Ex: Figs 1A and 1B should be one figure
grouping the parts either next to each other or one on top of the other and
only labeled A and B on the respective figure parts. Each figure with multiple
parts should label each part alphabetically (e.g. A, B, C) and all parts should
be submitted together in one file.
Response: Figures containing multiple parts have been combined into one
as suggested by the editor.
Comments 5 & 6: # Figures: 3) Please upload your figures in either EPS,
PNG, JPG (photographs only) or PDF (vector PDFs only), measuring at
least 900 by 900 pixels and eliminating excess white space around the images,
as primary files here <https://peerj.com/manuscripts/12420/files>.
# Figures: 4) Figures must be at least 900 by 900 pixels without unnecessary
white space around them. Please upload your replacement figures in EPS,
PNG, JPG (photographs only) or PDF (vector PDFs only), measuring at
least 900 by 900 pixels and eliminating excess white space around the images,
as primary files.
Response: The figure files have been uploaded in the required format.
2
Comment 7: # Tables: 1) We ask for the tables in Word (composed in
Word, not images pasted in Word docs), but if you have them composed in
the LaTeX source file we can use that instead at the time of production.
Response: Our tables have been composed in the LaTex source file.
Comment 8: # Tables: 2) Please leave a Note to Staff if you choose to
provide the tables in the LaTex source file so that staff will know that’s where
it can be found.
Response: A note to staff has been submitted stating that the tables and
figures are in the source file manuscript.
Comment 9: # LaTeX Submission: .bib and .tex Files: Please provide the
source file for the manuscript. We need a text document for your manuscript,
ie: ODT, DOC, DOCX. If your preference is to provide a LaTeX file, that
would be acceptable as well- we’d just need your tex and bib files uploaded
using the Primary Files, LaTeX Source File category.
Response: The tex and bib files have been submitted.
Response to Reviewer #1
Comment 1: The paper is clearly and unambiguously written, with exception of lines 71-73 and 314-318.
Response: Lines 71-73 and 314-318 have been rewritten to enhance readability.
Comment 2: The description of the TISCI measures in a single sentence
might confuse the reader and require multiple readings.
Response: The description of the TISCI measures (both in the ‘Introduction’ and ‘Dynamic Community Ranking using TISCI’ sections) have been
rewritten to increase readability.
3
Comment 3: Fig 2 (d) has incorrect timestamps.
Response: Regarding the 2013 timestamp in the x-axis title which represents the initialization time of the collection, the first date that appears on
the graph is a day later since we use a 24-hour sampling time. Regarding the
duplicate timestamp at the end, this was caused due to the quantization of
the last day. We have now corrected the timestamp to the following day.
Comment 4: Fig 6 (d) has the circle purple dynamic community taking
two different values for the population of recurring users for 03/Jan
Response: The reviewer is right to wonder why this happened. What
is witnessed here (and in a few other cases) is a “significant split”. Since
consistent user presence is relatively rare in OSNs, usually splits result in
a small part of a community breaking off or a community being completely
dismantled. This is due to the fact that the threshold of 0.3 is the same for
any kind of evolution event, and hence it is quite rare for a community to
break out into two new ones that overcome the threshold. Yet, what happens
on the 3rd and 9th of January is exactly that. A community has broken up
into two parts while retaining a huge percentage of the original population.
Comment 5: Fig 7 (b,c) are illegible (low quality).
Response: Figures 7b and c are screenshots from a prototype user interface
for the framework, so it cannot be created in vector quality. Nonetheless,
they are now of significantly better quality and readable according to the
reviewer’s suggestion.
Comment 6: The combination of the heatmap (dark blue for higher values)
and the characterization of the community (in black) used in Fig 7 (a), 8, 9,
and 10 makes the characterization of the communities with higher population.
Response: We are assuming that the reviewer means that s/he had a difficult time reading the text in the higher population communities. However,
since Figure 7(a) is of vector quality, the text is distinguishable in the electronic copy (especially with the help of zooming) and readable in hard-copy
4
when the printout resolution is of good quality. Tampering with the color of
the text would downgrade the vector quality of the image.
Comment 7: The approach is technically sound and thoroughly justified.
However, I have minor ethical concerts wrt. disclosing the twitter identities
of the most influential users.
Response: Indeed, how to treat twitter identities in scientific works is a
complicated matter raising several ethical questions. On the one hand, it
is not ethical to assume that any twitter user would like to be discussed
and presented in a publicly accessible paper (even though their identity is
publicly available). On the other hand, removing twitter identities from the
discussion would considerably affect the insights that can be gained through
the presentation of the case studies. After careful consideration of this issue,
we opted for retaining the twitter handles in the paper noting that a) they
are not targeted in any negative way through the discussion of results, b)
most of the mentioned accounts are public figures or cannot be linked to
physical persons, c) this is common practice among research papers that use
Twitter data (which on its own would not be sufficient to back our decision).
Comment 8: In addition to the lack of clarity in the two sentences (7173, 314-318), there is a typo in line 252: D(x ≤ 1 ≤ D ≤ N ), should be
D(1 ≤ x ≤ D ≤ N ).
Response: The reviewer is absolutely correct. We have now corrected the
typo in the formula.
Comment 9: Table 4 has inconsistencies wrt. the formating of the numbers for # of tweets Sherlock and # unique users US Election.
Response: The reviewer is absolutely correct. We have now rectified the
inconsistencies of Table 4.
Comment 10: I would suggest the introduction of a small table in description of each of the datasets such that the key findings of this framework could
be easily summarized. This would help to illustrate in what way the proposed
5
framework could “assist the work of journalist in their own story telling,
social and political analyst scientists[...]”.
Response: We thank the reviewer for this excellent idea. We have now
included a small table of findings for each of the datasets.
Response to Reviewer #2
Comment 1: Given that there are already several methods in literature
(many of them cited in this paper, some missing), it becomes important for
any paper on this topic to clearly state what is the achieved improvement, both
in concept/algorithm and in results. This paper, by and large, uses previously
devised methods and known techniques and therefore it is difficult to see what
is new about the contributions made in this paper.
Response: The reviewer is correct that the contribution should be made
clearer. We have now changed the format of the contribution section in the
introduction and included more information to clarify the achieved contribution which is focused on the ranking of the dynamic communities and not on
their extraction. In addition, the paper presents a context-based evaluation
scheme to overcome the absence of ground truth for the problem, and an
empirical study using the framework on four Twitter datasets.
Comment 2: The use of a standard community detection tool such as
InfoMap first to detect communities from every step and then match them
across is the classical incremental approach that was also used in prior works
such as Greene et al. 2010. In fact the approach of matching communities
from different timesteps using Jaccard similarity was also proposed in Greene
et al. 2010 paper.
Response: The reviewer is absolutely correct. Greene et al. employed
the Louvain community detection method but stated repeatedly that their
method could be used with any community detection tool. In fact, Lu
and Brelsford (2014), which we reference as the method which is closest
to our proposed framework, also employed a major part of Greene’s work
that they heuristically tweaked (they used a different similarity threshold
6
in split/merge situations) and in which they employ the Infomap detection
method. However, by addressing the reviewer’s first comment it is now made
clear at the end of the introduction section that we only use the evolution
detection method as a means to detect and extract dynamic communities,
and that we focus on the ranking of the detected communities. The optimization tweaks with which we extend Greene’s et al. method are mentioned
in the third paragraph of the Data Processing section of the manuscript and
concern the hashing of the usernames (instead of using plain strings) and the
temporary removal of all the users’ hashes who appear in a single timeslot
alone as they are considered temporal outliers. These tweaks are of course
not substantial enough to be considered a novelty and as such are not mentioned in the contribution discussion in the Introduction. Nevertheless, we
have now included an accompanying comment in the Related Work section
in an attempt to clarify our extension of Greene’s et al method and the
differences between the two works.
Comment 3: A similar approach was also presented (earlier) by Tantipathananandh et al. 2007 and later this line of work was extended to include
cost functions that straddle across time boundaries. The sequence of work by
this group is only briefly acknowledged but never expanded. It needs to be.
There is also work in the literature that tackles a more global approach toward
defining a dynamic community. For instance the following work by Mucha et
al. 2010, is one such example, which is not cited in the paper. There is also
this review paper: Cazabet, et al. The paper needs to state what is different
about their proposed method compared to these highly releted works, why that
matters, and then substantiate those claims of advantages and/or limitations
using a comparative study in the experimental results section.
Response: We thank the reviewer for the recommendations; we have included and discussed the proposed papers in the related work section. As we
explained in our response to the first and second comments, the focus of the
paper is on the ranking of the dynamic communities and not on the detection
of the dynamic communities themselves. As such, substantiating claims of
advantages or limitations via a comparative study is out of the scope of this
work. To the best of our knowledge, the only other method which performed
dynamic-community ranking with the intention to extract useful information
similarly to our work is that of Lu and Brelsford (2014) which we actually
7
do use for comparative purposes.
Comment 4: I found the reading of this paper very difficult. Its not just
the grammar or typos and such. But it is also the verbosity and the lack
of rigor in terms of formalisms. More specifically, there is a lot of sloppy
notation and undefined terms that make the understanding of the proposed
method impossible.
Response: We have performed an additional careful revision of the article
and have made amendments to improve its clarity which include correcting
the typos, making better use of grammar and breaking down long sentences
to reduce verbosity as much as possible. We have also included definitions
of terms which we thought a user would find useful as can be noticed in the
responses of comments 5 through 7.
Comment 5: First, no where is the notion of a “community” and how it
differs from a “dynamic community” explicitly defined.
Response: The notion of community is formally provided in the Community Detection section as: “Given a social network, a community can be
defined as a subgraph comprising a set Vcomm ⊆ V of users that are typically
associated through a common element of interest”. Additionally, the notions
of community and dynamic community are described in the introduction as
“A community here is essentially a subgraph which represents a set of interacting users as they tweet and mention one another. The edges of the
subgraph represent the mentions made between users. A dynamic community
is formed by a temporal array of the aforementioned communities with the
condition that they share common users”. Nonetheless, for clarification purposes we have included a more formal definition of the dynamic community
in the Community Evolution Detection section.
Comment 6: Key notation such as Ci,j , Ti,j , etc are never really defined
anywhere. This obviously comes in the way of understanding.
Response: The reviewer is correct. We have now amended the notation
section to clarify both Ci,n and Ti,n .
8
Comment 7: For instance, if we look at Fig. 5, what does each of the T
on the left hand side stand for, and what do their subscripts mean? What
is the basis for edge labelling? The caption says there should be 5 dynamic
communities but if I look at the figure, it shows actually C1 ...C6 .
Response: The reviewer is correct. We have now clarified the notation of
T and its subscripts in the text while correcting that there are actually six
dynamic communities in the example. Fig.5 has also been altered to avoid
any confusion regarding edge labeling
Comment 8: I suggest adding a separate subsection for Definitions and
notation prior to the description of the approach, where everything is defined
unambiguously and explicitly.
Response: We do include a notation section at the beginning of the OSN
Analysis Framework description where all the generic definitions are included
and we have enriched it according to the reviewer’s suggestion.
Comment 9: As for verbosity, every effort should be made to make the
reading easy - i.e., use short paragraphs (not long montonous strings of text).
Response: We agree that there were a few cases in which verbosity was
an issue and have made an attempt to reduce it as much as possible.
Comment 10: The paper uses InfoMap and the authors try to justify its
use by comparing it with other methods such as Louvain which are also widely
used. However, the comparison they show (Figs. 3 & 4) do not really help in
understanding the key differences. The authors should use some quantative
way of describing the differences/strengths and weaknesses.
Response: The author is correct in that we do not present a deep analysis
on the key differences of the two community detection methods. However,
as we have already stated in the responses of the first three comments, the
ranking framework can be employed with any community detection method
of choice. It is not confined to the use of the Infomap algorithm. In addition it is not part of the main contribution of the paper. As we state in
the manuscript, we decided to choose Infomap on the basis of four factors.
9
These are: efficiency and speed as has been proven in well established papers,
such as the ones from Lancichinetti and Fortunato (2009) and Granell et al.
(2015), the fact that it detects more communities from the middle region as
shown in Figs 3 & 4, and the fact that Lu and Brelsford used it which is
fitting for comparison purposes.
Comment 11: As for the Jaccard similarity test, a) are communities from
only successive timesteps compared or could communities from vastly different
timesteps be also compared? b) how does the use of a threshold impact the
result?
Response: a) As mentioned in the Community Evolution Detection section, a sequence of several snapshots is retained instead of just consecutive
ones. The time delay and subsequently the length of the sequence depends
on the granularity of the discretization process, in a manner that a meaningful period is covered. For example, if the sampling is performed on a daily
basis, the depth of search will consist of seven timeslots in order to provide
a week's depth. The impact of the delay on the availability of information
is evaluated via Table 2 in which a large increase in the number of available
communities is shown.
b) The impact of the threshold on the result is validated through Fig.6 where
it is shown that a small threshold allows for the detection of an overwhelming number of dynamic communities which contain sequences of very small
communities, whereas a large one suppresses a lot of communities from the
middle region which essentially hold the information we are after. As now
stated in the paper, a more extensive analysis on the impact of the threshold
selection is presented in Greene et al.'s work, in which they deduce that 0.3
is a safe default choice.
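To make the mechanics concrete, the following minimal Matlab sketch (illustrative
only, not code from the manuscript) shows how communities from two timeslots can
be matched under a Jaccard threshold such as the 0.3 default discussed above:

function matched = match_communities(commA, commB, threshold)
% commA, commB: cell arrays of user-ID vectors from two timeslots
matched = zeros(0, 3);
for i = 1:numel(commA)
    for j = 1:numel(commB)
        inter = numel(intersect(commA{i}, commB{j}));
        uni = numel(union(commA{i}, commB{j}));
        if uni > 0 && inter / uni >= threshold
            matched(end+1, :) = [i, j, inter / uni]; %#ok<AGROW>
        end
    end
end
end

A small threshold admits matches between very small communities, whereas a large
one also suppresses communities from the middle region, which is the trade-off
shown in Fig.6.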
Comment 12: As for the results, the authors have done a decent job of
trying to explain their ranked community outputs. However, it's not clear
a) whether the proposed approach is missing out on some key communities;
b) or whether there are no further ways to combine the output communities
into larger communities and still maintain their significance. The authors
should consider generating synthetic inputs through simulations where there
are known community structures and use those for validation first.
Response: The reviewer is absolutely right. Key communities will be missing but it all depends on what the user considers key. As such, the ranking
could change by removing features or changing the weights of the features.
For example, a user could possibly require the community with the highest text complexity or the community which contains the most URLs. For
him/her the default TISCI algorithm would be missing a key community.
However, the task here is to discover and rank robust communities which
perform relatively well according to all the measures. Our aim was to produce a lightweight but efficient way to rank dynamic containers of information
in order for the potential user to have a guide as to where to start looking
for information much like a search engine. To answer the second question,
unfortunately, we do not know of any datasets containing ranked ground
truth on meaningful or important communities, or content. By introducing
and highlighting this lack of information, we hope to motivate the research
community into creating such a benchmark. As far as community structure is concerned, the TISCI ranking method which is our main contribution
was created to work with any kind of community evolution method which
retains temporal information. As such comparing Greene’s straightforward
algorithm (employed here) to another dynamic community detection method
would be out of scope.
" | Here is a paper. Please give your review comments after reading it. |
729 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>EEG is a rich source of information regarding brain functioning. However, the preprocessing of EEG data can be quite complicated, due to several factors. For example, the distinction between true neural sources and noise is indeterminate; EEG data can also be very large. These factors create a large number of subjective decisions with consequent risk of compound error.</ns0:p><ns0:p>Existing tools present the experimenter with a large choice of analysis methods. Yet it remains a challenge for the researcher to integrate methods for batch processing of typically large datasets, and to compare methods in order to choose an optimal approach across the many possible parameter configurations. Additionally, many tools still require a high degree of manual decision making for, e.g., the classification of artefacts in channels, epochs or segments. This introduces extra subjectivity, is slow, and is not reproducible.</ns0:p><ns0:p>Batching and well-designed automation can help to regularise EEG preprocessing, and thus reduce human effort, subjectivity, and consequent error. We present the Computational Testing for Automated Preprocessing (CTAP) toolbox, to facilitate: i) batch processing that is easy for experts and novices alike; ii) testing and manual comparison of preprocessing methods. CTAP extends the existing data structure and functions from the well-known EEGLAB toolbox, based on Matlab, and produces extensive quality control outputs.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Measurement of human electroencephalography (EEG) is a rich source of information regarding certain aspects of brain functioning, and is the most lightweight and affordable method of brain imaging. Although it can be possible to see certain large effects without preprocessing at all, in the general case EEG analysis requires careful preprocessing, with some degree of trial-and-error.</ns0:p><ns0:p>Such difficult EEG preprocessing needs to be supported with appropriate tools. The kinds of tools required for signal processing depends on the properties of data, and the general-case properties of EEG are demanding: large datasets and indeterminate data contribute to the number and complexity of operations.</ns0:p><ns0:p>In most research applications EEG data can be very large; systems are available with over 256 channels. This can result in the need to examine thousands or tens of thousands of data-points; for instance, visual examination of raw data quality for 50 subjects × 256 channels × 1200 seconds ≅ 16000 plot windows (where each window shows 32 channels × 30 seconds). Also, normally EEG can require many operations (see e.g. <ns0:ref type='bibr' target='#b10'>Cowley et al. (2016)</ns0:ref> for a review), such as referencing, event-handling, filtering, dimensional reduction, and artefact detection in channels, epochs, or otherwise; all of which is time-consuming and therefore costly. Many of these operations require repeated human judgements, e.g. selection of artefactual independent components <ns0:ref type='bibr' target='#b9'>(Chaumon et al., 2015)</ns0:ref>, leading to subjectivity, non-reproducibility of outcomes, and non-uniformity of decisions.</ns0:p><ns0:p>Nor is it possible that all such operations can ever be completely automated, as it is not possible to provide a ground-truth for computational methods by uniquely determination of the neural sources of EEG. With many relatively complex standard operations, code for EEG processing can also be harder to debug <ns0:ref type='bibr' target='#b31'>(Widmann and Schröger, 2012)</ns0:ref>.</ns0:p><ns0:p>These issues illustrate the need for a software tool, a workflow management system, that helps PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11265:1:1:NEW 2 Dec 2016)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>to integrate the wealth of existing methods. Some standards have been suggested <ns0:ref type='bibr' target='#b19'>(Keil et al., 2014)</ns0:ref>, however <ns0:ref type='bibr' target='#b8'>Bigdely-Shamlo et al. (2015)</ns0:ref> have pointed out that 'artefact removal and validation of processing approaches remain a long-standing open problem for EEG'. The EEGLAB toolbox <ns0:ref type='bibr' target='#b13'>(Delorme and Makeig, 2004</ns0:ref>) and its various plug-ins provide a wealth of functions, but in this ecosystem it remains difficult and time-consuming to build the necessary infrastructure to manage, regularise, and streamline EEG preprocessing.</ns0:p><ns0:p>A workflow management system for data-processing pipelines helps to ensure that the researcher/analyst saves most of their cognitive effort for choosing analysis steps (not implementing them) and assessing their outcome (not debugging them). A regularised workflow maximises the degree to which each file is treated the same -for EEG this means to minimise drift in file-wise subjective judgements, such as estimating the accuracy of artefact detection algorithm(s) by visual inspection. A streamlined workflow can be enabled by separating the building of functions (for analysis or data management) from exploring and tuning the data. These features improve reproducibility and separate the menial from the important tasks. To meet these needs, in this paper we present the Computational Testing Automated Preprocessing (CTAP) toolbox.</ns0:p></ns0:div>
<ns0:div><ns0:head>Approach</ns0:head><ns0:p>The CTAP toolbox is available as a GitHub repository at https://github.com/bwrc/ctap.</ns0:p><ns0:p>It is built on Matlab (R2015a and higher) and EEGLAB v13.4.4b; limited functions, especially non-graphical, may work on older versions but are untested.</ns0:p><ns0:p>The aim of CTAP is to regularise and streamline EEG preprocessing in the EEGLAB ecosystem.</ns0:p><ns0:p>In practice, the CTAP toolbox extends EEGLAB to provide functionality for: i) batch processing using scripted EEGLAB-compatible functions; ii) testing and comparison of preprocessing methods based on extensive quality control outputs. The key benefits include:</ns0:p><ns0:p>• ability to run a subset of a larger analysis</ns0:p><ns0:p>• bookkeeping of intermediate result files</ns0:p></ns0:div>
<ns0:div><ns0:head>• error handling</ns0:head><ns0:p>• visualisations of the effects of analysis steps</ns0:p><ns0:p>• simple to customise and extend</ns0:p></ns0:div>
<ns0:div><ns0:head>• reusable code</ns0:head><ns0:p>• feature and raw data export We will next briefly motivate each of the benefits above.</ns0:p><ns0:p>Incomplete runs A frequent task is to make a partial run of a larger analysis. This happens, for example, when new data arrives or when the analysis fails for a few measurements. The incomplete run might involve a subset of a) subjects, b) measurements, c) analysis branches, d) collections of analysis steps, e) single steps; or any combination of these. CTAP provides tools to make these partial runs while keeping track of the intermediate saves.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Bookkeeping A given EEG analysis workflow can have several steps, branches to explore alternatives, and a frequent need to reorganise analysis steps or to add additional steps in between.</ns0:p><ns0:p>Combined with incomplete runs, these requirements call for a system that can find the correct input file based on step order alone. CTAP does this and saves researchers time and energy for more productive tasks.</ns0:p><ns0:p>Error handling Frequently, simple coding errors or abnormal measurements can cause a long batch run to fail midway. CTAP catches such errors, saves their content into log files for later reference and continues the batch run. For debugging purposes it is also possible to override this behaviour and use Matlab's built-in debugging tools to solve the issue. Customisation In research it is vital to be able to customise and extend the tools in use. Extending CTAP with custom functions is easy as the interface that CTAP * .m functions must implement is simple. Intermediate results are stored in EEGLAB format and can be directly opened with the EEGLAB graphical user interface (GUI) for inspection or manual processing.</ns0:p></ns0:div>
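<ns0:div><ns0:p>A user-defined step can be sketched as below, assuming only the take-EEG-and-Cfg, return-both interface that the paper describes for CTAP * .m functions; the parameter field name used here is hypothetical, not CTAP's documented convention:</ns0:p><ns0:p>function [EEG, Cfg] = CTAP_my_custom_step(EEG, Cfg)
% default parameters for this illustrative step
args = struct('scale', 1);
% pick up user-supplied parameters, if any (hypothetical field name)
if isfield(Cfg, 'ctap') && isfield(Cfg.ctap, 'my_custom_step')
    args = Cfg.ctap.my_custom_step;
end
% placeholder operation standing in for real processing
EEG.data = EEG.data .* args.scale;
end</ns0:p></ns0:div>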
<ns0:div><ns0:head>Visualisations</ns0:head></ns0:div>
<ns0:div><ns0:head>Code reuse</ns0:head><ns0:p>The CTAP * .m functions act as wrappers that make it possible to combine methods to build analysis workflows. Most analysis steps are actually implemented as standalone functions, such that they can be used also outside CTAP. In contrast to EEGLAB, CTAP functions do not pop-up configuration windows that interfere with automated workflows.</ns0:p><ns0:p>Export facilities Exporting results might prove time consuming in Matlab as there are no highlevel tools to work with mixed text and numeric data. To this end, CTAP provides its own format of storing data and several export options. Small datasets can be exported as, e.g. comma delimited text (csv) while larger sets are more practically saved in an SQLite database. CTAP also offers the possibility to store single-trial and average ERP data in HDF5 format, which makes the export to e.g. R and Python simple.</ns0:p><ns0:p>In summary, CTAP lets the user focus on content, instead of time-consuming implementation of foundation functionality. In the rest of the paper, we describe how CTAP toolbox does this using a synthetic dataset as a running example.</ns0:p><ns0:p>We start with related work followed by the Materials & Methods section detailing the architecture and usage of CTAP. The Results section then describes the technical details and outcomes of a motivating example application. In the Discussion section we set out the philosophy and possible uses of CTAP toolbox, including development as well as preprocessing; and describe issues and potential directions for future work.</ns0:p></ns0:div>
<ns0:div><ns0:head>Related Work</ns0:head><ns0:p>Many methods are available from the literature to facilitate automated preprocessing <ns0:ref type='bibr' target='#b1'>(Agapov et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b4'>Baillet et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b6'>Barua and Begum, 2014)</ns0:ref>, and the rate of new contributions is also high 1 In a milestone special issue, <ns0:ref type='bibr' target='#b3'>Baillet et al. (2011)</ns0:ref> gathered many of the academic contributions available 1 For example, we conducted a search of the SCOPUS database for articles published after 1999, with 'EEG' and 'electroencephalography' in the title, abstract, or keywords, plus 'Signal Processing' or 'Signal Processing, Computer-Assisted' in keywords, and restricted to subject areas 'Neuroscience', 'Engineering' or 'Computer Science'. The search returned over 300 hits, growing year-by-year from 5 in 2000 up to a mean value of 36 between 2010 and 2015. at that time. This special issue is quite skewed toward tools for feature extraction, which illustrates again the need for better/more up-to-date solutions for the fundamental stages of EEG processing.</ns0:p><ns0:p>Among tools dedicated to EEG processing, EEGLAB stands out for its large user community and high number of third-party contributors, to the degree that it is considered by some to be a de facto standard. Although EEGLAB functions can be called from the command-line interface and thus built into a preprocessing pipeline by the user's own scripts, in practice this is a non-trivial error-prone task.</ns0:p><ns0:p>Other popular tools focus on a more diverse set of signals, especially including magnetoencephalography (MEG). Brainstorm <ns0:ref type='bibr' target='#b26'>(Tadel et al., 2011)</ns0:ref>, Fieldtrip <ns0:ref type='bibr' target='#b23'>(Oostenveld et al., 2011)</ns0:ref>, and EMEGS (ElectroMagnetic EncaphaloGraphy Software) <ns0:ref type='bibr' target='#b25'>(Peyk et al., 2011)</ns0:ref> are all open source tools for EEG and MEG data analysis. Brainstorm in particular, but also the others, have originated with an emphasis on cortical source estimation techniques and their integration with anatomical data.</ns0:p><ns0:p>Like EEGLAB, these tools are all free and open source, but based on the commercial platform Matlab (Natick, MA), which can be a limitation in some contexts due to high licence cost. The most notable commercial tool is BrainVISION Analyzer (Brain Products GmbH, Munich, Germany), a graphical programming interface with a large number of features.</ns0:p><ns0:p>Tools which are completely free and open source are fewer in number and have received much less supplemental input from third parties. Python tools include MNE-Python for processing MEG and EEG data <ns0:ref type='bibr' target='#b18'>(Gramfort et al., 2013)</ns0:ref>, and PyEEG <ns0:ref type='bibr' target='#b5'>(Bao et al., 2011)</ns0:ref>, a module for EEG feature extraction. MNE, like Brainstorm and Fieldtrip, is primarily aimed at integrating EEG and MEG data. Several packages exist for the R computing environment, e.g. <ns0:ref type='bibr' target='#b29'>Tremblay and Newman (2015)</ns0:ref>, however these do not seem to be intended as general-purpose tools.</ns0:p><ns0:p>However, CTAP was designed to complement the existing EEGLAB ecosystem, not to provide a stand-alone preprocessing tool. 
This is an important distinction, because there exist some excellent stand-alone tools which work across data formats and platforms <ns0:ref type='bibr' target='#b7'>(Bellec et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b24'>Ovaska et al., 2010)</ns0:ref> 2 ; these features are valuable when collaborators are trying to work across, e.g. Windows and Linux, Matlab and Python. However we do not see a need in this domain; rather we see a need in the much narrower focus on improving the command-line interface batch-processing capabilities of EEGLAB.</ns0:p><ns0:p>We have chosen to extend EEGLAB because it has received many contributions to the core functionality, and is thus compatible with a good portion of the methods of EEG processing from the literature. Some compatible tools from the creators of EEGLAB at the Swartz Centre for Computational Neuroscience (SCCN) are detailed in <ns0:ref type='bibr' target='#b14'>Delorme et al. (2011)</ns0:ref>, including tools for forward head modelling, estimating source connectivity, and online signal processing. Other key third-party preprocessing contributions to EEGLAB include SASICA <ns0:ref type='bibr' target='#b9'>(Chaumon et al., 2015)</ns0:ref>, FASTER <ns0:ref type='bibr' target='#b22'>(Nolan et al., 2010)</ns0:ref>, and ADJUST <ns0:ref type='bibr' target='#b21'>(Mognon et al., 2011)</ns0:ref>, all semi-automated solutions for selection of artefactual data.</ns0:p><ns0:p>In terms of similar tools, <ns0:ref type='bibr' target='#b8'>Bigdely-Shamlo et al. (2015)</ns0:ref> released the PREP pipeline for Matlab, which also uses the EEGLAB data structure. PREP introduces specific important functionality for referencing the data, line noise removal, and detecting bad channels. PREP is aimed only at experiment-induced artefacts and not those deriving from subject-activity such as, e.g. blinks, and is designed to be complementary to the various algorithm toolboxes for artefact-removal by focusing on early-stage processing. In similar vein, CTAP is intended to be complementary to existing 2 Also NeuroPype, a commercial Python-based graphical programming environment for physiological signal processing. However, to the authors' knowledge, it has not been documented in a peer reviewed publication.</ns0:p></ns0:div>
<ns0:div><ns0:head>5/25</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:06:11265:1:1:NEW 2 Dec 2016)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science For example, methods from FASTER and ADJUST are featured in CTAP as options for detecting bad data. This integration of existing solutions illustrates one core principle of CTAP: it aims to extend an existing rich ecosystem of EEG-specific methods, by meeting a clear need within that ecosystem for a workflow management system. The ready-made automation of batching and bookkeeping gives the user a distinct advantage over the common approach of 'EEGLAB + a few scripts', which seems simple on its face, but in practice is non-trivial as the number and complexity of operations grows. As all algorithms added to CTAP will produce quality control outputs automatically, fast performance comparison is possible between methods or method parameters, speeding the discovery of (locally) optimal solutions. The system has potential to enable such parameter optimization by automated methods, although this is not yet implemented.</ns0:p></ns0:div>
<ns0:div><ns0:head>MATERIALS & METHODS</ns0:head><ns0:p>The core activity of CTAP is preprocessing EEG data by cleaning artefacts, i.e. detection and either correction or removal of data that is not likely to be attributable to neural sources. CTAP is able to operate on three different temporal granularities: channel, epoch and segment. Channel operations affect the entire time series at one spatial location. Epoch operations are performed on one or several epochs produced by EEGLAB epoching function. Finally, segments are fixed time-windows around specific events which can be extracted from both channel and epoch levels, see Figure <ns0:ref type='figure' target='#fig_1'>1</ns0:ref>. An example of a typical segment could be a blink artefact with a window wide enough to include the entire blink waveform. Further functionality is provided for independent component analysis (ICA)-based methods. Artefact-detection methods based on some flavour of ICA algorithm have been shown to outperform temporal approaches <ns0:ref type='bibr' target='#b16'>Delorme et al. (2007)</ns0:ref>. It was also shown that independent components (ICs) are valid representations of neural sources <ns0:ref type='bibr' target='#b15'>(Delorme et al., 2012)</ns0:ref>.</ns0:p><ns0:p>CTAP can thus help to combine the existing methods for EEG signal processing.</ns0:p></ns0:div>
<ns0:div><ns0:head>Outline of usage</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_2'>2</ns0:ref> shows the core components of CTAP. The coloured boxes represent entities that the user has to specify in order to use CTAP. These are:</ns0:p><ns0:p>• what analysis functions to apply and in which order (analysis pipe)</ns0:p><ns0:p>• analysis environment and parameters for the analysis functions (configuration)</ns0:p><ns0:p>• which EEG measurements/files to process (measurement configuration)</ns0:p><ns0:p>Typically, the analysis is run by calling a single script that defines all of the above and passes these on to the CTAP pipeline looper.m function, that performs all requested analysis steps on all specified measurements. In the following, we describe in more detail how the configurations </ns0:p></ns0:div>
<ns0:div><ns0:head>Configuration</ns0:head><ns0:p>In CTAP a large analysis is broken down into a hierarchical set of smaller entities: steps, step sets, pipes and branches. Several analysis steps form a step set and an ordered sequence of step sets is called a pipe. Pipes can further be chained to form branches. The smallest unit is the analysis step which might be e.g. a filtering or a bad channel detection operation. A step is represented by a single call to a CTAP * .m -function.</ns0:p><ns0:p>Step sets and pipes are used to chop the analysis down into smaller chunks that are easy to move around if needed.</ns0:p><ns0:p>Intermediate saves are performed after each step set and therefore the organisation of steps into step sets also affects the way the pipe shows up on disk. Intermediate saves provide a possibility run the whole analysis in smaller chunks and to manually check the mid-way results as often needed, e.g., while debugging. Further on, the ability to create branches is important to help explore alternative ways of analysing the same data.</ns0:p><ns0:p>To specify the order of steps and sets within a pipe, we recommend to create a single m-file for each intended pipe 4 . This file will define both the step sets as well as all the custom parameters to be used in the steps. Default parameters are provided, but it is optimal to fine tune the behaviour by providing one's own parameters. Both pipe and parameter information is handled using data structures, rather than hard-coding. CTAP then handles assignment of parameters to functions based on name matching.</ns0:p><ns0:p>Once the steps and their parameters are defined, the last requirement to run the pipe is to define the input data. In CTAP the input data are specified using a table-like structure called measurement config that lists all available measurements, the corresponding raw EEG files etc. This dedicated measurement config data structure allows for an easy selection of what should be analysed and it also helps to document the project. It can be created manually, or auto-generated based on a list of files or a directory. The former allows for full control and enforces project documentation whereas the latter is intended for effortless one-off analyses. Both spreadsheet and SQLite formats are supported.</ns0:p><ns0:p>In the last required step before pipeline execution, the configuration struct and the parameter struct are checked, finalised and integrated by cfg ctap functions.m.</ns0:p></ns0:div>
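<ns0:div><ns0:p>As a minimal sketch of such a pipe definition (in the stepSet style used by the example in the Analysis steps section; the parameter field names and values below are illustrative assumptions, not CTAP defaults), one might write:</ns0:p><ns0:p>stepSet(1).id = '1_load';
stepSet(1).funH = {@CTAP_load_data, @CTAP_load_chanlocs, @CTAP_reref_data};
stepSet(2).id = '2_filter_detect';
stepSet(2).funH = {@CTAP_fir_filter, @CTAP_detect_bad_channels, @CTAP_reject_data};
% parameters live in their own struct and are matched to functions by name;
% field names and values here are assumptions for illustration only:
ctap_args.fir_filter.locutoff = 1;                % high-pass cut-off in Hz
ctap_args.detect_bad_channels.method = 'variance';</ns0:p></ns0:div>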
<ns0:div><ns0:head>Pipe execution</ns0:head><ns0:p>Once all prerequisites listed above have been specified, the core CTAP pipeline looper. Users can also call the ctapeeg .m functions directly as part of their own custom scripts, since these are meant to be used like e.g. any EEGLAB analysis function.</ns0:p><ns0:p>Analysis results are saved separately for each pipe. A typical structure contains:</ns0:p><ns0:p>• intermediate results as EEGLAB datasets, in one directory per step set; names are taken from the step set IDs as defined by the user, prefixed by step number.</ns0:p><ns0:p>• export directory contains exported feature data (txt, csv or SQLite format).</ns0:p><ns0:p>• features directory: computed EEG features in Matlab format.</ns0:p><ns0:p>• logs directory: log files from each run. 4 For an example, see the cfg manu.m in the repository.</ns0:p></ns0:div>
<ns0:div><ns0:p>• quality control directory: quality control plots, reflecting the visualisations of analysis steps chosen by the user.</ns0:p><ns0:p>Apart from running the complete pipe at once, the user has many options to run just a subset of the pipe, analyse only certain measurements, or otherwise adjust usage. Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> gives some examples.</ns0:p></ns0:div>
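<ns0:div><ns0:p>A hypothetical top-level run script could then look as follows (a sketch only: apart from 'trackfail', which is mentioned in the Discussion, the field and argument names are assumptions rather than documented interface):</ns0:p><ns0:p>Cfg.MC = readtable('measurements.xlsx');      % spreadsheet-format measurement config
Cfg.pipe.stepSets = stepSet;                  % step sets defined as above
Cfg.pipe.runSets = {'2_filter_detect'};       % run only a subset of the pipe
CTAP_pipeline_looper(Cfg);                    % apply requested steps to all measurements
% CTAP_pipeline_looper(Cfg, 'trackfail', true); % keep data from crashed files for inspection</ns0:p></ns0:div>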
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Almost all EEG processing methods in CTAP are either novel or rewritten from original source, usually because of the unintended side-effects of the original code, such as graphical pop-ups. Thus the outputs are similar to those of original EEGLAB or other toolbox methods, but the code base is refactored.</ns0:p><ns0:p>The highlights of available CTAP * .m functions include:</ns0:p><ns0:p>• functions to load data (and extract non-EEG data, e.g. ECG), events (and modify them), or channel locations (and edit them);</ns0:p><ns0:p>• functions to filter, subset select (by data indices or by events), re-reference, epoch, or perform ICA on the data;</ns0:p><ns0:p>• functions to detect artefactual data, in channels, epochs, segments or ICA components, including:</ns0:p><ns0:p>variance (channels),</ns0:p><ns0:p>amplitude threshold (epochs, segments, ICA components), -EEGLAB's channel spectra method (channels, epochs),</ns0:p><ns0:p>metrics from the FASTER toolbox (channels, epochs, ICA components),</ns0:p><ns0:p>metrics from the ADJUST toolbox (ICA components),</ns0:p><ns0:p>additionally bad data can be marked by events where detection is performed by some external method;</ns0:p><ns0:p>• functions to reject bad data, normalise, or interpolate;</ns0:p><ns0:p>• functions to extract time and frequency domain features, and create visualisations of data (as described below).</ns0:p></ns0:div>
<ns0:div><ns0:head>Outputs</ns0:head><ns0:p>CTAP provides a number of novel outputs for evaluation and data management. Data-points include trimmed and untrimmed versions of mean, median, standard deviation as well as skewness, kurtosis and normality testing. The set of statistics estimated for every data channel is saved in Matlab table format and also aggregated to a log file.</ns0:p><ns0:p>Feature export Extracted EEG features are stored internally as Matlab structs that fully document all aspects of the data. These can be used to do statistical analysis inside Matlab. However, often users like to do feature processing in some other environment such as R or similar. For this, CTAP provides export functionality that transforms the EEG feature mat files into txt/csv text files, and/or an SQLite database. For small projects (for example, up to 10 subjects and 16 channels) txt/csv export is feasible but for larger datasets SQLite is more practical.</ns0:p></ns0:div>
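<ns0:div><ns0:p>The kind of per-channel statistics listed above can be illustrated with plain Matlab (this is not CTAP's own code; the trimming proportion is an assumed value, and skewness and kurtosis require the Statistics and Machine Learning Toolbox):</ns0:p><ns0:p>x = double(EEG.data(1, :));           % one channel of a loaded EEGLAB dataset
trim = 0.05;                          % trimming proportion, assumed value
xs = sort(x);
k = round(trim * numel(xs));
xt = xs(k + 1:end - k);               % trimmed sample
chanStats = struct('mean', mean(x), 'trimmedMean', mean(xt), ...
    'median', median(x), 'sd', std(x), ...
    'skewness', skewness(x), 'kurtosis', kurtosis(x));</ns0:p></ns0:div>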
<ns0:div><ns0:head>System evaluation</ns0:head><ns0:p>To showcase what CTAP can do we present in this paper the output of an example analysis using synthetic data. The example is part of the CTAP repository; methods are chosen to illustrate the range of possibilities in CTAP, rather than for the qualities of each method itself. Thus for example, we include the CTAP-specific blink-correction method alongside simple amplitude thresholding, to exemplify different ways to handle artefacts.</ns0:p><ns0:p>Toy data CTAP provides a motivating example that can also be used as a starting point for one's own analysis pipe. The example is based on synthetically generated data with blink, myogenic (EMG), and channel variance artefacts to demonstrate the usage and output of CTAP. The example is part of the repository and the details of the synthetic data generation process are documented in the wiki 5 . Shortly, synthetic data is generated from seed data using generate synthetic data manuscript.m, which first converts the example dataset to EEGLAB-format and then adds artefacts to the data. Seed data included in the repository is from the BCI competition IV dataset 1 6 , recorded with BrainAmp MR plus at 100 Hz on 59 channels. The generated 10 minute dataset is sampled at 100 Hz and has 128 EEG channels, two mastoid channels and four EOG channels. It occupies ∼32MB on disk.</ns0:p><ns0:p>Artefacts added to the data include 100 blinks (generated by adding an exponential impulse of fixed duration, with amplitude that decreases linearly from front to rear of the scalp-map); and 50 periods of EMG (generated by adding a burst of noise across an arbitrary frequency band, at a high amplitude that decreases linearly away from a random centre-point). Also six channels are 'wrecked' by randomly perturbing the variance, either very high (simulating loose electrodes) or very low (simulating 'dead' electrodes).</ns0:p></ns0:div>
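<ns0:div><ns0:p>To give a flavour of the blink simulation described above, a simplified sketch is shown below (illustrative only; the constants are not those of generate synthetic data manuscript.m, and channels are assumed to be ordered from front to rear):</ns0:p><ns0:p>fs = EEG.srate;                                  % 100 Hz in the example data
t = 0:1/fs:0.3;                                  % assumed 300 ms blink window
impulse = 200 * exp(-t / 0.05);                  % exponential impulse, microvolt scale
weights = linspace(1, 0.1, size(EEG.data, 1))';  % amplitude decreases front to rear
onset = 1000;                                    % arbitrary sample index for the blink
idx = onset:onset + numel(impulse) - 1;
EEG.data(:, idx) = EEG.data(:, idx) + weights * impulse;</ns0:p></ns0:div>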
<ns0:div><ns0:head>Analysis steps</ns0:head><ns0:p>An example pipeline, described in the CTAP repository in the file cfg manu.m, is run on the synthetic data using runctap manu.m. Here we describe the non-trivial analysis steps in order of application. For each step, we first describe the method; then the Results section shows the generated</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>outcomes in terms of data quality control statistics and visualisations. The pipe below is shown to illustrate context of the steps, and is an abridged version of the repository code.</ns0:p><ns0:p>stepSet(1).id = '1_LOAD'; stepSet(1).funH = {@CTAP_load_data,... @CTAP_load_chanlocs,... @CTAP_reref_data,... @CTAP_peek_data,... @CTAP_blink2event};</ns0:p><ns0:p>stepSet(2).id = '2_FILTER_ICA'; stepSet(2).funH = {@CTAP_fir_filter,... @CTAP_run_ica}; stepSet(3).id = '3_ARTIFACT_CORRECTION'; stepSet(3).funH = {@CTAP_detect_bad_comps,... @CTAP_filter_blink_ica,... @CTAP_detect_bad_channels,... @CTAP_reject_data,... @CTAP_interp_chan, ... @CTAP_detect_bad_segments,... @CTAP_reject_data,... @CTAP_run_ica,...</ns0:p></ns0:div>
<ns0:div><ns0:head>@CTAP_peek_data};</ns0:head><ns0:p>Before-and-after 'Peeks' The CTAP peek data.m function is called near the start (after initial loading and re-referencing) and the end of the pipe. Visual inspection of raw data is a fundamental step in EEG evaluation, and quantitative inspection of channel-wise statistics is also available. A logical approach is to compare raw data at the same time-points from before and after any correction operations. If ICA-based corrections are made, the same approach can also be used on the raw IC data. CTAP peek data.m expedites this work, and thus helps to regularise data inspection and facilitate comparison.</ns0:p><ns0:p>CTAP peek data.m will generate raw data plots and statistics of a set of time-points (points are generated randomly by default, or can be locked to existing events). These 'peek-points' are embedded as events which can then generate peeks at a later stage in the pipe, allowing true before-and-after comparisons even if the data time course changes (due to, e.g., removal of segments). If no peek-point data remains at the after-stage, no comparison can be made; however (especially if peek-points are randomly chosen), such an outcome is itself a strong indication that the data is very bad, or the detection methods are too strict. Filtering CTAP filtering produces plots of filter output and tests of functionality as standard.</ns0:p><ns0:p>CTAP fir filter.m uses the firfilt-plugin 8 to do filtering, as it replaces the deprecated function pop eegfilt.m and provides more sensible defaults. Version 1.6.1 of firfilt ships with EEGLAB.</ns0:p><ns0:p>Other CTAP-supported filtering options are described in documentation.</ns0:p><ns0:p>Blink removal Blinks can either be rejected or corrected. We showcase correction using a method that combines blink-template matching and FIR high-pass filtering of blink-related ICs following ideas presented by <ns0:ref type='bibr' target='#b20'>Lindsen and Bhattacharya (2010)</ns0:ref>. The method is not part of EEGLAB, but an add-on provided by CTAP 9 .</ns0:p><ns0:p>Bad ICA component detection is performed by first creating ICs with CTAP run ica.m 10 , and then using one of several options from CTAP detect bad comps.m to detect artefactual ICs.</ns0:p><ns0:p>The blink template option compares mean activity of detected blink events to activations for each IC.</ns0:p><ns0:p>CTAP filter blink ica.m is used to filter blink-related IC data, and reconstruct the EEG using the cleaned components. The success of the blink correction is evaluated using blink Evoked Response Potentials (ERPs), which are simply ERPs computed for blink events (see e.g. <ns0:ref type='bibr' target='#b17'>Frank and Frishkoff (2007)</ns0:ref> for details).</ns0:p><ns0:p>Detect raw-data artefacts Bad channels were detected based on channel variance, with the function vari bad chans.m. Log relative variance χ was computed for all channels using the formula χ = log(channel variance / median(channel variance)). Values of χ more than three median absolute deviations away from median(χ) were interpreted as deviant and labelled as bad.</ns0:p><ns0:p>For bad segments, i.e. short segments of bad data over multiple channels, a common approach (in e.g. EEGLAB) is analysis of fixed length epochs, which is good for ERP experiments. Alternatively, for working with continuous data, CTAP also provides the option of amplitude histogram thresholding.
Many types of large artefacts can be easily found using simple histogram-based thresholding: a predefined proportion of most extreme amplitude values are marked as artefacts and segments are expanded around these. This can improve, e.g., ICA analysis of low density EEG by freeing ICs to capture neural source signals.</ns0:p><ns0:p>For all CTAP detect bad * .m functions, for whichever detection method option is used (user-defined options are also straightforward to add), a field is created in the EEG struct to store the results. Another field collects pointers to all results detected before a rejection. This logic allows the user to call one or many detection functions, possibly pooling the results of several approaches to bad data detection, and then pass the aggregate results to the CTAP reject data.m function.</ns0:p><ns0:p>Footnotes: 7 See code repository at https://github.com/bwrc/eogert. 8 https://github.com/widmann/firfilt. 9 Including all parts described above, this particular blink-correction method is unique to CTAP. 10 Default algorithm is FastICA, requiring the associated toolbox on the user's Matlab path.</ns0:p><ns0:p>Rejection CTAP usage logic suggests that one or more detect operations for a given data type, e.g. channels, or epochs, or components, should be followed by a reject operation. It is bad practice to detect bad data across modalities, e.g. channels and epochs, before rejecting any of it, because artefacts of one type may affect the other. CTAP reject data.m checks the detect field to determine which data type is due for rejection, unless explicitly instructed otherwise. Based</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>on the data labelled by prior calls to detection functions, CTAP reject data.m will call an EEGLAB function such as pop select.m to remove the bad data. Upon rejection, visualisation tools described are used to produce plots that characterise the rejected components.</ns0:p><ns0:p>Note that data rejection is only necessary if there exists no method to correct the data, e.g. as is provided for the CTAP blink removal method. In that case the call to the CTAP detect bad * .m function is not followed by a call to CTAP reject data.m, because the method corrects the artefactual ICs rather than simply deleting them.</ns0:p><ns0:p>After Peek Finally the CTAP peek data.m function is called again, providing comparator data at the same points as the initial peek call. A useful approach is to call CTAP run ica.m again after all artefact correction steps. The resulting set of raw IC activations can be plotted by calling CTAP peek data.m, and a careful examination should reveal the presence or absence of any remaining sufficiently large artefacts. This is a convenient way to, for example, determine whether the blink detection has identified all blink ICs.</ns0:p></ns0:div>
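<ns0:div><ns0:p>As a compact illustration of the two detection rules used in this pipe (a sketch only, not the code of vari bad chans.m or CTAP detect bad segments.m; the extreme-value proportion and the padding length are assumed values):</ns0:p><ns0:p>% bad channels: log relative variance more than three MADs from its median
chanVar = var(double(EEG.data), 0, 2);            % per-channel variance
chi = log(chanVar ./ median(chanVar));            % log relative variance
madChi = median(abs(chi - median(chi)));          % median absolute deviation
badChans = abs(chi - median(chi)) > 3 * madChi;
% bad segments: extreme amplitudes, expanded into windows around each hit
amp = abs(double(EEG.data(~badChans, :)));
ampSorted = sort(amp(:));
bound = ampSorted(round(0.9999 * numel(ampSorted))); % assumed proportion
hits = find(any(amp > bound, 1));
pad = round(0.5 * EEG.srate);                     % assumed 0.5 s expansion
badSegs = false(1, size(amp, 2));
for h = hits
    badSegs(max(1, h - pad):min(numel(badSegs), h + pad)) = true;
end</ns0:p></ns0:div>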
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>In this section, we show the output of CTAP as applied to the synthetic dataset, based on the analysis-pipe steps shown above. The pipe outputs ∼30MB of EEG data after each step set, thus after debugging all steps can be expressed as one set, and data will occupy ∼62MB (before and after processing). Additionally the quality control outputs of this pipe occupy ∼70MB of space, mostly in the many images of the peek-data and reject-data functions.</ns0:p></ns0:div>
<ns0:div><ns0:head>Before-and-after 'Peeks'</ns0:head><ns0:p>Raw data Figure <ns0:ref type='figure'>3</ns0:ref> shows raw data before and after preprocessing.</ns0:p></ns0:div>
<ns0:div><ns0:head>EEG amplitudes</ns0:head><ns0:p>The signal amplitude histograms of a sample of good and bad channels from the synthetic data set are shown in Figure <ns0:ref type='figure' target='#fig_7'>4</ns0:ref>. This can be useful for finding a suitable threshold for bad segment detection, or e.g. to detect loose electrodes. The post-processing plots show the improvement in channel normality.</ns0:p><ns0:p>Statistical comparison Some of the first-order statistics calculated for before-and-after comparisons are plotted in Figure <ns0:ref type='figure' target='#fig_8'>5</ns0:ref>, averaged over all channels. This method allows inspection of global change in the signal, which overall can be expected to become less broad (smaller range) and less variable (smaller SD) after cleaning of artefacts.</ns0:p></ns0:div>
<ns0:div><ns0:head>Blink detection</ns0:head><ns0:p>The EOGERT blink detection process visualises the classification result for quality control purposes, as shown in Figure <ns0:ref type='figure' target='#fig_9'>6</ns0:ref>. Such figures make it easy to spot possible misclassifications. In our example, all 100 blinks inserted into the synthetic data were detected.</ns0:p></ns0:div>
<ns0:div><ns0:head>Filtering</ns0:head><ns0:p>Figure <ns0:ref type='figure' target='#fig_10'>7</ns0:ref> shows one of the outputs of the FIR filtering. This figure can be used to check that the filter has the desired effect on power spectrum and that its response to a unit step function is reasonable.</ns0:p></ns0:div>
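<ns0:div><ns0:p>A generic way to reproduce this kind of check is sketched below, using base Matlab and the Signal Processing Toolbox rather than the firfilt plugin; the filter order and cut-off are assumed values:</ns0:p><ns0:p>b = fir1(330, 1 / (EEG.srate / 2), 'high');    % 1 Hz high-pass FIR, assumed order
x = double(EEG.data(1, :));
y = filtfilt(b, 1, x);                         % zero-phase filtering
[pxx, f] = pwelch(x, [], [], [], EEG.srate);   % spectrum before filtering
pyy = pwelch(y, [], [], [], EEG.srate);        % spectrum after filtering
stepIn = [zeros(1, 500), ones(1, 500)];
stepOut = filtfilt(b, 1, stepIn);              % response to a unit step</ns0:p></ns0:div>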
<ns0:div><ns0:head>Blink removal</ns0:head><ns0:p>Bad ICA component detection An example of a plot for ICA rejection is given in Figure <ns0:ref type='figure' target='#fig_11'>8</ns0:ref>,</ns0:p><ns0:p>showing some basic properties of a blink-related IC. Figure <ns0:ref type='figure'>3</ns0:ref>. Raw EEG data centered around a synthetic blink (A) before preprocessing, and (B) after preprocessing. The blinks have been largely removed and the EEG activity around blinks has remained intact. Note that the y-axis scales differ slightly.</ns0:p></ns0:div>
<ns0:div><ns0:head>Filter blink IC data</ns0:head><ns0:p>The ERP-evaluated success of the blink correction is shown in Figure <ns0:ref type='figure' target='#fig_12'>9</ns0:ref>. The correction method clearly removes most of the blink activity. As blink related ICs are corrected instead of rejected, effects on the underlying EEG are smaller. The result may have some remainder artefact (e.g. visible in Figure <ns0:ref type='figure'>3</ns0:ref> as small spikes after five seconds in channels C17, C18), which may motivate the complete removal of blink-related ICs instead of filtering.</ns0:p></ns0:div>
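<ns0:div><ns0:p>A blink ERP of the kind used here can be sketched in a few lines (illustrative only; the event label 'blink' and the window length are assumptions, not CTAP's internal naming):</ns0:p><ns0:p>evLat = [EEG.event(strcmp({EEG.event.type}, 'blink')).latency];
win = round(-0.3 * EEG.srate):round(0.7 * EEG.srate);   % one-second window per blink
evLat = evLat(evLat + win(1) > 0 & size(EEG.data, 2) - evLat >= win(end));
segs = zeros(size(EEG.data, 1), numel(win), numel(evLat));
for k = 1:numel(evLat)
    segs(:, :, k) = EEG.data(:, round(evLat(k)) + win);
end
blinkERP = mean(segs, 3);               % channel-by-time average around blinks</ns0:p></ns0:div>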
<ns0:div><ns0:head>Detect & reject raw-data artefacts</ns0:head><ns0:p>Bad channels In total 10 bad channels were found which included all six 'wrecked' channels -this shows the algorithm is slightly greedy, which is probably preferable in the case of a high-resolution electrode set with over 100 channels. Bad channels are rejected and interpolated before proceeding (not plotted as it is a straightforward operation).</ns0:p><ns0:p>Bad segments An example of bad segment detection, using simple histogram-based amplitude thresholding, is shown in Figure <ns0:ref type='figure' target='#fig_1'>10</ns0:ref>. In this case the bad data is high amplitude EMG but in a general setting e.g. motion artefacts often exhibit extreme amplitudes. Using these figures the user can quickly check what kind of activity exceeds the amplitude threshold in the dataset. Of the 50 EMG artefacts inserted in the synthetic data, 37 still existed at least partially, at the end of pipe. The low rejection percentage is due to the fact that EMG is more of a change in frequency spectrum than in amplitude, yet the pipe looked for deviant amplitudes only.</ns0:p></ns0:div>
<ns0:div><ns0:head>After Peek</ns0:head><ns0:p>The data comparisons after various artefact removal operations, Figures <ns0:ref type='figure' target='#fig_7'>3 & 4</ns0:ref>, illustrate the success or failure of the pipe. Of course there are a large number of permutations for how this can be done -it is the CTAP philosophy to facilitate free choice among these options, with the least implementation overhead. Additionally, the final plots of raw IC activations should show if there remains any artefacts in the data. For example, Figure <ns0:ref type='figure' target='#fig_13'>11</ns0:ref> shows a segment of raw data for the first 1/3 of ICs for </ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>We have presented CTAP, an EEG preprocessing workflow-management system that provides extensive functionality for quickly building configurable, comparative, exploratory analysis pipes. Already by shifting the researcher's focus from scripting to analysis, CTAP can help reduce human effort, subjectivity, and consequent error. Specifically, the system can reduce the work load of the user by streamlining analysis specification away from function coding. It can improve reliability and objectivity of the analysis by helping users treat each file in a dataset in a uniform, regular manner. CTAP output can also be more easily reproduced because manual processing steps have been minimised. This enables the user to perform multiple comparative analyses for testing the robustness of the results against different preprocessing methods.</ns0:p></ns0:div>
<ns0:div><ns0:head>Philosophy, Benefits and Issues</ns0:head><ns0:p>CTAP provides many default parameters, and streamlines many features into a handful of wrapper functions. This is in order to facilitate rapid build and testing of analysis pipes. The philosophy is to prevent users becoming stuck in a single approach to the data because they have invested time in building the preprocessing code for it from scratch; or worse, because they have completed a laborious manual processing task and cannot afford to repeat it. CTAP structures pipes in function,argument specification files. This approach, instead of only making scripts that call the required set of functions directly, has several benefits. Function names and parameters become objects available for later processing, so one can operate on them, e.g. to record what was done to logs, or to swap functions/parameters on the fly, or to check the specification of the pipe. By specifying new approaches in new pipe files, and saving already-tried pipe files, one can treat the files as a record of attempted preprocesses. This record corresponds to the user's perspective, and thus complements the additional history structure saved to the EEG file, which records all parameters for each operation not only those specified by the user. Finally, the user should not usually rely on defaults (as given by CTAP, EEGLAB, or other toolboxes), because the optimal choice often depends on the data. This is also one reason to separately define pipeline and parameters.</ns0:p><ns0:p>Separating these as objects is convenient for e.g. testing multiple parameter configurations. A single As different analysis strategies and methods can vary greatly, CTAP was implemented as a modular system. Each analysis can be constructed from discrete steps which can be implemented as stand-alone functions. As CTAP is meant to be extended with custom analysis functions the interface between core CTAP features and external scripts is well defined in the documentation. The only requirement is to suppress any pop-ups or GUI-elements, which would prevent the automatic execution of the analysis pipe 11 . It is also up to the user to call the functions in the right order.</ns0:p><ns0:p>The system supports branching. This means that the analysis can from a tree-like structure, where some stage is used as input for multiple subsequent workflows. To allow this, any pipe can act as a starting point for another pipe. The CTAP repository provides a simple example get the user going. For branches to appear, a bare minimum is a collection of three pipes of which one is run first. The other two both act on this output but in different ways. Currently the user is responsible for calling the pipes of a branched setting in a meaningful order. However, this is straightforward to implement and having the analysis logic exposed in the main batch file makes it e.g. easy to run only a subset of the branches.</ns0:p><ns0:p>Although CTAP works as a batch processing pipeline, it supports seamless integration of manual operations. This works such that the user can define a pipeline of operations, insert save points at appropriate steps, and work manually on that data before passing it back to the pipe. 
The main extra benefit that CTAP brings is to handle bookkeeping for all pipeline operations, such that manual operations become exceptional events that can be easily tracked, rather than one more in a large number of operations to manage.</ns0:p><ns0:p>CTAP never overrides the user's configuration options, even when these might break the pipe.</ns0:p><ns0:p>For example, CTAP reject data.m contains code to auto-detect the data to reject. However the user can set this option explicitly, and can do so without having first called any corresponding detection function, which will cause preprocessing on that file to fail. Allowing this failure to happen is the most straightforward approach, and ultimately more robust. Combined with an informative error message the user gets immediate feedback on what is wrong with the pipe.</ns0:p><ns0:p>On the other hand, CTAP does provide several features to handle failure gracefully. As noted, the pipe will not crash if a single file has an unrecoverable error, although that file will not be processed further. This allows a batch to run unsupervised. Then, because no existing outputs are overwritten automatically, one can easily mop-up the files that failed without redoing all those that succeeded, if the fault is identified. Because pipes can be divided into step sets, tricky processes that are prone to failure can be isolated to reduce the overall time spent on crash recovery. CTAP saves crashed files at the point of failure (by setting the parameter 'trackfail' in CTAP pipeline looper.m), permitting closer analysis of the problematic data.</ns0:p><ns0:p>In contrast to many analysis plugins built on top of EEGLAB, no GUI was included in CTAP.</ns0:p><ns0:p>While GUIs have their advantages (more intuitive data exploration, easier for novice users, etc) there is a very poor return on investment for adding one to a complex batch-processing system like CTAP. A GUI also sets limits to configurability and can constrain automation if CTAP is executed on a hardware without graphical capabilities. The absence of GUI also makes the development of extensions easier as there are fewer dependencies to handle.</ns0:p><ns0:p>In contrast to many other broad-focus physiological data analysis tools, CTAP is designed to meet a very focused goal with a specific approach. This does however create some drawbacks.</ns0:p><ns0:p>Compared to scripting ones own pipeline from scratch, there are usage constraints imposed by the heavy use of struct-passing interfaces. Some non-obvious features may take time to master, and it can be difficult (albeit unnecessary) to understand the more complex underlying processes.</ns0:p><ns0:p>CTAP is also built to enable easy further development by third parties, by using standardised interfaces and structures. This was a feature of original EEGLAB code, but contrasts with many of the EEGLAB-compatible tools released since, whose functionality was often built in an ad hoc manner. The main requirement for development is to understand the content and purpose of the EEG.CTAP field (which is extensively documented in the wiki), and the general logic of CTAP.</ns0:p><ns0:p>Developers can easily extend the toolbox by using (or emulating) the existing ctapeeg * .m functions, especially the ctapeeg detect * .m functions, which are simply interfaces to external tools for detecting artefacts. 
Existing CTAP * .m functions can be relatively more complex to understand, but the existing template provides a guideline for development with the correct interface.</ns0:p></ns0:div>
<ns0:div><ns0:head>Future work</ns0:head><ns0:p>CTAP is far from finalised, and development will continue after the initial release of the software.</ns0:p><ns0:p>The main aim of future work is to evolve CTAP from workflow management towards better automation, with computational comparative testing of analysis methods, to discover optimal parameters and help evaluate competing approaches.</ns0:p><ns0:p>As stated above, the potential to fully automate EEG processing is constrained by the indeterminacy of EEG: known as the inverse problem, this means that it is not possible to precisely determine a ground truth for the signal, i.e. a unique relationship to neural sources. The signal can also be highly variable between individuals, and even between intra-individual recording sessions <ns0:ref type='bibr' target='#b11'>(Dandekar et al., 2007)</ns0:ref>. These factors imply that there cannot be a general algorithmic solution to extract neurally-generated electrical field information from EEG, thus always requiring some human intervention. By contrast, for example in magnetoencephalography certain physical properties of the system permit inference of sources even from very noisy data <ns0:ref type='bibr' target='#b27'>(Taulu and Hari, 2009)</ns0:ref> (although recording of clean data is always preferable, it is not always possible, e.g. with deep brain stimulation patients <ns0:ref type='bibr' target='#b2'>Airaksinen et al. (2011))</ns0:ref>. While many publications have described methods for processing EEG for different purposes, such as removing artefacts, estimating signal sources, analysing event-related potentials (ERPs), and so on. However despite the wealth of methodological work done, there is a lack of benchmarking, or tools for comparison of such methods. The outcome is that the most reliable way to assess each method is to learn how it works, apply it, and test the outcome on one's own data: this is a highly time-consuming process which is hardly competitive with simply performing the bulk of preprocessing in a manual way, as seems to remain the gold standard. The effect of each method on the data is also not commonly characterised, such that methods to correct artefacts can often introduce noise to the data, especially where there was no artefact (false positives).</ns0:p><ns0:p>Thus, we also aim to enable testing and comparison of automated methods for preprocessing. This is still work in progress, as we are building an extension for CTAP that improves testing and comparison of preprocessing methods by repeated analyses on synthetic data. This extension, tentatively titled Handler for sYnthetic Data and Repeated Analyses (HYDRA), will use synthetic data to generate ground-truth controlled tests of preprocessing methods. It will have capability to generate new synthetic data matching the parameters of the lab's own data, and compare outcomes of methods applied to this data in a principled computational manner. This will allow experimenters to find good methods for their data, or developers to flexibly test and benchmark their novel methods.</ns0:p><ns0:p>Another desirable, though non-vital, future task is to expand the quality control output, to include functionality such as statistical testing of detected bad data, for the experimenter to make a more informed decision. Although statistical testing is already implied in many methods of bad data detection, it is not visible to users. 
This will take the form of automated tools to compare output from two (or more) peeks, to help visualise changes in both baseline level and local wave forms. Such aims naturally complement the work of others in the field, and it is hoped that opportunities arise to pool resources and develop better solutions by collaboration.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>The ultimate goal of CTAP is to improve on typical ways of preprocessing high-dimensional EEG data through a structured framework for automation.</ns0:p><ns0:p>We will meet this goal via the following three steps: a) facilitate processing of large quantities of Manuscript to be reviewed Computer Science algorithms to tune the thresholds of statistical selection methods (for bad channels, epochs, segments or components) to provide results which are robust enough to minimise manual intervention.</ns0:p><ns0:p>We have now addressed aim a), partly also b), and laid the groundwork to continue developing solutions for c). Thus the work described here provides the solid foundation needed to complete CTAP, and thereby help to minimise human effort, subjectivity and error in EEG analysis; and facilitate easy, reliable batch processing for experts and novices alike.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>It is always good practice to check how the analysis alters the data. CTAP provides several basic visualisations for this task giving the user additional insight into what is going on. See section Results for examples.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Relationship of the time domain data constructs dealt with in CTAP.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. An overview of the core logic of CTAP. 'configuration', 'analysis pipe' and 'measurement config' illustrate the parts that a user must specify. White boxes represent Matlab functions, with the function-name on top.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head /><ns0:label /><ns0:figDesc>The CTAP_pipeline_looper.m function is called to run the pipe. This function takes care of loading the correct (initial or intermediate) data set, applying the specified functions from each step set, and intermediate saving of the data. The looper manages error handling such that it is robust to crashing (unless in Debug mode), and will simply skip the remaining steps for a crashed file. Other settings determine how to handle crashed files at later runs of the pipe (see Documentation). CTAP_pipeline_looper.m is designed to accept functions named CTAP_*.m, as these are defined to have a fixed interface. They take two arguments: data (EEG) and configuration struct (Cfg); and they return the same after any operations. Some CTAP_*.m perform all operations (e.g. call EEGLAB functions) directly, while others call a corresponding ctapeeg_*.m function that actually implements the task. Hence CTAP_*.m functions can be regarded as wrappers that facilitate batch processing by providing a uniform interface. They also implement e.g. the plotting of quality control figures. Since CTAP_*.m functions are quite simple, new ones can easily be added by the user to include new analysis steps, working from the provided CTAP_template_function.m.</ns0:figDesc></ns0:figure>
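As a minimal sketch of that two-argument interface (the operation performed and the bookkeeping field are illustrative; the shipped CTAP_template_function.m should be the real starting point), a user-added step might look like:

```matlab
function [EEG, Cfg] = CTAP_detrend_sketch(EEG, Cfg)
% Illustrative user-defined CTAP step: remove the per-channel mean.
% Follows the fixed interface expected by CTAP_pipeline_looper.m:
% take the EEG struct and the Cfg struct, return both after the operation.

    % operate on the EEGLAB channels-by-samples data matrix
    EEG.data = bsxfun(@minus, EEG.data, mean(EEG.data, 2));

    % simple bookkeeping so the step leaves a trace in the dataset
    EEG.etc.ctap_detrend_sketch = 'removed per-channel mean';
end
```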
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Visual evaluation: CTAP automatically produces plots that help the user to answer questions such as: what has been done, what the data looks like, and was an analysis step successful or not. The following selected visualisations are illustrated in Section Results: • blinks: detection quality, blink ERP • bad segments: snippets of raw EEG showing detections • EEG amplitudes: amplitude histograms, peeks • filtering: PSD comparison • ICA: IC scalp-map contact sheets, zoom-ins of bad components. Quantitative evaluation: Every major pipe operation writes a record to the main log file. Data rejections, including channels, epochs, ICs or segments, are summarised here and also tabulated in a separate 'rejections' log. Values are given for how much data was marked as bad, and what percentage of the total was bad. If more than 10% of data is marked bad by a single detection, a warning is given in the main log. In addition, useful statistics of each channel are logged at every call to CTAP_peek_data.m, based on the output of the EEGLAB function signalstat.m.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>CTAP_peek_data.m includes plotting routines for signal amplitude histograms as well as for raw EEG data. Many EEG artefacts cause large changes in signal amplitudes, and consequently several basic, yet effective, EEG artefact detection methods are based on identifying samples exceeding a given amplitude threshold. On the other hand, even in controlled measurement conditions, individual baseline variation can affect the amplitude of the recorded signal. Hence accurate knowledge of the average signal amplitude is often important. Blink detection: The function CTAP_blink2event.m is called early in the pipe to mark blinks. It creates a set of new events with latencies and durations matched to the detected blinks. The current blink detection implementation is based on a modified version of the EOGERT algorithm by Toivanen et al. (2015). The algorithm finds all local peaks in the data, constructs a criterion measure and classifies peaks into blinks and non-blinks based on this measure.</ns0:figDesc></ns0:figure>
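To illustrate the simple amplitude-threshold idea mentioned above (the threshold and window length are arbitrary example values, not CTAP defaults), windows could be flagged as follows:

```matlab
% Sketch: flag 1-second windows in which any channel exceeds a fixed
% amplitude threshold. EEG.data is the EEGLAB channels-by-samples matrix.
threshuV = 100;                              % example threshold in microvolts
winLen   = round(EEG.srate);                 % ~1 s of samples
nWin     = floor(size(EEG.data, 2) / winLen);
badWin   = false(1, nWin);

for w = 1:nWin
    idx = (w - 1) * winLen + (1:winLen);
    badWin(w) = any(max(abs(EEG.data(:, idx)), [], 2) > threshuV);
end
fprintf('%.1f%% of windows exceed %g uV\n', 100 * mean(badWin), threshuV);
```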
<ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. EEG amplitude histograms for four channels (A) before preprocessing, and (B) after preprocessing. Fitted normal probability density function (PDF) is shown as red solid curve. Upper and lower 2.5 % quantiles are vertical black solid lines; data inside these limits was used to estimate the trimmed standard deviation (SD), and normal PDF fitted using trimmed SD is shown as black solid curve. Distribution mean is vertical dashed blue line. Channel D15 has clearly been detected as bad, removed and interpolated.</ns0:figDesc><ns0:graphic coords='17,117.89,63.78,376.22,197.07' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Changes in channel statistics for range (A) and standard deviation (SD) (B). Mean over channels is indicated using a dot and the range spans from the 5th to the 95th percentile.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Scatter plot of the criterion used to detect blinks. Horizontal axis shows the criterion value while vertical axis is random data to avoid over-plotting. The classification is done by fitting two Gaussian distributions using the EM algorithm and assigning labels based on likelihoods.</ns0:figDesc><ns0:graphic coords='18,117.89,63.78,376.21,141.08' type='bitmap' /></ns0:figure>
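A minimal sketch of the two-Gaussian classification described in the Figure 6 caption (using fitgmdist from the Statistics and Machine Learning Toolbox; the variable names and the assumption that blinks form the higher-mean component are illustrative):

```matlab
% Sketch: split peak criterion values into blinks / non-blinks by fitting a
% two-component Gaussian mixture with EM and assigning by likelihood.
% 'criterion' is an n-by-1 vector of per-peak criterion values.
gm      = fitgmdist(criterion, 2, 'Replicates', 5);
labels  = cluster(gm, criterion);        % most likely component per peak
[~, blinkComp] = max(gm.mu);             % assume blinks have the larger mean
isBlink = (labels == blinkComp);
```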
<ns0:figure xml:id='fig_10'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. A visual of filtering effects. (A) the effects of filtering on power spectrum. (B) the filter's unit step response which can be used to assess e.g. the filter's effect on ERP timings.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Independent component information plot for a blink-related ICA component found using blink template matching. Shown are (A) component scalp map, (B) power spectrum, and (C) a stacked plot of the time series (using erpimage.m). (C) shows only the first 200 of the 300 ms segments of the data. The synthetic blinks start at full seconds by design.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. An example of the blink ERP. (A) the blink-centered ERP before correction, with a clearly visible blink signal. (B) the same plot after correction. The blink is clearly removed but the underlying EEG remains largely unaffected because the correction was done in the IC basis. Channel C17 shows the highest blink amplitudes in the synthetic dataset.</ns0:figDesc><ns0:graphic coords='19,117.89,351.11,376.21,282.16' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 11 .</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11. Plot of raw IC activations after all processing steps. IC14 shows clear evidence of remaining EMG noise, while IC36 may indicate a drifting channel.</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='21,117.89,63.78,376.22,364.47' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Some advanced ways to use the pipe.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Usage</ns0:cell><ns0:cell>Possible Reasons</ns0:cell><ns0:cell>How</ns0:cell></ns0:row><ns0:row><ns0:cell>Options</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Subset step</ns0:cell><ns0:cell>investigate a bug; recompute only</ns0:cell><ns0:cell>set run sets to subset index, e.g.</ns0:cell></ns0:row><ns0:row><ns0:cell>sets</ns0:cell><ns0:cell>intermediate results</ns0:cell><ns0:cell>Cfg.pipe.runSets = 3:5</ns0:cell></ns0:row><ns0:row><ns0:cell>Run test step</ns0:cell><ns0:cell>test new feature before including in</ns0:cell><ns0:cell>add step set with id 'test', then set</ns0:cell></ns0:row><ns0:row><ns0:cell>set</ns0:cell><ns0:cell>pipe</ns0:cell><ns0:cell>Cfg.pipe.runSets =</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>'test'</ns0:cell></ns0:row><ns0:row><ns0:cell>'Rewire' the</ns0:cell><ns0:cell>test an alternative ordering of existing</ns0:cell><ns0:cell>set the .srcID of a given step set</ns0:cell></ns0:row><ns0:row><ns0:cell>pipe</ns0:cell><ns0:cell>steps or temporarily change the input of</ns0:cell><ns0:cell>equal to the id of another</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>some step</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Measurement</ns0:cell><ns0:cell>run pipe for: subset of test subjects, or:</ns0:cell><ns0:cell>use function struct filter.m</ns0:cell></ns0:row><ns0:row><ns0:cell>configuration</ns0:cell><ns0:cell>measurements classes with separate</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>filter</ns0:cell><ns0:cell>configurations, e.g. pilots</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Run in debug</ns0:cell><ns0:cell>develop new method in CTAP</ns0:cell><ns0:cell>set CTAP pipeline looper</ns0:cell></ns0:row><ns0:row><ns0:cell>mode</ns0:cell><ns0:cell /><ns0:cell>parameter 'debug', true</ns0:cell></ns0:row><ns0:row><ns0:cell>Overwrite</ns0:cell><ns0:cell>update part of pipe: write new step set</ns0:cell><ns0:cell>set CTAP pipeline looper</ns0:cell></ns0:row><ns0:row><ns0:cell>obsolete</ns0:cell><ns0:cell>output over existing files</ns0:cell><ns0:cell>parameter 'overwrite', true</ns0:cell></ns0:row><ns0:row><ns0:cell>results</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Write files</ns0:cell><ns0:cell>check partial outcome of step set</ns0:cell><ns0:cell>set CTAP pipeline looper.m</ns0:cell></ns0:row><ns0:row><ns0:cell>from failed</ns0:cell><ns0:cell /><ns0:cell>parameter 'trackfail', true</ns0:cell></ns0:row><ns0:row><ns0:cell>step sets</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>Turn off</ns0:cell><ns0:cell>extract numerical/visual analytics</ns0:cell><ns0:cell>set stepSet(x).save =</ns0:cell></ns0:row><ns0:row><ns0:cell>intermediate</ns0:cell><ns0:cell>without producing updated files</ns0:cell><ns0:cell>false; set</ns0:cell></ns0:row><ns0:row><ns0:cell>saves</ns0:cell><ns0:cell /><ns0:cell>stepSet(x+1).srcID =</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>stepSet(x-1).id</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Analytic methods</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='3'>As presented, CTAP is primarily a framework for analysis management; however it contains a</ns0:cell></ns0:row></ns0:table><ns0:note>number of analysis functions, functions for evaluation, and data-management functions including a way to generate synthetic datasets for testing (for 
details see function documentation). The user is easily able to add their preferred functions, but may note the available functions as a quick way to start. All provided functions, for analysis, evaluation or data-handling, have default parameters which may serve as a starting point.</ns0:note></ns0:figure>
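As a brief sketch of how the options in Table 1 combine (only the control fields quoted in the table are shown; the field holding the pipe's step sets is assumed here to be Cfg.pipe.stepSets, and the step contents are omitted):

```matlab
% Sketch: selective re-running and 'rewiring' of a pipe, using the control
% fields listed in Table 1. Step-set contents (the functions to run) are
% omitted; ids are illustrative.
stepSet(1).id = '1_load';
stepSet(2).id = '2_filter';
stepSet(3).id = '3_ica';

stepSet(2).save  = false;       % do not write intermediate files after filtering
stepSet(3).srcID = '1_load';    % temporarily feed step set 3 from step set 1 output

Cfg.pipe.stepSets = stepSet;    % field name assumed for illustration
Cfg.pipe.runSets  = 2:3;        % recompute only step sets 2 and 3
```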
<ns0:note place='foot' n='3'>https://github.com/bwrc/ctap/wiki</ns0:note>
<ns0:note place='foot' n='11'>As noted above, for this reason much original code has been refactored to avoid runtime-visible or focus-grabbing outputs. The ultimate aim is for CTAP to interface directly to Matlab functions to remove dependency on EEGLAB releases, while retaining compatibility with the EEGLAB data structure.</ns0:note>
</ns0:body>
" | "Finnish Institute of Occupational Health, Helsinki, Finland
14 October 2016
Dear editors,
We are very grateful to yourselves and to the reviewers for all efforts to help us produce a
better manuscript and improve our contribution to the EEG research community.
We have addressed in detail the concerns and comments of the reviewers, and responded
in the letter below. This is formatted in the following way:
● All comments are grouped by theme/section, including:
○ Basic reporting
○ Abstract/Introduction
○ Methods & Materials
○ Results
○ Discussion
○ Figures & Tables
● Comments from multiple reviewers with the same point are grouped with one answer,
each comment identified by reviewer number
● A group of similarly-themed comments from a single reviewer may appear under one
reviewer attribution, with one answer for each
● Responses are highlighted in blue
● The manuscript itself has had changes tracked by use of a .pdf diff tool, producing a
side-by-side pdf showing changes from submission 1 to submission 2.
A number of comments addressed the scope in different ways (we replied to each one
below), and scope seems to be the major thing to improve. We tackle this by refining and
enriching our current contribution, rather than waiting to extend it by future work, because we
feel that there is already necessary and sufficient material for a contribution, and that the
value added at this stage would be overlooked if released in a paper also describing our
(in-progress) automated comparison methods. After all it is clear that the novel methods are
more exciting to the average reviewer than the framework they are packaged in. For
justification, consider the many EEGLAB-supplement toolbox releases which have focused
on their novel methods: the code for which is often buggy and lacks any interface layer to
help developers to use it. In summary, there have been two major changes:
1. The description of what CTAP is and what it does has been clarified throughout, in
direct statements in abstract and Introduction, in statements comparing with other
tools, and in terms of what is provided already and what is planned future work
2. The Results section has been improved and expanded, giving a broader and clearer
view of what CTAP can do at present, with more comparative evaluation of
outcomes, and description of these methods moved to Methods section.
Lesser changes are described in the responses below; based on all changes we believe this
version is now ready for publication in PeerJ Computer Science.
With thanks on behalf of all authors,
Best regards,
Benjamin Cowley, PhD
Basic Reporting
Style
Reviewer 1 (Anonymous)
The article should be revised for style, grammar and typos. The authors should avoid the
use of technical jargons
Reviewer 3 (Guillaume Rousselet)
There are typos throughout - please edit carefully.
Overall, the paper would be clearer if edited in plain English as much as possible, and using
examples to clarify many unclear and often unnecessary technical terms. I get what you’re
trying to do, and this is important work, but the paper reads as a rough draft at the moment:
it really needs to be restructured and edited.
ANSWER: Regarding typos, we’re grateful to the reviewers for spotting these - we clearly
missed an extra round of proofreading before submission, and have now done that.
Regarding the clarity and use of examples, we have illustrated the opening section with
examples, and added where appropriate throughout the following sections.
Scope
Reviewer 1 (Anonymous)
Regarding the relevance of the contribution. At the beginning of the manuscript, the authors
claim to propose a pipeline to compare different streamings of EEG data analysis. This is a
very good idea and also a need of the EEG community. However, what is presented
instead is a framework that comes on top of EEGLAB scripting capabilities (core and
plug-in), leaving the main and most exciting part of the contribution for a future work.
ANSWER: Thanks for this point of view. To clarify, we do propose “a pipeline to compare
different streamings of EEG data analysis” - what we provide in this paper is exactly that, but
it does not yet do automated comparisons.
In retrospect, it is clear that the early mention of automated comparison was a mistake. In
fact, e.g. in the sentence “ii) testing and comparison of automated methods based on
extensive quality control outputs”, we intended only to indicate that manual comparison of
methods is possible, and those methods are intended to be automated (such as the
FASTER toolbox). But the word automated is misleading. We have thus reworded: “testing
and comparison of artefact-detection methods based on extensive quality control outputs”.
NOTE (relevant to scope of the paper): Since we believe that the current state of the toolbox
provides a sufficient contribution, in the Intro we have clarified what CTAP does now, and in
the Results we give more details on the available comparison options, which are mainly
manual at this time.
Discussion > Future Work still mentions that we aim to introduce automated comparisons,
but the nature of these is, we believe, too extensive to share space in a single publication
with existing material. This is especially true regarding the readership, because the methods
involved in computational comparison will require a level of knowledge which is not required
for the current paper, and therefore suggests a more tightly-constrained audience. It is our
hope that CTAP as described in this paper can help to make EEG processing more
accessible.
Reviewer 3 (Guillaume Rousselet)
Language and writing are accessible and appropriate. The manuscript does not present
original research but the description and documentation of a software (interface) for more
efficient data analysis. I’m not sure whether the manuscript is within the scope of the journal.
ANSWER: Thank you. We did have a discussion with PeerJ editors before submission about
the fit to scope. It is of course possible that we came to the wrong conclusion, and should
submit elsewhere, e.g. Frontiers in Neuroinformatics. However that would be (at least partly)
a waste of the reviewers’ effort: unless they agreed to also review for the other journal (not to
mention the editors)! So we leave the decision to them.
Reviewer 1 (Anon)
In the manuscript the authors reference the wiki page developed for the toolbox to avoid
deeper explanations on the matter of discussion. This should be avoided as this affect the
unity and ‘self-containing’ aspect of the manuscript.
ANSWER: Thank you. It seems helpful to copy from the wiki the (shortened) description of
the synthetic data generation, and seed data details. On the other hand, the opinion of other
reviewers was that even more documentary material should be moved to the wiki. And in
fact the wiki as it stands is required in its current form as part of the github repository
documentation. Thus we think it is counter-productive to add much more material to the
paper.
Reviewer 3 (Guillaume Rousselet)
In general, most of the content of the manuscript would be expected in a software manual
rather than in a research article. The manuscript could be massively improved if the focus is
moved away from the mere documentation of the software (interface and workflow) towards
the genuine scientific contribution, e.g. invention and/or implementation of new algorithms of
signal processing and classification. Furthermore, the manuscript could be improved by
adding empirical data. E.g., trying to demonstrate empirically that the approach and the
implemented algorithms (e.g. blink detection and classification etc.) indeed enhance the
quality of the pre-processed data.
...
I sympathetically see the requirement to supplement such a toolbox with a peer-review
publication. However, I would suggest to replace the rather documentational aspects (much
better fitting into a software manual) by (a) empirical validation of the toolbox and (b) focus
on the description and evaluation of the genuine scientific contribution (e.g., blink detection,
detailed description and evaluation of the validity of the synthetic test signals, description
and evaluation of provided defaults).
ANSWER: Thank you for this perspective. We agree that some elements are more suited to
a software manual - in the Methods section in particular. We have removed the code
snippets and will publish them in the repository wiki instead.
We have added description of a method for blink correction which is somewhat novel.
However we wish to make a strong case that the true benefits come from the perspective of
CTAP as workflow management system, with e.g. automatic bookkeeping, principled pipe
building and data export. Under this view, the evaluation of the contribution should be on the
basis of the complete picture of what it can do, not only on the data-cleaning methods
outcomes.
Other
Reviewer 2 (Anonymous)
Line 22: “by” missing?
Line 89: BrainVISION Analyzer
ANSWER: Thanks for spotting these. On Line 22 we wanted a parenthetical citation, so that
was an issue with our use of the relevant Latex command: fixed. Also changed Line 89 as
suggested.
Reviewer 3 (Guillaume Rousselet)
Avoid the quotation marks of mystery and be specific instead, for instance in:
complex ’standard’ EEG processing
’quality control’
of ’bad’ data
’mop up’ ’tricky’
’gold standard’
ANSWER: This is a valuable perspective - we agree that these uses seem non-specific. We
used ‘single quotes’ to try and avoid making strong claims (e.g. that there is a standard form
of EEG processing), without using awkward extended sentences to hedge around with
caveats. However if the reader is comfortable understanding this implicitly, then it is a
non-issue. We have removed the quotes.
Avoid monstrosities such as “learning curve”.
ANSWER: Fair enough. Altered this point to read: “Some non-obvious features will take
more time to master,...”
Abstract / Introduction - answered ALL
Reviewer 3 (Guillaume Rousselet)
“the pre-processing of EEG data is quite complicated”. Unless you provide an example, it
remains unclear why this is the case. It would be fair to say that there are many options, but
you can see certain large effects without pre-processing at all - which is certainly not the
case for other brain imaging techniques.
ANSWER: Thanks for pointing out this needed clarification. We do not intend to compare
with other brain imaging techniques, because those do not share a common physical
measurement basis; thus we removed “...is the most lightweight and affordable method of
brain imaging” from the abstract. Instead we simply describe the main sources of difficulty in
EEG processing, as in the Intro.
Overall, the introduction would be much better restructured by pointing out the main steps
involved in EEG preprocessing, instead of hinting at its complexity, as well as existing tools
in Matlab & Python. Then describe what’s lacking and your contribution. At present, you
ignore all the pipeline toolboxes, and you fail to precisely describe what is lacking.
ANSWER: Thank you. The issue with identifying main steps is that no steps are mandatory,
but we have given a shortlist which we hope is not too controversial. We then more precisely
describe the need that we are meeting in the last Intro paragraph. We expand on that
description in the Approach section.
We keep the comparison with existing tools in the Related Work section. We did mention the
PREP pipeline in the first submission; now we’ve added some extra detail about the
differences. Mainly, they are complementary - they would even be good candidates for
integration.
Other general (cross-platform) pipes exist but this domain is not our main focus; we added a
paragraph about this philosophy of design.
This sentence is difficult to parse and refers to undefined “electrophysiology data”:
“However, among those types of human electrophysiology data recorded from surface
electrodes (to which EEG is most similar in terms of recording methods, see e.g. Cowley et
al. (2016) for a review), EEG data is comparatively difficult to pre-process.”
EEG is difficult to pre-process compare to what?
ANSWER: Thanks for identifying this. EEG can be more difficult compared to e.g.
electrocardiography, electrodermal activity, electrooculography. In each of these, analysis
can be complicated, but there are also ways to establish some ground truth, e.g. the ECG
signal source is well defined and the signal is very large, so it is possible to devise
algorithms to, e.g., label ectopic beats with high accuracy. While one can easily visually spot,
e.g., alpha oscillations in EEG, it is much harder to classify them reliably by algorithm.
The class A and class B terminology does not bring anything: use clear sub-headings
instead
ANSWER: Thank you, we agree. We have simply removed the A and B referents, there
seems no need for subheadings.
focus on examples, to avoid technical sounding but vague terms.
ANSWER: Thank you. We have now added examples to the first several paragraphs of
motivation text, to ground the arguments in relatable findings.
“EEG data is high-bandwidth” -> EEG data can be very large.
ANSWER: Thanks, we have changed this.
“Due to the inverse problem it is not possible to precisely determine a ’ground truth’ for the
signal”. It’s unclear how this relates to problems with preprocessing.
ANSWER: Thanks - to clarify this, we have expanded the explanation in the relevant
paragraph and contrasted the situation with the case of MEG, where certain physical
properties enable the TSSS method (Taulu & Hari, 2009).
Approach
Reviewer 3 (Guillaume Rousselet)
“CTAP is built on Matlab (r2015a and higher)” - does it mean it was tested with 2015a and
higher, or absolutely not usable with 2014b and under?
Same question with EEGLAB.
ANSWER: Thanks for asking. CTAP has been tested with the cited versions. It, or parts
thereof, were in development for some time beforehand but there was no systematic testing
of backwards compatibility for the whole toolbox. To clarify, we added the sentence “limited
functions, especially non-graphical, may work on older versions but are untested”.
Related Work
Reviewer 2 (Anonymous)
Line 62: The [Related Work] chapter does not provide anything essential for the manuscript.
Considerably shorten? Omit?
Reviewer 3 (Guillaume Rousselet)
This [Related Work] section comes far too late and should be integrated in a more
comprehensive introduction.
ANSWER: Thank you both for these perspectives. Indeed in internal peer review, other
opinions were given - it seems there is a lot of variation in how people prefer to structure an
Introduction. Given this variation, and the several comments we received asking for more
comparison with competing tools, we end up with quite a long Intro and so think it is
appropriate to separate the material under headers, for clarity: 1 Introduction, 1.1 Approach,
1.2 Related Work. However this constrains the way the material can be structured - it would
then be strange to have the Related Work section before the Approach section.
Reviewer 1 (Anonymous)
The authors use abundant literature in the manuscript. However, they should provide more
relevant references and insert a section for already existing automated MATLAB-Octave
pipelines (See. http://psom.simexp-lab.org/).
ANSWER: Thanks for pointing out this tool. We have attempted to deal thoroughly with the
prior art of processing tools for EEG, but in truth there are so many tools that are not specific
to EEG that we decided to exclude those. We have thus included a paragraph in the
Discussion, where we explain our philosophy (to avoid spending pages on such things in the
Intro, before any of the meat of the paper):
“CTAP was designed to complement the existing EEGLAB ecosystem, not to provide
a stand-alone pre-processing tool. This is an important distinction, because there
exist some excellent stand-alone tools which work across data formats and platforms
\citep{Bellec2012,Ovaska2010}; these features are valuable when collaborators are
trying to work across, e.g. Windows and Linux, Matlab and R. However we do not
see a need in this domain; rather we see a need in the much narrower focus on
improving the command line interface batch processing capabilities of EEGLAB.”
Reviewer 1 (Anonymous)
In the manuscript, when reviewing the previous work on the field, the authors suggest that
EEGLAB is a GUI based tool with restricted ability to be used for batch or custom data
analysis scripts. Later the authors use this to justify the development of the CTAP. However,
the reality is that EEGLAB provide both GUI and scripting based capabilities. The authors
are aware of this, since these scripting capabilities made possible their own development
using EEGLAB as a base, as well as the many contribution received as EEGLAB's extension
or plugins. This should be mentioned in the manuscript.
Reviewer 2 (Anonymous)
Line 77-78: This statement is incorrect. The GUI is part of EEGLAB but the software can be
used completely without from CLI or script (as the authors actually do and thus, should
know).
Reviewer 3 (Guillaume Rousselet)
“EEGLAB is a graphical user interface (GUI)-based tool” - this is inaccurate, because
EEGLAB has been developed to be used at either GUI or script level.
ANSWER: This was a clear mistake, and valuable to have pointed out, thank you. We’ve
now changed the sentence in Related Work to read:
“Although (as most Matlab software) EEGLAB functions can be called from the
command-line interface (CLI) and thus built into a pre-processing pipeline by the
user's own scripts, in practice this is a non-trivial error-prone task”.
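For illustration, the kind of hand-rolled batch script that sentence refers to might look like the sketch below (paths and parameter choices are arbitrary; the point is that error handling, bookkeeping and logging must all be written and maintained by the user):

```matlab
% Sketch of a user-written EEGLAB batch loop, without any of the
% bookkeeping, logging or branching that CTAP adds on top.
files = dir(fullfile(rawdir, '*.set'));          % rawdir / outdir defined by the user
for f = 1:numel(files)
    try
        EEG = pop_loadset('filename', files(f).name, 'filepath', rawdir);
        EEG = pop_eegfiltnew(EEG, 1, 45);        % band-pass filter, 1-45 Hz
        EEG = pop_reref(EEG, []);                % re-reference to channel average
        pop_saveset(EEG, 'filename', files(f).name, 'filepath', outdir);
    catch ME
        warning('Failed on %s: %s', files(f).name, ME.message);
    end
end
```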
Reviewer 3 (Guillaume Rousselet)
“Although CTAP works as a batch processing pipeline, it supports seamless integration of
manual operations. This works such that the user can define a pipeline of operations, insert
save points at appropriate steps, and work manually on that data before passing it back to
the pipe.” This can be done easily using EEGLAB + scripts, and is covered in the EEGLAB
tutorial. How does your approach differ? Also, how does it differ from PREP and other tools?
ANSWER: Thanks for asking. The main difference at present is that CTAP provides a simple
interface for setting up one’s pipeline, and then provides extensive features to keep track of
data and operations; as opposed to EEGLAB alone, which requires building the scripted
pipeline that actually calls the functions. None of that overhead work is provided by default in
EEGLAB - as you say, it requires the ‘+ scripts’, which is really often non-trivial. We added a
paragraph on this in the Discussion.
As mentioned above, we have now added more detail about how we differ from PREP.
Materials & Methods
Reviewer 3 (Guillaume Rousselet)
I don’t see the point of Figure 1: it is rather confusing. The terms would be better defined in
the text.
ANSWER: Thank you, Figure 1 was indeed a bit confusing and we have updated it; the
terms are already defined in the text (first paragraph of Materials & Methods). Figure 1
illustrates the various ways to parse the data in the time domain, with the aim of clarifying
such basic concepts for the remainder of the paper where they are used often.
Reviewer 2 (Anonymous)
An in-depth review of the toolbox code is not possible within the scope of a manuscript
review. For curiosity I had a look at minor parts of the code, more or less accidentally
starting with data filtering. The default IIR elliptic filter is not suitable as a general purpose
filter for EEG data. The IIR filter is not part of the default EEGLAB toolbox but a plugin which
appears to be no longer maintained since quite some time and known to be problematic in
several aspects. The alternative EEGLAB FIR filter routines used by the toolbox are known
to be broken at least since 2012 (Widmann et al., 2012, DOI: 10.3389/fpsyg.2012.00233),
should no longer be used and were replaced. The documented equation for default filter
order is incorrect (in particular for the IIR but also for the FIR filter). The defaults for the
cutoff frequencies are inappropriate for general purpose EEG analysis (see e.g. Acunzo et
al., 2012, DOI: 10.1016/j.jneumeth.2012.06.011; Tanner et al., 2015, DOI:
10.1111/psyp.12437).
ANSWER: The faulty EEGLAB filter function has been replaced with the cited FIR function
by Widmann, in a dedicated function CTAP_fir_filter(). The user still has the option to define
their own filter and pass it to the function CTAP_filter_design().
Reviewer 3 (Guillaume Rousselet)
After explaining that CTAP differs from existing toolboxes, the methods section starts with:
“The core activity of CTAP is preprocessing EEG data by cleaning artefacts, i.e. detection
and either correction or removal of ’bad’ data, that is not likely to be attributable to neural
sources.”
which is what existing toolboxes do. So again, the introduction must do a much better job at
clarifying what is new here.
ANSWER: Thank you, we also see the need to clarify the core message. It has been
confused by what the user actually does with CTAP: create a robust manageable pipeline
(CTAP) to apply EEG-processing methods (mostly not CTAP). The big picture is further
confused by the additional purpose of CTAP as the foundation platform for the proposed
addition of automated evaluation/comparison (CTAP+HYDRA) of EEG-processing methods
(generic, not CTAP). We have worked over the entire manuscript to make these issues
clearer.
## Configuration
In the 2 step example, please do not use *i* as an index, as in:
`i = 1; %stepSet 1`
`s` or `step` would be better coding.
ANSWER: Thank you. In our own code we used such an index to make it easier to copy and
paste step sets, etc. However in such an example, the index is redundant so we now use
numbers directly, i.e. `stepSet(1).id = '1_load_WCST';`.
## Pipe execution
Can you provide an estimate of the space needed to store the typical directory tree, given
that all intermediate stages are saved? I realise this will depend on epoch lengths and if
time-frequency decomposition is performed.
ANSWER: We have added estimates to the Results section, and explained how pipe usage
affects storage.
## CTAP outcomes
For some of the main outcomes, such as blinks, bad segments, ICA, please clarify which
tools are used, for instance from EEGLAB exclusively? I see that the functions and
algorithms are described later, in results. These explanations should be part of methods.
Validity of the findings
ANSWER: Thank you. We have added detail to the Methods section for each method
applied to the data. We have also clarified the role of the methods demonstrated in the
paper, which are primarily to showcase the application of the pipe, rather than to report novel
methods (as a secondary purpose we do in fact describe a novel blink detection method). We
have also described how the methods in CTAP, while taken from various sources (including
EEGLAB, FASTER, ADJUST) have all been refactored to be compatible with the CTAP
procedure. Thus it is more the methods/algorithms rather than the code/functions which are
interesting.
Results
Reviewer 1 (Anonymous)
The authors should also consider an expanded view on artifact rejection and assess not only
for eye blinks but for all the many artifacts the EEG data is sensitive to.
ANSWER: Thank you. We do wish to demonstrate the range of options, though for the sake
of brevity we do not show ALL options. We show:
● Blink detection
● blink correction in:
○ Raw EEG
○ ERP
● channel histograms
● filter effects
● bad segments
● a bad ICA component
● Post-processing ICA components illustrating remainder artefacts
Reviewer 3 (Guillaume Rousselet)
Overall, the result section would be more convincing if it provided a more detailed
walkthrough of the preprocessing of a dataset, focusing on the figure outputs.
ANSWER: Thank you, we agree with this. Even though it increases the page length, we
agree it is a basic requirement of an EEG-processing toolbox to show how it processes
EEG! On the other hand, the novelty that we are describing is the approach and the QA
outcomes, not the methods. We try to make this clear, both by direct explanation and by
showcasing a set of methods that is not optimal, but instead is varied (illustrates the different
options).
One important aspect is missing, which is promised in the abstract: “testing and comparison
of automated methods”. Can you explicitly describe how this can be done using CTAP? Do
you have special functions for instance to generate figures comparing blink correction
techniques? The discussion mentions that such features are not yet available, so part of the
software description, including in the abstract, is misleading.
ANSWER: Thank you, we have addressed this above (first point under SCOPE). In fact the
CTAP toolbox, including its branching feature, is the foundation required for computational
testing of automated methods. A solid foundation seems a prerequisite for such a
demanding task!
Please define “trimmed sd”. Do you mean the SD of the trimmed mean. If so, what amount
of trimming is applied?
ANSWER: Thank you. The caption has been rewritten, so the relevant part now reads:
“Fitted normal probability density function (PDF) is shown as red solid curve. Upper
and lower 2.5~\% quantiles are vertical black solid lines; data inside these limits was
used to estimate the trimmed standard deviation (SD), and fitted normal PDF using
trimmed SD is shown as black solid curve.”
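For concreteness, the trimmed SD described in that caption can be computed as in the sketch below (x is a vector of amplitude values for one channel; quantile requires the Statistics and Machine Learning Toolbox):

```matlab
% Sketch: estimate a trimmed standard deviation using only the data inside
% the 2.5% and 97.5% quantiles, as described in the rewritten caption.
lims      = quantile(x, [0.025 0.975]);
trimmed   = x(x >= lims(1) & x <= lims(2));
sdTrimmed = std(trimmed);
```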
Discussion
Reviewer 1 (Anonymous)
I understand that the development of this framework is necessary for the later development
of a functionality to compare different streaming of preprocessing, however, without this
‘comparing’ functionality, the contribution of the current work is reduced to a new wrap on an
already existing functionality in EEGLAB.
As mentioned before, the idea of providing an environment for comparison of different
processing streams of EEG data is bold and would be an important contribution to the EEG
community. However, this contribution is not included in this manuscript. Apparently, a few
scripts could do the trick. Given the ability to do exactly the same thing as the CTAP does
with just a bit of scripting using EEGLAB, the authors should consider to hold on the
submission and make it more valuable by implementing the ideas enunciated in the section
’Future work’.
ANSWER: Many thanks for the opinion. However, with respect, we feel that this view is not
entirely justified. Firstly, CTAP does include novel methods, which have been evaluated in
more detail in the resubmitted version.
Secondly, it is very far from trivial to produce the framework that handles all the analysis
steps for a general case EEG analysis.
The argument that this can be done with a few scripts, and thus the contribution is merely a
new wrap on existing functionality, suffers from two directions.
1. Firstly, it underestimates the task and the effect of the task on the quality of the
outcome, since the more development coding the analyst must do, the less attention
she has for the analysis. High profile retractions have occurred due to simple Matlab
bugs. Thus the argument is addressed by practicality.
2. Secondly, because EEGLAB is substantially geared toward GUI use, we can also
say the batch-processing capability of EEGLAB is itself a wrap on Matlab functions
(and the issue with EEGLAB filtering pointed out by Reviewer 2 illustrates that it is
not always a productive wrap). Thus the argument is addressed by reductio.
Finally, the idea that we hold the current content until we produce the (even more complex)
computational comparison part implies that we miss the chance to build the necessary
foundation with the added constructive criticism of the reviewers and community, which is
the very principle of making open source software.
Reviewer 2 (Anonymous)
From the information provided in the manuscript I cannot follow the claim in the abstract that
the toolbox helps avoiding manual decision making to reduce subjectivity and low
replicability. I do not see how the software reduces manual decision making except providing
default values for some pre-processing procedures. The general validity of default values in
such a diverse field as psychophysiology is questionable. Furthermore, the abstract
promises testing and comparison methods. This appears to refer mainly to synthetic test
signals but not to real world data. While I see the requirement to evaluate pre-processing
procedures with test signals and do see the benefits of standardized and replicable test
signals-as provided by the toolbox-I do not see how the toolbox helps or enforces validation
with real data.
ANSWER: Thank you. We have tried to address these points in the manuscript, and to some
extent there is overlap with reviewer 1’s points which we have addressed above. To clarify,
CTAP currently helps avoid manual decisions in setting up the preprocessing, for one or
many ‘branches’; it does not yet do so for methods evaluation (except that branching helps
to set up comparable runs of automated methods, e.g. FASTER vs ADJUST, for subsequent
manual checks). It is true that synthetic test signals do not guarantee good results for real
data. The synthetic test signals however do provide ‘ground truth’, as well as is possible for
EEG. Thus the contribution approaches, without guaranteeing, the kind of functionality which
the reviewer seems to be requesting here.
Reviewer 3 (Guillaume Rousselet)
I agree with your points about GUIs and software development.
ANSWER: Thank you!
Figures / Tables
NOTE: All figures in the manuscript are as close to the actual output of CTAP as possible.
All improvements listed below have thus been made to the CTAP as well.
Reviewer 1 (Anonymous)
Labeling of figures and explanations needs important improvements. The authors should
provide labels and descriptions that make the figure a self-contained unit.
e.g.
Figure 2: Definitions and abbreviations are not explained
ANSWER: The figure has been simplified. We do not see what definitions or abbreviations
would need further clarification.
Figure 3: The authors does not describe or provide relevant information on the figure
ANSWER: The figure has been removed - the typical directory structure has been described
in text instead.
Figure 4: Size of figure should be reduced and typo on word ‘horizontal’ must be fixed. X
label is missing in the figure.
ANSWER: (old Fig 4 = new Fig 6) The typo has been corrected and the figure orientation
changed such that it requires less space. The missing axis label has been added. The axis is
also explained in the caption.
Figure 6: The authors should include a legend in the figure and define or remove
abbreviations as ‘pdf’ or ‘sd’
ANSWER: (old Fig 6 = new Fig 4) A legend and axis labels have been added. The terms
‘pdf’ and ‘sd’ have been defined in the caption.
Reviewer 2 (Anonymous)
Figures 4 and 6: Units missing or incorrect?!
ANSWER: Units have been added to old Figure 6 (new Fig 4). Note that in order to plot a
probability density function (probability in the range [0, 1]) and a histogram (in counts) to a
single figure, the pdf values have been scaled. What matters is the shape of the pdf and
hence only histogram units are shown.
ERRATA
1. Upon reflection, we find that empirical mode decomposition (EMD) is too demanding:
it requires compilation of non-Matlab code, and is very slow for any realistic-sized
data set. We have replaced it in the codebase and the manuscript.
2. Processing of bad segments (continuous data) was promised but was missing from
the submitted manuscript, it has now been added.
3. Similarly PSD comparison was promised but was missing from the submitted
manuscript. It has now been added.
" | Here is a paper. Please give your review comments after reading it. |
730 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Music transcription involves the transformation of an audio recording to common music notation, colloquially referred to as sheet music. Manually transcribing audio recordings is a difficult and time-consuming process, even for experienced musicians. In response, several algorithms have been proposed to automatically analyze and transcribe the notes sounding in an audio recording; however, these algorithms are often general-purpose, attempting to process any number of instruments producing any number of notes sounding simultaneously. This paper presents a polyphonic transcription algorithm that is constrained to processing the audio output of a single instrument, specifically an acoustic guitar. The transcription system consists of a novel note pitch estimation algorithm that uses a deep belief network and multi-label learning techniques to generate multiple pitch estimates for each analysis frame of the input audio signal. Using a compiled dataset of synthesized guitar recordings for evaluation, the algorithm described in this work results in an 11% increase in the f-measure of note transcriptions relative to Zhou et al.'s transcription algorithm in the literature. This paper demonstrates the effectiveness of deep, multi-label learning for the task of polyphonic transcription.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Music transcription is the process of converting an audio signal into a music score that informs a musician which notes to perform and how they are to be performed. This is accomplished through the analysis of the pitch and rhythmic properties of an acoustical waveform. In the composition or publishing process, manually transcribing each note of a musical passage to create a music score for other musicians is a labour-intensive procedure <ns0:ref type='bibr' target='#b15'>(Hainsworth and Macleod, 2003)</ns0:ref>. Manual transcription is slow and error-prone: even notationally fluent and experienced musicians make mistakes, require multiple passes over the audio signal, and draw upon extensive prior knowledge to make complex decisions about the resulting transcription <ns0:ref type='bibr'>(Benetos et al., 2013)</ns0:ref>.</ns0:p><ns0:p>In response to the time-consuming process of manually transcribing music, researchers in the multidisciplinary field of music information retrieval (MIR) have summoned their knowledge of computing science, electrical engineering, music theory, mathematics, and statistics to develop algorithms that aim to automatically transcribe the notes sounding in an audio recording. Although the automatic transcription of monophonic (one note sounding at a time) music is considered a solved problem <ns0:ref type='bibr' target='#b3'>(Benetos et al., 2012)</ns0:ref>, the automatic transcription of polyphonic (multiple notes sounding simultaneously) music 'falls clearly behind skilled human musicians in accuracy and flexibility' <ns0:ref type='bibr' target='#b25'>(Klapuri, 2004)</ns0:ref>. In an effort to reduce the complexity, the transcription problem can be constrained by limiting the number of notes that sound simultaneously, the genre of music being analyzed, or the number and type of instruments producing sound. A constrained domain allows the transcription system to 'exploit the structure' <ns0:ref type='bibr' target='#b31'>(Martin, 1996</ns0:ref>) by leveraging known priors on observed distributions, and consequently reduce the difficulty of transcription. This parallels systems in the more mature field of speech recognition where practical algorithms are often language, gender, or speaker dependent <ns0:ref type='bibr' target='#b21'>(Huang et al., 2001)</ns0:ref>.</ns0:p><ns0:p>Automatic guitar transcription is the problem of automatic music transcription with the constraint that the audio signal being analyzed is produced by a single electric or acoustic guitar. Though this problem is constrained, a guitar is capable of producing six notes simultaneously, which still offers a multitude of challenges for modern transcription algorithms. The most notable challenge is the estimation of the pitches of notes comprising highly polyphonic chords, occurring when a guitarist strums several strings at once. Yet another challenge presented to guitar transcription algorithms is that a large body of guitarists publish and share transcriptions in the form of tablature rather than common western music notation.</ns0:p><ns0:p>Therefore, automatic guitar transcription algorithms should also be capable of producing tablature. 
Guitar tablature is a symbolic music notation system with a six-line staff representing the strings on a guitar.</ns0:p><ns0:p>The top line of the system represents the highest pitched (thinnest diameter) string and the bottom line represents the lowest pitched (thickest diameter) string. A number on a line denotes the guitar fret that should be depressed on the respective string. An example of guitar tablature below its corresponding common western music notation is presented in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. A solution to the problem of isolated instrument transcription has substantial commercial interest with applications in musical games, instrument learning software, and music cataloguing. However, these applications seem far out of grasp given that the MIR research community has collectively reached a plateau in the accuracy of automatic music transcription systems <ns0:ref type='bibr' target='#b3'>(Benetos et al., 2012)</ns0:ref>. In a paper addressing this issue, <ns0:ref type='bibr' target='#b3'>Benetos et al. (2012)</ns0:ref> stress the importance of extracting expressive audio features and moving towards context-specific transcription systems. Also addressing this issue, <ns0:ref type='bibr' target='#b22'>Humphrey et al. (2012</ns0:ref><ns0:ref type='bibr' target='#b23'>Humphrey et al. ( , 2013) )</ns0:ref> propose that effort should be focused on audio features generated by deep belief networks instead of hand-engineered audio features, due to the success of these methods in other fields such as computer vision <ns0:ref type='bibr' target='#b28'>(Lee et al., 2009)</ns0:ref> and speech recognition <ns0:ref type='bibr' target='#b17'>(Hinton et al., 2012)</ns0:ref>. The aforementioned literature provides motivation for applying deep belief networks to the problem of isolated instrument transcription.</ns0:p><ns0:p>This paper presents a polyphonic transcription system containing a novel pitch estimation algorithm that addresses two arguable shortcomings in modern pattern recognition approaches to pitch estimation:</ns0:p><ns0:p>first, the task of estimating multiple pitches sounding simultaneously is often approached using multiple one-versus-all binary classifiers <ns0:ref type='bibr' target='#b34'>(Poliner and Ellis, 2007;</ns0:ref><ns0:ref type='bibr' target='#b33'>Nam et al., 2011)</ns0:ref> in lieu of estimating the presence of multiple pitches using multinomial regression; second, there exists no standard method to impose constraints on the polyphony of pitch estimates at any given time. In response to these points, the pitch estimation algorithm described in this work uses a deep belief network in conjunction with multi-label learning techniques to produce multiple pitch estimates for each audio analysis frame.</ns0:p><ns0:p>After estimating the pitch content of the audio signal, existing algorithms in the literature are used to track the temporal properties (onset time and duration) of each note event and convert this information to guitar tablature notation.</ns0:p></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>The first polyphonic transcription system for duets imposed constraints on the frequency range and timbre of the two input instruments as well as the intervals between simultaneously performed notes <ns0:ref type='bibr' target='#b32'>(Moorer, 1975)</ns0:ref>. 1 This work provoked a significant amount of research on this topic, which still aims to further the accuracy of transcriptions while gradually eliminating domain constraints.</ns0:p><ns0:p>In the infancy of the problem, polyphonic transcription algorithms relied heavily on digital signal processing techniques to uncover the fundamental frequencies present in an input audio waveform. To this end, several different algorithms have been proposed: perceptually motivated models that attempt to model human audition <ns0:ref type='bibr' target='#b26'>(Klapuri, 2005)</ns0:ref>; salience methods, which transform the audio signal to accentuate the underlying fundamental frequencies <ns0:ref type='bibr' target='#b27'>(Klapuri, 2006;</ns0:ref><ns0:ref type='bibr' target='#b51'>Zhou et al., 2009)</ns0:ref>; iterative estimation methods, which iteratively select a predominant fundamental from the frequency spectrum and then subtract an estimate of its harmonics from the residual spectrum until no fundamental frequency candidates 1 Timbre refers to several attributes of an audio signal that allows humans to attribute a sound to its source and to differentiate between a trumpet and a piano, for instance. Timbre is often referred to as the 'colour' of a sound.</ns0:p></ns0:div>
<ns0:div><ns0:head>2/21</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2015:07:5874:1:2:NEW 4 Jan 2017)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science remain <ns0:ref type='bibr' target='#b27'>(Klapuri, 2006)</ns0:ref>; and joint estimation, which holistically selects fundamental frequency candidates that, together, best describe the observed frequency domain of the input audio signal <ns0:ref type='bibr' target='#b48'>(Yeh et al., 2010)</ns0:ref>.</ns0:p><ns0:p>The MIR research community is gradually adopting a machine-learning-centric paradigm for many MIR tasks, including polyphonic transcription. Several innovative applications of machine learning algorithms to the task of polyphonic transcription have been proposed, including hidden Markov models (HMMs) <ns0:ref type='bibr' target='#b37'>(Raphael, 2002)</ns0:ref>, non-negative matrix factorization <ns0:ref type='bibr' target='#b41'>(Smaragdis and Brown, 2003;</ns0:ref><ns0:ref type='bibr' target='#b13'>Dessein et al., 2010)</ns0:ref>, support vector machines <ns0:ref type='bibr' target='#b34'>(Poliner and Ellis, 2007)</ns0:ref>, artificial shallow neural networks <ns0:ref type='bibr' target='#b30'>(Marolt, 2004)</ns0:ref> and recurrent neural networks <ns0:ref type='bibr' target='#b8'>(Boulanger-Lewandowski, 2014)</ns0:ref>. Although each of these algorithms operate differently, the underlying principle involves the formation of a model that seeks to capture the harmonic, and perhaps temporal, structures of notes present in a set of training audio signals.</ns0:p><ns0:p>The trained model then predicts the harmonic and/or temporal structures of notes present in a set of previously unseen audio signals.</ns0:p><ns0:p>Training a machine learning classifier for note pitch estimation involves extracting meaningful features from the audio signal that reflect the harmonic structures of notes and allow discrimination between different pitch classes. The obvious set of features exhibiting this property is the short-time Fourier transform (STFT), which computes the discrete Fourier transform (DFT) on a sliding analysis window over the audio signal. However, somewhat recent advances in the field of deep learning have revealed that artificial neural networks with many layers of neurons can be efficiently trained <ns0:ref type='bibr' target='#b20'>(Hinton et al., 2006)</ns0:ref> and form a hierarchical, latent representation of the input features <ns0:ref type='bibr' target='#b28'>(Lee et al., 2009)</ns0:ref>.</ns0:p><ns0:p>Using a deep belief network (DBN) to learn alternate feature representations of DFT audio features, <ns0:ref type='bibr' target='#b33'>Nam et al. (2011)</ns0:ref> exported these audio features and injected them into 88 binary support vector machine classifiers: one for each possible piano pitch. Each classifier outputs a binary class label denoting whether the pitch is present in a given audio analysis frame. Using the same experimental set up as <ns0:ref type='bibr' target='#b34'>Poliner and Ellis (2007)</ns0:ref>, <ns0:ref type='bibr' target='#b33'>Nam et al. (2011)</ns0:ref> noted that the learned features computed by the DBN yielded significant improvements in the precision and recall of pitch estimates relative to standard DFT audio features. <ns0:ref type='bibr' target='#b39'>Sigtia et al. (2016)</ns0:ref> attempted to arrange and join piano notes by trying to generate 'beams', continuous notes, all within a DBN. This makes <ns0:ref type='bibr'>Sigtia et al. 
<ns0:p>Some models for chord and pitch estimation attempt to produce the fingering of a guitar rather than the notes themselves. <ns0:ref type='bibr' target='#b0'>Barbancho et al. (2012)</ns0:ref> applied hidden Markov models (HMMs) to pre-processed audio to extract fretboard fingerings for guitar notes. This HMM fretboard model achieves between 87% and 95% chord recognition accuracy. <ns0:ref type='bibr' target='#b24'>Humphrey and Bello (2014)</ns0:ref> applied deep convolutional networks, instead of HMMs, to transcribe guitar chords from audio. Their model outputs string and fretboard fingerings directly: instead of producing a series of pitches, it estimates which strings are strummed and at which frets they are pressed, recovering the fingering immediately rather than arranging it afterwards. The authors report a frame-wise recognition rate of 77.42%.</ns0:p><ns0:p>After note pitch estimation it is necessary to perform note tracking, which involves the detection of note onsets and offsets <ns0:ref type='bibr' target='#b4'>(Benetos and Weyde, 2013)</ns0:ref>. Several techniques have been proposed in the literature, including a multitude of onset estimation algorithms <ns0:ref type='bibr' target='#b1'>(Bello et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b14'>Dixon, 2006)</ns0:ref>, HMM note-duration modelling algorithms <ns0:ref type='bibr'>(Benetos et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b38'>Ryynänen and Klapuri, 2005)</ns0:ref>, and an HMM frame-smoothing algorithm <ns0:ref type='bibr' target='#b34'>(Poliner and Ellis, 2007)</ns0:ref>. The output of these note tracking algorithms is a sequence of note event estimates, each having a pitch, onset time, and duration. These note events may then be digitally encoded in a symbolic music notation, such as tablature notation, for cataloguing or publishing. Arranging tablature is challenging because the guitar is capable of producing the same pitch in multiple ways. Therefore, a 'good' arrangement is one that is biomechanically easy for the musician to perform, such that transitions between notes do not require excessive hand movement and the performance of chords requires minimal stretching of the hand <ns0:ref type='bibr' target='#b16'>(Heijink and Meulenbroek, 2002)</ns0:ref>. Solutions to the problem of tablature arrangement include graph-search algorithms <ns0:ref type='bibr' target='#b35'>(Radicioni and Lombardo, 2005;</ns0:ref><ns0:ref type='bibr' target='#b36'>Radisavljevic and Driessen, 2004;</ns0:ref><ns0:ref type='bibr' target='#b11'>Burlet and Fujinaga, 2013)</ns0:ref>, neural networks <ns0:ref type='bibr' target='#b45'>(Tuohy and Potter, 2006)</ns0:ref>, and genetic algorithms <ns0:ref type='bibr' target='#b44'>(Tuohy and Potter, 2005;</ns0:ref><ns0:ref type='bibr' target='#b9'>Burlet, 2013)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>DEEP BELIEF NETWORKS</ns0:head><ns0:p>Before introducing the developed pitch estimation algorithm, it is worthwhile to review the structure and training procedure of a deep belief network. The intent of deep architectures for machine learning is to form a multi-layered and structured representation of sensory input, which a classifier or regressor can use to make informed predictions about its environment <ns0:ref type='bibr' target='#b47'>(Utgoff and Stracuzzi, 2002)</ns0:ref>.</ns0:p><ns0:p>Recently, <ns0:ref type='bibr' target='#b20'>Hinton et al. (2006)</ns0:ref> proposed a specific formulation of a multi-layered artificial neural network called a deep belief network (DBN), which addresses the training and performance issues arising when many hidden network layers are used. A preliminary unsupervised training algorithm aims to set the network weights to good initial values in a layer-by-layer fashion, followed by a more holistic supervised fine-tuning algorithm that considers the interaction of weights in different layers with respect to the desired network output <ns0:ref type='bibr' target='#b18'>(Hinton, 2007)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Unsupervised Pretraining</ns0:head><ns0:p>In order to pretrain the network weights in an unsupervised fashion, it is necessary to think of the network as a generative model rather than a discriminative model. A generative model aims to form an internal model of a set of observable data vectors, described using latent variables; the latent variables then attempt to recreate the observable data vectors with some degree of accuracy. On the other hand, a discriminative model aims to set the value of its latent variables, typically used for the task of classification or regression, without regard for recreating the input data vectors. A discriminative model does not explicitly care how the observed data was generated, but rather focuses on producing correct values of its latent variables. <ns0:ref type='bibr' target='#b20'>Hinton et al. (2006)</ns0:ref> proposed that a deep neural network be composed of several restricted Boltzmann machines (RBMs) stacked on top of each other, such that the network can be viewed as both a generative model and a discriminative model. An RBM is an undirected bipartite graph with m visible nodes and n hidden nodes, as depicted in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>. Typically, the domain of the visible and hidden nodes is binary, such that v ∈ {0, 1} m and h ∈ {0, 1} n , respectively, with</ns0:p><ns0:formula xml:id='formula_0'>P(h j = 1|v) = 1 / (1 + e^(−W j v)) and P(v i = 1|h) = 1 / (1 + e^(−W T i h)) ,<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where W ∈ R n×m is the matrix of weights between the visible and hidden nodes. For simplicity, Equation 1 does not include bias nodes for v and h.</ns0:p></ns0:div>
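<ns0:p>As an illustration of Equation 1 and of the 1-step contrastive divergence pretraining referred to later, the following sketch implements a single CD-1 weight update for a bias-free binary RBM in NumPy. It is a minimal, illustrative example only; the layer sizes, learning rate, and batch size are arbitrary assumptions and do not correspond to the exact implementation evaluated in this paper.</ns0:p>
```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(v0, W, lr=0.05, rng=None):
    """One step of contrastive divergence (CD-1) for a bias-free binary RBM.

    v0 : (batch, m) binary visible vectors
    W  : (n, m) weight matrix between hidden and visible units (Equation 1)
    """
    rng = rng or np.random.default_rng(0)
    # P(h = 1 | v) and a binary sample of the hidden units
    ph0 = sigmoid(v0 @ W.T)
    h0 = (rng.random(ph0.shape) < ph0).astype(float)
    # Reconstruct the visible units from the hidden sample: P(v = 1 | h)
    pv1 = sigmoid(h0 @ W)
    v1 = (rng.random(pv1.shape) < pv1).astype(float)
    # Hidden probabilities for the reconstruction
    ph1 = sigmoid(v1 @ W.T)
    # CD-1 gradient estimate: positive phase minus negative phase
    grad = (ph0.T @ v0 - ph1.T @ v1) / v0.shape[0]
    return W + lr * grad

# Toy example: 513 visible units (power spectrum bins), 400 hidden units
rng = np.random.default_rng(0)
W = 0.01 * rng.standard_normal((400, 513))
v = (rng.random((8, 513)) < 0.5).astype(float)
W = cd1_update(v, W, rng=rng)
```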
<ns0:div><ns0:head>Supervised Fine-tuning</ns0:head><ns0:p>The unsupervised pretraining of the stacked RBMs is a relatively efficient method that sets good initial values for the network weights. Moreover, in the case of a supervised learning task such as classification, these pretrained weights must subsequently be fine-tuned with respect to the ground-truth labels.</ns0:p><ns0:p>One method of supervised fine-tuning is to add a layer of output nodes to the network for the purposes of (logistic) regression and to perform standard back-propagation as if the DBN was a multi-layered neural network <ns0:ref type='bibr' target='#b5'>(Bengio, 2009)</ns0:ref>. Rather than creating features from scratch, this fine-tuning method is responsible for modifying the latent features in order to adjust the class boundaries <ns0:ref type='bibr' target='#b18'>(Hinton, 2007)</ns0:ref>.</ns0:p><ns0:p>After fine-tuning the network, a feature vector can be fed forward through the network and a result realized at the output layer. In the context of pitch estimation, the feature vector represents the frequency content of an audio analysis frame and the output layer of the network is responsible for classifying the pitches that are present.</ns0:p></ns0:div>
<ns0:div><ns0:head>ISOLATED INSTRUMENT TRANSCRIPTION</ns0:head><ns0:p>The workflow of the proposed polyphonic transcription algorithm is presented in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>. The algorithm consists of an audio signal preprocessing step, followed by a novel DBN pitch estimation algorithm. The note-tracking component of the polyphonic transcription algorithm uses a combination of the existing frame-smoothing algorithm developed by <ns0:ref type='bibr' target='#b34'>Poliner and Ellis (2007)</ns0:ref> and the existing spectral flux onset estimation algorithm described by <ns0:ref type='bibr' target='#b14'>Dixon (2006)</ns0:ref> to produce a MIDI file. MIDI is a binary file format composed of tracks holding a sequence of note events, which each have an integer pitch from 0-127, a velocity value indicating the intensity of a note, and a tick number indicating when the note event occurs. This sequence of note events is then translated to guitar tablature notation using the graph-search algorithm developed by <ns0:ref type='bibr' target='#b11'>Burlet and Fujinaga (2013)</ns0:ref>.</ns0:p></ns0:div>
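<ns0:p>To make the MIDI output format concrete, the sketch below writes a list of note events (pitch, onset, duration) to a MIDI file. It uses the third-party pretty_midi package purely for illustration; the original implementation does not necessarily use this library, and the example notes and the General MIDI program number are assumptions.</ns0:p>
```python
import pretty_midi

def write_note_events(note_events, path, program=25):
    """Write (midi_pitch, onset_sec, duration_sec) tuples to a MIDI file.

    program=25 is the General MIDI 'Acoustic Guitar (steel)' patch.
    """
    pm = pretty_midi.PrettyMIDI()
    guitar = pretty_midi.Instrument(program=program)
    for pitch, onset, duration in note_events:
        guitar.notes.append(pretty_midi.Note(velocity=100, pitch=pitch,
                                             start=onset, end=onset + duration))
    pm.instruments.append(guitar)
    pm.write(path)

# Example: an E2 note followed by part of an open E-minor chord
write_note_events([(40, 0.0, 0.5), (40, 0.5, 1.0), (47, 0.5, 1.0), (52, 0.5, 1.0)],
                  'transcription.mid')
```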
<ns0:div><ns0:head>Audio Signal Preprocessing</ns0:head><ns0:p>The input audio signal is first preprocessed before feature extraction. If the audio signal is stereo, the channels are averaged to produce a mono audio signal. Then the audio signal is decimated to lower the sampling rate f s by an integer multiple, k ∈ N + . Decimation involves low-pass filtering with a cut-off frequency of f s /2k Hz to mitigate against aliasing, followed by selecting every k th sample from the original signal.</ns0:p></ns0:div>
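<ns0:p>A minimal sketch of this preprocessing step is shown below, using SciPy's decimate function as a stand-in for the Marsyas-based preprocessing used in this work; the decimation factor k = 2 (44100 Hz down to 22050 Hz) is an assumption that matches the sampling rate reported later.</ns0:p>
```python
import numpy as np
from scipy.signal import decimate

def preprocess(audio, k=2):
    """Mix a stereo signal to mono and decimate it by an integer factor k.

    decimate() low-pass filters at roughly fs/(2k) before keeping every
    k-th sample, mirroring the decimation step described above.
    """
    audio = np.asarray(audio, dtype=float)
    if audio.ndim == 2:            # (samples, channels) -> mono
        audio = audio.mean(axis=1)
    return decimate(audio, k, ftype='fir', zero_phase=True)

# Example: one second of a 440 Hz tone at 44100 Hz becomes 22050 samples
fs = 44100
t = np.arange(fs) / fs
mono = preprocess(np.sin(2 * np.pi * 440 * t), k=2)
```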
<ns0:div><ns0:head>Note Pitch Estimation</ns0:head><ns0:p>The structure of the DBN pitch estimation algorithm is presented in Figure <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>. The algorithm extracts features from an analysis window that slides over the audio waveform. The audio features are subsequently fed forward through the deep network, resulting in an array of posterior probabilities used for pitch and polyphony estimation.</ns0:p><ns0:p>First, features are extracted from the input audio signal. The power spectrum of each audio analysis frame is calculated using a Hamming window of size w samples and a hop size of h samples. The power spectrum is calculated by squaring the magnitude of each frequency component of the DFT. Since the power spectrum is mirrored about the Nyquist frequency when processing an audio signal, half of the spectrum is retained, resulting in m = ⌊w/2⌋ + 1 features. The result is a matrix of normalized audio features Φ ∈ [0, 1] n×m , such that n is the number of analysis frames spanning the input signal.</ns0:p><ns0:p>The DBN consumes these normalized audio features; hence, the input layer consists of m nodes. There can be any number of stochastic binary hidden layers, each consisting of any number of nodes. The output layer of the network consists of k + p nodes, where the first k nodes are allocated for pitch estimation and the final p nodes are allocated for polyphony estimation. The network uses a sigmoid activation as the non-linear transfer function.</ns0:p><ns0:p>The feature vectors Φ are fed forward through the network with parameters Θ, resulting in a matrix of probabilities P( Ŷ |Φ, Θ) ∈ [0, 1] n×(k+p) that is then split into a matrix of pitch probabilities P( Ŷ (pitch) |Φ, Θ) and polyphony probabilities P( Ŷ (poly) |Φ, Θ). The polyphony of the i th analysis frame is estimated by selecting the polyphony class with the highest probability using the equation</ns0:p><ns0:formula xml:id='formula_1'>ρ i = argmax j P( Ŷ (poly) i j |Φ i , Θ) .<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>Pitch estimation is performed using a multi-label learning technique similar to the MetaLabeler system <ns0:ref type='bibr' target='#b43'>(Tang et al., 2009)</ns0:ref>, which trains a multi-class classifier for label cardinality estimation using the output values of the original label classifier as features. Instead of using the matrix of pitch probabilities as features for a separate polyphony classifier, increased recall was noted by training the polyphony classifier alongside the pitch classifier using the original audio features. Formally, the pitches sounding in the i th analysis frame are estimated by selecting the indices of the ρ i highest pitch probabilities produced by the DBN. With these estimates, the corresponding vector of pitch probabilities is converted to a binary vector Ŷ (pitch) i ∈ {0, 1} k by turning on bits that correspond to the ρ i highest pitch probabilities.</ns0:p><ns0:p>For training and testing the algorithm, a set of pitch and polyphony labels is calculated for each audio analysis frame using an accompanying ground-truth MIDI file. A matrix of pitch annotations</ns0:p><ns0:formula xml:id='formula_3'>Y (pitch) ∈ {0, 1} n×k ,</ns0:formula><ns0:p>where k is the number of considered pitches, is computed such that an enabled bit indicates the presence of a pitch.
A matrix of polyphony annotations Y (poly) ∈ {0, 1} n×p , where p is the maximum frame-wise polyphony, is also computed such that a row is a one-hot binary vector in which the enabled bit indicates the polyphony of the frame. These matrices are horizontally concatenated to form the final matrix Y ∈ {0, 1} n×(k+p) of training and testing labels.</ns0:p><ns0:p>The deep belief network is trained using a modified version of the greedy layer-wise algorithm described by <ns0:ref type='bibr' target='#b20'>Hinton et al. (2006)</ns0:ref>. Pretraining is performed by stacking a series of restricted Boltzmann machines and sequentially training each in an unsupervised manner using 1-step contrastive divergence <ns0:ref type='bibr' target='#b5'>(Bengio, 2009)</ns0:ref>. Instead of using the 'up-down' fine-tuning algorithm proposed by <ns0:ref type='bibr' target='#b20'>Hinton et al. (2006)</ns0:ref>, the layer of output nodes is treated as a set of logistic regressors and standard backpropagation is conducted on the network. Rather than creating features from scratch, this fine-tuning method is responsible for modifying the latent features in order to adjust the class boundaries <ns0:ref type='bibr' target='#b18'>(Hinton, 2007)</ns0:ref>.</ns0:p><ns0:p>The canonical error function to be minimized for a set of separate pitch and polyphony binary classifications is the cross-entropy error function, which forms the training signal used for backpropagation:</ns0:p><ns0:formula xml:id='formula_4'>E(Θ) = − ∑ i=1..n ∑ j=1..k+p [ Y i j ln P( Ŷ i j |Φ i , Θ) + (1 − Y i j ) ln(1 − P( Ŷ i j |Φ i , Θ)) ]<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>The aim of this objective function is to adjust the network weights Θ to pull output node probabilities closer to one for ground-truth label bits that are on and pull probabilities closer to zero for bits that are off.</ns0:p><ns0:p>The described pitch estimation algorithm was implemented using the Theano numerical computation library for Python <ns0:ref type='bibr' target='#b7'>(Bergstra et al., 2010)</ns0:ref>. Computations for network training and testing are parallelized on the graphics processing unit (GPU). Feature extraction and audio signal preprocessing are performed using Marsyas, a software framework for audio signal processing and analysis <ns0:ref type='bibr' target='#b46'>(Tzanetakis and Cook, 2000)</ns0:ref>.</ns0:p></ns0:div>
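<ns0:p>The frame-wise decoding described above (Equation 2 followed by selecting the ρ i highest pitch probabilities) can be summarized by the following sketch. The array shapes mirror the definitions above, but the helper name and the toy probabilities are illustrative assumptions, and the sketch assumes that polyphony class j corresponds to j sounding pitches, with class 0 denoting silence.</ns0:p>
```python
import numpy as np

def decode_frames(probs, k, p):
    """Convert DBN output probabilities into binary pitch estimates.

    probs : (n, k + p) matrix of output probabilities, where the first k
            columns are pitch probabilities and the last p columns are a
            one-of-p polyphony distribution.
    """
    pitch_probs = probs[:, :k]
    poly_probs = probs[:, k:]
    y_pitch = np.zeros_like(pitch_probs, dtype=int)
    for i in range(probs.shape[0]):
        rho = int(np.argmax(poly_probs[i]))          # Equation 2
        if rho > 0:
            top = np.argsort(pitch_probs[i])[-rho:]  # rho highest pitch probabilities
            y_pitch[i, top] = 1
    return y_pitch

# Toy example with k = 4 pitches and p = 3 polyphony classes (0, 1, or 2 notes)
probs = np.array([[0.9, 0.1, 0.8, 0.2, 0.0, 0.1, 0.9],
                  [0.1, 0.2, 0.1, 0.1, 0.9, 0.1, 0.0]])
print(decode_frames(probs, k=4, p=3))   # [[1 0 1 0], [0 0 0 0]]
```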
<ns0:div><ns0:head>Note Tracking</ns0:head><ns0:p>Although frame-level pitch estimates are essential for transcription, converting these estimates into note events with an onset and duration is not a trivial task. The purpose of note tracking is to process these pitch estimates and determine when a note onsets and offsets.</ns0:p></ns0:div>
<ns0:div><ns0:head>Frame-level Smoothing</ns0:head><ns0:p>The frame-smoothing algorithm developed by <ns0:ref type='bibr' target='#b34'>Poliner and Ellis (2007)</ns0:ref> is used to postprocess the DBN pitch estimates Ŷ (pitch) for an input audio signal. The algorithm allows a frame-level pitch estimate to be contextualized amongst its neighbours instead of solely trusting the independent estimates made by a classification algorithm.</ns0:p><ns0:p>Formally, the frame-smoothing algorithm operates by training an HMM for each pitch. Each HMM consists of two hidden states: ON and OFF. The transition probabilities are computed by observing the frequency with which a pitch transitions between and within the ON and OFF states across analysis frames. The emission distribution is a Bernoulli distribution that models the certainty of each frame-level estimate and is represented using the pitch probabilities P( Ŷ (pitch) |Φ, Θ). The output of the Viterbi algorithm, which searches for the optimal underlying state sequence, is a revised binary vector of activation estimates for a single pitch. Concatenating the results of each HMM results in a revised matrix of pitch estimates Ŷ (pitch) .</ns0:p></ns0:div>
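<ns0:p>The following sketch illustrates the two-state smoothing idea for a single pitch: the DBN probabilities act as Bernoulli emission probabilities and the Viterbi algorithm recovers the most likely ON/OFF sequence. It is a simplified, illustrative reading of the frame-smoothing procedure, not the exact implementation of <ns0:ref type='bibr' target='#b34'>Poliner and Ellis (2007)</ns0:ref>; the initial state distribution and the transition matrix shown are assumptions.</ns0:p>
```python
import numpy as np

def viterbi_smooth(frame_probs, trans):
    """Smooth one pitch's frame probabilities with a two-state (OFF=0, ON=1) HMM.

    frame_probs : (n,) DBN probability that the pitch is ON in each frame,
                  used as the Bernoulli emission model.
    trans       : (2, 2) transition matrix, trans[s, t] = P(state t | state s),
                  estimated by counting ON/OFF transitions in training labels.
    """
    n = len(frame_probs)
    emit = np.clip(np.column_stack([1.0 - frame_probs, frame_probs]), 1e-12, 1.0)
    log_delta = np.log([0.5, 0.5]) + np.log(emit[0])
    back = np.zeros((n, 2), dtype=int)
    for i in range(1, n):
        scores = log_delta[:, None] + np.log(trans)   # (previous state, next state)
        back[i] = scores.argmax(axis=0)
        log_delta = scores.max(axis=0) + np.log(emit[i])
    # Backtrace the most likely ON/OFF state sequence
    states = np.zeros(n, dtype=int)
    states[-1] = int(log_delta.argmax())
    for i in range(n - 1, 0, -1):
        states[i - 1] = back[i, states[i]]
    return states

# Example: an isolated one-frame activation is smoothed away
probs = np.array([0.1, 0.15, 0.9, 0.2, 0.1])
trans = np.array([[0.95, 0.05], [0.10, 0.90]])
print(viterbi_smooth(probs, trans))   # [0 0 0 0 0]
```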
<ns0:div><ns0:head>Onset Quantization</ns0:head><ns0:p>If the HMM frame-smoothing algorithm claims a pitch arises within an analysis frame, it could onset at any time within the window. Arbitrarily setting the note onset time to occur at the beginning of the window often results in 'choppy' sounding transcriptions. In response, the onset detection algorithm that uses spectral flux measurements between analysis frames <ns0:ref type='bibr' target='#b14'>(Dixon, 2006</ns0:ref>) is run at a finer time resolution to pinpoint the exact note onset time. The onset detection algorithm is run on the original, undecimated audio signal with a window size of 2048 samples and a hop size of 512 samples. When writing the note event estimates as a MIDI file, the onset times calculated by this algorithm are used. The offset time is calculated by following the pitch estimate across consecutive analysis frames until it transitions from ON to OFF, at which point the time stamp of the end of this analysis frame is used. Note events spanning less than two audio analysis frames are removed from the transcription to mitigate against spurious notes.</ns0:p><ns0:p>Output of the polyphonic transcription algorithm at each stage-from feature extraction to DBN pitch estimation to frame smoothing and quantization (note tracking)-is displayed in Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref> for a four-second segment of a synthesized guitar recording. The pitch probabilities output by the DBN show that the classifier is quite certain about its estimates; there are few grey areas indicating indecision.</ns0:p></ns0:div>
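<ns0:p>A compact sketch of spectral-flux onset detection in the spirit of <ns0:ref type='bibr' target='#b14'>Dixon (2006)</ns0:ref> is given below: the half-wave-rectified increase in magnitude between consecutive STFT frames is peak-picked against a local mean. The window and hop sizes match the values quoted above, but the peak-picking rule and threshold are simplifications and assumptions rather than the exact published algorithm.</ns0:p>
```python
import numpy as np

def spectral_flux_onsets(x, fs, win=2048, hop=512, threshold=1.5):
    """Estimate note onset times (seconds) from a mono signal via spectral flux."""
    window = np.hamming(win)
    n_frames = 1 + (len(x) - win) // hop
    mags = np.array([np.abs(np.fft.rfft(window * x[i * hop:i * hop + win]))
                     for i in range(n_frames)])
    # Half-wave rectified increase in magnitude between consecutive frames
    flux = np.sum(np.maximum(mags[1:] - mags[:-1], 0.0), axis=1)
    flux = flux / (flux.max() + 1e-12)
    onsets = []
    for i in range(1, len(flux) - 1):
        local_mean = flux[max(0, i - 3):i + 4].mean()
        if flux[i] > flux[i - 1] and flux[i] >= flux[i + 1] and flux[i] > threshold * local_mean:
            onsets.append((i + 1) * hop / fs)   # flux[i] compares frames i and i + 1
    return onsets

# Example: a frequency change at 0.5 s yields one or more onset estimates near 0.5 s
fs = 22050
t = np.arange(fs) / fs
x = np.where(t < 0.5, np.sin(2 * np.pi * 440 * t), np.sin(2 * np.pi * 660 * t))
print(spectral_flux_onsets(x, fs))
```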
<ns0:div><ns0:head>Music Notation Arrangement</ns0:head><ns0:p>The MIDI file output by the algorithm thus far contains the note event (pitch, onset, and duration) transcriptions of an audio recording. However, a MIDI file lacks certain information necessary to write sheet music in common western music notation such as time signature, key signature, clef type, and the value (duration) of each note described in divisions of a whole note.</ns0:p><ns0:p>There are several robust open-source programs that derive this missing information from a MIDI file using logic and heuristics in order to generate common western music notation that is digitally encoded in the MusicXML file format. MusicXML is a standardized extensible markup language (XML) definition allowing digital symbolic music notation to be universally encoded and parsed by music applications.</ns0:p><ns0:p>In this work, the command line tools shipped with the open-source application MuseScore are used to convert MIDI to common western music notation encoded in the MusicXML file format. 2 The graph-based guitar tablature arrangement algorithm developed by <ns0:ref type='bibr' target='#b11'>Burlet and Fujinaga (2013)</ns0:ref> is used to append a guitar string and fret combination to each note event encoded in a MusicXML transcription file. The guitar tablature arrangement algorithm operates by using Dijkstra's algorithm to search for the shortest path through a directed weighted graph, in which the vertices represent candidate string and fret combinations for a note or chord, as displayed in Figure <ns0:ref type='figure'>6</ns0:ref>.</ns0:p><ns0:p>The edge weights between nodes in the graph indicate the biomechanical difficulty of transitioning between fretting-hand positions. Three biomechanical complexity factors are aggregated to form each edge weight: the fret-wise distance required to transition between notes or chords, the fret-wise finger span required to perform chords, and a penalty of one if the fretting hand surpasses the seventh fret. The value of this penalty and fret threshold number were determined through subjective analysis of the resulting tablature arrangements. In the event that a note is followed by a chord, the fret-wise distance is calculated by the expression</ns0:p><ns0:formula xml:id='formula_5'>f − (max(g) − min(g))/2 ,<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>such that f ∈ N is the fret number used to perform the note and g is a vector of fret numbers used to perform each note in the chord.</ns0:p></ns0:div>
<ns0:div><ns0:p>Figure <ns0:ref type='figure'>6</ns0:ref>. A directed acyclic graph of string and fret candidates for a note and chord followed by two more notes. Weights have been omitted for clarity. The notation for each node is (string number, fret number).</ns0:p><ns0:p>For more detailed information regarding the formulation of this graph, please refer to the conference proceeding of <ns0:ref type='bibr' target='#b11'>Burlet and Fujinaga (2013)</ns0:ref> or the thesis of <ns0:ref type='bibr' target='#b9'>Burlet (2013)</ns0:ref>.</ns0:p></ns0:div>
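<ns0:p>The sketch below illustrates the shortest-path idea behind tablature arrangement for a monophonic sequence of pitches: each note expands into candidate (string, fret) positions and a dynamic-programming search selects the path with the least fretting-hand movement. The tuning, edge weights, and restriction to single notes are illustrative assumptions; the sketch is a simplification of, not a reproduction of, the <ns0:ref type='bibr' target='#b11'>Burlet and Fujinaga (2013)</ns0:ref> algorithm.</ns0:p>
```python
STANDARD_TUNING = [40, 45, 50, 55, 59, 64]   # MIDI pitches of open strings E A D G B E

def candidates(pitch, frets=22):
    """All (string, fret) positions that produce a given MIDI pitch."""
    return [(s, pitch - open_pitch) for s, open_pitch in enumerate(STANDARD_TUNING)
            if 0 <= pitch - open_pitch <= frets]

def arrange(pitches):
    """Choose one (string, fret) per note by a shortest-path dynamic program.

    The edge weight is the fret-wise distance between consecutive positions,
    plus a small penalty above the 7th fret (an illustrative cost only).
    """
    def cost(prev_fret, fret):
        return abs(fret - prev_fret) + (1 if fret > 7 else 0)

    layers = [candidates(p) for p in pitches]
    best = {pos: (0, [pos]) for pos in layers[0]}
    for layer in layers[1:]:
        new_best = {}
        for pos in layer:
            prev_pos, (c, path) = min(best.items(),
                                      key=lambda kv: kv[1][0] + cost(kv[0][1], pos[1]))
            new_best[pos] = (c + cost(prev_pos[1], pos[1]), path + [pos])
        best = new_best
    return min(best.values())[1]

# Example: an ascending phrase E3 G3 A3 B3 stays around the open position
print(arrange([52, 55, 57, 59]))   # e.g. [(2, 2), (3, 0), (3, 2), (3, 4)]
```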
<ns0:div><ns0:head>Note Pitch Estimation Metrics</ns0:head><ns0:p>Given the pitch estimates output by the DBN pitch estimation algorithm for n audio analysis frames, Ŷ (pitch) ∈ {0, 1} n×k , and the ground-truth pitch label matrix for the same audio analysis frames, Y (pitch) ∈ {0, 1} n×k , the following metrics can be computed:</ns0:p><ns0:formula xml:id='formula_6'>Precision: p = 1( Ŷ (pitch) & Y (pitch) )1 / 1 Ŷ (pitch) 1 ,<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>such that the logical operator & denotes the element-wise AND of two binary matrices and 1 indicates a vector of ones. In other words, this equation calculates the number of correct pitch estimates divided by the number of pitches the algorithm predicts are present across the audio analysis frames.</ns0:p><ns0:p>Recall:</ns0:p><ns0:formula xml:id='formula_7'>r = 1( Ŷ (pitch) & Y (pitch) )1 / 1Y (pitch) 1 ,<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>such that the logical operator & denotes the element-wise AND of two binary matrices and 1 indicates a vector of ones. In other words, this equation calculates the number of correct pitch estimates divided by the number of ground-truth pitches that are active across the audio analysis frames.</ns0:p><ns0:p>f -measure:</ns0:p><ns0:formula xml:id='formula_8'>f = 2pr / (p + r) ,<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>such that p and r are the precision and recall calculated using Equation 5 and Equation <ns0:ref type='formula' target='#formula_7'>6</ns0:ref>, respectively. The f -measure calculated in Equation <ns0:ref type='formula' target='#formula_8'>7</ns0:ref> is the balanced f -score, which is the harmonic mean of precision and recall. In other words, precision and recall are weighted evenly.</ns0:p><ns0:p>Polyphony recall:</ns0:p><ns0:formula xml:id='formula_9'>r poly = (1/n) ∑ i=1..n 1{( Ŷ (pitch) 1) i = (Y (pitch) 1) i } ,<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>such that 1{•} is an indicator function that returns 1 if the predicate is true, and n is the number of audio analysis frames being evaluated. In other words, this equation calculates the number of correct polyphony estimates across all audio analysis frames divided by the number of analysis frames.</ns0:p></ns0:div>
<ns0:div><ns0:p>One error: given the matrix of pitch probabilities P( Ŷ (pitch) |Φ, Θ) ∈ [0, 1] n×k output by the DBN with model parameters Θ when processing the input audio analysis frame features Φ, the predominant pitch of the i th audio analysis frame is calculated using the equation</ns0:p><ns0:formula xml:id='formula_10'>ĵ = argmax j P( Ŷ (pitch) i j |Φ i , Θ) ,<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>which can then be used to calculate the one error:</ns0:p><ns0:formula xml:id='formula_11'>one err = (1/n) ∑ i=1..n 1{Y (pitch) i ĵ ≠ 1} ,<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>such that 1{•} is an indicator function that maps to 1 if the predicate is true. The one error calculates the fraction of analysis frames in which the top-ranked label is not present in the ground-truth label set. In the context of pitch estimation, this metric provides insight into the number of audio analysis frames where the predominant pitch-often referred to as the melody-is estimated incorrectly.</ns0:p><ns0:p>Hamming loss:</ns0:p><ns0:formula xml:id='formula_12'>hamming loss = 1( Ŷ (pitch) ⊕ Y (pitch) )1 / nk ,<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>such that n is the number of audio analysis frames, k is the cardinality of the label set for each analysis frame, and the boolean operator ⊕ denotes the element-wise XOR of two binary matrices. The hamming loss provides insight into the number of false positive and false negative pitch estimates across the audio analysis frames.</ns0:p></ns0:div>
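<ns0:p>The frame-level metrics defined in Equations 5-11 reduce to a few array operations; the sketch below computes them for binary pitch matrices. The variable names and the toy matrices are illustrative assumptions.</ns0:p>
```python
import numpy as np

def frame_metrics(Y_hat, Y, P_hat):
    """Frame-level multi-label metrics for binary pitch matrices (n x k).

    Y_hat : binary pitch estimates, Y : ground-truth labels,
    P_hat : pitch probabilities used for the one error (Equations 9-10).
    """
    n, k = Y.shape
    tp = np.logical_and(Y_hat, Y).sum()
    p = tp / max(Y_hat.sum(), 1)                          # Equation 5
    r = tp / max(Y.sum(), 1)                              # Equation 6
    f = 2 * p * r / max(p + r, 1e-12)                     # Equation 7
    r_poly = np.mean(Y_hat.sum(axis=1) == Y.sum(axis=1))  # Equation 8
    top = P_hat.argmax(axis=1)                            # predominant pitch per frame
    one_err = np.mean(Y[np.arange(n), top] != 1)          # Equation 10
    hamming = np.logical_xor(Y_hat, Y).sum() / (n * k)    # Equation 11
    return p, r, f, r_poly, one_err, hamming

# Tiny example: two frames, three pitches
Y     = np.array([[1, 0, 1], [0, 1, 0]])
Y_hat = np.array([[1, 0, 0], [0, 1, 1]])
P_hat = np.array([[0.9, 0.1, 0.4], [0.2, 0.8, 0.6]])
print(frame_metrics(Y_hat, Y, P_hat))
```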
<ns0:div><ns0:head>Polyphonic Transcription Metrics</ns0:head><ns0:p>Several information retrieval metrics are also used to evaluate the note event estimates produced by the polyphonic transcription algorithm described in the previous section, which consists of a note pitch estimation algorithm followed by a note temporal estimation algorithm. Given an input audio recording, the polyphonic transcription algorithm outputs a set of note event estimates in the form of a MIDI file. A corresponding ground-truth MIDI file contains the set of true note events for the audio recording. Each note event contains three pieces of information: pitch, onset time, and offset time.</ns0:p><ns0:p>The music information retrieval evaluation exchange (MIREX), an annual evaluation of MIR algorithms, has a multiple fundamental frequency estimation and note tracking category in which polyphonic transcription algorithms are evaluated. The MIREX metrics used to evaluate polyphonic transcription algorithms are:</ns0:p><ns0:formula xml:id='formula_13'>Precision: p = | N̂ ∩ N| / | N̂| ,<ns0:label>(12)</ns0:label></ns0:formula><ns0:p>such that N̂ is the set of estimated note events and N is the set of ground-truth note events.</ns0:p><ns0:p>Recall:</ns0:p><ns0:formula xml:id='formula_15'>r = | N̂ ∩ N| / |N| ,<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>such that N̂ is the set of estimated note events and N is the set of ground-truth note events.</ns0:p><ns0:p>f -measure:</ns0:p><ns0:formula xml:id='formula_16'>f = 2pr / (p + r) ,<ns0:label>(14)</ns0:label></ns0:formula><ns0:p>such that p and r are calculated using Equation 12 and Equation <ns0:ref type='formula' target='#formula_15'>13</ns0:ref>, respectively.</ns0:p><ns0:p>The criteria for a note event being correct, as compared to a ground-truth note event, are as follows:</ns0:p><ns0:p>• The pitch name and octave number of the note event estimate and ground-truth note event must be equivalent.</ns0:p><ns0:p>• The note event estimate's onset time is within ±250ms of the ground-truth note event's onset time.</ns0:p><ns0:p>• Only one ground-truth note event can be associated with each note event estimate.</ns0:p><ns0:p>The offset time of a note event is not considered in the evaluation process because offset times exhibit less perceptual importance than note onset times <ns0:ref type='bibr' target='#b12'>(Costantini et al., 2009)</ns0:ref>.</ns0:p><ns0:p>Each of these evaluation metrics can also be calculated under the condition that octave errors are ignored. Octave errors occur when the algorithm predicts the correct pitch name but incorrectly predicts the octave number. Octave errors are prevalent in digital signal processing fundamental frequency estimation algorithms because high-energy harmonics can be misconstrued as a fundamental frequency, resulting in an incorrect estimate of the octave number <ns0:ref type='bibr' target='#b29'>(Maher and Beauchamp, 1994)</ns0:ref>. Reporting the evaluation metrics described in this section under the condition that octave errors are ignored will reveal whether machine learning transcription algorithms also succumb to a high number of octave errors.</ns0:p></ns0:div>
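<ns0:p>The note-event criteria above can be scored with a simple matching procedure, sketched below: an estimate is counted as correct if an unmatched ground-truth note has the same pitch and an onset within ±250 ms. The greedy matching strategy and data layout are illustrative assumptions rather than the official MIREX evaluation code.</ns0:p>
```python
def note_event_scores(estimates, ground_truth, tol=0.25):
    """Precision, recall, and f-measure for (midi_pitch, onset_sec) note events.

    A ground-truth note can be matched by at most one estimate, the pitch
    must be identical, and onsets must agree within +/- tol seconds
    (offsets are ignored, as described above).
    """
    unmatched = list(ground_truth)
    correct = 0
    for pitch, onset in estimates:
        for gt in unmatched:
            if gt[0] == pitch and abs(gt[1] - onset) <= tol:
                unmatched.remove(gt)
                correct += 1
                break
    p = correct / len(estimates) if estimates else 0.0
    r = correct / len(ground_truth) if ground_truth else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# Example: one of two estimates matches the single ground-truth note
print(note_event_scores([(64, 1.02), (60, 3.0)], [(64, 1.0)]))  # (0.5, 1.0, 0.667)
```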
<ns0:div><ns0:head>TRANSCRIPTION EVALUATION</ns0:head><ns0:p>The polyphonic transcription algorithm described in this paper is evaluated on a new dataset of synthesized guitar recordings. Before processing these guitar recordings, the number of pitches k and maximum polyphony p of the instrument must first be calculated in order to construct the DBN. Knowing that the input instrument is a guitar with six strings, the pitch estimation algorithm considers the k = 51 pitches from C2-D6, which spans the lowest note capable of being produced by a guitar in Drop C tuning to the highest note capable of being produced by a 22-fret guitar in Standard tuning. Though a guitar with six strings is only capable of producing six notes simultaneously, a chord transition may occur within a frame and so the maximum polyphony may increase above this bound. This is a technical side effect of a sliding-window analysis of the audio signal. Therefore, the maximum frame-wise polyphony is calculated from the training dataset using the equation</ns0:p><ns0:formula xml:id='formula_17'>p = max i ( Y (pitch) 1 ) i + 1 ,<ns0:label>(15)</ns0:label></ns0:formula><ns0:p>where 1 is a vector of ones. The addition of one to the maximum polyphony is to accommodate silence where no pitches sound in an analysis frame.</ns0:p><ns0:p>The experiments outlined in this section will evaluate the accuracy of pitch estimates output by the DBN across each audio analysis frame as well as the accuracy of note events output by the entire polyphonic transcription algorithm. A formal evaluation of the guitar tablature arrangement algorithm used in this work has already been conducted <ns0:ref type='bibr' target='#b11'>(Burlet and Fujinaga, 2013)</ns0:ref>.</ns0:p></ns0:div>
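<ns0:p>Equation 15 amounts to taking the largest row sum of the training label matrix and adding one class for silence, as in the following small sketch (the label matrix shown is an assumed toy example).</ns0:p>
```python
import numpy as np

# Toy pitch label matrix: 4 analysis frames x 5 pitches
Y_pitch = np.array([[0, 0, 0, 0, 0],
                    [1, 0, 0, 0, 0],
                    [1, 0, 1, 0, 1],
                    [1, 1, 0, 0, 0]])

# Equation 15: maximum frame-wise polyphony plus one class for silence
p = int(Y_pitch.sum(axis=1).max()) + 1
print(p)  # 4 -> polyphony classes 0 (silence) through 3
```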
<ns0:div><ns0:head>Ground-truth Dataset</ns0:head><ns0:p>Ideally, the note pitch estimation algorithm proposed in this work should be trained and tested using recordings of acoustic or electric guitars that are subsequently hand-annotated with the note events being performed. In practice, however, it would be expensive to fund the compilation of such a dataset and there is a risk of annotation error. Unlike polyphonic piano transcription datasets that are often created using a mechanically controlled piano, such as a Yamaha Disklavier, to generate acoustic recordings that are time aligned with note events in a MIDI file, mechanized guitars are not widely available. Therefore, the most feasible course of action for compiling a polyphonic guitar transcription dataset is to synthesize a set of ground-truth note events using an acoustic model of a guitar.</ns0:p><ns0:p>Using the methodology proposed by <ns0:ref type='bibr' target='#b11'>Burlet and Fujinaga (2013)</ns0:ref>, a ground-truth dataset of 45 synthesized acoustic guitar recordings paired with MIDI note-event annotations was compiled. The dataset was created by harvesting the abundance of crowdsourced guitar transcriptions uploaded to www.ultimate-guitar.com as tablature files that are manipulated by the Guitar Pro desktop application. 3 The transcriptions in the ground-truth dataset were selected by searching for the keyword 'acoustic', filtering results to those that have been rated by the community as 5 out of 5 stars, and selecting those that received the greatest number of ratings and views. The dataset consists of songs by artists ranging from The Beatles, Eric Clapton, and Neil Young to Led Zeppelin, Metallica, and Radiohead.</ns0:p></ns0:div>
<ns0:div><ns0:p>Each Guitar Pro file was manually preprocessed to remove extraneous instrument tracks other than guitar, remove repeated bars, trim recurring musical passages, and remove note ornamentations such as dead notes, palm muting, harmonics, pitch bends, and vibrato. The guitar models used for note synthesis were a Martin & Co. acoustic guitar with steel strings, an acoustic guitar with nylon strings, and an electric guitar. Finally, each Guitar Pro file is synthesized as a WAV file and also exported as a MIDI file, which captures the note events occurring in the guitar track. The MIDI files in the ground-truth dataset are publicly available on archive.org. 4 The amount of time required to manually preprocess a Guitar Pro tablature transcription and convert it into the necessary data format ranges from 30 minutes to 2.5 hours, depending on the complexity of the musical passage.</ns0:p><ns0:p>In total the dataset consists of approximately 104 minutes of audio, an average tempo of 101 beats per minute, 44436 notes, and an average polyphony of 2.34. The average polyphony is calculated by dividing the number of note events by the number of chords plus the number of individual notes. The distribution of note pitches in the dataset is displayed in Figure <ns0:ref type='figure' target='#fig_10'>7</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Algorithm Parameter Selection</ns0:head><ns0:p>Before training and evaluating the described polyphonic transcription algorithm on the ground-truth dataset, several preliminary experiments were conducted to select reasonable parameters for the algorithm. Each preliminary experiment involved the following variables: audio sampling rate (Hz), window size (samples), sliding window hop size (samples), number of network hidden layers, number of nodes per hidden layer, and input features: either the power spectrum or Mel frequency cepstral coefficient (MFCC) features, which are often used in the field of speech recognition <ns0:ref type='bibr' target='#b17'>(Hinton et al., 2012)</ns0:ref>. Each preliminary experiment selected one independent variable, while the other variables remained controlled.</ns0:p><ns0:p>The dependent variable was the standard information retrieval metric of f -measure, which gauges the accuracy of the pitch estimates produced by the DBN over all audio analysis frames. For these preliminary experiments, the ground-truth dataset was partitioned into two sets, such that roughly 80% of the guitar recordings are allocated for training and 20% are allocated for model validation.</ns0:p><ns0:p>The results of the preliminary experiments with the proposed transcription system revealed that a sampling rate of 22050 Hz, a window size of 1024 samples, a hop size of 768 samples, a network structure of 400 nodes in the first hidden layer followed by 300 nodes in the penultimate layer, and power spectrum input features yielded optimal results. For network pretraining, 400 epochs were conducted with a learning rate of 0.05 using 1-step contrastive divergence with a batch size of 1000 training instances. For network fine-tuning, 30000 epochs were conducted with a learning rate of 0.05 and a batch size of 1000 training instances. The convergence threshold, which ceases training if the value of the objective function between epochs does not fluctuate more than the threshold, is set to 1E − 18 for both pretraining and fine-tuning. These algorithm parameters are used in the experiments detailed in the following sections.</ns0:p><ns0:p>The experiments conducted in this paper were run on a machine with an Intel Core i7 3.07 GHz quad-core CPU, 24 GB of RAM, and an Nvidia GeForce GTX 970 GPU with 1664 CUDA cores. For more details regarding these preliminary experiments, consult the thesis of <ns0:ref type='bibr' target='#b10'>Burlet (2015)</ns0:ref>.</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>. 5-fold cross validation results of the frame-level pitch estimation evaluation metrics on acoustic guitar with steel strings: r poly denotes the polyphony recall, p denotes precision, r denotes recall, and f denotes f -measure.</ns0:p></ns0:div>
<ns0:div><ns0:head>Frame-level Pitch Estimation Evaluation</ns0:head><ns0:p>5-fold cross validation is used to split the songs in the compiled ground-truth dataset into 5 sets of training and testing partitions. For each fold, the transcription algorithm is trained using the parameters detailed in the previous section. After training, the frame-level pitch estimates computed by the DBN are evaluated for each fold using the following standard multi-label learning metrics <ns0:ref type='bibr' target='#b49'>(Zhang and Zhou, 2014)</ns0:ref>: precision (p), recall (r), f -measure ( f ), one error, and hamming loss. The precision calculates the number of correct pitch estimates divided by the number of pitches the algorithm predicts are present across the audio analysis frames. The recall calculates the number of correct pitch estimates divided by the number of ground-truth pitches that are active across the audio analysis frames. The f -measure refers to the balanced f -score, which is the harmonic mean of precision and recall. The one error provides insight into the number of audio analysis frames where the predominant pitch is estimated incorrectly.</ns0:p><ns0:p>The hamming loss provides insight into the number of false positive and false negative pitch estimates across the audio analysis frames. In addition, the frame-level polyphony recall (r poly ) is calculated to evaluate the accuracy of polyphony estimates made by the DBN.</ns0:p><ns0:p>Using the ground-truth dataset, pretraining the DBN took an average of 172 minutes while fine-tuning took an average of 246 minutes across each fold. After training, the network weights are saved so that they can be reused for future transcriptions. The DBN took an average of 0.26 seconds across each fold to yield pitch estimates for the songs in the test partitions. The results of the DBN pitch estimation algorithm are averaged across the 5 folds and presented in Table <ns0:ref type='table'>1</ns0:ref> for multiple guitar models.</ns0:p><ns0:p>After HMM frame smoothing the results substantially improve with a precision of 0.81, a recall of 0.74, and an f -measure of 0.77. Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref> provides visual evidence of the positive impact of HMM frame smoothing on the frame-level DBN pitch estimates, showing the removal of several spurious note events.</ns0:p><ns0:p>The results reveal that the 55% polyphony estimation accuracy likely hinders the frame-level f -measure of the pitch estimation algorithm. Investigating further, when using the ground-truth polyphony for each audio analysis frame, an f -measure of 0.76 is noted before HMM smoothing. The 5% increase in f -measure reveals that the polyphony estimates are close to their ground-truth value. With respect to the one error, the results reveal that the DBN's belief of the predominant pitch-the label with the highest probability-is incorrect in only 13% of the analysis frames.</ns0:p></ns0:div>
<ns0:div><ns0:head>Note Event Evaluation</ns0:head><ns0:p>Although evaluating the pitch estimates made by the algorithm for each audio analysis frame provides vital insight into the performance of the algorithm, we can continue with an evaluation of the final note events output by the algorithm. After HMM smoothing the frame-level pitch estimates computed by the DBN, onset quantization is performed and a MIDI file, which encodes the pitch, onset time, and duration of note events, is written. An evaluation procedure similar to the MIREX note tracking task, a yearly competition that evaluates polyphonic transcription algorithms developed by different research institutions on the same dataset, is conducted using the metrics of precision, recall, and f -measure. 5</ns0:p><ns0:p>Relative to a ground-truth note event, an estimate is considered correct if its onset time is within 250ms and its pitch is equivalent. The accuracy of note offset times is not considered because offset times exhibit less perceptual importance than note onset times <ns0:ref type='bibr' target='#b12'>(Costantini et al., 2009)</ns0:ref>. A ground-truth note event can only be associated with a single note event estimate. Given the long decay of a guitar note, we relied on the MIDI transcription as ground truth and the threshold to determine when a note had ended.</ns0:p><ns0:p>Table <ns0:ref type='table'>2</ns0:ref>. 5-fold cross validation results of the precision, recall, and f -measure evaluation of note events transcribed using the DBN transcription algorithm compared to the <ns0:ref type='bibr' target='#b51'>Zhou et al. (2009)</ns0:ref> transcription algorithm. The first row includes octave errors while the second row excludes octave errors. Audio was generated from the Guitar Pro acoustic guitar with steel strings model.</ns0:p><ns0:p>Non-pitched transitions between notes and chords were not explicitly addressed, and would depend on the song and the transcription quality.</ns0:p><ns0:p>These metrics of precision, recall, and f -measure are calculated on the test partition within each of the 5 folds used for cross validation. Table <ns0:ref type='table'>2</ns0:ref> presents the results of the polyphonic transcription algorithm averaged across each fold. The result of this evaluation is an average f -measure of 0.67 when considering note octave errors and an average f -measure of 0.69 when disregarding note octave errors. Octave errors occur when the algorithm predicts the correct note pitch name but incorrectly predicts the note octave number. An approximately 2% increase in f -measure when disregarding octave errors provides evidence that the transcription algorithm does not often mislabel the octave number of note events, which is often a problem with digital signal processing transcription algorithms <ns0:ref type='bibr' target='#b29'>(Maher and Beauchamp, 1994)</ns0:ref>. Note that the frame-level pitch estimation f -measure of 0.77, presented in Table <ns0:ref type='table'>1</ns0:ref>, does not translate to an equivalently high f -measure for note events because onset time is considered in the evaluation criteria as well as pitch.</ns0:p><ns0:p>Another interesting property of the transcription algorithm is its conservativeness: the precision of the note events transcribed by the algorithm is 0.81 while the recall is 0.60, meaning that the algorithm favours false negatives over false positives.
In other words, the transcription algorithm includes a note event in the final transcription only if it is quite certain of the note's correctness, even if this hinders the recall of the algorithm. Another cause of the high precision and low recall is that when several guitar strums occur quickly in succession, the implemented transcription algorithm often transcribes only the first chord and prescribes it a long duration. This is likely a result of the temporally 'coarse' window size of 1024 samples or a product of the HMM frame-smoothing algorithm, which may extend the length of notes causing them to 'bleed' into each other. A remedy for this issue is to lower the window size to increase temporal resolution; however, this has the undesirable side-effect of lowering the frequency resolution of the DFT. A subjective, aural analysis of the guitar transcriptions reflects these results: the predominant pitches and temporal structures of notes occurring in the input guitar recordings are more or less maintained.</ns0:p><ns0:p>Additionally, the guitar recordings in the test set of each fold are transcribed by a digital signal processing polyphonic transcription algorithm developed by <ns0:ref type='bibr' target='#b51'>Zhou et al. (2009)</ns0:ref>, which was evaluated in the 2008 MIREX and received an f -measure of 0.76 on a dataset of 30 synthesized and real piano recordings <ns0:ref type='bibr' target='#b50'>(Zhou and Reiss, 2008)</ns0:ref>. The <ns0:ref type='bibr' target='#b51'>Zhou et al. (2009)</ns0:ref> polyphonic transcription algorithm processes audio signals at a sampling rate of 44100 Hz. A window size of 441 samples and a hop size of 441 samples is set by the authors for optimal transcription performance <ns0:ref type='bibr' target='#b50'>(Zhou and Reiss, 2008)</ns0:ref>.</ns0:p><ns0:p>The transcription algorithm described in this paper resulted in an 11% increase, or a 20% relative increase, in f -measure compared to the transcription algorithm developed by <ns0:ref type='bibr' target='#b51'>Zhou et al. (2009)</ns0:ref> when evaluated on the same dataset, and further, performed these transcriptions in a sixth of the time. This result emphasizes a useful property of neural networks: after training, feeding the features forward through the network is accomplished in a small amount of time. With a precision of 0.70 and a recall of 0.50 when considering octave errors, the <ns0:ref type='bibr' target='#b51'>Zhou et al. (2009)</ns0:ref> transcription algorithm also exhibits a significantly higher precision than recall; in this way, it is similar to the transcription algorithm described in this paper. When disregarding octave errors, the f -measure of the <ns0:ref type='bibr' target='#b51'>Zhou et al. (2009)</ns0:ref> transcription algorithm increases by approximately 6%. Therefore, this signal processing transcription algorithm makes three times the number of note octave errors as the transcription algorithm described in this paper.</ns0:p></ns0:div>
<ns0:div><ns0:head>Multiple Guitar Model Evaluation</ns0:head><ns0:p>This section explores the effect on performance of training and testing on different synthesized guitar models. Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref> depicts different test sets and different training sets. Steel on Steel means that steel-string acoustic guitar audio is tested on a steel-string acoustic guitar-trained DBN. Electric on Nylon is electric guitar audio tested against a DBN trained on a nylon-stringed acoustic guitar model. Electric on Steel and Nylon is an electric guitar tested on a DBN trained against both acoustic with steel string and acoustic with nylon string models. The results shown are averages from 5-fold cross validation splitting on songs: per each fold, songs used for evaluation were not in the training set. In every case except one, the f -measure of the DBN model outperforms the Zhou model; the exception is Steel on Nylon & Electric. The steel samples are generally louder than the electric or nylon acoustic samples, so perhaps that has an effect. Regardless, the f -measure difference between the DBN model and the Zhou model is −0.03 to 0.10 with a mean difference in f -measure of 0.056, and a 95% confidence interval of 0.02 to 0.09 in f -measure difference. A Wilcoxon rank sum test reports a p-value of 0.003, indicating a statistically significant difference in performance between the Zhou f -measure and the DBN f -measure. Mixed networks, those trained on two guitar models, seem to perform as well as models trained on one guitar model, with the exception of the Nylon & Electric network tested against Steel samples.</ns0:p><ns0:p>Surprisingly, the performance of foreign samples on a network was not a large loss for the DBN models. The range of difference in f -measure was between −0.09 and 0.02 with a median of 0.01 and a mean of −0.002, with no significant difference between the classification performance of the differently trained networks, although this could change given more datasets.</ns0:p></ns0:div>
<ns0:div><ns0:head>Number of Network Hidden Layers</ns0:head><ns0:p>This section explores the effect of changing the network architecture in terms of the number of fully connected hidden layers.</ns0:p><ns0:p>Hypothesis: Increasing the number of hidden layers in the DBN will increase pitch estimation f -measure.</ns0:p></ns0:div>
<ns0:div><ns0:p>Rationale: <ns0:ref type='bibr' target='#b20'>Hinton et al. (2006)</ns0:ref> also noted that increasing the number of network layers is guaranteed to improve a lower bound on the log likelihood of the training data. In other words, the worst-case performance of the DBN is theoretically guaranteed to improve as hidden layers are added. Furthermore, taking a step above their shallow counterparts, deep networks provide a closer approximation to the expanse of neurons in the human brain. From a biological perspective, a sensible assumption is that as more layers of neurons are added to the DBN, the model further replicates the auditory perception power of the human brain and, therefore, the f -measure of the pitch estimation algorithm should increase.</ns0:p><ns0:p>Parameters: Table <ns0:ref type='table' target='#tab_5'>4</ns0:ref> describes the pitch estimation algorithm variables for this experiment. This experiment sets the number of hidden layers as the independent variable, while keeping the number of nodes in each layer constant. Values of the controlled variables were selected based on preliminary tests. The f -measure (Equation <ns0:ref type='formula' target='#formula_8'>7</ns0:ref>) over all analysis frames is the dependent variable, as well as the other evaluation metrics described in the Note Pitch Estimation Metrics section. The hypothesis is confirmed if the f -measure increases as the number of hidden layers increases.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results and Discussion:</ns0:head><ns0:p>The hypothesis speculated that increasing the number of hidden layers, and consequently the number of model parameters, would increase frame-level pitch estimation f -measure.</ns0:p><ns0:p>The rationale for this hypothesis was motivated by the mechanics of the human brain, which consists of billions of connected neurons that are responsible for auditory perception and attributing pitch to audio signals. Considering this biological model, it is reasonable to assume that increasing the number of hidden layers in the deep network will yield increasingly better results; however, the results presented in Table <ns0:ref type='table' target='#tab_6'>5</ns0:ref> provide evidence supporting the contrary.</ns0:p></ns0:div>
<ns0:div><ns0:p>A comparison of the f -measures of each DBN trained in this experiment shows no significant differences between the models. Though the f -measures of each model are not significantly different, the trend of decreasing f -measure as the number of network layers increases is still apparent.</ns0:p><ns0:p>There are several potential causes of this result. First, increasing the complexity of the model could have resulted in overfitting the network to the training data. Second, the issue of 'vanishing gradients' <ns0:ref type='bibr' target='#b6'>(Bengio et al., 1994)</ns0:ref> could be occurring in the network fine-tuning training procedure, whereby the training signal passed to lower layers gets lost in the depth of the network. Yet another potential cause of this result is that the pretraining procedure may have found insufficient initial edge weights for networks with increasing numbers of hidden layers.</ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>Considering the results of the experiments outlined in the previous section, there are several benefits of using the developed transcription algorithm. As previously mentioned, the transcriptions generated by the algorithm are more accurate than those of Zhou et al.'s model and contain fewer octave errors. Moreover, the developed polyphonic transcription algorithm can generate transcriptions for full-length guitar recordings in the order of seconds, rather than minutes or hours. Given the speed of transcription, the proposed polyphonic transcription algorithm could be adapted for real-time transcription applications, where live performances of guitar are automatically transcribed. This could be accomplished by buffering the input guitar signal into analysis frames as it is performed. Another benefit of this algorithm is that the trained network weights can be saved to disk such that future transcriptions do not require retraining the model.</ns0:p><ns0:p>As well, the size of the model is relatively small (less than 12MB) and so the network weights can fit on a portable device or microcontroller. Feeding forward audio features through the DBN is a computationally inexpensive task and could also operate on a portable device or microcontroller. Finally, the developed polyphonic transcription algorithm could easily be adapted to accommodate the transcription of other instruments. All that is required is a set of audio files that have accompanying MIDI annotations for supervised training.</ns0:p><ns0:p>On the other hand, there are several detriments of the transcription algorithm. As a consequence of the amount of time required to train the pitch estimation algorithm, it is difficult to search for good combinations of algorithm parameters. Another arguable detriment of the transcription algorithm is that the underlying DBN pitch estimation algorithm is essentially a black box. After training, it is difficult to ascertain how the model reaches a solution. This issue is exacerbated as the depth of the network increases. Finally, it is possible to overfit the model to the training dataset. When running the fine-tuning process for another 30000 epochs, the f -measure of the transcription algorithm began to deteriorate due to overfitting. To mitigate against overfitting, the learning rate could be dampened as the number of training epochs increases. Another solution involves the creation of a validation dataset, such that the fine-tuning process stops when the f -measure of the algorithm begins to decrease on the guitar recordings in the validation dataset. The method used in this paper is early stopping, where the fine-tuning process is limited to a certain number of epochs instead of allowing the training procedure to continue indefinitely.</ns0:p></ns0:div>
<ns0:div><ns0:head>Limitations of Synthesis</ns0:head><ns0:p>One of the most obvious limitations of this study is the dataset. It is quite difficult to get high quality transcriptions and recordings of songs, in terms of time, labour, money and licensing. We relied primarily upon transcriptions of songs and synthesized renderings of these transcriptions. Synthesis has many weaknesses as it is not a true acoustic instrument; it is meant to model an instrument. Thus, when we train on synthesized models, we potentially overfit to a model of a guitar rather than an actual guitar being played.</ns0:p></ns0:div>
<ns0:div><ns0:p>Furthermore, a real performance exhibits more variation in the recording and the timing of events than a synthesized example. The exactness of synthesized examples can pose a problem because one string pluck can be as predictable as another, whereas string plucks on actual guitars will vary in terms of duration, decay, energy, etc. This predictability is the primary weakness of synthesis; the lack of range or randomness in synthesized output should be a concern. Synthesis also makes assumptions in terms of tuning and the accuracy of each pitch produced, which might not reflect real-world tunings. MIDI is also quite limited in terms of its timing and range of notes. A more accurate form of transcription might be needed to improve model and transcription quality.</ns0:p><ns0:p>Future work should involve more real recordings of guitar music to enable better transcription. Furthermore, robotic guitars <ns0:ref type='bibr' target='#b40'>(Singer et al., 2003)</ns0:ref>, the guitar equivalent of the Yamaha Disklavier, might provide a greater range of inputs yet would still suffer from the issues regarding synthesis discussed earlier. Fundamentally, synthesis is a cost trade-off: it enables careful reproduction of transcriptions but it comes with its own costs in terms of realism and applicability.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>When applied to the problem of polyphonic guitar transcription, deep belief networks outperform Zhou et al.'s general-purpose transcription algorithm. Moreover, the developed transcription algorithm is fast: the transcription of a full-length guitar recording occurs in the order of seconds and is therefore suitable for real-time guitar transcription. As well, the algorithm is adaptable for the transcription of other instruments, such as the bass guitar or piano, as long as the pitch range of the instrument is provided and MIDI annotated audio recordings are available for training.</ns0:p><ns0:p>The polyphonic transcription algorithm described in this paper is capable of forming discriminative, latent audio features that are suitable for quickly transcribing guitar recordings. The algorithm workflow consists of audio signal preprocessing, feature extraction, a novel pitch estimation algorithm that uses deep learning and multi-label learning techniques, frame smoothing, and onset quantization. The generated note event transcriptions are digitally encoded as a MIDI file, which is processed further to create a MusicXML file that encodes the corresponding guitar tablature notation.</ns0:p><ns0:p>An evaluation of the frame-level pitch estimates generated by the deep belief network on a dataset of synthesized guitar recordings resulted in an f -measure of 0.77 after frame smoothing. An evaluation of the note events output by the entire transcription algorithm resulted in an f -measure of 0.67, which is 11% higher than the f -measure achieved by Zhou et al.'s single-instrument transcription algorithm <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2009)</ns0:ref> on the same dataset.</ns0:p><ns0:p>The results of this work encourage the use of deep architectures such as belief networks to form alternative representations of industry-standard audio features for the purposes of instrument transcription. Moreover, this work demonstrates the effectiveness of multi-label learning for pitch estimation, specifically when an upper bound on polyphony exists.</ns0:p></ns0:div>
<ns0:div><ns0:head>Future Work</ns0:head><ns0:p>There are several directions of future work to improve the accuracy of transcriptions. First, there are substantial variations in the distribution of pitches across songs, and so the compilation of more training data is expected to improve the accuracy of frame-level pitch estimates made by the DBN. Second, alternate methods could be explored to raise the accuracy of frame-level polyphony estimates, such as training a separate classifier for predicting polyphony on potentially different audio features. Third, an alternate frame-smoothing algorithm that jointly considers the probabilities of other pitch estimates across analysis frames could further increase pitch estimation f -measure relative to the HMM method proposed by <ns0:ref type='bibr' target='#b34'>Poliner and Ellis (2007)</ns0:ref>, which smooths the estimates of one pitch across the audio analysis frames.</ns0:p><ns0:p>Finally, it would be beneficial to investigate whether the latent audio features derived for transcribing one instrument are transferable to the transcription of other instruments.</ns0:p><ns0:p>In the end, the big picture is a guitar tablature transcription algorithm that is capable of improving its transcriptions when provided with more examples. There are many guitarists that share manual tablature transcriptions online that would personally benefit from having an automated system capable of generating transcriptions that are almost correct and can subsequently be corrected manually. There is incentive to manually correct the output transcriptions because this method is potentially faster than performing a transcription from scratch, depending on the quality of the automated transcription and the difficulty of the song. The result is a crowdsourcing model that is capable of producing large ground-truth datasets for polyphonic transcription that can then be used to further improve the polyphonic transcription algorithm.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure1. A system of modern guitar tablature for the song 'Weird Fishes' by Radiohead, complete with common western music notation above.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>This is perhaps the closest work to the algorithm presented here: Sigtia et al. encode note tracking and pitch estimation into the same neural network.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>Humphrey and Bello applied deep convolutional networks, instead of HMMs, to transcribe guitar chords from audio. The Humphrey et al. model attempts to output string and fretboard fingerings directly: instead of outputting a series of pitches, the model estimates which strings are strummed and at which fret positions they are pressed on the guitar fretboard. This model attempts to recover fingering immediately rather than arranging fingering later. The authors report a frame-wise recognition rate of 77.42%.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. A restricted Boltzmann machine with m visible nodes and n hidden nodes. Weights on the undirected edges have been omitted for clarity.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Workflow of the proposed polyphonic transcription algorithm, which converts the recording of a single instrument to a sequence of MIDI note events that are then translated to tablature notation.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Structure of the deep belief network for note pitch estimation. Edge weights are omitted for clarity.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. An overview of the transcription workflow on a four-second segment of a synthesized guitar recording.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>2</ns0:head><ns0:label /><ns0:figDesc>http://musescore.org</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>3</ns0:head><ns0:label /><ns0:figDesc>www.guitar-pro.com</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Distribution of note pitches in the ground-truth dataset.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>4</ns0:head><ns0:label /><ns0:figDesc>https://archive.org/details/DeepLearningIsolatedGuitarTranscriptions</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>Results for different test sets and different training sets. Steel on Steel means that steel-string acoustic guitar audio is tested on a steel-string acoustic guitar-trained DBN. Electric on Nylon is electric guitar audio tested against a DBN trained on a nylon-stringed acoustic guitar model. Electric on Steel and Nylon is an electric guitar tested on a DBN trained against both the steel-string and nylon-string acoustic models. The results shown are averages from 5-fold cross validation splitting on songs: in each fold, the songs used for evaluation were not in the training set. In every case except one, Steel on Nylon & Electric, the f -measure of the DBN model outperforms the Zhou model. The steel samples are generally louder than the electric or nylon acoustic samples, so perhaps that has an effect. Regardless, the f -measure difference between the DBN model and the Zhou model ranges from −0.03 to 0.10, with a mean difference of 0.056 and a 95% confidence interval of 0.02 to 0.09. A Wilcoxon rank sum test reports a p-value of 0.003, indicating a statistically significant difference in performance between the Zhou f -measure and the DBN f -measure. Mixed networks, those trained on two guitar models, seem to perform as well as models trained on one guitar model, with the exception of the Nylon & Electric network tested against Steel samples.</ns0:figDesc></ns0:figure>
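As a rough illustration of the statistical comparison described above, the following Python fragment runs a Wilcoxon rank sum test over the per-configuration f -measures copied from the DBN and Zhou columns of Table 3. Note that the paper's reported statistics (mean difference 0.056, p-value 0.003) were computed over per-song results from the cross-validation folds, so this configuration-level calculation only approximates them.

    import numpy as np
    from scipy.stats import ranksums

    # f-measures copied from Table 3, one entry per train/test configuration.
    dbn_f  = np.array([0.66, 0.69, 0.69, 0.67, 0.68, 0.70, 0.70, 0.70, 0.70, 0.57, 0.70, 0.68])
    zhou_f = np.array([0.56, 0.67, 0.60, 0.59, 0.59, 0.67, 0.67, 0.60, 0.60, 0.60, 0.67, 0.60])

    diff = dbn_f - zhou_f
    print("f-measure differences range:", diff.min(), "to", diff.max())
    print("mean f-measure difference:", diff.mean())

    statistic, p_value = ranksums(dbn_f, zhou_f)   # rank sum test, as named in the text
    print("p-value:", p_value)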
<ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>First, the amount of time required to properly train the model is substantial and varies depending on several parameters such as the audio sampling rate, window size, hop size, and network structure. To make training time reasonable, the computations should be outsourced to a GPU that is capable of performing many calculations in parallel. Using a GPU with fewer CUDA cores, or just a CPU, significantly increases the amount of time required to train the model. After training ceases, either by reaching the set number of training epochs or when the objective function stops fluctuating, it is not guaranteed that the resulting network weights are optimal because the training algorithm may have settled at a local minimum of the objective function.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table</ns0:head><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>DBN TRANSCRIPTION</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='4'>PRECISION RECALL f -MEASURE RUNTIME (S)</ns0:cell></ns0:row><ns0:row><ns0:cell>OCTAVE ERRORS</ns0:cell><ns0:cell>0.81</ns0:cell><ns0:cell>0.60</ns0:cell><ns0:cell>0.67</ns0:cell><ns0:cell>48.33</ns0:cell></ns0:row><ns0:row><ns0:cell>NO OCTAVE ERRORS</ns0:cell><ns0:cell>0.84</ns0:cell><ns0:cell>0.63</ns0:cell><ns0:cell>0.69</ns0:cell><ns0:cell>-</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='2'>Zhou et al. (2009)</ns0:cell><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell cols='4'>PRECISION RECALL f -MEASURE RUNTIME (S)</ns0:cell></ns0:row><ns0:row><ns0:cell>OCTAVE ERRORS</ns0:cell><ns0:cell>0.70</ns0:cell><ns0:cell>0.50</ns0:cell><ns0:cell>0.56</ns0:cell><ns0:cell>293.52</ns0:cell></ns0:row><ns0:row><ns0:cell>NO OCTAVE ERRORS</ns0:cell><ns0:cell>0.78</ns0:cell><ns0:cell>0.56</ns0:cell><ns0:cell>0.62</ns0:cell><ns0:cell>-</ns0:cell></ns0:row></ns0:table><ns0:note>5 http://www.music-ir.org/mirex 13/21 PeerJ Comput. Sci. reviewing PDF | (CS-2015:07:5874:1:2:NEW 4 Jan 2017)Manuscript to be reviewed</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>5-fold cross validation results of the frame-level pitch estimation evaluation metrics: p denotes precision, r denotes recall, and f denotes f -measure. Synthesis models are from Guitar Pro: acoustic guitar with steel, acoustic guitar with nylon, and clean electric guitar.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>STEEL ON STEEL</ns0:cell><ns0:cell>0.80 0.59 0.66</ns0:cell><ns0:cell>0.71</ns0:cell><ns0:cell>0.50</ns0:cell><ns0:cell>0.56</ns0:cell></ns0:row><ns0:row><ns0:cell>NYLON ON NYLON</ns0:cell><ns0:cell>0.77 0.65 0.69</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.61</ns0:cell><ns0:cell>0.67</ns0:cell></ns0:row><ns0:row><ns0:cell>ELECTRIC ON ELECTRIC</ns0:cell><ns0:cell>0.80 0.64 0.69</ns0:cell><ns0:cell>0.74</ns0:cell><ns0:cell>0.54</ns0:cell><ns0:cell>0.60</ns0:cell></ns0:row><ns0:row><ns0:cell>STEEL ON NYLON</ns0:cell><ns0:cell>0.76 0.62 0.67</ns0:cell><ns0:cell>0.72</ns0:cell><ns0:cell>0.53</ns0:cell><ns0:cell>0.59</ns0:cell></ns0:row><ns0:row><ns0:cell>STEEL ON ELECTRIC</ns0:cell><ns0:cell>0.80 0.62 0.68</ns0:cell><ns0:cell>0.72</ns0:cell><ns0:cell>0.53</ns0:cell><ns0:cell>0.59</ns0:cell></ns0:row><ns0:row><ns0:cell>NYLON ON STEEL</ns0:cell><ns0:cell>0.81 0.64 0.70</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.61</ns0:cell><ns0:cell>0.67</ns0:cell></ns0:row><ns0:row><ns0:cell>NYLON ON ELECTRIC</ns0:cell><ns0:cell>0.80 0.63 0.70</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.61</ns0:cell><ns0:cell>0.67</ns0:cell></ns0:row><ns0:row><ns0:cell>ELECTRIC ON STEEL</ns0:cell><ns0:cell>0.81 0.64 0.70</ns0:cell><ns0:cell>0.74</ns0:cell><ns0:cell>0.54</ns0:cell><ns0:cell>0.60</ns0:cell></ns0:row><ns0:row><ns0:cell>ELECTRIC ON NYLON</ns0:cell><ns0:cell>0.78 0.66 0.70</ns0:cell><ns0:cell>0.74</ns0:cell><ns0:cell>0.54</ns0:cell><ns0:cell>0.60</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>STEEL ON NYLON & ELECTRIC 0.64 0.54 0.57</ns0:cell><ns0:cell>0.74</ns0:cell><ns0:cell>0.53</ns0:cell><ns0:cell>0.60</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>NYLON ON STEEL & ELECTRIC 0.80 0.65 0.70</ns0:cell><ns0:cell>0.80</ns0:cell><ns0:cell>0.61</ns0:cell><ns0:cell>0.67</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>ELECTRIC ON STEEL & NYLON 0.77 0.64 0.68</ns0:cell><ns0:cell>0.74</ns0:cell><ns0:cell>0.54</ns0:cell><ns0:cell>0.60</ns0:cell></ns0:row></ns0:table><ns0:note>14/21 PeerJ Comput. Sci. reviewing PDF | (CS-2015:07:5874:1:2:NEW 4 Jan 2017) Manuscript to be reviewed Computer Science p r f Zhou p Zhou r Zhou f</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Independent, controlled, and dependent variables for Hidden Layers Experiment</ns0:figDesc><ns0:table><ns0:row><ns0:cell>VARIABLE</ns0:cell><ns0:cell>VALUE</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Audio sampling rate, f s (Hz) 22050</ns0:cell></ns0:row><ns0:row><ns0:cell>Window size, w (samples)</ns0:cell><ns0:cell>2048</ns0:cell></ns0:row><ns0:row><ns0:cell>Hop size, h (samples)</ns0:cell><ns0:cell>75% of window size</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of hidden layers †</ns0:cell><ns0:cell>2, 3, 4</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of nodes per layer</ns0:cell><ns0:cell>300</ns0:cell></ns0:row><ns0:row><ns0:cell>Guitar Model</ns0:cell><ns0:cell>Acoustic with Steel Strings</ns0:cell></ns0:row><ns0:row><ns0:cell>Features</ns0:cell><ns0:cell>DFT power spectrum</ns0:cell></ns0:row><ns0:row><ns0:cell>f -measure ‡</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>† Denotes the independent variable.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>‡ Denotes the dependent variable.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Effect of number of Hidden Layers on transcriptionThe results invalidate the hypothesis and suggest that a more complex model does not correlate positively with model performance. Rather, the results show that the number of hidden layers is negatively correlated with pitch estimation f -measure. As the number of hidden network layers is increased, the precision and recall of the frame-level note pitch estimates decrease. However, the decrease in f -measure</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Layers</ns0:cell><ns0:cell>p</ns0:cell><ns0:cell>r</ns0:cell><ns0:cell>f</ns0:cell><ns0:cell>ONE</ns0:cell><ns0:cell>HAMMING</ns0:cell><ns0:cell>poly r</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>ERROR</ns0:cell><ns0:cell>LOSS</ns0:cell></ns0:row><ns0:row><ns0:cell>BEFORE HMM</ns0:cell><ns0:cell>2 3 4</ns0:cell><ns0:cell cols='3'>0.675 0.601 0.636 0.650 0.600 0.623 0.643 0.591 0.616</ns0:cell><ns0:cell>0.192 0.200 0.211</ns0:cell><ns0:cell>0.040 0.042 0.043</ns0:cell><ns0:cell>0.463 0.452 0.433</ns0:cell></ns0:row><ns0:row><ns0:cell>AFTER HMM</ns0:cell><ns0:cell>2 3 4</ns0:cell><ns0:cell cols='3'>0.760 0.604 0.673 0.739 0.610 0.669 0.728 0.602 0.659</ns0:cell><ns0:cell>---</ns0:cell><ns0:cell>0.034 0.035 0.036</ns0:cell><ns0:cell>---</ns0:cell></ns0:row></ns0:table><ns0:note>is quite minimal: roughly −1% f -measure for each additional layer. Confirming how minimal these changes are, a Tukey-Kramer honest significance test on the f -measure of songs in the test dataset for 16/21 PeerJ Comput. Sci. reviewing PDF | (CS-2015:07:5874:1:2:NEW 4 Jan 2017)</ns0:note></ns0:figure>
</ns0:body>
" | "Dear Reviewers,
Thank you for your thoughtful reviews.
Our main changes to the document are:
• Include more analysis of alternative architectures.
• Include a comparison of models trained and tested with different guitar
models.
• A discussion of the limitations of synthesis over real examples.
• Included clearer definition of evaluation measures.
The new models took some time to evaluate as they required cross-fold evaluation.
We hope that we have addressed the reviews to your satisfaction.
Sincerely,
Gregory Burlet and Abram Hindle
Fine Grained Remarks:
• [X] Checked Grammar
• [X] Spellchecked
• [X] cited A. Barbancho, A. Klapuri, L. J. Tardon, and I. Barbancho,
“Automatic transcription of guitar chords and fingering from audio,” IEEE
Transactions on Audio, Speech & Language Processing, vol. 20, no. 3,
pp. 915–921, 2012 [?]
• [X] cited E. J. Humphrey and J. P. Bello. “From music audio to chord tablature: Teaching deep convolutional networks to play guitar.” IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP),
2014.
• [X] Cited author = {Sigtia, Siddharth and Benetos, Emmanouil and Dixon,
Simon}, title = {An End-to-end Neural Network for Polyphonic Piano
Music Transcription},
• [X] Discuss threats to validity brought up by reviewers
• [X] Discussion of synthesized data
• [X] Address adding 2 more models
• [X] Discuss Multimodel training
• [X] Discuss weakness of synthesis
• [X] Include discussion of different DNN configurations, b) is covered (deep
and shallow)
• [X] smallers/highest diameter string -> thinnest / thickest string Done.
• [X] Discuss Offset: We clarified that it was the transcription that was used to
determine the threshold. Non-pitched transitions were not addressed as
they were meant to be handled by the MIDI transcription, if at all.
• [X] Define “one error”, “hamming loss” and “polyphony recall” exactly
" | Here is a paper. Please give your review comments after reading it. |
731 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Music transcription involves the transformation of an audio recording to common music notation, colloquially referred to as sheet music. Manually transcribing audio recordings is a difficult and time-consuming process, even for experienced musicians. In response, several algorithms have been proposed to automatically analyze and transcribe the notes sounding in an audio recording; however, these algorithms are often general-purpose, attempting to process any number of instruments producing any number of notes sounding simultaneously. This paper presents a polyphonic transcription algorithm that is constrained to processing the audio output of a single instrument, specifically an acoustic guitar. The transcription system consists of a novel note pitch estimation algorithm that uses a deep belief network and multi-label learning techniques to generate multiple pitch estimates for each analysis frame of the input audio signal. Using a compiled dataset of synthesized guitar recordings for evaluation, the algorithm described in this work results in an 11% increase in the f-measure of note transcriptions relative to Zhou et al.'s transcription algorithm in the literature. This paper demonstrates the effectiveness of deep, multi-label learning for the task of polyphonic transcription.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Music transcription is the process of converting an audio signal into a music score that informs a musician which notes to perform and how they are to be performed. This is accomplished through the analysis of the pitch and rhythmic properties of an acoustical waveform. In the composition or publishing process, manually transcribing each note of a musical passage to create a music score for other musicians is a labour-intensive procedure <ns0:ref type='bibr' target='#b16'>(Hainsworth and Macleod, 2003)</ns0:ref>. Manual transcription is slow and errorprone: even notationally fluent and experienced musicians make mistakes, require multiple passes over the audio signal, and draw upon extensive prior knowledge to make complex decisions about the resulting transcription <ns0:ref type='bibr'>(Benetos et al., 2013)</ns0:ref>.</ns0:p><ns0:p>In response to the time-consuming process of manually transcribing music, researchers in the multidisciplinary field of music information retrieval (MIR) have summoned their knowledge of computing science, electrical engineering, music theory, mathematics, and statistics to develop algorithms that aim to automatically transcribe the notes sounding in an audio recording. Although the automatic transcription of monophonic (one note sounding at a time) music is considered a solved problem <ns0:ref type='bibr' target='#b3'>(Benetos et al., 2012)</ns0:ref>, the automatic transcription of polyphonic (multiple notes sounding simultaneously) music 'falls clearly behind skilled human musicians in accuracy and flexibility' <ns0:ref type='bibr' target='#b25'>(Klapuri, 2004)</ns0:ref>. In an effort to reduce the complexity, the transcription problem can be constrained by limiting the number of notes that sound simultaneously, the genre of music being analyzed, or the number and type of instruments producing sound. A constrained domain allows the transcription system to 'exploit the structure' <ns0:ref type='bibr' target='#b31'>(Martin, 1996</ns0:ref>) by leveraging known priors on observed distributions, and consequently reduce the difficulty of transcription. This parallels systems in the more mature field of speech recognition where practical algorithms are often language, gender, or speaker dependent <ns0:ref type='bibr' target='#b21'>(Huang et al., 2001)</ns0:ref>.</ns0:p><ns0:p>Automatic guitar transcription is the problem of automatic music transcription with the constraint that the audio signal being analyzed is produced by a single electric or acoustic guitar. Though this problem is constrained, a guitar is capable of producing six notes simultaneously, which still offers a multitude of challenges for modern transcription algorithms. The most notable challenge is the estimation of the pitches of notes comprising highly polyphonic chords, occurring when a guitarist strums several strings at once. Yet another challenge presented to guitar transcription algorithms is that a large body of guitarists PeerJ Comput. Sci. reviewing PDF | (CS-2015:07:5874:2:0:NEW 16 Feb 2017)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science publish and share transcriptions in the form of tablature rather than common western music notation.</ns0:p><ns0:p>Therefore, automatic guitar transcription algorithms should also be capable of producing tablature. 
Guitar tablature is a symbolic music notation system with a six-line staff representing the strings on a guitar.</ns0:p><ns0:p>The top line of the system represents the highest pitched (thinnest diameter) string and the bottom line represents the lowest pitched (thickest diameter) string. A number on a line denotes the guitar fret that should be depressed on the respective string. An example of guitar tablature below its corresponding common western music notation is presented in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. A solution to the problem of isolated instrument transcription has substantial commercial interest with applications in musical games, instrument learning software, and music cataloguing. However, these applications seem far out of grasp given that the MIR research community has collectively reached a plateau in the accuracy of automatic music transcription systems <ns0:ref type='bibr' target='#b3'>(Benetos et al., 2012)</ns0:ref>. In a paper addressing this issue, <ns0:ref type='bibr' target='#b3'>Benetos et al. (2012)</ns0:ref> stress the importance of extracting expressive audio features and moving towards context-specific transcription systems. Also addressing this issue, <ns0:ref type='bibr' target='#b22'>Humphrey et al. (2012</ns0:ref><ns0:ref type='bibr' target='#b23'>Humphrey et al. ( , 2013) )</ns0:ref> propose that effort should be focused on audio features generated by deep architectures including deep belief networks, autoencoders, convolutional neural networks and other architectures instead of hand-engineered audio features, due to the success of these methods in other fields such as computer vision <ns0:ref type='bibr' target='#b28'>(Lee et al., 2009)</ns0:ref> and speech recognition <ns0:ref type='bibr' target='#b18'>(Hinton et al., 2012)</ns0:ref>. The aforementioned literature provides motivation for applying deep belief networks to the problem of isolated instrument transcription.</ns0:p><ns0:p>This paper presents a polyphonic transcription system containing a novel pitch estimation algorithm that addresses two arguable shortcomings in modern pattern recognition approaches to pitch estimation:</ns0:p><ns0:p>first, the task of estimating multiple pitches sounding simultaneously is often approached using multiple one-versus-all binary classifiers <ns0:ref type='bibr' target='#b35'>(Poliner and Ellis, 2007;</ns0:ref><ns0:ref type='bibr' target='#b34'>Nam et al., 2011)</ns0:ref> in lieu of estimating the presence of multiple pitches using multinomial regression; second, there exists no standard method to impose constraints on the polyphony of pitch estimates at any given time. In response to these points, the pitch estimation algorithm described in this work uses a deep belief network in conjunction with multi-label learning techniques to produce multiple pitch estimates for each audio analysis frame.</ns0:p><ns0:p>After estimating the pitch content of the audio signal, existing algorithms in the literature are used to track the temporal properties (onset time and duration) of each note event and convert this information to guitar tablature notation.</ns0:p></ns0:div>
<ns0:div><ns0:head>RELATED WORK</ns0:head><ns0:p>The first polyphonic transcription system for duets imposed constraints on the frequency range and timbre of the two input instruments as well as the intervals between simultaneously performed notes <ns0:ref type='bibr' target='#b33'>(Moorer, 1975)</ns0:ref>. 1 This work provoked a significant amount of research on this topic, which still aims to further the accuracy of transcriptions while gradually eliminating domain constraints.</ns0:p><ns0:p>In the infancy of the problem, polyphonic transcription algorithms relied heavily on digital signal processing techniques to uncover the fundamental frequencies present in an input audio waveform. To this end, several different algorithms have been proposed: perceptually motivated models that attempt to model human audition <ns0:ref type='bibr' target='#b26'>(Klapuri, 2005)</ns0:ref>; salience methods, which transform the audio signal to accentuate the underlying fundamental frequencies <ns0:ref type='bibr' target='#b27'>(Klapuri, 2006;</ns0:ref><ns0:ref type='bibr' target='#b51'>Zhou et al., 2009)</ns0:ref>; iterative estimation 1 Timbre refers to several attributes of an audio signal that allows humans to attribute a sound to its source and to differentiate between a trumpet and a piano, for instance. Timbre is often referred to as the 'colour' of a sound.</ns0:p></ns0:div>
<ns0:div><ns0:head>2/21</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2015:07:5874:2:0:NEW 16 Feb 2017)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science methods, which iteratively select a predominant fundamental from the frequency spectrum and then subtract an estimate of its harmonics from the residual spectrum until no fundamental frequency candidates remain <ns0:ref type='bibr' target='#b27'>(Klapuri, 2006)</ns0:ref>; and joint estimation, which holistically selects fundamental frequency candidates that, together, best describe the observed frequency domain of the input audio signal <ns0:ref type='bibr' target='#b48'>(Yeh et al., 2010)</ns0:ref>.</ns0:p><ns0:p>The MIR research community is gradually adopting a machine-learning-centric paradigm for many MIR tasks, including polyphonic transcription. Several innovative applications of machine learning algorithms to the task of polyphonic transcription have been proposed, including hidden Markov models (HMMs) <ns0:ref type='bibr' target='#b38'>(Raphael, 2002)</ns0:ref>, non-negative matrix factorization <ns0:ref type='bibr' target='#b42'>(Smaragdis and Brown, 2003;</ns0:ref><ns0:ref type='bibr' target='#b14'>Dessein et al., 2010)</ns0:ref>, support vector machines <ns0:ref type='bibr' target='#b35'>(Poliner and Ellis, 2007)</ns0:ref>, artificial shallow neural networks <ns0:ref type='bibr' target='#b30'>(Marolt, 2004)</ns0:ref> and recurrent neural networks <ns0:ref type='bibr' target='#b9'>(Boulanger-Lewandowski, 2014)</ns0:ref>. Although each of these algorithms operate differently, the underlying principle involves the formation of a model that seeks to capture the harmonic, and perhaps temporal, structures of notes present in a set of training audio signals.</ns0:p><ns0:p>The trained model then predicts the harmonic and/or temporal structures of notes present in a set of previously unseen audio signals.</ns0:p><ns0:p>Training a machine learning classifier for note pitch estimation involves extracting meaningful features from the audio signal that reflect the harmonic structures of notes and allow discrimination between different pitch classes. The obvious set of features exhibiting this property is the short-time Fourier transform (STFT), which computes the discrete Fourier transform (DFT) on a sliding analysis window over the audio signal. However, somewhat recent advances in the field of deep learning have revealed that artificial neural networks with many layers of neurons can be efficiently trained <ns0:ref type='bibr' target='#b20'>(Hinton et al., 2006)</ns0:ref> and form a hierarchical, latent representation of the input features <ns0:ref type='bibr' target='#b28'>(Lee et al., 2009)</ns0:ref>.</ns0:p><ns0:p>Using a deep belief network (DBN) to learn alternate feature representations of DFT audio features, <ns0:ref type='bibr' target='#b34'>Nam et al. (2011)</ns0:ref> exported these audio features and injected them into 88 binary support vector machine classifiers: one for each possible piano pitch. Each classifier outputs a binary class label denoting whether the pitch is present in a given audio analysis frame. Using the same experimental set up as <ns0:ref type='bibr' target='#b35'>Poliner and Ellis (2007)</ns0:ref>, <ns0:ref type='bibr' target='#b34'>Nam et al. (2011)</ns0:ref> noted that the learned features computed by the DBN yielded significant improvements in the precision and recall of pitch estimates relative to standard DFT audio features. <ns0:ref type='bibr' target='#b40'>Sigtia et al. 
(2016)</ns0:ref> attempted to arrange and join piano notes by trying to generate 'beams', continuous notes, all within a DBN. This makes <ns0:ref type='bibr'>Sigtia et al. work</ns0:ref> Some models for chord and pitch estimation attempt to produce the fingering or chords of a guitar rather than the notes themselves. <ns0:ref type='bibr' target='#b0'>Barbancho et al. (2012)</ns0:ref> applied hidden Markov models (HMM) to preprocessed audio of isolated guitar recordings to extract fretboard fingerings for guitar notes. This HMM fretboard model achieves between 87% and 95% chord recognition accuracy on solo guitar recordings and is meant to output guitar fingering rather than just chords. <ns0:ref type='bibr' target='#b24'>Humphrey and Bello (2014)</ns0:ref> After note pitch estimation it is necessary to perform note tracking, which involves the detection of note onsets and offsets <ns0:ref type='bibr' target='#b4'>(Benetos and Weyde, 2013)</ns0:ref>. Several techniques have been proposed in the literature including a multitude of onset estimation algorithms <ns0:ref type='bibr' target='#b1'>(Bello et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b15'>Dixon, 2006)</ns0:ref>, HMM note-duration modelling algorithms <ns0:ref type='bibr'>(Benetos et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b39'>Ryynänen and Klapuri, 2005)</ns0:ref>, and an HMM frame-smoothing algorithm <ns0:ref type='bibr' target='#b35'>(Poliner and Ellis, 2007)</ns0:ref>. The output of these note tracking algorithms are a sequence of note event estimates, each having a pitch, onset time, and duration. These note events may then be digitally encoded in a symbolic music notation, such as tablature notation, for cataloguing or publishing. Arranging tablature is challenging because the guitar is capable of producing the same pitch in multiple ways. Therefore, a 'good' arrangement is one that is biomechanically easy for the musician to perform, such that transitions between notes do not require excessive hand movement and the performance of chords require minimal stretching of the hand (Heijink <ns0:ref type='bibr' target='#b17'>and Meulenbroek, 2002)</ns0:ref>. Solutions to the problem of tablature arrangement include graph-search algorithms <ns0:ref type='bibr' target='#b36'>(Radicioni and Lombardo, 2005;</ns0:ref><ns0:ref type='bibr' target='#b37'>Radisavljevic and Driessen, 2004;</ns0:ref><ns0:ref type='bibr' target='#b12'>Burlet and Fujinaga, 2013)</ns0:ref>, neural networks <ns0:ref type='bibr' target='#b45'>(Tuohy and Potter, 2006)</ns0:ref>, and genetic algorithms <ns0:ref type='bibr' target='#b44'>(Tuohy and Potter, 2005;</ns0:ref><ns0:ref type='bibr' target='#b10'>Burlet, 2013)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>DEEP BELIEF NETWORKS</ns0:head><ns0:p>Before introducing the developed pitch estimation algorithm, it is worthwhile to review the structure and training procedure of a deep belief network. The intent of deep architectures for machine learning is to form a multi-layered and structured representation of sensory input with which a classifier or regressor can use to make informed predictions about its environment <ns0:ref type='bibr' target='#b47'>(Utgoff and Stracuzzi, 2002)</ns0:ref>.</ns0:p><ns0:p>Recently, <ns0:ref type='bibr' target='#b20'>Hinton et al. (2006)</ns0:ref> proposed a specific formulation of a multi-layered artificial neural network called a deep belief network (DBN), which addresses the training and performance issues arising when many hidden network layers are used. A preliminary unsupervised training algorithm aims to set the network weights to good initial values in a layer-by-layer fashion, followed by a more holistic supervised fine-tuning algorithm that considers the interaction of weights in different layers with respect to the desired network output <ns0:ref type='bibr' target='#b19'>(Hinton, 2007)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Unsupervised Pretraining</ns0:head><ns0:p>In order to pretrain the network weights in an unsupervised fashion, it is necessary to think of the network as a generative model rather than a discriminative model. A generative model aims to form an internal model of a set of observable data vectors, described using latent variables; the latent variables then attempt to recreate the observable data vectors with some degree of accuracy. On the other hand, a discriminative model aims to set the value of its latent variables, typically used for the task of classification or regression, without regard for recreating the input data vectors. A discriminative model does not explicitly care how the observed data was generated, but rather focuses on producing correct values of its latent variables. <ns0:ref type='bibr' target='#b20'>Hinton et al. (2006)</ns0:ref> proposed that a deep neural network be composed of several restricted Boltzmann machines (RBMs) stacked on top of each other, such that the network can be viewed as both a generative model and a discriminative model. An RBM is an undirected bipartite graph with m visible nodes and n hidden nodes, as depicted in Figure <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>. Typically, the domain of the visible and hidden nodes are binary such that v ∈ {0, 1} m and h ∈ {0, 1} n , respectively, such that</ns0:p><ns0:formula xml:id='formula_0'>P(h j = 1|v) = 1 1 + e −W j v and P(v i = 1|h) = 1 1 + e −W T i h ,<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>where W ∈ R n×m is the matrix of weights between the visible and hidden nodes. For simplicity, Equation 1 does not include bias nodes for v and h. Each RBM in the DBN is trained sequentially from the bottom up, such that the hidden nodes of the previous RBM are input to the subsequent RBM as an observable data vector. Manuscript to be reviewed</ns0:p></ns0:div>
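The following NumPy fragment is a minimal, self-contained illustration of Equation 1 and of a single 1-step contrastive divergence update for one RBM. Biases are omitted, as in Equation 1, and the layer sizes and learning rate are arbitrary illustrative choices rather than the values used in the paper's implementation.

    import numpy as np

    rng = np.random.default_rng(0)
    m, n = 6, 4                                   # visible and hidden node counts (illustrative)
    W = 0.01 * rng.standard_normal((n, m))        # weights between visible and hidden nodes
    v0 = rng.integers(0, 2, size=m).astype(float) # one observed binary data vector

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Equation 1: P(h_j = 1 | v) and P(v_i = 1 | h)
    ph0 = sigmoid(W @ v0)
    h0 = (rng.random(n) < ph0).astype(float)      # sample a hidden state

    pv1 = sigmoid(W.T @ h0)                       # reconstruct the visible layer
    v1 = (rng.random(m) < pv1).astype(float)
    ph1 = sigmoid(W @ v1)

    # One step of contrastive divergence: positive minus negative associations.
    learning_rate = 0.1
    W += learning_rate * (np.outer(ph0, v0) - np.outer(ph1, v1))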
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Figure <ns0:ref type='figure'>3</ns0:ref>. Workflow of the proposed polyphonic transcription algorithm, which converts the recording of a single instrument to a sequence of MIDI note events that are then translated to tablature notation.</ns0:p></ns0:div>
<ns0:div><ns0:head>Supervised Fine-tuning</ns0:head><ns0:p>The unsupervised pretraining of the stacked RBMs is a relatively efficient method that sets good initial values for the network weights. Moreover, in the case of a supervised learning task such as classification or regression, the ground-truth labels for each training data vector have not yet been considered. The supervised fine-tuning step of the DBN addresses these issues.</ns0:p><ns0:p>One method of supervised fine-tuning is to add a layer of output nodes to the network for the purposes of (logistic) regression and to perform standard back-propagation as if the DBN was a multi-layered neural network <ns0:ref type='bibr' target='#b5'>(Bengio, 2009)</ns0:ref>. Rather than creating features from scratch, this fine-tuning method is responsible for modifying the latent features in order to adjust the class boundaries <ns0:ref type='bibr' target='#b19'>(Hinton, 2007)</ns0:ref>.</ns0:p><ns0:p>After fine-tuning the network, a feature vector can be fed forward through the network and a result realized at the output layer. In the context of pitch estimation, the feature vector represents the frequency content of an audio analysis frame and the output layer of the network is responsible for classifying the pitches that are present.</ns0:p></ns0:div>
<ns0:div><ns0:head>ISOLATED INSTRUMENT TRANSCRIPTION</ns0:head><ns0:p>The workflow of the proposed polyphonic transcription algorithm is presented in Figure <ns0:ref type='figure'>3</ns0:ref>. The algorithm consists of an audio signal preprocessing step, followed by a novel DBN pitch estimation algorithm. The note-tracking component of the polyphonic transcription algorithm uses a combination of the existing frame-smoothing algorithm developed by <ns0:ref type='bibr' target='#b35'>Poliner and Ellis (2007)</ns0:ref> and the existing spectral flux onset estimation algorithm described by <ns0:ref type='bibr' target='#b15'>Dixon (2006)</ns0:ref> to produce a MIDI file. MIDI is a binary file format composed of tracks holding a sequence of note events, which each have an integer pitch from 0-127, a velocity value indicating the intensity of a note, and a tick number indicating when the note event occurs. This sequence of note events is then translated to guitar tablature notation using the graph-search algorithm developed by <ns0:ref type='bibr' target='#b12'>Burlet and Fujinaga (2013)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>5/21</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2015:07:5874:2:0:NEW 16 Feb 2017)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science </ns0:p></ns0:div>
<ns0:div><ns0:head>Audio Signal Preprocessing</ns0:head><ns0:p>The input audio signal is first preprocessed before feature extraction. If the audio signal is stereo, the channels are averaged to produce a mono audio signal. Then the audio signal is decimated to lower the sampling rate f s by an integer multiple, k ∈ N + . Decimation involves low-pass filtering with a cutoff frequency of f s /2k Hz to mitigate against aliasing, followed by selecting every k th sample from the original signal.</ns0:p></ns0:div>
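A minimal sketch of this preprocessing step, assuming the stereo signal is held in a NumPy array of shape (samples, 2). Here scipy.signal.decimate applies its default anti-aliasing low-pass filter before keeping every k-th sample, which stands in for the filtering described above; the exact filter used in the paper is not reproduced.

    import numpy as np
    from scipy.signal import decimate

    def preprocess(stereo_signal, sample_rate, k=2):
        mono = stereo_signal.mean(axis=1)             # average the two channels
        return decimate(mono, k), sample_rate // k    # anti-alias filter, keep every k-th sample

    # Example: one second of a 440 Hz tone at 44100 Hz reduced to 22050 Hz.
    t = np.linspace(0, 1, 44100, endpoint=False)
    stereo = np.stack([np.sin(2 * np.pi * 440 * t)] * 2, axis=1)
    audio, fs = preprocess(stereo, 44100, k=2)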
<ns0:div><ns0:head>Note Pitch Estimation</ns0:head><ns0:p>The structure of the DBN pitch estimation algorithm is presented in Figure <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>. The algorithm extracts features from an analysis window that slides over the audio waveform. The audio features are subsequently fed forward through the deep network, resulting in an array of posterior probabilities used for pitch and polyphony estimation.</ns0:p><ns0:p>First, features are extracted from the input audio signal. The power spectrum of each audio analysis frame is calculated using a Hamming window of size w samples and a hop size of h samples. The power spectrum is calculated by squaring the magnitude of each frequency component of the DFT. Since the power spectrum is mirrored about the Nyquist frequency when processing an audio signal, half of the spectrum is retained, resulting in m = ⌊w/2⌋ + 1 features. The result is a matrix of normalized audio features Φ ∈ [0, 1] n×m , such that n is the number of analysis frames spanning the input signal.</ns0:p><ns0:p>The DBN consumes these normalized audio features; hence, the input layer consists of m nodes.</ns0:p><ns0:p>There can be any number of stochastic binary hidden layers, each consisting of any number of nodes.</ns0:p><ns0:p>The output layer of the network consists of k + p nodes, where the first k nodes are allocated for pitch estimation and the final p nodes are allocated for polyphony estimation. The network uses a sigmoid activation as the non-linear transfer function.</ns0:p><ns0:p>The feature vectors Φ are fed forward through the network with parameters Θ, resulting in a matrix of probabilities P( Ŷ |Φ, Θ) ∈ [0, 1] k+p that is then split into a matrix of pitch probabilities P( Ŷ (pitch) |Φ, Θ) and polyphony probabilities P( Ŷ (poly) |Φ, Θ). The polyphony of the i th analysis frame is estimated by <ns0:ref type='table' target='#tab_7'>-2015:07:5874:2:0:NEW 16 Feb 2017)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_1'>6/21 PeerJ Comput. Sci. reviewing PDF | (CS</ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>selecting the polyphony class with the highest probability using the equation</ns0:p><ns0:formula xml:id='formula_2'>ρ i = argmax j P( Ŷ (poly) i j |Φ i , Θ) . (<ns0:label>2</ns0:label></ns0:formula><ns0:formula xml:id='formula_3'>)</ns0:formula><ns0:p>Pitch estimation is performed using a multi-label learning technique similar to the MetaLabeler system <ns0:ref type='bibr' target='#b43'>(Tang et al., 2009)</ns0:ref>, which trains a multi-class classifier for label cardinality estimation using the output values of the original label classifier as features. Instead of using the matrix of pitch probabilities as features for a separate polyphony classifier, increased recall was noted by training the polyphony classifier alongside the pitch classifier using the original audio features. Formally, the pitches sounding in the i th analysis frame are estimated by selecting the indices of the ρ i highest pitch probabilities produced by the DBN. With these estimates, the corresponding vector of pitch probabilities is converted to a binary vector Ŷ (pitch) i ∈ {0, 1} k by turning on bits that correspond to the ρ i highest pitch probabilities.</ns0:p><ns0:p>For training and testing the algorithm, a set of pitch and polyphony labels are calculated for each audio analysis frame using an accompanying ground-truth MIDI file. A matrix of pitch annotations</ns0:p><ns0:formula xml:id='formula_4'>Y (pitch) ∈ {0, 1} n×k ,</ns0:formula><ns0:p>where k is the number of considered pitches, is computed such that an enabled bit indicates the presence of a pitch. A matrix of polyphony annotations Y (poly) ∈ {0, 1} n×p , where p is the maximum frame-wise polyphony, is also computed such that a row is a one-hot binary vector in which the enabled bit indicates the polyphony of the frame. These matrices are horizontally concatenated to form the final matrix Y ∈ {0, 1} n× (k+p) of training and testing labels.</ns0:p><ns0:p>The deep belief network is trained using a modified version of the greedy layer-wise algorithm described by <ns0:ref type='bibr' target='#b20'>Hinton et al. (2006)</ns0:ref>. Pretraining is performed by stacking a series of restricted Boltzmann machines and sequentially training each in an unsupervised manner using 1-step contrastive divergence <ns0:ref type='bibr' target='#b5'>(Bengio, 2009)</ns0:ref>. Instead of using the 'up-down' fine-tuning algorithm proposed by <ns0:ref type='bibr' target='#b20'>Hinton et al. (2006)</ns0:ref>, the layer of output nodes are treated as a set of logistic regressors and standard backpropagation is conducted on the network. 
Rather than creating features from scratch, this fine-tuning method is responsible for modifying the latent features in order to adjust the class boundaries <ns0:ref type='bibr' target='#b19'>(Hinton, 2007)</ns0:ref>.</ns0:p><ns0:p>The canonical error function to be minimized for a set of separate pitch and polyphony binary classifications is the cross-entropy error function, which forms the training signal used for backpropagation:</ns0:p><ns0:formula xml:id='formula_5'>E(Θ) = − n ∑ i=1 k+p ∑ j=1 Y i j ln P( Ŷi j |Φ i , Θ) + (1 −Y i j ) ln(1 − P( Ŷi j |Φ i , Θ))<ns0:label>(3)</ns0:label></ns0:formula><ns0:p>The aim of this objective function is to adjust the network weights Θ to pull output node probabilities closer to one for ground-truth label bits that are on and pull probabilities closer to zero for bits that are off.</ns0:p><ns0:p>The described pitch estimation algorithm was implemented using the Theano numerical computation library for Python <ns0:ref type='bibr' target='#b8'>(Bergstra et al., 2010)</ns0:ref>. Computations for network training and testing are parallelized on the graphics processing unit (GPU). Feature extraction and audio signal preprocessing is performed using Marsyas, a software framework for audio signal processing and analysis <ns0:ref type='bibr' target='#b46'>(Tzanetakis and Cook, 2000)</ns0:ref>.</ns0:p></ns0:div>
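As a hedged illustration of the two operations described in this section, the sketch below computes Hamming-windowed power-spectrum features for one analysis frame and then decodes a hypothetical vector of DBN output probabilities into a polyphony estimate (Equation 2) and a binary pitch vector holding the ρ highest pitch probabilities. The probability values and the max-based normalization are made up for illustration, and the first polyphony class is assumed to denote silence.

    import numpy as np

    def frame_power_spectrum(frame):
        # Hamming-windowed DFT power spectrum of one w-sample analysis frame,
        # keeping the floor(w/2)+1 non-mirrored bins; normalization to [0, 1]
        # here simply divides by the maximum, one possible choice.
        windowed = frame * np.hamming(len(frame))
        spectrum = np.abs(np.fft.rfft(windowed)) ** 2
        return spectrum / (spectrum.max() + 1e-12)

    def decode_frame(output_probs, num_pitches):
        # Split the DBN output into pitch and polyphony probabilities, take the
        # argmax over polyphony classes (Equation 2), then keep the top-rho pitches.
        pitch_probs = output_probs[:num_pitches]
        poly_probs = output_probs[num_pitches:]
        rho = int(np.argmax(poly_probs))              # assumes class 0 denotes silence
        pitch_vector = np.zeros(num_pitches, dtype=int)
        if rho > 0:
            pitch_vector[np.argsort(pitch_probs)[-rho:]] = 1
        return pitch_vector, rho

    # Illustrative frame with 5 pitch nodes and 3 polyphony nodes.
    example = np.array([0.9, 0.1, 0.8, 0.2, 0.3, 0.05, 0.15, 0.8])
    pitches, rho = decode_frame(example, num_pitches=5)   # rho = 2, pitches = [1 0 1 0 0]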
<ns0:div><ns0:head>Note Tracking</ns0:head><ns0:p>Although frame-level pitch estimates are essential for transcription, converting these estimates into note events with an onset and duration is not a trivial task. The purpose of note tracking is to process these pitch estimates and determine when a note onsets and offsets.</ns0:p></ns0:div>
<ns0:div><ns0:head>Frame-level Smoothing</ns0:head><ns0:p>The frame-smoothing algorithm developed by <ns0:ref type='bibr' target='#b35'>Poliner and Ellis (2007)</ns0:ref> is used to postprocess the DBN pitch estimates Ŷ (pitch) for an input audio signal. The algorithm allows a frame-level pitch estimate to be contextualized amongst its neighbours instead of solely trusting the independent estimates made by a classification algorithm. estimate and is represented using the pitch probabilities P( Ŷ (pitch) |Φ, Θ). The output of the Viterbi algorithm, which searches for the optimal underlying state sequence, is a revised binary vector of activation estimates for a single pitch. Concatenating the results of each HMM results in a revised matrix of pitch estimates Ŷ (pitch) .</ns0:p></ns0:div>
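A minimal sketch of the idea behind this frame-smoothing step: a two-state (OFF/ON) hidden Markov model per pitch, decoded with the Viterbi algorithm, where the DBN pitch probabilities act as emission likelihoods. The self-transition probability used here is an arbitrary illustrative value, not the quantity estimated from training data in Poliner and Ellis (2007).

    import numpy as np

    def smooth_pitch(probs, p_stay=0.9):
        # Two-state (OFF=0, ON=1) Viterbi smoothing of one pitch across frames.
        # Emission likelihoods come from the DBN pitch probabilities; p_stay is
        # an illustrative self-transition probability.
        n = len(probs)
        trans = np.array([[p_stay, 1 - p_stay],
                          [1 - p_stay, p_stay]])
        emit = np.vstack([1 - np.asarray(probs), np.asarray(probs)])  # rows: OFF, ON
        delta = np.zeros((2, n))
        back = np.zeros((2, n), dtype=int)
        delta[:, 0] = np.log(emit[:, 0] + 1e-12) + np.log(0.5)
        for t in range(1, n):
            for s in (0, 1):
                scores = delta[:, t - 1] + np.log(trans[:, s] + 1e-12)
                back[s, t] = np.argmax(scores)
                delta[s, t] = scores[back[s, t]] + np.log(emit[s, t] + 1e-12)
        states = np.zeros(n, dtype=int)
        states[-1] = np.argmax(delta[:, -1])
        for t in range(n - 2, -1, -1):
            states[t] = back[states[t + 1], t + 1]
        return states   # revised binary activation vector for this pitch

    # An isolated spurious frame is suppressed: this sequence stays OFF throughout.
    print(smooth_pitch([0.1, 0.2, 0.9, 0.15, 0.1]))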
<ns0:div><ns0:head>Onset Quantization</ns0:head><ns0:p>If the HMM frame-smoothing algorithm claims a pitch arises within an analysis frame, it could onset at any time within the window. Arbitrarily setting the note onset time to occur at the beginning of the window often results in 'choppy' sounding transcriptions. In response, the onset detection algorithm that uses spectral flux measurements between analysis frames <ns0:ref type='bibr' target='#b15'>(Dixon, 2006</ns0:ref>) is run at a finer time resolution to pinpoint the exact note onset time. The onset detection algorithm is run on the original, undecimated audio signal with a window size of 2048 samples and a hop size of 512 samples. When writing the note event estimates as a MIDI file, the onset times calculated by this algorithm are used. The offset time is calculated by following the pitch estimate across consecutive analysis frames until it transitions from ON to OFF, at which point the time stamp of the end of this analysis frame is used. Note events spanning less than two audio analysis frames are removed from the transcription to mitigate against spurious notes.</ns0:p><ns0:p>Output of the polyphonic transcription algorithm at each stage-from feature extraction to DBN pitch estimation to frame smoothing and quantization (note tracking)-is displayed in Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref> for a four-second segment of a synthesized guitar recording. The pitch probabilities output by the DBN show that the classifier is quite certain about its estimates; there are few grey areas indicating indecision.</ns0:p></ns0:div>
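A simplified stand-in for the spectral flux onset detector referenced above: it accumulates the half-wave rectified increase in spectral magnitude between consecutive frames and picks local peaks above a threshold. The threshold and peak-picking rule are illustrative assumptions; the paper uses Dixon's (2006) detector with a 2048-sample window and 512-sample hop on the undecimated signal.

    import numpy as np

    def spectral_flux_onsets(signal, window=2048, hop=512, threshold=0.1):
        # Half-wave rectified spectral flux between consecutive Hamming-windowed
        # frames, followed by naive peak picking.
        win = np.hamming(window)
        prev = None
        flux = []
        for start in range(0, len(signal) - window, hop):
            mag = np.abs(np.fft.rfft(signal[start:start + window] * win))
            if prev is not None:
                flux.append(np.sum(np.maximum(mag - prev, 0.0)))
            prev = mag
        flux = np.array(flux)
        if flux.size == 0:
            return []
        flux = flux / (flux.max() + 1e-12)
        peaks = [i for i in range(1, len(flux) - 1)
                 if flux[i] > threshold and flux[i] >= flux[i - 1] and flux[i] >= flux[i + 1]]
        return [(i + 1) * hop for i in peaks]   # approximate sample positions of the onsets

    # Example: a click in the middle of one second of silence yields a single detected onset.
    test = np.zeros(22050)
    test[11025] = 1.0
    print(spectral_flux_onsets(test))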
<ns0:div><ns0:head>Music Notation Arrangement</ns0:head><ns0:p>The MIDI file output by the algorithm thus far contains the note event (pitch, onset, and duration) transcriptions of an audio recording. However, a MIDI file lacks certain information necessary to write sheet music in common western music notation such as time signature, key signature, clef type, and the value (duration) of each note described in divisions of a whole note.</ns0:p><ns0:p>There are several robust opensource programs that derive this missing information from a MIDI file using logic and heuristics in order to generate common western music notation that is digitally encoded in the MusicXML file format. MusicXML is a standardized extensible markup language (XML) definition allowing digital symbolic music notation to be universally encoded and parsed by music applications.</ns0:p><ns0:p>In this work, the command line tools shipped with the opensource application MuseScore are used to convert MIDI to common western music notation encoded in the MusicXML file format. 2 The graph-based guitar tablature arrangement algorithm developed by <ns0:ref type='bibr' target='#b12'>Burlet and Fujinaga (2013)</ns0:ref> is used to append a guitar string and fret combination to each note event encoded in a MusicXML transcription file. The guitar tablature arrangement algorithm operates by using Dijkstra's algorithm to search for the shortest path through a directed weighted graph, in which the vertices represent candidate string and fret combinations for a note or chord, as displayed in Figure <ns0:ref type='figure'>6</ns0:ref>.</ns0:p><ns0:p>The edge weights between nodes in the graph indicate the biomechanical difficulty of transitioning between fretting-hand positions. Three biomechanical complexity factors are aggregated to form each edge weight: the fret-wise distance required to transition between notes or chords, the fret-wise finger span required to perform chords, and a penalty of one if the fretting hand surpasses the seventh fret. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Figure <ns0:ref type='figure'>6</ns0:ref>. A directed acyclic graph of string and fret candidates for a note and chord followed by two more notes. Weights have been omitted for clarity. The notation for each node is (string number, fret number).</ns0:p><ns0:p>The value of this penalty and fret threshold number were determined through subjective analysis of the resulting tablature arrangements. In the event that a note is followed by a chord, the fret-wise distance is calculated by the expression</ns0:p><ns0:formula xml:id='formula_6'>f − max(g) − min(g) 2 ,<ns0:label>(4)</ns0:label></ns0:formula><ns0:p>such that f ∈ N is the fret number used to perform the note and g is a vector of fret numbers used to perform each note in the chord. For more detailed information regarding the formulation of this graph, please refer to the conference proceeding of <ns0:ref type='bibr' target='#b12'>Burlet and Fujinaga (2013)</ns0:ref> or thesis of <ns0:ref type='bibr' target='#b10'>Burlet (2013)</ns0:ref>.</ns0:p></ns0:div>
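A small sketch of how the three biomechanical complexity factors could be combined into a single edge weight, including the note-to-chord fret distance of Equation 4. The handling of open strings and the plain summation of the three factors are assumptions made for illustration; Burlet and Fujinaga (2013) should be consulted for the exact weighting.

    def edge_weight(prev_frets, next_frets, fret_threshold=7):
        # prev_frets / next_frets: fret numbers of the notes at two consecutive graph
        # vertices; open strings (fret 0) are ignored in this simplified reading.
        prev_f = [f for f in prev_frets if f > 0] or [0]
        next_f = [f for f in next_frets if f > 0] or [0]

        # Fret-wise movement between hand positions; for a single note followed by a
        # chord this reduces to Equation 4.
        movement = abs((max(prev_f) + min(prev_f)) / 2.0 - (max(next_f) + min(next_f)) / 2.0)

        # Fret-wise finger span required to perform the next chord.
        span = max(next_f) - min(next_f)

        # Penalty of one if the fretting hand surpasses the seventh fret.
        penalty = 1 if max(next_f) > fret_threshold else 0

        return movement + span + penalty

    # A note at fret 5 followed by a chord spanning frets 7-9 high on the neck.
    print(edge_weight([5], [7, 9, 8]))   # 3.0 movement + 2 span + 1 penalty = 6.0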
<ns0:div><ns0:head>Note Pitch Estimation Metrics</ns0:head><ns0:p>Given the pitch estimates output by the DBN pitch estimation algorithm for n audio analysis frames, Ŷ (pitch) ∈ {0, 1} n×k , and the corresponding ground-truth pitch label matrix for the corresponding audio analysis frames, Y (pitch) ∈ {0, 1} n×k , the following metrics can be computed:</ns0:p><ns0:formula xml:id='formula_7'>Precision: p = 1( Ŷ (pitch) & Y (pitch) )1 1 Ŷ (pitch) 1 ,<ns0:label>(5)</ns0:label></ns0:formula><ns0:p>such that the logical operator & denotes the element-wise AND of two binary matrices and 1 indicates a vector of ones. In other words, this equation calculates the number of correct pitch estimates divided by the number of pitches the algorithm predicts are present across the audio analysis frames.</ns0:p><ns0:p>Recall:</ns0:p><ns0:formula xml:id='formula_8'>r = 1( Ŷ (pitch) & Y (pitch) )1 1Y (pitch) 1 ,<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>such that the logical operator & denotes the element-wise AND of two binary matrices and 1 indicates a vector of ones. In other words, this equation calculates the number of correct pitch estimates divided by the number of ground-truth pitches that are active across the audio analysis frames.</ns0:p><ns0:p>f -measure:</ns0:p><ns0:formula xml:id='formula_9'>f = 2pr p + r ,<ns0:label>(7)</ns0:label></ns0:formula><ns0:p>such that p and r is the precision and recall calculated using Equation 5 and Equation <ns0:ref type='formula' target='#formula_8'>6</ns0:ref>, respectively. The f -measure calculated in Equation <ns0:ref type='formula' target='#formula_9'>7</ns0:ref>is the balanced f -score, which is the harmonic mean of precision and recall. In other words, precision and recall are weighted evenly.</ns0:p></ns0:div>
<ns0:div><ns0:head>9/21</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2015:07:5874:2:0:NEW 16 Feb 2017)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Polyphony recall:</ns0:p><ns0:formula xml:id='formula_10'>r poly = ∑ n i=1 1{( Ŷ (pitch) 1) i = (Y (pitch) 1) i } n ,<ns0:label>(8)</ns0:label></ns0:formula><ns0:p>such that 1{•} is an indicator function that returns 1 if the predicate is true, and n is the number of audio analysis frames being evaluated. In other words, this equation calculates the number of correct polyphony estimates across all audio analysis frames divided by the number of analysis frames.</ns0:p><ns0:p>One error: given the matrix of pitch probabilities P( Ŷ (pitch) |Φ, Θ) ∈ [0, 1] n×k output by the DBN with model parameters Θ when processing the input audio analysis frame features Φ, the predominant pitch of the i th audio analysis frame is calculated using the equation</ns0:p><ns0:formula xml:id='formula_11'>j = argmax j P( Ŷ (pitch) i j |Φ i , Θ) ,<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>which can then be used to calculate the one error:</ns0:p><ns0:formula xml:id='formula_12'>one err = ∑ n i=1 1{Y (pitch) i j = 1} n ,<ns0:label>(10)</ns0:label></ns0:formula><ns0:p>such that 1{•} is an indicator function that maps to 1 if the predicate is true. The one error calculates the fraction of analysis frames in which the top-ranked label is not present in the ground-truth label set.</ns0:p><ns0:p>In the context of pitch estimation, this metric provides insight into the number of audio analysis frames where the predominant pitch-often referred to as the melody-is estimated incorrectly.</ns0:p><ns0:p>Hamming loss:</ns0:p><ns0:formula xml:id='formula_13'>hamming loss = 1( Ŷ (pitch) ⊕ Y (pitch) )1 nk ,<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>such that n is the number of audio analysis frames, k is the cardinality of the label set for each analysis frame, and the boolean operator ⊕ denotes the element-wise XOR of two binary matrices. The hamming loss provides insight into the number of false positive and false negative pitch estimates across the audio analysis frames.</ns0:p></ns0:div>
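The frame-level metrics defined above can be computed directly from the binary estimate and ground-truth matrices; the sketch below is a NumPy transcription of Equations 5-8, 10, and 11, applied to tiny made-up matrices purely for illustration.

    import numpy as np

    def frame_metrics(est, truth, probs):
        # est, truth: binary (frames x pitches) matrices; probs: the DBN pitch
        # probabilities, used only for the one error metric.
        est = est.astype(bool)
        truth = truth.astype(bool)
        tp = np.logical_and(est, truth).sum()
        precision = tp / max(est.sum(), 1)                               # Equation 5
        recall = tp / max(truth.sum(), 1)                                # Equation 6
        f = 2 * precision * recall / max(precision + recall, 1e-12)      # Equation 7
        poly_recall = np.mean(est.sum(axis=1) == truth.sum(axis=1))      # Equation 8
        top = probs.argmax(axis=1)                                       # predominant pitch per frame
        one_error = np.mean(~truth[np.arange(len(truth)), top])          # Equation 10
        hamming = np.logical_xor(est, truth).mean()                      # Equation 11
        return precision, recall, f, poly_recall, one_error, hamming

    # Tiny example with 2 frames and 4 pitches.
    truth = np.array([[1, 0, 1, 0], [0, 1, 0, 0]])
    est   = np.array([[1, 0, 0, 0], [0, 1, 0, 1]])
    probs = np.array([[0.9, 0.1, 0.4, 0.2], [0.1, 0.3, 0.2, 0.8]])
    print(frame_metrics(est, truth, probs))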
<ns0:div><ns0:head>Polyphonic Transcription Metrics</ns0:head><ns0:p>Several information retrieval metrics are also used to evaluate the note event estimates produced by the polyphonic transcription algorithm described in the previous chapter, which consists of a note pitch estimation algorithm followed by a note temporal estimation algorithm. Given an input audio recording, the polyphonic transcription algorithm outputs a set of note event estimates in the form of a MIDI file. A corresponding ground-truth MIDI file contains the set of true note events for the audio recording. Each note event contains three pieces of information: pitch, onset time, and offset time.</ns0:p><ns0:p>The music information retrieval evaluation exchange (MIREX), an annual evaluation of MIR algorithms, has a multiple fundamental frequency estimation and note tracking category in which polyphonic transcription algorithms are evaluated. The MIREX metrics used to evaluate polyphonic transcription algorithms are:</ns0:p><ns0:formula xml:id='formula_14'>Precision: p = | N ∩ N| | N| , (<ns0:label>12</ns0:label></ns0:formula><ns0:formula xml:id='formula_15'>)</ns0:formula><ns0:p>such that N is the set of estimated note events and N is the set of ground-truth note events.</ns0:p><ns0:p>Recall:</ns0:p><ns0:formula xml:id='formula_16'>r = | N ∩ N| |N| ,<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>such that N is the set of estimated note events and N is the set of ground-truth note events. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>f -measure:</ns0:p><ns0:formula xml:id='formula_17'>f = 2pr p + r ,<ns0:label>(14)</ns0:label></ns0:formula><ns0:p>such that p and r are calculated using Equation <ns0:ref type='formula' target='#formula_14'>12</ns0:ref>and Equation <ns0:ref type='formula' target='#formula_16'>13</ns0:ref>, respectively.</ns0:p><ns0:p>The criteria for a note event being correct, as compared to a ground-truth note event, are as follows:</ns0:p><ns0:p>• The pitch name and octave number of the note event estimate and ground-truth note event must be equivalent.</ns0:p><ns0:p>• The note event estimate's onset time is within ±250ms of the ground-truth note event's onset time.</ns0:p><ns0:p>• Only one ground-truth note event can be associated with each note event estimate.</ns0:p><ns0:p>The offset time of a note event is not considered in the evaluation process because offset times exhibit less perceptual importance than note onset times <ns0:ref type='bibr' target='#b13'>Costantini et al. (2009)</ns0:ref>.</ns0:p><ns0:p>Each of these evaluation metrics can also be calculated under the condition that octave errors are ignored. Octave errors occur when the algorithm predicts the correct pitch name but incorrectly predicts the octave number. Octave errors are prevalent in digital signal processing fundamental frequency estimation algorithms because high-energy harmonics can be misconstrued as a fundamental frequency, resulting in an incorrect estimate of the octave number <ns0:ref type='bibr' target='#b29'>Maher and Beauchamp (1994)</ns0:ref>. Reporting the evaluation metrics described in this section under the condition that octave errors are ignored will reveal whether machine learning transcription algorithms also succumb to a high number of octave errors.</ns0:p></ns0:div>
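The sketch below illustrates how note events could be scored under the criteria listed above: a one-to-one matching that requires an identical pitch and an onset within ±250 ms, with offsets ignored. The greedy matching order is an assumption made for illustration; the MIREX evaluation code may resolve ties differently.

    def note_event_scores(estimates, ground_truth, tolerance=0.25):
        # estimates and ground_truth are lists of (midi_pitch, onset_seconds) pairs.
        unmatched = list(ground_truth)
        correct = 0
        for pitch, onset in estimates:
            for ref in unmatched:
                if ref[0] == pitch and abs(ref[1] - onset) <= tolerance:
                    unmatched.remove(ref)   # each ground-truth note matches at most once
                    correct += 1
                    break
        precision = correct / max(len(estimates), 1)
        recall = correct / max(len(ground_truth), 1)
        f = 2 * precision * recall / max(precision + recall, 1e-12)
        return precision, recall, f

    # Example: one of the two estimated notes matches a ground-truth note.
    est = [(40, 0.10), (45, 1.02)]
    ref = [(40, 0.20), (47, 1.00)]
    print(note_event_scores(est, ref))   # precision 0.5, recall 0.5, f 0.5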
<ns0:div><ns0:head>EXPERIMENTAL METHOD AND EVALUATION</ns0:head><ns0:p>The polyphonic transcription algorithm described in this paper is evaluated on a new dataset of synthesized guitar recordings. Before processing these guitar recordings, the number of pitches k and maximum polyphony p of the instrument must first be calculated in order to construct the DBN. Knowing that the input instrument is a guitar with six strings, the pitch estimation algorithm considers the k = 51 pitches from C2-D6, which spans the lowest note capable of being produced by a guitar in Drop C tuning to the highest note capable of being produced by a 22-fret guitar in Standard tuning. Though a guitar with six strings is only capable of producing six notes simultaneously, a chord transition may occur within a frame and so the maximum polyphony may increase above this bound. This is a technical side effect of a sliding-window analysis of the audio signal. Therefore, the maximum frame-wise polyphony is calculated from the training dataset using the equation</ns0:p><ns0:formula xml:id='formula_18'>p = max i Y (pitch) 1 i + 1, (<ns0:label>15</ns0:label></ns0:formula><ns0:formula xml:id='formula_19'>)</ns0:formula><ns0:p>where 1 is a vector of ones. The addition of one to the maximum polyphony is to accommodate silence where no pitches sound in an analysis frame.</ns0:p><ns0:p>The experiments outlined in this section will evaluate the accuracy of pitch estimates output by the DBN across each audio analysis frame as well as the accuracy of note events output by the entire polyphonic transcription algorithm. A formal evaluation of the guitar tablature arrangement algorithm used in this work has already been conducted <ns0:ref type='bibr' target='#b12'>(Burlet and Fujinaga, 2013)</ns0:ref>.</ns0:p></ns0:div>
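Equation 15 amounts to one line of NumPy over the training labels; the tiny label matrix below is fabricated solely to illustrate the calculation, with the extra class accommodating frames of silence.

    import numpy as np

    # Y_pitch: binary (frames x pitches) training label matrix (made-up example).
    Y_pitch = np.array([[0, 0, 0, 0],     # a silent frame
                        [1, 0, 1, 0],     # two pitches sound
                        [1, 1, 1, 0]])    # three pitches sound, e.g. during a chord transition
    p = int(Y_pitch.sum(axis=1).max()) + 1   # Equation 15: maximum frame-wise polyphony plus one
    print(p)                                 # 4 polyphony classes: 0 through 3 simultaneous pitches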
<ns0:div><ns0:head>Ground-truth Dataset</ns0:head><ns0:p>Ideally, the note pitch estimation algorithm proposed in this work should be trained and tested using recordings of acoustic or electric guitars that are subsequently hand-annotated with the note events being performed. In practice, however, it would be expensive to fund the compilation of such a dataset and there is a risk of annotation error. Unlike polyphonic piano transcription datasets that are often created using a mechanically controlled piano, such as a Yamaha Disklavier, to generate acoustic recordings that are time aligned with note events in a MIDI file, mechanized guitars are not widely available. Therefore, the most feasible course of action for compiling a polyphonic guitar transcription dataset is to synthesize a set of ground-truth note events using an acoustic model of a guitar.</ns0:p><ns0:p>Using the methodology proposed by <ns0:ref type='bibr' target='#b12'>Burlet and Fujinaga (2013)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:p>Computer Science created by harvesting the abundance of crowdsourced guitar transcriptions uploaded to www.ultima te-guitar.com as tablature files that are manipulated by the Guitar Pro desktop application. 3 The transcriptions in the ground-truth dataset were selected by searching for the keyword 'acoustic', filtering results to those that have been rated by the community as 5 out of 5 stars, and selecting those that received the most numbers of ratings and views. The dataset consists of songs by artists ranging from</ns0:p><ns0:p>The Beatles, Eric Clapton, and Neil Young to Led Zeppelin, Metallica, and Radiohead.</ns0:p><ns0:p>Each Guitar Pro file was manually preprocessed to remove extraneous instrument tracks other than guitar, remove repeated bars, trim recurring musical passages, and remove note ornamentations such as dead notes, palm muting, harmonics, pitch bends, and vibrato. The guitar models for note synthesis were set to the Martin & Co. acoustic guitar with steel strings, nylon strings, and an electric guitar model. Finally, each Guitar Pro file is synthesized as a WAV file and also exported as a MIDI file, which captures the note events occurring in the guitar track. The MIDI files in the ground-truth dataset are publicly available on archive.org. 4 The amount of time required to manually preprocess a Guitar Pro tablature transcription and convert it into the necessary data format ranges from 30 minutes to 2.5 hours, depending on the complexity of the musical passage.</ns0:p><ns0:p>In total the dataset consists of approximately 104 minutes of audio, an average tempo of 101 beats per minute, 44436 notes, and an average polyphony of 2.34. The average polyphony is calculated by dividing the number of note events by the number of chords plus the number of individual notes. The distribution of note pitches in the dataset is displayed in Figure <ns0:ref type='figure' target='#fig_8'>7</ns0:ref>. </ns0:p></ns0:div>
<ns0:div><ns0:head>Algorithm Parameter Selection</ns0:head><ns0:p>Before training and evaluating the described polyphonic transcription algorithm on the ground-truth dataset, several preliminary experiments were conducted to select reasonable parameters for the algorithm. Each preliminary experiment involved the following variables: audio sampling rate (Hz), window size (samples), sliding window hop size (samples), number of network hidden layers, number of nodes per hidden layer, and input features: the power spectrum. Each preliminary experiment selected one independent variable, while the other variables remained controlled. The dependent variable was the standard information retrieval metric of f -measure, which gauges the accuracy of the pitch estimates produced by the DBN over all audio analysis frames. For these preliminary experiments, the groundtruth dataset was partitioned into two sets, such that roughly 80% of the guitar recordings are allocated for training and 20% are allocated for model validation. The split is done at the song level, so a song exists solely in the training set or the validation set. Only the steel string guitar model was used in these preliminary experiments.</ns0:p><ns0:p>The results of the preliminary experiments with the proposed transcription system revealed that a sampling rate of 22050 Hz, a window size of 1024 samples, a hop size of 768 samples, a network structure of 400 nodes in the first hidden layer followed by 300 nodes in the penultimate layer, and power spectrum input features yielded optimal results. For network pretraining, 400 epochs were conducted with a 3 www.guitar-pro.com The experiments conducted in this paper were run on a machine with an Intel R Core TM i7 3.07 GHz quad core CPU, 24 GB of RAM, and an Nvidia GeForce GTX 970 GPU with 1664 CUDA cores. For more details regarding these preliminary experiments, consult the thesis of <ns0:ref type='bibr' target='#b11'>Burlet (2015)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Frame-level Pitch Estimation Evaluation</ns0:head><ns0:p>5-fold cross validation is used to split the songs in the compiled ground-truth dataset into 5 sets of training and testing partitions. For each fold, the transcription algorithm is trained using the parameters detailed in the previous section. After training, the frame-level pitch estimates computed by the DBN are evaluated for each fold using the following standard multi-label learning metrics <ns0:ref type='bibr' target='#b49'>(Zhang and Zhou, 2014)</ns0:ref>: precision (p), recall (r), f -measure ( f ), one error, and hamming loss. The precision calculates the number of correct pitch estimates divided by the number of pitches the algorithm predicts are present across the audio analysis frames. The recall calculates the number of correct pitch estimates divided by the number of ground-truth pitches that are active across the audio analysis frames. The f -measure refers to the balanced f -score, which is the harmonic mean of precision and recall. The one error provides insight into the number of audio analysis frames where the predominant pitch is estimated incorrectly.</ns0:p><ns0:p>The hamming loss provides insight into the number of false positive and false negative pitch estimates across the audio analysis frames. In addition, the frame-level polyphony recall (r poly ) is calculated to evaluate the accuracy of polyphony estimates made by the DBN.</ns0:p><ns0:p>Using the ground-truth dataset, pretraining the DBN took an average of 172 minutes while finetuning took an average of 246 minutes across each fold. After training, the network weights are saved so that they can be reused for future transcriptions. The DBN took an average of 0.26 seconds across each fold to yield pitch estimates for the songs in the test partitions. The results of the DBN pitch estimation algorithm are averaged across the 5 folds and presented in Table <ns0:ref type='table'>1</ns0:ref> on multiple Guitar models.</ns0:p><ns0:p>After HMM frame smoothing the results substantially improve with a precision of 0.81, a recall of 0.74, and an f -measure of 0.77. Figure <ns0:ref type='figure' target='#fig_5'>5</ns0:ref> provides visual evidence of the positive impact of HMM frame smoothing on the frame-level DBN pitch estimates, showing the removal of several spurious note events.</ns0:p><ns0:p>The results reveal that the 55% polyphony estimation accuracy likely hinders the frame-level fmeasure of the pitch estimation algorithm. Investigating further, when using the ground-truth polyphony for each audio analysis frame, an f -measure of 0.76 is noted before HMM smoothing. The 5% increase in f -measure reveals that the polyphony estimates are close to their ground-truth value. With respect to the one error, the results reveal that the DBN's belief of the predominant pitch-the label with the highest probability-is incorrect in only 13% of the analysis frames.</ns0:p></ns0:div>
<ns0:div><ns0:head>Note Event Evaluation</ns0:head><ns0:p>Although evaluating the pitch estimates made by the algorithm for each audio analysis frame provides vital insight into the performance of the algorithm, we can continue with an evaluation of the final note events output by the algorithm. After HMM smoothing the frame-level pitch estimates computed by the DBN, onset quantization is performed and a MIDI file, which encodes the pitch, onset time, and duration of note events, is written. An evaluation procedure similar to the MIREX note tracking task, Relative to a ground-truth note event, an estimate is considered correct if its onset time is within 250ms and its pitch is equivalent. The accuracy of note offset times are not considered because offset times exhibit less perceptual importance than note onset times <ns0:ref type='bibr' target='#b13'>(Costantini et al., 2009)</ns0:ref>. A ground-truth note event can only be associated with a single note event estimate Given the long decay of a guitar note we relied on the MIDI transcription as ground-truth and the threshold to determine when a note had ended.</ns0:p><ns0:p>Non-pitched transitions between notes and chords were not explicitly addressed, and would depend on the song and the transcription quality.</ns0:p><ns0:p>These metrics of precision, recall, and f -measure are calculated on the test partition within each of the 5 folds used for cross validation. Table <ns0:ref type='table' target='#tab_4'>2</ns0:ref> presents the results of the polyphonic transcription algorithm averaged across each fold. The result of this evaluation is an average f -measure of 0.67 when considering note octave errors and an average f -measure of 0.69 when disregarding note octave errors. Octave errors occur when the algorithm predicts the correct note pitch name but incorrectly predicts the note octave number. An approximately 2% increase in f -measure when disregarding octave errors provides evidence that the transcription algorithm does not often mislabel the octave number of note events, which is often a problem with digital signal processing transcription algorithms <ns0:ref type='bibr' target='#b29'>(Maher and Beauchamp, 1994)</ns0:ref>. Note</ns0:p><ns0:p>that the frame-level pitch estimation f -measure of 0.77, presented in Table <ns0:ref type='table'>1</ns0:ref>, does not translate to an equivalently high f -measure for note events because onset time is considered in the evaluation criteria as well as pitch.</ns0:p><ns0:p>Another interesting property of the transcription algorithm is its conservativeness: the precision of the note events transcribed by the algorithm is 0.81 while the recall is 0.60, meaning that the algorithm favours false negatives over false positives. In other words, the transcription algorithm includes a note event in the final transcription only if it is quite certain of the note's correctness, even if this hinders the recall of the algorithm. Another cause of the high precision and low recall is that when several guitar strums occur quickly in succession, the implemented transcription algorithm often transcribes only the first chord and prescribes it a long duration. This is likely a result of the temporally 'coarse' window size of 1024 samples or a product of the HMM frame-smoothing algorithm, which may extend the length of notes causing them to 'bleed' into each other. 
A remedy for this issue is to lower the window size to increase temporal resolution; however, this has an undesirable side-effect of lowering the frequency resolution of the DFT which is undesirable. A subjective, aural analysis of the guitar transcriptions reflects these results: the predominant pitches and temporal structures of notes occurring in the input guitar recordings are more or less maintained.</ns0:p><ns0:p>Additionally, the guitar recordings in the test set of each fold are transcribed by a digital signal processing polyphonic transcription algorithm developed by <ns0:ref type='bibr' target='#b51'>Zhou et al. (2009)</ns0:ref>, which was evaluated in the 2008 MIREX and received an f -measure of 0.76 on a dataset of 30 synthesized and real piano record- Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>ings <ns0:ref type='bibr' target='#b50'>(Zhou and Reiss, 2008)</ns0:ref>. The <ns0:ref type='bibr' target='#b51'>Zhou et al. (2009)</ns0:ref> polyphonic transcription algorithm processes audio signals at a sampling rate of 44100 Hz. A window size of 441 samples and a hop size of 441 samples is set by the authors for optimal transcription performance <ns0:ref type='bibr' target='#b50'>(Zhou and Reiss, 2008)</ns0:ref>.</ns0:p><ns0:p>The transcription algorithm described in this paper resulted in an 11% increase, or a 20% relative increase, in f -measure compared to the transcription algorithm developed by <ns0:ref type='bibr' target='#b51'>Zhou et al. (2009)</ns0:ref> when evaluated on the same dataset, and further, performed these transcriptions in a sixth of the time. This result emphasizes a useful property of neural networks: after training, feeding the features forward through the network is accomplished in a small amount of time.</ns0:p><ns0:p>With a precision of 0.70 and a recall of 0.50 when considering octave errors, the <ns0:ref type='bibr' target='#b51'>Zhou et al. (2009)</ns0:ref> transcription algorithm also exhibits a significantly higher precision than recall; in this way, it is similar to the transcription algorithm described in this paper. When disregarding octave errors, the f -measure of the <ns0:ref type='bibr' target='#b51'>Zhou et al. (2009)</ns0:ref> transcription algorithm increases by approximately 6%. Therefore, this signal processing transcription algorithm makes three times the number of note octave errors as the transcription algorithm described in this paper.</ns0:p></ns0:div>
<ns0:div><ns0:head>Multiple Guitar Model Evaluation</ns0:head><ns0:p>This section explores the effect on performance of training and testing on different synthesized guitar models. Table <ns0:ref type='table' target='#tab_5'>3 depicts</ns0:ref> Surprisingly the performance of foreign samples on a network was not a large loss for the DBN models. The range of difference in f -measure was between −0.09 and 0.02 with a median of 0.01 and a mean of −0.002 with no significant difference in performance between the classification performance of the differently trained networks-although this could change given even more datasets.</ns0:p></ns0:div>
<ns0:div><ns0:head>Number of Network Hidden Layers</ns0:head><ns0:p>This section explores the effect of changing the network architecture in terms of the number of fully connected hidden layers.</ns0:p><ns0:p>Hypothesis: Increasing the number of hidden layers in the DBN will increase pitch estimation fmeasure.</ns0:p><ns0:p>Rationale: <ns0:ref type='bibr' target='#b20'>Hinton et al. (2006)</ns0:ref> also noted that increasing the number of network layers is guaranteed to improve a lower bound on the log likelihood of the training data. In other words, the worst-case performance of the DBN is theoretically guaranteed to improve as hidden layers are added. Furthermore, taking a step above their shallow counterparts, deep networks afford greater representational power to better model complex acoustic signals therefore, the f -measure of the pitch estimation algorithm should increase.</ns0:p><ns0:p>Parameters: <ns0:ref type='formula' target='#formula_9'>7</ns0:ref>) over all analysis frames is the dependent variable, as well as the other evaluation metrics described in Section . The hypothesis is confirmed if the f -measure increases as the number of hidden layers increases. </ns0:p></ns0:div>
<ns0:div><ns0:head>Results and Discussion:</ns0:head><ns0:p>The hypothesis speculated that increasing the number of hidden layers, and consequently the number of model parameters, would increase frame-level pitch estimation f -measure.</ns0:p><ns0:p>Given prior work in deep neural networks, the depth of the network is often viewed as providing an advantage to the model over a shallow neural network. Thus it is reasonable to assume that increasing the number of hidden layers in the deep network will yield increasingly better results; however, the results presented in Table <ns0:ref type='table' target='#tab_7'>5</ns0:ref> provide evidence supporting the contrary.</ns0:p><ns0:p>The results invalidate the hypothesis and suggest that a more complex model, with more layers and thus more parameters, does not correlate positively with model performance. Rather, the results show that the number of hidden layers is negatively correlated with pitch estimation f -measure. As the number of hidden network layers is increased, the precision and recall of the frame-level note pitch estimates decrease. However, the decrease in f -measure is quite minimal: roughly −1% f -measure for each additional layer. Confirming how minimal these changes are, a Tukey-Kramer honest significance test on the f -measure of songs in the test dataset for each DBN trained in this experiment shows no significant differences between the models. Though the f -measures of each model are not significantly different, the trend of decreasing f -measure as the number of network layers increases is still apparent. </ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>Considering the results of the experiments outlined in the previous section, there are several benefits of using the developed transcription algorithm. As previously mentioned, the accuracy of transcriptions generated by the algorithm surpasses Zhou et al.'s model and makes less octave errors. Moreover, the developed polyphonic transcription algorithm can generate transcriptions for full-length guitar recordings in the order of seconds, rather than minutes or hours. Given the speed of transcription, the proposed polyphonic transcription algorithm could be adapted for almost real-time transcription applications, where live performances of guitar are automatically transcribed. The DBN can run in real-time while the HMM requires some seconds of buffer to run effectively. Currently the system can annotate a recording in less time than it takes to play the recording. This could be accomplished by buffering the input guitar signal into analysis frames as it is performed. Another benefit of this algorithm is that the trained network weights can be saved to disk such that future transcriptions do not require retraining the model. As well, the size of the model is relatively small (less than 12MB) and so the network weights can fit on a portable device, such as smart-phone, or even a micro-controller. As a consequence of the amount of time required to train the pitch estimation algorithm, it is difficult to search for good combinations of algorithm parameters. Another arguable detriment of the transcription algorithm is that the underlying DBN pitch estimation algorithm is essentially a black box. After training, it is difficult to ascertain how the model reaches a solution. This issue is exacerbated as the depth of the network increases. Finally, it is possible to overfit the model to the training dataset. When running the fine-tuning process for another 30000 epochs, the f -measure of the transcription algorithm began to deteriorate due to overfitting. To mitigate against overfitting, the learning rate could be dampened as the number of training epochs increase. Another solution involves the creation of a validation dataset, such that the fine-tuning process stops when the f -measure of the algorithm begins to decrease on the guitar recordings in the validation dataset. The method used in this paper is fixed iterations, where the fine-tuning process is limited to a certain number of epochs instead of allowing the training procedure to continue indefinitely.</ns0:p></ns0:div>
<ns0:div><ns0:head>Limitations of Synthesis</ns0:head><ns0:p>One of the most obvious limitations with this study is the dataset. It is quite difficult to get high quality transcriptions and recordings of songs, in terms of time, labour, money and licensing. We relied primarily upon transcriptions of songs and synthesized renderings of the these transcriptions. Synthesis has many weaknesses as it is not a true acoustic instrument; it is meant to model an instrument. Thus when we train on synthesized models we potential overfit to a model of a guitar rather than a guitar playing.</ns0:p><ns0:p>Furthermore synthetic examples typically will be timed well with little clock slew or swing involved.</ns0:p><ns0:p>Real recordings of guitar music will start at different times, change tempo, and have far more noise in the recording and the timing of events than a synthesized example. The exactness of synthesized examples can pose a problem because one string pluck can be as predictable as another where as string plucks on actual guitars will vary in terms of duration, decay, energy, etc. Sample-based synthesizers might be too predictable and allow for neural nets to over-fit to the samples relied upon-if samplebased synthesizers are used one might have to add noise to the signal to improve robustness and avoid the learning of a 'sound-font'. This predictability is the primary weakness of synthesis; the lack of range or randomness in synthesized output should be a concern. Synthesis also makes assumptions in terms of tuning and accuracy of each pitch produced which might not reflect real world tunings. MIDI is also quite limited in terms of its timing and range of notes. A more accurate form of transcription might be needed to improve model and transcription quality.</ns0:p><ns0:p>Future work should involve more real recordings of guitar music to enable better transcription. Furthermore robotic guitars <ns0:ref type='bibr' target='#b41'>(Singer et al., 2003)</ns0:ref>, guitar versions of the Yamaha Disklavier, might provide more range of inputs yet still suffer from the issues regarding synthesis discussed earlier. Fundamentally synthesis is a cost trade-off: it enables careful reproduction of transcriptions but it comes with its own costs in terms of realism and applicability. When it's possible to find, curate, or synthesize data, this approach works well. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSION</ns0:head><ns0:p>Computer Science 11% higher than the f -measure reported by Zhou et al.'s single-instrument transcription algorithm <ns0:ref type='bibr' target='#b51'>(Zhou et al., 2009)</ns0:ref> on the same dataset.</ns0:p><ns0:p>The results of this work encourage the use of deep architectures such as belief networks to form alternative representations of industry-standard audio features for the purposes of instrument transcription. Moreover, this work demonstrates the effectiveness of multi-label learning for pitch estimation, specifically when an upper bound on polyphony exists.</ns0:p></ns0:div>
<ns0:div><ns0:head>Future Work</ns0:head><ns0:p>There are several directions of future work to improve the accuracy of transcriptions. First, there are substantial variations in the distribution of pitches across songs, and so the compilation of more training data is expected to improve the accuracy of frame-level pitch estimates made by the DBN. Second, alternate methods could be explored to raise the accuracy of frame-level polyphony estimates, such as training a separate classifier for predicting polyphony on potentially different audio features. Third, an alternate frame-smoothing algorithm that jointly considers the probabilities of other pitch estimates across analysis frames could further increase pitch estimation f -measure relative to the HMM method proposed by <ns0:ref type='bibr' target='#b35'>Poliner and Ellis (2007)</ns0:ref>, which smooths the estimates of one pitch across the audio analysis frames.</ns0:p><ns0:p>Finally, it would be beneficial to investigate whether the latent audio features derived for transcribing one instrument are transferable to the transcription of other instruments.</ns0:p><ns0:p>In the end, the big picture is a guitar tablature transcription algorithm that is capable of improving its transcriptions when provided with more examples. There are many guitarists that share manual tablature transcriptions online that would personally benefit from having an automated system capable of generating transcriptions that are almost correct and can subsequently be corrected manually. There is incentive to manually correct the output transcriptions because this method is potentially faster than performing a transcription from scratch, depending on the quality of the automated transcription and the difficulty of the song. The result is a crowdsourcing model that is capable of producing large ground-truth datasets for polyphonic transcription that can then be used to further improve the polyphonic transcription algorithm.</ns0:p><ns0:p>Not only would this improve the accuracy of the developed polyphonic transcription algorithm, but it would also provide a centralized repository of ground-truth guitar transcriptions for MIR researchers to train and test future state-of-the-art transcription algorithms.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure1. A system of modern guitar tablature for the song 'Weird Fishes' by Radiohead, complete with common western music notation above.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>perhaps the closest work to the algorithm presented here. Sigtia et al. encode note tracking and pitch estimation into the same neural network.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head /><ns0:label /><ns0:figDesc>applied deep convolutional networks, instead of HMMs, to transcribe guitar chord tablature from audio. The dataset used was different than Barbancho's as Humphrey et al. used pop music recordings rather than isolated guitar recordings. The Humphrey et al. model attempts to output string and fretboard chord fingerings directly. Thus instead of outputting a series of pitches the model estimates which strings are strummed and at what point are they pressed on the guitar fretboard. This model attempts to recover fingering immediately rather than build it or arrange fingering later. The authors report a frame-wise recognition rate of 77.42%.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. A restricted Boltzmann machine with m visible nodes and n hidden nodes. Weights on the undirected edges have been omitted for clarity.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Structure of the deep belief network for note pitch estimation. Edge weights are omitted for clarity.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. An overview of the transcription workflow on a four-second segment of a synthesized guitar recording.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>2</ns0:head><ns0:label /><ns0:figDesc>http://musescore.org 8/21 PeerJ Comput. Sci. reviewing PDF | (CS-2015:07:5874:2:0:NEW 16 Feb 2017)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Distribution of note pitches in the ground-truth dataset.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>4 https://archive.org/details/DeepLearningIsolatedGuitarTranscriptions 12/21 PeerJ Comput. Sci. reviewing PDF | (CS-2015:07:5874:2:0:NEW 16 Feb 2017) 5-fold cross validation results of the frame-level pitch estimation evaluation metrics on acoustic guitar with steel strings: r poly denotes the polyphony recall, p denotes precision, r denotes recall, and f denotes f -measure. learning rate of 0.05 using 1-step contrastive divergence with a batch size of 1000 training instances. For network fine-tuning, 30000 epochs were conducted with a learning rate of 0.05 and a batch size of 1000 training instances. The convergence threshold, which ceases training if the value of the objective function between epochs does not fluctuate more than the threshold, is set to 1E − 18 for both pretraining and fine-tuning. These algorithm parameters are used in the experiments detailed in the following sections.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>5</ns0:head><ns0:label /><ns0:figDesc>http://www.music-ir.org/mirex 14/21 PeerJ Comput. Sci. reviewing PDF | (CS-2015:07:5874:2:0:NEW 16 Feb 2017)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head /><ns0:label /><ns0:figDesc>different test sets and different training sets. Steel on Steel means that steelstring acoustic guitar audio is tested on a steel-string acoustic guitar-trained DBN. Electric on Nylon is electric guitar audio tested against a DBN trained on a nylon-stringed acoustic guitar model. Electric on Steel and Nylon is an electric guitar tested on a DBN trained against both acoustic with steel string and acoustic with nylon string models. The results shown are averages from 5-fold cross validation splitting on songs: per each fold songs used for evaluation were not in the training set. In every single case, except one, the f -measure of the DBN model outperforms the Zhou model, except in the case of Steel on Nylon & Electric. The difference for the steel samples likely come from its timbral properties and its distinctiveness from Nylon or Electric, which have similar timbres. One way to perhaps fix this difference is to spectral flatten the signal (whitening), as suggested by Klapuri (2006), before transcription or training. Regardless, the f -measure difference between the DBN model and the Zhou model is −0.03 to 0.10 with a mean difference in f -measure of 0.056, and a 95% confidence interval of 0.02 to 0.09 in f -measure difference. Wilcoxon rank sum test reports a p-value of 0.003, indicating a statistically significant difference in performance between Zhou f -measure and DBN fmeasure. Mixed networks, those trained on two guitar models seem to perform as well as models trained on one guitar model, with the exception of the Nylon & Electric network tested against Steel samples.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>Sci. reviewing PDF | (CS-2015:07:5874:2:0:NEW 16 Feb 2017) Manuscript to be reviewed Computer Science</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>When applied to the problem of polyphonic guitar transcription, deep belief networks outperform Zhou et al.'s general purpose transcription algorithms. Moreover, the developed transcription algorithm is fast: the transcription of a full-length guitar recording occurs in the order of seconds and is therefore suitable for real-time guitar transcription. As well, the algorithm is adaptable for the transcription of other instruments, such as the bass guitar or piano, as long as the pitch range of the instrument is provided and MIDI annotated audio recordings are available for training.The polyphonic transcription algorithm described in this paper is capable of forming discriminative, latent audio features that are suitable for quickly transcribing guitar recordings. The algorithm workflow consists of audio signal preprocessing, feature extraction, a novel pitch estimation algorithm that uses deep learning and multi-label learning techniques, frame smoothing, and onset quantization. The generated note event transcriptions are digitally encoded as a MIDI file, that is processed further to create a MusicXML file that encodes the corresponding guitar tablature notation.An evaluation of the frame-level pitch estimates generated by the deep belief network on a dataset of synthesized guitar recordings resulted in an f -measure of 0.77 after frame smoothing. An evaluation of the note events output by the entire transcription algorithm resulted in an f -measure of 0.67, which is18/21 PeerJ Comput. Sci. reviewing PDF | (CS-2015:07:5874:2:0:NEW 16 Feb 2017)</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>5-fold cross validation results of the precision, recall, and f -measure evaluation of note events transcribed using the DBN transcription algorithm compared to the<ns0:ref type='bibr' target='#b51'>Zhou et al. (2009)</ns0:ref> transcription algorithm. The first row includes octave errors while the second row excludes octave errors. Audio was generated from GuitarPro acoustic guitar with steel strings model. a yearly competition that evaluates polyphonic transcription algorithms developed by different research institutions on the same dataset, is conducted using the metrics of precision, recall, and f -measure.5 </ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell></ns0:row></ns0:table><ns0:note>13/21PeerJ Comput. Sci. reviewing PDF | (CS-2015:07:5874:2:0:NEW 16 Feb 2017)Manuscript to be reviewed</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Table4describes the pitch estimation algorithm variables for Experiment 4. This experiment sets the number of hidden layers as the independent variable, while keeping the number of nodes in each layer constant. Note that adding each consecutive layer does make the number of parameters in the neural network larger-the number of neural network parameters was not fixed, it grows per each 5-fold cross validation results of the frame-level pitch estimation evaluation metrics: p denotes precision, r denotes recall, and f denotes f -measure. Synthesis models are from Guitar Pro: acoustic guitar with steel, acoustic guitar with nylon, and clean electric guitar. layer added. Values of the controlled variables were selected based on preliminary tests. The f -measure (Equation</ns0:figDesc><ns0:table><ns0:row><ns0:cell>15/21</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Independent, controlled, and dependent variables for Hidden Layers Experiment</ns0:figDesc><ns0:table><ns0:row><ns0:cell>VARIABLE</ns0:cell><ns0:cell>VALUE</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Audio sampling rate, f s (Hz) 22050</ns0:cell></ns0:row><ns0:row><ns0:cell>Window size, w (samples)</ns0:cell><ns0:cell>2048</ns0:cell></ns0:row><ns0:row><ns0:cell>Hop size, h (samples)</ns0:cell><ns0:cell>75% of window size</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of hidden layers †</ns0:cell><ns0:cell>2, 3, 4</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of nodes per layer</ns0:cell><ns0:cell>300</ns0:cell></ns0:row><ns0:row><ns0:cell>Guitar Model</ns0:cell><ns0:cell>Acoustic with Steel Strings</ns0:cell></ns0:row><ns0:row><ns0:cell>Features</ns0:cell><ns0:cell>DFT power spectrum</ns0:cell></ns0:row><ns0:row><ns0:cell>f -measure ‡</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>† Denotes the independent variable.</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>‡ Denotes the dependent variable.</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Effect of number of Hidden Layers on transcriptionThis performance could be due to overfitting the network to the small amount of training examples, especially since we increase the parameter search space with every layer we add.There are several potential causes of this result. First, increasing the complexity of the model could have resulted in overfitting the network to the training data. Second, the issue of 'vanishing gradients'<ns0:ref type='bibr' target='#b6'>Bengio et al. (1994)</ns0:ref> could be occurring in the network fine-tuning training procedure, whereby the training signal passed to lower layers gets lost in the depth of the network. Yet another potential cause of this result is that the pretraining procedure may have found insufficient initial edge weights for networks with increasing numbers of hidden layers. Although, the evidence for overfitting is strong, we found that while f -measure and precision decreased on the test-set per each layer added, the f -measure and precision increased steadily when evaluated on the training. f -measures on the training set ranged from 0.888 for 2 layers, 0.950 for 3 layers, and to 0.966, for 4 layers. This is relatively clear evidence of the network being conditioned to the training set. Thus to properly answer this question we would need more training and test data or to change the number of neurons per layer.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell>Layers</ns0:cell><ns0:cell>p</ns0:cell><ns0:cell>r</ns0:cell><ns0:cell>f</ns0:cell><ns0:cell>ONE</ns0:cell><ns0:cell>HAMMING</ns0:cell><ns0:cell>poly r</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>ERROR</ns0:cell><ns0:cell>LOSS</ns0:cell></ns0:row><ns0:row><ns0:cell>BEFORE HMM</ns0:cell><ns0:cell>2 3 4</ns0:cell><ns0:cell cols='3'>0.675 0.601 0.636 0.650 0.600 0.623 0.643 0.591 0.616</ns0:cell><ns0:cell>0.192 0.200 0.211</ns0:cell><ns0:cell>0.040 0.042 0.043</ns0:cell><ns0:cell>0.463 0.452 0.433</ns0:cell></ns0:row><ns0:row><ns0:cell>AFTER HMM</ns0:cell><ns0:cell>2 3 4</ns0:cell><ns0:cell cols='3'>0.760 0.604 0.673 0.739 0.610 0.669 0.728 0.602 0.659</ns0:cell><ns0:cell>---</ns0:cell><ns0:cell>0.034 0.035 0.036</ns0:cell><ns0:cell>---</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='17'>/21 PeerJ Comput. Sci. reviewing PDF | (CS-2015:07:5874:2:0:NEW 16 Feb 2017) Manuscript to be reviewed Computer Science</ns0:note>
</ns0:body>
" | "Dear Reviewers,
Thank you for your reviews. We implemented just about every single suggestion. We appreciate the thoroughness
of the reviews as well.
Re-ran the depth experiment for overfitting (confirmed)
Renamed appropriate sections
Implemented the majority of suggestions
We thank you for the time you have spent and we feel we have met all the review comments.
Abram Hindle and Gregory Burlet
Detailed comments are inlined with the review below:
Dear Authors,
The revised version of the paper is a significant improvement compared to the original submission. Yet,
there are still some comments and suggestions by one of the reviewers, which should be taken into
account prior to acceptance.
Reviewer 1 (Eric Humphrey)
Basic reporting
Basic reporting looks great. I'd suggest the authors consider renaming the section 'Transcription
Evaluation' to 'Experimental Method', 'Experimentation', or something to that effect. Parameter sweeps
and tuning are still part of finding the solution that's being evaluated, which roll up to experimentation.
[X] Rename Transcription Eval to Experimental Method
Response: We renamed the section to Experimental Method and Evaluation.
Experimental design
Overall, the experimental design is reasonably solid. I do, however, have two smaller comments.
First, regarding L396 on page 12: The text states that 'recordings' are partitioned into two sets; is this
split conditional on the source songs? For example, imagine two GuitarPro files A and B. A is passed
through three guitar models producing A1, A2, and A3, and B is rendered to B1, B2, and B3. Can A and B
both occur in the test set? In the spirit of true scientific rigor, audio rendered from the same symbolic
source really shouldn't fall across partitions, and I'm curious whether or not this is considered here.
[X] L396 P 12 Explain that the splits are per song, so we do not train on the same songs we train on and splits
are across models
Response: We added more explanation in the paper about the how the preliminary splits work, Song A will be
ONLY part of training or ONLY part of test.
Second, regarding L527-528 on page 16: The authors intend to measure the effect of model depth, i.e.
the number of layers, independently, but keep the width, i.e. number of nodes, of each layer constant.
This isn't truly an independent assessment, because the number of parameters (and thus model
complexity) is certainly increasing. To truly measure the effect of layers, the number of parameters
should be held constant, as this would offer insight into what may be gained / lost with depth. Otherwise,
one would expect that over-fitting will almost certainly happen as the number of parameters increase,
especially here given the small size of the dataset and the minimal variance of the sound fonts
considered.
That said, I don't think this is an irrelevant experiment, but it's not testing the hypothesis set out by the
authors. Rather, I'd be willing to wager that performance on the training and test sets are going in
opposite directions here. Reporting performance on the training set (not given) would provide some
insight into what's happening here.
[X] L527 P 1 Clarify that the number of parameters is not held constant and that we might be obvserving
overfitting.
Response: We added text to this section to discuss how we could be overfitting and how the number of parameters
was not constant because we added constant size layers.
[X] L527 P 1 Can we rerun with training set performance reported?
Response: We evaluated the training and test performance and found improving training performance versus
declining test performance.
Validity of the findings
The majority of the findings reported by the authors are substantiated and insightful. There are a few,
though, that I would suggest the authors revisit.
L502-503: The authors offer that 'steel samples are generally louder than the electric or nylon acoustic
samples', and perhaps that's to account for a difference in performance between the reference system
(Zhou) and the one proposed here. I'm curious what the rationale would be in that one system would be
more or less affected by the gain of a signal? Or how could this hypothesis be tested? After all, symbolic
MIDI / GuitarPro files still encode a range of velocity values, no? I'd suspect it has more to do with the
timbre of the sounds than the loudness, in that the nylon and electric guitar are 'closer' than the steel
sound fount (damped overtone series?).
[X] L 502-503 Mention timbre also are difference. Ask greg.
Response: Regarding the difference between steel and nylon, I think Eric is right about timbre. I believe I
(Gregory) normalize the audio samples (or at least I do now) which would offset any major gain differences
between soundfonts, so I think it mostly comes down to timbre, which is quite complex and consists of several
perceptual attributes of the audio signal. One thing we could mention is that spectral whitening could be used in
an attempt to negate the effects of timbre when testing a model trained on one string set versus another. One
citation off the top of my head: 'where an input signal is first spectrally flattened (“whitenedâ€) in order to
suppress timbral information' (Klapuri, 2006) and my source code for this here. In the paper we talked about
whitening.
L522-525: In motivating the exploration of multiple layers in the network (2-4), a parallel is drawn
between deep networks and neurobiology. I would strongly advise against trying to make this link. Not
only is it debatable, it's an unnecessary distraction that undermines the good work around it. It is
sufficient to say 'deeper models afford greater representational power and can better model complex
acoustic signals' without bringing brains into the mix, and no one can take issue with the claim. Similar
comments hold for lines L536-538
[X] L522-525 536-538 Multiple layers
Response: We editted away some of the neuro-hand-waving. Thanks.
L548-553: As a conclusion to the same section named above, the authors offer three explanations for an
increase in depth leading to decreases in performance: 'First, increasing the complexity of the model
could have resulted in overfitting the network to the training data. Second, the issue of “vanishing
gradients†Bengio et al. (1994) could be occurring in the network fine-tuning training procedure,
whereby the training signal passed to lower layers gets lost in the depth of the network. Yet another
potential cause of this result is that the pretraining procedure may have found insufficient initial edge
weights for networks with increasing numbers of hidden layers.'
[X] L548-553 add pretraining found insufficient intial edge weights for
Response: We added that pretraining found insufficient intial edge weights for to the paper.
Based on past experience, I'd bet it is exclusively due to the first reason named. All speculation would be
easily resolved by including performance numbers over the training set, as well as the test set. If training
accuracy increases with model complexity, then we have our answer. In a similar manner, I am suspicious
that the vanishing gradient problem is to blame, or that pre-training yielded poor parameters. Again,
performance on the training set would shed some light on this. Also, I'd offer that some of the narrative
be adjusted to reflect that model complexity, not just depth, is being varied here.
[X] Collect training performance.
Response: We evaluated the training and test performance and found improving training performance versus
declining test performance.
Comments for the Author
Overall the article is in good shape, and I commend the authors for their diligence in continuing to
improve the work. My most important feedback (regarding science and whatnot) is named above, but I've
a number of much smaller notes to share. Do with them as you will.
L34: It would be more slightly more convincing to find a more modern reference than (Klapuri, 2004)
when referring to the state of the art being so far behind human experts, since it's almost 10 years older
than the (Benetos, 2012) citation, which is used as evidence that a monophonic transcription is solved.
[ ] Find a better reference for polyphonic transcription
Response: We have cited plenty of examples of systems in the rest of paper, the point of the Klapuri quote was that
they really said it quite beautifully, even if was in the past. But we understand the concern.
L81: It's a minor misrepresentation that Humphrey et al, 2012 & 2013 advocate the use of 'deep belief
networks', which are a specific kind of neural network (RBM pre-training followed by supervised fine
tuning). Rather, the articles argue for feature learning and deep architectures generically, e.g. CNNs,
DBNs, LSTMs, autoencoders, etc.
[X] Fix reference to Humphrey
Response: We clarified that it was feature engineering with deep architectures.
L119-127: It's somewhat unclear from this passage that the two systems presented are doing different
things on different data, i.e. extensive guitar fingerings from solo guitar recordings versus guitar chord
shapes over polyphonic pop/rock music.
[X] Clarify difference between systems
Response: We added clarification to indicate the difference in difficulty of each system.
L380-381: Why would MFCCs be a good feature for polyphonic pitch tracking?
[X] We removed the MFCC reference, it was part of Gregory's thesis. It did not work well, but it was used in
speech recognition literature.
L485-489: Both seem like reasonable kinds of errors. Do you have any insight to the frequency or
prevalence of one over the other? Personally I'd expect the duration merging kind to be more common
than the thresholding issue, but that's just a hunch.
[X] L485-489 Address octave errors.
Response: It's not exactly clear other than timbral differences between steel, electric, and nylon. Zhou does not do
well on steel either. We have not modified the paper to discuss this.
L473: Perhaps consider using the Constant-Q transform in future work, which provides a more
reasonable trade-off between frequency resolution and time resolution than the DFT.
[X] Response: Thanks for the suggestion
L560: It's more accurate to say 'faster than real-time', right? It's my understanding that the HMM
decoding is non-causal, which means the full signal must be processed before an output can be given at
t=0. This would be different from an 'on-line' system, as in 'as one plays music.'
[X] L560 clarify the system's deep learning aspect is faster than play-time. And in total it is faster than playtime, maybe not streaming low latency.
Response: We clarified in the paper what was meant, that the runtime is less than the playing time. The HMM
definitely would increase the latency of the technique.
L565: Similar to the previous comment, it's a bit of a stretch to claim that the algorithm could be
achieved with a microcontroller. It's doubtful that the processing speed seen on a personal computer
(with an Intel or AMD processor in the GHz) will translate well to smaller processors with less / slower
RAM.
[X] L575 -- microcontroller argument -- cite integer based bengio paper
Response: There's some interesting work in how to avoid floating point and use binary weighted networks to
improve performance. We have cited it and suggested that perhaps it could end up on a microcontroller. Current
cell phones have pretty impressive arm processors.
L568: Also in the realm of tempering optimistic claims, the sentence 'All that is required is a set of audio
files' is a tad ironic, given how difficult it sounds to obtain data in L355-363 and
[X] We toned down this claim.
L590-609. Perhaps something along the lines of 'When it's possible to find, curate, or synthesize data,
this approach is great.'
[X] L568 We added a similar claim to the paper .
L587: The approach used in this paper is not early stopping, but a fixed number of iterations. Early
stopping requires that some measure (typically over a validation set) is computed as a function of
iteration / parameters [https://en.wikipedia.org/wiki/Early_stopping].
[X] We fixed that in the paper, we said fixed iterations.
L590: What about sample-based synthesizers? These are at least real sound recordings, rather than
algorithmically defined synthesis equations.
[X] Clarify the dangers of sample based synths
Response: We added some arguement that sample based synths are a little dangerous due to the repetition of the
signals due to the low number of samples. It might be quite easy to learn the sound font rather than the actual
instrument that was recorded.
Thank you for your reviews!
We've thoroughly addressed them.
" | Here is a paper. Please give your review comments after reading it. |
732 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>With few exceptions, the field of Machine Learning (ML) research has largely ignored the browser as a computational engine. Beyond an educational resource for ML, the browser has vast potential to not only improve the state-of-the-art in ML research, but also, inexpensively and on a massive scale, to bring sophisticated ML learning and prediction to the public at large. This paper introduces MLitB, a prototype ML framework written entirely in JavaScript, capable of performing large-scale distributed computing with heterogeneous classes of devices. The development of MLitB has been driven by several underlying objectives whose aim is to make ML learning and usage ubiquitous (by using ubiquitous compute devices), cheap and effortlessly distributed, and collaborative. This is achieved by allowing every internet-capable device to run training algorithms and predictive models with no software installation and by saving models in universally readable formats. Our prototype library is capable of training deep neural networks with synchronized, distributed stochastic gradient descent. MLitB offers several important opportunities for novel ML research, including: development of distributed learning algorithms, advancement of web GPU algorithms, novel field and mobile applications, privacy-preserving computing, and green grid-computing. MLitB is available as open-source software.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>The field of Machine Learning (ML) currently lacks a common platform for the development of massively distributed and collaborative computing. As a result, there are impediments to leveraging and reproducing the work of other ML researchers, potentially slowing down the progress of the field. The ubiquity of the browser as a computational engine makes it an ideal platform for the development of massively distributed and collaborative ML. Machine Learning in the Browser (MLitB) is an ambitious software development project whose aim is to bring ML, in all its facets, to an audience that includes both the general public and the research community.</ns0:p><ns0:p>By writing ML models and algorithms in browser-based programming languages, many research opportunities become available. The most obvious is software compatibility: nearly all computing devices can collaborate in the training of ML models by contributing some computational resources to the overall training procedure and can, with the same code, harness the power of sophisticated predictive models on the same devices (see Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>). This goal of ubiquitous ML has several important consequences: training ML models can now occur on a massive, even global scale, with minimal cost, and ML research can now be shared and reproduced everywhere, by everyone, making ML models a freely accessible, public good. In this paper, we present both a long-term vision for MLitB and a light-weight prototype implementation of MLitB that represents a first step in completing the vision and is based on an important ML use-case, Deep Neural Networks.</ns0:p><ns0:p>In Section 2 we describe in more detail our vision for MLitB in terms of three main objectives: 1) make ML models and algorithms ubiquitous, for both the public and the scientific community, 2) create a framework for cheap distributed computing by harnessing existing infrastructure and personal devices as novel computing resources, and 3) design research closures, software objects that archive ML models, algorithms, and parameters so that they can be shared, reused, and, in general, support reproducible research. In Section 3 we describe the current state of the MLitB software implementation, the MLitB prototype. We begin with a description of our design choices, including arguments for using JavaScript and other modern web libraries and utilities. Then we describe a bespoke map-reduce synchronized event-loop, specifically designed for training a large class of ML models using distributed stochastic gradient descent (SGD). Our prototype focuses on a specific ML model, Deep Neural Networks (DNNs), using an existing JavaScript implementation <ns0:ref type='bibr' target='#b26'>(Karpathy, 2014)</ns0:ref>, modified only slightly for MLitB. We also report results of a scaling experiment, demonstrating both the feasibility and the engineering challenges of using browsers for distributed ML applications. We then complete the prototype description with a walk-through of using MLitB to specify and train a neural network for image classification.</ns0:p><ns0:p>MLitB is influenced and inspired by current volunteer computing projects. These and other related projects, including those from machine learning, are presented in Section 4.
Our prototype has exposed several challenges requiring further research and engineering; these are presented in Section 5, along with a discussion of interesting application avenues that MLitB makes possible. The most urgent software development directions follow in Section 6.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>MLITB: VISION</ns0:head><ns0:p>Our long-term vision for MLitB is guided by three overarching objectives: Ubiquitous ML: models can be trained and executed in any web browsing environment without any further software installation. Cheap distributed computing: algorithms can be executed on existing grid, cloud, etc., computing resources with minimal (and possibly no) software installation, and can be easily managed remotely via the web; additionally, small internet-enabled devices can contribute computational resources. Reproducibility: MLitB should foster reproducible science with research closures, universally readable objects containing ML model specifications, algorithms, and parameters that can be used seamlessly to achieve the first two objectives, as well as to support sharing of ML models and collaboration within the research community and the public at large.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Ubiquitous Machine Learning</ns0:head><ns0:p>The browser is the most ubiquitous computing device of our time, running, in some shape or form, on all desktops, laptops, and mobile devices. Software for state-of-the-art ML algorithms and models, on the other hand, consists of very sophisticated libraries written in highly specific programming languages within the ML research community <ns0:ref type='bibr' target='#b6'>(Bastien et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b25'>Jia et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b14'>Collobert et al., 2011)</ns0:ref>. As research tools, these software libraries have been invaluable. We argue, however, that making ML truly ubiquitous requires writing ML models and algorithms in web programming languages and using the browser as the computational engine.</ns0:p><ns0:p>The software we propose can run sophisticated predictive models on cell phones or supercomputers; for the former, this extends the distributed nature of ML to a global internet. By further encapsulating the algorithms and model together, MLitB turns powerful predictive modeling into a public commodity.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Cheap Distributed Computing</ns0:head><ns0:p>Using web browsers as compute nodes provides the capability of running sophisticated ML algorithms without the expense and technical difficulty of using custom grid or super-computing facilities (e.g. Hadoop cloud computing <ns0:ref type='bibr' target='#b36'>(Shvachko et al., 2010)</ns0:ref>). It has long been a dream to use volunteer computing to achieve truly massive-scale computing. Successes include SETI@Home <ns0:ref type='bibr' target='#b3'>(Anderson et al., 2002)</ns0:ref> and protein folding <ns0:ref type='bibr' target='#b29'>(Lane et al., 2013)</ns0:ref>. MLitB is being developed not only to run natively on browsers, but also to support scaled distributed computing on existing cluster and/or grid resources and, by harnessing the capacity of non-traditional devices, extremely large-scale computing with a global volunteer base. In the former set-up, low communication overhead and homogeneous devices (a 'typical' grid computing solution) can be exploited. In the latter, volunteer computing via the internet opens the scaling possibilities tremendously, albeit at the cost of unreliable compute nodes, variable power, limited memory, etc. Both have serious implications for the user, but, most importantly, both are implemented by the same software.</ns0:p><ns0:p>Although the current version of MLitB does not provide GPU computing, its design does not preclude GPU support in future versions. It is therefore possible to seamlessly provide GPU computing when it is available on existing grid computing resources. Using GPUs on mobile devices is a more delicate proposition since power consumption management is of paramount importance for mobile devices. However, it is possible for MLitB to manage power intelligently by detecting, for example, whether the device is connected to a power source, its temperature, and whether it is actively used for other activities. A user might volunteer periodic 'mini-bursts' of GPU power towards a learning problem with minimal disruption to or power consumption from their device. In other words, MLitB will be able to take advantage of the improvements and breakthroughs of GPU computing for web engines and mobile chips, with minimal software development and/or support.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3'>Reproducible and Collaborative Research</ns0:head><ns0:p>Reproducibility is a difficult yet fundamental requirement for science <ns0:ref type='bibr' target='#b33'>(McNutt, 2014)</ns0:ref>. Reproducibility is now considered just as essential for high-quality research as peer review; simply providing mathematical representations of models and algorithms is no longer considered acceptable <ns0:ref type='bibr' target='#b38'>(Stodden et al., 2013b)</ns0:ref>. Furthermore, merely replicating other work, despite its importance, can be given low publication priority <ns0:ref type='bibr' target='#b12'>(Casadevall and Fang, 2010)</ns0:ref> even though it is considered a prerequisite for publication. In other words, submissions must demonstrate that their research has been, or could be, independently reproduced.</ns0:p><ns0:p>For ML research there is no reason for not providing working software that allows reproduction of results (for other fields in science, constraints restricting software publication may exist). Currently, the main bottlenecks are the time cost to researchers of making research available, and the incompatibility of the research (i.e. code) for others, which further increases the time investment for researchers. One of our primary goals for MLitB is to provide reproducible research with minimal to no time cost to both the primary researcher and other researchers in the community. Following <ns0:ref type='bibr' target='#b37'>(Stodden et al., 2013a)</ns0:ref>, we support 'setting the default to reproducible.'</ns0:p><ns0:p>For ML disciplines, this means other researchers should not only be able to use a model reported in a paper to verify the reported results, but also retrain the model using the reported algorithm. This higher standard is difficult and time-consuming to achieve, but fortunately this approach is being adopted more and more often, in particular by a sub-discipline of machine learning called deep learning. In the deep learning community, the introduction of new datasets and competitions, along with innovations in algorithms and modeling, has produced rapid progress on many ML prediction tasks. Model collections (also called model zoos), such as those built with Caffe <ns0:ref type='bibr' target='#b25'>(Jia et al., 2014</ns0:ref>), make this collaboration explicit and easy to access for researchers. However, there remains a significant time investment to run any particular deep learning model (this includes compilation, library installations, platform dependencies, GPU dependencies, etc.). We argue that these are real barriers to reproducible research, and that choosing ubiquitous software and compute engines lowers them. For example, during our testing we converted a very performant computer vision model <ns0:ref type='bibr' target='#b31'>(Lin et al., 2013)</ns0:ref> into JSON format and it can now be used on any browser with minimal effort. 1 In a nod to the closure concept common in functional programming, our approach treats a learning problem as a research closure: a single object containing model and algorithm configuration plus code, along with model parameters, that can be executed (and therefore tested and analyzed) by other researchers.</ns0:p></ns0:div>
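To make the idea concrete, the following sketch shows what a research closure might look like as a single JSON object (expressed here as a JavaScript object literal). The field names, layer types, and code URL are illustrative assumptions on our part, not a fixed MLitB specification.

// Hypothetical sketch of a research closure as one self-describing object.
// All field names and values are illustrative assumptions.
var researchClosure = {
  name: 'cifar10-convnet',
  model: {                                  // model specification
    layers: [
      { type: 'input', width: 32, height: 32, depth: 3 },
      { type: 'conv', filters: 16, size: 5, activation: 'relu' },
      { type: 'pool', size: 2 },
      { type: 'softmax', classes: 10 }
    ]
  },
  algorithm: {                              // training algorithm and hyper-parameters
    method: 'distributed-sgd',
    learningRate: 0.01,
    l2Decay: 0.001
  },
  parameters: [ /* flat arrays of learned weights, one per layer */ ],
  code: 'https://example.org/mlitb/trainer.js'  // reference to executable training code
};

Because the object is plain JSON, it can be downloaded, shared, re-uploaded, and executed in any browser without platform-specific tooling.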
<ns0:div><ns0:head n='3'>MLITB: PROTOTYPE</ns0:head><ns0:p>The MLitB project and its accompanying software (application programming interfaces (APIs), libraries, etc.) are built entirely in JavaScript. We have taken a pragmatic software development approach to achieve as much of our vision as possible. To speed up our software development process, we have chosen, wherever possible, well-supported and actively developed external technology. By making these choices we have been able to quickly develop a working MLitB prototype that not only satisfies many of our objectives, but is also as technologically future-proof as possible. To demonstrate MLitB on a meaningful ML problem, we have similarly incorporated an existing JavaScript implementation of a Deep Neural Network into MLitB. The full implementation of the MLitB prototype can be found on GitHub 2 .</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Why JavaScript?</ns0:head><ns0:p>JavaScript is a pervasive web programming language, embedded in approximately 90% of <ns0:ref type='bibr'>web-sites (W3Techs, 2014)</ns0:ref>. This pervasiveness means it is highly supported (Can I Use, 2014), and is actively developed for efficiency and functionality <ns0:ref type='bibr' target='#b24'>(JavaScript V8, 2014;</ns0:ref><ns0:ref type='bibr'>asm.js, 2014)</ns0:ref>. As a result, JavaScript is the most popular programming language on GitHub and its popularity is continuing to grow <ns0:ref type='bibr' target='#b35'>(Ray et al., 2014)</ns0:ref>.</ns0:p><ns0:p>The main challenge for scientific computing with JavaScript is the lack of high-quality scientific libraries compared to platforms such as Matlab and Python. With the potential of native computational efficiency (or better, GPU computation) becoming available for JavaScript, it is only a matter of time before JavaScript bridges this gap. A recent set of benchmarks showed that numerical JavaScript code can be competitive with native C <ns0:ref type='bibr' target='#b27'>(Khan et al., 2014)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>General Architecture and Design</ns0:head></ns0:div>
<ns0:div><ns0:head>Design Considerations</ns0:head><ns0:p>The minimal requirements for MLitB are based on the scenario of running the network as public resource computing. The downside of public resource computing is the lack of control over the computing environment. Participants are free to leave (or join) the network at any time and their connectivity may be variable with high latency. MLitB is designed to be robust to these potentially destabilizing events. The loss of a participant results in the loss of computational power and data allocation. Most importantly, MLitB must robustly handle new and lost clients, re-allocation of data, and client variability in terms of computational power, storage capacity, and network latency.</ns0:p><ns0:p>Although we are agnostic to the specific technologies used to fulfill the vision of MLitB, in practice we are guided by both the requirements of MLitB and our development constraints. Therefore, as a first step towards implementing our vision, we chose technology pragmatically. Our choices also follow closely the design principles for web-based big data applications <ns0:ref type='bibr' target='#b7'>(Begoli and Horey, 2012)</ns0:ref>, which recommend popular standards and light-weight architectures. As we will see, some of our choices may be limiting at large scale, but they have permitted a successful small-scale MLitB implementation (with up to 100 clients).</ns0:p><ns0:p>Fig. <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> shows the high-level architecture and web technologies used in MLitB. Modern web browsers provide functionality for two essential aspects of MLitB: Web Workers (W3C, 2014) for parallelizing program execution with threads, and Web Sockets (IETF, 2011) for fast bi-directional communication channels between server and browser. To maintain compatibility across browser vendors, there is little choice for alternatives to Web Workers and Web Sockets. These same choices are also used in another browser-based distributed computing platform <ns0:ref type='bibr' target='#b15'>(Cushing et al., 2013)</ns0:ref>.</ns0:p><ns0:p>On the server side, there are many choices that can be made based on scalability, memory management, etc. However, we chose Node.js for the server application. 3 Node.js provides several useful features for our application: it is lightweight, written in JavaScript, handles events asynchronously, and can serve many clients concurrently <ns0:ref type='bibr' target='#b41'>(Tilkov and Vinoski, 2010)</ns0:ref>. Asynchronous events occur naturally in MLitB as clients join/leave the network, client computations are received by the server, and users add new models and otherwise interact with the server. Since the main computational load is carried by the clients, and not the server, a light-weight server that can handle many clients concurrently is all that is required by MLitB.</ns0:p></ns0:div>
<ns0:div><ns0:head>Design Overview</ns0:head><ns0:p>The general design of MLitB is composed of several parts. A master server hosts ML problems/projects and connects clients to them. The master server also manages the main event loop, where client-triggered events are handled, along with the reduce steps of a (bespoke) map-reduce procedure used for computation. When a browser (i.e. a heterogeneous device) makes an initial connection to the master server, a user-interface (UI) client (aka a boss) is instantiated. Through the UI, clients can add workers that can perform different tasks (e.g., train a model, download parameters, take a picture, etc.). An independent data server serves data to clients using zip files and prevents the master server from blocking while serving data. For efficiency, data transfer is performed using XHR 4 . Trained models can be saved into JSON objects at any point in the training process; these can later be loaded in lieu of creating new models.</ns0:p></ns0:div>
<ns0:div><ns0:head>Master Server</ns0:head><ns0:p>The master node (server) is implemented in Node.js with communication between the master and slave nodes handled by Web Sockets. The master server hosts multiple ML problems/projects simultaneously along with all clients' connections. All processes within the master are event-driven, triggered by actions of the slave nodes. A router maps incoming messages from slave nodes to the appropriate functions on the master node. The master must efficiently perform its tasks (data reallocation and distribution, reduce-steps) because the clients are idle awaiting new parameters before their next work cycle. New clients must also wait until the end of an iteration before joining a network. The MLitB network is dynamic and permits slave nodes to join and leave during processing. The master monitors its connections and is able to detect lost participants. When this occurs, data that was allocated to the lost client is re-allocated to the remaining clients if possible; otherwise it is marked as unallocated.</ns0:p></ns0:div>
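A minimal sketch of this event-driven structure is shown below, using the ws WebSocket library for Node.js. The message types, the data-id bookkeeping, and the port are illustrative assumptions rather than the actual MLitB implementation; in particular, the balanced allocation is reduced here to handing out whatever ids are unassigned.

// Illustrative sketch of the master's event-driven handling of slave nodes (Node.js).
var WebSocket = require('ws');
var wss = new WebSocket.Server({ port: 8080 });

var slaves = new Map();              // socket -> { allocatedIds: [...] }
var unallocated = [];                // data ids waiting to be (re)assigned
var pendingGradients = [];           // gradients collected during one iteration

wss.on('connection', function (socket) {
  slaves.set(socket, { allocatedIds: [] });

  socket.on('message', function (raw) {
    var msg = JSON.parse(raw);
    if (msg.type === 'join') {
      // give the new worker whatever data is currently unassigned
      slaves.get(socket).allocatedIds = unallocated.splice(0);
      socket.send(JSON.stringify({ type: 'allocate',
                                   ids: slaves.get(socket).allocatedIds }));
    } else if (msg.type === 'gradient') {
      pendingGradients.push(msg);    // reduced at the end of the iteration
    }
  });

  socket.on('close', function () {
    // lost participant: its data ids become unallocated again
    var lost = slaves.get(socket);
    slaves.delete(socket);
    unallocated = unallocated.concat(lost.allocatedIds);
  });
});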
<ns0:div><ns0:head>Data Server</ns0:head><ns0:p>The data server is a bespoke application intended to work with our neural network use-case model and can be thought of as a lightweight replacement for a proper image database. The data server is an independent Node.js application that can, but need not, live on the same machine as the master server. Users upload data in zip files before training begins; currently, the data server handles zipped image classification datasets (where sub-directory names define class labels). Data is then downloaded from the data server as zip files sent to clients using XHR, where they are unzipped and processed locally. XHR is used instead of Web Sockets because it communicates large zip files more efficiently. A redundant cache of data is stored locally in each client's browser memory. For example, a client may store 10,000 data vectors, but at each iteration it may only have the computational power to process 100 data vectors in its scheduled iteration duration. The data server uses the specialized JavaScript libraries unzip.js and redis-server.</ns0:p></ns0:div>
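The sketch below illustrates the basic serving pattern using only Node.js built-in modules: a previously uploaded zip file is streamed back to a client's data worker over HTTP/XHR. The URL scheme, port, and on-disk layout are illustrative assumptions and omit the upload path, id filtering, and the unzip.js/redis-server integration used by the prototype.

// Illustrative sketch of a data server answering XHR requests with zipped data (Node.js).
var http = require('http');
var fs = require('fs');
var url = require('url');

http.createServer(function (req, res) {
  var query = url.parse(req.url, true).query;     // e.g. /data?project=cifar10
  var zipPath = './uploads/' + query.project + '.zip';

  if (!fs.existsSync(zipPath)) {
    res.writeHead(404);
    return res.end('unknown project');
  }
  res.writeHead(200, { 'Content-Type': 'application/zip' });
  fs.createReadStream(zipPath).pipe(res);          // stream the zip to the client's data worker
}).listen(8081);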
<ns0:div><ns0:head>Clients</ns0:head><ns0:p>Clients are browser connections from heterogeneous devices that visit the master server's URL. Clients interact through a UI worker, called a boss, and can create slave workers to perform various tasks (see Workers). The boss is the main worker running in a client's browser. It manages the slave workers and the data download worker, and functions as a bridge between the downloader and the slaves. A simple wrapper handles UI interactions and provides input/output to the boss. Client bosses use a data worker to download data from the data server; the data worker and the data server communicate using XHR and pass zip files in both directions. The boss handles unzipping and decoding data for slaves that request data. Clients therefore require no software installation other than their native browser. Clients can contribute to any project hosted by the master server. Clients can trigger several events through the UI worker. These include adjusting hyper-parameters, adding data, and adding slave workers (Fig. <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>). Most tasks are run in a separate Web Worker thread (including the boss), ensuring a non-blocking and responsive client UI. Data downloading is a special task that, via the boss and the data worker, uses XHR to download from the data server.</ns0:p></ns0:div>
<ns0:div><ns0:head>Workers</ns0:head><ns0:p>In Fig. <ns0:ref type='figure' target='#fig_2'>3</ns0:ref> the tasks implemented using Web Worker threads are shown. At the highest level is the client UI, through which the user interacts with ML problems and controls their slave workers. From the client UI, a user can create a new project, load a project from file, upload data to a project, or add slave workers for a project. Slaves can perform several tasks; the most important is the trainer, which connects to the event loop of an ML project and contributes to its computation (i.e. its map step). Each slave worker communicates directly with the master server using Web Sockets. For the non-training tasks, the communication is mainly for sending requests for model parameters and receiving them. The training slave has more complicated behavior because it must download data and then perform computation as part of the main event loop. To train, the user sets the slave task to train and selects start/restart. This will trigger a join event at the master server; model parameters and data will be downloaded and the slave will begin computation upon completion of the data download. The user can remove a slave at any time. Other slave tasks are tracking, which requires receiving model parameters from the master, and allows users to monitor statistics of the model on a dataset (e.g. classification error) or to execute the model (e.g. classify an image on a mobile device).</ns0:p></ns0:div>
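The browser-side sketch below illustrates how a boss might spawn a trainer slave in its own Web Worker thread. The worker file name, master URL, and message shapes are illustrative assumptions; in the prototype the slave opens its own Web Socket to the master once started.

// Illustrative sketch (browser-side): the boss adds a trainer slave as a Web Worker.
var trainer = new Worker('trainer-worker.js');   // runs off the UI thread

trainer.onmessage = function (event) {
  if (event.data.type === 'needData') {
    // the boss would fetch, unzip and decode the requested ids, then reply with a 'data' message
    console.log('trainer requested data ids:', event.data.ids);
  }
};

// assign the task; the slave connects to the master over its own Web Socket when it starts
trainer.postMessage({ type: 'setTask',
                      task: 'train',
                      masterUrl: 'ws://master.example.org:8080' });

Running the trainer in a separate thread keeps the UI responsive even while gradients are being computed.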
<ns0:div><ns0:head n='3.3'>Events and Software Behavior</ns0:head><ns0:p>The MLitB network is constructed as a master-slave relationship, with one server and multiple slave nodes (clients). The setup for computation is similar to a MapReduce network <ns0:ref type='bibr' target='#b17'>(Dean and Ghemawat, 2008)</ns0:ref>; however, during each iteration of the master event loop the master server performs not only a reduce step but also several other important tasks.</ns0:p><ns0:p>The specific tasks will be dictated by events triggered by the client, such as requests for parameters, new client workers, removed/lost clients, etc. Our master event loop can be considered as a synchronized map-reduce algorithm with a user-defined iteration duration T, where values of T may range from 1 to 30 seconds, depending on the size of the network and the problem. MLitB is not limited to a map-reduce paradigm, and in fact we believe that our framework opens the door to peer-to-peer or gossip algorithms <ns0:ref type='bibr' target='#b10'>(Boyd et al., 2006)</ns0:ref>. We are currently developing asynchronous algorithms to improve the scalability of MLitB.</ns0:p></ns0:div>
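A deliberately simplified skeleton of this synchronized loop is sketched below. The real loop is event-driven rather than timer-driven, and the helper functions are empty stubs standing in for the steps a) to e) described next.

// Simplified, illustrative skeleton of the synchronized master event loop (Node.js side).
var T_MS = 4000;                       // user-defined iteration duration T

function handleDataAndNewClients() { /* steps a) and b): register and allocate data */ }
function reduceGradients()         { /* step c): combine gradients received this iteration */ return {}; }
function adjustSchedules()         { /* step d): account for per-client latency */ }
function broadcastParameters(p)    { /* step e): send new parameters to all bosses */ }

setInterval(function () {
  handleDataAndNewClients();
  var newParameters = reduceGradients();
  adjustSchedules();
  broadcastParameters(newParameters);
}, T_MS);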
<ns0:div><ns0:head>Master Event Loop</ns0:head><ns0:p>The master event loop consists of five steps and is executed by the master server node as long as there is at least one slave node connected. Each loop includes one map-reduce step, and runs for at least T seconds. The following steps are executed, in order: a) new data uploading and allocation, b) new client trainer initialization and data allocation, c) training workers' reduce step, d) latency monitoring and data allocation adjustment, and e) master broadcasts parameters.</ns0:p></ns0:div> <ns0:div><ns0:head>a) New data uploading and allocation</ns0:head><ns0:p>When a client boss uploads data, it directly communicates with the data server using XHR. Once the zip file has been uploaded, the data server sends the data indices and classification labels to the boss. The boss then registers the indices with the master server. Each data index is managed: MLitB stores an allocated index (the worker that is allocated the id) and a cached index (the worker that has cached the id). The master ensures that the data allocation is balanced amongst its clients. Once a data set is registered on the master server, the master allocates indices and sends the sets of ids to the workers. Workers can then request data from the boss, which in turn uses its data download worker to download those worker-specific ids from the data server. The data server sends a zipped file to the data downloader, which is then unzipped and processed by the boss (e.g. JPEG decoding for images). The zip file transfers are fast but the decoding can be slow. We therefore allow workers to begin computing before the entire dataset is downloaded and decoded, allowing projects to start training almost immediately while data gets cached in the background.</ns0:p></ns0:div>
<ns0:div><ns0:head>b) New client trainer initialization and data allocation</ns0:head><ns0:p>When a client boss adds a new slave, a request to join the project is sent to the master. If there is unallocated data, a balanced fraction of the data is allocated to the new worker. If there is no unallocated data, a pie-cutter algorithm is used to remove allocated data from other clients and assign it to the new client. This prevents unnecessary data transfers. The new worker is sent a set of data ids it will need to download from the client's data worker. Once the data has been downloaded and put into the new worker's cache, the master will then add the new worker to the computation performed at each iteration. The master server is immediately informed when a client or one of its workers is removed from the network. 5 Because of this, it can manage the newly unallocated data (that were allocated to the lost client).</ns0:p></ns0:div>
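An illustrative version of such a pie-cutter rebalancing is sketched below. It trims a fair share of data ids from existing workers and hands them to the newcomer; the function name and data layout are our own assumptions, not the exact MLitB algorithm.

// Illustrative pie-cutter style rebalancing when a new worker joins with no unallocated data.
function pieCutter(workers, newWorker) {
  var total = workers.reduce(function (n, w) { return n + w.ids.length; }, 0);
  var share = Math.floor(total / (workers.length + 1));   // target size for every worker
  var taken = [];
  workers.forEach(function (w) {
    while (w.ids.length > share && taken.length < share) {
      taken.push(w.ids.pop());          // move surplus ids to the new worker
    }
  });
  newWorker.ids = taken;
}

Only id assignments move between the master's bookkeeping structures; the actual data vectors are downloaded by the new worker from the data server afterwards.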
<ns0:div><ns0:head>c) Training workers' reduce step</ns0:head><ns0:p>The reduce step is completely problem-specific. In our prototype, workers compute gradients with respect to model parameters over their allocated data vectors, and the reduce step sums over the gradients and updates the model parameters.</ns0:p></ns0:div>
<ns0:div><ns0:head>d) Latency monitoring and data allocation adjustment</ns0:head><ns0:p>The interval T represents both the time of computation and the latency between the client and the master node. The synchronization is stochastic and adaptive. At each reduce step, the master node estimates the latency between the client and the master and informs the client worker how long it should run for. A client does not need a batch size because it simply clocks its own computation and returns results at the end of its scheduled work time. Under this setting, it is possible to have mobile devices that compute only a few gradients per second and powerful desktop machines that compute hundreds or thousands. This simple approach also allows the master to account for unexpected user activity: if the user's device slows or has increased latency, the master will decrease the load on the device for the next iteration. Generally, devices with a cellular network connection communicate with longer delays than hardwired machines. In practice, this means the reduction step in the master node receives delayed responses from slave nodes, forcing it to run the reduction function after the slowest slave node (with the largest latency) has returned. This is called asynchronous reduction callback delay.</ns0:p></ns0:div>
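A sketch of this adaptive scheduling is given below; the field names, the latency estimate, and the minimum budget are illustrative assumptions.

// Illustrative latency-aware scheduling on the master: each client is asked to compute
// for roughly T minus its estimated round-trip latency, so results arrive near the deadline.
function workBudgetMs(iterationMs, client) {
  // latency estimated from how late the previous reply arrived relative to its deadline
  var estimatedLatency = Math.max(0, client.lastReplyAt - client.lastDeadlineAt);
  return Math.max(250, iterationMs - estimatedLatency);   // never schedule less than 250 ms
}

// Example: a cellular client whose last reply arrived 800 ms late gets a 3.2 s budget
// inside a 4 s iteration:
// workBudgetMs(4000, { lastReplyAt: 10800, lastDeadlineAt: 10000 })  ->  3200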
<ns0:div><ns0:head>e) Master broadcasts parameters</ns0:head><ns0:p>An array of model parameters is broadcast to each client's boss worker using XHR; when the boss receives new parameters, they are given to each of its workers, which then start another computation iteration.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.4'>ML use-case: Deep Neural Networks</ns0:head><ns0:p>The current version of the MLitB software is built around a pervasive ML use-case: deep neural networks (DNNs). DNNs are the current state-of-the-art prediction models for many tasks, including computer vision <ns0:ref type='bibr' target='#b28'>(Krizhevsky et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b31'>Lin et al., 2013)</ns0:ref>, speech recognition <ns0:ref type='bibr' target='#b21'>(Hinton et al., 2012)</ns0:ref>, and natural language processing and machine translation <ns0:ref type='bibr' target='#b32'>(Liu et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b5'>Bahdanau et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b39'>Sutskever et al., 2014)</ns0:ref>. Our implementation only required superficial modifications to an existing JavaScript implementation <ns0:ref type='bibr' target='#b26'>(Karpathy, 2014)</ns0:ref> to fit into our network design. </ns0:p></ns0:div>
<ns0:div><ns0:head n='3.5'>Scaling Behavior of MLitB</ns0:head><ns0:p>We performed an experiment to study the scaling behavior of the MLitB prototype. Using up to 32 4-core workstation machines connected on a local area network using a single router, we trained a simple convolutional NN on the MNIST dataset for 100 iterations (with 4 seconds per iteration/synchronization event). 6 The number of slave nodes doubled from one experiment to the next (i.e. 1, 2, 4, . . . , 96). We are interested in the scaling behavior of two performance indicators: 1) power, measured in data vectors processed per second, and 2) latency, in milliseconds, between slaves and master node. Of secondary interest is the generalization performance on the MNIST test set. As a feasibility study of a distributed ML framework, we are most interested in scaling power while minimizing latency effects during training, but we also want to ensure the correctness of the training algorithm. Since optimization of the ML JavaScript library using compiled JS and/or GPUs is possible, but not our focus, we are less concerned with the power performance of a single slave node.</ns0:p><ns0:p>Results for power and latency are shown in Fig. <ns0:ref type='figure' target='#fig_4'>4</ns0:ref>. Power increases linearly up to 64 slave nodes, at which point a large increase in latency limits additional power gains for new nodes. This is due to a single server reaching the limit of its capacity to process incoming gradients synchronously. Solutions include using multiple server processes, asynchronous updates, and partial gradient communication. Test error, as a function of the number of nodes, is shown in Fig. <ns0:ref type='figure' target='#fig_5'>5</ns0:ref> after 50 iterations (200 seconds) and 100 iterations (400 seconds); i.e. each point represents the same wall-clock computation time. This demonstrates the correctness of MLitB for a given model architecture and learning hyperparameters.</ns0:p><ns0:p>Due to the data allocation policy that limits the data vector capacity of each node to 3000 vectors, experiments with more nodes process more of the training set during the training procedure. For example, using only 1 slave node trains on 3/60 of the full training set. With 20 nodes, the network is training on the full dataset. This policy could easily be modified to include data refreshment when running with unallocated data. The primary latency issue is due to all clients simultaneously sending gradients to the server at the end of each iteration. Three simple scaling solutions are 1) increasing the number of master node processes that receive gradients, 2) using asynchronous update rules (each slave computes for a random amount of time, then sends updates), reducing the load of any one master node process, and 3) partial communication of gradients (decreasing bandwidth).</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.6'>Walk-through of MLitB Prototype</ns0:head><ns0:p>We briefly describe how MLitB works from a researcher's point of view.</ns0:p></ns0:div>
<ns0:div><ns0:head>Specification of Neural Network and Training Parameters</ns0:head><ns0:p>Using a minimalist UI (not shown), the researcher can specify their neural network: for example, they can add/remove layers of different types and adjust regularization parameters (L1/L2/dropout) and learning rates. Alternatively, the researcher can load a previously saved neural network in JSON format (that may or may not have already been trained). Once a NN is specified (or loaded), it appears in the display, along with other neural networks also managed by the master node. By selecting a specific neural network, the researcher can then add workers and data (e.g. project cifar10 in Fig. <ns0:ref type='figure' target='#fig_6'>6</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>Specification of Training Data</ns0:head><ns0:p>Training data is uploaded to the data server as a zip file whose sub-directory names define the class labels (see Section 3.2); the data indices are then registered with the master server and allocated to the training workers.</ns0:p></ns0:div> <ns0:div><ns0:head>Training Mode</ns0:head><ns0:p>In training mode, a training worker performs as many gradient computations as possible within the iteration duration T (i.e. during the map step of the main event loop). The total gradient and the number of gradients are sent to the master, which then, in the reduce step, computes a weighted average of gradients from all workers and takes a gradient step using AdaGrad <ns0:ref type='bibr' target='#b18'>(Duchi et al., 2011)</ns0:ref>. At the end of the main event loop, new neural network weights are sent via Web Sockets to both trainer workers (for the next gradient computations) and to tracker workers (for computing statistics and executing the latest model).</ns0:p></ns0:div>
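The sketch below illustrates this reduce step: each worker reports the sum of its gradients and the number of examples it processed, the master forms the weighted average, and an AdaGrad-style update is applied. Flat parameter arrays and the small epsilon constant are illustrative assumptions; the sketch is not the prototype's code.

// Illustrative weighted-average gradient reduce step with an AdaGrad update (Duchi et al., 2011).
// results = [{ gradientSum: [...], count: k }, ...], one entry per worker.
function reduceAndStep(parameters, sumSquares, results, learningRate) {
  var total = results.reduce(function (c, r) { return c + r.count; }, 0);
  for (var i = 0; i < parameters.length; i++) {
    // gradient averaged over all examples processed by all workers this iteration
    var g = 0;
    for (var j = 0; j < results.length; j++) g += results[j].gradientSum[i];
    g /= total;
    sumSquares[i] += g * g;                                   // AdaGrad accumulator
    parameters[i] -= learningRate * g / (Math.sqrt(sumSquares[i]) + 1e-8);
  }
}

Weighting by the number of processed examples lets fast and slow clients contribute proportionally, which matches the clocked, variable-size work batches described above.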
<ns0:div><ns0:head>Figure 7. Tracking model (model execution):</ns0:head><ns0:p>The label of a test image is predicted using the latest NN parameters. Users can execute a NN prediction using an image stored on their device or using their device's camera. In this example, an image of a horse is correctly predicted with probability 0.687 (the class-conditional predictive probability).</ns0:p></ns0:div>
<ns0:div><ns0:head>Tracking Mode</ns0:head><ns0:p>There are two possible functions in tracking mode: 1) executing the neural network on test data, and 2) monitoring classification error on an independent data set. For 1, users can predict class labels for images taken with a device's camera or locally stored images. Users can also learn a new classification problem on the fly by taking a picture and giving it a new label; this is treated as a new data vector and a new output neuron is added dynamically to the neural network if the label is also new. Fig. <ns0:ref type='figure'>7</ns0:ref> shows a test image being classified by the cifar10 trained neural network. For 2, users create a statistics worker and can upload test images and track their error over time; after each complete evaluation of the test images, the latest neural network received from the master is used. Fig. <ns0:ref type='figure' target='#fig_7'>8</ns0:ref> shows the error for cifar10 using a small test set for the first 600 parameter updates.</ns0:p></ns0:div>
<ns0:div><ns0:head>Archiving Trained Neural Network Model</ns0:head><ns0:p>The prototype does not include a research closure specification. However, it does provide easy archiving functionality. At any moment, users can download the entire model specification and current parameter values in JSON format. Users can then share the JSON object or initialize a new training session with it by uploading it during the model specification phase, which provides a high level of reproducibility. Although the JSON object fully specifies the model, it does not include training or testing code. Despite this shortcoming, using a standard protocol is a simple way of providing a lightweight archiving system.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.7'>Limitations of MLitB Prototype</ns0:head><ns0:p>In this section we briefly discuss the limitations of the current prototype; later, in Section 5, we discuss the challenges we face in scaling MLitB to a massive level.</ns0:p><ns0:p>Our scaling experiment demonstrates that the MLitB prototype can accommodate up to 64 clients before latency significantly degrades its performance. Latency, however, is primarily affected by the length of an iteration and by the size of the neural network. For longer iterations, latency will become a smaller portion of the main event loop. For very large neural networks, latency will increase due to bandwidth pressure.</ns0:p><ns0:p>As discussed previously, the main computational efficiency loss is due to the synchronization requirement of the master event loop. This requirement causes the master server to be idle while the clients are computing and the clients to wait while the master processes all the gradients. As the size of the full gradients can be large (at least > 1MB for small neural networks), the network bandwidth is quickly saturated at the end of a computation iteration and during the parameter broadcast. By changing to an asynchronous model, the master can continuously process gradients and the bandwidth can be maximally utilized. By communicating partial gradients, further efficiency can be attained. We leave this for future work.</ns0:p><ns0:p>There is a theoretical limit of 500MB data storage per client (the viable memory of a web browser). In our experience, the practical limit is closer to 100MB, at which point performance is lost due to memory management issues. We found that 1MB/sec bandwidth was achievable on a local network, which meant that it could handle images on MNIST and CIFAR-10 easily, but would stall for larger images. With respect to Deep Neural Networks, the data processing ability of a single node was limited (especially when compared to sophisticated GPU-enabled libraries <ns0:ref type='bibr' target='#b6'>(Bastien et al., 2012)</ns0:ref>). Although we were most interested in the scaling performance, we note that naive convolution implementations significantly slow performance. We found that reasonably sized images, up to 100 × 100 × 3 pixels, can be processed on mobile devices in less than a second without convolutions, but can take several seconds with convolutions, limiting their usefulness. In the future, near-native or better implementations will be required for the convolutional layers.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>RELATED WORK</ns0:head><ns0:p>MLitB has been influenced by several different technologies and ideas presented by previous authors and by work in different specialization areas. We briefly summarize this related work below.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Volunteer Computing</ns0:head><ns0:p>BOINC <ns0:ref type='bibr' target='#b2'>(Anderson, 2004</ns0:ref>) is an open-source software library used to set up a grid computing network, allowing anyone with a desktop computer connected to the internet to participate in computation; this is called public resource computing. Public resource or volunteer computing was popularized by SETI@Home <ns0:ref type='bibr' target='#b3'>(Anderson et al., 2002)</ns0:ref>, a research project that analyzes radio signals from space in search of signs of extraterrestrial intelligence. More recently, protein folding has emerged as a significant success story <ns0:ref type='bibr' target='#b29'>(Lane et al., 2013)</ns0:ref>. Hadoop <ns0:ref type='bibr' target='#b36'>(Shvachko et al., 2010)</ns0:ref> is an open-source software system for storing very large datasets and executing user application tasks on large networks of computers. MapReduce <ns0:ref type='bibr' target='#b17'>(Dean and Ghemawat, 2008</ns0:ref>) is a general solution for performing computation on large datasets using computer clusters.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.2'>JavaScript Applications</ns0:head><ns0:p>In <ns0:ref type='bibr' target='#b15'>(Cushing et al., 2013)</ns0:ref> a network of distributed web browsers called WeevilScout is used for complex computation (regular expression matching and binary tree modifications) using a JavaScript engine. It uses similar technology (Web Workers and Web Sockets) to MLitB. ConvNetJS <ns0:ref type='bibr' target='#b26'>(Karpathy, 2014</ns0:ref>) is a JavaScript implementation of a convolutional neural network, developed primarily for educational purposes, which is capable of building diverse neural networks that run in a single web browser and are trained using stochastic gradient descent; it can be seen as the non-distributed predecessor of MLitB.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.3'>Distributed Machine Learning</ns0:head><ns0:p>The most performant deep neural network models are trained with sophisticated scientific libraries written for GPUs <ns0:ref type='bibr' target='#b8'>(Bergstra et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b25'>Jia et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b14'>Collobert et al., 2011)</ns0:ref> that provide orders of magnitude computational speed-ups compared to CPUs. Each implements some form of stochastic gradient descent (SGD) <ns0:ref type='bibr' target='#b9'>(Bottou, 2010)</ns0:ref> as the training algorithm. Most implementations are limited to running on the cores of a single machine and, by extension, are constrained by the memory limitations of the GPU. Exceptionally, there are distributed deep learning algorithms that use farms of commodity servers (e.g. Downpour SGD <ns0:ref type='bibr' target='#b16'>(Dean et al., 2012)</ns0:ref>) and farms of GPUs (e.g. COTS HPC <ns0:ref type='bibr' target='#b13'>(Coates et al., 2013)</ns0:ref>). Other distributed ML algorithm research includes the parameter server model <ns0:ref type='bibr'>(Li et al., 2014)</ns0:ref>, parallelized SGD <ns0:ref type='bibr' target='#b44'>(Zinkevich et al., 2010)</ns0:ref>, and distributed SGD <ns0:ref type='bibr' target='#b1'>(Ahn et al., 2014)</ns0:ref>. MLitB could potentially push commodity computing to the extreme using pre-existing devices, some of which may be GPU capable, with and without an organization's existing computing infrastructure. As we discuss below, there are still many open research questions and opportunities for distributed ML algorithm research.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>OPPORTUNITIES AND CHALLENGES</ns0:head><ns0:p>In tandem with our vision, there are several directions the next version of MLitB can take, both in terms of the library itself and the potential kinds of applications a ubiquitous ML framework like MLitB can offer. We first focus on the engineering and research challenges we have discovered during the development of our prototype, along with some we expect as the project grows. Second, we look at the opportunities MLitB provides, based not only on the research directions uncovered by these challenges, but also on novel application areas that are a perfect fit for MLitB. In Section 6 we preview the next concrete steps in MLitB development.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.1'>Challenges</ns0:head><ns0:p>We have identified three key engineering and research challenges that must be overcome for MLitB to achieve its vision of learning models at a global scale.</ns0:p></ns0:div>
<ns0:div><ns0:head>Memory Limitations</ns0:head><ns0:p>State-of-the-art Neural Network models have huge numbers of parameters. This prevents them from fitting onto mobile devices. There are two possible solutions to this problem. The first solution is to learn or use smaller neural networks. Smaller NN models have shown promise on image classification performance; in particular, the Network in Network <ns0:ref type='bibr' target='#b31'>(Lin et al., 2013)</ns0:ref> model from the Caffe model zoo is 16MB and outperforms AlexNet, which is 256MB <ns0:ref type='bibr' target='#b25'>(Jia et al., 2014)</ns0:ref>. It is also possible to first train a deep neural network and then use it to train a much smaller, shallow neural network <ns0:ref type='bibr' target='#b4'>(Ba and Caruana, 2014)</ns0:ref>. Another solution is to distribute the NN (during training and prediction) across clients. An example of this approach is Downpour SGD <ns0:ref type='bibr' target='#b16'>(Dean et al., 2012)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Communication Overhead</ns0:head><ns0:p>With large models, large numbers of parameters are communicated regularly. This issue is similar to the memory limitation and could benefit from the same solutions. However, given a fixed bandwidth and asynchronous parameter updates, we can ask which parameter updates (from master to client) and which gradients (from client to master) should be communicated. An algorithm could transmit a random subset of the weight gradients, or send the most informative ones. In other words, given a fixed bandwidth budget, we want to maximize the information transferred per iteration.</ns0:p></ns0:div>
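As an illustration of the first option, the sketch below transmits only a random subset of gradient entries under a fixed budget. It is not part of the current prototype; the sampling scheme and message format are illustrative assumptions.

// Illustrative sketch: communicate a random subset of gradient entries under a budget.
function sampleGradient(gradient, budget) {
  var picked = { indices: [], values: [] };
  for (var k = 0; k < budget; k++) {
    var i = Math.floor(Math.random() * gradient.length);   // sampled with replacement, for brevity
    picked.indices.push(i);
    picked.values.push(gradient[i]);
  }
  return picked;   // the master applies these entries and leaves the rest unchanged
}

A more informative scheme could instead rank entries by magnitude and send the largest ones within the same budget.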
<ns0:div><ns0:head>Performance Efficiency</ns0:head><ns0:p>Perhaps the biggest argument against scientific computing with JavaScript is its computational performance. We disagree that this should prevent the widespread adoption of browser-based scientific computing: several groups aim to achieve native performance in JavaScript <ns0:ref type='bibr' target='#b24'>(JavaScript V8, 2014;</ns0:ref><ns0:ref type='bibr'>asm.js, 2014)</ns0:ref>, GPU kernels are becoming part of existing web engines (e.g. WebCL 7 ), and both can be seamlessly incorporated into existing JavaScript libraries, though they have yet to be written for ML.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5.2'>Opportunities</ns0:head></ns0:div>
<ns0:div><ns0:head>Massively Distributed Learning Algorithms</ns0:head><ns0:p>The challenges just presented are obvious areas of future distributed machine learning research (and solutions are currently being developed for the next version of MLitB). Perhaps more interesting, at a higher level, is that the MLitB vision raises novel questions about what it means to train models on a global scale. For instance, what does it mean for a model to be trained across a global internet of heterogeneous and unreliable devices? Is there a single model or a continuum of models that are consistent locally, but different from one region to another? How should a model adapt over long periods of time? These are largely untapped research areas for ML.</ns0:p></ns0:div>
<ns0:div><ns0:head>Field Research</ns0:head><ns0:p>Moving data collection and predictive models onto mobile devices makes it easy to bring models into the field. Connecting users with mobile devices to powerful NN models can aid field research by bringing the predictive models to the field, e.g. for fast labeling and data gathering. For example, a pilot program of crop surveillance in Uganda currently uses bespoke computer vision models for detecting pestilence (insect eggs, leaf diseases, etc.) <ns0:ref type='bibr' target='#b34'>(Quinn et al., 2011)</ns0:ref>. Projects like these could leverage publicly available, state-of-the-art computer vision models to bootstrap their field research.</ns0:p></ns0:div>
<ns0:div><ns0:head>Privacy Preserving Computing and Mobile Health</ns0:head><ns0:p>Our MLitB framework provides a natural platform for the development of real privacy-preserving applications <ns0:ref type='bibr' target='#b19'>(Dwork, 2008</ns0:ref>) by naturally protecting user information contained on mobile devices, yet allowing the data to be used for valuable model development. The current version of MLitB does not provide privacy-preserving algorithms such as <ns0:ref type='bibr' target='#b20'>(Han et al., 2010)</ns0:ref>, but these could easily be incorporated into MLitB. It would therefore be possible for a collection of personal devices to collaboratively train machine learning models using sensitive data stored locally and with modified training algorithms that guarantee privacy. One could imagine, for example, using privately stored images of a skin disease to build a classifier based on a large collection of disease exemplars, yet with the data always kept on each patient's mobile device, thus never shared, and trained using privacy-preserving algorithms.</ns0:p></ns0:div>
<ns0:div><ns0:head>Green Computing</ns0:head><ns0:p>One of our main objectives was to provide simple, cheap, distributed computing capability with MLitB. Because MLitB runs with minimal software installation (in most cases requiring none), it is possible to use this framework for low-power consumption distributed computing. By using existing organizational resources running in low-energy states (dormant or near dormant) MLitB can wake the machines, perform some computing cycles, and return them to their low-energy states. This is in stark contrast to a data center approach which has near constant, heavy energy usage <ns0:ref type='bibr'>(nrd, 2014)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6'>FUTURE MLITB DEVELOPMENT</ns0:head><ns0:p>The next phases of development will focus on the following directions: a visual programming user interface for model configuration, development of a library of ML models and algorithms, development of performant scientific libraries in JavaScript with and without GPUs, and model archiving with the development of a research closure specification.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.1'>Visual Programming</ns0:head><ns0:p>Many ML models are constructed as chains of processing modules. This lends itself to a visual programming paradigm, where the chains can be constructed by dragging and dropping modules together. This way models can be visualized and compared, dissected, etc. Algorithms are tightly coupled to the model and a visual representation of the model can allow interaction with the algorithm as it proceeds. For example, learning rates for each layer of a neural network can be adjusted while monitoring error rates (even turned off for certain layers), or training modules can be added to improve learning of hidden layers for very deep neural networks, as done in <ns0:ref type='bibr' target='#b40'>(Szegedy et al., 2014)</ns0:ref>. With a visual UI it would be easy to pull in other existing, pre-trained models, remove parts, and train on new data. For example, a researcher could start with a pre-trained image classifier, remove the last layer, and easily train a new image classifier, taking advantage of an existing, generalized image representation model.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.2'>Machine Learning Library</ns0:head><ns0:p>We currently have built a prototype around an existing JavaScript implementation of DNNs <ns0:ref type='bibr' target='#b26'>(Karpathy, 2014)</ns0:ref>. In the near future we plan on implementing other models (e.g. latent Dirichlet allocation) and algorithms (e.g. distributed MCMC <ns0:ref type='bibr' target='#b1'>(Ahn et al., 2014)</ns0:ref>). MLitB is agnostic to learning algorithms and therefore is a great platform for researching novel distributed learning algorithms. To do this, however, MLitB will need to completely separate machine learning model components from the MLitB network. At the moment, the prototype is closely tied to its neural network use-case. Once separated, it will be possible for external modules to be added by the open-source community.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.3'>GPU implementations</ns0:head><ns0:p>Implementation of GPU kernels can bring MLitB performance up to the level of current state-of-the-art scientific libraries such as Theano <ns0:ref type='bibr' target='#b8'>(Bergstra et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b6'>Bastien et al., 2012)</ns0:ref> and Caffe <ns0:ref type='bibr' target='#b25'>(Jia et al., 2014)</ns0:ref>, while retaining the advantages of using heterogeneous devices. For example, balancing computational loads during training is very simple in MLitB and any learning algorithm can be shared by GPU powered desktops and mobile devices. Smart phones could be part of the distributed computing process by permitting the training algorithms to use short bursts of GPU power for their calculations, and therefore limiting battery drain and user disruption.</ns0:p></ns0:div>
<ns0:div><ns0:head n='6.4'>Design of Research closures</ns0:head><ns0:p>MLitB can save and load JSON model configurations and parameters, allowing researchers to share and build upon other researchers' work. However, it does not quite achieve our goal of a research closure where all aspects-code, configuration, parameters, etc-are saved into a single object. In addition to research closures, we hope to develop a model zoo, akin to Caffe's for posting and sharing research. Finally, some kind of system for verifying models, like recomputation.org, would further strengthen the case for MLitB being truly reproducible (and provide backwards compatibility). Each client connection to the master server initiates a {\em UI worker}, aka a {\em boss}.</ns0:p><ns0:p>For uploading data from a client to the data server and for downloading data from the data server to a client, a separate Web Worker called the {\em data worker} is used. Users can add slaves through the UI worker; each slave performs a separate task using a Web Worker.</ns0:p><ns0:p>Icon made by Freepik from www.flaticon.com</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Overview of MLitB. (1) A researcher sets up a learning problem in his/her browser. (2) Through the internet, grid and desktop machines contribute computation to solve the problem. (3) Heterogeneous devices, such as mobile phone and tablets, connect to the same problem and contribute computation. At any time, connected clients can download the model configuration and parameters, or use the model directly in their browsing environment.</ns0:figDesc><ns0:graphic coords='3,245.13,63.78,206.84,206.84' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. MLitB architecture and technologies. (1) Servers are Node.js applications. The master server is the main server controlling communication between clients and hosts ML projects. (2) Communication between the master server and clients occurs over Web Sockets. (3) When heterogeneous devices connect to the master server they use Web Workers to perform different tasks. Upon connection, a UI worker, or boss, is instantiated. Web Workers perform all the other tasks on a client and are controlled by the boss. See Fig. 3 for details. (4) A special data worker on the client communicates with the data server using XHR. (5) The data server, also a Node.js application, manages uploading of data in zip format and serves data vectors to the client data workers.</ns0:figDesc><ns0:graphic coords='6,224.45,302.40,248.20,270.76' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. MLitB Client Workers. Each client connection to the master server initiates a UI worker, aka a boss. For uploading data from a client to the data server and for downloading data from the data server to a client, a separate Web Worker called the data worker is used. Users can add slaves through the UI worker; each slave performs a separate task using a Web Worker.</ns0:figDesc><ns0:graphic coords='8,245.13,63.78,206.80,344.67' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Effects of scaling on power and latency. Power-measured as the number of data vectors processed per second-scales linearly until 64 nodes, when the increase in latency jumps. The ideal linear scaling is shown in grey.</ns0:figDesc><ns0:graphic coords='10,193.43,123.60,310.18,206.78' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Effects of scaling on optimization. Convergence of the NN is measured in terms of test error after 50 and 100 iterations. Each point represents approximately the same wall-clock time (200/400 seconds for 50 and 100 iterations, respectively).</ns0:figDesc><ns0:graphic coords='11,193.43,63.78,310.18,206.78' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. CIFAR-10 project loaded in MLitB.</ns0:figDesc><ns0:graphic coords='11,193.43,411.82,310.27,172.37' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Tracking mode (classification error). A test dataset can be loaded and its classification error rate tracked over iterations; here using a NN trained on CIFAR-10.</ns0:figDesc><ns0:graphic coords='13,193.43,63.78,310.27,206.84' type='bitmap' /></ns0:figure>
<ns0:note place='foot' n='1'>JavaScript Object Notation json.org/ 2 https://github.com/software-engineering-amsterdam/MLitB</ns0:note>
<ns0:note place='foot' n='3'>Node.js: http://nodejs.org. 4 XMLHttpRequest www.w3.org/TR/XMLHttpRequest</ns0:note>
<ns0:note place='foot' n='5'>If a user closes a client tab, the master will know immediately and take action. In the current implementation, if a user closes the master tab, all current connections are lost.</ns0:note>
<ns0:note place='foot' n='6'>Slave node specifications (32 units): Intel Core i3-2120 3.3GHz (dual-core); 4GB RAM; Windows 7 Enterprise x64; Google Chrome 35. Master node specifications (1 unit): Intel Xeon E5620 2.4GHz (quad-core); 24 GB RAM; Ubuntu 10.04 LTS. NodeJS version: v0.10.28. The NN has a 28 × 28 input layer connected to 16 convolution filters (with pooling), followed by a fully connected output layer.</ns0:note>
<ns0:note place='foot' n='7'>WebCL by Khronos: www.khronos.org/webcl.</ns0:note>
</ns0:body>
" | "Rebuttal Letter for MLitB
We thank the reviewers for taking the time to review our manuscript and their many useful comments. They have certainly helped us improve our manuscript.
We have attempted to address as many concerns as possible with this revision. In particular, we have
1) Refocused the contribution into two parts: a vision (what we want but haven’t fully reached) and a prototype (what is working and achieved).
2) Included a scaling experiment using up to 96 clients, and added a discussion of the limitations of the prototype
3) Added discussion of design choices
We address reviewer comments below. We have highlighted changes to the manuscript.
Reviewer Comments
Reviewer 1 (Cristian Mihaescu)
Basic reporting
The main drawback of the presentation regards clear presentation of client and server side. It was somehow difficult to figure out exactly which parts are on the server side and which are on the client side.
We have clarified these points in the manuscript.
For example:
- The Boss is a data worker and/or a web worker?
The boss is a Web Worker that interacts with the data server on behalf of the other workers on a slave node (see the sketch after this list of questions).
- Both Master Server and Data Server are on the same server machine?
Not necessarily.
- Is Hadoop used in this project? Where? On Master Server and/or Data Server?
No. We implement a bespoke map-reduce-like event loop.
- Did authors use an already existing implementation of MapReduce, like Hadoop? ... and if yes, how does it integrates with JavaScript?
No. MLitB is built from the ground up using Web Workers, Web Sockets, and nodejs.
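To make the division of labour concrete, here is a minimal, illustrative JavaScript sketch of the slave-node wiring described above (the file names and message fields are ours, for illustration only, and do not match the MLitB source):

    // boss.js - runs as a Web Worker on the slave node; it is the only worker that talks to the data server
    self.onmessage = function (event) {
      if (event.data.type === 'need-data') {
        var xhr = new XMLHttpRequest();
        xhr.open('GET', event.data.url);
        xhr.onload = function () {
          self.postMessage({ type: 'data', payload: JSON.parse(xhr.responseText) });
        };
        xhr.send();
      }
    };

    // slave.js - main page script: one boss Web Worker plus a compute (trainer) Web Worker
    var boss = new Worker('boss.js');
    var trainer = new Worker('trainer.js');
    boss.onmessage = function (event) {
      if (event.data.type === 'data') trainer.postMessage(event.data); // relay fetched data to the trainer
    };
    boss.postMessage({ type: 'need-data', url: '/data/batch/0' });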
Experimental design
The experiments are fine although more accuracy and performance metrics may be required.
Validity of the findings
Yes, the findings are valid but may be a little preliminary. More structured experimental results may be needed.
In Section 3.5 of the revised manuscript, we have added a major experiment in which we scaled the number of nodes in the network from 1 to 96. We found good scaling performance up to 64 nodes, at which point latency issues became detrimental. We follow the experiment with a discussion of how to improve this behavior with asynchronous updates, partial gradient updates, and so on.
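For intuition, a simplified sketch of the difference (illustrative JavaScript, not the code used in the experiment; the client objects, their computeGradient method and the learning rate are assumed): a synchronized round is gated by the slowest client, which is what hurts scaling past 64 nodes, whereas an asynchronous scheme applies each gradient as it arrives.

    // Synchronized round: the master waits for every client's gradient before updating.
    function addVectors(a, b) { return a.map(function (v, i) { return v + b[i]; }); }

    function runSynchronizedRound(clients, parameters, learningRate) {
      var promises = clients.map(function (c) { return c.computeGradient(parameters); });
      return Promise.all(promises).then(function (gradients) {   // gated by the slowest client
        var mean = gradients.reduce(addVectors).map(function (g) { return g / gradients.length; });
        return parameters.map(function (p, i) { return p - learningRate * mean[i]; });
      });
    }

    // Asynchronous alternative: apply each gradient as soon as it arrives, with no global barrier.
    function applyGradient(parameters, gradient, learningRate) {
      return parameters.map(function (p, i) { return p - learningRate * gradient[i]; });
    }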
Comments for the author
The paper needs two improvements:
- refine the presentation of the architectural decisions for a better understanding
We have clarified the earlier concerns, reorganized the text, and added a Design Considerations subsection in Section 3.2.
- more structured experimental results (accuracy metrics and performance assessment) may be needed.
Please refer to Section 3.5 for the experiment and 3.7 for discussion of limitations.
Reviewer 2 (Anonymous)
Basic reporting
In the introduction, the authors should clearly clarify the three objectives that are later mentioned in Section 2.
Done.
Additionally, it is important to clearly state the motivation and justification of the paper. In this sense, research questions are missing. The paper should be more focused on the scientific side, but it mostly covers technological aspects.
We agree that this should have been clearer. We have refocused the research side of the paper into two aspects: a vision of the objectives that MLitB (or any system like it) should fulfill, and a lightweight prototype showing how many of those objectives can be met, at least partially, with standard/open technologies. We have added experiments in Section 3.2 to justify this. (Incidentally, MLitB was also demonstrated at Neural Information Processing Systems 2014, in Montreal, with many users in a very challenging environment. We understand this does not qualify as a scientific result.)
Please describe the organisation of the paper uniformly. For example, Section 3.2 is described but section 3.1 is not even mentioned. Mentioning the main sections is more than enough.
Done.
Please, avoid including explanatory text in captions (Fig. 1 to 3). Such pieces of text should be incorporated to the section.
We have shortened and moved caption text to the main text whenever possible.
All the acronyms should be explained (e.g. SGD, etc.)
Done.
Experimental design
The paper is not properly motivated and the experimental framework presented is weak. For example, how did the authors reach this approach?
We have restructured our introduction and added a discussion of design choices to address this. The main motivation behind the design choices was: how do we build the most successful prototype with limited human resources? This means using a map-reduce-like event loop (which we acknowledge has limitations at scale) and harnessing existing browser technology as much as possible (see below).
Which design choices were made? Which limitations have their solution and how were they neutralised? Are other solutions (e.g. using services) viable in this case?
We have added extra discussion of design choices in Section 3.2 and of some of their limitations in Section 3.7. We focus, of course, on the limitations that a synchronized event loop presents, along with other issues such as bandwidth. These issues are just beginning to be addressed in the ML community; therefore, for a first prototype, state-of-the-art ML algorithms were not the focus.
The technological contribution is notorious and technically sound, but the scientific side should be conducted more precisely. Additionally, technological choices should be plenty justified as well, e.g., why node.js?
As we discuss in the manuscript, we chose pragmatically from available open and web standards (Web Workers/Sockets, node, JavaScript, JSON). It turns out that these are the same choices, made for the same reasons, in the other research we refer to in the manuscript. We want to be very clear: we are agnostic to the choices as long as they attain the vision; in any case, the choices we made were not limiting. The most important limitations, which we discuss in the manuscript, are due to algorithmic design (and are active areas of ML research), not technology choices (though surely better technology will be necessary for a very massive, global MLitB). As far as technology choices go, there are surprisingly few alternatives that work across browser vendors, but the ones that do, which we used, are very good.
A more detailed comparative framework with both other design alternatives and other previous solutions would be required.
We are satisfied that a synchronized version of MLitB is the easiest first prototype to present. We have since developed new asynchronous algorithms, but this is ongoing. These are ML algorithms and not software design issues, though they of course are closely entwined. See above for technology choice discussion.
All the implementation and contribution seems to be focused on a given problem domain. What if the library should be scaled, integrated within other system or just a single module would be reused? How is it done? I am afraid that it would be not that simple, and most of the current modules should be adapted or reimplemented, e.g. to add preprocessing capabilities, data and result handlers, new algorithms, etc. Who is in charge of extending the library? Can an external developer to add new features or should we depend on the releases provided by a development team? If it is considered extendable, then please include evidences (case studies, a further discussion, etc.) All these aspects should be explained in detailed so the real contribution is clearer.
We agree with the reviewer that this is not a simple task. It is true that we have not written a completely versatile and modular software package, but rather a prototype framework that works with arguably the most important ML use case, deep neural networks. We are completely open about the limitations this presents for MLitB (in fact, DNNs are also the most demanding of ML models, so they are a good test). In Section 6.2 we acknowledge that we still need to separate the MLitB networking layer from the convnet prototype. Once this is done, it would be quite possible for other research groups to add modules (a purely illustrative sketch of such a module interface is given below). This model has worked well for Scikits, for example. The goal of our manuscript is not a software release.
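For illustration only (this interface does not exist in the current prototype; the names are hypothetical), a new ML module would mainly need to expose parameter exchange and batch training to the networking layer:

    // Hypothetical contract an external ML module could implement once networking and model are separated.
    function LogisticRegressionModule(dimension) {
      this.weights = new Array(dimension).fill(0);
    }
    LogisticRegressionModule.prototype.getParameters = function () { return this.weights.slice(); };
    LogisticRegressionModule.prototype.setParameters = function (w) { this.weights = w.slice(); };
    LogisticRegressionModule.prototype.trainOnBatch = function (batch, learningRate) {
      // batch: array of {x: [features], y: 0 or 1}; plain stochastic gradient descent on the logistic loss
      var weights = this.weights;
      batch.forEach(function (example) {
        var z = example.x.reduce(function (s, xi, i) { return s + xi * weights[i]; }, 0);
        var p = 1 / (1 + Math.exp(-z));
        example.x.forEach(function (xi, i) { weights[i] -= learningRate * (p - example.y) * xi; });
      });
    };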
The experimental framework should be clearly defined. Important information is missing to support the scalability of this approach. Which is the largest amount of data supported? Which technical/performance limitations have the authors found in their approach? Any limitation regarding data (type, size, etc.)?
Please see our scaling experiment in section 3.2 and discussion of limitations in 3.7.
Validity of the findings
The paper makes some strong assumptions without any substantial and precise scientific support. For example, in Section 2.1, the authors mention that “to make ML truly ubiquitous requires ML writing models and algorithms with web programming languages and using the browser as the engine”. This should be supported by references.
We do say: “We argue, …”, beforehand. We agree this is an assertion without support, but we stand by it.
In fact, a major issue is that the authors often make use of subjective references to endorse their work and assumptions. It is clearly lacking of rigour. References to particular blog entries or subjective articles should be avoided. Please cite instead peer-reviewed works and other precise sources of information. If assertions are not properly founded, they become speculation.
We agree. We have removed references to blog entries and added peer-reviewed citations.
Another strong issue is that the paper is not clear about what is really done and what is to be. For example, the abstract seems to indicate that GPU capabilities are already provided (“MlitB offers [..] including: development of distributed learning algorithms, advancement of web GPU algorithms [..]”. Until Section 2 we do not know whether it is really implemented or not. In fact, the last paragraph of Section 2.2 should be moved to Future Work.
Please, clearly differentiate the current contribution from future work.
It also happens with Objective 2 and 3, described in Section 2, which are not properly developed later.
We hope that the restructured introduction makes this clear: the vision serves as a roadmap, and the prototype is the working framework.
Section 2.3 explains how important is to provide mechanisms to make reproducibility easier. I totally agree. However, it seems to be an item in our wish list, because how to make it with the library is not properly explained in the paper. In this sense, it is likely that the contribution of Dr. Antonio Ruiz and Dr. Jose Antonio Parejo (University of Seville, Spain) about reproducibility in the field of ML (they are building some sort of framework in this context) would be of interest.
We agree this could be made clearer. Our vision outlines what we want in terms of research closures; our prototype archives a JSON object containing the parameters and the model specification (a sketch is given below). We have avoided imposing a true specification for research closures because we think it would be premature and speculative. For the moment, JSON provides a lot of flexibility, and as an open standard it satisfies many of the reproducibility requirements by default. We searched for contributions by Ruiz and Parejo that would be relevant, but honestly could not find any. We would be happy to include them if they improve the manuscript.
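For concreteness, the archived object is along the following lines (the field names and values here are illustrative, not the exact schema used by the prototype):

    // Sketch of a research closure: the model specification plus its trained parameters, serialized as JSON.
    var researchClosure = {
      name: 'convnet-demo',
      created: new Date().toISOString(),
      specification: [                      // layer-by-layer network definition
        { type: 'input', width: 28, height: 28, depth: 1 },
        { type: 'conv', filterSize: 5, filters: 16, activation: 'relu' },
        { type: 'pool', size: 2, stride: 2 },
        { type: 'fullyConnected', classes: 10 }
      ],
      parameters: [],                       // flat arrays of learned weights, one entry per layer
      statistics: { iterations: 10000, clients: 32 }
    };
    var archive = JSON.stringify(researchClosure);   // stored or downloaded as a plain JSON document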
A performance analysis is required so that we can really compare this approach and know whether it would be a viable choice. Citing other external works about the language performance is not enough. Please, design and include a comparative performance study of your own proposal. This becomes especially important in the field of ML because of its increasing computational requirements.
We have not included a comparison with an alternative to JavaScript (is this what the reviewer suggests?). Of course, we could use different code for the clients depending on their device, resources, etc., but we wanted to make a universal framework, so we honestly do not see any alternative to JS. The external references to JS optimization are important for the reader because they 1) prevent readers from assuming that JS is very slow and therefore dismissing the work, and 2) remind readers that scaling performance is what matters for MLitB (along with, not unimportantly, its ability to work on any device). We have limited human resources, which prevents us from fulfilling all aspects of our vision and from maximizing performance.
In a real environment, how many users could it support? How would the increase of users affect the performance?
We have added a scaling experiment to address this. We have included discussion of design limitations as well.
Comments for the author
In general, my view on this work is very positive. The idea is really interesting and the work, promising and challenging. However, I regret to say that, according to the criteria given to the reviewers by PeerJ, the manuscript seems to be immature yet and requires an important rewriting effort. The experimental framework and its validation from a scientific perspective are weak.
We thank you for your positive take! We hope that the revised manuscript addresses your concerns.
Web workers and web sockets should be explained. Discuss why they are the best choice would be interesting (limitations, characteristics, etc.)
We have added more discussion in Design Considerations. As we mention above, there are very few choices.
In Fig. 5, what does “Probability” means? It is not a precise measure.
This is the class-conditional probability produced by the NN. We have added this clarification to the manuscript.
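In other words, the value plotted is the per-class probability from the network's output layer; assuming a standard softmax output (a JavaScript sketch, not the manuscript's code):

    // Class-conditional probabilities from the output layer's raw scores.
    function softmax(scores) {
      var max = Math.max.apply(null, scores);                          // subtract the max for numerical stability
      var exps = scores.map(function (s) { return Math.exp(s - max); });
      var sum = exps.reduce(function (a, b) { return a + b; }, 0);
      return exps.map(function (e) { return e / sum; });               // non-negative and summing to 1
    }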
Figure 6 is unreadable.
Changed.
I cannot see Section 5.3 as an opportunity. It should be presented as future work, right?
We are not entirely sure what the reviewer means. We have spent time reorganizing challenges and opportunities, so hopefully we have addressed this.
Section 7.2 does not provide a significant contribution. Please extend to a better clarification.
Done.
How are new developments incorporated to the server?
We do not fully understand the question. If it concerns communication: communication is by Web Sockets / XHR, and Web Sockets inform the server of new/lost connections and other events.
What would happen if the browser is suddenly closed?
If it is a client, the client is lost, but the network is fine. If it is the master, then the network is lost. We have added this to the manuscript.
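A minimal sketch of the server-side bookkeeping this implies (using the 'ws' package purely for illustration; the actual MLitB server code differs): the socket close event is what lets the master drop a lost client while the rest of the network keeps training.

    // server.js - node.js sketch: track live clients and drop them when their socket closes.
    var WebSocketServer = require('ws').Server;
    var wss = new WebSocketServer({ port: 8080 });
    var clients = [];

    wss.on('connection', function (socket) {
      clients.push(socket);                               // a new slave joined the network
      socket.on('message', function (message) {
        // e.g. gradient uploads, data requests, heartbeats
      });
      socket.on('close', function () {                    // tab closed or connection lost
        clients = clients.filter(function (c) { return c !== socket; });
        // the remaining clients keep training; only losing the master tab loses the network
      });
    });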
Is there any domain/s to which this library is especially well-suited?
We think field computing is an interesting area: users can use their cameras to take images, classify them, add them to the training set, and so on. Another is privacy-preserving computing, where locally stored images are used to train collaboratively without disclosing any sensitive information; this is a major research topic in ML, and MLitB is a simple framework for testing it. In general, there are many researchers in non-ML fields who could benefit from powerful prediction models, and MLitB is a framework that can provide this, albeit for a particular use case.
" | Here is a paper. Please give your review comments after reading it. |
733 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Data in the life sciences are extremely diverse and are stored in a broad spectrum of repositories ranging from those designed for particular data types (such as KEGG for pathway data or UniProt for protein data) to those that are general-purpose (such as FigShare, Zenodo, Dataverse or EUDAT). These data have widely different levels of sensitivity and security considerations. For example, clinical observations about genetic mutations in patients are highly sensitive, while observations of species diversity are generally not. The lack of uniformity in data models from one repository to another, and in the richness and availability of metadata descriptions, makes integration and analysis of these data a manual, time-consuming task with no scalability. Here we explore a set of resource-oriented Web design patterns for data discovery, accessibility, transformation, and integration that can be implemented by any general- or special-purpose repository as a means to assist users in finding and reusing their data holdings. We show that by using off-the-shelf technologies, interoperability can be achieved at the level of an individual spreadsheet cell. We note that the behaviours of this architecture compare favourably to the desiderata defined by the FAIR Data Principles, and can therefore represent an</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Carefully-generated data are the foundation for scientific conclusions, new hypotheses, discourse, disagreement and resolution of these disagreements, all of which drive scientific discovery. Data must therefore be considered, and treated, as first-order scientific output, upon which there may be many downstream derivative works, among these, the familiar research article <ns0:ref type='bibr' target='#b15'>(Starr et al., 2015)</ns0:ref>. But as the volume and complexity of data continue to grow, a data publication and distribution infrastructure is beginning to emerge that is not ad hoc, but rather explicitly designed to support discovery, accessibility, (re)coding to standards, integration, machine-guided interpretation, and re-use.</ns0:p><ns0:p>In this text, we use the word 'data' to mean all digital research artefacts, whether they be data (in the traditional sense), research-oriented digital objects such as workflows, or combinations/packages of these (i.e. the concept of a 'research object', <ns0:ref type='bibr' target='#b1'>(Bechhofer et al., 2013)</ns0:ref>). Effectively, all digital entities in the research data ecosystem will be considered data by this manuscript. Further, we intend 'data' to include both data and metadata, and recognize that the distinction between the two is often user-dependent. Data, of all types, are often published online, where the practice of open data publication is being encouraged by the scholarly community, and increasingly adopted as a requirement of funding agencies <ns0:ref type='bibr' target='#b16'>(Stein et al., 2015)</ns0:ref>. Such publications utilize either a special-purpose repository (e.g. model-organism or molecular data repositories) or increasingly commonly will utilize general-purpose repositories such as FigShare, Zenodo, Dataverse, EUDAT or even institutional repositories. Special-purpose repositories generally receive dedicated funding to curate and organize data, and have specific query interfaces and APIs to enable exploration of their content. General-purpose repositories, on the other hand, allow publication of data in arbitrary formats, with little or no curation and often very little structured metadata. Both of these scenarios pose a problem with respect to interoperability. While APIs allow mechanized access to the data holdings of a special-purpose repository, each repository has its own API, thus requiring specialized software to be created for each cross-repository query. Moreover, the ontological basis of the curated annotations are not always transparent (neither to humans nor machines), which hampers automated integration. General purpose repositories are less likely to have rich APIs, thus often requiring manual discovery and download; however, more importantly, the frequent lack of harmonization of the file types/formats and coding systems in the repository, and lack of curation, results in much of their content being unusable <ns0:ref type='bibr' target='#b9'>(Roche et al., 2015)</ns0:ref>. Previous projects, specifically in the bio/medical domain, that have attempted to achieve deep interoperability include caBIO <ns0:ref type='bibr'>(Covitz et al., 2003)</ns0:ref> and TAPIR <ns0:ref type='bibr'>(De Giovanni et al., 2010)</ns0:ref>. The former created a rich SOAP-based API, enforcing a common interface over all repositories. The latter implemented a domain-specific query language that all participating repositories should respond to. 
These initiatives successfully enabled powerful cross-resource data exploration and integration; however, this was done at the expense of broad-scale uptake, partly due to the complexity of implementation, and/or required the unavoidable participation of individual data providers, who are generally resource-strained. Moreover, in both cases, the interoperability was aimed at a specific field of study (cancer, and biodiversity respectively), rather than a more generalized interoperability goal spanning all domains. With respect to more general-purpose approaches, and where 'lightweight' interoperability was considered acceptable, myGrid <ns0:ref type='bibr' target='#b17'>(Stevens et al., 2003)</ns0:ref> facilitated discovery and interoperability between Web Services through rich ontologically-based annotations of the service interfaces, and BioMoby <ns0:ref type='bibr'>(Wilkinson et al., 2008)</ns0:ref> built on these myGrid annotations by further defining a novel ontology-based service request/response structure to guarantee data-level compatibility and thereby assist in workflow construction <ns0:ref type='bibr'>(Withers et al., 2010)</ns0:ref>. SADI <ns0:ref type='bibr'>(Wilkinson et al., 2011), and</ns0:ref><ns0:ref type='bibr'>SSWAP (Gessler et al., 2009)</ns0:ref> used the emergent Semantic Web technologies of RDF and OWL to enrich the machine-readability of Web Service interface definitions and the data being passed -SADI through defining service inputs and outputs as instances of OWL Classes, and SSWAP through passing data embedded in OWL 'graphs' to assist both client and server in interpreting the meaning of the messages. In addition, two Web Service interoperability initiatives emerged from the World Wide Web Consortium -OWL-S <ns0:ref type='bibr'>(Martin et al., 2005)</ns0:ref> and SAWSDL <ns0:ref type='bibr' target='#b3'>(Martin et al., 2007)</ns0:ref>, both of which used semantic annotations to enhance the ability of machines to understand Web Service interface definitions and operations. All of these Service-oriented projects enjoyed success within the community that adopted their approach; however, the size of these adopting communities have, to date, been quite limited and are in some cases highly domain-specific. Moreover, each of these solutions is focused on Web Service functionality, which represents only a small portion of the global data archive, where most data is published as static records. Service-oriented approaches additionally require data publishers to have considerable coding expertise and access to a server in order to utilize the standard, which further limits their utility with respect to the 'lay' data publishers that make-up the majority of the scholarly community. As such, these and numerous other interoperability initiatives, spanning multiple decades, have yet to convincingly achieve a lightweight, broadly domain-applicable solution that works over a wide variety of static and dynamic source data resources, and can be implemented with minimal technical expertise. There are many stakeholders who would benefit from progress in this endeavour. Scientists themselves, acting as both producers and consumers of these public and private data; public and private research-oriented agencies; journals and professional data publishers both 'general purpose' and 'special purpose'; research funders who have paid for the underlying research to be conducted; data centres (e.g. 
the EBI <ns0:ref type='bibr'>(Cook et al., 2016)</ns0:ref>, and the SIB (SIB Swiss Institute of Bioinformatics Members, 2016)) who curate and host these data on behalf of the research community; research infrastructures such as BBMRI-ERIC <ns0:ref type='bibr' target='#b5'>(van Ommen et al., 2015)</ns0:ref> and ELIXIR <ns0:ref type='bibr'>(Crosswell & Thornton, 2012)</ns0:ref>, and diverse others. All of these stakeholders have distinct needs with respect to the behaviours of the scholarly data infrastructure. Scientists, for example, need to access research datasets in order to initiate integrative analyses, while funding agencies and review panels may be more interested in the metadata associated with a data deposition -for example, the number of views or downloads, and the selected license. Due to the diversity of stakeholders; the size, nature/format, and distribution of data assets; the need to support freedom-of-choice of all stakeholders; respect for privacy; acknowledgment of data ownership; and recognition of the limited resources available to both data producers and data hosts, we see this endeavour as one of the Grand Challenges of eScience. In January 2014, representatives of a range of stakeholders came together at the request of the Netherlands eScience Centre and the Dutch Techcentre for Life Sciences (DTL) at the Lorentz Centre in Leiden, the Netherlands, to brainstorm and debate about how to further enhance infrastructures to support a data ecosystem for eScience. From these discussions emerged the notion that the definition and widespread support of a minimal set of community-agreed guiding principles and practices could enable data providers and consumers -machines and humans alike -to more easily find, access, interoperate, and sensibly re-use the vast quantities of information being generated by contemporary data-intensive science. These principles and practices should enable a broad range of integrative and exploratory behaviours, and support a wide range of technology choices and implementations, just as the Internet Protocol (IP) provides a minimal layer that enables the creation of a vast array of data provision, consumption, and visualisation tools on the Internet. The main outcome of the workshop was the definition of the so-called FAIR guiding principles aimed at publishing data in a format that is Findable, Accessible, Interoperable and Reusable by both machines and human users. The FAIR Principles underwent a period of public discussion and elaboration, and were recently published <ns0:ref type='bibr' target='#b10'>(Wilkinson et al., 2016)</ns0:ref>. Briefly, the principles state:</ns0:p><ns0:p>Findable -data should be identified using globally unique, resolvable, and persistent identifiers, and should include machine-actionable contextual information that can be indexed to support human and machine discovery of that data. Accessible -identified data should be accessible, optimally by both humans and machines, using a clearly-defined protocol and, if necessary, with clearly-defined rules for authorization/authentication. 
Interoperable -data becomes interoperable when it is machine-actionable, using shared vocabularies and/or ontologies, inside of a syntactically and semantically machine-accessible format.</ns0:p><ns0:p>Reusable -Reusable data will first be compliant with the F, A, and I principles, but further, will be sufficiently well-described with, for example, contextual information, so it can be accurately linked or integrated, like-with-like, with other data sources. Moreover, there should be sufficiently rich provenance information so reused data can be properly cited. While the principles describe the desired features that data publications should exhibit to encourage maximal, automated discovery and reuse, they provide little guidance regarding how to achieve these goals. This poses a problem when key organizations are already endorsing, or even requiring adherence to the FAIR principles. For example, a biological research group has conducted an experiment to examine polyadenylation site usage in the pathogenic fungus Magnaporthe oryzae, recording, by high-throughput 3'-end sequencing, the preference of alternative polyadenylation site selection under a variety of growth conditions, and during infection of the host plant. The resulting data take the form of study-specific Excel spreadsheets, BED alignment graphs, and pie charts of protein functional annotations. Unlike genome or protein sequences and microarray outputs, there is no public curated repository for these types of data, yet the data are useful to other researchers, and should be (at a minimum) easily discovered and interpreted by reviewers or third-party research groups attempting to replicate their results. Moreover, their funding agency, and their preferred scientific journal, both require that they publish their source data in an open public archive according to the FAIR principles. At this time, the commonly used general-purpose data archival resources in this domain do not explicitly provide support for FAIR, nor do they provide tooling or even guidance for how to use their archival facilities in a FAIR-compliant manner. As such, the biological research team, with little or no experience in formal data publishing, must nevertheless selfdirect their data archival in a FAIR manner. We believe that this scenario will be extremely common throughout all domains of research, and thus this use-case was the initial focus for this interoperability infrastructure and FAIR data publication prototype.</ns0:p><ns0:p>Here we describe a novel interoperability architecture that combines three pre-existing Web technologies to enhance the discovery, integration, and reuse of data in repositories that lack or have incompatible APIs; data in formats that normally would not be considered interoperable such as Excel spreadsheets and flat-files; or even data that would normally be considered interoperable, but do not use the desired vocabulary standards. We examine the extent to which the features of this architecture comply with the FAIR Principles, and suggest that this might be considered a 'reference implementation' for the FAIR Principles, in particular as applied to non-interoperable data in any general-or special-purpose repository. We provide two exemplars of usage. The first is focused on a use-case similar to that presented above, where we use our proposed infrastructure to create a FAIR, self-archived scholarly deposit of biological data to the general-purpose Zenodo repository. 
The second, more complex example has two objectives -first to use the infrastructure to improve transparency and FAIRness of metadata describing the inclusion criterion for a dataset, representing a subset of a special-purpose, curated resource (UniProt); and second, to show how even the already FAIR data within UniProt may be transformed to increase its FAIRness even more by making it interoperable with alternative ontologies and vocabularies, and more explicitly connecting it to citation information. Finally, we place this work in the context of other initiatives and demonstrate that it is complementary to, rather than in competition with, other initiatives.</ns0:p></ns0:div>
<ns0:div><ns0:head>Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Implementation</ns0:head></ns0:div>
<ns0:div><ns0:head>Overview of technical decisions and their justification</ns0:head><ns0:p>The World Wide Web Consortium's (W3C) Resource Description Framework (RDF) offers the ability to describe entities, their attributes, and their relationships with explicit semantics in a standardized manner compatible with widely used Web application formats such as JSON and XML. The Linked Data Principles <ns0:ref type='bibr' target='#b19'>(Berners-Lee, 2006)</ns0:ref> mandate that data items and schema elements are identified by HTTP-resolvable URIs, so the HTTP protocol can be used to obtain the data. Within an RDF description, using shared public ontology terms for metadata annotations supports search and large scale integration. Given all of these features, we opted to use RDF as the basis of this interoperability infrastructure, as it was designed to share data on the Web. Beyond this, there was a general feeling that any implementation that required a novel data discovery/sharing 'Platform', 'Bus', or API, was beyond the minimal design that we had committed to; it would require the invention of a technology that all participants in the data ecosystem would then be required to implement, and this was considered a non-starter. However, there needed to be some form of coalescence around the mechanism for finding and retrieving data. Our initial target-community -that is, the biomedical sciences -have embraced lightweight HTTP interfaces. We propose to continue this direction with an implementation based on REST <ns0:ref type='bibr'>(Fielding & Taylor, 2002)</ns0:ref>, as several of the FAIR principles map convincingly onto the objectives of the REST architectural style for distributed hypermedia systems, such as having resolvable identifiers for all entities, and a common machine-accessible approach to discovering and retrieving different representations of those entities. The implementation we describe here is largely based on the HTTP GET method, and utilizes rich metadata and hypermedia controls. We use widely-accepted vocabularies not only to describe the data in an interoperable way, but also to describe its nature (e.g. the context of the experiment and how the data was processed) and how to access it. These choices help maximize uptake by our initial target-community, maximize interoperability between resources, and simplify construction of the wide (not pre-defined) range of client behaviours we intend to support. Confidential and privacy-sensitive data was also an important consideration, and it was recognized early on that it must be possible, within our implementation, to identify and richly describe data and/or datasets without necessarily allowing direct access to them, or by allowing access through existing regulatory frameworks or security infrastructures. For example, many resources within the International Rare Disease Research Consortium participate in the RD Connect platform <ns0:ref type='bibr' target='#b18'>(Thompson et al., 2014)</ns0:ref> which has defined the 'disease card' -a metadata object that gives overall information about the individual disease registries, which is then incorporated into a 'disease matrix'. The disease matrix provides aggregate data about what disease variants are in the registry, how many individuals represent each disease, and other high-level descriptive data that allows, for example, researchers to determine if they should approach the registry to request full data access. 
Finally, it was important that the data host/provider is not necessarily a participant in making their data interoperable -rather, the interoperability solution should be capable of adapting existing data with or without the source provider's participation. This ensures that the interoperability objectives can be pursued for projects with limited resourcing, that 'abandoned' datasets may still participate in the interoperability framework, but most importantly, that those with the needs and the resources should adopt the responsibility for making their data-of-interest interoperable, even if it is not owned by them. This distributes the problem of migrating data to interoperable formats over the maximum number of stakeholders, and ensures that the most crucial resources -those with the most demand for interoperability -become the earliest targets for migration. With these considerations in mind, we were inspired by three existing technologies whose features were used in a novel combination to create an interoperability infrastructure for both data and metadata, that is intended to also addresses the full range of FAIR requirements. Briefly, the selected technologies are:</ns0:p><ns0:p>1) The W3C's Linked Data Platform <ns0:ref type='bibr' target='#b14'>(Speicher, Arwe & Malhotra, 2015)</ns0:ref>. We generated a model for hierarchical dataset containers that is inspired by the concept of a Linked Data Platform (LDP) Container, and the LDP's use of the Data Catalogue Vocabulary (DCAT, (Maali, Erickson & Archer, 2014)) for describing datasets, data elements, and distributions of those data elements. We also adopt the DCAT's use of Simple Knowledge Organization System (SKOS, <ns0:ref type='bibr' target='#b4'>(Miles & Bechhofer, 18 August, 2009)</ns0:ref>) Concept Schemes as a way to ontologically describe the content of a dataset or data record.</ns0:p><ns0:p>2) The RDF MappingLanguage <ns0:ref type='bibr'>(RML, (Dimou et al., 2014)</ns0:ref>. RML allows us to describe one or more possible RDF representations for any given dataset, and do so in a manner that is, itself, FAIR: every sub-component of an RML model is Findable, Accessible, Interoperable, and Reusable. Moreover, for many common semi-structured data, there are generic tools that utilize RML models to dynamically drive the transformation of data from these opaque representations into interoperable representations (https://github.com/RMLio/RML-Mapper). 3) Triple Pattern Fragments (TPF - <ns0:ref type='bibr'>(Verborgh et al., 2016)</ns0:ref>). A TPF interface is a REST Web API to retrieve RDF data from data sources in any native format. A TPF server accepts URLs that represent triple patterns [Subject, Predicate, Object], where any of these three elements may be constant or variable, and returns RDF triples from its data source that match those patterns. Such patterns can be used to obtain entire datasets, slices through datasets, or individual data points even down to a single triple (essentially a single cell in a spreadsheet table <ns0:ref type='table'>)</ns0:ref>. Instead of relying on a standardized contract between servers and clients, a TPF interface is self-describing such that automated clients can discover the interface and its data. We will now describe in detail how we have applied key features of these technologies, in combination, to provide a novel data discoverability architecture. 
We will later demonstrate that this combination of technologies also enables both metadata and data-level interoperability even between opaque objects such as flat-files, allowing the data within these objects to be queried in parallel with other data on the Semantic Web.</ns0:p></ns0:div>
<ns0:div><ns0:head>Metadata Interoperability -The 'FAIR Accessor' and the Linked Data Platform</ns0:head><ns0:p>The Linked Data Platform 'defines a set of rules for HTTP operations on Web resources… to provide an architecture for read-write Linked Data on the Web' (https://www.w3.org/TR/ldp/). All entities and concepts are identified by URLs, with machine-readable metadata describing the function or purpose of each URL and the nature of the resource that will be returned when that URL is resolved. Within the LDP specification is the concept of an LDP Container. A basic implementation of LDP containers involves two 'kinds' of resources, as diagrammed in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. The first type of resource represents the container -a metadata document that describes the shared features of a collection of resources, and (optionally) the membership of that collection. This is analogous to, for example, a metadata document describing a data repository, where the repository itself has features (ownership, curation policy, etc.) that are independent from the individual data records within that repository (i.e. the members of the collection). The second type of resource describes a member of the contained collection and (optionally) provides ways to access the record itself. Our implementation, which we refer to as the 'FAIR Accessor', utilizes the container concept described by the LDP, however, it does not require a full implementation of LDP, as we only require read functionality. In addition, other requirements of LDP would have added complexity without notable benefit. Our implementation, therefore, has two resource types based on the LDP Container described above, with the following specific features: Container resource: This is a composite research object (of any kind -repository, repositoryrecord, database, dataset, data-slice, workflow, etc.). Its representation could include scope or knowledge-domain covered, authorship/ownership of the object, latest update, version number, curation policy, and so forth. This metadata may or may not include URLs representing MetaRecord resources (described below) that comprise the individual elements within the composite object. Notably, the Container URL provides a resolvable identifier independent from the identifier of the dataset being described; in fact, the dataset may not have an identifier, as would be the case, for example, where the container represents a dynamically-generated dataslice. In addition, Containers may be published by anyone -that is, the publisher of a Container may be independent from the publisher of the research object it is describing. This enables one of the objectives of our interoperability layer implementation -that anyone can publish metadata about any research object, thus making those objects more FAIR. MetaRecord resource: This is a specific element within a collection (data point, record, study, service, etc.). Its representation should include information regarding licensing and accessibility, access protocols, rich citation information, and other descriptive metadata. It also includes a reference to the container(s) of which it is a member (the Container URL). Finally, the MetaRecord may include further URLs that provide direct access to the data itself, with an explicit reference to the associated data format by its MIME type (e.g. text/html, application/json, application/vnd.ms-excel, text/csv, etc.). 
This is achieved using constructs from the Data Catalogue Vocabulary (DCAT; W3C, 2014), which defines the concept of a data 'Distribution', which includes metadata facets such as the data source URL and its format. The lower part of In summary, the FAIR Accessor shares commonalities with the Linked Data Platform, but additionally recommends the inclusion of rich contextual metadata, based on the FAIR Principles, that facilitate discovery and interoperability of repository and record-level information. The FAIR Accessor is read-only, utilizing only HTTP GET together with widely-used semantic frameworks to guide both human and machine exploration. Importantly, the lack of a novel API means that the information is accessible to generic Web-crawling agents, and may also be processed if that agent 'understands' the vocabularies used. Thus, in simplistic terms, the Accessor can be envisioned as a series of Web pages, each containing metadata, and hyperlinks to more detailed metadata and/or data, where the metadata elements and relationships between the pages are explicitly explained to Web crawlers.</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table'>2016:10:13730:1:0:NEW 23 Feb 2017)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>To help clarify this component prior to presenting the more complex components of our interoperability proposal, we will now explore our first use case -data self-archival. A simple FAIR Accessor has been published online <ns0:ref type='bibr' target='#b10'>(Rodriguez Iglesias et al., 2016)</ns0:ref> in the Zenodo general-purpose repository. The data self-archival in this citation represents a scenario similar to the polyadenylation use-case described in the Introduction section. In this case, the data describes the evolutionary conservation of components of the RNA Metabolism pathway in fungi as a series of heatmap images. The data deposit, includes a file 'RNAME_Accessor.rdf' which acts as the Container Resource. This document includes metadata about the deposit (authorship, topic, etc.), together with a series of 'contains' relationships, referring to MetaRecords inside of the file 'RNAME_Accessor_Metarecords.rdf'. Each MetaRecord is about one of the heatmaps, and in addition to metadata about the image, includes a link to the associated image (datatype image/png) and a link to an RDF representation of the same information represented by that image (datatype application/rdf+xml). It should be noted that much of the content of those Accessor files was created using a text editor, based on template RDF documents. The structure of these two documents are described in more detail in the Results section, which includes a full walk-through of a more complex exemplar Accessor. At the metadata level, therefore, this portion of the interoperability architecture provides a high degree of FAIRness by allowing machines to discover and interpret useful metadata, and link it with the associated data deposits, even in the case of a repository that provides no FAIRsupport. Nevertheless, these components do not significantly enhance the FAIRness and interoperability of the data itself, which was a key goal for this project. We will now describe the application of two recently-published Web technologies -Triple Pattern Fragments and RML -to the problem of data-level interoperability. We will show that these two technologies can be combined to provide an API-free common interface that may be used to serve, in a machinereadable way, FAIR data transformations (either from non-FAIR data, or transformations of FAIR data into novel ontological frameworks). We will also demonstrate how this FAIR data republishing layer can be integrated into the FAIR Accessor to provide a machine-traversable path for incremental drill-down from high-level repository metadata all the way through to individual data points within a record, and back.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data Interoperability: Discovery of compatible data through RML-based FAIR Profiles</ns0:head><ns0:p>In our approach to data-level interoperability, we first identified a number of desiderata that the solution should exhibit:</ns0:p><ns0:p>1. View-harmonization over dissimilar datatypes, allowing discovery of potentially integrable data within non-integrable formats. 2. Support for a multitude of source data formats (XML, Excel, CSV, JSON, binary, etc.) 3. 'Cell-level' discovery and interoperability (referring to a 'cell' in a spreadsheet) 4. Modularity, such that a user can make interoperable only the data component ofinterest to them Manuscript to be reviewed</ns0:p><ns0:p>Computer Science 5. Reusability, avoiding 'one-solution-per-record' and minimizing effort/waste 6. Must use standard technologies, and reuse existing vocabularies 7. Should not require the participation of the data host (for public data) The approach we selected was based on the premise that data, in any format, could be metamodelled as a first step towards interoperability; i.e., the salient data-types and relationships within an opaque data 'blob' could be described in a machine-readable manner. The metamodels of two data sources could then be compared to determine if their contained data was, in principle, integrable. We referred to these metamodels as 'FAIR Profiles', and we further noted that we should support multiple metamodels of the same data, differing in structure or ontological/semantic framework, within a FAIR Profile. For example, a data record containing blood pressure information might have a FAIR Profile where this facet is modelled using both the SNOMED vocabulary and the ICD10 vocabulary, since the data facet can be understood using either. We acknowledge that these meta-modelling concepts are not novel, and have been suggested by a variety of other projects such as DCAT and Dublin Core (the DC Application Profile <ns0:ref type='bibr'>(Heery & Patel, 2000)</ns0:ref>, and have been extensively described by the ISO 11179 standard for 'metadata registries'. It was then necessary to select a modelling framework for FAIR Profiles capable of representing arbitrary, and possibly redundant, semantic models. Our investigation into relevant existing technologies and implementations revealed a relatively new, unofficial specification for a generic mapping language called 'RDF Mapping Language' (RML <ns0:ref type='bibr'>(Dimou et al., 2014)</ns0:ref>). RML is an extension of R2RML (Das, Sundara & Cyganiak, 27 September, 2012), a W3C Recommendation for mapping relational databases to RDF, and is described as 'a uniform mapping formalization for data in different format, which <ns0:ref type='bibr'>[enables]</ns0:ref> reuse and exchange between tools and applied data' <ns0:ref type='bibr'>(Dimou et al., 2014</ns0:ref>). An RML map describes the triple structure (subject, predicate, object, abbreviated as [S,P,O]), the semantic types of the subject and object, and their constituent URI structures, that would result from a transformation of non-RDF data (of any kind) into RDF data. RML maps are modular documents where each component describes the schema for a single-resource-centric graph (i.e. a graph with all triples that share the same subject). The 'object' position in each of these map modules may be mapped to a literal, or may be mapped to another RML module, thus allowing linkages between maps in much the same way that the object of an RDF triple may become the subject of another triple. 
RML modules therefore may then be assembled into a complete map representing both the structure and the semantics of an RDF representation of a data source. RML maps themselves take the form of RDF documents, and can be published on the Web, discovered, and reused, via standard Web technologies and protocols. RML therefore fulfils each of the desiderata for FAIR Profiles, and as such, we selected this technology as the candidate for their implementation. Comparing with related technologies, this portion of our interoperability prototype serves a similar purpose to the XML Schema (XSD; Fallside & Walmsley, 2004) definitions within the output component of a Web Services Description Language (WSDL) document, but unlike XSD, is capable of describing the structure and semantics of RDF graphs.</ns0:p><ns0:p>Of particular interest to us was the modularity of RML -its ability to model individual triples. This speaks directly to our desiderata 4, where we do not wish to require (and should not expect) a modeller to invest the time and effort required to fully model every facet of a potentially very complex dataset. Far more often, individuals will have an interest in only one or a few facets of a dataset. As such, we chose to utilize RML models at their highest level of granularity -that is, we require a distinct RML model for each triple pattern (subject+type, predicate, object+type) of interest. We call these small RML models 'Triple Descriptors'. An exemplar Triple Descriptor is diagrammed in Figure <ns0:ref type='figure'>2</ns0:ref>. There may be many Triple Descriptors associated with a single data resource. Moreover, multiple Triple Descriptors may model the same facet within that data resource, using different URI structures, subject/object semantic types, or predicates, thus acting as different 'views' of that data facet. Finally, then, the aggregation of all Triple Descriptors associated with a specific data resource produces a FAIR Profile of that data. Note that FAIR Profiles are not necessarily comprehensive; however, by aggregating the efforts of all modellers, FAIR Profiles model only the data facets that are most important to the community. FAIR Profiles enable view harmonization over compatible but structurally non-integrable data, possibly in distinct repositories. The Profiles of one data resource can be compared to the Profiles of another data resource to identify commonalities between their Triple Descriptors at the semantic level, even if the underlying data is semantically opaque and/or structurally distinct -a key step toward Interoperability. FAIR Profiles, therefore, have utility, independent of any actuated transformation of the underlying data, in that they facilitate compatible data discovery. Moreover, with respect to desiderata 5, Triple Descriptors, and sometimes entire FAIR Profiles, are RDF documents published on the Web, and therefore may be reused to describe new data resources, anywhere on the Web, that contain similar data elements, regardless of the native representation of that new resource, further simplifying the goal of data harmonization.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref>: Diagram of the structure of an exemplar Triple Descriptor representing a hypothetical record of a SNP in a patient's genome. In this descriptor, the Subject will have the URL structure http://example.org/patient/{id}, and the Subject is of type PatientRecord. 
The Predicate is hasVariant, and the Object will have URL structure http://identifiers.org/dbsnp/{snp} with the rdf:type from the sequence ontology '0000694' (which is the concept of a 'SNP'). The two nodes shaded green are of the same ontological type, showing the iterative nature of RML, and how individual RML Triple Descriptors will be concatenated into full FAIR Profiles. The three nodes shaded yellow are the nodes that define the subject type, predicate and object type of the triple being described.</ns0:p></ns0:div>
<ns0:div><ns0:head>Data Interoperability: Data transformation with FAIR Projectors and Triple Pattern Fragments</ns0:head><ns0:p>The ability to identify potentially integrable data within opaque file formats is, itself, a notable achievement compared to the status quo. Nevertheless, beyond just discovery of relevant data, our interoperability layer aims to support and facilitate cross-resource data integration and query answering. This requires that the data is not only semantically described, but is also semantically and syntactically transformed into a common structure.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Having just presented a mechanism to describe the structure and semantics of data -Triple Descriptors in RML -what remains lacking is a way to retrieve data consistent with those Triple Descriptors. We require a means to expose transformed data without worsening the existing critical barrier to interoperability -opaque, non-machine-readable interfaces and API proliferation <ns0:ref type='bibr' target='#b21'>(Verborgh & Dumontier, 2016)</ns0:ref>. What is required is a universally-applicable way of retrieving data generated by a (user-defined) data extraction or transformation process, that does not result in yet another API. The Triple Pattern Fragments (TPF) specification <ns0:ref type='bibr'>(Verborgh et al., 2016)</ns0:ref> defines a REST interface for publishing triples. The server receives HTTP GET calls on URLs that contain a triple pattern [S,P,O], where any component of that pattern is either a constant or a variable. In response, a TPF server returns pages with all triples from its data source that match the incoming pattern. As such, any given triple pattern has a distinct URL. We propose, therefore, to combine three elements -data transformed into RDF, which is described by Triple Descriptors, and served via TPF-compliant URLs. We call this combination of technologies a 'FAIR Projector'. A FAIR Projector, therefore, is a Web resource (i.e., something identified by a URL) that is associated with both a particular data source, and a particular Triple Descriptor. Calling HTTP GET on the URL of the FAIR Projector produces RDF triples from the data source that match the format defined by that Projector's Triple Descriptor. The originating data source behind a Projector may be a database, a data transformation script, an analytical web service, another FAIR Projector, or any other static or dynamic data-source. Note that we do not include a transformation methodology in this proposal; however, we address this issue and provide suggestions in the Discussion section. There may, of course, be multiple projectors associated with any given data source, serving a variety of triples representing different facets of that data.</ns0:p></ns0:div>
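<ns0:p>To illustrate the retrieval pattern just described, the following JavaScript sketch shows a client asking a hypothetical TPF-compliant Projector for every triple about a single subject; the endpoint URL and the subject/predicate/object query-parameter names are illustrative assumptions in the style of typical Triple Pattern Fragments interfaces, not an actual FAIR Projector.</ns0:p><ns0:p>// Sketch: fetch all triples matching a [S, ?, ?] pattern from a hypothetical Projector endpoint
var pattern = { subject: 'http://example.org/patient/12345', predicate: '', object: '' };
var query = new URLSearchParams(pattern).toString();
fetch('http://example.org/projector/fragments?' + query, { headers: { Accept: 'text/turtle' } })
  .then(function (response) { return response.text(); })
  .then(function (turtle) { console.log(turtle); });</ns0:p>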
<ns0:div><ns0:head>Linking the Components: FAIR Projectors and the FAIR Accessor</ns0:head><ns0:p>At this point, we have a means for requesting triples with a particular structure -TPF Servers -and we have a means of describing the structure and semantics of those triples -Triple Descriptors. Together with a source of RDF data, these define a FAIR Projector. However, we still lack a formal mechanism for linking TPF-compliant URLs with their associated Triple Descriptors, such that the discovery of a Triple Descriptor with the desired semantics for a particular data resource, also provides its associated Projector URL. We propose that this association can be accomplished, without defining any novel API or standard, if the output of a FAIR Projector is considered a DCAT Distribution of a particular data source, and included within the MetaRecord of a FAIR Accessor. The URL of the Projector, and its Triple Descriptor, become metadata facets of a new dcat:Distribution element in the MetaRecord. This is diagrammed in Figure <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>Thus, all components of this interoperability system -from the top level repository metadata, to the individual data cell -are now associated with one another in a manner that allows mechanized data discovery, harmonization, and retrieval, including relevant citation information. No novel technology or API was required, thus allowing this rich combination of data and metadata to be explored using existing Web tools and crawlers.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>In the previous section, we provided the URL to a simple exemplar FAIR Accessor published on Zenodo. To demonstrate the interoperability system in its entirety -including both the Accessor and the Projector components -we will now proceed through a second exemplar involving the special-purpose repository for protein sequence information, UniProt. In this example, we examine a FAIR Accessor to a dataset, created through a database query, that consists of a specific 'slice' of the Protein records within the UniProt database -that is, the set of proteins in Aspergillus nidulans FGSC A4 (NCBI Taxonomy ID 227321) that are annotated as being involved in mRNA Processing (Gene Ontology Accession GO:0006397). We first demonstrate the functionality of the two layers of the FAIR Accessor in detail. We then demonstrate a FAIR Projector, and show how its metadata integrates into the FAIR Accessor. In this example, the Projector modifies the ontological framework of the UniProt data such that the ontological terms used by UniProt are replaced by the terms specified in EDAM -an ontology of bioinformatics operations, datatypes, and formats <ns0:ref type='bibr'>(Ison et al., 2013)</ns0:ref>. We will demonstrate that this transformation is specified, in a machine-readable way, by the FAIR Triple Descriptor that accompanies each Projector's metadata.</ns0:p></ns0:div>
<ns0:div><ns0:head>The two-step FAIR Accessor</ns0:head><ns0:p>The example FAIR Accessor accesses a database of RDF hosted by UniProt, and issues the following query over that database (expressed in the standard RDF query language SPARQL):</ns0:p><ns0:p>Of particular note are the following metadata elements:
http://purl.org/dc/elements/1.1/license https://creativecommons.org/licenses/by-nd/4.0/
• License information is provided as an HTML + RDFa document, following one of the primary standard license forms published by Creative Commons. This allows the license to be unambiguously interpreted by both machines and people prior to accessing any data elements, an important feature that will be discussed later.</ns0:p><ns0:p>The Container metadata also provides the list of record URLs (a discussion of machine-actionable pagination will not be included here). These URLs are the MetaRecord Resource URLs shown in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. Following the flow in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>, the next step in the FAIR Accessor is to resolve a MetaRecord Resource URL. For clarity, we will first show the metadata document that is returned if there are no FAIR Projectors for that dataset. This will be similar to the document returned by calling a FAIR MetaRecord URL in the Zenodo use case discussed in the earlier Methods section. Calling HTTP GET on a MetaRecord Resource URL returns a document that includes the metadata elements and structure shown in Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>. Note that Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref> is not the complete MetaRecord; rather, it has been edited to include only those elements relevant to the aspects of the interoperability infrastructure that have been discussed so far. More complete examples of the MetaRecord RDF, including the elements describing a Projector, are described in Figures <ns0:ref type='figure' target='#fig_9'>7, 8, and 9</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>Many properties in this metadata document are similar to those at the higher level of the FAIR Accessor. Notably, however, the primary topic of this document is the UniProt record, indicating a shift in the focus of the document from the provider of the Accessor to the provider of the originating data. Therefore, the values of these facets now reflect the authorship and contact information for that record. We do recognize that MetaRecords are themselves scholarly works and should be properly cited. The MetaRecord includes the 'in dataset' predicate, which refers back to the first level of the FAIR Accessor; this provides one avenue for capturing the provenance information for the MetaRecord. If additional provenance detail is required, we propose (but do not describe further here) that this information could be contained in a separate named graph, in a manner akin to that used by <ns0:ref type='bibr'>NanoPublications (Kuhn et al., 2016)</ns0:ref>. The important distinctive property in this document is the 'distribution' property, from the DCAT ontology. For clarity, an abbreviated document in Turtle format is shown in Figure <ns0:ref type='figure' target='#fig_8'>6</ns0:ref>, containing only the 'distribution' elements and their values. @prefix dc: <http://purl.org/dc/elements/1.1/>. @prefix dcat: <http://www.w3.org/ns/dcat#>. @prefix Uni: <http://linkeddata.systems/Accessors/UniProtAccessor/>.</ns0:p><ns0:formula xml:id='formula_0'>Uni:C8V1L6 dcat:distribution <#DistributionD7566F52-C143-11E6-897C-26245D07C3DD>, <#DistributionD75682F8-C143-11E6-897C-26245D07C3DD> .</ns0:formula></ns0:div>
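For readers working from the text alone, the shape of the two Distribution blocks summarized in Figure 6 (and described in the next paragraph) is sketched below. The structure follows the description in the text; the two download URLs are shown only as assumed UniProt-style values and are not taken verbatim from the figure.

@prefix dc:   <http://purl.org/dc/elements/1.1/> .
@prefix dcat: <http://www.w3.org/ns/dcat#> .
@prefix Uni:  <http://linkeddata.systems/Accessors/UniProtAccessor/> .

Uni:C8V1L6 dcat:distribution
    <#DistributionD7566F52-C143-11E6-897C-26245D07C3DD>,
    <#DistributionD75682F8-C143-11E6-897C-26245D07C3DD> .

<#DistributionD7566F52-C143-11E6-897C-26245D07C3DD>
    a dcat:Distribution ;
    dc:format "application/rdf+xml" ;
    # assumed location of the machine-readable representation of the record
    dcat:downloadURL <http://www.uniprot.org/uniprot/C8V1L6.rdf> .

<#DistributionD75682F8-C143-11E6-897C-26245D07C3DD>
    a dcat:Distribution ;
    dc:format "text/html" ;
    # assumed location of the human-readable representation of the record
    dcat:downloadURL <http://www.uniprot.org/uniprot/C8V1L6.html> .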
<ns0:div><ns0:p>There are two DCAT Distributions in this document. The first is described as being in format 'application/rdf+xml', with its associated download URL. The second is described as being in format 'text/html', again with the correct URL for that representation. Both are typed as Distributions from the DCAT ontology. These distributions are published by UniProt themselves, and the UniProt URLs are used. Additional metadata in the FAIR Accessor (not shown in Figure <ns0:ref type='figure' target='#fig_8'>6</ns0:ref>) describes the keywords that relate to that record in both machine- and human-readable formats, access policy, and license, allowing machines to more accurately determine the utility of this record prior to retrieving it. Several things are important to note before moving to a discussion of FAIR Projectors. First, the two levels of the FAIR Accessor are not interdependent. The Container layer can describe relevant information about the scope and nature of a repository, but might not provide any further links to MetaRecords. Similarly, whether or not to provide a distribution within a MetaRecord is entirely at the discretion of the data owner. For sensitive data, an owner may choose to simply provide (even limited) metadata, but not provide any direct link to the data itself, and this is perfectly conformant with the FAIR guidelines. Further, when publishing a single data record, it is not obligatory to publish the Container level of the FAIR Accessor; one could simply provide the MetaRecord document describing that data file, together with an optional link to that file as a Distribution. Finally, it is also possible to publish containers of containers, to any depth, if such is required to describe a multi-resource scenario (e.g. an institution hosting multiple distinct databases).</ns0:p></ns0:div><ns0:div><ns0:head>The FAIR Projector</ns0:head><ns0:p>FAIR Projectors can be used for many purposes, including (but not limited to) publishing transformed Linked Data from non-Linked Data; publishing transformed data from a Linked Data source into a distinct structure or ontological framework; load-management/query-management; or as a means to explicitly describe the ontological structure of an underlying data source in a searchable manner. In this demonstration, the FAIR Projector publishes dynamically transformed data, where the transformation involves altering the semantics of RDF provided by UniProt into a different ontological framework (EDAM). This FAIR Projector's TPF interface is available at: http://linkeddata.systems:3001/fragments Data exposed as a TPF-compliant Resource require a subject and/or predicate and/or object value to be specified in the URL; a request for the all-variable pattern (blank, as above) will return nothing. How can a software agent know what URLs are valid, and what will be returned from such a request? In this interoperability infrastructure, we propose that Projectors should be considered as DCAT Distributions, and thus TPF URLs, with appropriate parameters bound, are included in the distribution section of the MetaRecord metadata. An example is shown in Figure <ns0:ref type='figure'>7</ns0:ref>, again rendered using Tabulator.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>7</ns0:ref>. A portion of the output from resolving the MetaRecord Resource of the FAIR Accessor for record C8UZX9, rendered into HTML by the Tabulator Firefox plugin. The columns have the same meaning as in Figure <ns0:ref type='figure' target='#fig_6'>4</ns0:ref>.
Comparing the structure of this document to that in Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref> shows that there are now four values for the 'distribution' predicate: an RDF and an HTML representation, as in Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>, and two additional distributions with URLs conforming to the TPF design pattern (highlighted).</ns0:p><ns0:p>Note that there are now four distributions -two of them are the html and rdf distributions discussed above (Figure <ns0:ref type='figure' target='#fig_7'>5</ns0:ref>). The two new distributions are those provided by a FAIR Projector.</ns0:p><ns0:p>Again, looking at an abbreviated and simplified Turtle document for clarity (Figure <ns0:ref type='figure' target='#fig_9'>8</ns0:ref>), we can see the metadata structure of one of these two new distributions.
@prefix dc: <http://purl.org/dc/elements/1.1/>.
@prefix dcat: <http://www.w3.org/ns/dcat#>.
@prefix rr: <http://www.w3.org/ns/r2rml#>.
@prefix ql: <http://semweb.mmlab.be/ns/ql#>.
@prefix rml: <http://semweb.mmlab.be/ns/rml#>.
@prefix void: <http://rdfs.org/ns/void#>.
@prefix FAI: <http://datafairport.org/ontology/FAIR-schema.owl#>.
@prefix core: <http://purl.uniprot.org/core/>.
@prefix edam: <http://edamontology.org/>.
@prefix Uni: <http://linkeddata.systems/Accessors/UniProtAccessor/>.</ns0:p><ns0:formula xml:id='formula_1'>Uni:C8V1L6 dcat:distribution <#Distribution9EFD1238-C1F6-11E6-8812-3E445D07C3DD> .

<#Distribution9EFD1238-C1F6-11E6-8812-3E445D07C3DD>
    a FAI:Projector, dc:Dataset, void:Dataset, dcat:Distribution;
    dc:format 'application/rdf+xml', 'application/x-turtle', 'text/html';
    rml:hasMapping <#Mappings9EFD1238-C1F6-11E6-8812-3E445D07C3DD>;
    dcat:downloadURL <http://linkeddata.systems:3001/fragments?subject=http%3A%2F%2Fidentifiers%2Eorg%2Funiprot%2FC8V1L6&predicate=http%3A%2F%2Fpurl%2Euniprot%2Eorg%2Fcore%2FclassifiedWith>.

<#Mappings9EFD1238-C1F6-11E6-8812-3E445D07C3DD>
    rr:subjectMap <#SubjectMap9EFD1238-C1F6-11E6-8812-3E445D07C3DD>;
    rr:predicateObjectMap <#POMap9EFD1238-C1F6-11E6-8812-3E445D07C3DD>.

<#SubjectMap9EFD1238-C1F6-11E6-8812-3E445D07C3DD>
    rr:class edam:data_0896;
    rr:template 'http://identifiers.org/uniprot/{ID}'.

<#POMap9EFD1238-C1F6-11E6-8812-3E445D07C3DD>
    rr:objectMap <#ObjectMap9EFD1238-C1F6-11E6-8812-3E445D07C3DD>;
    rr:predicate core:classifiedWith.</ns0:formula><ns0:formula xml:id='formula_2'><#ObjectMap9EFD1238-C1F6-11E6-8812-3E445D07C3DD>
    rr:parentTriplesMap <#SubjectMap29EFD1238-C1F6-11E6-8812-3E445D07C3DD>.

<#SubjectMap29EFD1238-C1F6-11E6-8812-3E445D07C3DD>
    rr:class edam:data_1176;
    rr:template 'http://identifiers.org/taxon/{TAX}'.</ns0:formula></ns0:div>
<ns0:div><ns0:p>The three media types listed for this Distribution indicate that the URL will respond to HTTP Content Negotiation, and may return any of those three formats.</ns0:p><ns0:p>Following the Triple Pattern Fragments behaviour, requesting the downloadURL with HTTP GET will trigger the Projector to restrict its output to only those data from UniProt where the subject is UniProt record C8V1L6, and the property of interest is 'classifiedWith' from the UniProt Core ontology. The triples returned in response to this call, however, will not match the native semantics of UniProt, but rather will match the semantics and structure defined in the RML Mappings block. The schematic structure of this Mapping RML is diagrammed in Figure <ns0:ref type='figure'>2</ns0:ref>.</ns0:p><ns0:p>The Mappings block describes a Triple where the subject will be of type edam:data_0896 ('Protein record'), the predicate will be 'classifiedWith' from the UniProt Core ontology, and the object will be of type edam:data_1176 ('GO Concept ID'). Specifically, the triples returned are: @prefix uni: <http://identifiers.org/uniprot/>. @prefix obo: <http://purl.obolibrary.org/obo/>. uni:C8V1L6 core:classifiedWith obo:GO_0000245, obo:GO_0045292 . This is accompanied by a block of hypermedia controls (not shown) using the Hydra vocabulary (Lanthaler & Gütl; Das, Sundara & Cyganiak, 27 September, 2012) that provides machine-readable instructions for how to navigate the remainder of that dataset -for example, how to get the entire row, or the entire column, for the current data-point. Though the subject and object are not explicitly typed in the output from this call to the Projector, further exploration of the Projector's output, via the TPF hypermedia controls, would reveal that the Subject and Object are in fact typed according to the EDAM ontology, as declared in the RML Mapping. Thus, this FAIR Projector served data transformed from UniProt Core semantic types to the equivalent data represented within the EDAM semantic framework, as shown in Figure <ns0:ref type='figure'>9</ns0:ref>. Also note that the URI structure for the UniProt entity has been changed from the UniProt URI scheme to a URI following the Identifiers.org scheme. The FAIR Projector, in this case, is a script that dynamically transforms data from a query of UniProt into the appropriately formatted triples; however, this is opaque to the client. The Projector's TPF interface, from the perspective of the client, would be identical if the Projector were serving pre-transformed data from a static document, or even generating novel data from an analytical service. Thus, FAIR Projectors harmonize the interface for retrieving RDF data in a desired semantics/structure, regardless of the underlying mechanism for generating that data. This example was chosen for a number of reasons: first, to contrast with the static Zenodo example provided earlier, since this Accessor/Projector combination queries the UniProt database dynamically; and second, because we wished to demonstrate the utility of the Projector's ability to transform the semantic framework of existing FAIR data in a discoverable way. For example, in UniProt, Gene Ontology terms do not have a richer semantic classification than 'owl:Class'.
With respect to interoperability, this is problematic, as the lack of rich semantic typing prevents them from being used for automated discovery of resources that could potentially consume them, or use them for integrative, cross-domain queries. This FAIR Accessor/Projector advertises that it is possible to obtain EDAM-classified data, from UniProt, simply by resolving the Projector URL. Figure <ns0:ref type='figure'>9</ns0:ref>: Data before and after FAIR Projection. Bolded segments show how the URI structure and the semantics of the data were modified, according to the mapping defined in the Triple Descriptor (data_0896 = 'Protein report' and data_1176 = 'GO Concept ID'). URI structure transformations may be useful for integrative queries against datasets that utilize the Identifiers.org URI scheme such as OpenLifeData <ns0:ref type='bibr'>(González et al., 2014)</ns0:ref>. Semantic transformations allow integrative queries across datasets that utilize diverse and redundant ontologies for describing their data, and in this example, may also be used to add semantics where there were none before.</ns0:p></ns0:div>
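The net effect of the projection can be captured in a small before/after sketch. The 'after' triple repeats the Projector output shown above; the 'before' triple is written in the generic shape of UniProt's native RDF (purl.uniprot.org subject URI, GO term carrying no EDAM typing) and is an assumption about that representation rather than a verbatim excerpt:

# Before projection: UniProt URI scheme, no EDAM semantics (assumed shape)
@prefix core: <http://purl.uniprot.org/core/> .
<http://purl.uniprot.org/uniprot/C8V1L6>
    core:classifiedWith <http://purl.obolibrary.org/obo/GO_0000245> .

# After projection: Identifiers.org URI scheme; subject and object are typed,
# via the Triple Descriptor, as edam:data_0896 and edam:data_1176 respectively
@prefix uni: <http://identifiers.org/uniprot/> .
@prefix obo: <http://purl.obolibrary.org/obo/> .
uni:C8V1L6 core:classifiedWith obo:GO_0000245 .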
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Interoperability is hard. It was immediately evident that, of the four FAIR principles, Interoperability was going to be the most challenging. Here we have designed a novel infrastructure with the primary objective of interoperability for both metadata and data, but with an eye to all four of the FAIR Principles. We wished to provide discoverable and interoperable access to a wide range of underlying data sources -even those in computationally opaque formats -as well as supporting a wide array of both academic and commercial end-user applications above these data sources. In addition, we imposed constraints on our selection of technologies; in particular, that the implementation should re-use existing technologies as much as possible, and should support multiple and unpredictable end-uses. Moreover, it was accepted from the outset that the trade-off between simplicity and power was one that could not be avoided, since a key objective was to maximize uptake over the broadest range of data repositories, spanning all domains -this would be nearly impossible to achieve through, for example, attempting to impose a 'universal' API or novel query language. Thus, with the goal of maximizing global uptake and adoption of this interoperability infrastructure, and democratizing the cost of implementation over the entire stakeholder community -both users and providers -we opted for lightweight, weakly integrative REST solutions that nevertheless lend themselves to significant degrees of mechanization in both discovery and integration. We now look more closely at how this interoperability infrastructure meets the expectations within the FAIR Principles.</ns0:p></ns0:div>
<ns0:div><ns0:head>FAIR facet(s) addressed by the Container Resource:</ns0:head><ns0:p>• Findable -The container has a distinct globally unique and resolvable identifier, allowing it to be discovered and explicitly, unambiguously cited. This is important because, in many cases, the dataset being described does not natively possess an identifier, as in our example above where the dataset represented the results of a query. In addition, the container's metadata describes the research object, allowing humans and machines to evaluate the potential utility of that object for their task.
• Accessible -the Container URL resolves to a metadata record using standard HTTP GET. In addition to describing the nature of the research object, the metadata record should include information regarding licensing, access restrictions, and/or the access protocol for the research object. Importantly, the container metadata exists independently of the research object it describes, where FAIR Accessibility requires metadata to be persistently available even if the data itself is not.
• Interoperable -The metadata is provided in RDF -a globally-applicable syntax for data and knowledge sharing. In addition, the metadata uses shared, widely-adopted public ontologies and vocabularies to facilitate interoperability at the metadata level.
• Reusable -the metadata includes citation information related to the authorship of the container and/or its contents, and license information related to the reuse of the data, by whom, and for what purpose.</ns0:p></ns0:div>
<ns0:div><ns0:head>Other features of the Container Resource</ns0:head><ns0:p>• Privacy protection -The container metadata provides access to a rich description of the content of a resource, without exposing any data within that resource. While a provider may choose to include MetaRecord URLs within this container, they are not required to do so if, for example, the data is highly sensitive, or no longer easily accessible; however, the contact information provided within the container allows potential users of that data to inquire as to the possibility of gaining access in some other way. As such, this container facilitates a high degree of FAIRness, while still providing a high degree of privacy protection.</ns0:p></ns0:div><ns0:div><ns0:head>FAIR Facet(s) Addressed by the MetaRecord:</ns0:head><ns0:p>• Findable -The MetaRecord URL is a globally-unique and resolvable identifier for a data entity, regardless of whether or not it natively possesses an identifier. The metadata it resolves to allows both humans and machines to interrogate the nature of a data element before deciding to access it.
• Accessible -the metadata provided by accessing the MetaRecord URL describes the accessibility protocol and license information for that record, and describes all available formats.
• Interoperable -as with the Container metadata, the use of shared ontologies and RDF ensures that the metadata is interoperable.
• Reusable -the MetaRecord metadata should carry record-level citation information to ensure proper attribution if the data is used. We further propose, but do not demonstrate, that authorship of the MetaRecord itself could be carried in a second named-graph, in a manner similar to that proposed by the NanoPublication specification.</ns0:p></ns0:div>
<ns0:div><ns0:head>Other features of the MetaRecord</ns0:head><ns0:p>• Privacy protection -the MetaRecord provides for rich descriptive information about a specific member of a collection, where the granularity of that description is entirely under the control of the data owner. As such, the MetaRecord can provide a high degree of FAIRness at the level of an individual record, without necessarily exposing any identifiable information. In addition, the provider may choose to stop at this level of FAIRness, and not include further URLs giving access to the data itself.
• Symmetry of traversal -Since we predict that clients will, in the future, query over indexes of FAIR metadata searching for datasets or records of interest, it is not possible to predict the position at which a client or their agent will enter the FAIR Accessor. While the container metadata provides links to individual MetaRecords, the MetaRecord similarly provides a reference back 'upwards' to its container. Thus a client can access repository-level metadata (e.g. curation policy, ownership, linking policy) for any given data element it discovers. This became particularly relevant as a result of the European Court of Justice decision (Court of Justice, 2016) that puts the burden of proof on those who create hyperlinks to ensure the document they link to is not, itself, in violation of copyright.
• High granularity of access control -individual elements of a collection may have distinct access constraints or licenses. For example, individual patients within a study may have provided different consent. MetaRecords allow each element within a collection to possess, and publish, its own access policy, access protocol, license, and/or usage-constraints, thus providing fine-grained control of the access/use of individual elements within a repository.</ns0:p></ns0:div><ns0:div><ns0:head>FAIR Facet(s) Addressed by the Triple Descriptors and FAIR Projectors:</ns0:head><ns0:p>• Findable -Triple Descriptors, in isolation or when aggregated into FAIR Profiles, provide one or more semantic interpretations of data elements. By indexing these descriptors, it would become possible to search over datasets for those that contain data-types of interest. Moreover, FAIR Projectors, as a result of the TPF URI structure, create a unique URL for every data-point within a record. This has striking consequences with respect to scholarly communication. For example, it becomes possible to unambiguously refer to, and therefore 'discuss' and/or annotate, individual spreadsheet cells from any data repository.
• Accessible -Using the TPF design patterns, all data retrieval is accomplished in exactly the same way -via HTTP GET. The response includes machine-readable instructions that guide further exploration of the data without the need to define an API. FAIR Projectors also give the data owner high-granularity access control; rather than publishing their entire dataset, they can select to publish only certain components of that dataset, and/or can put different access controls on different data elements, for example, down to the level of an individual spreadsheet cell.
• Interoperable -FAIR Projectors provide a standardized way to export any type of underlying data in a machine-readable structure, using widely used, public shared vocabularies. Data linkages that were initially implicit in the datastore, identifiers for example, become explicit when converted into URIs, resulting in qualified linkages between formerly opaque data deposits. Similarly, data that resides within computationally opaque structures or formats can also be exposed, and published in a FAIR manner, if there is an algorithm capable of extracting it and exposing it via the TPF interface.
• Reusable -All data points now possess unique identifiers, which allows them to be explicitly connected to their citation and license information (i.e. the MetaRecord). In this way, every data point, even when encountered in isolation, provides a path to trace-back to its reusability metadata.</ns0:p></ns0:div>
<ns0:div><ns0:head>Other features of FAIR Projection</ns0:head><ns0:p>• Native formats are preserved -As in many research domains, bioinformatics has created a large number of data/file formats. Many of these, especially those that hold 'big data', are specially formatted flat-files that focus on size-efficient representation of data, at the expense of general machine-accessibility. The analytical tooling that exists in this domain is capable of consuming these various formats. While the FAIR Data community has never advocated for wholesale Interoperable representations of these kinds of data -which would be inefficient, wasteful, and lacking in utility -the FAIR Projector provides a middle-ground. Projection allows software to query the core content of a file in a repository prior to downloading it; for example, to determine if it contains data about an entity or identifier of interest. FAIR Projectors, therefore, enable efficient discovery of data of interest, without requiring wasteful transformation of all data content into a FAIR format.
• Semantic conversion of existing Triplestores -It is customary to re-cast the semantic types of entities within triplestores using customized SPARQL BIND or CONSTRUCT clauses. FAIR Projectors provide a standardized, SPARQL-free, and discoverable way to accomplish the same task. This further harmonizes data, and simplifies interoperability.
• Standardized interface to (some) Web APIs -Many Web APIs in the biomedical domain have a single input parameter, generally representing an identifier for some biochemical entity. FAIR Projectors can easily replace these myriad Web APIs with a common TPF interface, thus dramatically enhancing discoverability, machine-readability, and interoperability between these currently widely disparate services.</ns0:p></ns0:div>
<ns0:div><ns0:head>Incentives and Barriers to Implementation</ns0:head><ns0:p>Looking forward, there is every indication that FAIRness will soon be a requirement of funding agencies and/or journals. As such, infrastructures such as the one described in this exemplar will almost certainly become a natural part of scholarly data publishing in the future. Though the FAIR infrastructure proposed here may appear difficult to achieve, we argue that a large portion of these behaviours -for example, the first two layers of the Accessor -can be accomplished using simple fill-in-the-blank templates. Such templating tools are, in fact, already being created by several of the co-authors, and will be tested on the biomedical data publishing community in the near future to ensure they are clear and usable by this key target-audience. Projection, however, is clearly a complex undertaking, and one that is unlikely to be accomplished by non-informaticians on their own. Transformation from unstructured or semi-structured formats into interoperable formats cannot be fully automated, and we do not claim to have fully solved the interoperability bottleneck. We do, however, claim to have created an infrastructure that improves on the status quo in two ways. First, we propose to replace the wasteful, one-off, 'reuseless' data transformation activities currently undertaken on a daily basis throughout the biomedical community (and beyond) with a common, reusable, and machine-readable approach, by suggesting that all data transformations should be described in RML and the transformed data exposed using TPF. Second, the solution we propose may, in many cases, partially automate the data transformation process itself. RML can be used, in combination with generic software such as the RML Processor (http://github.com/RMLio), to actuate a data transformation over many common file formats such as CSV or XML. As such, by focusing on building RML models, in lieu of reuseless data transformation scripts, data publishers achieve both the desired data transformation and a machine-readable interface that provides that transformed data to all other users. This may be incentivized even more by creating repositories of RML models that can be reused by those needing to do data transformations (a minimal sketch of such a model is given at the end of this section). Though the infrastructure for capturing these user-driven transformation events and formalizing them into FAIR Projectors does not yet exist, it does not appear on its surface to be a complex problem. Thus, we expect that such infrastructure should appear soon after FAIRness becomes a scholarly publishing requirement, and early prototypes of these infrastructures are being built by our co-authors. Several communities of data providers are already planning to use this, or related FAIR implementations, to assist their communities to find, access, and reuse their valuable data holdings. For example, the Biobanking and Rare Disease communities will be given end-user tools that utilize/generate such FAIR infrastructures to: guide discovery by researchers; help both biobankers and researchers to re-code their data to standard ontologies, building on the SORTA system <ns0:ref type='bibr' target='#b6'>(Pang et al., 2015)</ns0:ref>; assist to extend the MOLGENIS/BiobankConnect system <ns0:ref type='bibr'>(Pang et al., 2016)</ns0:ref>; and add FAIR interfaces to the BBMRI (Biobanking and BioMolecular resources Research Infrastructure) and RD-Connect national and European biobank data and sample catalogues.
There is also a core group of FAIR infrastructure authors who are creating large-scale indexing and discovery systems that will facilitate the automated identification and retrieval of relevant information, from any repository, in response to end-user queries, portending a day when currently unused -'lost' -data deposits once again provide return-on-investment through their discovery and reuse.</ns0:p></ns0:div>
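An RML model of the kind referred to above is, in essence, a Triple Descriptor attached to a logical data source. The sketch below is illustrative only and is not a model shipped with this manuscript: it assumes a hypothetical two-column CSV file (columns ID and GO) and maps each row into a triple of the same shape as the UniProt/EDAM example used earlier; a generic engine such as the RML Processor mentioned above could then execute the mapping, and the resulting triples could be served through a TPF interface.

@prefix rr:   <http://www.w3.org/ns/r2rml#> .
@prefix rml:  <http://semweb.mmlab.be/ns/rml#> .
@prefix ql:   <http://semweb.mmlab.be/ns/ql#> .
@prefix core: <http://purl.uniprot.org/core/> .
@prefix edam: <http://edamontology.org/> .

<#ProteinCSVMapping>
    # hypothetical input file with columns ID (UniProt accession) and GO (GO accession, e.g. GO_0006397)
    rml:logicalSource [
        rml:source "proteins.csv" ;
        rml:referenceFormulation ql:CSV
    ] ;
    # subject: an Identifiers.org protein URI, typed with edam:data_0896 as in the worked example above
    rr:subjectMap [
        rr:template "http://identifiers.org/uniprot/{ID}" ;
        rr:class edam:data_0896
    ] ;
    # predicate/object: link the protein to its GO classification
    rr:predicateObjectMap [
        rr:predicate core:classifiedWith ;
        rr:objectMap [ rr:template "http://purl.obolibrary.org/obo/{GO}" ]
    ] .

Publishing such a model alongside the transformed data is what allows a transformation, written once, to be discovered and re-executed by others.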
<ns0:div><ns0:head>Conclusions</ns0:head><ns0:p>There is a growing movement of governing bodies and funding organizations towards a requirement for open data publishing, following the FAIR Principles. It is, therefore, useful to have an exemplar 'reference implementation' that demonstrates the kinds of behaviours that are expected from FAIR resources. Of the four FAIR Principles, Interoperability is arguably the most difficult FAIR facet to achieve, and has been the topic of decades of informatics research. Several new standards and frameworks have appeared in recent months that addressed various aspects of the Interoperability problem. Here, we apply these in a novel combination, and show that the result is capable of providing interoperability between formerly incompatible data formats published anywhere on the Web. In addition, we note that the other three aspects of FAIR -Findability, Accessibility, and Reusability -are easily addressed by the resulting infrastructure. The outcome, therefore, provides machine-discoverable access to richly described data resources in any format, in any repository, with the possibility of interoperability of the contained data down to the level of an individual 'cell'. No new standards or APIs were required; rather, we rely on REST behaviour, with all entities being resources with a resolvable identifier that allow hypermedia-driven 'drill-down' from the level of a repository descriptor, all the way to an individual data point in the record. Such an interoperability layer may be created and published by anyone, for any data source, without necessitating an interaction with the data owner. Moreover, the majority of the interoperability layer we describe may be achieved through dynamically generated files from software, or even (for the Accessor portion) through static, manually-edited files deposited in any public repository. As such, knowledge of how to build or deploy Web infrastructure is not required to achieve a large portion of these FAIR behaviours. The trade-off between power and simplicity was considered acceptable, as a means to hopefully encourage wide adoption. The modularity of the solution was also important because, in a manner akin to crowdsourcing, we anticipate that the implementation will spread through the community on a needs-driven basis, with the most critical resource components being targeted early -the result of individual researchers requiring interoperable access to datasets/subsets of interest to them. The interoperability design patterns presented here provide a structured way for these individuals to contribute and share their individual effort -effort they would have invested anyway -in a collaborative manner, piece-by-piece building a much larger interoperable and FAIR data infrastructure to benefit the global community.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1 diagrams how multiple DCAT Distributions may be a part of a single MetaRecord. As with Container resources, MetaRecords may be published by anyone, and independently of the original data publisher.</ns0:figDesc><ns0:graphic coords='12,116.24,136.18,379.53,284.64' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1 The two layers of the FAIR Accessor. Inspired by the LDP Container, there are two resources in the FAIR Accessor. The first resource is a Container, which responds to an HTTP GET request by providing FAIR metadata about a composite research object, and optionally a list of URLs representing MetaRecords that describe individual components within the collection. The MetaRecord resources resolve by HTTP GET to documents containing metadata about an individual data component and, optionally, a set of links structured as DCAT Distributions that lead to various representations of that data.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Integration of FAIR Projectors into the FAIR Accessor. Resolving the MetaRecord resource returns a metadata document containing multiple DCAT Distributions for a given record, as in Figure 1. When a FAIR Projector is available, additional DCAT Distributions are included in this metadata document. These Distributions contain a URL (purple text) representing a Projector, and a Triple Descriptor that describes, in RML, the structure and semantics of the Triple(s) that will be obtained from that Projector resource if it is resolved. These Triple Descriptors may be aggregated into FAIR Profiles, based on the Record that they are associated with (Record R, in the figure) to give a full mapping of all available representations of the data present in Record R.</ns0:figDesc><ns0:graphic coords='18,72.00,194.37,442.23,371.50' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>PREFIX up:<http://purl.uniprot.org/core/>
PREFIX taxon:<http://purl.uniprot.org/taxonomy/>
PREFIX rdf:<http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs:<http://www.w3.org/2000/01/rdf-schema#>
PREFIX GO:<http://purl.obolibrary.org/obo/GO_>
SELECT ?protein ?id WHERE {
?protein rdf:type up:Protein .
?protein up:organism taxon:227321 .
?protein up:classifiedWith ?go .
?go rdfs:subClassOf GO:0006397 .
BIND(substr(str(?protein), 33) as ?id) }
Accessor output is retrieved from the Container Resource URL: http://linkeddata.systems/Accessors/UniProtAccessor The result of calling GET on the Container Resource URL is visualized in Figure 4, where Tabulator (Berners-Lee et al., 2006) is used to render the output as HTML for human readability.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. A representative portion of the output from resolving the Container Resource of the FAIR Accessor, rendered into HTML by the Tabulator Firefox plugin. The three columns show the label of the Subject node of all RDF Triples (left), the label of the URI in the predicate position of each Triple (middle), and the value of the Object position (right), where blue text indicates that the value is a Resource, and black text indicates that the value is a literal.</ns0:figDesc><ns0:graphic coords='20,73.50,107.09,459.82,412.48' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. A representative (incomplete) portion of the output from resolving the MetaRecord Resource of the FAIR Accessor for record C8V1L6 (at http://linkeddata.systems/Accessors/UniProtAccessor/C8V1L6), rendered into HTML by the Tabulator Firefox plugin. The columns have the same meaning as in Figure 4.</ns0:figDesc><ns0:graphic coords='22,73.50,325.28,434.41,310.87' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure6. Turtle representation of the subset of triples from the MetaRecord metadata pertaining to the two DCAT Distributions. Each distribution specifies an available representation (media type), and a URL from which that representation can be downloaded.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Turtle representation of the subset of triples from the MetaRecord metadata pertaining to one of the FAIR Projector DCAT Distributions of the MetaRecord shown in Figure7. The text is colourcoded to assist in visual exploration of the RDF. The DCAT Distribution blocks of the two Projector distributions (black bold) have multiple media-type representations (red), and are connected to an RML Map (Dark blue) by the hasMapping predicate, which is a block of RML that semantically describes the subject, predicate, and object (green, orange, and purple respectively) of the Triple Descriptor for that Projector. This block of RML is schematically diagrammed in Figure2. The three media-types (red)</ns0:figDesc></ns0:figure>
</ns0:body>
" | "RESPONSES TO REVIEWERS
Reviewer 1
“the lack of a novel API means that the information is accessible to generic Web-crawling agents, and may
also be processed if that agent “understands” the vocabularies used”.... My question is, is possible to
address this problem with not-known vocabularies or remains as an open problem? What are the
vocabularies to be used or where can they be retrieved? Just please explain a bit more regarding
this.
The reviewer points out that this is a/the core problem of interoperability, and not one for which there are
off-the-shelf solutions. Agreement on, or mapping between, vocabularies is required to achieve deep
interoperability and/or machine “understanding” of any novel resources they autonomously encounter.
This is why we phrased this sentence the way we did - to point out that the key contribution in this regard
(i.e., the elimination of any novel API) is that the data infrastructure can be explored by any Web agent.
The ability of those agents to process what they find is, however, not within our control. Nevertheless, by
creating this infrastructure, the incentive to use well-established vocabularies increases, because
discoverability, interoperability, and reuse are greatly facilitated by the removal of other interface barriers
via our approach.
In line 328 authors claim about the support of a multitude of source data formats (including for
example, Excel, CSV, JSON, …) but all of this formats are structured formats (even Excel with its
own API). However, is it possible to deal with binary files such as PDF that also need specific
APIs to get the data and the result will be given, normally, in unstructured text? In line 336 the
authors claim that the approach selected was based on the premise that data, in any format,
could be metamodeled as a first step towards interoperability, and honestly I’ve serious doubts
that this could be really done always, mainly based on the type of object (data source – file) to be
used and/or the internal structure. More information regarding this possible limitation would be
helpful to understand your approach.
And…
The idea of the FAIR projectors is interesting itself, but I couldn’t catch the idea that you are
proposing. In line 446 authors said: “what is required is a universally-applicable way of retrieving
data from any transformation script (or any data source), without inventing a new API. We now
describe our suggestion for how to achieve this behaviour, and we refer to such transformation
tools as FAIR projectors”. What exactly does the FAIR projectors tools? How are they able to
universally transform the data retrieved from any transformation script or data source to the
appropriate triples? I’m sorry but I’m a bit lost over here.
These two questions are tightly related, so we feel it is appropriate to respond to them as a unit.
We agree with the reviewer that we should be far more clear about what we are claiming, to ensure that
we do not appear to be over-claiming or over-promising. In line 424 we claim that the ability to serve
data from a wide variety of formats is one of the desiderata driving our selection of technologies. We do
stand by the claim that any data can be metamodelled - were that not true, in our opinion, it wouldn’t be
data. The representation of that data as, for example, an image, does not alter the inherent properties
(“structure/model”) of the data being represented by that image. We agree with the reviewer that certain
types of file format, like images or PDF, may make it difficult to extract the data, but this is a slightly
different problem. For example, in many cases, especially in fields such as biomedical imaging, there is
a plethora of image analysis technology that can extract structured data from images. Once data is
extracted, it can be metamodelled and published using our proposed infrastructure.
The sentence quoted by the reviewer needs to be read with a particular emphasis: “what is required is a
universally-applicable way of retrieving data from any transformation script (or any data source),
without inventing a new API.” To be perfectly clear, that sentence does not claim that we have invented
a universal data extractor. The sentence states that we need to standardize the way to retrieve the
data once it has been extracted. What we have done is to harmonize the outward-facing interface in all
cases to be HTTP GET, regardless of what the data extraction algorithm or API is “under the hood”. To
make this clear, we have included “binary” as a file format in our list of competencies in the desiderata
section, and have added numerous clarifying sentences to the section “FAIR Facet(s) Addressed by the
Triple Descriptors and FAIR Projectors:”. For example, one relevant sentence now reads “Similarly, data
that resides within computationally opaque structures or formats can also be exposed, and published in
an interoperable format, if there is an algorithm capable of extracting it and passing it to the TPF
interface.” To enhance the
clarity of our claims (and to emphasise what we are not claiming), we have
also changed the sentence quoted by the reviewer to read “We propose, therefore, that what is required
is a universally-applicable way of retrieving data generated by a (user-defined) data extraction or
transformation script, without inventing a new API.”
I tried to do the GET petition to the Container Resource and I successfully obtained the FAIR
Accessor. From the FAIR Accessor I got what I understand/think is a MetaRecord (the first record
in fact provided as result from the query provided in the Accessor
(http://linkeddata.systems/Accessors/UniProtAccessor/C8UZX9 ). According to Figure 5, the
MetaRecord should provide me some information for record C8UZX9, and it is true, this
information is provided. But this information is provided after lot of triples that honestly I don’t
know what they mean. Some of the triples seems to be related with the type of information
obtained (taxon, organism, …) but some others are like this one:
<rdf:Description xmlns:ns1='http://semweb.mmlab.be/ns/rml#'
xmlns:ns2='http://www.w3.org/ns/r2rml#'
rdf:about='http://datafairport.org/local/MappingsFCCC8188-99F0-11E6-A61C-84165D07C3DD'>
<ns1:logicalSource
And the URLs contained over these triples are not, currently, resolvable being difficult to get
more information about them.
Ok, now I see that this information is part of the projector. It would be interesting to make some reference
to this when the MetaRecord examples are provided because if the reader tries to get the data at the same
time that he is reading, probably could get lost. In any case, I suppose that the maps should resolve, and
they are not currently doing it.
It is, indeed, difficult to read RDF, as it is not intended to be consumed by humans; however, having
provided a high-level graphical view earlier in the manuscript, we cannot avoid providing more detailed
examples of the output in this section. We have tried to filter-out the most complex portions of the RDF
to focus on only those elements relevant to the aspects being discussed - hence, you see more detailed
output when you resolve the URL (as you did!) compared to what we present in the manuscript. We
initially indicated this in the figure legends (e.g. in Figure 5) by saying “A representative portion of the
output”; however after your suggestion we have further emphasised this point by including the sentence
“Calling HTTP GET on a MetaRecord Resource URL returns a document that include metadata
elements and structure with the structure shown in Figure 5. Note that Figure 5 is not the complete
MetaRecord; rather it has been edited to include only those elements relevant to the aspects of the
interoperability infrastructure that have been discussed so far. More complete examples of the
MetaRecord RDF are described in Figures 7, 8, and 9.”. Finally, we have also added a second, simple
use case, that has only a simple FAIR Accessor.
With respect to URL resolution - you are correct that those URLs do not resolve; they are local to that
document. We agree that this is not best-practice, and therefore have modified the triples in our
MetaRecord to use internal document-fragments rather than non-resolvable URIs. Thank you for this
suggestion, as it makes us more standards-compliant, and in fact, more FAIR!
Regarding the transformation performed by the projector, I’m not sure if I understand completely the aim.
As far as I saw in the example, the projector is trying to transform the native RDF semantics from UniProt
to EDAM. However, the RML is describing a triple where the subject type is ‘Protein Record’ and the object
‘GO Concept ID’. Where is exactly this transformation of semantics? As far as I understand is returning an
upper-class type for both elements of the triple (subject and object) that match with the predicate
considered (classifiedWith). Is that the aim? I was wondering to replace with the equivalent elements (if
applicable) for those instances in EDAM. What is the reason of performing this transformation? Please
clarify. I wonder if in this case it has been just done with this upper class because it is not possible to do it
in other way, but in other cases can be achieved.
We apologize that the aim of the demonstrative projector is not clear. We also apologize that we cannot precisely
understand what you are asking or proposing above. We believe that the Before and After data structures in Figure 9
show explicitly what the semantic transformation was. The purpose of the transformation was to convert the
semantics of the data from that of UniProt - defined by the UniProt ontology (or in the case of the GO identifiers, with
no explicit semantics at all beyond owl:Class) - to data that uses the semantics provided by the EDAM ontology. As
with all other aspects of our proposed solution, this must be done RESTfully, without defining any API. Thus, we
provide an RML model of the output (the model describes that the data structures will be described using the EDAM
ontology), and a URL from which to retrieve that output.
As a final comment, I would like to introduce some kind of more descriptive workflow figure with all the
elements involved. At the end it is difficult to understand the relation of the different elements provided
with the previous one, and probably this kind of figure would help to have a better understanding.
We believe that Figure 3 is what the reviewer is requesting. It includes all elements described in the manuscript in a
graphical overview showing their relationship to one another.
Regarding the conclusions, although I consider the initiative very interesting, I think that would be very
difficult to achieve the goals covet by the authors. It is necessary the development of more easy and
friendly tools that allows the researchers to not waste too much time in thinking how to deploy their data
according to FAIR principles (even if journals are requiring it..), but that’s just my opinion.
We agree with the reviewer, and indeed believe that the approach we report in the paper has been designed to
support FAIR compliance either through its direct usage by technology experts or by serving as foundations for the
developments of end user-targeted tools. We are already making notable progress towards ensuring that this
user-friendly tooling will exist in the near-term! Among these solutions we have an user friendly metadata editor and
a server application named FAIR Data Point which provides FAIR metadata and accessor.
To emphasise this, we have changed the first two paragraphs of the discussion section to read: “Looking forward,
there is every indication that FAIRness will be a requirement of funding agencies and/or journals. As such,
infrastructures such as the one described in this exemplar will almost certainly become a natural part of scholarly data
publishing in the future. Though the FAIR infrastructure proposed here may appear difficult to achieve, we argue that a
large portion of these behaviors - for example, the first two layers of the Accessor - can be accomplished using simple
fill-in-the-blank templates. Such templating tools are, in fact, already being created by several of the co-authors, and
will be tested on the biomedical data publishing community in the near future to ensure they are clear and usable by
this key target-audience.
Projection, however, is clearly a complex undertaking, and one that is unlikely to be accomplished by
non-informaticians on their own. Transformation from unstructured or semi-structured formats into interoperable
formats cannot be fully automated, and we do not claim to have fully solved the interoperability bottleneck. We do,
however, claim to have created an infrastructure that improves on the status quo in two ways: First, we propose to
replace the wasteful, one-off, 'reuseless' data transformation activities currently undertaken throughout our
community, with a common, reusable, and machine-readable approach, by suggesting that all data transformations
should be described in RML and exposed using TPF. Second, the solution we propose may, in many cases, partially
automate the data transformation process itself. RML can be used, in combination with generic software, to actuate a
data transformation over many common file formats such as CSV or XML. As such, by focusing on building RML
models, in lieu of reuse-less data transformation scripts, data publishers achieve both the desired data transformation,
as well as a machine-readable interface that provides that transformation to all other data users. This may be
incentivized even more by creating repositories of RML models that can be reused by those needing to do data
transformations. Though the infrastructure for capturing these user-driven transformation events and formalizing them
into FAIR Projectors does not yet exist, it does not appear on its surface to be a complex problem. Thus, we expect
that such infrastructure should appear soon after FAIRness becomes a scholarly publishing requirement, and early
prototypes of these infrastructures are being built by our co-authors, and will be published as an independent work,
targeted at a non-computer-science audience.
We have now included all of these ideas in the discussion section, to address this reviewer concern.
Reviewer 2
The paper need to be re-organized. I had to jump from one section to the other to understand the topic. The
authors concern to address the FAIR principles lead to interruption of flow of ideas. Starting with the
methodology section without concrete example made it difficult to follow the contribution. A concrete
example given in the introduction would have been enough to show the challenge and how the solution
would work.
We apologize that the reviewer found the paper disorganized. We have now provided a concrete example in the
introductory section, as suggested, and hope that this provides a means to traverse this admittedly intertwined
combination of technologies. We have also carefully walked-through the text to ensure that each concept is clearly
introduced before it is used in an example. This is difficult, given the “circularity of dependency” of these technologies,
however we hope that it is now easier to follow. We ordered the topics in approximate order of complexity. First,
metadata interoperability, then data metamodelling, then compatible data discovery, and finally actuation of data
transformations. This is done first at the conceptual level, with diagrams, and then is repeated in the same order, with
a walkthrough example. We believe that this is the most effective way to reach the maximum breadth of readership,
by providing a diagrammatic view of the entire proposal that should be accessible to all readers, prior to going into
details of the actual (complex!) data structures that likely will reach only our immediate peer-researchers.
I could not find serious discussion about previous work related to the topic. How the authors
place their solution compared to previous projects. Was it possible that the FAIR principles
would be achieved by a minor tweaking of current systems?
The brief discussion of similar technologies was moved from the discussion section to the Introduction section, and
expanded to provide many more examples; however, even those are only a few of the many attempts to achieve
interoperability that we could have mentioned. Hopefully, the reviewer will find this list sufficient. With respect to the
question of whether or not FAIR could be achieved by tweaking current systems, we would in fact argue that this is
precisely what we did! We have invented nothing. We use only existing technologies, albeit in a novel combination.
Our proposed approach to interoperability should, in fact, not require even the smallest amount of “tweaking” on the
part of any data host, since the solution we propose (at least at the level of the Accessor) is little more than a few RDF
documents that follow a specific set of standards.
Data-level interoperability is, however, hard! As we argue in the new sections of the introduction, this problem
has been resilient to decades of dedicated research and development. As such, we are quite convinced that the
solution to this problem requires more than a “minor tweak”. We propose, here, a different perspective on this
problem. Rather than focusing on how to make data interoperable in an automated way (which is likely impossible, as
we have noted in prior publications, e.g. doi:10.1186/2041-1480-2-8), we rather focus on how to publish interoperable
data - from any source, including dynamic transformations - in a discoverable and standardized way. By doing this,
we hope to capture the (largely manual) efforts of the community in data transformation, by having these data experts
all “pulling in the same direction”, coding to a simple, common interface, and creating reusable tools as they go.
The suggested method is not really easy and straightforward to implement. What is the
technological requirements at both the server and client side to achieve this? How this can be
simplified?
We hope that the new revision of the manuscript makes a more compelling case as to the simplicity of this approach,
from both the client and the publisher's perspective. We believe that the detailed answer to this concern has been largely
addressed in the answer to Reviewer 1’s similar comment about complexity, and by our inclusion of a second example
of usage (the deposit to Zenodo in the Methods section of the manuscript - second-to-last paragraph of the “Metadata
Interoperability - The “FAIR Accessor” and the Linked Data Platform” section). Briefly, given the soon-to-be available
toolkits, the FAIR Accessor can be templated and thus deployed by non-experts through a simple fill-in-the-blank
form. FAIR Projection is, by necessity, more complex; however, in many cases the transformations of others can be
reused/duplicated simply by reusing the same RML model for the same/similar datatype. Moreover, the RML model
can be used in combination with generic, publicly available software, to execute the data transformation itself, thus
requiring no scripting whatsoever in the case of the most commonly-used datatypes (such as those coming from
widely used high-throughput technologies). Thus, the “simplification” at this level is achieved through, among other
things, reuse of other people’s effort.
The requirements client-side are the ability to access the Web, since every step of the Accessor/Projector
combination use only HTTP GET. The requirements server-side are negligible with respect to the Accessor portion,
since we show that we can publish an Accessor in a Zenodo deposit, with no additional infrastructure. For the
Projector, the implementer will need to have access to a publicly accessible Web server, possibly one capable of
running data transformation scripts if the projector is to be dynamic - these are widely available even through monthly
rental. They will also need to know the basics of RDF/OWL, and a scripting language. These are, however, not
uncommon talents.
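To illustrate how light the client-side requirement is (a sketch only; the Accessor URL below is hypothetical), every interaction is an ordinary HTTP GET:

```python
# Minimal sketch of the client-side requirement described above: plain HTTP GET.
# The URL is hypothetical; a real FAIR Accessor would return RDF (e.g., Turtle)
# describing the container and linking to its record-level resources.
import requests

ACCESSOR_URL = "https://example.org/accessor/container"  # hypothetical

response = requests.get(ACCESSOR_URL, headers={"Accept": "text/turtle"})
response.raise_for_status()
print(response.text[:500])  # metadata document with links to follow, again by GET
```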
What makes this method appealing for non-computer scientists?
We agree with the reviewer that this manuscript would not be appealing to non-computer scientists; however, we
selected PeerJ CompSci with the intent of targeting computer scientists as the readership. We will be writing a
second manuscript targeting researchers with little or no experience in formal data publishing, and as such, this
manuscript focuses primarily on the approach to combining these three independently-created technologies into a
unified solution. We believe, however, that the added text of the new example (the Zenodo deposit) and of the discussion
section will help clarify what is gained by the end-user.
The use of different confusing technologies contradicts the concept of minimal design. Is there any
solution to simplify this?
We have reworded the document extensively in order to more gradually and clearly introduce the various technical
components, none of which should, in isolation, create any confusion, as they are all existing standards. The FAIR
Accessor is little more than a Web page with hyperlinks, following a W3C standard. The second part of the
infrastructure - the Projector - is, perhaps, complex; however, it could be imagined as a highly simplified WSDL, where
it provides an endpoint to call (the TPF URL, exclusively with HTTP GET) and a model describing the structure of the
data that will be returned (using RML rather than XSD). We have now included these “similar to” examples in the text,
which may help orient the reader a bit more.
For FAIR, the concept of “minimal design” was just as important client-side as server-side (possibly more so,
since clients cannot be expected to have any expertise). As such, we would argue that HTTP GET achieves the
minimalist architecture envisioned by FAIR. Server-side, we demonstrate that we can achieve metadata FAIRness
and interoperability even using a legacy, static data repository (Zenodo), without any need to coordinate with them.
As such, we would argue that achieving metadata FAIRness also requires minimal effort, and we will simplify this even
further through templating (see answers above). With respect to data level interoperability, we believe that our
response to your earlier comment explains our perspective on how this solution is as minimal as we could/can
achieve.
Reviewer 3
Focusing on the Introduction, the authors might include more information about existing
alternatives and previous attempts to provide interoperability and easy integration of data
repositories for bioinformatics or other related disciplines. Actually, it is briefly mentioned in the
Discussion (lines 709-717), but I would suggest providing further information in this regard as
part of the motivation at the beginning of the paper.
We have moved the prior art discussion to the introduction section, and have significantly lengthened it. Starting in
paragraph 3 of the Introduction section. We also expand on our motivation, as suggested by other reviewers, by
providing a compelling use-case (one which we eventually solve, as a high-level, simple example, in the Methods
section, prior to discussing the system in more technical detail)
The introduction to the results might be improved if a brief description or external references to
UniProt and EDAM are provided. In this sense, the paper assumes that the reader has high
expertise on several semantic aspects (protein records, ontologies...), which might hamper its
readability. In the same lines, the code example shown in lines 508-524 would be rather difficult
to understand by readers without knowledge on query languages. It is a particular issue that
clearly needs revision, as the role of SPARQL was not explained in Section 2 (methods).
We thank the reviewer for this observation, as it is true that a reader of PeerJ CompSci will likely have little or no
familiarity with UniProt or protein records. We now define what UniProt is in several places in the manuscript,
including the introduction (line 233) and in the Methods section (line 574).
With respect to SPARQL, we have modified the sentence preceding the query example to read, “The example
FAIR Accessor accesses a database of RDF hosted by UniProt, and issues the following query over that database
(expressed in the standard RDF query language SPARQL):”. Hopefully this should sufficiently explain the purpose of
that exemplar query (though understanding the query is not necessary to achieve understanding of the example).
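For readers unfamiliar with SPARQL, the following sketch shows how such a query can be issued programmatically against UniProt's public SPARQL endpoint; the query shown is a deliberately trivial illustration, not the query used by the example FAIR Accessor:

```python
# Illustrative only: a trivial SPARQL query against UniProt's public endpoint,
# issued with the SPARQLWrapper library. Not the query used in the manuscript.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://sparql.uniprot.org/sparql")
sparql.setQuery("""
    PREFIX up: <http://purl.uniprot.org/core/>
    SELECT ?protein WHERE { ?protein a up:Protein } LIMIT 5
""")
sparql.setReturnFormat(JSON)
results = sparql.query().convert()
for binding in results["results"]["bindings"]:
    print(binding["protein"]["value"])
```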
With respect to providing extensive explanations of ontologies, subject, predicate, and object: Unfortunately,
since every component of our proposed solution utilizes Semantic Web standards, in place since 2004, we feel that an
extensive explanation of these core technologies would detract from our core message, and may in fact act as a
frustration to our key target audience of specialist data publishers. As such, we have written this manuscript with the
assumption that the reader will have a basic understanding of the core technologies. We do, however, introduce
these concepts much earlier in the introduction (see answer to your next comment) and this helps to put them
in context, which we agree is an important improvement, even if the details of these core technologies are not
discussed. We have made several small modifications to the text to provide a more gentle introduction to concepts
such as subject/predicate/object (e.g. line 459), which we hope will suffice to respond to this concern.
Similarly, the last paragraph of the Introduction (l. 156-162) could be extended in order to focus
the scope of the paper. For instance, a brief introduction to the basic foundations of the selected
technologies and how them are applied to address the challenge of data integration and
availability might be useful in order to provide the reader with a more detailed overview of the
proposed solution. I think that both aspects (background and contribution) can strengthen an
already good introduction.
We thank the reviewer for their supportive comments about the introduction section. We have now strengthened it in
exactly the manner you describe, adding several additional paragraphs of examples, and a compelling use-case that
demonstrates what currently cannot be done.
In general, the figures provide illustrative examples of the concepts surrounding the different
technologies. Only two of them need some changes. On the one hand, Figure 1 is located at the
beginning of Section 2 (methods) but it is not explained in the context of that section. It is only
referred in Section 3 (results), even though the figure does not contain specific information about
the case study.
We believe all figures are now referenced in the proper sections; we will work with the production team to
ensure they appear in appropriate locations in the final text.
Moreover, DCAT has not been explained at this point, so it can be a bit confusing to the reader.
DCAT is now defined in the paragraph beginning at line 296
On the other hand, Figure 2 assumes that the reader have strong knowledge on the 'Triple
Descriptor' representation.
Thank you for this observation - we did, indeed, somehow lose a block of text defining the Triple Descriptor, and this
would no doubt make the text difficult to follow! Our apologies for that. The entire Methods section has been
extensively rewritten, and includes crucial definitions such as the Triple Descriptor. We hope the reviewer will take
some minutes to review the entire section to evaluate its clarity in the new form.
The validity of the findings may be compromised due to the lack of experimental results. In this sense,
providing public access to the case study should be considered, if possible. Even though the
implementation is not complete, it would be really interesting to let readers 'play' with the data and
navigate across the different FAIR profiles. It will be also in line with the policy of the journal that promotes
sharing data and materials.
We deeply apologize for this misunderstanding; however, everything described in the original manuscript was (and is)
“live”, even during the original review process. We welcome the reviewer to “play” as much as they wish! We have
now also included a second example - a static set of documents acting as a FAIR Accessor - in a Zenodo deposit, and
these are also publicly available at the URL mentioned in the manuscript.
It is worth mentioning that the paper provides a comprehensive justification of the technologies selected to
build the system. It really helps to present the general idea behind the system and then go into more
specific details.
Thank you!!
The only weak point of this approach I have found is that concepts like 'subject', 'predicate' and 'object'
in RDF/RML were not described somewhere in between. For instance, lines 361-365 mention some
restrictions about 'objects' and 'subjects' in the context of RML maps, but the general meaning of these
terms seems to come from RDF, which might be unknown for the reader. Similarly, the acronym [S,P,O] is
used later (l. 452) to refer to triple pattern fragments (TPF), but understanding its relation to the
aforementioned terms is not straightforward.
We have included, throughout the manuscript, several places where we remind the reader about the meanings of
these acronyms - hopefully this will improve clarity. In addition, we thank the reviewer for pointing out that we
completely failed to explain what a Triple Pattern Fragment was, and how it relates to subject, predicate, and object!
This was a terrible oversight that is now fixed by an addition to the paragraph beginning at line 310.
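As a conceptual illustration of subjects, predicates, objects, and [S,P,O] patterns (a sketch only; the URIs are invented), a Triple Pattern Fragment can be thought of as the set of triples matching one pattern in which some positions are fixed and others are wildcards:

```python
# Conceptual sketch of RDF triples and triple patterns [S, P, O]. A Triple Pattern
# Fragment is essentially the set of triples matching one such pattern; here None
# plays the role of a wildcard position. URIs are invented for illustration.
TRIPLES = [
    ("ex:recordA", "rdf:type", "ex:ProteinRecord"),
    ("ex:recordA", "ex:organism", "ex:Human"),
    ("ex:recordB", "rdf:type", "ex:ProteinRecord"),
]

def match(pattern, triples):
    """Return the triples matching an [S, P, O] pattern, where None is a wildcard."""
    s, p, o = pattern
    return [t for t in triples
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

print(match((None, "rdf:type", "ex:ProteinRecord"), TRIPLES))  # both records
```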
I find really interesting the idea of multiple FAIR profiles to describe the same data, so that the system can
also integrate different data formats.
Thank you! So do we!
As I understand it, different data users/providers could create new profiles for the same data resource, so a
question here is whether the system can deal with possible inconsistencies between the information
provided by different profiles.
The short answer to this is “yes”; the longer answer is “what is an inconsistency?”; and the very long answer is that we
do not want to pre-suppose how clients/registries will use the information provided by FAIR Profiles. As such, we
think it is premature to create any restrictions (or even suggestions) for how they are used/integrated/interpreted.
Moreover, we (this infrastructure) do not process the FAIR Profile, we only publish it, and make the note that a Profile
could contain multiple representations of the same thing. The safest thing to do, when concerned about
inconsistencies, is to consider them independently from one another; however, this seems to be an issue that should
be decided by the consumer of the Profile rather than by us - if they can handle inconsistencies (whatever that means
to them) then they are free to do whatever they wish.
Similarly, some kind of control regarding who publishes new content and what is being published might
be required in the future.
Agreed, it may be required. But that isn't our decision to make…
As part of the conclusions, the authors might discuss the challenges behind the final
implementation of the proposed architecture. It includes dealing with huge volume of data and
guaranteeing quality of service, among others. Similarly, current limitations and lines of future
work should be specified.
We have renamed a portion of the discussion section “Incentives and Barriers” to emphasise that we discuss the
issues raised by the reviewer - both the challenges and limitations - in that section. We do not mention quality of
service because this isn’t something addressed by our technology in any way. In principle, it may be deployed over a
repository that has strong support and perfect up-time, or may equally well be deployed over a “hobby” database that
is unavailable most days. This issue is simply not addressed, as it is not something we can predict.
" | Here is a paper. Please give your review comments after reading it. |
734 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Biases against women in the workplace have been documented in a variety of studies. This paper presents a large-scale study on gender bias, where we compare acceptance rates of contributions from men versus women in an open source software community. Surprisingly, our results show that women's contributions tend to be accepted more often than men's. However, for contributors who are outsiders to a project and whose gender is identifiable, men's acceptance rates are higher. Our results suggest that although women on GitHub may be more competent overall, bias against them exists nonetheless.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>In 2012, a software developer named Rachel Nabors wrote about her experiences trying to fix bugs in open source software. 1 Nabors was surprised that all of her contributions were rejected.</ns0:p><ns0:p>1 http://rachelnabors.com/2012/04/of-github-and-pull-requests-and-comics/</ns0:p></ns0:div>
<ns0:div><ns0:head>Related Work</ns0:head><ns0:p>A substantial part of activity on GitHub is done in a professional context, so studies of gender bias in the workplace are relevant. Because we cannot summarize all such studies here, we instead turn to Davison and Burke's meta-analysis of 53 papers, each studying between 43 and 523 participants, finding that male and female job applicants generally received lower ratings for opposite-sex-type jobs (e.g., nurse is a female sex-typed job, whereas carpenter is male sex-typed) <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref>.</ns0:p><ns0:p>The research described in Davison and Burke's meta-analysis can be divided into experi-ments and field studies. Experiments attempt to isolate the effect of gender bias by controlling for extrinsic factors, such as level of education. For example, Knobloch-Westerwick and colleagues asked 243 scholars to read and evaluate research paper abstracts, then systematically varied the gender of each author; overall, scholars rated papers with male authors as having higher scientific quality <ns0:ref type='bibr' target='#b21'>[21]</ns0:ref>. In contrast to experiments, field studies examine existing data to infer where gender bias may have occurred retrospectively. For example, Roth and colleagues' meta-analysis of such studies, encompassing 45,733 participants, found that while women tend to receive better job performance ratings than men, women also tend to be passed up for promotion <ns0:ref type='bibr' target='#b33'>[32]</ns0:ref>.</ns0:p><ns0:p>Experiments and retrospective field studies each have advantages. The advantage of experiments is that they can more confidently infer cause and effect by isolating gender as the predictor variable. The advantage of retrospective field studies is that they tend to have higher ecological validity because they are conducted in real-world situations. In this paper, we use a retrospective field study as a first step to quantify the effect of gender bias in open source.</ns0:p><ns0:p>Several other studies have investigated gender in the context of software development. Burnett and colleagues analyzed gender differences in 5 studies that surveyed or interviewed a total of 2991 programmers; they found substantial differences in software feature usage, tinkering with and exploring features, and in self-efficacy <ns0:ref type='bibr' target='#b5'>[6]</ns0:ref>. Arun and Arun surveyed 110 Indian software developers about their attitudes to understand gender roles and relations but did not investigate bias <ns0:ref type='bibr' target='#b2'>[3]</ns0:ref>. Drawing on survey data, Graham and Smith demonstrated that women in computer and math occupations generally earn only about 88% of what men earn <ns0:ref type='bibr' target='#b16'>[17]</ns0:ref>. Lagesen contrasts the cases of Western versus Malaysian enrollment in computer science classes, finding that differing rates of participation across genders results from opposing perspectives of whether computing is a 'masculine' profession <ns0:ref type='bibr' target='#b23'>[23]</ns0:ref>. The present paper builds on this prior work by looking at a larger population of developers in the context of open source communities. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Some research has focused on differences in gender contribution in other kinds of virtual collaborative environments, particularly Wikipedia. Antin and colleagues followed the activity of 437 contributors with self-identified genders on Wikipedia and found that, of the most active users, men made more frequent contributions while women made larger contributions <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>.</ns0:p><ns0:p>There are two gender studies about open source software development specifically. The first study is Nafus' anthropological mixed-methods study of open source contributors, which found that 'men monopolize code authorship and simultaneously de-legitimize the kinds of social ties necessary to build mechanisms for women's inclusion', meaning values such as politeness are favored less by men <ns0:ref type='bibr' target='#b30'>[29]</ns0:ref>. The other is Vasilescu and colleagues' study of 4,500 GitHub contributors, where they inferred the contributors' gender based on their names and locations (and validated 816 of those genders through a survey); they found that gender diversity is a significant and positive predictor of productivity <ns0:ref type='bibr' target='#b36'>[35]</ns0:ref>. Our work builds on this by investigating bias systematically and at a larger scale.</ns0:p></ns0:div>
<ns0:div><ns0:head>General Methodology</ns0:head></ns0:div>
<ns0:div><ns0:head>Our main research question was</ns0:head><ns0:p>To what extent does gender bias exist when pull requests are judged on GitHub?</ns0:p><ns0:p>We answer this question from the perspective of a retrospective cohort study, a study of the differences between two groups previously exposed to a common factor to determine its influence on an outcome <ns0:ref type='bibr' target='#b10'>[11]</ns0:ref>. One example of a similar retrospective cohort study was Krumholz and colleagues' review of 2473 medical records to determine whether there exists gender bias in the treatment of men and women for heart attacks <ns0:ref type='bibr' target='#b22'>[22]</ns0:ref>. Other examples include the analysis of 6244 school discipline files to evaluate whether gender bias exists in the administration of corporal punishment <ns0:ref type='bibr' target='#b11'>[12]</ns0:ref> and the analysis of 1851 research articles to evaluate whether gender bias Google's terms of service. 4 Third, to protect the identities of the people described in this study to the extent possible, we do not plan to release our data that links GitHub users to genders.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results</ns0:head><ns0:p>We describe our results in this section; data is available in the Supplemental Files section on PeerJ.</ns0:p></ns0:div>
<ns0:div><ns0:head>Are women's pull requests less likely to be accepted?</ns0:head><ns0:p>We hypothesized that pull requests made by women are less likely to be accepted than those made by men. Prior work on gender bias in hiring -that a job application with a woman's name is evaluated less favorably than the same application with a man's name <ns0:ref type='bibr' target='#b29'>[28]</ns0:ref> -suggests that this hypothesis may be true.</ns0:p><ns0:p>To evaluate this hypothesis, we looked at the pull status of every pull request submitted by women compared to those submitted by men. We then calculate the merge rate and corresponding confidence interval, using the Clopper-Pearson exact method <ns0:ref type='bibr' target='#b8'>[9]</ns0:ref>, and find the following:</ns0:p></ns0:div>
<ns0:div><ns0:head>Gender</ns0:head><ns0:p>[Table of merge rates by gender omitted; only the column headings survived extraction.]</ns0:p><ns0:p>Experience Effects. Perhaps only a few highly successful and prolific women, responsible for a substantial part of overall success, are skewing the results. To test this, we calculated the pull request acceptance rate for each woman and man with 5 or more pull requests, then found the average acceptance rate across those two groups. The results are displayed in Figure <ns0:ref type='figure'>2</ns0:ref>. We notice that women tend to have a bimodal distribution, typically being either very successful (> 90% acceptance rate) or unsuccessful (< 10%). But these data tell the same story as the overall acceptance rate; women are more likely than men to have their pull requests accepted.</ns0:p><ns0:p>Why might women have a higher acceptance rate than men, given the gender bias documented in the literature? In the remainder of this section, we will explore this question by evaluating several hypotheses that might explain the result.</ns0:p></ns0:div>
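For readers who wish to reproduce this style of interval, the following sketch computes a Clopper-Pearson (exact) confidence interval for an acceptance rate; the counts are hypothetical, not the study's data:

```python
# Clopper-Pearson (exact) confidence interval for a proportion k/n,
# as used for the merge rates above. Counts below are hypothetical.
from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

merged, total = 780, 1000           # hypothetical merged / total pull requests
print(merged / total, clopper_pearson(merged, total))
```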
<ns0:div><ns0:head>Do women's pull request acceptance rates start low and increase over time?</ns0:head><ns0:p>One plausible explanation is that women's first few pull requests get rejected at a disproportionate rate compared to men's, so they feel dejected and do not make future pull requests. This explanation is supported by Reagle's account of women's participation in virtual collaborative environments, where an aggressive argument style is necessary to justify one's own contributions, a style that many women may find to be not worthwhile <ns0:ref type='bibr'>[31]</ns0:ref>. Thus, the overall higher acceptance rate for women would be due to survivorship bias within GitHub; the women who remain and do the majority of pull requests would be better equipped to contribute, and defend their contributions, than men. Thus, we might expect that women have a lower acceptance rate than men for early pull requests but have an equivalent acceptance rate later.</ns0:p><ns0:p>To evaluate this hypothesis, we examine pull request acceptance rate over time, that is, the mean acceptance rate for developers on their first pull request, second pull request, and so on.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>: Pull request acceptance rate over time (plot omitted; legend: Women, Men).</ns0:p></ns0:div>
<ns0:div><ns0:p>While developers making their initial pull requests do get rejected more often, women generally still maintain a higher rate of acceptance throughout. The acceptance rate of women tends to fluctuate at the right of the graph, because the acceptance rate is affected by only a few individuals. For instance, at 128 pull requests, only 103 women are represented. Intuitively, where the shaded region for women includes the corresponding data point for men, the reader can consider the data too sparse to conclude that a substantial difference exists between acceptance rates for women and men. Nonetheless, between 1 and 64 pull requests, women's higher acceptance rate remains. Thus, the evidence casts doubt on our hypothesis.</ns0:p><ns0:p>Are women focusing their efforts on fewer projects?</ns0:p><ns0:p>One possible explanation for women's higher acceptance rates is that they are focusing their efforts more than men; perhaps their success is explained by doing pull requests on few projects, whereas men tend to do pull requests on more projects.</ns0:p><ns0:p>First, the data do suggest that women tend to contribute to fewer projects than men. While the median number of projects contributed to via pull request is 1 for both genders (that is, at the 50th percentile of developers), at the 75th percentile it is 2 for women and 3 for men, and at the 90th percentile it is 4 for women and 7 for men.</ns0:p><ns0:p>But the fact that women tend to contribute to fewer projects does not explain why women tend to have a higher acceptance rate. To see why, consider Figure <ns0:ref type='figure'>4</ns0:ref>; on the y axis is mean acceptance rate by gender, and on the x axis is number of projects contributed to. When contributing to between 1 and 5 projects, women have a higher acceptance rate as they contribute to more projects. Beyond 5 projects, the 95% confidence interval indicates women's data are too sparse to draw conclusions confidently.</ns0:p><ns0:p>Are women making pull requests that are more needed?</ns0:p><ns0:p>Another explanation for women's pull request acceptance rate is that, perhaps, women disproportionately make contributions that projects need more specifically. What makes a contribution 'needed' is difficult to assess from a third-party perspective. One way is to look at which pull requests link to issues in projects' GitHub issue trackers. If a pull request references an issue, we consider it to serve a more specific and recognized need than an otherwise comparable one that does not. To support this argument with data, we randomly selected 30 pull request descriptions that referenced issues; in 28 cases, the reference was an attempt to fix all or part of an issue. Based on this high probability, we can assume that when someone references an issue in a pull request description, they usually intend to fix a specific problem in the project. Thus, if women more often submit pull requests that address a documented need and this is enough to improve acceptance rates, we would expect that these same requests are more often linked to issues.</ns0:p><ns0:p>We evaluate this hypothesis by parsing pull request descriptions and calculating the percentage of pulls that reference an issue. These data show a statistically significant difference (χ²(df = 1, n = 1,417,004) = 24, p < .001).
Contrary to the hypothesis, women are slightly less likely to submit a pull request that mentions an issue, suggesting that women's pull requests are less likely to fulfill a documented need. Note that this does not imply women's pull requests are less valuable, but instead that the need they fulfill appears less likely to be recognized and documented before the pull request was created. Regardless, the result suggests that women's increased success rate is not explained by making more specifically needed pull requests.</ns0:p></ns0:div>
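A chi-squared test of independence of the kind reported above can be computed as follows; the contingency table is hypothetical and is shown only to illustrate the procedure:

```python
# Illustrative chi-squared test of independence comparing two rates by gender.
# The counts below are hypothetical, not the paper's data.
from scipy.stats import chi2_contingency

#         referenced issue, did not reference issue
table = [[4_000, 36_000],          # women (hypothetical)
         [160_000, 1_217_004]]     # men (hypothetical)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.3g}")
```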
<ns0:div><ns0:head>Are women making smaller changes?</ns0:head><ns0:p>Maybe women are disproportionately making small changes that are accepted at a higher rate because the changes are easier for project owners to evaluate. This is supported by prior work on pull requests suggesting that smaller changes tend to be accepted more than larger ones <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref>.</ns0:p><ns0:p>We evaluated the size of the contributions by analyzing lines of code, modified files, and commits. [Table of size metrics by gender omitted.] The bottom of this chart includes Welch's t-test statistics, comparing women's and men's metrics, including 95% confidence intervals for the mean difference. For three of four measures of size, women's pull requests are significantly larger than men's.</ns0:p><ns0:p>One threat to this analysis is that lines added or removed may exaggerate the size of a change whenever a refactoring is performed. For instance, if a developer moves a 1000-line class from one folder to another, even though the change may be relatively benign, the change will show up as 1000 lines added and 1000 lines removed. This difference is also statistically significant. So even in the face of refactoring, the conclusion holds: women make pull requests that add and remove more lines of code, and contain more commits. This is consistent with larger changes women make on Wikipedia <ns0:ref type='bibr' target='#b0'>[1]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Are women's pull requests more successful when contributing code?</ns0:p><ns0:p>One potential explanation for why women get their pull requests accepted more often is that the kinds of changes they make are different. For instance, changes to HTML could be more likely to be accepted than changes to C code, and if women are more likely to change HTML, this may explain our results. Thus, if we look only at acceptance rates of pull requests that make changes to program code, women's high acceptance rates might disappear. For this, we define program code as files that have an extension that corresponds to a Turing-complete programming language. We categorize pull requests as belonging to a single type of source code change when the majority of lines modified were to a corresponding file type. For example, if a pull request changes 10 lines in .js (javascript) files and 5 lines in .html files, we include that pull request and classify it as a .js change.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>5</ns0:ref> shows the results for the 10 most common programming language files (top) and the 10 most common non-programming language files (bottom). Each pair of bars summarizes pull requests classified as part of a programming language file extension, where the height of each bar represents the acceptance rate and each bar contains a 95% Clopper-Pearson confidence interval. An asterisk (*) next to a language indicates a statistically significant difference between men and women for that language using a chi-squared test, after a Benjamini-Hochberg correction <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref> to control for false discovery.</ns0:p><ns0:p>Overall, we observe that women's acceptance rates are higher than men's for almost every programming language. The one exception is .m, which indicates Objective-C and Matlab, for which the difference is not statistically significant. Is a woman's pull request accepted more often because she appears to be a woman?</ns0:p><ns0:p>Another explanation as to why women's pull requests are accepted at a higher rate would be what McLoughlin calls Type III bias: 'the singling out of women by gender with the intention to help' <ns0:ref type='bibr' target='#b26'>[26]</ns0:ref>. In our context, project owners may be biased towards wanting to help women who submit pull requests, especially outsiders to the project. In contrast, male outsiders without this benefit may actually experience the opposite effect, as distrust and bias can be stronger in Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>stranger-to-stranger interactions <ns0:ref type='bibr' target='#b24'>[24]</ns0:ref>. Thus, we expect that women who can be perceived as women are more likely to have their pull requests accepted than women whose gender cannot be easily inferred, especially when compared to male outsiders.</ns0:p><ns0:p>We evaluate this hypothesis by comparing pull request acceptance rate of developers who have gender-neutral GitHub profiles and those who have gendered GitHub profiles. We define a gender-neutral profile as one where a gender cannot be readily inferred from their profile. Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> gives an example of a gender-neutral GitHub user, 'akofink', who uses an identicon, an automatically generated graphic, and does not have a gendered name that is apparent from the login name. Likewise, we define a gendered profile as one where the gender can be readily inferred from the image or the name. Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> also gives an example of a gendered profile; the profile of 'JustinAMiddleton' is gendered because it uses a login name (Justin) commonly associated with men, and because the image depicts a person with masculine features (e.g., pronounced brow ridge <ns0:ref type='bibr' target='#b4'>[5]</ns0:ref>). Clicking on a user's name in pull requests reveals their profile, which may contain more information such as a user-selected display name (like 'Justin Middleton').</ns0:p><ns0:p>Identifiable Analysis. To obtain a sample of gendered and gender-neutral profiles, we used a combination of automated and manual techniques. For gendered profiles, we included GitHub users who used a profile image rather than an identicon and that Vasilescu and colleagues' tool could confidently infer a gender from the user's name <ns0:ref type='bibr' target='#b35'>[34]</ns0:ref>. For gender-neutral profiles, we included GitHub users that used an identicon, that the tool could not infer a gender for, and that a mixed-culture panel of judges could not guess the gender for.</ns0:p><ns0:p>While acceptance rate results so far have been robust to differences between insiders (people who are owners or collaborators of a project) versus outsiders (everyone else), for this analysis, there is a substantial difference between the two, so we treat each separately. Figure <ns0:ref type='figure' target='#fig_4'>6</ns0:ref> shows the acceptance rates for men and women when their genders are identifiable versus when they are Identifiable Results. For insiders, we observe little evidence of bias when we compare women with gender-neutral profiles and women with gendered profiles, since both have similar acceptance rates. This can be explained by the fact that insiders likely know each other to some degree, since they are all authorized to make changes to the project, and thus may be aware of each others' gender.</ns0:p><ns0:p>For outsiders, we see evidence for gender bias: women's acceptance rates drop by 12.0%</ns0:p><ns0:p>when their gender is identifiable, compared to when it is not (χ 2 (df = 1, n = 16, 258) = 158, p < .001). There is a smaller 3.8% drop for men (χ 2 (df = 1, n = 608, 764) = 39, p < Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>.001). Women have a higher acceptance rate of pull requests overall (as we reported earlier), but when they are outsiders and their gender is identifiable, they have a lower acceptance rate than men.</ns0:p></ns0:div>
<ns0:div><ns0:head>Are Acceptance Rates Different If We Control for Covariates?</ns0:head><ns0:p>In analyses of pull request acceptance rates up until this point, covariates other than the variable of interest (gender) may also contribute to acceptance rates. We have previously shown an imbalance in covariate distributions for men and women (e.g. number of projects contributed to and number of changes made) and this imbalance may confound the observed gender differences. In this section, we re-analyze acceptance rates while controlling for these potentially confounding covariates using propensity score matching, a technique that supports causal inference by transforming a dataset from a non-randomized field study into a dataset that 'looks closer to one that would result from a perfectly blocked (and possibly randomized) experiment' <ns0:ref type='bibr' target='#b18'>[19]</ns0:ref>. That is, by making gender comparisons between subjects having the same propensity scores, we are able to remove the confounding effects, giving stronger evidence that any observed differences are primarily due to gender bias.</ns0:p><ns0:p>While full details of the matching procedure can be found in the appendix, in short, propensity score matching works by matching data from one group to similar data in another group (in our case, men's and women's pull requests), then discards the data that do not match. This discarded data represent outliers, and thus the results from analyzing matched data may differ substantially from the results from analyzing the original data. The advantage of propensity score matching is that it controls for any differences we observed earlier that are caused by a measured covariate, rather than gender bias. One negative side effect of matching is that statistical power is reduced because the matched data are smaller than from the original dataset. We may also observe different results than in the larger analysis because we are excluding certain</ns0:p><ns0:p>Manuscript to be reviewed subjects from the population having atypical covariate value combinations that could influence the effects in the previous analyses.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Figure <ns0:ref type='figure' target='#fig_6'>7</ns0:ref> shows acceptance using matched data for all pull requests, for just pull requests from outsiders, and for just pull requests on projects that are open source (OSS) licenses. Asterisks (*) indicate that each difference is statistically significant using a chi-squared test, though the magnitude of the difference between men and women is smaller than for unmatched data.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_7'>8</ns0:ref> shows acceptance rates for matched data, analogous to Figure <ns0:ref type='figure'>5</ns0:ref>. We calculate statistical significance with a chi-squared test, with a Benjamini-Hochberg correction <ns0:ref type='bibr' target='#b3'>[4]</ns0:ref>. For programming languages, acceptance rates for three (Ruby, Python, and C++) are significantly higher for women, and one (PHP) is significantly higher for men.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>9</ns0:ref> shows acceptance rates for matched data by pull request index, that is, for each user's first pull request, second and third pull request, fourth through seventh pull request, and so on. We perform chi-squared tests and Benjamini-Hochberg corrections here as well. 
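The following is a rough sketch of propensity score matching as described above (hypothetical covariates and simulated data; the paper's actual procedure, covariates, and equivalence classes are described in its appendix):

```python
# Rough sketch of propensity score matching. Covariates and data are simulated;
# this is not the authors' matching code.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
covariates = rng.normal(size=(n, 3))     # e.g., # projects, # pulls, change size
is_woman = rng.integers(0, 2, size=n)    # group indicator

# 1. Estimate propensity scores: P(woman | covariates).
scores = LogisticRegression().fit(covariates, is_woman).predict_proba(covariates)[:, 1]

# 2. Greedy nearest-neighbour matching on the propensity score (with replacement,
#    for brevity); unmatched observations are discarded before comparing groups.
women = np.where(is_woman == 1)[0]
men = np.where(is_woman == 0)[0]
pairs = [(w, men[np.argmin(np.abs(scores[men] - scores[w]))]) for w in women]
print(f"{len(pairs)} matched pairs")
```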
Compared [...] significance.</ns0:p><ns0:p>From Figure <ns0:ref type='figure' target='#fig_8'>9</ns0:ref>, we might hypothesize that the overall difference in acceptance rates between genders is due to just the first pull request. To examine this, we separate the pull request acceptance rate into:</ns0:p><ns0:p>• One-Timers: Pull requests from people who only ever submit one pull request.</ns0:p><ns0:p>• Regulars' First: First pull requests from people who go on to submit other pull requests.</ns0:p><ns0:p>• Regulars' Rest: All other (second and beyond) pull requests.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_0'>10</ns0:ref> shows the results. Overall, women maintain a significantly higher acceptance rate beyond the first pull request, disconfirming the hypothesis.</ns0:p><ns0:p>We next investigate acceptance rate by gender and perceived gender using matched data.</ns0:p><ns0:p>Here we match slightly differently, matching on identifiability (gendered, unknown, or neutral) rather than use of an identicon. Unfortunately, matching on identifiability (and the same covariates described in this section) reduces the sample size of gender neutral pulls by an order of magnitude, substantially reducing statistical power. 5 Consequently, here we relax the matching criteria by broadening the equivalence classes for numeric variables. Figure <ns0:ref type='figure' target='#fig_10'>11</ns0:ref> plots the result.
<ns0:div><ns0:p>For outsiders, while men and women perform similarly when their genders are neutral, when their genders are apparent, men's acceptance rate is 1.2% higher than women's (χ²(df = 1, n = 419,411) = 7, p < .01).</ns0:p><ns0:p>How has this matched analysis of the data changed our findings? Our observation about overall acceptance rates being higher for women remains, although the difference is smaller.</ns0:p><ns0:p>Our observation about women's acceptance rates being higher than men's for all programming languages is now mixed; instead, women's acceptance rate is significantly higher for three</ns0:p><ns0:p>5 For the sake of completeness, the result of that matching process is included in the Supplemental Files.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>languages, but significantly lower for one language. Our observation that womens' acceptance rates continue to outpace mens' becomes less clear. Finally, for outsiders, although genderneutral women's acceptance rates no longer outpace men's to a statistically significant extent, men's pull requests continue to be accepted more often than women's when the contributor's gender is apparent.</ns0:p></ns0:div>
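The Benjamini-Hochberg correction applied to the per-language comparisons above can be illustrated as follows; the p-values are hypothetical placeholders, not the study's results:

```python
# Illustrative Benjamini-Hochberg (FDR) correction over per-language p-values.
# The p-values are hypothetical placeholders.
from statsmodels.stats.multitest import multipletests

p_values = [0.0004, 0.03, 0.2, 0.0001, 0.6, 0.01]   # one per language, hypothetical
reject, p_adjusted, _, _ = multipletests(p_values, alpha=0.05, method="fdr_bh")
print(list(zip(p_values, p_adjusted.round(4), reject)))
```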
<ns0:div><ns0:head>Discussion</ns0:head><ns0:p>Why Do Differences Exist in Acceptance Rates?</ns0:p><ns0:p>To summarize this paper's observations:</ns0:p><ns0:p>1. Women are more likely to have pull requests accepted than men.</ns0:p><ns0:p>2. Women continue to have high acceptance rates as they do pull requests on more projects.</ns0:p><ns0:p>3. Women's pull requests are less likely to serve an documented project need.</ns0:p><ns0:p>4. Women's changes are larger.</ns0:p><ns0:p>5. Women's acceptance rates are higher for some programming languages.</ns0:p><ns0:p>6. Men outsiders' acceptance rates are higher when they are identifiable as men.</ns0:p><ns0:p>We next consider several alternative theories that may explain these observations as a whole.</ns0:p><ns0:p>Given observations 1-5, one theory is that a bias against men exists, that is, a form of reverse discrimination. However, this theory runs counter to prior work (e.g., <ns0:ref type='bibr' target='#b30'>[29]</ns0:ref>), as well as observation 6.</ns0:p><ns0:p>Another theory is that women are taking fewer risks than men. This theory is consistent with Byrnes' meta-analysis of risk-taking studies, which generally find women are more risk-averse than men <ns0:ref type='bibr' target='#b6'>[7]</ns0:ref>. However, this theory is not consistent with observation 4, because women tend Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>to change more lines of code, and changing more lines of code correlates with an increased risk of introducing bugs <ns0:ref type='bibr' target='#b27'>[27]</ns0:ref>.</ns0:p><ns0:p>Another theory is that women in open source are, on average, more competent than men.</ns0:p><ns0:p>In Lemkau's review of the psychology and sociology literature, she found that women in maledominated occupations tend to be highly competent <ns0:ref type='bibr' target='#b25'>[25]</ns0:ref>. This theory is consistent with observations 1-5. To be consistent with observation 6, we need to explain why women's pull request acceptance rate drops when their gender is apparent. An addition to this theory that explains observation 6, and the anecdote described in the introduction, is that discrimination against women does exist in open source.</ns0:p><ns0:p>Assuming this final theory is the best one, why might it be that women are more competent, on average? One explanation is survivorship bias: as women continue their formal and informal education in computer science, the less competent ones may change fields or otherwise drop out. Then, only more competent women remain by the time they begin to contribute to open source. In contrast, less competent men may continue. While women do switch away from STEM majors at a higher rate than men, they also have a lower drop out rate then men <ns0:ref type='bibr' target='#b7'>[8]</ns0:ref>, so the difference between attrition rates of women and men in college appears small. Another explanation is self-selection bias: the average woman in open source may be better prepared than the average man, which is supported by the finding that women in open source are more likely to hold Master's and PhD degrees <ns0:ref type='bibr' target='#b1'>[2]</ns0:ref>. Yet another explanation is that women are held to higher performance standards than men, an explanation supported by Gorman and Kmec's analysis of the general workforce <ns0:ref type='bibr' target='#b12'>[13]</ns0:ref>, as well as Heilman and colleagues' controlled experiments <ns0:ref type='bibr' target='#b17'>[18]</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Are the Differences Meaningful?</ns0:head><ns0:p>We have demonstrated statistically significant differences between men's and women's pull request acceptance rates, such as that, overall, women's acceptance rates are 4.1% higher than men's. We caution the reader against interpreting too much from statistical significance; for big data studies such as this one, even small differences can be statistically significant. Instead, we encourage the reader to examine the size of the observed effects. We next examine effect size from two different perspectives.</ns0:p><ns0:p>Using our own data, let us compare acceptance rate to two other factors that correlate with pull request acceptance rates. First, the slopes of the lines in Figure <ns0:ref type='figure' target='#fig_3'>3</ns0:ref> indicate that, generally, as developers become more experienced, their acceptance rates increase fairly steadily. For instance, as experience doubles from 16 to 32 pull requests for men, pull acceptance rate increases by 2.9%. Second, the larger a pull request is, the less likely it is to be accepted <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref>. In our pull request data, for example, increasing the number of files changed from 10 to 20 decreases the acceptance rate by 2.0%.</ns0:p><ns0:p>Using others' data, let us compare our effect size to effect sizes reported in other studies of gender bias. Davison and Burke's meta-analysis of sex discrimination found an average Pearson correlation of r = .07, a standardized effect size that represents the linear dependence between gender and job selection <ns0:ref type='bibr' target='#b9'>[10]</ns0:ref>. In comparison, our 4.1% overall acceptance rate difference is equivalent to r = .02. 6 Thus, the effect we have uncovered is only about a quarter of the effect in typical studies of gender bias.</ns0:p></ns0:div>
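The comparison of effect sizes above relies on converting a 2x2 association into a Pearson-style correlation. One standard way (not necessarily the exact computation behind the footnote) is the phi coefficient, sketched below with hypothetical numbers chosen only to land near the reported r = .02:

```python
# Phi coefficient for a 2x2 table (equivalent to Pearson's r): r = sqrt(chi2 / N).
# The chi-squared value and N below are hypothetical illustrations.
import math

chi2, n = 1000.0, 2_500_000
print(round(math.sqrt(chi2 / n), 3))  # ~0.02
```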
<ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>In closing, as anecdotes about gender bias persist, it is imperative that we use big data to better understand the interaction between genders. While our big data study does not prove that differences between gendered interactions are caused by bias among individuals, combined with qualitative data about bias in open source <ns0:ref type='bibr' target='#b30'>[29]</ns0:ref>, the results are troubling.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Our results show that women's pull requests tend to be accepted more often than men's, yet women's acceptance rates are higher only when they are not identifiable as women. In the context of existing theories of gender in the workplace, plausible explanations include the presence of gender bias in open source, survivorship and self-selection bias, and women being held to higher performance standards.</ns0:p><ns0:p>While bias can be mitigated -such as through 'bias busting' workshops, 7 open source codes of conduct, 8 and blinded interviewing 9 -the results of this paper do not suggest which, if any, of these measures should be adopted. More simply, we hope that our results will help the community to acknowledge that biases are widespread, to reevaluate the claim that open source is a pure meritocracy, and to recognize that bias makes a practical impact on the practice of software development.</ns0:p><ns0:p>as an insider when the pull request explicitly listed the person as a collaborator or owner, 10 and classified them as an outsider otherwise. This analysis has inaccuracies because GitHub users can change roles from outsider to insider and vice-versa. As an example, about 5.9% of merged pull requests from both outsider female and male users were merged by the outsider pull-requestor themselves, which is not possible, since outsiders by definition do not have the authority to self-merge. We emailed such an outsider, who indicated that, indeed, she was an insider when she made that pull request. We attempted to mitigate this problem by using a technique similar to that used in prior work <ns0:ref type='bibr' target='#b14'>[15,</ns0:ref><ns0:ref type='bibr' target='#b37'>36]</ns0:ref>. From contributors that we initially marked as outsiders, for a given pull request on a project, we instead classified them as insiders when they met any of three conditions. The first condition was that they had closed an issue on the project within 90 days prior to opening the given pull request. The second condition was that they had merged the given pull request or any other pull request on the project in the prior 90 days. The third condition was that they had closed any pull request that someone else had opened in the prior 90 days. Meeting any of these conditions implies that, even if the contributor was an outsider at the time of our scraping, they were probably an insider at the time of the pull request.</ns0:p></ns0:div>
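The three re-classification conditions described above amount to a simple predicate over a user's recent activity on the project. A sketch follows; the event and pull request records (with actor, type, time, and opener fields) are hypothetical stand-ins, not the authors' data structures:

```python
# Sketch of the insider re-classification rule described above. Events and pull
# requests are assumed to be simple records with the fields used below.
from datetime import timedelta

WINDOW = timedelta(days=90)

def was_insider(user, pull_request, project_events):
    """True if, in the 90 days before this pull request, the user closed an issue,
    merged a pull request, or closed a pull request someone else had opened."""
    start = pull_request.opened_at - WINDOW
    for event in project_events:
        if event.actor != user or not (start <= event.time <= pull_request.opened_at):
            continue
        if event.type == "issue_closed":
            return True
        if event.type == "pull_merged":
            return True
        if event.type == "pull_closed" and event.opener != user:
            return True
    return False
```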
<ns0:div><ns0:head>Gender Linking</ns0:head><ns0:p>To evaluate gender bias on GitHub, we first needed to determine the genders of GitHub users.</ns0:p><ns0:p>Our technique uses several steps to determine the genders of GitHub users. First, from the GHTorrent data set, we extract the email addresses of GitHub users. Second, for each email address, we use the search engine in the Google+ social network to search for users with that email address. The search works for both Google users' email addresses (@gmail.com), as well as other email addresses (such as @ncsu.edu). Third, we parse the returned users' 'About' page to scrape their gender. Finally, we include only the genders 'Male' and 'Female' (334,578 users who make pull requests) because there were relatively few other options chosen (159 users). We also automated and parallelized this process. This technique capitalizes on several properties of the Google+ social network. First, if a Google+ user signed up for the social network using an email address, the search results for that email address will return just that user, regardless of whether that email address is publicly listed or not. Second, signing up for a</ns0:p><ns0:p>Google account currently requires you to specify a gender (though 'Other' is an option) 11 , and, in our discussion, we interpret their use of 'Male' and 'Female' in gender identification (rather than sex) as corresponding to our use of the terms 'man' and 'woman'. Third, when Google+ was originally launched, gender was publicly visible by default. 12 </ns0:p></ns0:div>
<ns0:div><ns0:head>Merged Pull Requests</ns0:head><ns0:p>Throughout this study, we measure pull requests that are accepted by calculating developers' merge rates, that is, the number of pull requests merged divided by the sum of the number of pull requests merged, closed, and still open. We include pull requests still open in the denominator in this calculation because pull requests that are still open could be indicative of a pull requestor being ignored, which has the same practical impact as rejection.</ns0:p></ns0:div>
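The acceptance measure defined above reduces to a one-line computation; the counts below are hypothetical:

```python
# Merge rate as defined above: merged / (merged + closed + still open).
def merge_rate(merged, closed, still_open):
    return merged / (merged + closed + still_open)

print(merge_rate(merged=780, closed=200, still_open=20))  # 0.78 (hypothetical counts)
```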
<ns0:div><ns0:head>Project Licensing</ns0:head><ns0:p>To determine whether a project uses an open source license, we used an experimental GitHub API that uses heuristics to determine a project's license. 13 We classified a project (and thus the pull request on that project) as open source if the API reported a license that the Open Source</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Initiative considers in compliance with the Open Source Definition, 14 which were afl-3.0, agpl-3.0, apache-2.0, artistic-2.0, bsd-2-clause, bsd-3-clause, epl-1.0, eupl-1.1, gpl-2.0, gpl-3.0, isc, lgpl-2.1, lgpl-3.0, mit, mpl-2.0, ms-pl, ms-rl, ofl-1.1, and osl-3.0. Projects were not considered open source if the API did not return a license for a project, or the license was bsd-3-clauseclear, cc-by-4.0, cc-by-sa-4.0, cc0-1.0, other, unlicense, or wtfpl.</ns0:p></ns0:div>
<ns0:div><ns0:head>Determining Gender Neutral and Gendered Profiles</ns0:head><ns0:p>To determine gendered profiles, we first parsed GitHub profile pages to determine whether each user was using a profile image or an identicon. Of the users who performed at least one pull request, 213,882 used a profile image and 104,648 used an identicon. We then ran display names and login names through a gender inference program, which maps a name to a gender. 15 </ns0:p><ns0:p>We classified a GitHub profile as gendered if each of the following were true:</ns0:p><ns0:p>• a profile image (rather than an identicon) was used, and</ns0:p><ns0:p>• the gender inference tool output a gender at the highest level of confidence (that is, 'male' or 'female,' rather than 'mostly male,' 'mostly female,' or 'unknown').</ns0:p><ns0:p>We classified profile images as identicons using ImageMagick, 16 looking for an identicon-specific file size, image dimension, image class, and color depth. In an informal inspection of profile images, we found examples of non-photographic images that conveyed gender cues, so we did not attempt to distinguish between photographic and non-photographic images when classifying profiles as gendered.</ns0:p><ns0:p>To classify profiles as gender neutral, we added a manual step. Given a GitHub profile that used an identicon (thus, a gender could not be inferred from a profile image) and a name that the gender inference tool classified as 'unknown', we manually verified that the profile could not be easily identified as belonging to a specific gender. We did this in two phases. In the first phase, we assembled a panel of 3 people to evaluate profiles for 10 seconds each. The panelists were a convenience sample of graduate and undergraduate students from North Carolina State University. Panelists were of American (man), Chinese (man), and Indian (woman) origin, representative of the three most common nationalities on GitHub. We used different nationalities because we wanted the panel to be able to identify, if possible, the genders of GitHub usernames with different cultural origins. In the second phase, we eliminated two inefficiencies from the first phase: (a) because the first panel estimated that for 99% of profiles they only looked at login names and display names, we only showed this information to the second panel, and (b) because the first panel found 10 seconds was usually more time than was necessary to assess gender, we allowed panelists in the second phase to assess names at their own pace. Across both phases, panelists were instructed to signal if they could identify the gender of the GitHub profile. To estimate panelists' confidence, we considered using a threshold like '90% confident of the gender,' but found that this was too ambiguous in pilot panels. Instead, we instructed panelists to signal if they would be comfortable addressing the GitHub user as 'Mister' or 'Miss' in an email, given that the only thing they knew about the user was their profile. We considered a GitHub profile as gender neutral if all of the following conditions were met:</ns0:p><ns0:p>• an identicon (rather than a profile image) was used,</ns0:p><ns0:p>• the gender inference tool output 'unknown' for the user's login name and display name, and</ns0:p><ns0:p>• none of the panelists indicated that they could identify the user's gender.</ns0:p><ns0:p>Rather than asking a panel to laboriously evaluate every profile for which the first two criteria applied, we instead asked panelists to inspect a random subset. 
Across both panels, panelists inspected 3000 profiles of roughly equal numbers of women and men. We chose the number 3000 by doing a rough statistical power analysis using the results of the first panel to determine how many profiles panelists should inspect during the second panel to obtain statistically significant results. Of the 3000, panelists eliminated 409 profiles for which at least one panelist could infer a gender.</ns0:p></ns0:div>
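The two classification rules can be summarized in code. This R sketch is illustrative: the identicon "signature" values passed as defaults are assumptions rather than the exact values matched against, `inferred_gender` stands for the output of the modified genderComputer tool, and the image-class check is omitted for brevity.

```r
# Identicon check via ImageMagick's `identify`: %b = file size, %w/%h =
# dimensions, %z = colour depth. The default signature values are placeholders.
is_identicon <- function(path, sig_size = "2979B", sig_dim = "420", sig_depth = "8") {
  out <- system2("identify",
                 c("-format", shQuote("%b %w %h %z"), shQuote(path)),
                 stdout = TRUE)
  f <- strsplit(out, " ")[[1]]
  length(f) == 4 && f[1] == sig_size && f[2] == sig_dim &&
    f[3] == sig_dim && f[4] == sig_depth
}

# Profile labels from the rules above. `panel_saw_gender` is TRUE when any
# panelist signalled a gender for the profile.
classify_profile <- function(identicon, inferred_gender, panel_saw_gender) {
  if (!identicon && inferred_gender %in% c("male", "female")) {
    "gendered"
  } else if (identicon && inferred_gender == "unknown" && !panel_saw_gender) {
    "gender neutral"
  } else {
    "neither"
  }
}
```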
<ns0:div><ns0:head>Matching Procedure</ns0:head><ns0:p>To enable more confident causal inferences about the effect of gender, we used propensity score matching to remove the effect of confounding factors from our acceptance rate analyses. In our analyses, we used men as the control group and women as the treatment group. We treated each pull request as a data point. The covariates we matched on were number of lines added, number of lines removed, number of commits, number of files changed, pull index (the creator's nth pull request), number of references to issues, license (open source or not), creator type (owner, collaborator, or outsider), file extension, and whether the pull requestor used an identicon. We excluded pull requests for which we were missing data for any covariate.</ns0:p><ns0:p>We used the R library MatchIt <ns0:ref type='bibr' target='#b18'>[19]</ns0:ref>. Although MatchIt offers a variety of matching techniques, such as full matching and nearest neighbor, we found that only the exact matching technique completed the matching process, due to our large number of covariates and data points. With exact matching, each data point in the treatment group must match exactly with one or more data points in the control group. This presents a problem for covariates with wide distributions (such as lines of code) because it severely restricts the technique's ability to find matches. For instance, if a woman made a pull request with 700 lines added and a man made a pull request with 701 lines added that was otherwise identical (same number of lines removed, same file extension, and so on), these two data points would not be matched and would be excluded from further analysis. Consequently, we pre-processed each numerical variable into the floor of its log2. Thus, for example, both 700 and 701 are transformed into 9, and thus can be exactly matched.</ns0:p><ns0:p>After exact matching, the means of all covariates are balanced, that is, their weighted means are equal across genders. Raw numerical data, since we transformed it, is not perfectly balanced, but is substantially more balanced than the original data; each covariate showed a 96% or better balance improvement.</ns0:p><ns0:p>Finally, as we noted in the matching procedure for gendered and gender-neutral contributors, to retain reasonable sample sizes, we relaxed the matching criteria by broadening the equivalence classes for numeric variables. Specifically, for lines added, lines removed, commits, files changed, pull index, and references, we transformed the data using log10 rather than log2.</ns0:p></ns0:div>
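A minimal sketch of this step with MatchIt follows, assuming a data frame `pulls` with one row per pull request. Column names are illustrative, only a few of the numeric covariates are shown bucketed, and the `+ 1` guard against `log2(0)` is an assumption rather than the paper's exact rule.

```r
library(MatchIt)

# Bucket wide-ranging counts so that exact matching can find partners.
log_bucket <- function(x) floor(log2(x + 1))

pulls$lines_added_b   <- log_bucket(pulls$lines_added)
pulls$lines_removed_b <- log_bucket(pulls$lines_removed)
pulls$commits_b       <- log_bucket(pulls$commits)

m <- matchit(is_woman ~ lines_added_b + lines_removed_b + commits_b +
               files_changed + pull_index + references + open_source +
               creator_type + file_extension + uses_identicon,
             data = pulls, method = "exact")

matched <- match.data(m)  # matched, weighted sample for the acceptance-rate comparison
```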
<ns0:div><ns0:head>Missing Data</ns0:head><ns0:p>In some cases, data were missing when we scraped the web to obtain data to supplement the GHTorrent data. We describe how we dealt with these data here.</ns0:p><ns0:p>First, information on file types was missing for pull requests that added or deleted more than 1000 lines. The problem was that GitHub does not include file type data on initial page response payloads for large changes, presumably for efficiency reasons. This missing data affects the results of the file type analysis and the propensity score matching analysis; in both cases, pull requests of over 1000 lines added or deleted are excluded.</ns0:p><ns0:p>Second, when retrieving GitHub user images, we occasionally received abnormal server response errors, typically in the form of HTTP 404 errors. Thus, we were unable to determine whether the user used a profile image or an identicon for 10,458 users (3.2% of users, accounting for 2.0% of pull requests). We excluded these users and pull requests when analyzing data on gendered users.</ns0:p><ns0:p>Third, when retrieving GitHub pull request web pages, we occasionally received abnormal server responses as well. In these cases, we were unable to obtain data on the size of the change (lines added, files changed, etc.), the state (closed, merged, or open), the file type, or the user who merged or closed it, if any. This data comprises 5.15% of pull requests for which we had genders of the pull request creator. These pull requests are excluded from all analyses.</ns0:p></ns0:div>
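These exclusions amount to simple filters; an R sketch with illustrative column names:

```r
# Drop pull requests whose page scrape failed (no state or size data).
pulls <- pulls[!is.na(pulls$state), ]

# For gendered-profile analyses, drop users whose avatar could not be retrieved.
gendered_pulls <- pulls[!is.na(pulls$uses_identicon), ]

# For file-type and matching analyses, drop very large changes, whose file
# types GitHub's initial page payload does not report.
file_type_pulls <- pulls[pulls$lines_added <= 1000 & pulls$lines_removed <= 1000, ]
```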
<ns0:div><ns0:head>Threats</ns0:head><ns0:p>One threat to this analysis is that additional covariates, including ones that we could not collect, may influence acceptance rate. One example is that we did not account for the GitHub user judging pull requests, even though such users are central to the pull request process. Another example is pull requestors' programming experience outside of GitHub. Two covariates we collected, but did not control for, are the project the pull request is made to and the developer deciding on the pull request. We did not control for these covariates because we reasoned that doing so would discard too many data points during matching.</ns0:p><ns0:p>Another threat to this analysis is the existence of robots that interact with pull requests. For example, 'Snoopy Crime Cop' 17 appears to be a robot that has made thousands of pull requests. If such robots used an email address that linked to a Google profile that listed a gender, our merge rate calculations might be skewed unduly. To check for this possibility, we examined profiles of GitHub users that we have genders for and who have made more than 1000 pull requests. The result was tens of GitHub users, none of whom appeared to be a robot. So in terms of our merge calculation, we are somewhat confident that robots are not substantially influencing the results.</ns0:p><ns0:p>Another threat is that men and women may misrepresent their genders. If so, we inaccurately label some men on GitHub as women, and vice versa. While emailing the thousands of pull requestors described in this study to confirm their gender is feasible, doing so is ethically questionable; GHTorrent no longer includes personal email addresses, after GitHub users complained of receiving too many emails from researchers. 18</ns0:p><ns0:note place='foot' n='18'>https://github.com/ghtorrent/ghtorrent.org/issues/32</ns0:note></ns0:div>
<ns0:div><ns0:p>Another threat is GitHub developers' use of aliases <ns0:ref type='bibr' target='#b36'>[35]</ns0:ref>; the same person may appear as multiple GitHub users. Each alias artificially inflates the number of developers shown in the histograms in Figure <ns0:ref type='figure'>2</ns0:ref>. Most pull request-level analyses, which represent most of the analyses performed in this paper, are unaffected by aliases that use the same email address.</ns0:p><ns0:p>Another threat is inaccuracies in our assessment of whether a GitHub member's gender is identifiable. First, the threat in part arises from our use of the genderComputer program. When genderComputer labels a GitHub profile as belonging to a man, but a human would perceive the user's profile as belonging to a woman (or vice versa), then our classification of gendered profiles is inaccurate in such cases. We attempted to mitigate this risk by discarding any profiles in the gendered profile analysis that genderComputer classified with low confidence. Second, the threat in part arises from our panel. For profiles we labeled as gender-neutral, our panel may not have picked out subtle gender features in GitHub users' profiles. Moreover, project owners may have used gender signals that we did not; for example, if a pull requestor sends an email to a project owner, the owner may be able to identify the requestor's gender even though our technique could not.</ns0:p><ns0:p>A similar threat is that users who judge pull requests encounter gender cues by searching more deeply than we assume. We assume that the majority of users judging pull requests will look only at the pull request itself (containing the requestor's username and small profile image) and perhaps the requestor's GitHub profile (containing username, display name, larger profile image, and GitHub contribution history). Likewise, we assume that very few users judging pull requests will look into the requestor further, such as into their social media profiles. Although judges could have theoretically found requestors' Google+ profiles using their email addresses (as we did), this seems unlikely for two reasons. First, while pull requests have explicit links to a requestor's GitHub profile, they do not have explicit links to a requestor's social media profile; the judge would have to instead seek them out, possibly using a difficult-to-access email address. Second, we claim that our GitHub-to-Google+ linking technique is a novel research technique; assuming that it is also novel in practice, users who judge pull requests would not know about it and therefore would be unable to look up a user's gender on their Google+ profile.</ns0:p><ns0:p>Another threat is that of construct validity, whether we are measuring what we aim to measure. One example is our inclusion of 'open' pull requests as a sign of rejection, in addition to the 'closed' status. Rather than a sign of rejection, open pull requests may simply have not yet been decided upon. However, these pull requests had been open for at least 126 days, the time between when the last pull request was included in GHTorrent and when we did our web scrape. 
Given Gousios and colleagues' finding that 95% of pull requests are closed within 26 days <ns0:ref type='bibr' target='#b14'>[15]</ns0:ref>, insiders likely had ample time to decide on open pull requests. Another example is whether pull requests that do not link to issues signal that the pull request does not fulfill a documented need. A final example is that a GitHub user might be an insider without being an explicit owner or collaborator; for instance, a user may be well-known and trusted socially, yet not be granted collaborator or owner status, in an effort to maximize security by minimizing a project's attack surface <ns0:ref type='bibr' target='#b20'>[20]</ns0:ref>.</ns0:p><ns0:p>Another threat is that of external validity; do the results generalize beyond the population studied? While we chose GitHub because it is the largest open source community, other communities such as SourceForge and BitBucket exist, along with other ways to make pull requests, such as through the git version control system directly. Thus, our study provides limited generalizability to other open source ecosystems. Moreover, while we studied a large population of contributors, they represent only part of the total population of developers on GitHub, be-</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1: GitHub user 'JustinAMiddleton' makes a pull request; the repository owner 'akofink' accepts it by merging it. The changes proposed by JustinAMiddleton are now incorporated into the project.</ns0:figDesc><ns0:graphic coords='4,77.67,108.86,453.54,326.65' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3 displays the results. Orange points represent the mean acceptance rate for women, and purple points represent acceptance rates for men. Shaded regions indicate the pointwise 95% Clopper-Pearson confidence interval.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 6 :</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6: Pull request acceptance rate by gender and perceived gender, with 95% Clopper-Pearson confidence intervals, for insiders (left) and outsiders (right)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 7 :</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7: Acceptance rates for men and women for all data, outsiders, and open source projects using matched data.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 8 :</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8: Acceptance rates for men and women using matched data by file type for programming languages (top) and non-programming languages (bottom).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 9 :</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9: Pull request acceptance rate over time using matched data.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 10 :</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10: Acceptance rates for men and women broken down by category.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 11 :</ns0:head><ns0:label>11</ns0:label><ns0:figDesc>Figure 11: Pull request acceptance rate by gender and perceived gender, using matched data.</ns0:figDesc></ns0:figure>
<ns0:note place='foot' n='17'>https://github.com/snoopycrimecop</ns0:note>
<ns0:note place='foot' n='2'>https://github.com/about/press</ns0:note>
<ns0:note place='foot' n='4'>https://sites.google.com/site/bughunteruniversity/nonvuln/discover-your-name-based-on-e-mail-address</ns0:note>
<ns0:note place='foot' n='6'>Calculated using the chies function in the compute.es R package (https://cran.r-project.org/web/packages/compute.es/compute.es.pdf)</ns0:note>
<ns0:note place='foot' n='10'>https://help.github.com/articles/what-are-the-different-access-permissions/#user-accounts</ns0:note>
<ns0:note place='foot' n='14'>https://opensource.org/licenses</ns0:note>
<ns0:note place='foot' n='15'>This tool builds on Vasilescu and colleagues' tool <ns0:ref type='bibr' target='#b35'>[34]</ns0:ref>, but we removed some of Vasilescu and colleagues' heuristics to be more conservative. Our version of the tool can be found here: https://github.com/DeveloperLiberationFront/genderComputer</ns0:note>
<ns0:note place='foot' n='16'>http://www.graphicsmagick.org/GraphicsMagick.html</ns0:note>
</ns0:body>
" | "We thank the reviewers for their careful and thoughtful comments. We have carefully
considered the reviewers' comments and replied in line below.
There is one change that we have initiated ourselves that we need to point out. In inspecting our
analysis scripts, we found a mistake in our matched identifiability analysis (the last analysis in
the paper, the results of which appear in Figure 11). Specifically, we matched data on
“identicon” (use of an identicon rather than a picture) rather than “identifiability” (identifiable as
gendered or gender neutral through genderComputer, profile image analysis, and the panel).
Unfortunately the fix to this problem was not straightforward, as we now explain in the paper:
Unfortunately, matching on identifiability (and the same covariates described in
this section) reduces the sample size of gender neutral pulls by an order of
magnitude, substantially reducing statistical power [Footnote: For the sake of
completeness, the result of that matching process is included in the Supplemental
Files.]. Consequently, here we relax matching criteria by broadening the
equivalence classes for numeric variables. Figure 11 plots the result.
And the details in the appendix:
Finally, as we noted in the matching procedure for gendered and gender-neutral
contributors, to retain reasonable sample sizes, we relaxed the matching criteria
by broadening the equivalence classes for numeric variables. Specifically, for
lines added, lines removed, commits, files changed, pull index, and references,
we transformed the data using log10 rather than log2.
The results are updated accordingly.
A minor note is that we have added the reviewers (including the named ones) to the appendix
as acknowledgements.
Finally, we have attached tracked changes to the bottom of this letter; Acrobat was somewhat
aggressive with marking changes, but it should still serve as a reasonable guide.
Meta Review
All reviewers agree that the authors present an impressive and important statistical analysis relating
gender identity and pull request acceptance.
Reviewer 1 expresses concerns that most of the write up of the findings does not address the core
research question about bias. Reviewer 1 suggests a reorganization of the material that could
strengthen the connection with the primary research question, and which would also help to
articulate the points of the insider and competence effects.
Reviewer 2 suggests a number of small changes that help to avoid mis-interpretation of the paper's
key results.
Reviewer 3 indicates that certain key elements of the approach require clarification, in particular the
identification of non-identifiable women, the identification of insiders, and the use of propensity score
matching.
Reviewer 4 raises concerns that seem related to those of Reviewer 1, namely the connection of the
findings with bias. The comments by Reviewer 4 call for a critical explanation to what extent the
results can be truly attributed to bias.
Reviewer 4 also missed an outlook of what can be done with the results: Could the paper discuss to
what actions these results might lead?
All in all the reviews call for the following:
1. Consider whether a re-ordering of the material as suggested by reviewer 1 helps to get the
message better across
2. Explicitly discuss, maybe in a separate discussion subsection, how the presented empirical
evidence leads to the conclusion of bias (or how such a conclusion could be drawn)
3. Consider discussing potential implications / actionability of the results (e.g., what can
projects do? what can open source contributors do?)
We thank the reviewers for these comments, and have discuss each issue below with each
reviewer’s original points.
I also was surprised that an otherwise strong and compelling paper ends with a somewhat weak five
line conclusion section. This seems like the ideal place to clearly articulate the key findings of the
paper.
We agree. We’ve extended and clarified the conclusion accordingly:
Our results show that women’s pull requests tend to be accepted more often than
men’s, yet women’s acceptance rates are higher only when they are not identifiable
as women. In the context of existing theories of gender in the workplace, plausible
explanations include the presence of gender bias in open source, survivorship and
self-selection bias, and women being held to higher performance standards.
While bias can be mitigated – such as through “bias busting” workshops, open
source codes of conduct, and blinded interviewing– the results of this paper do not
suggest which, if any, of these measures should be adopted. More simply, we hope
that our results will help the community to acknowledge that biases are widespread,
to reevaluate the claim that open source is a pure meritocracy, and to recognize that
bias makes a practical impact on the practice of software development.
The appendices contain important material. I am not sure why they are put as an appendix, but I do
consider them as an integral part of the paper that needs to be included in the final version of the
paper as well.
We included the methodology section at the end, to facilitate readability, in the style of a
paper in the journal Science.
The reviews don't call for additional research or experiments. I hope and trust that the authors will be
able to address the various concerns expressed, which mostly amount to strengthening the analysis
and presentation.
Note that Reviewer 1 has identified herself, and has offered the possibility to talk about the paper.
This may also be helpful for addressing the concerns expressed by reviewer 4.
I look forward to receiving the revised manuscript and author response to these suggestions!
Basic reporting
The paper is clearly written. The literature pertaining to the main finding of the paper, that is the
existence of gender bias, and the larger context of women in computing, is coherent and relevant.
Figures are relevant and easy to interpret.
Experimental design
This study is indeed one of the largest field data source on gender bias. Few studies exist that
pertain to “real world” contexts and go beyond testing the existence of bias in an experimental
settings. As such, this paper provides a significant contribution to the literature, and is beneficial to
the literature.
Validity of the findings
The data are unique, and at an unprecedented scale, offering significant validity. The research
question is relevant, meaningful, and well designed. The investigation is rigorous and the methods
well described. One issue to correct is to more clearly link the main research question to the
analysis. Most of the write up of the findings do not address the main research question – that is, the
question stated on page 5 “to what extent does gender bias exist among people who judge GH
requests” is not answered until page 23. Most of the analysis is spent answering the question “why
are women’s contributions accepted at higher rates than men”.
We can see how the reader would feel this way with the benefit of hindsight. But in actuality
our question was always about gender bias -- it's just that the bias was initially masked by
insider/outsider status and competency differences.
I would recommend re-organizing the write up of the findings – start with the evidence on bias, then
ask the question about women’s higher acceptance rate when bias is removed (e.g. when the
gender is unknown to the reviewer), controlling for the appropriate factors and exploring why that
might be. We also do not know much in the paper about the people who judge GH requests,
although they are central to the research question. This limitation should be noted.
We agree that the paper says nothing explicitly about the people who judge the pull
requests. We have modified the research question accordingly:
●
To what extent does gender bias exist when pull requests are judged on GitHub?
Comments for the Author
Note: my comments come from the perspective of social science – therefore, my recommendations
on methods and specific literature to look to may or may not perfectly match the computing literature
and approaches.
This paper makes a significant contribution to the literature by providing large-scale real world
evidence of gender bias in Open Source. These findings have significant implications for the design
of open source communities and technology workplaces and should be published.
There is a flaw in the way the paper is constructed that warrants a rewrite/re-organization but I think
is absolutely achievable:
The paper starts with a research question on the prevalence of bias in OS, and then spends almost
all of its analysis on trying to explain a “surprising” finding that women’s contributions are rated
higher when bias is removed. This leaves the reader wondering why they are spending so much
time answering what seems to be a different question. The finding that clearly answers the research
question, is found in Figure 11: that is, the paper demonstrates that women's contributions are
significantly more likely to be accepted when their gender is masked (consistent with the literature),
and that the effect is worst for outsiders (which is an important contribution to the literature on how
'insider status' may mitigate bias).
I would recommend starting the analysis with answering the main research question - to what extent
is bias present in women's OS contributions? Show the findings from Figure 11th. Then, pose the
follow up question 'Why are women's acceptance rates so much higher than men's in the gender
neutral condition – what could explain this?' - Then, posit one series of hypotheses that could
explain the difference and present a single statistical model with all the control variables as opposed
to a series of single variable cotrols (if possible). The one by one analysis does not offer one model
to control for all variables and understanding the relative impact of one variable compared to
another. Then, go into the further delving of the differences by over time data and language.
We agree that such an approach is generally preferable for investigating bias. However, the
primary reason for not doing it that way is that the gender neutral subsample is so small
that it is impossible to do statistically meaningful follow-ups, such as dividing the sample
by programming language. The current presentation also allows for a gradual buildup of
methodology (the easiest analysis first, the most complicated last), which eases readability.
We interpret this criticism as a presentation issue rather than a validity issue, so we have
opted to let the current structure remain.
In our matched analysis, our current presentation does enable cross-variable effect
comparison because non-gender variables are held constant in the matching process.
The discussion can also strengthen articulating the theoretical contribution of the paper by
elaborating on 1) the mitigating effect of being insiders on bias, and 2) the finding that women may
be more competent in this context and link back to the social science literature that demonstrates
that women have to work harder to be perceived as competent (see the work of Elizabeth Gorman
and Julie Kmec for example), especially in masculine domains, and therefore explaining those
women who participate in Open Source are indeed likely to be more prepared.
In addition to our existing reference to Gorman and Kmec in the prior version, we have
strengthened our results' connections to existing theories in the social science literature with
relevant references to Lemkau and Heilman.
With respect to the insider effect mitigating gender bias, we think higher acceptance rates
for insiders are somewhat self-evident. One might argue that ingroup bias (e.g. Mullen et al,
1992) is responsible for insiders' higher acceptance rates than outsiders, but our personal
experience suggests that it is more likely that insiders are simply more familiar with projects'
needs and norms.
If the reviewer was thinking of a different theoretical contextualization, we'd be happy to
read any specific papers you provide.
A final comment for the editors and authors: it is commendable and critically important to have
multidisciplinary perspectives and approaches to address critical social issues. I feel like it is
absolutely within reach for the paper to solidify its ties and contributions to the social science
literature while preserving and highlighting the power of computational approaches to analysis.
Happy to speak live if authors have questions or want to discuss ideas more at length
([email protected])
Basic reporting
I have found three minor wording problems that could lead to misunderstandings or
misinterpretations, and have suggested minor revisions to avoid such difficulties.
1. Regarding this sentence in the abstract (p.1):
2. “This study presents the largest study to date on gender bias, where we compare
acceptance rates of contributions from men versus women in an open source software
community”
Taken as a whole, the overall sentence can be interpreted as being limited to a study of bias in an
open-source software community. However, the initial phrase “the largest study to date” undermines
such interpretation, and is also refuted by prior large scale demographic studies of gender bias in
women’s vs men’s earnings, etc. These difficulties/ambiguities can be avoided (see related issue in
2.) by the following minor revision:
Change: “This study presents the largest study to date on gender bias, . . . ”
To: “This study presents a large scale study on gender bias, . . . ”
Fixed.
1. Regarding this sentence at the bottom of p2:
2. “To our knowledge, this is the largest scale study of gender bias to date.”
This sentence, if/when taken out of context (by media, for example), appears to make an
unsupportable claim (due to prior large-scale studies of men’s vs women’s earnings, etc). However,
within the context of open-source studies the statement is apparently valid. The difficulty here can be
avoided via the following minor revision:
Change: “. . . largest scale study of gender bias to date.”
To: “. . . largest scale study to date of gender bias in open source communities.”
Fixed. We had not previously considered the earnings studies based on census data.
1. Regarding the lead sentence in the concluding paragraph on p6:
2. “As an aside, we believe that our gender linking approach raises privacy concerns, which we
have taken several steps to address.”
This wording weakens an important awareness and response, which can be avoided via the
following minor revision:
Change: “As an aside, we believe that our gender linking approach . . . ”
To: “We recognize that our gender linking approach . . . ”
Fixed.
Basic reporting
Summary: This article is absolutely interesting! The study investigates the rate of pull request
acceptance for women vs. men. This is the largest study to date on gender bias and it also has
much significance on the impact of gender bias in the computer science workforce. One of the key
results is that women's pull requests are accepted at a higher rate, which the authors explain based
on several theories on gender bias.
Experimental design
The study uses almost 5 years worth of pull requests from GHTorrent data set. This is a very large
set of data to draw insights from. Therefore the findings from this study carry much weight and make
a strong impact in the area of gender bias studies.
One of the significant result is that, 'Women outsiders' acceptance rates are higher but, only when
they are not identifiable as women.' This part of the methodology is very confusing, which can
impact the validity of this finding. In particular, the reviewer was not able to understand how the
authors identified a group of female github users who are not apparently identifiable as women. I
understand that the researchers used the method of linking a user's email address to the Google+
social network and extracting the gender information from the Google+ site. However, isn't this
something that other users of github could have done for users, where they only see identicons and
they need to handle their pull requests? Therefore, I am confused how the study was able to identify
a group of 'true female contributors' who are not easily identifiable as women.
We have added this threat:
A similar threat is that users who judge pull requests encounter gender cues by
searching more deeply than we assume. We assume that the majority of users
judging pull requests will look only at the pull request itself (containing the requestor’s
username and small profile image) and perhaps the requestor’s GitHub profile
(containing username, display name, larger profile image, and GitHub contribution
history). Likewise, we assume that very few users judging pull requests will look into
the requestor further, such as into their social media profiles. Although judges could
have theoretically found requestors’ Google+ profiles using their email addresses (as
we did), this seems unlikely for two reasons. First, while pull requests have explicit
links to a requestor’s GitHub profile, they do not have explicit links to a requestor’s
social media profile; the judge would have to instead seek them out, possibly using a
difficult-to-access email address. Second, we claim that our GitHub-to-Google+
linking technique is a novel research technique; assuming that it is also novel in
practice, users who judge pull requests would not know about it and therefore would
be unable to look up a user’s gender on their Google+ profile.
Along this line, page 17 mentions that 'a mixed culture panel of judges could not guess the gender
for.'-- please explain who are 'mixed culture panel of judges' and how they came to consensus. I
think it would have been very useful to know whether there was a manual investigation of 'gender'
by contacting the github users directly through emails to confirm their gender.
Additional details about the panel can be found in the appendix, but we realize that what we
submitted as an appendix to PeerJ was confusing. We originally referred to an “appendix”,
but instead labeled it as “Material and Methods”. We suspect that the reviewer missed it as a
consequence. We have re-titled the section “Appendix: Materials and Methods.”
To the reviewer’s original point, we have added these details:
The panelists were a convenience sample of graduate and undergraduate students
from North Carolina State University.
…
While emailing the thousands of pull requestors described in this study to confirm
their gender is feasible, doing so is ethically questionable; GHTorrent no longer
includes personal email addresses, after GitHub users complained of receiving too
many emails from researchers.
On page 18, there is an analysis of 'insiders' vs. 'outsiders' -- this part is confusing. How do the
authors classify 'insiders' vs. 'outsiders'? The text mentions 'who are owners or contributors' vs.
'everyone else' but is this distinction clear cut for every open source project out there on the github?
Some company projects hosted on the github have many 'insiders' a.k.a contributors in the same
company who may not be given the role/ permission of 'contributors.' Please clarify the classification
process in detail.
In the appendix, the paper says: “We determined whether a pull requester was an insider
or an outsider during our scraping process because the data was not available in the
GHTorrent dataset. We classified a user as an insider when the pull request listed the
person as a collaborator or owner, and classified them as an outsider otherwise.” [More
details follow.]
Yes, the distinction is clear cut, at least for explicitly labeled contributors on GitHub. We
have added a link to the GitHub documentation to clarify.
For the reviewers' situation, yes, there could be trusted and known “insiders” who don't
have owner or contributor status. We have added this as a threat: “A final example is
that a GitHub user might be an insider without being an explicit owner or collaborator; for
instance, a user may be well-known and trusted socially, yet not be granted collaborator
or owner status, in an effort to maximize security by minimizing a project’s attack surface
[18].”
I hope you can add some more detailed process of 'controlling co-variates' with a concrete example.
There's a citation to 'propensity score matching' but the reader is not able to understand how the
authors identify 'similar' data and what is the notion of 'similarity' in the subsequent analysis for
each dimension?
Similar data, in our case, means exact equivalence (after log transformations of some
covariates) for every covariate. In the appendix, there is a section “Matching Procedure” that
contains an example, starting with “For instance”.
I also have a related question. Does this analysis require the size of data from one group to be the
same as the size of another similar group?
No.
What if the size of similar groups are very different? The section on page 19 talks about 'matched
data' but it is not clear how 'matched data' are identified.
In practice, we have found that in the matched samples, the number of data points is
reduced more substantially for the original larger sample. In other words, when matching
men's and women's pull requests, many of the men's pull requests are excluded, but few of
the women's are.
Validity of the findings
●
●
●
The results are definitely interesting and the authors do very good job leading the audience
from one question to the next investigation question and interpreting the results in the
context of rich literature on gender bias. For example, it starts with the high level question of
'are women's pull requests less likely to be accepted' to the question of 'do women's pull
request acceptance rates start low and increase over time?' I also appreciate the author's
effort of mitigating the internal validity of the study by contrasting group of users with similar
characteristics.
The related work was an excellent introduction to those who are not familiar gender bias
studies.
The reader very much appreciated the authors' effort of interpreting the results in the context
of many theories on gender bias. For example, the theory of the theory of 'survivorship bias'
and 'self selection bias' could explain the observation very well. I also agree that 'women
are often held to higher performance standards than men' and therefore expected to submit
higher quality of contributions.
●
The section on 'are the differences meaningful?' seems very important as any study with
large participants can easily find a statistically significant difference. The part that is very
confusing about this section is that the methodology is written at a very high level. I suggest
the authors to add some more details on the 'effect size' analysis method. For example,
'Davison and Burke's meta-analysis of sex discrimination found an average Pearson
correlation of r = .07, and the study's overall 4.1% acceptance rate difference is equivalent to
r = .02. How does '4.1%' map to r=.02. How should the readers interpret r value?
We have modified the text to say: “Davison and Burke’s meta-analysis of sex discrimination
found an average Pearson correlation of r = .07, a standardized effect size that represents
the linear dependence between gender and job selection [10]. In comparison, our 4.1%
overall acceptance rate difference is equivalent to r = .02. 6 Thus, the effect we have
uncovered is only about a quarter of the effect in typical studies of gender bias.”
●
What is the overall conclusion of this number on the significance of the entire study?
See prior comment.
Basic reporting
The paper presents a study of pull request acceptance differences between genders. The authors
ask the question of whether differences exist between (self proclaimed) male and female among
GitHub users and find a statistically significant difference of around 4% between the acceptance
rates. They then identify several confounding factors and attempt to explain this difference; they find
that women's PRs are larger and are less likely to serve a project need and that women continue to
have larger acceptance even as their tenure in the project increases. However, when they are
outsiders in a project (typical case in GitHub OSS), and their gender is identifiable through their
profile pics or name, they tend to be less successful than men. The authors attribute this last
difference to the existence of bias against women in OSS.
The paper is written in a straightforward manner that is very refreshing; without too much fanfare, the
authors convey large amounts of information. I really enjoyed reading this paper. I also think the
plots and the tables in the paper convey all information necessary. To make it short, I believe that in
terms of presentation this paper is very good.
The authors chose not to redistribute the data due to privacy concerns. I understand this decision.
However, this limits my ability as a reviewer to check the code that generated the data and analysis
scripts. In this case, I think it is very important to ensure data quality through transparency. I would
strongly recommend to the authors to release the tools and analysis scripts they used.
We agree that the analysis code should be released for inspection by the community. We
have added a zip file containing it in our supplementary material.
Experimental design
I have no general comments about the experimental design; it looks like solid science done on an
almost full population study. The comments below reflect specific issues I had with parts of the
analysis.
l17: Why is under-representation an illustration of bias? If there are population differences on sites
with gender-oriented content would that constitute bias (active discrimination) or different content
choices between genders? Please find a reference to support this claim.
We agree that it does not constitute evidence of bias. We have reworked that paragraph:
Research suggests that, indeed, gender bias pervades open source. In Nafus’
interviews with women in open source, she found that “sexist behavior is...as
constant as it is extreme” [26]. In Vasilescu and colleagues’ study of Stack Overflow,
a question and answer community for programmers, they found “a relatively
‘unhealthy’ community where women disengage sooner, although their activity levels
are comparable to men’s” [30]. These studies are especially troubling in light of
recent research which suggests that diverse software development teams are more
productive than homogeneous teams [31]. Nonetheless, in a 2013 survey of the
more than 2000 open source developers who indicated a gender, only 11.2% were
women [2].
l115: I do not see the connection between interviewee evaluation and PR evaluation. At the very
least, the interviewee gender is immediately apparent, which is not true for PRs (as you also show).
We have clarified what type of study were talking about in the text: “Prior work on gender
bias in hiring – that a job application with a woman’s name is evaluated less favorably than
the same application with a man’s name [25] – suggests that this hypothesis may be true.”
l120: The Clopper-Pearson method assumes a Bernoulli process where the chance of success is
constant. I am not sure this is the case here, as the differentiated variable (PR acceptance) varies
significantly across projects. Can you justify your choice of test here, or select a non-parametric test
such as the Matt-Witney-Wilcoxon rank sum (also to test for significance as in l123)?
The reviewer brings up a technical issue that can be dealt with by defining what the
probabilities/proportions represent.
In the first table (overall acceptance rate by gender, formerly l120), the proportion (i.e.
probability of success) represents a parameter for a randomly selected pull request which
takes into account the uncertainty surrounding potential differentiated variables. We may
then consider the probability of success to be constant across any random draw.
In all the other cases, we are representing a probability that is conditioned on values of a
single or multiple differentiated variables. Conditioning on these values, we can assume the
probability of success is constant since we are again considering this probability as
describing an event that randomly draws a pull request.
Regarding using Wilcoxon rank sum for this table, this would be redundant with the
chi-squared test we use directly below the table. Chi-squared is nonparametric, as the
reviewer requests.
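For concreteness, both computations can be reproduced in a few lines of R; the counts below are made up for illustration and are not the study's data.

```r
# Made-up counts for illustration only (not the study's data).
women <- c(merged = 780, not_merged = 220)
men   <- c(merged = 746, not_merged = 254)

# Pointwise Clopper-Pearson interval for one group's acceptance rate
# (binom.test reports the exact Clopper-Pearson interval).
binom.test(women["merged"], sum(women))$conf.int

# Chi-squared test of independence between gender and merge outcome.
chisq.test(rbind(women, men))
```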
l123 and elsewhere: You test for significance, but you do not report effect sizes. Can you use an
appropriate test (such as Cliffs delta) and report its results?
Difference in acceptance rates is the effect size, which we report throughout. The advantage
of this effect size, compared to a standardized effect size like Cliff's delta, is that the
difference in acceptance rates is more intuitively understandable in the context of pull
requests.
l294-onwards: I appreciate the fact that you used propensity analysis here; this is novel in software
engineering. However, when you use matched data, the differences the main effect you are
observing (e.g. differences in the acceptance rate for women when the gender is obvious) vanishes
(comparing Fig 7 and 11). Moreover, it looks like (Fig 11) that the drop in acceptance rate for women
and increase for men is within the confidence interval of the gendered-neutral case for both genders.
What does this indicate? In my view, that controlling for covariates, there is a trivial difference (if any)
in the acceptance rate of PRs between genders.
Materials and methods section: Given that the differences between acceptance rates are very small,
it is important to report ALL statistics when describing data analysis methods.
We neglected to mention our supplementary material in the data.pdf file; it includes a large
number of tables with exact values, which likely includes the statistics R4 is looking for. We
have added a reference to this supplementary material in the main article. If any other data is
missing, please let us know.
l493-500: What is the PR acceptance rate after applying all treatments? -
We have added: “We performed this process for all pull requests submitted by GitHub users
that we had labeled as either a man or woman. In the end, the pull request acceptance rate
was 74.8% for all processed pull requests.”
l501-510: Gousios et al. in ref [13] and Yu et al. in their MSR 2015 paper calculate the core team as
the number of people that have either committed directly or merged a PR the last 3 months before
the specific processed PR. If you keep track of project core team members for each PR, you can
detect members that were added or removed to the core team by looking ahead in time. So if user A
was in the core team for PR 12 but not for PRs 13-last PR, you can definitely say that PR is an
outsider for PRs 13 onwards.
This idea is an appealing way to improve the accuracy of our approach, and we have now
implemented it in part using pull request and issue history, but not commit history. There are
two reasons we did not implement it wholly:
-
In looking at GHTorrent’s commit history, we found many commits that were present
that originated from rejected pull requests. Using GHTorrent, we see no way to
exclude these commits. Looking back at Gousios and Yu, it's not clear how they
dealt with this problem, if they even analyzed commits at all.
-
It's not clear to us that this approach is a priori more accurate than our original
approach. Our original approach is inaccurate when people have changed roles. The
Gousios/Yu approach is inaccurate whenever an insider pull requestor does not
have a recent history of changing a project’s issues or pull requests.
Nonetheless, it seems reasonable to consider a hybrid approach, first using our approach,
then shifting anyone from outsider status if they meet certain conditions. As we now explain
in the paper:
We attempted to mitigate this problem by using a technique similar to that used in
prior work [ ]. From contributors that we initially marked as outsiders, for a given pull
request on a project, we instead classified them as insiders when they met any of
three conditions. The first condition was that they had closed an issue on the project
within 90 days prior to opening the given pull request. The second condition was that
they had merged the given pull request or any other pull request on the project in the
prior 90 days. The third condition was that they had closed any pull request that
someone else had opened in the prior 90 days. Meeting any of these conditions
implies that, even if the contributor was an outsider at the time of our scraping, they
were probably an insider at the time of the pull request.
Note that changing insiders and outsiders in this way changes the raw results of a
substantial portion of the paper (anything that deals only with outsiders and every matching
result). However, the final results were the same, with two exceptions:
-
Women had significantly higher acceptance rates for one additional programming
language in the matched analysis (C++).
-
Men had significantly higher acceptance rates for one programming language in the
matched analysis (PHP).
l511: What is the technique? This is crucial to report as gives an indication of the coverage of your
study over the general population.
In the indicated location, the paper states “ To evaluate gender bias on GitHub, we first
needed to determine the genders of GitHub users. Our technique uses several steps to
determine the genders of GitHub users. First, from…” and then goes on to describe each
step in the technique. So we are unclear what specifically is missing here. Since the
reviewer talks about coverage, they could mean how many GitHub users are on Google+,
but that data is available in Tables 1 and 2. However, the reviewer may have missed those
tables because we had previously referred to the data in the appendix, but it was not actually
there. The reason was that the PeerJ staff asked us to extract the Figure Data section into a
separate “Supplemental Data” file. We’ve made this more clear in the text, and the reviewer
can find that data there.
l518: quantify 'relatively few'
That number was available in the accompanying table, but we have now duplicated it in the
text: “Finally, we include only the genders ‘Male’ and ‘Female’ (334,578 users) because there
were relatively few other options chosen (159 users).”
l527: Gousios et al. found that 95% of PRs are closed in 25 days; you could have removed open pull
requests that are older than 25 days from your denominator.
This is a good idea, and we almost went back and implemented it (although we believe the
reviewer means excluding PRs from the denominator that are less than 25 days old).
However, we realized that we already did it implicitly. To explain, we have added the text:
However, these pull requests had been open for at least 126 days, the time between
when the last pull request was included in GHTorrent and when we did our web
scrape. Given Gousios and colleagues' finding that 95% of pull requests are closed
within 26 days [ ], insiders likely had ample time to decide on open pull requests.
l543: How did you determine whether somebody has a profile image rather than an identicon by
parsing the source page? How about profile images that use other types of computer graphics (e.g.
cartoons, logos etc)?
We have now clarified, by adding the following:
We classified profile images as identicons using ImageMagick, looking for an
identicon-specific file size, image dimension, image class, and color depth. In an
informal inspection into profile images, we found examples of non-photographic
images that conveyed gender cues, so we did not attempt to distinguish between
photographic and non-photographic images when classifying profiles as gendered.
l546: Did you check the accuracy of genderComputer? If yes, what is the recall and precision? How
did your changes improve those? In general, how can you be sure that what genderComputer
calculates is valid?
We did not check the accuracy of genderComputer, that is, whether it predicts someone's
self-reported gender, because that kind of accuracy is not relevant to how we use the tool
(unlike in, for instance, Vasilescu’s study). Rather, we used genderComputer as a partial
method to determine someone's perceived gender, in other words, whether their name is
typically associated with males or females.
Nonetheless, inaccurate assessment of perceived gender is a threat; we have now added it:
“When genderComputer labels a GitHub profile as belonging to a man, but a human would
perceive the user's profile as belonging to a woman (or vice-versa), then our classification of
gendered profiles is inaccurate in such cases. We attempted to mitigate this risk by
discarding any profiles in the gendered profile analysis that genderComputer classified with
low-confidence.”
l547-578: This is an interesting approach. How did you scale the panel's findings to the whole
population? Did you implement a machine learning based tool?
We did not scale up to the whole population; we used only data from the panel. We have
clarified, adding to the text: “Rather than asking a panel to laboriously evaluate every profile
for which the first two criteria applied, we instead asked panelists to inspect a random
subset.”
l580: I am not sure what the matching procedure attempts to do. My best guess is that you are trying
to make causal inferences about the effect of gender and other covariates on whether a PR will be
merged or not. Can you please add a few lines about what you where trying to achieve with it?
We agree that we did not state the goal up front in that section; we now lead the section with
a goal statement.
Also, what are the types of features used (factors, integers etc) esp for the 'file type' feature?
Unlike with regression modeling or nearest-neighbor propensity score matching, the types
are actually unimportant for our analysis. The reason is that every covariate in each data
point in one sample has to be matched exactly, that is, has to be equal to a data point in the
other sample.
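As an illustration of this exact-matching idea (not our actual analysis code; the covariate names below are hypothetical), the two samples can simply be joined on all covariate columns, so that only pull requests whose covariate values occur identically in both groups are retained:

import pandas as pd

covariates = ["file_type", "lines_changed", "prior_commits"]  # hypothetical columns

def exact_match(women, men):
    # Covariate combinations present in both samples.
    keys = women[covariates].drop_duplicates().merge(
        men[covariates].drop_duplicates(), on=covariates, how="inner")
    # Keep only pull requests whose covariates match a combination in the other sample.
    matched_women = women.merge(keys, on=covariates, how="inner")
    matched_men = men.merge(keys, on=covariates, how="inner")
    return matched_women, matched_men

Because every covariate must be exactly equal, the column types (factor, integer, and so on) never enter into the comparison.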
I understand that much of what I am discussing here is mostly presented in l306 onwards, but it
helps to have the discussion about propensity score matching in one place, either here (which I
recommend) or before.
See above; we added a forward reference to the appendix in what was formerly l306, in the
main body of the paper.
Validity of the findings
I have one major issue with the paper in its current form: the fact that the observed differences are
attributed to bias.
Please see our comments further down.
Bias is an overloaded word: in science, it usually means deviation from the truth, while in every day
life it is a synonym for discrimination. There are various documented forms of bias that are related to
research, both in a statistical sense (e.g. detection bias) and also relating to assumptions and
pre-occupations on the researcher's part (various forms of cognitive bias). The authors acknowledge
most (all?) of those in their threats to validity section. For bias (as in discrimination) to occur one
group or individual needs to consciously or unconsciously ACT against another. Otherwise put, there
needs to be a causal link between the fact that an individual is a woman whose profile details are
open and the fact that their PRs will be rejected.
The authors do identify an association between women with open profile details and drop in PRs
acceptance rates, but, in my opinion, they do not identify a causal link. They attempt twice to do so:
With their propensity analysis, they attempt to control for covariates in the PR process, but in the end
the gender profiles from both men and women have exactly the same acceptance rate (Fig 11). The
authors may consider as bias the drop in acceptance rates between gender-neutral and gendered
profiles for women (and corresponding increase in men); however, the drop is very low (3%) and
possibly within the confidence intervals (Fig 11) of both groups.
In the original version of the paper, it was not within the confidence intervals; you could and
can check the data in the supplementary material. The new data is slightly different, as
explained elsewhere in this rebuttal.
In the discussion, the authors attempt a step-wise elimination of possible reasons for the observed
difference. Rejecting a set of theories that fail to explain a result does not automatically make a
newly proposed one valid.
This section is important; eliminating possible alternative theories is critical to provide
evidence for causality. Trochim and Donnelly 1 give three criteria for establishing cause and
effect (temporal precedence, covariation of cause and effect, and no plausible alternative
explanations). First, the section in question attempts to argue the last piece: “In order for you
to argue that you have demonstrated internal validity -- that you have shown there's a causal
relationship -- you have to ‘rule out’ the plausible alternative explanations.” Second,
covariation (“association”, in the reviewer’s terms) was the data analysis in the paper. Third,
temporal precedence is by the design of the pull request process.
What I think the authors are measuring are differences in how the two genders perform on various
activities wrt pull request handling, including their under/over-representation in the process. This is,
in my opinion, an important finding on its own. But I do not believe this constitutes bias, at least
given the evidence presented by the authors.
The use of large scale retrospective data as evidence for gender bias is not uncommon, at
least outside of computer science. We have clarified the research lens through which we are
claiming that the evidence constitutes gender bias. We have added the following to the
introduction, just after the research question:
We answer this question from the perspective of a retrospective cohort study, a study
of the differences between two groups exposed to a common factor to determine its
influence on an outcome [11]. One example of a similar retrospective cohort study
was Krumholz and colleagues’ review of 2473 medical records to determine whether
there exists gender bias in the treatment of men and women for heart attacks [22].
Other examples include the analysis of 6244 school discipline files to evaluate
whether gender bias exists in the administration of corporal punishment [12] and the
analysis of 1851 research articles to evaluate whether gender bias exists in the peer
reviewing process for the Journal of the American Medical Association [33].
Given the use of GitHub, I would also avoid to extrapolate my findings to OSS in general.
Although we noted some of the differences between GitHub and other open source ecosystems in
the limitations section, we have added a sentence explicitly noting the limited generalizability to
that section.
Finally, one thing I am missing from the paper is an outlook: given the differences that the authors
find, what would they propose projects to do?
We have added some suggestions in the final paragraph of the conclusion.
1. Trochim, William MK, and James P. Donnelly. 'Research methods knowledge base.' (2001). Article: Establishing Cause and Effect.
" | Here is a paper. Please give your review comments after reading it. |
735 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The timely publication of scientific results is essential for dynamic advances in science.</ns0:p><ns0:p>The ubiquitous availability of computers which are connected to a global network made the rapid and low-cost distribution of information through electronic channels possible.</ns0:p><ns0:p>New concepts, such as Open Access publishing and preprint servers are currently changing the traditional print media business towards a community-driven peer production.</ns0:p><ns0:p>However, the cost of scientific literature generation, which is either charged to readers, authors or sponsors, is still high. The main active participants in the authoring and evaluation of scientific manuscripts are volunteers, and the cost for online publishing infrastructure is close to negligible. A major time and cost factor is the formatting of manuscripts in the production stage. In this article we demonstrate the feasibility of writing scientific manuscripts in plain markdown (MD) text files, which can be easily converted into common publication formats, such as PDF, HTML or EPUB, using pandoc.</ns0:p><ns0:p>The simple syntax of markdown assures the long-term readability of raw files and the development of software and workflows. We show the implementation of typical elements of scientific manuscripts -formulas, tables, code blocks and citations -and present tools for editing, collaborative writing and version control. We give an example on how to prepare a manuscript with distinct output formats, a DOCX file for submission to a journal, and a LATEX/PDF version for deposition as a PeerJ preprint. Further, we implemented new features for supporting 'semantic web' applications, such as the 'journal article tag suite' -JATS, and the 'citation typing ontology' -CiTO standard. Reducing the work spent on manuscript formatting translates directly to time and cost savings for writers, publishers, readers and sponsors. Therefore, the adoption of the MD format contributes to the agile production of open science literature. Pandoc Scholar is freely available from https://github.com/pandoc-scholar.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Agile development of science depends on the continuous exchange of information between researchers (Woelfle, <ns0:ref type='bibr'>Olliaro & Todd, 2011)</ns0:ref>. In the past, physical copies of scientific works had to be produced and distributed. Therefore, publishers needed to invest considerable resources for typesetting and printing.</ns0:p><ns0:p>Since the journals were mainly financed by their subscribers, their editors not only had to decide on the scientific quality of a submitted manuscript, but also on the potential interest to their readers. The availability of globally connected computers enabled the rapid exchange of information at low cost. Yochai <ns0:ref type='bibr'>Benkler (2006)</ns0:ref> predicts important changes in the information production economy, which are based on three observations:</ns0:p><ns0:p>1. A nonmarket motivation in areas such as education, arts, science, politics and theology.</ns0:p><ns0:p>2. The actual rise of nonmarket production, made possible through networked individuals and coordinate effects.</ns0:p><ns0:p>3. The emergence of large-scale peer production, e.g. of software and encyclopedias.</ns0:p><ns0:p>Immaterial goods such as knowledge and culture are not lost when consumed or shared -they are 'nonrival' -, and they enable a networked information economy, which is not commercially driven <ns0:ref type='bibr'>(Benkler, 2006)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Preprints and e-prints</ns0:head><ns0:p>In some areas of science a preprint culture, i.e. a paper-based exchange system of research ideas and results, already existed when Paul Ginsparg in 1991 initiated a server for the distribution of electronic preprints -'e-prints' -about high-energy particle theory at the Los Alamos National Laboratory (LANL), USA <ns0:ref type='bibr'>(Ginsparg, 1994)</ns0:ref>. Later, the LANL server moved with Ginsparg to Cornell University, USA, and was renamed as arXiv <ns0:ref type='bibr'>(Butler, 2001)</ns0:ref>. Currently, arXiv (https://arxiv.org/) publishes e-prints related to physics, mathematics, computer science, quantitative biology, quantitative finance and statistics.</ns0:p><ns0:p>Just a few years after the start of the first preprint servers, their important contribution to scientific communication was evident <ns0:ref type='bibr'>(Ginsparg, 1994;</ns0:ref><ns0:ref type='bibr'>Youngen, 1998;</ns0:ref><ns0:ref type='bibr'>Brown, 2001)</ns0:ref>. In 2014, arXiv reached the impressive number of 1 million e-prints <ns0:ref type='bibr'>(Van Noorden, 2014)</ns0:ref>.</ns0:p><ns0:p>In more conservative areas, such as chemistry and biology, accepting publishing prior to peer review took more time <ns0:ref type='bibr'>(Brown, 2003)</ns0:ref>. A preprint server for life sciences (http://biorxiv.org/) was launched by the Cold Spring Harbor Laboratory, USA, in 2013 <ns0:ref type='bibr'>(Callaway, 2013)</ns0:ref>. PeerJ preprints (https://peerj.com/preprints/), started in the same year, accepts manuscripts from biological sciences, medical sciences, health sciences and computer sciences.</ns0:p><ns0:p>The terms 'preprints' and 'e-prints' are used synonymously, since the physical distribution of preprints has become obsolete. A major drawback of preprint publishing is the sometimes restrictive policies of scientific publishers. The SHERPA/RoMEO project informs about copyright policies and self-archiving options of individual publishers (http://www.sherpa.ac.uk/romeo/).</ns0:p></ns0:div>
<ns0:div><ns0:head>Open Access</ns0:head><ns0:p>The term 'Open Access' (OA) was introduced in 2002 by the Budapest Open Access Initiative and was defined as:</ns0:p><ns0:p>'Barrier-free access to online works and other resources. OA literature is digital, online, free of charge (gratis OA), and free of needless copyright and licensing restrictions (libre OA).' <ns0:ref type='bibr'>(Suber, 2012)</ns0:ref> Frustrated by the difficulty of accessing even digitized scientific literature, three scientists founded the Public Library of Science (PLoS). In 2003, PLoS Biology was published as the first fully Open Access journal for biology <ns0:ref type='bibr'>(Brown, Eisen & Varmus, 2003;</ns0:ref><ns0:ref type='bibr'>Eisen, 2003)</ns0:ref>.</ns0:p><ns0:p>Thanks to the great success of OA publishing, many conventional print publishers now offer a so-called 'Open Access option', i.e. to make accepted articles free to read for an additional payment by the authors.</ns0:p><ns0:p>The copyright in these hybrid models might remain with the publisher, whilst fully OA journals usually provide a liberal license, such as the Creative Commons Attribution 4.0 International (CC BY 4.0, https://creativecommons.org/licenses/by/4.0/). OA literature is only one component of a more general open philosophy, which also includes the access to scholarships, software, and data <ns0:ref type='bibr'>(Willinsky, 2005)</ns0:ref>. Interestingly, there are several different 'schools of thought' on how to understand and define Open Science, as well as the position that any science is open by definition, because of its objective to make generated knowledge public <ns0:ref type='bibr'>(Fecher & Friesike, 2014)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Cost of journal article production</ns0:head><ns0:p>In a recent study, the article processing charges (APCs) for research-intensive universities in the USA and Canada were estimated to be about 1,800 USD for fully OA journals and 3,000 USD for hybrid OA journals <ns0:ref type='bibr'>(Solomon & Björk, 2016)</ns0:ref>. PeerJ (https://peerj.com/), an OA journal for biological and computer sciences launched in 2013, drastically reduced the publishing cost, offering its members a life-time publishing plan for a small registration fee <ns0:ref type='bibr'>(Van Noorden, 2012)</ns0:ref>; alternatively the authors can choose to pay an APC of 1,095 USD, which may be cheaper, if multiple co-authors participate. In 2009, a study was carried out concerning the 'Economic Implications of Alternative Scholarly Publishing Models', which demonstrates an overall societal benefit of using the OA publishing model <ns0:ref type='bibr'>(Houghton et al., 2009)</ns0:ref>. In the same report, the real publication costs are evaluated. The relative costs of an article for the publisher are represented in Fig. <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Examples such as the</ns0:head><ns0:p>Conventional publishers justify their high subscription or APC prices with the added value, e.g. journalism (stated in the graphics as 'non-article processing'). But also stakeholder profits, which could be as high as 50%, must be considered, and are withdrawn from the science budget <ns0:ref type='bibr'>(Van Noorden, 2013)</ns0:ref>.</ns0:p><ns0:p>Generally, the production costs of an article could be roughly divided into commercial and academic/ technical costs (Fig. <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>). For nonmarket production, the commercial costs such as margins/ profits, management etc. can be drastically reduced. Hardware and services for hosting an editorial system, such as Open Journal Systems of the Public Knowledge Project (https://pkp.sfu.ca/ojs/) can be provided by public institutions. Employed scholars can perform editor and reviewer activities without additional cost for the journals. Nevertheless, 'article processing', which includes the manuscript handling during peer review and production, represents the most expensive part.</ns0:p><ns0:p>Therefore, we investigated a strategy for the efficient formatting of scientific manuscripts.</ns0:p></ns0:div>
<ns0:div><ns0:head>Current standard publishing formats</ns0:head><ns0:p>Generally speaking, a scientific manuscript is composed of contents and formatting. While the content, i.e. text, figures, tables, citations etc., may remain the same between different publishing forms and journal styles, the formatting can be very different. Most publishers require the formatting of submitted manuscripts in a certain format. Ignoring this Guide for Authors, e.g. by submitting a manuscript with a different reference style, gives a negative impression with a journal's editorial staff. Too carelessly prepared manuscripts can even provoke a straight 'desk-reject' <ns0:ref type='bibr'>(Volmer & Stokes, 2016)</ns0:ref>.</ns0:p><ns0:p>Currently DOC(X), LATEX and/ or PDF file formats are the most frequently used formats for journal submission platforms. But even if the content of a submitted manuscript might be accepted during the peer review 'as is', the format still needs to be adjusted to the particular publication style in the production stage. For the electronic distribution and archiving of scientific works, which is gaining more and more importance, additional formats (EPUB, (X)HTML, JATS) need to be generated. Tab. 1 lists the file formats which are currently the most relevant ones for scientific publishing.</ns0:p><ns0:p>Although the content elements of documents, such as title, author, abstract, text, figures, tables, etc., remain the same, the syntax of the file formats is rather different. Tab. 2 demonstrates some simple examples of differences in different markup languages. Documents with the commonly used Office Open XML (DOCX Microsoft Word files) and OpenDocument (ODT LibreOffice) file formats can be opened in a standard text editor after unzipping. However, content and formatting information is distributed into various folders and files. Practically speaking, those file formats require the use of special word processing software.</ns0:p><ns0:p>From a writer's perspective, the use of What You See Is What You Get (WYSIWYG) programs such as Microsoft Word, WPS Office or LibreOffice might be convenient, because the formatting of the document is directly visible. But the complicated syntax specifications often result in problems when using different software versions and for collaborative writing. Simple conversions between file formats can be difficult or impossible. In a worst-case scenario, 'old' files cannot be opened any more for lack of compatible software.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Therefore, in some parts of the scientific community LATEX, a typesetting program in plain text format, is very popular. With LATEX, documents with highest typographic quality can be produced. However, the source files are cluttered with LATEX commands and the source text can be complicated to read.</ns0:p><ns0:p>Causes of compilation errors in LATEX are sometimes difficult to find. Therefore, LATEX is not very user friendly, especially for casual writers or beginners. In academic publishing, it is additionally desirable to create different output formats from the same source text:</ns0:p><ns0:p>• For the publishing of a book, with a print version in PDF and an electronic version in EPUB.</ns0:p><ns0:p>• For the distribution of a seminar script, with an online version in HTML and a print version in PDF.</ns0:p><ns0:p>• For submitting a journal manuscript for peer-review in DOCX, as well as a preprint version with another journal style in PDF.</ns0:p><ns0:p>• For archiving and exchanging article data using the Journal Article Tag Suite (JATS) (National Information Standards Organization, 2012), a standardized format developed by the NLM.</ns0:p><ns0:p>Some of the tasks can be performed e.g. with LATEX, but an integrated solution remains a challenge.</ns0:p><ns0:p>Several programs for the conversion between document formats exist, such as the e-book library program calibre http://calibre-ebook.com/. But the results of such conversions are often not satisfactory and require substantial manual corrections.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Therefore, we were looking for a solution that enables the creation of scientific manuscripts in a simple format, with the subsequent generation of multiple output formats. The need for hybrid publishing has been recognized outside of science <ns0:ref type='bibr'>(Kielhorn, 2011;</ns0:ref><ns0:ref type='bibr'>DPT Collective, 2015)</ns0:ref>, but the requirements specific to scientific publishing have not been addressed so far. Therefore, we investigated the possibility to generate multiple publication formats from a simple manuscript source file.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCEPTS OF MARKDOWN AND PANDOC</ns0:head><ns0:p>Markdown was originally developed by John Gruber in collaboration with Aaron Swartz, with the goal to simplify the writing of HTML documents http://daringfireball.net/projects/markdown/.</ns0:p><ns0:p>Instead of coding a file in HTML syntax, the content of a document is written in plain text and annotated with simple tags which define the formatting. Subsequently, the Markdown (MD) files are parsed to generate the final HTML document. With this concept, the source file remains easily readable and the author can focus on the contents rather than formatting. Despite its original focus on the web, the MD format has been proven to be well suited for academic writing <ns0:ref type='bibr'>(Ovadia, 2014)</ns0:ref>. In particular, pandoc- </ns0:p></ns0:div>
<ns0:div><ns0:head>MARKDOWN EDITORS AND ONLINE EDITING</ns0:head><ns0:p>The usability of a text editor is important for the author, either writing alone or with several co-authors. In this section we present software and strategies for different scenarios.</ns0:p></ns0:div>
<ns0:div><ns0:head>Markdown editors</ns0:head><ns0:p>Due to MD's simple syntax, basically any text editor is suitable for editing markdown files. The formatting tags are written in plain text and are easy to remember. Therefore, the author is not distracted by looking around for layout options with the mouse. For several popular text editors, such as vim (http://www. </ns0:p></ns0:div>
<ns0:div><ns0:head>Online editing and collaborative writing</ns0:head><ns0:p>Storing manuscripts on network drives (The Cloud) has become popular for several reasons:</ns0:p><ns0:p>• Protection against data loss.</ns0:p><ns0:p>• Synchronization of documents between several devices.</ns0:p><ns0:p>• Collaborative editing options.</ns0:p><ns0:p>Markdown files on a Google Drive (https://drive.google.com) for instance can be edited online with StackEdit (https://stackedit.io). Fig. <ns0:ref type='figure' target='#fig_10'>6</ns0:ref> demonstrates the online editing of a markdown file on an ownCloud (https://owncloud.com/) installation. OwnCloud is an Open Source software platform, which allows the set-up of a file server on personal webspace. The functionality of an ownCloud installation can be enhanced by installing plugins.</ns0:p><ns0:p>Even mathematical formulas are rendered correctly in the HTML live preview window of the ownCloud markdown plugin (Fig. <ns0:ref type='figure' target='#fig_10'>6</ns0:ref> ).</ns0:p><ns0:p>The collaboration and authoring platform Authorea (https://www.authorea.com/) also supports markdown as one of multiple possible input formats. This can be beneficial for collaborations in which one or more authors are not familiar with markdown syntax.</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>The headings and the alignment of the cells are given in the first two lines. The cell width is variable. The pandoc parameter --columns=NUM can be used to define the length of lines in characters. If contents do not fit, they will be wrapped.</ns0:p><ns0:p>Complex tables, e.g. tables featuring multiple headers or those containing cells spanning multiple rows or columns, are currently not representable in markdown format. However, it is possible to embed LATEX and HTML tables into the document. These format-specific tables will only be included in the output if a document of the respective format is produced. This method can be extended to apply any kind of format-specific typographic functionality which would otherwise be unavailable in markdown syntax.</ns0:p></ns0:div>
<ns0:div><ns0:head>Figures and images</ns0:head><ns0:p>Images are inserted as follows:</ns0:p><ns0:formula xml:id='formula_0'>![alt text](image location/ name) e.g.</ns0:formula><ns0:p>The alt text is used e.g. in HTML output. Image dimensions can be defined in braces:</ns0:p><ns0:p>As well, an identifier for the figure can be defined with #, resulting e.g. in the image attributes {#figure1 height=30%}.</ns0:p><ns0:p>A paragraph containing only an image is interpreted as a figure. The alt text is then output as the figure's caption.</ns0:p></ns0:div>
<ns0:div><ns0:head>Symbols</ns0:head><ns0:p>Scientific texts often require special characters, e.g. Greek letters, mathematical and physical symbols etc.</ns0:p><ns0:p>The UTF-8 standard, developed and maintained by Unicode Consortium, enables the use of characters across languages and computer platforms. The encoding is defined as RFC document 3629 of the Network Working group <ns0:ref type='bibr'>(Yergeau, 2003)</ns0:ref> </ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>To facilitate the input of specific characters, so-called mnemonics can be enabled in some editors (e.g. in atom by the character-table package). For example, the 2-character Mnemonics ':u' gives 'ü' (diaeresis), or 'D*' the Greek Δ. The possible character mnemonics and character sets are listed in RFC 1345 http://www.faqs.org/rfcs/rfc1345.html <ns0:ref type='bibr'>(Simonsen, 1992)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Formulas</ns0:head><ns0:p>Formulas are written in LATEX mode using the delimiters $. E.g. the formula for calculating the standard deviation 𝑠 of a random sampling would be written as:</ns0:p><ns0:formula xml:id='formula_1'>$s=\sqrt{\frac{1}{N-1}\sum_{i=1}^N(x_i-\overline{x})^{2}}$</ns0:formula><ns0:p>and gives:</ns0:p><ns0:formula xml:id='formula_2'>$s=\sqrt{\frac{1}{N-1}\sum_{i=1}^{N}(x_i-\overline{x})^{2}}$</ns0:formula><ns0:p>with $x_i$ the individual observations, $\overline{x}$ the sample mean and $N$ the total number of samples.</ns0:p><ns0:p>Pandoc parses formulas into internal structures and allows conversion into formats other than LATEX.</ns0:p><ns0:p>This allows for format-specific formula representation and enables computational analysis of the formulas <ns0:ref type='bibr'>(Corbí & Burgos, 2015)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Code listings</ns0:head><ns0:p>Verbatim code blocks are indicated by three tilde symbols:</ns0:p><ns0:formula xml:id='formula_3'>~~~ verbatim code ~~~</ns0:formula><ns0:p>Typesetting inline code is possible by enclosing text between back ticks:</ns0:p><ns0:p>`inline code`</ns0:p></ns0:div>
<ns0:div><ns0:head>Other document elements</ns0:head><ns0:p>These examples are only a short demonstration of the capacities of pandoc concerning scientific documents. For more detailed information, we refer to the official manual (http://pandoc.org/MANUAL.html).</ns0:p></ns0:div>
<ns0:div><ns0:head>CITATIONS AND BIOGRAPHY</ns0:head><ns0:p>The efficient organization and typesetting of citations and bibliographies is crucial for academic writing.</ns0:p><ns0:p>Pandoc supports various strategies for managing references. For processing the citations and the creation of the bibliography, the command line parameter --filter pandoc-citeproc is used, with variables for the reference database and the bibliography style. The bibliography will be located automatically at the header # References or # Bibliography.</ns0:p></ns0:div>
<ns0:div><ns0:head>Reference databases</ns0:head><ns0:p>Pandoc is able to process all mainstream literature database formats, such as RIS, BIB, etc. However, for maintaining compatibility with LATEX/ BIBTEX, the use of BIB databases is recommended. The used database either can be defined in the YAML metablock of the MD file (see below) or it can be passed as parameter when calling pandoc.</ns0:p></ns0:div>
<ns0:div><ns0:head>Inserting citations</ns0:head><ns0:p>For inserting a reference, the database key is given within square brackets, and indicated by an '@'. It is also possible to add information, such as page:</ns0:p><ns0:p>[@suber_open_2012; @benkler_wealth_2006, 57 ff.]</ns0:p><ns0:p>gives <ns0:ref type='bibr'>(Benkler, 2006, p. 57 ff.;</ns0:ref><ns0:ref type='bibr'>Suber, 2012)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Styles</ns0:head><ns0:p>The Citation Style Language (CSL) http://citationstyles.org/ is used for the citations and bibliographies. This file format is supported e.g. by the reference management programs Mendeley https://www.mendeley.com/, Papers http://papersapp.com/ and Zotero https://www.zotero.org/.</ns0:p><ns0:p>CSL styles for particular journals can be found from the Zotero style repository https://www.zotero.org/styles. The bibliography style that pandoc should use for the target document can be chosen in the YAML block of the markdown document or can be passed in as a command line option. The latter is more recommendable, because distinct bibliography styles may be used for different documents.</ns0:p></ns0:div>
<ns0:div><ns0:head>Creation of LATEX natbib citations</ns0:head><ns0:p>For citations in scientific manuscripts written in LATEX, the natbib package is widely used. To create a LATEX output file with natbib citations, pandoc simply has to be run with the --natbib option, but without the --filter pandoc-citeproc parameter.</ns0:p></ns0:div>
<ns0:div><ns0:head>Database of cited references</ns0:head><ns0:p>To share the bibliography for a certain manuscript with co-authors or the publisher's production team, it is often desirable to generate a subset of a larger database, which only contains the cited references. If LATEX output was generated with the --natbib option, the compilation of the file with LATEX gives an AUX file (in the example named md-article.aux), which subsequently can be extracted using BibTool https://github.com/ge-ne/bibtool:</ns0:p><ns0:p>bibtool -x md-article.aux -o bibshort.bib</ns0:p></ns0:div>
<ns0:div><ns0:p>In this example, the article database will be called bibshort.bib.</ns0:p><ns0:p>For the direct creation of an article-specific BIB database without using LATEX, we wrote a simple Perl script called mdbibexport (https://github.com/robert-winkler/mdbibexport). <ns0:ref type='bibr'>Bourne (2005)</ns0:ref> argues that journals should be effectively equivalent to biological databases: both provide data which can be referenced by unique identifiers like DOI or e.g. gene IDs. Applying the semantic-web ideas of <ns0:ref type='bibr'>Berners-Lee & Hendler (2001)</ns0:ref> to this domain can make this vision a reality. Here we show how metadata can be specified in markdown. We propose conventions, and demonstrate their suitability to enable interlinked and semantically enriched journal articles.</ns0:p></ns0:div>
<ns0:div><ns0:head>META INFORMATION OF THE DOCUMENT</ns0:head><ns0:p>Document information such as title, authors, abstract etc. can be defined in a metadata block written in YAML syntax. YAML ('YAML Ain't Markup Language', http://yaml.org/) is a data serialization standard in simple, human readable format. Variables defined in the YAML section are processed by pandoc and integrated into the generated documents. The YAML metadata block is recognized by three hyphens (---) at the beginning, and three hyphens or dots (...) at the end, e.g.:</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>--title: Formatting Open Science subtitle: agile creation of multiple document types date: 2017-02-10 ...</ns0:p><ns0:p>The public availability of all relevant information is a central aspect of Open Science. Analogous to article contents, data should be accessible via default tools. We believe that this principle must also be applied to article metadata. Thus, we created a custom pandoc writer that emits the article's data as JSON-LD <ns0:ref type='bibr'>(Lanthaler & Gütl, 2012)</ns0:ref>, allowing for informational and navigational queries of the journal's data with standard tools of the semantic web. The above YAML information would be output as:</ns0:p><ns0:p>{ '@context': { '@vocab': 'http://schema.org/', 'date': 'datePublished', 'title': 'headline', 'subtitle': 'alternativeTitle' }, '@type': 'ScholarlyArticle', 'title': 'Formatting Open Science', 'subtitle': 'agile creation of multiple document types', 'date': '2017-02-10' } This format allows processing of the information by standard data processing software and browsers.</ns0:p></ns0:div>
<ns0:div><ns0:head>Flexible metadata authoring</ns0:head><ns0:p>We developed a method to allow writers the flexible specification of authors and their respective affiliations. Author names can be given as a string, via the key of a single-element object, or explicitly as a name attribute of an object. Affiliations can be specified directly as properties of the author object, or separately in the institute object.</ns0:p><ns0:p>Additional information, e.g. email addresses or identifiers like ORCID <ns0:ref type='bibr'>(Haak et al., 2012)</ns0:ref>, can be added as additional values:</ns0:p><ns0:p>author:</ns0:p><ns0:p>-John Doe: institute: fs email: [email protected] orcid: 0000-0000-0000-0000 institute:</ns0:p><ns0:p>fs: Science Formatting Working Group</ns0:p></ns0:div>
<ns0:div><ns0:head>JATS support</ns0:head><ns0:p>The journal article tag suite (JATS) was developed by the NLM and standardized by ANSI/NISO as an archiving and exchange format of journal articles and the associated metadata <ns0:ref type='bibr'>(National Information Standards Organization, 2012)</ns0:ref>, including data of the type shown above. The pandoc-jats writer by Martin Fenner is a plugin usable with pandoc to produce JATS-formatted output. The writer was adapted to be compatible with our metadata authoring method, allowing for simple generation of files which contain the relevant metadata.</ns0:p></ns0:div>
<ns0:div><ns0:head>Citation types</ns0:head><ns0:p>Writers can add information about the reason a citation is given. This might help reviewers and readers, and can simplify the search for relevant literature. We developed an extended citation syntax that integrates</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>seamlessly into markdown and can be used to add complementary information to citations. Our method is based on CiTO, the Citation Typing Ontology <ns0:ref type='bibr'>(Shotton, 2010)</ns0:ref>, which specifies a vocabulary for the motivation when citing a resource. The type of a citation can be added to a markdown citation using @CITO_PROPERTY:KEY, where CITO_PROPERTY is a supported CiTO property, and KEY is the usual citation key. Our tool extracts that information and includes it in the generated linked data output. A general CiTO property (cites) is used, if no CiTO property is found in a citation key.</ns0:p><ns0:p>The work at hand will always be the subject of the generated semantic subject-predicate-object triples.</ns0:p><ns0:p>Some CiTO predicates cannot be used in a meaningful way under this condition. Focusing on author convenience, we use this fact to allow shortening of properties when sensible. E.g. if authors of a biological paper include a reference to the paper describing a method which was used in their work, this relation can be described by the uses_method_in property of the CiTO ontology. The inverse property, provides_method_for, would always be nonsensical in this context as implied by causality. It is therefore not supported by our tool. This allows us to introduce an abbreviation (method) for the former property, as any ambiguity has been eliminated. Users of western blotting might hence write @method_in:towbin_1979</ns0:p><ns0:p>or even just @method:towbin_1979, where towbin_1979 is the citation identifier of the describing paper by <ns0:ref type='bibr'>Towbin, Staehelin & Gordon (1979)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>EXAMPLE: MANUSCRIPT WITH OUTPUT OF DOCX/ ODT FORMAT AND LATEX/ PDF FOR SUBMISSION TO DIFFERENT JOURNALS.</ns0:head><ns0:p>Scientific manuscripts have to be submitted in a format defined by the journal or publisher. At the moment, DOCX is the most common file format for manuscript submission. Some publishers also accept or require LATEX or ODT formats. In addition to the general style of the manuscript -organization of sections, fonts, etc. -the citation style of the journal must also be followed. Often, the same manuscript has to be prepared for different journals, e.g. if the manuscript was rejected by a journal and has to be formatted for another one, or if a preprint of the paper is submitted to an archive that requires a distinct document format from the targeted peer-reviewed journal. In this example, we want to create a manuscript for a PLoS journal in DOCX and ODT format for WYSIWYG word processors. Further, a version in LATEX/ PDF should be produced for PeerJ submission and archiving at the PeerJ preprint server.</ns0:p><ns0:p>The examples for DOCX/ ODT are kept relatively simple, to show the proof-of-principle and to provide a plain document for the development of own templates. Nevertheless, the generated documents should be suitable for submission after little manual editing. For specific journals it may be necessary to create more sophisticated templates or to copy/ paste the generic DOCX/ ODT output into the publisher's template.</ns0:p></ns0:div>
<ns0:div><ns0:head>Development of a DOCX/ ODT template</ns0:head><ns0:p>A first DOCX document with bibliography in PLoS format is created with pandoc DOCX output:</ns0:p><ns0:p>pandoc -S -s --csl=plos.csl --filter pandoc-citeproc -o pandoc-manuscript.docx agile-editing-pandoc.md</ns0:p><ns0:p>The parameters -S -s generate a typographically correct (dashes, non-breaking spaces etc.) stand-alone document. A bibliography with the PLoS style is created by the citeproc filter setting --csl=plos.csl --filter pandoc-citeproc.</ns0:p><ns0:p>The document settings and styles of the resulting file pandoc-manuscript.docx can be optimized and be used again as document template (--reference-docx=pandoc-manuscript.docx).</ns0:p><ns0:p>pandoc -S -s --reference-docx=pandoc-manuscript.docx --csl=plos.csl --filter pandoc-citeproc -o outfile.docx agile-editing-pandoc.md</ns0:p><ns0:p>It is also possible to directly re-use a previous output file as template (i.e. template and output file have the same file name):</ns0:p><ns0:p>pandoc -S -s --columns=10 --reference-docx=pandoc-manuscript.docx --csl=plos.csl --filter=pandoc-citeproc -o pandoc-manuscript.docx agile-editing-pandoc.md</ns0:p><ns0:p>In this way, the template can be incrementally adjusted to the desired document formatting. The final document may be employed later as pandoc template for other manuscripts with the same specifications. In this case, running pandoc the first time with the template, the contents of the new manuscript would be filled into the provided DOCX template. A page with DOCX manuscript formatting of this article is shown in Fig. <ns0:ref type='figure' target='#fig_12'>8</ns0:ref>. The same procedure can be applied with an ODT formatted document.</ns0:p></ns0:div>
<ns0:div><ns0:head>Development of a TEX/PDF template</ns0:head><ns0:p>The default pandoc LATEX template can be written into a separate file by: pandoc -D latex > template-peerj.latex</ns0:p><ns0:p>This template can be adjusted, e.g. by defining Unicode encoding (see above), by including particular packages or setting document options (line numbering, font size). The template can then be used with the pandoc parameter --template=pandoc-peerj.latex.</ns0:p><ns0:p>The templates used for this document are included as Supplemental Material (see section Software and code availability below).</ns0:p></ns0:div>
<ns0:div><ns0:head>Styles for HTML and EPUB</ns0:head><ns0:p>The style for HTML and EPUB formats can be defined in .css stylesheets. The Supplemental Material contains a simple example .css file for modifying the HTML output, which can be used with the pandoc parameter -c pandoc.css.</ns0:p></ns0:div>
<ns0:div><ns0:head>AUTOMATING DOCUMENT PRODUCTION</ns0:head><ns0:p>The commands necessary to produce the document in specific formats or styles can be defined in a simple Makefile. An example Makefile is included in the source code of this preprint. The desired output file format can be chosen when calling make. E.g. make outfile.pdf produces this preprint in PDF format. Calling make without any option creates all listed document types. A Makefile producing DOCX, ODT, JATS, PDF, LATEX, HTML and EPUB files of this document is provided as Supplemental Material.</ns0:p></ns0:div>
<ns0:div><ns0:head>Cross-platform compatibility</ns0:head><ns0:p>The make process was tested on Windows 10 and Linux 64 bit. All documents -DOCX, ODT, JATS, LATEX, PDF, EPUB and HTML -were generated successfully, which demonstrates the cross-platform compatibility of the workflow.</ns0:p></ns0:div>
<ns0:div><ns0:head>PERSPECTIVE</ns0:head><ns0:p>Following the trend to peer production, the formatting of scientific content must become more efficient.</ns0:p><ns0:p>Markdown/ pandoc has the potential to play a key role in the transition from proprietary to community-driven academic production. Important research tools, such as the statistical computing and graphics language R (R Core Team, 2014) and the Jupyter notebook project <ns0:ref type='bibr'>(Kluyver et al., 2016)</ns0:ref> have already adopted the MD syntax (e.g. http://rmarkdown.rstudio.com/). The software for writing manuscripts in MD is mature enough to be used by academic writers. Therefore, publishers also should consider implementing the MD format into their editorial platforms.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS</ns0:head><ns0:p>Authoring scientific manuscripts in markdown (MD) format is straight-forward, and manual formatting is reduced to a minimum. The simple syntax of MD facilitates document editing and collaborative writing.</ns0:p><ns0:p>The rapid conversion of MD to multiple formats such as DOCX, LATEX, PDF, EPUB and HTML can be done easily using pandoc, and templates enable the automated generation of documents according to specific journal styles.</ns0:p><ns0:p>The additional features we implemented facilitate the correct indexing of meta information of journal articles according to the 'semantic web' philosophy.</ns0:p><ns0:p>Altogether, the MD format supports the agile writing and fast production of scientific literature. The associated time and cost reduction especially favours community-driven publication strategies.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>Journal of Statistical Software (JSS, https://www.jstatsoft.org/) and eLife (https://elifesciences.org/) demonstrate the possibility of completely community-supported OA publications. Fig. 1 compares the APCs of di erent OA publishing business models. JSS and eLife are peer-reviewed and indexed by Thomson Reuters. Both journals are located in the Q1 quality quartile in all their registered subject categories of the Scimago Journal & Country Rank (http://www.scimagojr.com/), demonstrating that high-quality publications can be produced without charging the scienti c authors or readers.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Article Processing Charge (APCs) that authors have to pay for with di erent Open Access (OA) publishing models. Data from (Solomon & Björk, 2016) and journal web-pages.</ns0:figDesc><ns0:graphic coords='4,178.42,352.95,340.20,255.48' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Estimated publishing cost for a 'hybrid' journal (conventional with Open Access option).Data from(Houghton et al., 2009).</ns0:figDesc><ns0:graphic coords='5,178.42,63.78,340.20,116.64' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>flavored MD (http://pandoc.org/) adds several extensions which facilitate the authoring of academic documents and their conversion into multiple output formats. Tab. 2 demonstrates the simplicity of MD compared to other markup languages. Fig. 3 illustrates the generation of various formatted documents from a manuscript in pandoc MD. Some relevant functions for scientific texts are explained below in more detail.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Workflow for the generation of multiple document formats with pandoc. The markdown (MD) file contains the manuscript text with formatting tags, and can also refer to external files such as images or reference databases. The pandoc processor converts the MD file to the desired output formats. Documents, citations etc. can be defined in style files or templates.</ns0:figDesc><ns0:graphic coords='7,235.12,326.04,226.80,254.64' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Fig. 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>summarizes various options for local or networked editing of MD files.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Markdown files can be edited on local devices or on cloud drives. A local or remote git repository enables advanced version control.</ns0:figDesc><ns0:graphic coords='8,206.80,63.78,283.44,172.68' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Fig. 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Fig. 5 shows the editing of a markdown file, using the cross-platform editor Atom with several markdown plugins.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Document directory tree, editing window and HTML preview using the Atom editor.</ns0:figDesc><ns0:graphic coords='9,141.73,99.03,413.60,227.05' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Direct online editing of this manuscript with live preview using the ownCloud Markdown Editor plugin by Robin Appelman.</ns0:figDesc><ns0:graphic coords='9,141.73,426.48,413.60,232.66' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Version control and collaborative editing using a git repository on bitbucket.</ns0:figDesc><ns0:graphic coords='10,141.73,213.28,413.60,232.66' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Opening a pandoc-generated DOCX in Microsoft Office 365.</ns0:figDesc><ns0:graphic coords='16,141.73,134.20,413.60,232.66' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Current standard formats for scienti c publishing.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Type</ns0:cell><ns0:cell>Description</ns0:cell><ns0:cell>Use</ns0:cell><ns0:cell>Syntax</ns0:cell><ns0:cell>Reference</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>DOCX Office Open XML WYSIWYG editing ODT OpenDocument WYSIWYG editing PDF portable document print replacement EPUB electronic publishing e-books JATS journal article tag suite journal publishing LATEX typesetting system high-quality print HTML hypertext markup websites MD Markdown lightweight markup</ns0:cell><ns0:cell cols='2'>XML, ZIP XML, ZIP PDF HTML5, ZIP XML TEX (X)HTML (Raggett et al., 1999; Hickson et al., (Ngo, 2006) (Brauer et al., 2005) (International Organization for Standardization, 2013) (Eikebrokk, Dahl & Kessel, 2014) (National Information Standards Organization, 2012) (Lamport, 1994) 2014) plain text MD (Ovadia, 2014; Leonard, 2016)</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Examples for formatting elements and their implementations in di erent markup languages.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Element</ns0:cell><ns0:cell>Markdown</ns0:cell><ns0:cell>LATEX</ns0:cell><ns0:cell>HTML</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>structure section subsection text style bold italics links HTTP link <https:// # Intro ## History **text** *text* arxiv.org></ns0:cell><ns0:cell>\section{Intro} \subsection{History} \textbf{text} \textit{text} \usepackage{url} \url{https://arxiv.org}</ns0:cell><ns0:cell><h1><Intro></h1> <h2><History></h2> <b>text</b> <i>text</i> <a href='https:// arxiv.org'></a></ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Dr. Robert Winkler
Profesor Investigador
Departamento Biotecnología y Bioquímica
CINVESTAV Unidad Irapuato
Km. 9.6 Libramiento Norte Carr. Irapuato-León
36821 Irapuato Gto., México
[email protected]
Tel.: +52-462-6239-635
PEERJ EDITOR
Response Letter for Manuscript:
Formatting Open Science: agile creation of multiple document types by writing
academic manuscripts with Pandoc Scholar
Irapuato, 14th of March, 2017
Dear Editor and dear Reviewers,
Thank you very much for the comments on our manuscript. Following we give a point-by-point
response.
Reviewer 1
Basic reporting
The manuscript does not follow the traditional structure of a research article but is still easy to
follow. However, I consider that it would benefit a more traditional IMRAD approach since that
would highlight the actual individual parts (and contribution) of the manuscript better than the
current custom format where parts are more blended together.
Authors: The article follows an OCAR structure (Opening, Challenge, Action, Resolution), a
more modern interpretation of IMRAD. We think that for this particular case the structure is
adequate, because the article is intended for a broad readership.
Rev. 1: I think the literature review concerning Open Access and article processing fees is only
tangentially relevant to the article, would consider de-emphasizing that and rather expanding
the review of literature that concerns the different markup languages (which I see as the main
contribution of this article).
Authors: The time and money currently spent on document formatting justify our proposal to
use pandoc markdown formatting. Therefore, this introductory part is important. The central
point is the (virtual) incompatibility of commonly used markup languages. We present a
possibility to overcome this problem by using markdown/ pandoc.
Rev. 1: Language is of good quality but should further be improved, counted a handful of
simple mistakes during my time with the manuscript.
Authors: The manuscript was revised by a native speaker; various mistakes were corrected.
Rev. 1: The discussion section is non-existent, there needs to be ties back into both existing
research as well as describing avenues for how this work can be used as an outgoing step for
future research.
Authors: Thank you very much for this observation. Our work presents a well supported study
about the advantages and the feasibility of using markdown in document formatting. For a
broad implementation, the participation of the community (researchers and publishers) is
necessary. Therefore, we suggest in the revised manuscript the adoption of MD based
workflows in academic environments/ publishing and also mention the availability of MD
packages in widely used scientific software such as R.
Experimental design
There is not really a research question in the traditional sense and that I think is the main
problem - the manuscript needs to be written more in the style of an academic article rather
than a more casual guide for how-to guide for using markdown.
I am not fully convinced that this article fulfils the criteria of being a 'research article' since
markdown and pandoc are presented largely in a vacuum from the section 'CONCEPTS OF
MARKDOWN
AND
PANDOC'
onwards.
Could some kind of evaluation criteria be introduced to compare the different markup
languages to each other to really emphasise the benefits of markdown? I think something like
this would be needed.
Authors: Our manuscript is certainly not a classic hypothesis-driven research project (which is
also true for most of high-throughput ‘Omics’ approaches in biology and medicine, see Winkler,
Frontiers
in
Plant
Science,
2016,
http://journal.frontiersin.org/article/10.3389/fpls.2016.00195/full). Rather, we expound the
current tendencies of publishing (Open Science and rapid divulgation of results using preprints
=> OPENING), the obstacles in the transition from traditional publishing models due to costs
(=> CHALLENGE) and test the feasibility of pandoc markdown for the generation different
types of document formats (=> ACTION). As result, pandoc markdown is a very attractive
option for writing academic articles (=> RESOLUTION). This OCAR structure is pretty common
for scientific works (Schimel, Writing Science, 2011, https://www.amazon.com/WritingScience-Papers-Proposals-Funded/dp/0199760233/ref=mt_hardcover?_encoding=UTF8&me=
).
In the last part of the section “Current standard publishing formats”, we explain the necessity
to produce different output formats of a manuscript and the difficulty of frequently needed
format conversions. We are not looking for the simplest markup language, but (last phrase of
this section) “ we investigated the possibility to generate multiple publication formats from a
simple manuscript source file.”.
Validity of the findings
The application of markdown in the context of academic writing warrants focus and I think it is
a worthy pursuit to conduct research around it. However, the current manuscript does not fulfil
some of the basic criteria for a research-based article but includes to much
anecdotal,unquestioned, or unevaluated solutions. I would like to read more about the benefits
of markdown vs. LATEX other than being more simple in its syntax.
Authors: Please refer to our answers regarding the experimental design and the research
question.
In the revised version, we put more emphasis on new functions which we implemented
ourselves. In the original version, we already provided a Perl parser, which can generate
an article-specific reference database from the cited literature.
Meanwhile, we implemented extended options to include meta-information in the
document, following the vision of the “semantic web”.
We also implemented an extension for the citation of literature: the reason for citing a
particular reference can be given by a CiTO-compliant tag.
These new features have been implemented by us for use with pandoc.
Comments for the author
I am interested in seeing how you may improve on the manuscript since I see potential in the
topic, however, the current packaging has a lot that can and should be improved before
publication.
Authors: We answered all comments of the reviewers. The revised manuscript contains various improvements compared to the previous submission, as well as original software solutions.
Reviewer 2
Basic reporting
This paper describes a strategy for manuscript writing that facilitates the creation of various output documents. The work is well-written and technically sound. Related work is sufficiently detailed, introducing Open Access and justifying the motivation of their work.
Authors: Thank you very much for your positive feedback.
In my opinion, some improvements could be introduced.
Rev. 2: - Some figures could be resized: Figure 1, 3 and 4 could be smaller. Figure 5, 6 and 7
could be bigger.
Authors: Figures 1, 3 and 4 were resized in the revised document. Figures 5-7 are screenshots which cannot be made larger in the PDF output without extensive workarounds. However, for HTML output, for example, those figures could be made bigger and thus more readable.
Rev. 2: - Table 2 may have some error. Tags h1 and h2 may have an additional '<' and '>'.
Example of bold typeface in HTML could have unnecessary elements (**).
Authors: Thank you for your observation, the syntax was corrected.
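For illustration, the corrected correspondence between markdown and the HTML produced by pandoc has the following form (standard syntax, shown here only as an example):

    # Heading level 1      ->  <h1>Heading level 1</h1>
    ## Heading level 2     ->  <h2>Heading level 2</h2>
    **bold text**          ->  <strong>bold text</strong>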
Rev. 2: - Some elements of Figure 3 could be better explained. Why is MD linked with bib? In this very figure, template is included in pandoc but no explanation is given about it.
Authors: Figure 3 was indeed not clear; we expanded the figure caption to better explain the workflow of markdown/pandoc document generation.
Rev. 2: - Figure 6 includes an example of ownCloud; however, a reader who does not know this platform may not understand what it is. Maybe a brief introduction could be helpful.
Authors: We included a short description of the ownCloud platform.
Experimental design
The proposed methodology is compared with other well-known methods and some examples are depicted.
However, some changes could be included:
Rev. 2: - Example of how to define additional parameters in figures should be included (Page
10, Line 244).
Authors: In the revised version, we give more information about additional parameters, such as image width and identifiers, together with the syntax and code examples.
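For illustration, a figure with such additional parameters can be written along the following lines (a sketch only: the width attribute relies on pandoc's link_attributes extension, the '#fig:' identifier assumes a cross-referencing filter such as pandoc-crossref, and the file name is a placeholder):

    ![Scheme of the document generation workflow.](figures/workflow.png){#fig:workflow width=80%}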
Rev. 2: - In subsection Symbols (Page 10) an example of latex is included. In my opinion, it is not clear whether the authors are comparing with latex or whether this example is the instruction in case you want to transform MD to Latex.
Authors: This part was not written clearly enough. It refers to the processing of MD to LATEX/PDF. We changed the text to “The correct processing of MD with UTF-8 encoding to LATEX/PDF output requires the use ..”
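A minimal sketch of such a conversion could look as follows (file names are placeholders; depending on the pandoc version the flag is --latex-engine or --pdf-engine):

    pandoc -s manuscript.md --latex-engine=xelatex -o manuscript.pdf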
Rev. 2: - Section of 'Manuscript with output of DOCX/ODT format and LATEX/PDF for
submission to different journals' should be better explained. The listed command could be
described. In my opinion, the aim of the section and the commands should be better defined.
The use of templates is interesting but it is not well-defined.
Authors: We now explain the objective of this section in more detail and provide explanations of the given command examples. As mentioned below, our intention here is to demonstrate the feasibility of the parallel creation of LATEX and DOCX/ODT documents according to defined journal/bibliography styles.
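For illustration, the parallel creation of both output types from the same source could be invoked along these lines (a sketch only: the bibliography, CSL and template file names are placeholders, and the exact flags depend on the pandoc version, e.g. --reference-docx was later renamed --reference-doc and the pandoc-citeproc filter was replaced by the built-in --citeproc):

    pandoc -s manuscript.md --filter pandoc-citeproc --bibliography refs.bib --csl journal.csl --reference-docx journal-template.docx -o manuscript.docx
    pandoc -s manuscript.md --filter pandoc-citeproc --bibliography refs.bib --csl journal.csl --template journal-template.tex -o manuscript.pdf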
Validity of the findings
Conclusions are clear and well stated.
Comments for the author
Rev. 2: The use of templates is one of the most interesting parts of Pandoc, but almost no information is provided. This is the weak link of the article.
Authors: For the revised version we cleaned up the DOC/ODT templates and explain their usability and limitations. The complexity of some official journal templates makes their quick adaptation to pandoc templates problematic. But our intention here is to demonstrate the proof of concept and to give a starting point for users' own journal-specific templates. This is now stated clearly in the revised manuscript.
Since we expanded the scientific content of our manuscript and replied constructively to all
comments of the reviewers, we hope that our paper is now ready for acceptance in PeerJ.
Best Regards,
Albert Krewinkel and Robert Winkler
" | Here is a paper. Please give your review comments after reading it. |
736 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>We developed a web-based cloud-hosted system that allows users to archive, listen to, visualize, and annotate recordings. The system also provides tools to convert these annotations into datasets that can be used to train a computer to detect the presence or absence of a species. The algorithm used by the system was selected after comparing the accuracy and efficiency of three variants of a template-based detection algorithm. The algorithm computes a similarity vector by comparing a template of a species call with time increments across the spectrogram. Statistical features are extracted from this vector and used as input for a Random Forest classifier that predicts presence or absence of the species in the recording. The fastest algorithm variant had the highest average accuracy and specificity; therefore, it was implemented in the ARBIMON web-based system.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Monitoring fauna is an important task for ecologists, natural resource managers, and conservationists.</ns0:p><ns0:p>Historically, most data were collected manually by scientists that went to the field and annotated their observations <ns0:ref type='bibr' target='#b27'>(Terborgh et al., 1990)</ns0:ref>. This generally limited the spatial and temporal extend of the data. Furthermore, given that the data were based on an individual's observations, the information was difficult to verify, reducing its utility for understanding long-term ecological processes <ns0:ref type='bibr' target='#b0'>(Acevedo and Villanueva-Rivera, 2006)</ns0:ref>.</ns0:p><ns0:p>To understand the impacts of climate change and deforestation on the fauna, the scientific community needs long-term, wide-spread and frequent data <ns0:ref type='bibr' target='#b31'>(Walther et al., 2002)</ns0:ref>. Passive acoustic monitoring (PAM) can contribute to this need because it facilitates the collection of large amounts of data from many sites simultaneously, and with virtually no impact to the fauna and environment <ns0:ref type='bibr' target='#b8'>(Brandes, 2008;</ns0:ref><ns0:ref type='bibr' target='#b19'>Lammers et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b28'>Tricas and Boyle, 2009;</ns0:ref><ns0:ref type='bibr' target='#b10'>Celis-Murillo et al., 2012)</ns0:ref>. In general, PAM systems include a microphone or a hydrophone connected to a self powered system and enough memory to store various weeks or months of recordings, but there are also permanent systems that use solar panels and an Internet connection to upload recordings in real time to a cloud based analytical platform <ns0:ref type='bibr' target='#b3'>(Aide et al., 2013)</ns0:ref>.</ns0:p><ns0:p>Passive recorders can easily create a very large data set (e.g. 100,000s of recordings) that is overwhelming to manage and analyze. Although researchers often collect recordings twenty-four hours a day for weeks or months <ns0:ref type='bibr' target='#b0'>(Acevedo and Villanueva-Rivera, 2006;</ns0:ref><ns0:ref type='bibr' target='#b8'>Brandes, 2008;</ns0:ref><ns0:ref type='bibr' target='#b19'>Lammers et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b25'>Sueur et al., 2008;</ns0:ref><ns0:ref type='bibr'>Marques et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b6'>Blumstein et al., 2011)</ns0:ref>, in practice, most studies have only analyzed a small percentage of the total number of recordings.</ns0:p><ns0:p>Web-based applications have been developed to facilitate data management of these increasingly large datasets <ns0:ref type='bibr' target='#b3'>(Aide et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b30'>Villanueva-Rivera and Pijanowski, 2012)</ns0:ref>, but the biggest challenge is to develop efficient and accurate algorithms for detecting the presence or absence of a species in many recordings. 
Algorithms for species identification have been developed using spectrogram matched filtering <ns0:ref type='bibr' target='#b12'>(Clark et al., 1987;</ns0:ref><ns0:ref type='bibr' target='#b11'>Chabot, 1988)</ns0:ref>, statistical feature extraction <ns0:ref type='bibr' target='#b26'>(Taylor, 1995;</ns0:ref><ns0:ref type='bibr' target='#b15'>Grigg et al., 1996)</ns0:ref>, k-Nearest neighbor algorithm <ns0:ref type='bibr' target='#b17'>(Hana et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b16'>Gunasekaran and Revathy, 2010)</ns0:ref>, Support Vector Machine <ns0:ref type='bibr' target='#b13'>(Fagerlund, 2007;</ns0:ref><ns0:ref type='bibr' target='#b1'>Acevedo et al., 2009)</ns0:ref>, tree-based classifiers <ns0:ref type='bibr' target='#b2'>(Adams et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b18'>Henderson and Hildebrand, 2011)</ns0:ref> and template based detection <ns0:ref type='bibr' target='#b5'>(Anderson et al., 1996;</ns0:ref><ns0:ref type='bibr' target='#b24'>Mellinger and Clark, 2000)</ns0:ref>, PeerJ Comput. Sci. reviewing PDF | (CS-2016:12:15269:1:1:NEW 14 Mar 2017)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science but most of these algorithms are built for a specific species and there was no infrastructure provided for the user to create models for other species.</ns0:p><ns0:p>In this study, we developed a method that detects the presence or absence of a species' specific call type in recordings with a response time that allows researchers to create, run, tune and re-run models in real time as well as detect hundreds of thousands of recordings in a reasonable time. The main objective of the study was to compare the performance (e.g. efficiency and accuracy) of three variants of a template-based detection algorithm and incorporate the best into the ARBIMON II bioacoustics platform. The first variant is the Structural Similarity Index described in <ns0:ref type='bibr' target='#b32'>Wang et al. (2004)</ns0:ref>, a widely use method to find how similar two images are (in our case the template with the tested recording). The second method filters the recordings with the dynamic thresholding method described in <ns0:ref type='bibr' target='#b32'>Wang et al. (2004)</ns0:ref> and then use the Frobenius norm to find similarities with the template. The final method uses the Structural Similarity Index, but it is only applied to regions with high match probability determined by the OpenCV's matchTemplate procedure <ns0:ref type='bibr' target='#b7'>(Bradski, 2000)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>MATERIALS AND METHODS</ns0:head></ns0:div>
<ns0:div><ns0:head>Passive acoustic data acquisition</ns0:head><ns0:p>We gathered recordings from five locations, four in Puerto Rico and one in Peru. Some of the recordings were acquired using the Automated Remote Biodiversity Monitoring Network (ARBIMON) data acquisition system described in <ns0:ref type='bibr' target='#b3'>Aide et al. (2013)</ns0:ref>, while others were acquired using the newest version of ARBIMON permanent recording station, which uses an Android cell phone and transmits the recorded data through a cellular network. All recordings have a sampling rate of 44.1kHz, a sampling depth of 16-bit and an approximate duration of 60 seconds (±.5s)</ns0:p><ns0:p>The locations in Puerto Rico were the Sabana Seca permanent station in Toa Baja, the Casa la Selva station in Carite Mountains (Patillas), El Yunque National Forest in Rio Grande and Mona Island (see Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>). The location in Peru was the Amarakaeri Communal Reserve in the Madre de Dios Region (see Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>). In all the locations, the recorders were programmed to record one minute of audio every 10 minutes. The complete dataset has more than 100,000 1-minute recordings. We randomly chose 362 recordings from Puerto Rico and 547 recordings from Peru for comparing the three algorithm variants. We used the ARBIMON II web application interface to annotate the presence or absence of 21 species in all the recordings. Regions in the recording where a species emits a sound were also marked using the web interface. Each region of interest (ROI) is a rectangle delimited by starting time, ending time, lowest frequency and highest frequency along with a species and sound type. The species included in the analysis are listed in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, along with the number of total recordings and the number of recordings where the species is present or absent.</ns0:p></ns0:div>
<ns0:div><ns0:head>Algorithm</ns0:head><ns0:p>The recognition process of the algorithm is divided into three phases: 1) Template Computation, 2) Model Training and 3) Detection (see Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>). In the following sections the Template Computation process will be explained, then the process of using the Template to extract features from a recording is presented and, finally, the procedures to use the features to train the model and to detect recordings are discussed.</ns0:p></ns0:div>
<ns0:div><ns0:head>Template Computation</ns0:head><ns0:p>The template refers to the combination of all ROIs in the training data. To create a template, we first start</ns0:p><ns0:p>with the examples of the specific call of interest (i.e. ROIs) that were annotated from a set of recordings for a given species and a specific call type (e.g. common, alarm). Each ROI encompasses an example of the call, and is an instance of time between time t 1 and time t 2 of a given recording and low and high boundary frequencies of f 1 and f 2 , where t 1 < t 2 and f 1 < f 2. In a general sense, we combine these examples to produce a template of a specific song type of a single species.</ns0:p><ns0:p>Specifically, for each recording that has an annotated ROI, a spectrogram matrix (SM) is computed using the Short Time Fourier Transform with a frame size of 1024 samples, 512 samples of overlap and a Hann analysis window, thus the matrices have 512 rows. For a recording with a sampling rate of 44,100 Hz, the matrix bin bandwidth is approximately 43.06 Hz. The SM is arranged so that the row of index 0 represents the lowest frequency and the row with index 511 represents the highest frequency of the spectrum. Properly stated the columns c 1 to c 2 and the rows from r 1 to r 2 of SM were extracted, where:</ns0:p><ns0:formula xml:id='formula_0'>c 1 = ⌊t 1 × 44100⌋, c 2 = ⌊t 2 × 44100⌋, r 1 = ⌊ f 1 /43.06⌋ and r 2 = ⌊ f 2 /43.06⌋.</ns0:formula><ns0:p>The rows and columns that represent the ROI in the recording (between frequencies f 1 and f 2 and between times t 1 and t 2 ) are extracted. The submatrix of SM that contains only the area bounded by the ROI is define as SM ROI and refer in the manuscript as the ROI matrix.</ns0:p><ns0:p>Since the ROI matrices can vary in size, to compute the aggregation from the ROI matrices we have to take into account the difference in the number of rows and columns of the matrices. All recordings have the same sampling rate, 44100Hz. Thus, the rows from different SMs, computed with the same parameters, will represent the same frequencies, i.e. rows with same indexes represent the same frequency.</ns0:p><ns0:p>After the ROI matrix, SM ROI , has been extracted from SM, the rows of SM ROI will also represent specific frequencies. Thus, if we were to perform an element-wise matrix sum between two ROI matrices with potentially different number of rows, we should only sum rows that represent the same frequency.</ns0:p></ns0:div>
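For illustration, the spectrogram and ROI extraction described above can be sketched as follows (a minimal sketch, not the ARBIMON production code; the file name and ROI bounds are placeholders, the use of numpy/scipy is an assumption, and with these parameters scipy returns 513 frequency rows rather than the 512 stated in the text):

    import numpy as np
    from scipy.io import wavfile
    from scipy.signal import stft

    fs, audio = wavfile.read("recording.wav")            # placeholder path; expected fs = 44100 Hz
    freqs, times, Z = stft(audio, fs=fs, window="hann",
                           nperseg=1024, noverlap=512)
    SM = np.abs(Z)                                        # spectrogram matrix

    # ROI bounded by times t1 < t2 (seconds) and frequencies f1 < f2 (Hz); example values
    t1, t2, f1, f2 = 12.0, 12.8, 1800.0, 2600.0
    bin_bw = fs / 1024.0                                  # ~43.06 Hz per frequency bin
    r1, r2 = int(f1 // bin_bw), int(f2 // bin_bw)
    c1, c2 = np.searchsorted(times, t1), np.searchsorted(times, t2)
    SM_ROI = SM[r1:r2 + 1, c1:c2 + 1]                     # the ROI matrix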
<ns0:div><ns0:p>To take into account the difference in the number of columns of the ROI matrices, we use the Frobenius norm to optimize the alignment of the smaller ROI matrices and perform element-wise sums between rows that represent the same frequency. We present that algorithm in the following section and a flow chart of the process in Figure <ns0:ref type='figure' target='#fig_3'>4</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Template Computation Algorithm:</ns0:head><ns0:p>1. Generate the set of SM ROI matrices by computing the short time Fourier Transform of all the user generated ROIs.</ns0:p><ns0:p>2. Create matrix SM max , a duplicate of the first created matrix among the matrices with the largest number of columns.</ns0:p><ns0:p>3. Set c max as the number of columns in SM max 4. Create matrix T temp , with the same dimensions as SM max and all entries equal to 0. This matrix will contain the element-wise addition of all the extracted SM ROI matrices.</ns0:p><ns0:p>5. Create matrix W with the same dimensions of SM max and all entries equal to 0. This matrix will hold the count on the number of SM ROI matrices that participate in the calculation of each element of T temp .</ns0:p><ns0:p>6. For each one of the SM i ROI matrices in SM ROI : (a) If SM i has the same number of columns as T temp : i. Align the rows of SM i and T temp so they represent equivalent frequencies and perform an element-wise addition of the matrices and put the result in T temp .</ns0:p><ns0:p>ii. Add one to all the elements of the W matrix where the previous addition participated.</ns0:p><ns0:p>(b) If the number of columns differs between SM i and T temp , then find the optimal alignment with SM max as follows:</ns0:p><ns0:p>i. Set c i as the number of columns in SM i .</ns0:p><ns0:p>ii. Define (SM max ) I as the set of all submatrices of SM max with the same dimensions as</ns0:p><ns0:formula xml:id='formula_1'>SM i . Note that the cardinality of (SM max ) I is c max − c i . iii. For each Sub k ∈ (SM max ) I : A. Compute d k = NORM(Sub k − SM i )</ns0:formula><ns0:p>where NORM is the Frobenius norm defined as:</ns0:p><ns0:formula xml:id='formula_2'>NORM(A) = ∑ (i, j) |a 2 i, j |</ns0:formula><ns0:p>where a i, j are the elements of matrix A.</ns0:p><ns0:p>iv. Define Sub min{d k } as the Sub k matrix with the minimum d k . This is the optimal alignment of SM i with SM max .</ns0:p><ns0:p>v. Align the rows of Sub min{d k } and T temp so they represent equivalent frequencies, perform an element-wise addition of the matrices and put the result in T temp .</ns0:p><ns0:p>vi. Add one to all the elements of the W matrix where the previous addition participated.</ns0:p><ns0:p>7. Define the matrix T template as the element-wise division between the T temp matrix and the W matrix.</ns0:p><ns0:p>The resulting T template matrix summarizes the information available in the ROI matrices submitted by the user and it will be used to extract information from the audio recordings that are to be analyzed. In this article each species T template was created using five ROIs.</ns0:p><ns0:p>In Figure <ns0:ref type='figure' target='#fig_4'>5a</ns0:ref> a training set for the Eleutherodactylus coqui is presented and in Figure <ns0:ref type='figure' target='#fig_4'>5b</ns0:ref> the resulting template can be seen. This tool is very useful because the user can see immediately the effect of adding or subtracting a specific sample to the training set.</ns0:p></ns0:div>
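A minimal sketch of the aggregation step described above, assuming all ROI matrices already share the same rows (i.e. the same frequency band); the function and variable names are illustrative and this is not the system's actual implementation:

    import numpy as np

    def build_template(roi_matrices):
        # duplicate of the first matrix among those with the largest number of columns
        widest = max(roi_matrices, key=lambda m: m.shape[1])
        total = np.zeros(widest.shape)    # running element-wise sum (T_temp)
        weight = np.zeros(widest.shape)   # how many ROIs contributed to each element (W)
        for roi in roi_matrices:
            c = roi.shape[1]
            if c == widest.shape[1]:
                k_best = 0
            else:
                # optimal alignment: offset that minimises the Frobenius norm of the difference
                dists = [np.linalg.norm(widest[:, k:k + c] - roi)
                         for k in range(widest.shape[1] - c + 1)]
                k_best = int(np.argmin(dists))
            total[:, k_best:k_best + c] += roi
            weight[:, k_best:k_best + c] += 1.0
        return total / np.maximum(weight, 1.0)  # element-wise average (T_template)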
<ns0:div><ns0:head>Model Training</ns0:head><ns0:p>The goal of this phase is to train a random forest model. The input to train the random forest is a series of statistical features extracted from vectors V i that are created by computing a recognition function (similarity measure) between the computed T template and submatrices of the spectrogram matrices of a series of recordings.</ns0:p><ns0:p>In the following section we present the details of the algorithm that processes a recording to create the recognition function vector and in Figure <ns0:ref type='figure' target='#fig_5'>6</ns0:ref>, we present a flowchart of the process.</ns0:p><ns0:p>Algorithm to Create the Similarity Vector:</ns0:p><ns0:p>1. Compute matrix SPEC, the submatrix of the spectrogram matrix that contains the frequencies in T template . Note that we are dealing with recordings that have the same sample rate as the recordings used to compute the T template .</ns0:p><ns0:p>2. Define c SPEC , the number of columns of SPEC.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.'>Define</ns0:head><ns0:p>(b) Increase i by 1. Note that this is equivalent to progressing step columns in the SPEC matrix. 9. Define the vector V as the vector containing the n similarity measures resulting from the previous steps.</ns0:p><ns0:formula xml:id='formula_3'>That is, V = [meas 1 , meas 2 , meas 3 , • • • , meas n ].</ns0:formula></ns0:div>
<ns0:div><ns0:head>Recognition Function</ns0:head><ns0:p>We used three variations of a pattern match procedure to define the similarity measure vector V . First, the Structural Similarity Index described in <ns0:ref type='bibr' target='#b32'>Wang et al. (2004)</ns0:ref> and implemented in van der Walt et al. ( <ns0:ref type='formula'>2014</ns0:ref>) as compare_ssim with the default window size of seven unless the generated pattern is smaller. It will be referred to in the rest of the manuscript as the SSIM variant. For the SSIM variant we define meas i as:</ns0:p><ns0:formula xml:id='formula_4'>meas i = SSI(T template , SPEC i ) ,</ns0:formula><ns0:p>where SPEC i is the submatrix of SPEC that spans the columns from i × step to i × step + c template and has the same number of rows as T template , and</ns0:p><ns0:formula xml:id='formula_5'>V = [meas 1 , meas 2 , meas 3 , • • • , meas n ] with n = (c SPEC − c template )/step + 1.</ns0:formula><ns0:p>Second, the dynamic thresholding method (threshold_adaptive) described in Wang et al. ( <ns0:ref type='formula'>2004</ns0:ref>) with a block size of 127 and an arithmetic mean filter is applied to both T template and SPEC i before multiplying them element-wise and applying the Frobenius norm, normalized by the norm of a matrix with the same dimensions as T template and all elements equal to one. Therefore, meas i for the NORM variant is defined as:</ns0:p><ns0:formula xml:id='formula_6'>meas i = FN(DTM(T template ) . * DTM(SPEC i ))/FN(U) ,</ns0:formula><ns0:p>where again SPEC i is the submatrix of SPEC that spans the columns from i × step to i × step + c template , FN is the Frobenius norm, DTM is the dynamic thresholding method, U is a matrix with the same dimensions as T template with all elements equal to one and . * performs an element-wise multiplication of the matrices.</ns0:p><ns0:formula xml:id='formula_7'>Again, V = [meas 1 , meas 2 , meas 3 , • • • , meas n ] with n = (c SPEC − c template )/step + 1.</ns0:formula><ns0:p>Finally, for the CORR variation we first apply OpenCV's matchTemplate procedure <ns0:ref type='bibr' target='#b7'>(Bradski, 2000)</ns0:ref> with the Normalized Correlation Coefficient option to SPEC i , the submatrix of SPEC that spans the columns from i × step to i × step + c template . However, for this variant, SPEC i includes two additional rows above and below, thus it is slightly larger than T template . With these we can define:</ns0:p><ns0:formula xml:id='formula_8'>meas j,i = CORR(T template , SPEC j,i )</ns0:formula><ns0:p>where SPEC j,i is the submatrix of SPEC i that starts at row j (note that there are 5 such SPEC j,i matrices).</ns0:p><ns0:p>Now, we select 5 points at random from all the points above the 98.5 percentile of meas j,i and apply the Structural Similarity Index to these 5 strongly-matching regions. The size of these regions is eight thirds (8/3) of the length of T template , 4/3 before and 4/3 after the strongly-matched point. Then, define FilterSPEC as the matrix that contains these 5 strongly-matching regions and FilterSPEC i as the submatrix of FilterSPEC that spans the columns from i to i + c template ; then, the similarity measure for this variant is defined as:</ns0:p><ns0:formula xml:id='formula_9'>meas i = SSI(T template , FilterSPEC i )</ns0:formula><ns0:p>and the resulting vector V = [meas 1 , meas 2 , meas 3 , • • • , meas n ] but this time with n = 5 × 8/3 × c template + 1 .</ns0:p></ns0:div>
<ns0:div><ns0:p>It is important to note that no matter which variant is used to calculate the similarity measures, the result will always be a vector of measurements V . The idea is that the statistical properties of these computed recognition functions have enough information to distinguish between a recording that has the target species present and a recording that does not have the target species present. However, notice that since c SPEC , the length of SPEC, is much larger than c template , the length of the vector V for the CORR variant is much smaller than for the other two.</ns0:p></ns0:div>
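For illustration, the SSIM variant of the recognition function can be sketched as follows (a simplified sketch: the paper used compare_ssim from scikit-image, which in recent releases is exposed as skimage.metrics.structural_similarity, and the data_range and window-size handling shown here are assumptions):

    import numpy as np
    from skimage.metrics import structural_similarity as ssim

    def similarity_vector(SPEC, template, step=16):
        # slide the template across the spectrogram in jumps of `step` columns
        c_spec, c_tmpl = SPEC.shape[1], template.shape[1]
        scores = []
        for i in range(0, c_spec - c_tmpl + 1, step):
            window = SPEC[:, i:i + c_tmpl]
            rng = max(window.max(), template.max()) - min(window.min(), template.min())
            scores.append(ssim(template, window, data_range=rng))
        return np.array(scores)  # the similarity vector V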
<ns0:div><ns0:head>Random Forest Model Creation</ns0:head><ns0:p>After calculating V for many recordings we can train a random forest model. First, we need a set of validated recordings with the specific species vocalization present in some recordings and absent in others.</ns0:p><ns0:p>Then for each recording we compute a vector V i as described in the previous section and extract the statistical features presented in Table <ns0:ref type='table' target='#tab_2'>2</ns0:ref>. The statistical features extracted from vector V represent the dataset used to train the random forest model, which will be used to detect recordings for presence or absence of a species call event. These 12 features along with the species presence information are used as input to a random forest classifier with 1000 trees.</ns0:p></ns0:div>
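A minimal sketch of this training step (only a subset of the 12 features of Table 2 is shown, and the similarity vectors and presence labels are placeholders, not real data):

    import numpy as np
    from scipy.stats import skew, kurtosis
    from sklearn.ensemble import RandomForestClassifier

    def feature_row(v):
        # a subset of the statistical features of Table 2
        return [np.mean(v), np.median(v), np.min(v), np.max(v), np.std(v),
                np.max(v) - np.min(v), skew(v), kurtosis(v)]

    # placeholder data: one similarity vector and one presence label per recording
    vectors = [np.random.rand(200) for _ in range(20)]
    labels = np.random.randint(0, 2, 20)

    X = np.array([feature_row(v) for v in vectors])
    model = RandomForestClassifier(n_estimators=1000)
    model.fit(X, labels)   # labels: 1 = species present, 0 = absent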
<ns0:div><ns0:head>Recording Detection</ns0:head><ns0:p>Now that we have a trained model to detect a recording, we have to compute the statistical features from the similarity vector V of the selected recording. This is performed in the same way as it was described in the previous section. These features are then used as the input dataset to the previously trained random forest classifier and a label indicating presence or absence of the species in the recording is given as output.</ns0:p></ns0:div>
<ns0:div><ns0:head>The Experiment</ns0:head><ns0:p>To decide which of the three variants should be incorporated into the ARBIMON web-based system, we performed the algorithm explained in the previous section with each of the similarity measures. We computed 10-fold validations on each of the variants to obtain measurements of the performance of the algorithm. In each validation 90% of the data is used as training and 10% of the data is used as validation data. Each algorithm variant used the same 10-fold validation partition for each species. The measures calculated were the area under the receiver operating characteristic (ROC) curve (AUC), accuracy or correct detection rate (Ac), negative predictive value (N pv), precision or positive predictive value (Pr), sensitivity, recall or true positive rate (Se) and specificity or true negative rate (Sp). To calculate the AUC, the ROC curve is created by plotting the false positive rate (which can be calculated as 1 - specificity) against the true positive rate (sensitivity), then the AUC is obtained by calculating the area under that curve. Notice that the further the AUC is from 0.5 the better. The rest of the measures are defined as follows:</ns0:p><ns0:formula xml:id='formula_10'>Ac = (t p + t n ) / (t p + t n + f p + f n ), N pv = t n / (t n + f n ), Pr = t p / (t p + f p ), Se = t p / (t p + f n ) and Sp = t n / (t n + f p ),</ns0:formula><ns0:p>with t p the number of true positives (number of times both the expert and the algorithm agree that the species is present), t n the number of true negatives (number of times both the expert and the algorithm agree that the species is not present), f p the number of false positives (number of times the algorithm states that the species is present while the expert states it is absent) and f n the number of false negatives (number of times the algorithm states that the species is not present while the expert states it is present).</ns0:p><ns0:p>Note that accuracy is a weighted average of the sensitivity and the specificity.</ns0:p><ns0:p>Although we present and discuss all measures, we gave accuracy and the AUC more importance because they include information on the true positive and true negative rates. Specifically, AUC is important when the number of positives is different from the number of negatives, as is the case with some of the species.</ns0:p><ns0:p>The experiment was performed on a computer with an Intel i7 4790K 4-core processor at 4.00 GHz with 32GB of RAM running Ubuntu Linux. The execution time needed to process each recording was recorded, and the mean and standard deviation of the execution times were calculated for each variant of the algorithm. We also computed the number of pixels in all the T template matrices and correlated it with the execution time of each of the variants.</ns0:p><ns0:p>A global one-way analysis of variance (ANOVA) was performed on the five calculated measures across all of the 10-fold validations to identify whether there was a significant difference between the variants of the algorithm. Then a post-hoc Tukey HSD comparison test was performed to identify which one of the variants was significantly different at the 95% confidence level. Additionally, an ANOVA was performed locally between the 10-fold validations of each species and on the mean execution time for each species across the algorithm variants to identify whether there was any significant execution time difference at the 95% confidence level. 
Similarly, a post-hoc Tukey HSD comparison test was performed on the execution times.</ns0:p></ns0:div>
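For illustration, the measures defined above can be computed directly from the confusion-matrix counts (a minimal sketch; names are illustrative):

    def performance_measures(tp, tn, fp, fn):
        # measures defined in the text, from confusion-matrix counts
        return {
            "accuracy":    (tp + tn) / (tp + tn + fp + fn),
            "npv":         tn / (tn + fn),
            "precision":   tp / (tp + fp),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
        }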
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>The six measurements (area under the ROC curve -AUC, accuracy, negative predictive value, precision, sensitivity and specificity) computed to compared the model across the three variants varied greatly among the 21 species. The lowest scores were among bird species while most of the highest scores came from amphibian species. Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref> presents a summary of the results of the measurements comparing the three variants of the algorithm (for a detail presentation see Appendix 1). The NORM variant did not have the highest value for any of the measures summarized in Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref>, while the CORR variant had a greater number of species with 80% or greater for all the measures and an overall median accuracy of 81%. We considered these two facts fundamental for a general-purpose species detection system.</ns0:p><ns0:p>The local species ANOVA suggested that there are significant accuracy differences at the 95% significance level for 6 of the 21 species studied as well as 4 in terms of precision and 3 in terms of specificity (see supplemental materials). The algorithm variant CORR had a higher mean and median AUC at 78% and 81% respectively, but the SSIM variant seems to be more stable with a standard deviation of 20%. In terms of accuracy, both the SSIM and CORR have higher mean accuracy than the NORM variant. Nevertheless, variant CORR had the highest median accuracy of 81%, which is slightly higher than the median accuracy of the SSIM variant at 76%. In addition, variant CORR had more species with an accuracy of 80% or greater.</ns0:p><ns0:p>In terms of median precision, the three variants had similar values, although in terms of mean precision variants SSIM and CORR have greater values than the NORM variant. Moreover, the median and mean precision of the SSIM variant were only 1% higher than the median and mean precision of the CORR variant. In terms of sensitivity, variants SSIM and CORR had greater values than the NORM variant. It is only in terms of specificity that the CORR variant has greater values than all other variants. Figures <ns0:ref type='figure' target='#fig_8'>7 and 8</ns0:ref> present a summary of these results with whisker graphs.</ns0:p><ns0:p>In terms of execution times, an ANOVA analysis on the mean execution times suggests a difference between the variants (F = 9.9341e + 30, d f = 3, p < 2.2e − 16). The CORR variant had the lowest mean execution time at 0.255s followed closely by the NORM variant with 0.271s, while the SSIM variant had the slowest mean execution time of 2.269s (Figure <ns0:ref type='figure' target='#fig_9'>9</ns0:ref>). The Tukey HSD test suggests that there was no statistical significant difference between the mean execution times of the NORM and CORR variants (p = 0.999). However, there was a statistical significant difference at the 95% confidence level between the mean execution times of all other pairs of variants, specifically variants SSIM and CORR (p < 2.2e − 16). Moreover, the mean execution time of the SSIM variant increased as the number of pixels in the T template matrix increases (Figure <ns0:ref type='figure' target='#fig_9'>9b</ns0:ref>). 
There was no statistically significant relationship between the T template pixel size and the execution time for the other two variants (Table <ns0:ref type='table' target='#tab_3'>4</ns0:ref>). In summary, variants SSIM and CORR outperform the NORM variant in most of the statistical measures computed, having statistically significantly higher accuracy for three species each. In terms of execution time, the CORR variant was faster than the SSIM variant (Table <ns0:ref type='table' target='#tab_4'>3</ns0:ref>), and the mean execution time of the CORR variant did not increase with increasing T template size (Table <ns0:ref type='table' target='#tab_3'>4</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>The algorithm used by the ARBIMON system was selected by comparing three variants of a template-based method for the detection of presence or absence of a species vocalization in recordings. The most important features for selecting the algorithm were that it works well for many types of species calls and that it can process hundreds of thousands of recordings in a reasonable amount of time. The CORR algorithm was selected because of its speed and its comparable performance in terms of detection efficiency with the SSIM variant. It achieved AUC and accuracy of 0.80 or better in 12 of the 21 species and sensitivity of 0.80 or more in 11 of the 21 species, and the average execution time of 0.26s per minute of recording means that it can process around 14,000 minutes of recordings per hour.</ns0:p><ns0:p>The difference in execution time between the SSIM variant and the other two was due to a memory issue, where c SPEC and c template are the number of columns in SPEC and T template respectively and r template is the number of rows in T template . The only explanation we can give is that the SSIM function uses a uniformly distributed filter (uniform_filter) that has a limit on the size of the memory buffer (4000 64-bit doubles divided by the number of elements in the dimension being processed). Therefore, as the size of T template increases, the number of calls to allocate the buffer, free it and allocate it again can become a burden, since it has a smaller locality of reference even when the machine has enough memory and cache to handle the process. Further investigation is required to confirm this.</ns0:p><ns0:p>An interesting comparison is the method described in the work by <ns0:ref type='bibr' target='#b14'>(Fodor, 2013)</ns0:ref> and adapted and tested by <ns0:ref type='bibr' target='#b20'>(Lasseck, 2013)</ns0:ref>. This method was designed for the Neural Information Processing Scaled for Bioacoustics (NIPS4B) competition and, although the results are very good, they do not report on execution time. As we have mentioned, it is very important to us to have a method that provides good response times, and the execution time of Lasseck's method seems to be greater than ours given the extensive pre-processing that method performs.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS AND FUTURE WORK</ns0:head><ns0:p>Now that passive autonomous acoustic recorders are readily available, the amount of data is growing exponentially. For example, one permanent station recording one minute out of every 10 minutes every day of the year generates 52,560 one-minute recordings. If this is multiplied by the need to monitor thousands of locations across the planet, one can understand the magnitude of the task at hand.</ns0:p></ns0:div>
<ns0:div><ns0:p>We have shown how the algorithm used in the ARBIMON II web-based cloud-hosted system was selected. We compared the detection performance and the efficiency in terms of execution time of three variants of a template-based detection algorithm. The result was a method that uses the power of a widely used method to determine the similarity between two images (the Structural Similarity Index <ns0:ref type='bibr' target='#b32'>(Wang et al., 2004)</ns0:ref>), but, to accelerate the detection process, the analysis was only done in regions where there was a strong match determined by OpenCV's matchTemplate procedure <ns0:ref type='bibr' target='#b7'>(Bradski, 2000)</ns0:ref>.</ns0:p><ns0:p>The results show that this method performed better both in terms of ability to detect and in terms of execution time.</ns0:p></ns0:div>
<ns0:div><ns0:p>A fast and accurate general-purpose algorithm for detecting presence or absence of a species complements the other tools of the ARBIMON system, such as options for creating playlists based on many different parameters including user-created tags (see Table <ns0:ref type='table' target='#tab_6'>5</ns0:ref>). For example, the system currently has 1,749,551 1-minute recordings uploaded by 453 users, and 659 species-specific models have been created and run over 3,780,552 minutes of recordings, of which 723,054 are distinct recordings.</ns0:p><ns0:p>While this research was a proof of concept, we provide the tools and encourage users to increase the size of the training data set, as this should improve the performance of the algorithm. In addition, we will pursue other approaches, such as multi-label learning <ns0:ref type='bibr' target='#b33'>(Xie et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b34'>Zhang et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b9'>Briggs et al., 2012)</ns0:ref>, as a way to filter out sound activity not related to fauna. As a society, it is fundamental that we study the effects of climate change and deforestation on the fauna, and we have to do it with the best possible tools. We are collecting a lot of data, but until recently there was not an intuitive and user-friendly system that allowed scientists to manage and analyze large numbers of recordings. Here we presented a web-based cloud-hosted system that provides a simple way to manage large quantities of recordings together with a general-purpose method to detect species presence in those recordings.</ns0:p><ns0:p>Table <ns0:ref type='table'>6</ns0:ref>. Area Under the ROC Curve (AUC), Accuracy (Ac), negative predictive value (Npv), precision (Pr), sensitivity (Se) and specificity (Sp) of the 21 species and three variants of the algorithm. Best values are shaded and the cases where the ANOVA suggested a significant difference between the algorithm variants at the 95% confidence level are in bold.</ns0:p></ns0:div>
<ns0:div><ns0:head>APPENDIX 1</ns0:head></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Recording locations in Puerto Rico. Map data: Google, Image -Landsat / Copernicus and Data -SIO, NOAA, US Navy, NGA and GEBCO.</ns0:figDesc><ns0:graphic coords='3,141.73,419.31,413.56,135.59' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Recording location in Peru. Map data: Google, US Dept. of State Geographer, Image -Landsat / Copernicus and Data -SIO, NOAA, US Navy, NGA and GEBCO.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The three phases of the algorithm to create the species-specific models. In the Model Training phase Rec i is a recording, V i is the vector generated by the recognition function on Rec i and in the Detection phase V is the vector generated by the recognition function on the incoming recording.</ns0:figDesc><ns0:graphic coords='4,141.73,439.59,413.57,242.80' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Flowchart of the algorithm to generate the template of each species.</ns0:figDesc><ns0:graphic coords='7,141.73,63.78,413.57,408.38' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. (a) A training set with 16 examples of the call of E. coqui. (b) The resulting template from the training set.</ns0:figDesc><ns0:graphic coords='8,141.73,63.78,413.57,203.51' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Flowchart of the algorithm to generate the similarity vector of each recording.</ns0:figDesc><ns0:graphic coords='9,141.73,63.78,413.58,236.48' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Whisker boxes of the 10-fold validations for the three variants of the presented algorithm for: a) Area under the ROC curve and b) Accuracy.</ns0:figDesc><ns0:graphic coords='13,141.73,166.66,413.58,217.93' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Whisker boxes of the 10-fold validations for the three variants of the presented algorithm for: a) Negative predictive value, b) Precision, c) Sensitivity and d) Specificity.</ns0:figDesc><ns0:graphic coords='14,141.73,63.78,413.58,512.84' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. (a) Whisker boxes of the execution times of the three algorithms. (b) Execution times as a function of the size of the template in number of pixels.</ns0:figDesc><ns0:graphic coords='15,141.73,63.78,413.58,279.23' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. Sample of species that the CORR variant presented better accuracy. a) E. cochranae, b) M. leucophrys, c) B. bivittatus, d) C. carmioli, e) M. marginatus, f) M. nudipes, g) E. brittoni, h) E. guttatus and i) L. thoracicus. Species a, b and c are statistically significant.</ns0:figDesc><ns0:graphic coords='21,141.73,89.29,413.58,358.02' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,141.73,356.13,413.57,283.63' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,141.73,342.66,413.55,213.05' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Species, class, location and count of recordings with validated data.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell>Manuscript to be reviewed</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>These statistical features represent the dataset used to train the</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Features</ns0:cell></ns0:row><ns0:row><ns0:cell>1. mean</ns0:cell></ns0:row><ns0:row><ns0:cell>2. median</ns0:cell></ns0:row><ns0:row><ns0:cell>3. minimum</ns0:cell></ns0:row><ns0:row><ns0:cell>4. maximum</ns0:cell></ns0:row><ns0:row><ns0:cell>5. standard deviation</ns0:cell></ns0:row><ns0:row><ns0:cell>6. maximum -minimum</ns0:cell></ns0:row><ns0:row><ns0:cell>7. skewness</ns0:cell></ns0:row><ns0:row><ns0:cell>8. kurtosis</ns0:cell></ns0:row><ns0:row><ns0:cell>9. hyper-skewness</ns0:cell></ns0:row><ns0:row><ns0:cell>10. hyper-kurtosis</ns0:cell></ns0:row><ns0:row><ns0:cell>11. Histogram</ns0:cell></ns0:row><ns0:row><ns0:cell>12. Cumulative frequency histogram</ns0:cell></ns0:row><ns0:row><ns0:cell>Table</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell>Manuscript to be reviewed</ns0:cell></ns0:row></ns0:table><ns0:note>). 10/20 PeerJ Comput. Sci. reviewing PDF | (CS-2016:12:15269:1:1:NEW 14 Mar 2017)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Summary of the measures of the three variants of the algorithm. Best values are in bold.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Summary of the execution times of the three variants of the algorithm. Best values are in bold. PPMCC is the Pearson product-moment correlation coefficient.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Summary of the usage of the ARBIMON2 system and its model creation feature.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Number of users in the system</ns0:cell><ns0:cell>453</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of recordings in the system</ns0:cell><ns0:cell>1,749,551</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of models created by users</ns0:cell><ns0:cell>659</ns0:cell></ns0:row><ns0:row><ns0:cell>Total number of detected recordings</ns0:cell><ns0:cell>3,780,552</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of distinct detected recordings</ns0:cell><ns0:cell>723,054</ns0:cell></ns0:row><ns0:row><ns0:cell>Average times a recording is detected</ns0:cell><ns0:cell>5.22</ns0:cell></ns0:row><ns0:row><ns0:cell>Standard deviation of the number of times a recording is detected</ns0:cell><ns0:cell>7.78</ns0:cell></ns0:row><ns0:row><ns0:cell>Maximum number of times a recordings has been detected</ns0:cell><ns0:cell>58</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head /><ns0:label /><ns0:figDesc>Table6provide a detail presentation of the performance of each variant of the algorithm: The area under the ROC curve, mean accuracy, mean precision, mean sensitivity and mean specificity values for each species, of the 10-fold validations for the three variants of the presented algorithm (SSIM, NORM and CORR). The mean, median and standard deviation values across all species are presented at the bottom of the table.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Species</ns0:cell><ns0:cell>AUC</ns0:cell><ns0:cell>Ac</ns0:cell><ns0:cell>SSIM Npv</ns0:cell><ns0:cell>Pr</ns0:cell><ns0:cell>Se</ns0:cell><ns0:cell>Sp</ns0:cell><ns0:cell>AUC</ns0:cell><ns0:cell>Ac</ns0:cell><ns0:cell>NORM Npv Pr</ns0:cell><ns0:cell>Se</ns0:cell><ns0:cell>Sp</ns0:cell><ns0:cell>AUC</ns0:cell><ns0:cell>Ac</ns0:cell><ns0:cell>CORR Npv Pr</ns0:cell><ns0:cell>Se</ns0:cell><ns0:cell>Sp</ns0:cell></ns0:row><ns0:row><ns0:cell>E. brittoni</ns0:cell><ns0:cell cols='6'>1.00 0.92 0.81 0.77 0.72 0.95</ns0:cell><ns0:cell cols='5'>0.42 0.89 0.83 0.80 0.77 0.92</ns0:cell><ns0:cell cols='5'>1.00 0.98 0.84 0.80 0.77 1.00</ns0:cell></ns0:row><ns0:row><ns0:cell>E. cochranae</ns0:cell><ns0:cell cols='6'>1.00 0.87 0.84 0.94 0.88 0.85</ns0:cell><ns0:cell cols='5'>0.88 0.72 0.70 0.81 0.77 0.68</ns0:cell><ns0:cell cols='5'>1.00 0.98 0.96 1.00 0.97 1.00</ns0:cell></ns0:row><ns0:row><ns0:cell>M. guatemalae</ns0:cell><ns0:cell cols='6'>0.50 0.93 0.81 0.50 0.45 0.97</ns0:cell><ns0:cell cols='5'>1.00 0.97 0.82 0.50 0.45 1.00</ns0:cell><ns0:cell cols='5'>0.50 0.90 0.80 0.47 0.45 0.87</ns0:cell></ns0:row><ns0:row><ns0:cell>E. cooki</ns0:cell><ns0:cell cols='6'>1.00 0.96 0.85 0.77 0.77 0.97</ns0:cell><ns0:cell cols='5'>0.72 0.82 0.78 0.73 0.67 0.87</ns0:cell><ns0:cell cols='5'>0.88 0.89 0.82 0.72 0.73 0.92</ns0:cell></ns0:row><ns0:row><ns0:cell>Unknown Insect</ns0:cell><ns0:cell cols='6'>1.00 0.90 0.79 0.84 0.75 0.82</ns0:cell><ns0:cell cols='5'>1.00 0.92 0.84 0.83 0.82 0.83</ns0:cell><ns0:cell cols='5'>1.00 0.90 0.79 0.84 0.75 0.82</ns0:cell></ns0:row><ns0:row><ns0:cell>E. coqui</ns0:cell><ns0:cell cols='6'>0.88 0.90 0.75 0.96 0.93 0.70</ns0:cell><ns0:cell cols='5'>0.92 0.86 0.75 0.88 0.96 0.47</ns0:cell><ns0:cell cols='5'>1.00 0.88 0.85 0.89 0.98 0.47</ns0:cell></ns0:row><ns0:row><ns0:cell>M. leucophrys</ns0:cell><ns0:cell cols='6'>0.98 0.87 0.88 0.87 0.89 0.87</ns0:cell><ns0:cell cols='5'>0.77 0.76 0.79 0.74 0.81 0.72</ns0:cell><ns0:cell cols='5'>0.98 0.88 0.87 0.89 0.87 0.90</ns0:cell></ns0:row><ns0:row><ns0:cell>E. juanariveroi</ns0:cell><ns0:cell cols='6'>0.20 0.78 0.69 0.60 0.48 0.79</ns0:cell><ns0:cell cols='5'>0.50 0.88 0.70 0.55 0.48 0.83</ns0:cell><ns0:cell cols='5'>0.63 0.81 0.69 0.47 0.45 0.80</ns0:cell></ns0:row><ns0:row><ns0:cell>M. nudipes</ns0:cell><ns0:cell cols='6'>0.90 0.74 0.76 0.75 0.77 0.74</ns0:cell><ns0:cell cols='5'>0.84 0.81 0.84 0.80 0.85 0.79</ns0:cell><ns0:cell cols='5'>0.90 0.85 0.83 0.88 0.82 0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>B. bivittatus</ns0:cell><ns0:cell cols='6'>0.77 0.59 0.65 0.65 0.64 0.65</ns0:cell><ns0:cell cols='5'>0.90 0.74 0.78 0.73 0.80 0.73</ns0:cell><ns0:cell cols='5'>0.95 0.85 0.84 0.88 0.83 0.87</ns0:cell></ns0:row><ns0:row><ns0:cell>C. carmioli</ns0:cell><ns0:cell cols='6'>0.78 0.77 0.75 0.83 0.73 0.83</ns0:cell><ns0:cell cols='5'>0.78 0.73 0.75 0.73 0.76 0.72</ns0:cell><ns0:cell cols='5'>0.83 0.81 0.80 0.86 0.80 0.84</ns0:cell></ns0:row><ns0:row><ns0:cell>L. 
thoracicus</ns0:cell><ns0:cell cols='6'>0.70 0.73 0.71 0.76 0.67 0.79</ns0:cell><ns0:cell cols='5'>0.90 0.76 0.80 0.73 0.80 0.77</ns0:cell><ns0:cell cols='5'>0.97 0.81 0.83 0.82 0.84 0.80</ns0:cell></ns0:row><ns0:row><ns0:cell>F. analis</ns0:cell><ns0:cell cols='6'>0.82 0.81 0.81 0.79 0.82 0.79</ns0:cell><ns0:cell cols='5'>0.68 0.63 0.65 0.63 0.69 0.57</ns0:cell><ns0:cell cols='5'>0.57 0.58 0.59 0.58 0.62 0.55</ns0:cell></ns0:row><ns0:row><ns0:cell>E. guttatus</ns0:cell><ns0:cell cols='6'>0.74 0.69 0.70 0.69 0.70 0.69</ns0:cell><ns0:cell cols='5'>0.72 0.75 0.76 0.77 0.77 0.75</ns0:cell><ns0:cell cols='5'>0.78 0.77 0.77 0.78 0.77 0.77</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>M. hemimelaena 0.75 0.76 0.71 0.77 0.67 0.82</ns0:cell><ns0:cell cols='5'>0.61 0.59 0.59 0.58 0.60 0.57</ns0:cell><ns0:cell cols='5'>0.61 0.63 0.62 0.63 0.65 0.59</ns0:cell></ns0:row><ns0:row><ns0:cell>B. chrysogaster</ns0:cell><ns0:cell cols='6'>0.56 0.68 0.66 0.67 0.62 0.74</ns0:cell><ns0:cell cols='5'>0.69 0.75 0.70 0.72 0.65 0.83</ns0:cell><ns0:cell cols='5'>0.80 0.73 0.69 0.64 0.66 0.78</ns0:cell></ns0:row><ns0:row><ns0:cell>S. grossus</ns0:cell><ns0:cell cols='6'>0.70 0.66 0.66 0.68 0.66 0.67</ns0:cell><ns0:cell cols='5'>0.78 0.74 0.72 0.75 0.70 0.76</ns0:cell><ns0:cell cols='5'>0.81 0.71 0.73 0.74 0.78 0.62</ns0:cell></ns0:row><ns0:row><ns0:cell>P. lophotes</ns0:cell><ns0:cell cols='6'>0.73 0.71 0.68 0.73 0.63 0.78</ns0:cell><ns0:cell cols='5'>0.58 0.58 0.60 0.59 0.62 0.57</ns0:cell><ns0:cell cols='5'>0.65 0.61 0.63 0.62 0.64 0.61</ns0:cell></ns0:row><ns0:row><ns0:cell>H. subflava</ns0:cell><ns0:cell cols='6'>0.74 0.64 0.64 0.64 0.66 0.61</ns0:cell><ns0:cell cols='5'>0.51 0.51 0.51 0.52 0.53 0.49</ns0:cell><ns0:cell cols='5'>0.51 0.51 0.52 0.51 0.56 0.48</ns0:cell></ns0:row><ns0:row><ns0:cell>M. marginatus</ns0:cell><ns0:cell cols='6'>0.58 0.59 0.55 0.60 0.59 0.51</ns0:cell><ns0:cell cols='5'>0.32 0.49 0.43 0.47 0.39 0.47</ns0:cell><ns0:cell cols='5'>0.69 0.61 0.62 0.61 0.66 0.56</ns0:cell></ns0:row><ns0:row><ns0:cell>T. schistaceus</ns0:cell><ns0:cell cols='6'>0.62 0.58 0.58 0.61 0.51 0.67</ns0:cell><ns0:cell cols='5'>0.30 0.50 0.46 0.45 0.49 0.43</ns0:cell><ns0:cell cols='5'>0.28 0.52 0.48 0.49 0.44 0.52</ns0:cell></ns0:row><ns0:row><ns0:cell>Mean Values</ns0:cell><ns0:cell cols='6'>0.76 0.77 0.73 0.73 0.69 0.77</ns0:cell><ns0:cell cols='5'>0.71 0.73 0.71 0.68 0.68 0.70</ns0:cell><ns0:cell cols='5'>0.78 0.77 0.74 0.72 0.72 0.74</ns0:cell></ns0:row><ns0:row><ns0:cell>Median Values</ns0:cell><ns0:cell cols='6'>0.75 0.76 0.71 0.75 0.67 0.79</ns0:cell><ns0:cell cols='5'>0.72 0.75 0.75 0.73 0.70 0.73</ns0:cell><ns0:cell cols='5'>0.81 0.81 0.79 0.74 0.75 0.80</ns0:cell></ns0:row><ns0:row><ns0:cell>Standard Dev.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='1'>Note that for recordings with a sample rate of 44100 when we calculate the STFT with a window of size 512 and a 50% overlap, one step is equivalent to 5.8 milliseconds, therefore, 16 steps is less than 100 milliseconds. Although this procedure may miss the strongest match, the length of the calls are much longer than the step interval; therefore, there is a high probability of detecting the species-specific call.7/20PeerJ Comput. Sci. reviewing PDF | (CS-2016:12:15269:1:1:NEW 14 Mar 2017)</ns0:note>
</ns0:body>
" | "Reviewer 1
Basic reporting
The article is in general clearly written and well-structured. The algorithm statements are commendably clear.
The paragraph between lines 167 and 168 is unclear. I believe that the '5 selected neighbourhoods' are 5 strongly-matching temporal regions. I also believe that 266% is a freely-chosen parameter. Please rewrite the paragraph to be clearer.
Changes were made to clarify this.
The article does not make enough connection with related literature on the topic.
* A very obvious point of comparison is the template-matching species classification introduced by Gabor Fodor and used by Mario Lasseck to perform species classification and get extremely strong results in the annual BirdCLEF contests. Since that method is currently a leading method in the exact topic of this paper, it should be discussed.
While the Lasseck approach has better results, we were also interested in optimizing the execution time, and this is not reported by them. We suspect that their execution time is much greater due to extensive pre-processing (the mel-filter cepstral coefficients are computed, then a median clipping is performed, followed by closing, dilation and median filtering, and after that feature extraction and feature selection are done before sending the data through a decision tree). We have added text in the discussion.
* The two-stage method in this paper (cross-correlation followed by SSIM refinement) is somewhat related to Ross & Allen (2013, Ecological Informatics). This comparison could be made in the discussion.
The major difference between Ross and Allen's method and ours is that, in their first stage, they create a random forest model that is tested and refined before the next stage. In our case, we run the two stages simultaneously without supervision.
Line 292 claims that the method is 'non-species specific' - this is not a good description of the method, since the method relies on templates for the target species and so is species-specific. It would be appropriate to call the method 'generic' or 'general-purpose'.
DONE. We have addressed this concern in:
1. The last line of first paragraph of the Results,
2. The second line of first paragraph of discussion
3. The second line of second paragraph in the Conclusions
Experimental design
Broadly fine. However some issues:
The step size used for moving through each spectrogram (in step 5 of 'Algorithm to create the similarity vector') is potentially problematic. Moving by a jump of 16 steps rather than 1 carries quite a strong risk (in fact, a 94% chance) of missing the strongest-matching alignment. The authors should have tested for the procedure's sensitivity to this parameter choice.
Although a smaller step would ensure a stronger match, we were interested in the trade-off between detection and execution time. This was explored in an informal way, but not in a sufficiently systematic way to report here. The thinking was that although we may miss the strongest match, the nature of the calls (i.e. much longer than the step interval) gave us a good probability of detecting the species. We state this as a footnote on page 7.
It would be better to use AUC (area under the ROC curve) rather than accuracy. This is widely-known and is especially important when the classes are unbalanced, as is the case here.
DONE
The authors claim (line 196) that accuracy is a suitable proxy for AUC in the balanced case: this argument is misleading and must be removed, since the authors are not considering the balanced case.
The statement was removed.
It is a shame that the authors only used around 900 recordings, when they had access to many more. Some of the folds must have as few as 3 examples, for some species. However, since the main outcomes appear to be significant, this is not a fatal flaw.
We agree, increasing the dataset was included as future work.
Validity of the findings
The findings appear to be valid, and significance is appropriately determined. The results would be more reliable if AUC rather than accuracy was used.
DONE
Comments for the author
The title mentions both 'detection' and 'classification'. I understand why.
However the introduction should clarify for the reader what task is being attempted here: it is to develop a binary classifier for the presence/absence of a single species, and then applying it to each of a set of species of interest. At present the article does not make this quite explicit, the reader must infer it.
DONE
I would argue that the title should really refer to a comparison of three detection algorithms not three classification algorithms, since the three template-matching procedures used here are used for detection.
DONE. The title was changed to “Species-specific audio detection: A comparison of three template-based detection algorithms using random forests”
The introduction MUST tell the reader something about the three similarity measures that are being tested here, since that is the main comparison. The full algorithm statement comes later, but the intro needs to prepare the reader for what the paper is about. Why these three? What properties does one expect of them?
DONE, this information has been incorporated into the last paragraph of the introduction.
Typos:
* 'Hanning' -> 'Hann'
* 'lets define' -> 'define'
* 'algorithms have order of' -> 'algorithms have time complexity of'
* 'analize' -> 'analyze'
DONE
Reviewer 2
Basic reporting
The manuscript compares three template-based classification algorithms using random forests. A web-based cloud-hosted system is quite useful to the bioacoustic community. This paper is well organized, but has some grammar errors, which needs to be improved.
Line 17, Done
Line 24, Done
Line 34, Done
Line 63-64 1 to one, Done
Line 69 id? Done
Line 101 optimize.... Done
Experimental design
In this paper, the template is created by all ROIs submitted by the user. The template is highly affected by the training data. Since animal calls are often different in different regions and different times, can you figure out some methods to address this problem? or can you update the template when doing the comparison to make it less sensitive to those factors?
The template is for a species’ specific call type. We included text in the introduction to make this clearer.
In environmental recordings, there are often multiple species in an individual recording, a single-instance single-label classifier might not suitable for the classification task. However, multi-instance multi-label learning or multi-label learning have shown better performance in previous studies, such as 'Acoustic classification of multiple simultaneous bird species: A multi-instance multi-label approach' 'Using multi-label classification for acoustic pattern detection and assisting bird species surveys', 'Detecting Frog Calling Activity Based on Acoustic Event Detection and Multi-label Learning'.
We included this in future work.
Validity of the findings
The novelty of this paper is limited, but the impact is high. The data is robust and the conclusion is well stated. The conclusion and future work section is weak and should be expanded and added, respectively. Done
" | Here is a paper. Please give your review comments after reading it. |
737 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>We developed a web-based cloud-hosted system that allows users to archive, listen to, visualize, and annotate recordings. The system also provides tools to convert these annotations into datasets that can be used to train a computer to detect the presence or absence of a species. The algorithm used by the system was selected after comparing the accuracy and efficiency of three variants of a template-based detection algorithm. The algorithm computes a similarity vector by comparing a template of a species call with time increments across the spectrogram. Statistical features are extracted from this vector and used as input for a Random Forest classifier that predicts presence or absence of the species in the recording. The fastest algorithm variant had the highest average accuracy and specificity; therefore, it was implemented in the ARBIMON web-based system.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Monitoring fauna is an important task for ecologists, natural resource managers, and conservationists.</ns0:p><ns0:p>Historically, most data were collected manually by scientists who went to the field and annotated their observations <ns0:ref type='bibr' target='#b27'>(Terborgh et al., 1990)</ns0:ref>. This generally limited the spatial and temporal extent of the data. Furthermore, given that the data were based on an individual's observations, the information was difficult to verify, reducing its utility for understanding long-term ecological processes <ns0:ref type='bibr' target='#b0'>(Acevedo and Villanueva-Rivera, 2006)</ns0:ref>.</ns0:p><ns0:p>To understand the impacts of climate change and deforestation on the fauna, the scientific community needs long-term, wide-spread and frequent data <ns0:ref type='bibr' target='#b31'>(Walther et al., 2002)</ns0:ref>. Passive acoustic monitoring (PAM) can contribute to this need because it facilitates the collection of large amounts of data from many sites simultaneously, and with virtually no impact on the fauna and environment <ns0:ref type='bibr' target='#b8'>(Brandes, 2008;</ns0:ref><ns0:ref type='bibr' target='#b19'>Lammers et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b28'>Tricas and Boyle, 2009;</ns0:ref><ns0:ref type='bibr' target='#b10'>Celis-Murillo et al., 2012)</ns0:ref>. In general, PAM systems include a microphone or a hydrophone connected to a self-powered system and enough memory to store several weeks or months of recordings, but there are also permanent systems that use solar panels and an Internet connection to upload recordings in real time to a cloud-based analytical platform <ns0:ref type='bibr' target='#b3'>(Aide et al., 2013)</ns0:ref>.</ns0:p><ns0:p>Passive recorders can easily create a very large data set (e.g. 100,000s of recordings) that is overwhelming to manage and analyze. Although researchers often collect recordings twenty-four hours a day for weeks or months <ns0:ref type='bibr' target='#b0'>(Acevedo and Villanueva-Rivera, 2006;</ns0:ref><ns0:ref type='bibr' target='#b8'>Brandes, 2008;</ns0:ref><ns0:ref type='bibr' target='#b19'>Lammers et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b25'>Sueur et al., 2008;</ns0:ref><ns0:ref type='bibr'>Marques et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b6'>Blumstein et al., 2011)</ns0:ref>, in practice, most studies have only analyzed a small percentage of the total number of recordings.</ns0:p><ns0:p>Web-based applications have been developed to facilitate data management of these increasingly large datasets <ns0:ref type='bibr' target='#b3'>(Aide et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b30'>Villanueva-Rivera and Pijanowski, 2012)</ns0:ref>, but the biggest challenge is to develop efficient and accurate algorithms for detecting the presence or absence of a species in many recordings.
Algorithms for species identification have been developed using spectrogram matched filtering <ns0:ref type='bibr' target='#b12'>(Clark et al., 1987;</ns0:ref><ns0:ref type='bibr' target='#b11'>Chabot, 1988)</ns0:ref>, statistical feature extraction <ns0:ref type='bibr' target='#b26'>(Taylor, 1995;</ns0:ref><ns0:ref type='bibr' target='#b15'>Grigg et al., 1996)</ns0:ref>, the k-Nearest neighbor algorithm <ns0:ref type='bibr' target='#b17'>(Hana et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b16'>Gunasekaran and Revathy, 2010)</ns0:ref>, Support Vector Machines <ns0:ref type='bibr' target='#b13'>(Fagerlund, 2007;</ns0:ref><ns0:ref type='bibr' target='#b1'>Acevedo et al., 2009)</ns0:ref>, tree-based classifiers <ns0:ref type='bibr' target='#b2'>(Adams et al., 2010;</ns0:ref><ns0:ref type='bibr' target='#b18'>Henderson and Hildebrand, 2011)</ns0:ref> and template-based detection <ns0:ref type='bibr' target='#b5'>(Anderson et al., 1996;</ns0:ref><ns0:ref type='bibr' target='#b24'>Mellinger and Clark, 2000)</ns0:ref>, but most of these algorithms are built for a specific species and there was no infrastructure provided for the user to create models for other species.</ns0:p><ns0:p>In this study, we developed a method that detects the presence or absence of a species' specific call type in recordings with a response time that allows researchers to create, run, tune and re-run models in real time as well as detect hundreds of thousands of recordings in a reasonable time. The main objective of the study was to compare the performance (e.g. efficiency and accuracy) of three variants of a template-based detection algorithm and incorporate the best into the ARBIMON II bioacoustics platform. The first variant is the Structural Similarity Index described in <ns0:ref type='bibr' target='#b32'>Wang et al. (2004)</ns0:ref>, a widely used method to find how similar two images are (in our case the template with the tested recording). The second method filters the recordings with the dynamic thresholding method described in <ns0:ref type='bibr' target='#b32'>Wang et al. (2004)</ns0:ref> and then uses the Frobenius norm to find similarities with the template. The final method uses the Structural Similarity Index, but it is only applied to regions with high match probability determined by OpenCV's matchTemplate procedure <ns0:ref type='bibr' target='#b7'>(Bradski, 2000)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>MATERIALS AND METHODS</ns0:head></ns0:div>
<ns0:div><ns0:head>Passive acoustic data acquisition</ns0:head><ns0:p>We gathered recordings from five locations, four in Puerto Rico and one in Peru. Some of the recordings were acquired using the Automated Remote Biodiversity Monitoring Network (ARBIMON) data acquisition system described in <ns0:ref type='bibr' target='#b3'>Aide et al. (2013)</ns0:ref>, while others were acquired using the newest version of ARBIMON permanent recording station, which uses an Android cell phone and transmits the recorded data through a cellular network. All recordings have a sampling rate of 44.1kHz, a sampling depth of 16-bit and an approximate duration of 60 seconds (±.5s)</ns0:p><ns0:p>The locations in Puerto Rico were the Sabana Seca permanent station in Toa Baja, the Casa la Selva station in Carite Mountains (Patillas), El Yunque National Forest in Rio Grande and Mona Island (see Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>). The location in Peru was the Amarakaeri Communal Reserve in the Madre de Dios Region (see Figure <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>). In all the locations, the recorders were programmed to record one minute of audio every 10 minutes. The complete dataset has more than 100,000 1-minute recordings. We randomly chose 362 recordings from Puerto Rico and 547 recordings from Peru for comparing the three algorithm variants. We used the ARBIMON II web application interface to annotate the presence or absence of 21 species in all the recordings. Regions in the recording where a species emits a sound were also marked using the web interface. Each region of interest (ROI) is a rectangle delimited by starting time, ending time, lowest frequency and highest frequency along with a species and sound type. The species included in the analysis are listed in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>, along with the number of total recordings and the number of recordings where the species is present or absent.</ns0:p></ns0:div>
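To make the annotation format concrete, a region of interest can be represented by a small record; the following Python sketch is purely illustrative (the field names are our own, not those of the ARBIMON interface).

from dataclasses import dataclass

@dataclass
class ROI:
    # One user-drawn rectangle on the spectrogram of a recording.
    recording: str     # identifier of the source recording
    t_start: float     # seconds, t_start < t_end
    t_end: float
    f_low: float       # Hz, f_low < f_high
    f_high: float
    species: str
    sound_type: str    # e.g. common or alarm call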
<ns0:div><ns0:head>Algorithm</ns0:head><ns0:p>The algorithm recognition process is divided into three phases: 1) Template Computation, 2) Model Training and 3) Detection (see Figure <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>). In the following sections the Template Computation process will be explained, then the process of using the template to extract features from a recording is presented and, finally, the procedures to use the features to train the model and to detect recordings are discussed.</ns0:p></ns0:div>
<ns0:div><ns0:head>Template Computation</ns0:head><ns0:p>The template refers to the combination of all ROIs in the training data. To create a template, we first start</ns0:p><ns0:p>with the examples of the specific call of interest (i.e. ROIs) that were annotated from a set of recordings for a given species and a specific call type (e.g. common, alarm). Each ROI encompasses an example of the call, and is an instance of time between time t 1 and time t 2 of a given recording and low and high boundary frequencies of f 1 and f 2 , where t 1 < t 2 and f 1 < f 2. In a general sense, we combine these examples to produce a template of a specific song type of a single species.</ns0:p><ns0:p>Specifically, for each recording that has an annotated ROI, a spectrogram matrix (SM) is computed using the Short Time Fourier Transform with a frame size of 1024 samples, 512 samples of overlap and a Hann analysis window, thus the matrices have 512 rows. For a recording with a sampling rate of 44,100 Hz, the matrix bin bandwidth is approximately 43.06 Hz. The SM is arranged so that the row of index 0 represents the lowest frequency and the row with index 511 represents the highest frequency of the spectrum. Properly stated the columns c 1 to c 2 and the rows from r 1 to r 2 of SM were extracted, where:</ns0:p><ns0:formula xml:id='formula_0'>c 1 = ⌊t 1 × 44100⌋, c 2 = ⌊t 2 × 44100⌋, r 1 = ⌊ f 1 /43.06⌋ and r 2 = ⌊ f 2 /43.06⌋.</ns0:formula><ns0:p>The rows and columns that represent the ROI in the recording (between frequencies f 1 and f 2 and between times t 1 and t 2 ) are extracted. The submatrix of SM that contains only the area bounded by the ROI is define as SM ROI and refer in the manuscript as the ROI matrix.</ns0:p><ns0:p>Since the ROI matrices can vary in size, to compute the aggregation from the ROI matrices we have to take into account the difference in the number of rows and columns of the matrices. All recordings have the same sampling rate, 44100Hz. Thus, the rows from different SMs, computed with the same parameters, will represent the same frequencies, i.e. rows with same indexes represent the same frequency.</ns0:p><ns0:p>After the ROI matrix, SM ROI , has been extracted from SM, the rows of SM ROI will also represent specific frequencies. Thus, if we were to perform an element-wise matrix sum between two ROI matrices with potentially different number of rows, we should only sum rows that represent the same frequency.</ns0:p></ns0:div>
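As a rough illustration of this step (not the authors' implementation), the spectrogram matrix and the ROI submatrix can be obtained with SciPy; in this sketch the row and column indices are derived from the frequency and time vectors returned by the STFT rather than from the closed-form expressions above.

import numpy as np
from scipy.signal import stft

def roi_matrix(samples, sr, t1, t2, f1, f2):
    # Short Time Fourier Transform with a 1024-sample frame, 512 samples of
    # overlap and a Hann window, as stated in the text.
    freqs, times, Z = stft(samples, fs=sr, window='hann', nperseg=1024, noverlap=512)
    SM = np.abs(Z)                             # magnitude spectrogram, row 0 = lowest frequency
    r1, r2 = np.searchsorted(freqs, [f1, f2])  # rows bounding the ROI frequency band
    c1, c2 = np.searchsorted(times, [t1, t2])  # columns bounding the ROI time interval
    return SM[r1:r2, c1:c2]                    # the ROI matrix SM_ROI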
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>To take into account the difference in the number of columns of the ROI matrices, we use the Frobenius norm to optimize the alignment of the smaller ROI matrices and perform element-wise sums between rows that represent the same frequency. We present that algorithm in the following section and a flow chart of the process in Figure <ns0:ref type='figure' target='#fig_3'>4</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>Template Computation Algorithm:</ns0:head><ns0:p>1. Generate the set of SM ROI matrices by computing the Short Time Fourier Transform of all the user-generated ROIs.</ns0:p><ns0:p>2. Create matrix SM max , a duplicate of the first created matrix among the matrices with the largest number of columns.</ns0:p><ns0:p>3. Set c max as the number of columns in SM max .</ns0:p><ns0:p>4. Create matrix T temp , with the same dimensions as SM max and all entries equal to 0. This matrix will contain the element-wise addition of all the extracted SM ROI matrices.</ns0:p><ns0:p>5. Create matrix W with the same dimensions of SM max and all entries equal to 0. This matrix will hold the count of the number of SM ROI matrices that participate in the calculation of each element of T temp .</ns0:p><ns0:p>6. For each one of the SM i ROI matrices in SM ROI : (a) If SM i has the same number of columns as T temp : i. Align the rows of SM i and T temp so they represent equivalent frequencies and perform an element-wise addition of the matrices and put the result in T temp .</ns0:p><ns0:p>ii. Add one to all the elements of the W matrix where the previous addition participated.</ns0:p><ns0:p>(b) If the number of columns differs between SM i and T temp , then find the optimal alignment with SM max as follows:</ns0:p><ns0:p>i. Set c i as the number of columns in SM i .</ns0:p><ns0:p>ii. Define (SM max ) I as the set of all submatrices of SM max with the same dimensions as SM i . Note that the cardinality of (SM max ) I is c max − c i .</ns0:p><ns0:p>iii. For each Sub k ∈ (SM max ) I : A. Compute</ns0:p><ns0:formula xml:id='formula_1'>d k = NORM(Sub k − SM i )</ns0:formula><ns0:p>where NORM is the Frobenius norm defined as:</ns0:p><ns0:formula xml:id='formula_2'>NORM(A) = √ ∑ (i, j) |a i, j | 2</ns0:formula><ns0:p>where a i, j are the elements of matrix A.</ns0:p><ns0:p>iv. Define Sub min{d k } as the Sub k matrix with the minimum d k . This is the optimal alignment of SM i with SM max .</ns0:p><ns0:p>v. Align the rows of Sub min{d k } and T temp so they represent equivalent frequencies, perform an element-wise addition of the matrices and put the result in T temp .</ns0:p><ns0:p>vi. Add one to all the elements of the W matrix where the previous addition participated.</ns0:p><ns0:p>7. Define the matrix T template as the element-wise division between the T temp matrix and the W matrix.</ns0:p><ns0:p>The resulting T template matrix summarizes the information available in the ROI matrices submitted by the user and it will be used to extract information from the audio recordings that are to be analyzed. In this article each species' T template was created using five ROIs.</ns0:p><ns0:p>In Figure <ns0:ref type='figure'>5a</ns0:ref> a training set for the Eleutherodactylus coqui is presented and in Figure <ns0:ref type='figure'>5b</ns0:ref> the resulting template can be seen. This tool is very useful because the user can see immediately the effect of adding or removing a specific sample from the training set.</ns0:p></ns0:div>
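A compact NumPy sketch of the aggregation (our own illustration of the algorithm above, assuming the ROI matrices were extracted over the same frequency rows): each matrix is slid along the widest one, the offset with the smallest Frobenius norm of the difference is kept, and the aligned matrices are averaged element-wise.

import numpy as np

def build_template(roi_matrices):
    widest = max(roi_matrices, key=lambda m: m.shape[1])
    total = np.zeros_like(widest, dtype=float)    # running element-wise sum (T_temp)
    counts = np.zeros_like(widest, dtype=float)   # contribution counts (W)
    for m in roi_matrices:
        rows, cols = m.shape
        # Optimal alignment: smallest Frobenius norm of the difference with the widest matrix.
        offsets = range(widest.shape[1] - cols + 1)
        best = min(offsets, key=lambda o: np.linalg.norm(widest[:rows, o:o + cols] - m))
        total[:rows, best:best + cols] += m
        counts[:rows, best:best + cols] += 1
    return total / np.maximum(counts, 1)          # element-wise average (T_template)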
<ns0:div><ns0:head>Model Training</ns0:head><ns0:p>The goal of this phase is to train a random forest model. The input to train the random forest are a series of statistical features extracted from vectors V i that are created by computing a recognition function (similarity measure) between the computed T template and submatrices of the spectrogram matrices of a series of recordings.</ns0:p><ns0:p>In the following section we present the details of the algorithm that processes a recording to create the recognition function vector and in Figure <ns0:ref type='figure' target='#fig_5'>6</ns0:ref>, we present a flowchart of the process.</ns0:p><ns0:p>Algorithm to Create the Similarity Vector:</ns0:p><ns0:p>1. Compute matrix SPEC, the submatrix of the spectrogram matrix that contains the frequencies in T template . Note that we are dealing with recordings that have the same sample rate as the recordings used to compute the T template .</ns0:p><ns0:p>2. Define c SPEC , the number of columns of SPEC. (b) Increase i by 1. Note that this is equivalent to progressing step columns in the SPEC matrix. 9. Define the vector V as the vector containing the n similarity measures resulting from the previous</ns0:p><ns0:formula xml:id='formula_3'>steps. That is, V = [meas 1 , meas 2 , meas 3 , • • • , meas n ].</ns0:formula></ns0:div>
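Generically, the similarity vector can be produced by sliding the template across SPEC in jumps of a fixed number of columns (16 columns per jump in this work, according to the footnote) and applying whichever similarity measure is chosen. A hedged sketch, not the authors' code:

import numpy as np

def similarity_vector(SPEC, template, measure, step=16):
    c_template = template.shape[1]
    n = (SPEC.shape[1] - c_template) // step + 1   # number of template positions
    return np.array([measure(template, SPEC[:, i * step:i * step + c_template])
                     for i in range(n)])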
<ns0:div><ns0:head>Recognition Function</ns0:head><ns0:p>We used three variations of a pattern match procedure to define the similarity measure vector V . First, the Structural Similarity Index described in <ns0:ref type='bibr' target='#b32'>Wang et al. (2004)</ns0:ref> and implemented in van der Walt et al. (<ns0:ref type='formula'>2014</ns0:ref>) as compare_ssim with the default window size of seven unless the generated pattern is smaller. It will be referred to in the rest of the manuscript as the SSIM variant. For the SSIM variant we define meas i as:</ns0:p><ns0:formula xml:id='formula_4'>meas i = SSI(T template , SPEC i ) ,</ns0:formula><ns0:p>where SPEC i is the submatrix of SPEC that spans the columns from i × step to i × step + c template and the same number of rows as T template , and</ns0:p><ns0:formula xml:id='formula_5'>V = [meas 1 , meas 2 , meas 3 , • • • , meas n ] with n = (c SPEC − c template )/step + 1.</ns0:formula><ns0:p>Second, the dynamic thresholding method (threshold_adaptive) described in Wang et al. (2004) with a block size of 127 and an arithmetic mean filter is used over both T template and SPEC i before multiplying them and applying the Frobenius norm, normalized by the norm of a matrix with the same dimensions as T template and all elements equal to one. Therefore, meas i for the NORM variant is defined as:</ns0:p><ns0:formula xml:id='formula_6'>meas i = FN(DTM(T template ) . * DTM(SPEC i ))/FN(U) ,</ns0:formula><ns0:p>where again SPEC i is the submatrix of SPEC that spans the columns from i × step to i × step + c template , FN is the Frobenius norm, DTM is the dynamic thresholding method, U is a matrix with same dimensions as T template with all elements equal to one and . * performs an element-wise multiplication of the matrices.</ns0:p><ns0:formula xml:id='formula_7'>Again, V = [meas 1 , meas 2 , meas 3 , • • • , meas n ] with n = (c SPEC − c template )/step + 1.</ns0:formula><ns0:p>Finally, for the CORR variation we first apply OpenCV's matchTemplate procedure <ns0:ref type='bibr' target='#b7'>(Bradski, 2000)</ns0:ref> with the Normalized Correlation Coefficient option to SPEC i , the submatrix of SPEC that spans the columns from i × step to i × step + c template . However, for this variant, SPEC i includes two additional rows above and below, thus it is slightly larger than the T template . With these we can define:</ns0:p><ns0:formula xml:id='formula_8'>meas j,i = CORR(T template , SPEC j,i )</ns0:formula><ns0:p>where SPEC j,i is the submatrix of SPEC i that starts at row j (note that there are 5 such SPEC j,i matrices).</ns0:p><ns0:p>Now, we select 5 points at random from all the points above the 98.5 percentile of meas j,i and apply the Structural Similarity Index to these 5 strongly-matching regions. The size of these regions is eight thirds (8/3) of the length of T template , 4/3 before and 4/3 after the strongly-matched point. Then, define FilterSPEC as the matrix that contains these 5 strongly-matching regions and FilterSPEC i as the submatrix of FilterSPEC that spans the columns from i to i + c template ; then, the similarity measure for this variant is defined as:</ns0:p></ns0:div>
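The two library building blocks named above are available in scikit-image and OpenCV. A simplified sketch of how the SSIM measure and the normalized correlation map could be computed (parameter handling is abbreviated and not identical to the authors' code):

import cv2
import numpy as np
from skimage.metrics import structural_similarity  # compare_ssim in older scikit-image releases

def ssim_measure(template, window):
    # Structural Similarity Index between the template and one spectrogram window.
    return structural_similarity(template, window,
                                 data_range=float(window.max() - window.min()))

def corr_map(template, spec_slice):
    # Normalized Correlation Coefficient map used to locate strongly-matching regions.
    return cv2.matchTemplate(spec_slice.astype(np.float32),
                             template.astype(np.float32), cv2.TM_CCOEFF_NORMED)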
<ns0:div><ns0:p>It is important to note that no matter which variant is used to calculate the similarity measures, the result will always be a vector of measurements V . The idea is that the statistical properties of these computed recognition functions have enough information to distinguish between a recording that has the target species present and a recording that does not have the target species present. However, notice that since c SPEC , the length of SPEC, is much larger than c template , the length of the vector V for the CORR variant is much smaller than that of the other two.</ns0:p></ns0:div>
<ns0:div><ns0:head>Random Forest Model Creation</ns0:head><ns0:p>After calculating V for many recordings we can train a random forest model. First, we need a set of validated recordings with the specific species vocalization present in some recordings and absent in others.</ns0:p><ns0:p>Then for each recording we compute a vector V i as described in the previous section and extract the statistical features presented in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>. The statistical features extracted from vector V form the dataset used to train the random forest model, which will be used to detect recordings for presence or absence of a species call event. These 12 features along with the species presence information are used as input to a random forest classifier with 1000 trees.</ns0:p></ns0:div>
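With scikit-learn the training step might look as follows; the feature list is a simplified subset of Table 2 and the variable names similarity_vectors and labels are placeholders for the per-recording similarity vectors and the expert presence/absence annotations.

import numpy as np
from scipy import stats
from sklearn.ensemble import RandomForestClassifier

def features(V):
    # Statistical summary of one similarity vector (simplified subset of Table 2).
    return np.concatenate([
        [V.mean(), np.median(V), V.min(), V.max(), V.std(), V.max() - V.min(),
         stats.skew(V), stats.kurtosis(V)],
        np.histogram(V, bins=4)[0],
    ])

X = np.array([features(V) for V in similarity_vectors])  # one row of features per recording
y = np.array(labels)                                      # 1 = species present, 0 = absent
model = RandomForestClassifier(n_estimators=1000).fit(X, y)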
<ns0:div><ns0:head>Recording Detection</ns0:head><ns0:p>Now that we have a trained model to detect a recording, we have to compute the statistical features from the similarity vector V of the selected recording. This is performed in the same way as it was described in the previous section. These features are then used as the input dataset to the previously trained random forest classifier and a label indicating presence or absence of the species in the recording is given as output.</ns0:p></ns0:div>
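Detection of a new recording then reuses the same pieces (same hypothetical names as in the sketches above):

V_new = similarity_vector(SPEC_new, template, ssim_measure)
is_present = bool(model.predict([features(V_new)])[0])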
<ns0:div><ns0:head>The Experiment</ns0:head><ns0:p>To decide which of the three variants should be incorporated into the ARBIMON web-based system, we performed the algorithm explained in the previous section with each of the similarity measures. We computed 10-fold validations on each of the variants to obtained measurements of the performance of the algorithm. In each validation 90% of the data is used as training and 10% of the data is used as validation data. Each algorithm variant used the same 10-fold validation partition for each species. The measures calculated were the area under the receiver operating characteristic (ROC) curve (AUC), accuracy or correct detection rate (Ac), negative predictive value (N pv), precision or positive predictive value (Pr), sensitivity, recall or true positive rate (Se) and specificity or true negative rate (Sp). To calculate the AUC, the ROC curve is created by plotting the false positive rate (which can be calculated as 1 -specificity) against the true positive rate (sensitivity), then, the AUC is created by calculating the area under that curve. Notice that the further the AUC is from 0.5 the better. The rest of the measures are defined as follows:</ns0:p><ns0:formula xml:id='formula_9'>Ac = t p + t n t p + t n + f p + f n , N pv = t n t n + f n , Pr = t p t p + f p , Se = t p t p + f n</ns0:formula><ns0:p>and Sp = t n t n + f p with t p the number of true positives (number of times both the expert and the algorithm agree that the species is present), t n the number of true negatives (number of times both the expert and the algorithm Manuscript to be reviewed Computer Science agree that the species is not present), f p the number of false positives (number of times the algorithm states that the species is present while the expert states is absent) and f n the number of false negatives (number of times the algorithm states that the species is not present while the expert states it is present).</ns0:p><ns0:p>Note that accuracy is a weighted average of the sensitivity and the specificity.</ns0:p><ns0:p>Although we present and discuss all measures, we gave accuracy and the AUC more importance because they include information on the true positive and true negative rates. Specifically, AUC is important when the number of positives is different than the number of negatives as is the case with some of the species.</ns0:p><ns0:p>The experiment was performed in a computer with an Intel i7 4790K 4 cores processor at 4.00 GHz with 32GB of RAM and running Ubuntu Linux. The execution time needed to detect each recording was registered and the mean and standard deviation of the execution times were calculated for each variant of the algorithm. We also computed the quantity of pixels on all the T template matrices and correlated with the execution time of each of the variants.</ns0:p><ns0:p>A global one-way analysis of variance (ANOVA) was performed on the five calculated measures across all of the 10-fold validations to identify if there was a significant difference between the variants of the algorithm. Then a post-hoc Tukey HSD comparison test was performed to identify which one of the variants was significantly different at the 95% confidence level. Additionally, an ANOVA was performed locally between the 10-fold validation of each species and on the mean execution time for each species across the algorithm variants to identify if there was any significant execution time difference at the 95% confidence level. 
Similarly, a post-hoc Tukey HSD comparison test was performed on the execution times.</ns0:p></ns0:div>
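The evaluation measures defined above can be computed directly from the confusion counts, and scikit-learn's roc_auc_score can supply the AUC from predicted class probabilities. An illustrative sketch (y_true and X_test are placeholders for one validation fold):

from sklearn.metrics import roc_auc_score

def detection_measures(tp, tn, fp, fn):
    return {
        'accuracy':    (tp + tn) / (tp + tn + fp + fn),
        'npv':         tn / (tn + fn),
        'precision':   tp / (tp + fp),
        'sensitivity': tp / (tp + fn),
        'specificity': tn / (tn + fp),
    }

auc = roc_auc_score(y_true, model.predict_proba(X_test)[:, 1])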
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>The six measurements (area under the ROC curve -AUC, accuracy, negative predictive value, precision, sensitivity and specificity) computed to compared the model across the three variants varied greatly among the 21 species. The lowest scores were among bird species while most of the highest scores came from amphibian species. Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref> presents a summary of the results of the measurements comparing the three variants of the algorithm (for a detail presentation see Appendix 1). The NORM variant did not have the highest value for any of the measures summarized in Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>, while the CORR variant had a greater number of species with 80% or greater for all the measures and an overall median accuracy of 81%. We considered these two facts fundamental for a general-purpose species detection system.</ns0:p><ns0:p>The local species ANOVA suggested that there are significant accuracy differences at the 95% significance level for 6 of the 21 species studied as well as 4 in terms of precision and 3 in terms of specificity (see supplemental materials). The algorithm variant CORR had a higher mean and median AUC at 78% and 81% respectively, but the SSIM variant seems to be more stable with a standard deviation of 20%. In terms of accuracy, both the SSIM and CORR have higher mean accuracy than the NORM variant. Nevertheless, variant CORR had the highest median accuracy of 81%, which is slightly higher than the median accuracy of the SSIM variant at 76%. In addition, variant CORR had more species with an accuracy of 80% or greater.</ns0:p><ns0:p>In terms of median precision, the three variants had similar values, although in terms of mean precision variants SSIM and CORR have greater values than the NORM variant. Moreover, the median and mean precision of the SSIM variant were only 1% higher than the median and mean precision of the CORR variant. In terms of sensitivity, variants SSIM and CORR had greater values than the NORM variant. It is only in terms of specificity that the CORR variant has greater values than all other variants. Figures <ns0:ref type='figure' target='#fig_9'>7 and 8</ns0:ref> present a summary of these results with whisker graphs.</ns0:p><ns0:p>In terms of execution times, an ANOVA analysis on the mean execution times suggests a difference between the variants (F = 9.9341e + 30, d f = 3, p < 2.2e − 16). The CORR variant had the lowest mean execution time at 0.255s followed closely by the NORM variant with 0.271s, while the SSIM variant had the slowest mean execution time of 2.269s (Figure <ns0:ref type='figure' target='#fig_10'>9</ns0:ref>). The Tukey HSD test suggests that there was no statistical significant difference between the mean execution times of the NORM and CORR variants (p = 0.999). However, there was a statistical significant difference at the 95% confidence level between the mean execution times of all other pairs of variants, specifically variants SSIM and CORR (p < 2.2e − 16). Moreover, the mean execution time of the SSIM variant increased as the number of pixels in the T template matrix increases (Figure <ns0:ref type='figure' target='#fig_10'>9b</ns0:ref>). 
There was no statistically significant relationship between the T template pixel size and the execution time for the other two variants ( In summary, variants SSIM and CORR outperform the NORM variant in most of the statistical measures computed having statistically significant high accuracy for three species each. In terms of execution time, the CORR variant was faster than the SSIM variant (Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>), and the mean execution time of CORR variant did not increase with increasing T template size (Table <ns0:ref type='table' target='#tab_2'>4</ns0:ref>).</ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>The algorithm used by the ARBIMON system was selected by comparing three variants of a template-based method for the detection of presence or absence of a species vocalization in recordings. The most important features for selecting the algorithm were that it works well for many types of species calls and that it can process hundreds of thousands of recordings in a reasonable amount of time. The CORR algorithm was selected because of its speed and its comparable performance in terms of detection (here c SPEC and c template are the number of columns in SPEC and T template respectively and r template is the number of rows in T template ). The only explanation we can give is that the SSIM function uses a uniformly distributed filter (uniform_filter) that has a limit on the size of the memory buffer (4000 64-bit doubles divided by the number of elements in the dimension being processed). Therefore, as the size of T template increases, the number of calls to allocate the buffer, free it and allocate it again can become a burden since it has a smaller locality of reference even when the machine has enough memory and cache to handle the process. Further investigation is required to confirm this.</ns0:p><ns0:p>An interesting comparison is the method described in the work by <ns0:ref type='bibr' target='#b14'>(Fodor, 2013)</ns0:ref> and adapted and tested by <ns0:ref type='bibr' target='#b20'>(Lasseck, 2013)</ns0:ref>. This method was designed for the Neural Information Processing Scaled for Bioacoustics (NIPS4B) competition and although the results are very good they do not report on time of execution. As we have mentioned, it is very important to us to have a method that provides good response times, and the execution time of Lasseck's method seems to be greater than ours given the extensive pre-processing that method performs.</ns0:p></ns0:div>
<ns0:div><ns0:head>CONCLUSIONS AND FUTURE WORK</ns0:head><ns0:p>Now that passive autonomous acoustic recorders are readily available, the amount of data is growing exponentially. For example, one permanent station recording one minute out of every 10 minutes every day of the year generates 52,560 one-minute recordings. If this is multiplied by the need to monitor thousands of locations across the planet, one can understand the magnitude of the task at hand.</ns0:p></ns0:div>
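For reference, the arithmetic behind this figure is 6 one-minute recordings per hour × 24 hours × 365 days = 52,560 recordings per station per year.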
<ns0:div><ns0:p>We have shown how the algorithm used in the ARBIMON II web-based cloud-hosted system was selected. We compared the performance in terms of the ability to detect and the efficiency in terms of execution time of three variants of a template-based detection algorithm. The result was a method that uses the power of a widely used method to determine the similarity between two images (the Structural Similarity Index <ns0:ref type='bibr' target='#b32'>(Wang et al., 2004)</ns0:ref>), but to accelerate the detection process, the analysis was only done in regions where there was a strong match determined by OpenCV's matchTemplate procedure <ns0:ref type='bibr' target='#b7'>(Bradski, 2000)</ns0:ref>.</ns0:p><ns0:p>The results show that this method performed better both in terms of ability to detect as well as in terms of execution time.</ns0:p></ns0:div>
<ns0:div><ns0:p>A fast and accurate general-purpose algorithm for detecting presence or absence of a species complements the other tools of the ARBIMON system, such as options for creating playlists based on many different parameters including user-created tags (see Table <ns0:ref type='table' target='#tab_5'>5</ns0:ref>). For example, the system currently has 1,749,551 1-minute recordings uploaded by 453 users and 659 species-specific models have been created and run over 3,780,552 minutes of recordings, of which 723,054 are distinct recordings.</ns0:p><ns0:p>While this research was a proof of concept, we provide the tools and encourage users to increase the size of the training data set as this should improve the performance of the algorithm. In addition, we will pursue other approaches, such as multi-label learning <ns0:ref type='bibr' target='#b33'>(Xie et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b34'>Zhang et al., 2016;</ns0:ref><ns0:ref type='bibr' target='#b9'>Briggs et al., 2012)</ns0:ref>. As a society, it is fundamental that we study the effects of climate change and deforestation on the fauna and we have to do it with the best possible tools. We are collecting a lot of data, but until recently there was not an intuitive and user-friendly system that allowed scientists to manage and analyze large numbers of recordings. Here we presented a web-based cloud-hosted system that provides a simple way to manage large quantities of recordings together with a general-purpose method to detect species presence in recordings.</ns0:p><ns0:p>Standard deviation row of Table 6: 0.20 0.12 0.09 0.12 0.13 0.12 0.21 0.14 0.12 0.13 0.15 0.16 0.21 0.14 0.13 0.16 0.16 0.17. Table <ns0:ref type='table'>6</ns0:ref>. Area Under the ROC Curve (AUC), Accuracy (Ac), negative predictive value (Npv), precision (Pr), sensitivity (Se) and specificity (Sp) of the 21 species and three variants of the algorithm. Best values are shaded and the cases where the ANOVA suggested a significant difference between the algorithm variants at the 95% confidence level are in bold.</ns0:p></ns0:div>
<ns0:div><ns0:head>APPENDIX 1</ns0:head></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Recording locations in Puerto Rico. Map data: Google, Image -Landsat / Copernicus and Data -SIO, NOAA, US Navy, NGA and GEBCO.</ns0:figDesc><ns0:graphic coords='3,141.73,419.31,413.56,135.59' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Recording location in Peru. Map data: Google, US Dept. of State Geographer, Image -Landsat / Copernicus and Data -SIO, NOAA, US Navy, NGA and GEBCO.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. The three phases of the algorithm to create the species-specific models. In the Model Training phase Rec i is a recording, V i is the vector generated by the recognition function on Rec i and in the Detection phase V is the vector generated by the recognition function on the incoming recording.</ns0:figDesc><ns0:graphic coords='4,141.73,439.59,413.57,242.80' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Flowchart of the algorithm to generate the template of each species.</ns0:figDesc><ns0:graphic coords='7,141.73,63.78,413.57,408.38' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>Figure 5. (a) A training set with 16 examples of the call of E. coqui. (b) The resulting template from the training set.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Flowchart of the algorithm to generate the similarity vector of each recording.</ns0:figDesc><ns0:graphic coords='9,141.73,63.78,413.58,236.48' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head /><ns0:label /><ns0:figDesc>meas i = SSI(T template , FilterSPEC i ) and the resulting vector V = [meas 1 , meas 2 , meas 3 , • • • , meas n ] but this time with n = 5 × 8/3 × c template + 1 . 8/20 PeerJ Comput. Sci. reviewing PDF | (CS-2016:12:15269:2:0:NEW 3 Apr 2017)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. Whisker boxes of the 10-fold validations for the three variants of the presented algorithm for: a) Area under the ROC curve and b) Accuracy.</ns0:figDesc><ns0:graphic coords='13,141.73,166.66,413.58,217.93' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Whisker boxes of the 10-fold validations for the three variants of the presented algorithm for: a) Negative predictive value, b) Precision, c) Sensitivity and d) Specificity.</ns0:figDesc><ns0:graphic coords='14,141.73,63.78,413.58,512.84' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. (a) Whisker boxes of the execution times of the three algorithms. (b) Execution times as a function of the size of the template in number of pixels.</ns0:figDesc><ns0:graphic coords='15,141.73,63.78,413.58,279.23' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure 12. Sample of species that the CORR variant presented better accuracy. a) E. cochranae, b) M. leucophrys, c) B. bivittatus, d) C. carmioli, e) M. marginatus, f) M. nudipes, g) E. brittoni, h) E. guttatus and i) L. thoracicus. Species a, b and c are statistically significant.</ns0:figDesc><ns0:graphic coords='21,141.73,89.29,413.58,358.02' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='8,141.73,63.78,413.57,203.51' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='19,141.73,359.12,413.57,283.63' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='20,141.73,342.66,413.55,213.05' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Species, class, location and count of recordings with validated data.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell>Manuscript to be reviewed</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>These statistical features represent the dataset used to train the</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Features</ns0:cell></ns0:row><ns0:row><ns0:cell>1. mean</ns0:cell></ns0:row><ns0:row><ns0:cell>2. median</ns0:cell></ns0:row><ns0:row><ns0:cell>3. minimum</ns0:cell></ns0:row><ns0:row><ns0:cell>4. maximum</ns0:cell></ns0:row><ns0:row><ns0:cell>5. standard deviation</ns0:cell></ns0:row><ns0:row><ns0:cell>6. maximum -minimum</ns0:cell></ns0:row><ns0:row><ns0:cell>7. skewness</ns0:cell></ns0:row><ns0:row><ns0:cell>8. kurtosis</ns0:cell></ns0:row><ns0:row><ns0:cell>9. hyper-skewness</ns0:cell></ns0:row><ns0:row><ns0:cell>10. hyper-kurtosis</ns0:cell></ns0:row><ns0:row><ns0:cell>11. Histogram</ns0:cell></ns0:row><ns0:row><ns0:cell>12. Cumulative frequency histogram</ns0:cell></ns0:row><ns0:row><ns0:cell>Table</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell>Manuscript to be reviewed</ns0:cell></ns0:row></ns0:table><ns0:note>). 10/20 PeerJ Comput. Sci. reviewing PDF | (CS-2016:12:15269:2:0:NEW 3 Apr 2017)</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Summary of the measures of the three variants of the algorithm. Best values are in bold.</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_4'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>efficiency with the SSIM variant. It achieved AUC and accuracy of 0.80 or better in 12 of the 21 species and sensitivity of 0.80 or more in 11 of the 21 species and the average execution time of 0.26s per minute per recording means that it can process around 14,000 minutes of recordings per hour.The difference in execution time between the SSIM variant and the other two was due to a memory Summary of the execution times of the three variants of the algorithm. Best values are in bold. PPMCC is the Pearson product-moment correlation coefficient.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>11/20</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_5'><ns0:head>Table 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Summary of the usage of the ARBIMON2 system and its model creation feature.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Number of users in the system</ns0:cell><ns0:cell>453</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of recordings in the system</ns0:cell><ns0:cell>1,749,551</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of models created by users</ns0:cell><ns0:cell>659</ns0:cell></ns0:row><ns0:row><ns0:cell>Total number of detected recordings</ns0:cell><ns0:cell>3,780,552</ns0:cell></ns0:row><ns0:row><ns0:cell>Number of distinct detected recordings</ns0:cell><ns0:cell>723,054</ns0:cell></ns0:row><ns0:row><ns0:cell>Average times a recording is detected</ns0:cell><ns0:cell>5.22</ns0:cell></ns0:row><ns0:row><ns0:cell>Standard deviation of the number of times a recording is detected</ns0:cell><ns0:cell>7.78</ns0:cell></ns0:row><ns0:row><ns0:cell>Maximum number of times a recordings has been detected</ns0:cell><ns0:cell>58</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head /><ns0:label /><ns0:figDesc>Table6provide a detail presentation of the performance of each variant of the algorithm: The area under the ROC curve, mean accuracy, mean precision, mean sensitivity and mean specificity values for each species, of the 10-fold validations for the three variants of the presented algorithm (SSIM, NORM and CORR). The mean, median and standard deviation values across all species are presented at the bottom of the table.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Species</ns0:cell><ns0:cell>AUC</ns0:cell><ns0:cell>Ac</ns0:cell><ns0:cell>SSIM Npv</ns0:cell><ns0:cell>Pr</ns0:cell><ns0:cell>Se</ns0:cell><ns0:cell>Sp</ns0:cell><ns0:cell>AUC</ns0:cell><ns0:cell>Ac</ns0:cell><ns0:cell>NORM Npv Pr</ns0:cell><ns0:cell>Se</ns0:cell><ns0:cell>Sp</ns0:cell><ns0:cell>AUC</ns0:cell><ns0:cell>Ac</ns0:cell><ns0:cell>CORR Npv Pr</ns0:cell><ns0:cell>Se</ns0:cell><ns0:cell>Sp</ns0:cell></ns0:row><ns0:row><ns0:cell>E. brittoni</ns0:cell><ns0:cell cols='6'>1.00 0.92 0.81 0.77 0.72 0.95</ns0:cell><ns0:cell cols='5'>0.42 0.89 0.83 0.80 0.77 0.92</ns0:cell><ns0:cell cols='5'>1.00 0.98 0.84 0.80 0.77 1.00</ns0:cell></ns0:row><ns0:row><ns0:cell>E. cochranae</ns0:cell><ns0:cell cols='6'>1.00 0.87 0.84 0.94 0.88 0.85</ns0:cell><ns0:cell cols='5'>0.88 0.72 0.70 0.81 0.77 0.68</ns0:cell><ns0:cell cols='5'>1.00 0.98 0.96 1.00 0.97 1.00</ns0:cell></ns0:row><ns0:row><ns0:cell>M. guatemalae</ns0:cell><ns0:cell cols='6'>0.50 0.93 0.81 0.50 0.45 0.97</ns0:cell><ns0:cell cols='5'>1.00 0.97 0.82 0.50 0.45 1.00</ns0:cell><ns0:cell cols='5'>0.50 0.90 0.80 0.47 0.45 0.87</ns0:cell></ns0:row><ns0:row><ns0:cell>E. cooki</ns0:cell><ns0:cell cols='6'>1.00 0.96 0.85 0.77 0.77 0.97</ns0:cell><ns0:cell cols='5'>0.72 0.82 0.78 0.73 0.67 0.87</ns0:cell><ns0:cell cols='5'>0.88 0.89 0.82 0.72 0.73 0.92</ns0:cell></ns0:row><ns0:row><ns0:cell>Unknown Insect</ns0:cell><ns0:cell cols='6'>1.00 0.90 0.79 0.84 0.75 0.82</ns0:cell><ns0:cell cols='5'>1.00 0.92 0.84 0.83 0.82 0.83</ns0:cell><ns0:cell cols='5'>1.00 0.90 0.79 0.84 0.75 0.82</ns0:cell></ns0:row><ns0:row><ns0:cell>E. coqui</ns0:cell><ns0:cell cols='6'>0.88 0.90 0.75 0.96 0.93 0.70</ns0:cell><ns0:cell cols='5'>0.92 0.86 0.75 0.88 0.96 0.47</ns0:cell><ns0:cell cols='5'>1.00 0.88 0.85 0.89 0.98 0.47</ns0:cell></ns0:row><ns0:row><ns0:cell>M. leucophrys</ns0:cell><ns0:cell cols='6'>0.98 0.87 0.88 0.87 0.89 0.87</ns0:cell><ns0:cell cols='5'>0.77 0.76 0.79 0.74 0.81 0.72</ns0:cell><ns0:cell cols='5'>0.98 0.88 0.87 0.89 0.87 0.90</ns0:cell></ns0:row><ns0:row><ns0:cell>E. juanariveroi</ns0:cell><ns0:cell cols='6'>0.20 0.78 0.69 0.60 0.48 0.79</ns0:cell><ns0:cell cols='5'>0.50 0.88 0.70 0.55 0.48 0.83</ns0:cell><ns0:cell cols='5'>0.63 0.81 0.69 0.47 0.45 0.80</ns0:cell></ns0:row><ns0:row><ns0:cell>M. nudipes</ns0:cell><ns0:cell cols='6'>0.90 0.74 0.76 0.75 0.77 0.74</ns0:cell><ns0:cell cols='5'>0.84 0.81 0.84 0.80 0.85 0.79</ns0:cell><ns0:cell cols='5'>0.90 0.85 0.83 0.88 0.82 0.86</ns0:cell></ns0:row><ns0:row><ns0:cell>B. bivittatus</ns0:cell><ns0:cell cols='6'>0.77 0.59 0.65 0.65 0.64 0.65</ns0:cell><ns0:cell cols='5'>0.90 0.74 0.78 0.73 0.80 0.73</ns0:cell><ns0:cell cols='5'>0.95 0.85 0.84 0.88 0.83 0.87</ns0:cell></ns0:row><ns0:row><ns0:cell>C. carmioli</ns0:cell><ns0:cell cols='6'>0.78 0.77 0.75 0.83 0.73 0.83</ns0:cell><ns0:cell cols='5'>0.78 0.73 0.75 0.73 0.76 0.72</ns0:cell><ns0:cell cols='5'>0.83 0.81 0.80 0.86 0.80 0.84</ns0:cell></ns0:row><ns0:row><ns0:cell>L. 
thoracicus</ns0:cell><ns0:cell cols='6'>0.70 0.73 0.71 0.76 0.67 0.79</ns0:cell><ns0:cell cols='5'>0.90 0.76 0.80 0.73 0.80 0.77</ns0:cell><ns0:cell cols='5'>0.97 0.81 0.83 0.82 0.84 0.80</ns0:cell></ns0:row><ns0:row><ns0:cell>F. analis</ns0:cell><ns0:cell cols='6'>0.82 0.81 0.81 0.79 0.82 0.79</ns0:cell><ns0:cell cols='5'>0.68 0.63 0.65 0.63 0.69 0.57</ns0:cell><ns0:cell cols='5'>0.57 0.58 0.59 0.58 0.62 0.55</ns0:cell></ns0:row><ns0:row><ns0:cell>E. guttatus</ns0:cell><ns0:cell cols='6'>0.74 0.69 0.70 0.69 0.70 0.69</ns0:cell><ns0:cell cols='5'>0.72 0.75 0.76 0.77 0.77 0.75</ns0:cell><ns0:cell cols='5'>0.78 0.77 0.77 0.78 0.77 0.77</ns0:cell></ns0:row><ns0:row><ns0:cell cols='7'>M. hemimelaena 0.75 0.76 0.71 0.77 0.67 0.82</ns0:cell><ns0:cell cols='5'>0.61 0.59 0.59 0.58 0.60 0.57</ns0:cell><ns0:cell cols='5'>0.61 0.63 0.62 0.63 0.65 0.59</ns0:cell></ns0:row><ns0:row><ns0:cell>B. chrysogaster</ns0:cell><ns0:cell cols='6'>0.56 0.68 0.66 0.67 0.62 0.74</ns0:cell><ns0:cell cols='5'>0.69 0.75 0.70 0.72 0.65 0.83</ns0:cell><ns0:cell cols='5'>0.80 0.73 0.69 0.64 0.66 0.78</ns0:cell></ns0:row><ns0:row><ns0:cell>S. grossus</ns0:cell><ns0:cell cols='6'>0.70 0.66 0.66 0.68 0.66 0.67</ns0:cell><ns0:cell cols='5'>0.78 0.74 0.72 0.75 0.70 0.76</ns0:cell><ns0:cell cols='5'>0.81 0.71 0.73 0.74 0.78 0.62</ns0:cell></ns0:row><ns0:row><ns0:cell>P. lophotes</ns0:cell><ns0:cell cols='6'>0.73 0.71 0.68 0.73 0.63 0.78</ns0:cell><ns0:cell cols='5'>0.58 0.58 0.60 0.59 0.62 0.57</ns0:cell><ns0:cell cols='5'>0.65 0.61 0.63 0.62 0.64 0.61</ns0:cell></ns0:row><ns0:row><ns0:cell>H. subflava</ns0:cell><ns0:cell cols='6'>0.74 0.64 0.64 0.64 0.66 0.61</ns0:cell><ns0:cell cols='5'>0.51 0.51 0.51 0.52 0.53 0.49</ns0:cell><ns0:cell cols='5'>0.51 0.51 0.52 0.51 0.56 0.48</ns0:cell></ns0:row><ns0:row><ns0:cell>M. marginatus</ns0:cell><ns0:cell cols='6'>0.58 0.59 0.55 0.60 0.59 0.51</ns0:cell><ns0:cell cols='5'>0.32 0.49 0.43 0.47 0.39 0.47</ns0:cell><ns0:cell cols='5'>0.69 0.61 0.62 0.61 0.66 0.56</ns0:cell></ns0:row><ns0:row><ns0:cell>T. schistaceus</ns0:cell><ns0:cell cols='6'>0.62 0.58 0.58 0.61 0.51 0.67</ns0:cell><ns0:cell cols='5'>0.30 0.50 0.46 0.45 0.49 0.43</ns0:cell><ns0:cell cols='5'>0.28 0.52 0.48 0.49 0.44 0.52</ns0:cell></ns0:row><ns0:row><ns0:cell>Mean Values</ns0:cell><ns0:cell cols='6'>0.76 0.77 0.73 0.73 0.69 0.77</ns0:cell><ns0:cell cols='5'>0.71 0.73 0.71 0.68 0.68 0.70</ns0:cell><ns0:cell cols='5'>0.78 0.77 0.74 0.72 0.72 0.74</ns0:cell></ns0:row><ns0:row><ns0:cell>Median Values</ns0:cell><ns0:cell cols='6'>0.75 0.76 0.71 0.75 0.67 0.79</ns0:cell><ns0:cell cols='5'>0.72 0.75 0.75 0.73 0.70 0.73</ns0:cell><ns0:cell cols='5'>0.81 0.81 0.79 0.74 0.75 0.80</ns0:cell></ns0:row><ns0:row><ns0:cell>Standard Dev.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='1'>Note that for recordings with a sample rate of 44100 when we calculate the STFT with a window of size 512 and a 50% overlap, one step is equivalent to 5.8 milliseconds, therefore, 16 steps is less than 100 milliseconds. Although this procedure may miss the strongest match, the length of the calls are much longer than the step interval; therefore, there is a high probability of detecting the species-specific call.7/20PeerJ Comput. Sci. reviewing PDF | (CS-2016:12:15269:2:0:NEW 3 Apr 2017)</ns0:note>
</ns0:body>
" | "Editor's Comments
Please address the comment of Reviewer 1 concerning multi-label method.
Reviewer 1 (Anonymous)
Basic reporting
The mention of multi-label methods added to the conclusion is a bit odd. I recognise that reviewer 2 requested it, but the authors present it as 'a way to filter out sound activity not related to fauna', which is not really a good description of what multi-label provides.
The description of the method was deleted.
" | Here is a paper. Please give your review comments after reading it. |
738 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Shotgun metagenomics of microbial communities reveals information about strains of relevance for applications in medicine, biotechnology and ecology. Recovering their genomes is a crucial but very challenging step due to the complexity of the underlying biological system and technical factors. Microbial communities are heterogeneous, often comprising hundreds of genomes from different species or strains, present at varying abundances and with different degrees of similarity to each other and to reference data. We present a versatile probabilistic model for genome recovery and analysis, which aggregates three types of information that are commonly used for genome recovery from metagenomes. As potential applications we showcase metagenome contig classification, genome sample enrichment and genome bin comparisons. The open source implementation MGLEX is available via the Python Package Index and on GitHub and can be embedded into metagenome analysis workflows and programs.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Shotgun sequencing of DNA extracted from a microbial community recovers genomic data from different community members while bypassing the need to obtain pure isolate cultures. It thus enables novel insights into ecosystems, especially for those genomes which are inaccessible by cultivation techniques and isolate sequencing. However, current metagenome assemblies are often highly fragmented, including unassembled reads, and require further processing to separate the data according to the underlying genomes. Assembled sequences, called contigs, that originate from the same genome are placed together in this process, which is known as metagenome binning <ns0:ref type='bibr' target='#b31'>(Tyson et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b5'>Dröge & McHardy, 2012)</ns0:ref> and for which many programs have been developed. Some are trained on reference sequences, using contig k-mer frequencies or sequence similarities as sources of information <ns0:ref type='bibr' target='#b21'>(McHardy et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b6'>Dröge, Gregor & McHardy, 2014;</ns0:ref><ns0:ref type='bibr' target='#b35'>Wood & Salzberg, 2014;</ns0:ref><ns0:ref type='bibr' target='#b8'>Gregor et al., 2016)</ns0:ref>, and can be adapted to specific ecosystems. Others cluster the contigs into genome bins, using contig k-mer frequencies and read coverage <ns0:ref type='bibr' target='#b4'>(Chatterji et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b15'>Kislyuk et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b36'>Wu et al., 2014;</ns0:ref><ns0:ref type='bibr'>Nielsen et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b11'>Imelfort et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b1'>Alneberg et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b12'>Kang et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b18'>Lu et al., 2016)</ns0:ref>.</ns0:p><ns0:p>Recently, multiple biological or technical samples of the same environment are often sequenced to produce distinct genome copy numbers across samples, sometimes using different sequencing protocols and technologies, such as Illumina and PacBio sequencing <ns0:ref type='bibr' target='#b9'>(Hagen et al., 2016)</ns0:ref>. Genome copies are reflected by corresponding read coverage variation in the assemblies, which makes it possible to resolve samples with many genomes. The combination of experimental techniques helps to overcome platform-specific shortcomings such as short reads or high error rates in the data analysis. However, reconstructing high-quality bins of individual strains remains difficult without very high numbers of replicates.
Often, genome reconstruction may improve by manual intervention and iterative analysis (Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>) or by additional sequencing experiments.</ns0:p><ns0:p>Genome bins can be constructed by considering genome-wide sequence properties. Currently, the following types of information are commonly used:</ns0:p><ns0:p>• Contig read coverage: sequencing read coverage of assembled contigs, which reflects the genome copy number (organismal abundance) in the community. Abundances can vary across biological or technical replicates, and co-vary for contigs from the same genome, supplying more information to resolve individual genomes <ns0:ref type='bibr' target='#b2'>(Baran & Halperin, 2012;</ns0:ref><ns0:ref type='bibr' target='#b0'>Albertsen et al., 2013)</ns0:ref>.</ns0:p><ns0:p>• Nucleotide sequence composition: the frequencies of short nucleotide subsequences of length k, called k-mers. The genomes of different species have a characteristic k-mer spectrum <ns0:ref type='bibr' target='#b13'>(Karlin, Mrazek & Campbell, 1997;</ns0:ref><ns0:ref type='bibr' target='#b21'>McHardy et al., 2007)</ns0:ref>.</ns0:p><ns0:p>• Sequence similarity to reference sequences: a proxy for the phylogenetic relationship to species which have already been sequenced. The similarity is usually inferred by alignment to a reference collection and can be expressed using taxonomy <ns0:ref type='bibr' target='#b21'>(McHardy et al., 2007)</ns0:ref>.</ns0:p><ns0:p>Probabilities represent a convenient and efficient way to represent and combine information that is uncertain by nature. Here, we</ns0:p><ns0:p>• propose a probabilistic aggregate model for binning based on three commonly used information sources, which can easily be extended to include new features.</ns0:p><ns0:p>• outline the features and submodels for each information type. As the feature types listed above derive from distinct processes, we define an independent, suitable probabilistic submodel for each of them.</ns0:p><ns0:p>• showcase several applications related to the binning problem.</ns0:p>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>A model with data-specific structure poses an advantage for genome recovery in metagenomes because it uses data more efficiently for fragmented assemblies with short contigs or a low number of samples for differential coverage binning. Being probabilistic, it generates probabilities instead of hard labels so that a contig can be assigned to several, related genome bins and the uncertainty can easily be assessed.</ns0:p><ns0:p>The models can be applied in different ways, not just classification, which we show in our application examples. Most importantly, there is a rich repertoire of higher-level procedures based on probabilistic models, including Expectation Maximization (EM) and Markov Chain Monte Carlo (MCMC) methods for clustering without or with few prior knowledge of the modeled genomes.</ns0:p><ns0:p>We focus on defining explicit probabilistic models for each feature type and their combination into an aggregate model. In contrast, binning methods often concatenate and transform features <ns0:ref type='bibr' target='#b4'>(Chatterji et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b11'>Imelfort et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b1'>Alneberg et al., 2014)</ns0:ref> before clustering. Specific models for the individual data types can be better tailored to the data generation process and will therefore generally enable a better use of information and a more robust fit of the aggregate model while requiring fewer data. We propose a flexible model with regard to both the included features and the feature extraction methods. There already exist parametric likelihood models in the context of clustering, for a limited set of features. For instance, <ns0:ref type='bibr' target='#b15'>Kislyuk et al. (2009)</ns0:ref> use a model for nucleotide composition and Wu et al.</ns0:p><ns0:p>(2014) integrated distance-based probabilities for 4-mers and absolute contig coverage using a Poisson model. We extend and generalize this work so that the model can be used in different contexts such as classification, clustering, genome enrichment and binning analysis. Importantly, we are not providing an automatic solution to binning but present a flexible framework to target problems associated with binning.</ns0:p><ns0:p>This functionality can be used in custom workflows or programs for the steps illustrated in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. As input, the model incorporates genome abundance, nucleotide composition and additionally sequence similarity (via taxonomic annotation). The latter is common as taxonomic binning output <ns0:ref type='bibr' target='#b6'>(Dröge, Gregor & McHardy, 2014;</ns0:ref><ns0:ref type='bibr' target='#b35'>Wood & Salzberg, 2014;</ns0:ref><ns0:ref type='bibr' target='#b8'>Gregor et al., 2016)</ns0:ref> and for quality assessment but has rarely been systematically used as features in binning <ns0:ref type='bibr' target='#b4'>(Chatterji et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b18'>Lu et al., 2016)</ns0:ref>. We show that taxonomic annotation is valuable information that can improve binning considerably.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODS</ns0:head></ns0:div>
<ns0:div><ns0:head>Classification models</ns0:head><ns0:p>Classification is a common concept in machine learning. Usually, such algorithms use training data for different classes to construct a model which then contains the condensed information about the important properties that distinguish the data of the classes. In probabilistic modeling, we describe these properties as parameters of likelihood functions, often written as θ. After θ has been determined by training, the model can be applied to assign novel data to the modeled classes. In our application, classes are genomes, or bins, and the data are nucleotide sequences like contigs. Thus, contigs can be assigned to genomes bins but we need to provide training sequences for the genomes. Such data can be selected by different means, depending on the experimental and algorithmic context. One can screen metagenomes for genes which are unique to clades, or which can be annotated by phylogenetic approaches, and use the corresponding sequence data for training <ns0:ref type='bibr' target='#b8'>(Gregor et al., 2016)</ns0:ref>. Independent assemblies or reference genomes can also serve as training data for genome bins <ns0:ref type='bibr' target='#b3'>(Brady & Salzberg, 2009;</ns0:ref><ns0:ref type='bibr' target='#b25'>Patil et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b8'>Gregor et al., 2016)</ns0:ref>.</ns0:p><ns0:p>Another direct application is to learn from existing genome bins, which were derived by any means, and then to (re)assign contigs to these bins. This is useful for short contigs which are often excluded from binning and analysis due to their high variability. Finally, probabilistic models can be embedded into iterative clustering algorithms with random initialization.</ns0:p></ns0:div>
<ns0:div><ns0:head>Aggregate model</ns0:head><ns0:p>Let 1 ≤ i ≤ D be an index referring to D contigs resulting from a shotgun metagenomic experiment. In the following we will present a generative probabilistic aggregate model that consists of components, indexed by 1 ≤ k ≤ M, which are generative probabilistic models in their own right, yielding probabilities • sample abundance feature vectors a i and r i , one entry per sample • a compositional feature vector c i , one entry per compositional feature (e.g. a k-mer)</ns0:p><ns0:p>• a taxonomic feature vector t i , one entry per taxon</ns0:p><ns0:p>We define the individual feature vectors in the corresponding sections. As mentioned before, each of the M features gives rise to a probability P k (contig i | genome) that contig i belongs to a specific genome by means of its component model. Those probabilities are then collected into an aggregate model that transforms those feature specific probabilities P k (i | genome) into an overall probability P(i | genome) that contig i is associated with the genome. In the following, we describe how we construct this model with respect to the individual submodels P k (i | genome), the feature representation of the contigs and how we determine the optimal set of parameters from training sequences.</ns0:p><ns0:p>For the i th contig, we define a joint likelihood for genome bin g (Equation <ns0:ref type='formula'>1</ns0:ref>, the probabilities written as a function of the genome parameters), which is a weighted product over M independent component likelihood functions, or submodels, for the different feature types. For the k th submodel, Θ k is the corresponding parameter vector, F i,k the feature vector of the i th contig and α k defines the contribution of the respective submodel or feature type. β is a free scaling parameter to adjust the smoothness of the aggregate likelihood distribution over the genome bins (bin posterior).</ns0:p><ns0:formula xml:id='formula_0'>L(Θ g | F i ) =         M k=1 L(Θ gk | F ik ) α k         β (1)</ns0:formula><ns0:p>We assume statistical independence of the feature subtypes and multiply likelihood values from the corresponding submodels. This is a simplified but reasonable assumption: e.g., the species abundance in a community can be altered by external factors without impacting the nucleotide composition of the genome or its taxonomic position. Also, there is no direct relation between a genome's k-mer distribution and taxonomic annotation via reference sequences.</ns0:p><ns0:p>All model parameters, Θ g , α and β, are learned from training sequences. We will explain later, how the weight parameters α and β are chosen and begin with a description of the four component likelihood functions, one for each feature type.</ns0:p><ns0:p>In the following, we denote the j th position in a vector x i with x i, j . To simplify notation, we also define the sum or fraction of two vectors of the same dimension as the positional sum or fraction and write the length of vector x as len(x).</ns0:p></ns0:div>
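In log space, the weighted product in Equation 1 becomes a weighted sum of submodel log-likelihoods, which is convenient to evaluate for all contigs and all genome bins at once. The following minimal sketch illustrates only this combination step; it is not the MGLEX implementation, and the numpy-based interface and function names are assumptions.

```python
import numpy as np

def aggregate_loglik(submodel_logliks, alpha, beta=1.0):
    """Combine submodel log-likelihoods into the aggregate model (Eq. 1 in log form).

    submodel_logliks: list of M arrays, each of shape (contigs, genome bins),
                      holding log L(theta_gk | F_ik) for one feature type.
    alpha:            sequence of M submodel weights.
    beta:             free sharpness parameter of the bin posterior.
    """
    alpha = np.asarray(alpha, dtype=float)
    stacked = np.stack(submodel_logliks, axis=0)   # (M, contigs, bins)
    weighted = alpha[:, None, None] * stacked      # scale each submodel's contribution
    return beta * weighted.sum(axis=0)             # (contigs, bins)
```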
<ns0:div><ns0:head>Absolute abundance</ns0:head><ns0:p>We derive the average number of reads covering each contig position from assembler output or by mapping the reads back onto contigs. This mean coverage is a proxy for the genome abundance in the sample because it is roughly proportional to the genome copy number. A careful library preparation causes the copy numbers of genomes to vary differently over samples, so that each genome has a distinct relative read distribution. Depending on the amount of reads in each sample being associated with every genome, we obtain for every contig a coverage vector a i where len(a i ) is the number of samples. Therefore, if more sample replicates are provided, contigs from different genomes are generally better separable since every additional replicate adds an entry to the feature vectors.</ns0:p><ns0:p>Random sequencing followed by perfect read assembly theoretically produces positional read counts which are Poisson distributed, as described in <ns0:ref type='bibr' target='#b16'>Lander & Waterman (1988)</ns0:ref>. In Equation <ns0:ref type='formula'>2</ns0:ref>, we derived a similar likelihood using mean coverage values (see Supplementary Methods for details). The likelihood function is a normalized product over the independent Poisson functions P θ j (a i, j ) for each sample. The Manuscript to be reviewed Computer Science expectation parameter θ j represents the genome copy number in the j th sample.</ns0:p><ns0:formula xml:id='formula_1'>L(θ | a i ) = len(a i ) len(a i ) j=1 P θ j (a i, j ) = len(a i ) len(a i ) j=1 θ a i, j j a i, j ! e −θ j (2)</ns0:formula><ns0:p>The Poisson explicitly accounts for low and zero counts, unlike a Gaussian model. Low counts are often observed for undersequenced and rare taxa. Note that a i, j is independent of θ. We derived the model likelihood function from the joint Poisson over all contig positions by approximating the first data-term with mean coverage values (Supplementary Methods).</ns0:p><ns0:p>The maximum likelihood estimate (MLE) for θ on training data is the weighted average of mean coverage values for each sample in the training data (Supplementary Methods).</ns0:p><ns0:formula xml:id='formula_2'>θ = N i=1 w i a i N i=1 w i<ns0:label>(3)</ns0:label></ns0:formula></ns0:div>
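As a worked illustration of Equations 2 and 3, the sketch below evaluates the sample-normalized Poisson log-likelihood for mean contig coverages and the weighted maximum likelihood estimate of the genome copy numbers. It is a simplified stand-in for the MGLEX submodel, assuming mean coverages in a (contigs × samples) array, contig lengths as weights and strictly positive expectation parameters.

```python
import numpy as np
from scipy.special import gammaln

def poisson_mle(mean_cov, weights):
    """Eq. 3: weighted average of per-sample mean coverages.
    mean_cov: (N, S) mean coverages a_i; weights: (N,) contig lengths w_i."""
    w = weights[:, None]
    return (w * mean_cov).sum(axis=0) / weights.sum()

def poisson_loglik(mean_cov, theta):
    """Eq. 2 in log space, normalized by the number of samples S.
    gammaln generalizes log(a!) to real-valued mean coverages; theta must be
    positive (add a small pseudocount for genomes absent from a sample)."""
    ll = mean_cov * np.log(theta) - theta - gammaln(mean_cov + 1.0)
    return ll.sum(axis=1) / mean_cov.shape[1]      # one value per contig
```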
<ns0:div><ns0:head>Relative abundance</ns0:head><ns0:p>In particular for shorter contigs, the absolute read coverage is often overestimated. Basically, the Lander-Waterman assumptions <ns0:ref type='bibr' target='#b16'>(Lander & Waterman, 1988)</ns0:ref> are violated if reads do not map to their original locations due to sequencing errors or if they 'stack' on certain genome regions because they are ambiguous (i.e. for repeats or conserved genes), rendering the Poisson model less appropriate. The Poisson, when constrained on the total sum of coverages in all samples, leads to a binomial distribution as shown by <ns0:ref type='bibr' target='#b26'>(Przyborowski & Wilenski, 1940)</ns0:ref>. Therefore, we model differential abundance over different samples using a binomial in which the parameters represent a relative distribution of genome reads over the samples. For instance, if a particular genome had the same copy number in a total of two samples, the genome's parameter vector θ would simply be [0.5, 0.5]. As for absolute abundance, the model becomes more powerful with a higher number of samples. Using relative frequencies as model parameters instead of absolute coverages, however, has the advantage that any constant coverage factor cancels in the division term. For example, if a genome has two similar gene copies which are collapsed during assembly, twice as many reads will map onto the assembled gene in every sample but the relative read frequencies over samples will stay unaffected. This makes the binomial less sensitive to read mapping artifacts but requires two or more samples because one degree of freedom (DF) is lost by the division.</ns0:p><ns0:p>The contig features r i are the mean coverages in each sample, which is identical to a i in the absolute abundance model, and the model's parameter vector θ holds the relative read frequencies in the samples, as explained before. In Equation <ns0:ref type='formula'>4</ns0:ref>we ask: how likely is the observed mean contig coverage r i, j in sample j given the genome's relative read frequency θ j of the sample and the contig's total coverage R i for all samples. The corresponding likelihood is calculated as a normalized product over the binomials B R i ,θ j (r i, j ) for every sample.</ns0:p><ns0:formula xml:id='formula_3'>L(θ | r i ) = len(r i ) len(r i ) j=1 B R i ,θ j (r i, j ) = len(r i ) len(r i ) j=1 R i r i, j θ r i, j j 1 − θ j (Ri−ri,j) (4)</ns0:formula><ns0:p>R i is the sum of the abundance vector r i . Because both R i and r i can contain real numbers, we need to generalize the binomial coefficient to positive real numbers via the gamma function Γ. Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_4'>n k = Γ(n + 1)Γ(k + 1) Γ(n − k + 1)<ns0:label>(5</ns0:label></ns0:formula></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Because the binomial coefficient is a constant factor and independent of θ, it can be omitted in ML classification (when comparing between different genomes) or be retained upon parameter updates. As for the Poisson, the model accounts for low and zero counts (by the binomial coefficient). We derived the likelihood function from the joint distribution over all contig positions by approximating the binomial data-term with mean coverage values (see Supplementary Methods).</ns0:p><ns0:p>The MLE θ for the model parameters on training sequence data corresponds to the amount of read data (base pairs) in each sample divided by the total number of base pairs in all samples. We express this as a weighted sum of contig mean coverage values (see Supplementary Methods).</ns0:p><ns0:formula xml:id='formula_5'>θ = N i=1 w i r i N i=1 w i R i<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>It is obvious that absolute and relative abundance models are not independent when the identical input vectors (here a i = r i ) are used. However, we can instead apply the Poisson model to the total coverage R i</ns0:p><ns0:p>(summed over all samples) because this sum also follows a Poisson distribution. To illustrate the total abundance, this compares to mixing the samples before sequencing so that the resolution of individual samples is lost. The binomial, in contrast, only captures the relative distribution of reads over the samples (one DF is lost in the ratio transform). This way, we can combine both absolute and relative abundance submodels in the aggregate model.</ns0:p></ns0:div>
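A corresponding sketch for the relative abundance submodel (Equations 4 to 6), again only illustrative and not the MGLEX code: the generalized binomial coefficient does not depend on θ and is therefore optional, and the estimator divides the weighted read amounts per sample by the weighted total over all samples.

```python
import numpy as np
from scipy.special import gammaln

def binomial_mle(mean_cov, weights):
    """Eq. 6: relative read frequency of the genome in each sample.
    mean_cov: (N, S) mean coverages r_i; weights: (N,) contig lengths."""
    per_sample = (weights[:, None] * mean_cov).sum(axis=0)
    return per_sample / per_sample.sum()   # equals sum(w*r) / sum(w*R)

def binomial_loglik(mean_cov, theta, keep_constant=False):
    """Eq. 4 in log space, normalized by the number of samples; assumes
    0 < theta < 1 in every sample. The binomial coefficient (Eq. 5) is a
    constant in theta and can be dropped when comparing genome bins."""
    R = mean_cov.sum(axis=1, keepdims=True)
    ll = mean_cov * np.log(theta) + (R - mean_cov) * np.log1p(-theta)
    if keep_constant:
        ll += gammaln(R + 1) - gammaln(mean_cov + 1) - gammaln(R - mean_cov + 1)
    return ll.sum(axis=1) / mean_cov.shape[1]
```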
<ns0:div><ns0:head>Nucleotide composition</ns0:head><ns0:p>Microbial genomes have a distinct 'genomic fingerprint' <ns0:ref type='bibr' target='#b13'>(Karlin, Mrazek & Campbell, 1997)</ns0:ref> which is typically determined by means of k-mers. Each contig has a relative frequency vector c i for all possible k-mers of size k. The nature of shotgun sequencing demands that each k-mer is counted equally to its reverse complement because the orientation of the sequenced strand is typically unknown. With increasing k, the feature space grows exponentially and becomes sparse. Thus, it is common to select k from 4 to 6 <ns0:ref type='bibr' target='#b30'>(Teeling et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b21'>McHardy et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b15'>Kislyuk et al., 2009)</ns0:ref>. Here, we simply use 5-mers (len(c i ) = 4 5 2 = 512) but other choices can be made.</ns0:p><ns0:p>For its simplicity and effectiveness, we chose a likelihood model assuming statistical independence of features so that the likelihood function in Equation 7 becomes a simple product over observation probabilities (or a linear model when transforming into a log-likelihood). Though k-mers are not independent due to their overlaps and reverse complementarity <ns0:ref type='bibr' target='#b15'>(Kislyuk et al., 2009)</ns0:ref>, the model has been successfully applied to k-mers <ns0:ref type='bibr' target='#b33'>(Wang et al., 2007)</ns0:ref>, and we can replace k-mers in our model with better-suited compositional features, i.e. using locality-sensitive hashing <ns0:ref type='bibr' target='#b20'>(Luo et al., 2016)</ns0:ref>. A genome's background distribution θ is a vector which holds the probabilities to observe each k-mer and the vector c i does the same for the i th contig. The composition likelihood for a contig is a weighted and normalized product over the background frequencies.</ns0:p><ns0:formula xml:id='formula_6'>L(θ | c i ) = len(c i ) i=1 θ c i i (7)</ns0:formula><ns0:p>The genome parameter vector θ that maximizes the likelihood on training sequence data can be estimated by a weighted average of feature counts (Supplementary Methods). </ns0:p><ns0:formula xml:id='formula_7'>θ = N i=1 w i c i N i=1 w i<ns0:label>(</ns0:label></ns0:formula></ns0:div>
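For the nucleotide composition submodel, Equations 7 and 8 translate into a simple linear operation on the relative k-mer frequencies. The sketch below is again a hypothetical, minimal version; the pseudocount used to avoid log(0) for unseen k-mers is an added assumption.

```python
import numpy as np

def composition_mle(kmer_freq, weights, pseudocount=1e-10):
    """Eq. 8: weighted average of relative k-mer frequencies (rows sum to 1)."""
    theta = (weights[:, None] * kmer_freq).sum(axis=0) / weights.sum()
    theta = theta + pseudocount          # keep unseen k-mers representable
    return theta / theta.sum()

def composition_loglik(kmer_freq, theta):
    """Eq. 7 in log space: because the c_i are relative frequencies, the result
    is already normalized to a single compositional feature per contig."""
    return kmer_freq @ np.log(theta)     # (N, K) @ (K,) -> (N,)
```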
<ns0:div><ns0:head>Similarity to reference</ns0:head><ns0:p>We can compare contigs to reference sequences, for instance by local alignment. Two contigs that align to closely related taxa are more likely to derive from the same genome than sequences which align to distant clades. We convert this indirect relationship to explicit taxonomic features which we can compare without direct consideration of reference sequences. A taxon is a hierarchy of nested classes which can be written as a tree path, for example, the species E. coli could be written as [Bacteria, Gammaproteobacteria,</ns0:p><ns0:formula xml:id='formula_8'>Enterobacteriaceae, E. coli].</ns0:formula><ns0:p>We assume that distinct regions of a contig, such as genes, can be annotated with different taxa. Each taxon has a corresponding weight which in our examples is a positive alignment score. The weighted taxa define a spectrum over the taxonomy for every contig and genome. It is not necessary that the alignment reference be complete or include the respective species genome but all spectra must be equally biased.</ns0:p><ns0:p>Since each contig is represented by a hierarchy of L numeric weights, we incorporated these features into our multi-layer model. First, each contig's taxon weights are transformed to a set of sparse feature vectors</ns0:p><ns0:formula xml:id='formula_9'>t i = {t i,l | 1 ≤ l ≤ L},</ns0:formula><ns0:p>one for each taxonomic level, by inheriting and accumulating scores for higher-level taxa (see Table <ns0:ref type='table'>1</ns0:ref> and Figure <ns0:ref type='figure'>2</ns0:ref>).</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>. Calculating the contig features t i for a simplified taxonomy. There are five original integer alignment scores for nodes (c), (e), (f), (g) and (h) which are summed up at higher levels to calculate the feature vectors t i,l . The corresponding tree structure is shown in Figure <ns0:ref type='figure'>2</ns0:ref>.</ns0:p><ns0:p>Node Taxon Level l Index j Score t i,l, j</ns0:p><ns0:formula xml:id='formula_10'>a Bacteria 1 1 0 7 b Gammaproteobacteria 2 1 0 6 c Betaproteobacteria 2 2 1 1 d Enterobacteriaceae 3 1 0 5 e Yersiniaceae 3 2 1 1 f E. vulneris 4 1 1 1 g E. coli 4 2 3 3 h Yersinia sp. 4 3 1 1</ns0:formula><ns0:p>Each vector t i,l contains the scores for all T l possible taxa at level l. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science likelihood model corresponds to a set of simple frequency models, one for each layer. The full likelihood is a product of the level likelihoods.</ns0:p><ns0:formula xml:id='formula_11'>L(θ | t i ) = L l=1 T l j=1 θ t i,l, j l, j<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>For simplicity, we assume that layer likelihoods are independent which is not quite true but effective.</ns0:p><ns0:p>The MLE for each θ l is then derived from training sequences similar to the simple frequency model (Supplementary Methods).</ns0:p><ns0:formula xml:id='formula_12'>θl = N i=1 t i,l T l j=1 N i=1 t i,l<ns0:label>(10)</ns0:label></ns0:formula></ns0:div>
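The transformation of taxonomic annotations into level-wise feature vectors can be pictured as accumulating alignment scores upwards along each taxon path, as in Table 1. The snippet below mirrors that accumulation scheme on made-up input; the data structures are illustrative and the example numbers are not those of Table 1.

```python
from collections import defaultdict

def taxon_level_scores(annotations):
    """Accumulate per-region alignment scores along taxonomic paths.
    annotations: list of (path, score) pairs for one contig, where path is a
    tuple of taxa ordered from domain to the most specific assigned rank.
    Returns one {taxon: summed score} dict per taxonomic level."""
    levels = defaultdict(lambda: defaultdict(float))
    for path, score in annotations:
        for depth, taxon in enumerate(path, start=1):
            levels[depth][taxon] += score        # higher ranks inherit the score
    return {depth: dict(scores) for depth, scores in levels.items()}

# Hypothetical contig with four annotated regions:
ann = [(("Bacteria", "Gammaproteobacteria", "Enterobacteriaceae", "E. coli"), 3),
       (("Bacteria", "Gammaproteobacteria", "Enterobacteriaceae", "E. vulneris"), 1),
       (("Bacteria", "Gammaproteobacteria", "Yersiniaceae", "Yersinia sp."), 1),
       (("Bacteria", "Betaproteobacteria"), 1)]
print(taxon_level_scores(ann))
# level 1: Bacteria=6; level 2: Gammaproteobacteria=5, Betaproteobacteria=1; ...
```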
<ns0:div><ns0:head>Inference of weight parameters</ns0:head><ns0:p>The aggregate likelihood for a contig in Equation 1 is a weighted product of submodel likelihoods. The weights in vector α balance the contributions, assuming that they must not be equal. When we write the likelihood in logarithmic form (Equation <ns0:ref type='formula' target='#formula_13'>11</ns0:ref>), we see that each weight α k sets the variance or width of the contigs' submodel log-likelihood distribution. We want to estimate α k in a way which is not affected by the original submodel variance because the corresponding normalization exponent is somewhat arbitrary.</ns0:p><ns0:p>For example, we normalized the nucleotide composition likelihood as a single feature and the abundance likelihoods as a single sample to limit the range of the likelihood values, because we simply cannot say how much each feature type counts.</ns0:p><ns0:formula xml:id='formula_13'>l(Θ | F i ) = β M k=1 α k l(Θ k | F i,k )<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>For any modeled genome, each of the M submodels produces a distinct log-likelihood distribution of contig data. Based on the origin of the contigs, which is known for model training, the distribution can be split into two parts, the actual genome (positive class) and all other genomes (negative class), as illustrated in Figure <ns0:ref type='figure' target='#fig_6'>3</ns0:ref>. The positive distribution is roughly unimodal and close to zero whereas the negative distribution, which represents many genomes at once, is diverse and yields strongly negative values. Intuitively, we want to select α such that the positive class is well separated from the negative class in the aggregate log-likelihood function in Equation <ns0:ref type='formula' target='#formula_13'>11</ns0:ref>.</ns0:p><ns0:p>Because α cannot be determined by likelihood maximization, the contributions are balanced in a robust way by setting α to the inverse standard deviation of the genome (positive class) log-likelihood distributions. More precisely, we calculate the average standard deviation over all genomes weighted by the amount of contig data (bp) for each genome and calculate α k as the inverse of this value. This scales down submodels with a high average variance. When we normalize the standard deviation of genome log-likelihood distributions in all submodels before summation, we assume that a high variance means uncertainty. This form of weight estimation requires that for at least some of the genomes, a sufficient number of sequences must be available to estimate the standard deviation. In some instances, it might be necessary to split long contigs into smaller sequences to generate a sufficient number of data points for estimation.</ns0:p><ns0:p>Parameter Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_14'>β</ns0:formula></ns0:div>
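The rule described above can be sketched as follows for a single submodel: collect the training log-likelihoods of each genome's own contigs, take the per-genome standard deviation, average it weighted by the amount of sequence data, and use the inverse as α_k. This is an illustration under the stated assumptions, not the exact MGLEX routine.

```python
import numpy as np

def estimate_alpha(train_loglik, genome_labels, contig_bp):
    """alpha_k = 1 / (bp-weighted mean of per-genome log-likelihood std devs).
    train_loglik:  (N,) log-likelihoods of training contigs, each evaluated
                   under the model of its own genome (the positive class).
    genome_labels: (N,) genome index per training contig.
    contig_bp:     (N,) contig lengths in base pairs."""
    sds, bps = [], []
    for g in np.unique(genome_labels):
        sel = genome_labels == g
        if sel.sum() < 2:
            continue                       # need several contigs to estimate a spread
        sds.append(train_loglik[sel].std())
        bps.append(contig_bp[sel].sum())
    return 1.0 / np.average(sds, weights=bps)
```

β, in contrast, cannot be set this way; as described in the text, it is chosen by minimizing the training or test classification error, for example over a grid of candidate values.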
<ns0:div><ns0:head>Data simulation</ns0:head><ns0:p>We simulated reads of a complex microbial community from 400 publicly available genomes (Supplementary Methods and Supplementary Table <ns0:ref type='table'>1</ns0:ref>). These comprised 295 unique and 44 species with each two or three strain genomes to mimic strain heterogeneity. Our aim was to create a difficult benchmark dataset under controlled settings, minimizing potential biases introduced by specific software. We sampled abundances from a lognormal distribution because it has been described as a realistic model <ns0:ref type='bibr' target='#b28'>(Schloss & Handelsman, 2006)</ns0:ref>. We then simulated a primary community which was then subject to environmental changes resulting in exponential growth of 25% of the community members at growth rates which where chosen uniformly at random between one and ten whereas the other genome abundances remained unchanged. We applied this procedure three times to the primary community which resulted in one primary and three secondary artificial community abundances profiles. With these, we generated 150 bp long Illumina HiSeq reads using the ART simulator <ns0:ref type='bibr' target='#b10'>(Huang et al., 2012)</ns0:ref> and chose a yield of 15 Gb per sample.</ns0:p><ns0:p>The exact amount of read data for all four samples after simulation was 59.47 Gb. To avoid any bias caused by specific metagenome assembly software and to assure a constant contig length, we divided the original genome sequences into non-overlapping artificial contigs of 1 kb length and selected a random 500 kb of each genome to which we mapped the simulated reads using Bowtie2 <ns0:ref type='bibr' target='#b17'>(Langmead & Salzberg, 2012)</ns0:ref>. By the exclusion of some genome reference, we imitated incomplete genome assemblies when mapping reads, which affects the coverage values. Finally, we subsampled 300 kb contigs per genome with non-zero read coverage in at least one of the samples to form the demonstration dataset (120 Mb), which has 400 genomes (including related strains), four samples and contigs of size 1 kb. Due to the short contigs and few samples, this is a challenging dataset for complete genome recovery <ns0:ref type='bibr'>(Nielsen et al., 2014)</ns0:ref> but suitable to demonstrate the functioning of our model with limited data. For each contig we derived 5-mer frequencies, taxonomic annotation (removing species-level genomes from the reference sequence data) and average read coverage per sample, as described in the Supplementary Methods. Manuscript to be reviewed</ns0:p></ns0:div>
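For orientation, abundance profiles of the kind described above might be drawn as in the following sketch. The lognormal parameters and the treatment of a growth rate as a simple fold change are assumptions made for illustration; the actual simulation additionally involves read simulation with ART and read mapping with Bowtie2.

```python
import numpy as np

rng = np.random.default_rng(0)
n_genomes = 400

# Primary community: lognormal abundance profile.
primary = rng.lognormal(mean=1.0, sigma=1.0, size=n_genomes)

def perturb(abundance, rng, fraction=0.25, max_rate=10.0):
    """Secondary sample: a random 25% of the members grow at rates drawn
    uniformly from [1, 10] (applied here as fold changes); the rest stay unchanged."""
    out = abundance.copy()
    grow = rng.random(abundance.size) < fraction
    out[grow] *= rng.uniform(1.0, max_rate, size=grow.sum())
    return out

profiles = [primary] + [perturb(primary, rng) for _ in range(3)]  # one primary, three secondary
```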
<ns0:div><ns0:head>RESULTS</ns0:head></ns0:div>
<ns0:div><ns0:head>Maximum likelihood classification</ns0:head><ns0:p>We evaluated the performance of the model when classifying contigs to the genome with the highest likelihood, a procedure called Maximum Likelihood (ML) classification. We applied a form of three-fold cross-validation, dividing the simulated data set into three equally-sized parts with 100 kb from every genome. We used only 100 kb (training data) of every genome to infer the model parameters and the other 200 kb (test data) to measure the classification error. 100 kb was used for training because it is often difficult to identify sufficient training data in metagenome analysis. For each combination of submodels, we calculated the mean squared error (MSE) and mean pairwise coclustering (MPC) probability for the predicted (ML) probability matrices (Suppl. Methods), averaged over the three test data partitions. We included the MPC as it can easily be interpreted: for instance, a value of 0.5 indicates that on average 50% of all contig pairs of a genome end up in the same bin after classification. Table <ns0:ref type='table'>2</ns0:ref> shows that the model integrates information from each data source such that the inclusion of additional submodels resulted in a better MPC and also MSE, with a single exception when combining absolute and relative abdundance models which resulted in a marginal increase of the MSE. We also found that taxonomic annotation represents the most powerful information type in our simulation. For comparson, we added scores for NBC <ns0:ref type='bibr' target='#b27'>(Rosen, Reichenberger & Rosenfeld, 2011)</ns0:ref>, a classifier based on nucleotide composition with in-sample training using 5-mers and 15-mers, and Centrifuge <ns0:ref type='bibr' target='#b14'>(Kim et al., 2016)</ns0:ref>, a similarity-based classifier both with in-sample and reference data. These programs were given the same information as the corresponding submodels and they rank close to these. In a further step, we investigated how the presence of very similar genomes impacted the performance of the model. We first collapsed strains from the same species by merging the corresponding columns in the classification likelihood matrix, retaining the entry with the highest likelihood, and then computed the resulting coclustering performance increase ∆MPC ML . Considering assignment on species instead of strain level showed a larger ∆MPC ML for nucleotide composition and taxonomic annotation than for absolute and relative abundance. This is expected, because both do not distinguish among strains, whereas genome abundance does in some, but not all cases.</ns0:p><ns0:p>Table <ns0:ref type='table'>2</ns0:ref>. Cross-validation performance of ML classification for all possible combinations of submodels. We calculated the mean pairwise coclustering (MPC), the strain to species MPC improvement (∆MPC ML ) and the mean squared error (MSE). AbAb = absolute total abundance; ReAb = relative abundance; NuCo = nucleotide composition; TaAn = taxonomic annotation. NBC (v1.1) and <ns0:ref type='bibr'>Centrifuge (v.1.0.3b)</ns0:ref> </ns0:p></ns0:div>
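The exact definitions of the MSE and MPC are given in the Supplementary Methods, which are not reproduced here; one plausible reading of the MPC, sufficient to follow the numbers in Table 2, is sketched below for hard (ML) assignments.

```python
import numpy as np

def mean_pairwise_coclustering(pred_bins, true_genomes):
    """For each genome, the probability that two of its contigs end up in the
    same predicted bin, averaged over genomes (illustrative reading of the MPC)."""
    values = []
    for g in np.unique(true_genomes):
        bins = pred_bins[true_genomes == g]
        n = bins.size
        if n < 2:
            continue
        counts = np.bincount(bins)
        same_pairs = (counts * (counts - 1)).sum()   # ordered co-clustered pairs
        values.append(same_pairs / (n * (n - 1)))
    return float(np.mean(values))

# ML classification itself is an argmax over the aggregate log-likelihoods:
# pred_bins = np.argmax(aggregate_loglik_matrix, axis=1)
```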
<ns0:div><ns0:head>Soft assignment</ns0:head><ns0:p>The contig length of 1 kb in our simulation is considerably shorter, and therefore harder to classify, than sequences which can be produced by current assembly methods or by some cutting-edge sequencing platforms <ns0:ref type='bibr' target='#b7'>(Goodwin, McPherson & McCombie, 2016)</ns0:ref>. In practice, longer contigs can be classified with higher accuracy than short ones, as more information is provided as a basis for assignment. For instance, a more robust coverage mean, a k-mer spectrum derived from more counts or more local alignments to reference genomes can be inferred from longer sequences. However, as short contigs remain frequent in current metagenome assemblies, 1 kb is sometimes considered a minimum useful contig length <ns0:ref type='bibr' target='#b1'>(Alneberg et al., 2014)</ns0:ref>. To account for the natural uncertainty when assigning short contigs, one can calculate the posterior probabilities over the genomes (see Suppl. Methods), which results in partial assignments of each contig to the genomes. This can reflect situations in which a particular contig is associated with multiple genomes, for instance in case of misassemblies or the presence of homologous regions across genomes.</ns0:p><ns0:p>The free model parameter β in Equation <ns0:ref type='formula'>1</ns0:ref>, which is identical in all genome models, smoothens or <ns0:ref type='table'>2</ns0:ref>. Thus, soft assignment seems more suitable to classify 1 kb contigs, which tend to produce similar likelihoods under more than one genome model.</ns0:p></ns0:div>
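Soft assignment amounts to turning the β-scaled aggregate log-likelihoods into a posterior over genome bins, i.e. a row-wise softmax. The sketch below assumes a uniform prior over bins unless one is supplied; it is an illustration, not the MGLEX API.

```python
import numpy as np

def bin_posterior(agg_loglik, beta=1.0, log_prior=None):
    """Posterior over genome bins; rows are contigs, columns are bins.
    Small beta smoothens the distribution (beta = 0 gives a uniform posterior),
    large beta approaches the hard maximum likelihood assignment."""
    z = beta * agg_loglik
    if log_prior is not None:
        z = z + log_prior
    z = z - z.max(axis=1, keepdims=True)        # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)
```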
<ns0:div><ns0:head>Genome enrichment</ns0:head><ns0:p>Enrichment is commonly known as an experimental technique to increase the concentration of a target substance relative to others in a probe. Thus, an enriched metagenome still contains a mixture of different genomes, but the target genome will be present at much higher frequency than before. This allows a more focused analysis of the contigs or an application of methods which seem prohibitive for the full data by runtime or memory considerations. In the following, we demonstrate how to filter metagenome contigs by p-value to enrich in-silico for specific genomes. Often, classifiers model an exhaustive list of alternative genomes but in practice it is difficult to recognize all species or strains in a metagenome with appropriate training data. When we only look at individual likelihoods, for instance the maximum among the genomes, this can be misleading if the contig comes from a missing genome. For better judgment, a p-value tells us how frequent or extreme the actual likelihood is for each genome. Many if not all binning methods lack explicit significance calculations. We can take advantage of the fact that the classification model compresses all features into a genome likelihood and generate a null (log-)likelihood distribution on training data for each genome. Therefore, we can associate empirical p-values with each newly classified contig and can, for sufficiently small p-values, reject the null hypothesis that the contig belongs to the respective genome. Since this is a form of binary classification, there is the risk to reject a good contig which we measure as sensitivity.</ns0:p><ns0:p>We enriched a metagenome by first training a genome model and then calculating the p-values of remaining contigs using this model. Contigs with higher p-values than the chosen critical value were discarded. The higher this cutoff is, the smaller the enriched sample becomes, but also the target genome Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>, is approximately a linear function of the p-value.</ns0:p><ns0:p>will be less complete. We calculated the reduced sample size as a function of the p-value cutoff for our simulation (Figure <ns0:ref type='figure'>5</ns0:ref>). Selecting a p-value threshold of 2.5% shrinks the test data on average down to 5% of the original size. Instead of an empirical p-value, we could also use a parametrized distribution or select a critical log-likelihood value by manual inspection of the log-likelihood distribution (see Figure <ns0:ref type='figure' target='#fig_6'>3</ns0:ref> for an example of such a distribution). This example shows that generally a large part of a metagenome dataset can be discarded while retaining most of the target genome sequence data.</ns0:p></ns0:div>
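A minimal sketch of the enrichment step: build the genome's null distribution from the log-likelihoods of its training contigs and compute an empirical left-tail p-value for every new contig. Following the rejection rule stated above (small p-values reject membership), contigs are kept when their p-value exceeds the critical value, which is consistent with retaining most of the target genome at a 2.5% threshold. Function names and conventions are illustrative, not the MGLEX implementation.

```python
import numpy as np

def empirical_pvalue(null_loglik, query_loglik):
    """Fraction of the genome's training (null) log-likelihoods that are at most
    as high as the query value, with add-one smoothing. Atypical contigs receive
    small p-values; contigs from the genome receive roughly uniform p-values."""
    null = np.sort(np.asarray(null_loglik))
    rank = np.searchsorted(null, query_loglik, side="right")
    return (rank + 1.0) / (null.size + 1.0)

def enrich_mask(query_loglik, null_loglik, critical=0.025):
    """Boolean mask of contigs whose membership in the target genome is
    not rejected at the chosen critical value."""
    return empirical_pvalue(null_loglik, query_loglik) > critical
```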
<ns0:div><ns0:head>Bin analysis</ns0:head><ns0:p>The model can be used to analyze bins of metagenome contigs, regardless of the method that was used to infer these bins. Specifically, one can measure the similarity of two bins in terms of the contig likelihood instead of, for instance, an average euklidean distance based on the contig or genome k-mer and abundance vectors. We compare bins to investigate the relation between the given data, represented by the features in the model, and their grouping into genome bins. For instance, one could ask whether the creation of two genome bins is sufficiently backed up by the contig data or whether they should be merged into a single bin. For readability, we write the likelihood of a contig in bin A to:</ns0:p><ns0:formula xml:id='formula_16'>L(θ A | contig i) = L i (θ A ) = L(θ A ) = L A</ns0:formula><ns0:p>To compare two specific bins, we select the corresponding pair of columns in the classification likelihood matrix and calculate two mixture likelihoods for each contig (rows), L, using the MLE of the parameters for both bins and L swap under the hypothesis that we swap the model parameters of both bins.</ns0:p><ns0:p>The partial assignment weights πA and πB , called responsibilities, are estimated by normalization of the two bin likelihoods. Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_17'>L = πA L A + πB L B = L A L A +L B L A + L B L A +L B L B = L 2 A + L 2 B L A + L B<ns0:label>(12</ns0:label></ns0:formula><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_18'>L swap = πA L B + πB L A = L A L A +L B L B + L B L A +L B L A = 2L A L B L A + L B<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>For example, if πA and πB assign one third of a contig to the first, less likely bin and two thirds to the second, more likely bin using the optimal parameters, then L swap would simply exchange the contributions in the mixture likelihood so that one third are assigned to the more likely and two thirds to the less likely bin. The ratio L swap / L ranges from zero to one and can be seen as a percentage similarity. We form a joint relative likelihood for all N contigs, weighting each contig by its optimal mixture likelihood L and normalizing over these likelihood values.</ns0:p><ns0:formula xml:id='formula_19'>S(A, B) = Z N i=1        2 L i (θ A ) L i (θ B ) L 2 i (θ A ) + L 2 i (θ B )        L 2 i (θ A )+L 2 i (θ B ) L i (θ A )+L i (θ B ) (14)</ns0:formula><ns0:p>normalized by the total joint mixture likelihood</ns0:p><ns0:formula xml:id='formula_20'>Z = N i=1 L 2 i (θ A ) + L 2 i (θ B ) L i (θ A ) + L i (θ B )<ns0:label>(15)</ns0:label></ns0:formula><ns0:p>The quantity in Equation <ns0:ref type='formula'>14</ns0:ref>ranges from zero to one, reaching one when the two bin models produce identical likelihood values. We can therefore interpret the ratio as a percentage similarity between any two bins. A connection to the Kullback-Leibler divergence can be constructed (Supplementary Methods).</ns0:p><ns0:p>To demonstrate the application, we trained the model on our simulated genomes, assuming they were bins, and created trees (Figure <ns0:ref type='figure' target='#fig_11'>6</ns0:ref>) for a randomly drawn subset of 50 of the 400 genomes using the probabilistic bin distances −log(S ) (Equation <ns0:ref type='formula'>14</ns0:ref>). We computed the distances twice, first with only nucleotide composition and taxonomic annotation submodels and second with the full feature set to compare the bin resolution. 
The submodel parameters were inferred using the full dataset and β using three-fold crossvalidation. We then applied average linkage clustering to build balanced and rooted trees with equal distance from leave to root for visual inspection. The first tree loosely reflects phylogenetic structure corresponding to the input features. However, many similarities over 50% (outermost ring) show that model and data lack the support for separating these bins. In contrast, the fully informed tree, which additionally includes information about contig coverages, separates the genomes bins, such that only closely related strains remain ambiguous. This analysis shows again that the use of additional features improves the resolution of individual genomes and, specifically, that abundance separates similar genomes.</ns0:p><ns0:p>Most importantly, we show that our model provides a measure of support for a genome binning. We know the taxa of the genome bins in this example but for real metagenomes, such an analysis can reveal binning problems and help to refine the bins as in Figure <ns0:ref type='figure' target='#fig_0'>1d</ns0:ref>.</ns0:p></ns0:div>
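Reading Equations 12 to 15 as a likelihood-weighted geometric mean of the per-contig ratio L_swap / L̂, the bin similarity can be computed directly from two columns of the log-likelihood matrix, as in the following sketch (an interpretation for illustration, not the MGLEX code). The resulting −log(S) values can be collected into a distance matrix and passed to average-linkage clustering, analogous to the trees in Figure 6.

```python
import numpy as np

def bin_similarity(loglik_a, loglik_b):
    """S(A, B) of Eq. 14 as a weighted geometric mean of L_swap / L_hat.
    loglik_a, loglik_b: (N,) contig log-likelihoods under the two bin models.
    Returns a value in (0, 1]; identical likelihood profiles give S = 1."""
    la, lb = np.asarray(loglik_a), np.asarray(loglik_b)
    log_sq_sum = np.logaddexp(2 * la, 2 * lb)         # log(L_A^2 + L_B^2)
    log_ratio = np.log(2.0) + la + lb - log_sq_sum    # log(L_swap / L_hat) <= 0
    log_w = log_sq_sum - np.logaddexp(la, lb)         # log of the mixture weight L_hat
    w = np.exp(log_w - log_w.max())                   # rescaling does not change the mean
    return float(np.exp(np.average(log_ratio, weights=w)))

# distance between bins A and B for hierarchical clustering:
# d = -np.log(bin_similarity(loglik[:, A], loglik[:, B]))
```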
<ns0:div><ns0:head>GENOME BIN REFINEMENT</ns0:head><ns0:p>We applied the model to show one of its current use cases on more realistic data. We downloaded the medium complexity dataset from www.cami-challenge.org. This dataset is quite complex (232 genomes, two sample replicates). We also retrieved the results of the two highest-performing automatic binning programs, MaxBin and Metawatt, in the CAMI challenge evaluation <ns0:ref type='bibr' target='#b29'>(Sczyrba et al., 2017)</ns0:ref>. We took the simplest possible approach: we trained MGLEX on the genome bins derived by these methods and classified the contigs to the bins with the highest likelihood, thus ignoring all details of contig splitting, β or p-value calculation and changes in the number of genome bins. When contigs were assigned to multiple bins with equal probability, we attributed them to the first bin in the list because the evaluation framework does not allow sharing contigs between bins. We only used information that was provided to the contestants at the time of the challenge. We report the results for two settings for each method using the recall, i.e. the fraction of overall assigned contigs (bp), and the Adjusted Rand index (ARI) as defined in the CAMI evaluation paper. In the first setting, only contigs that had originally been assigned were swapped between bins. In the second, all available contigs were assigned to the bins, thus maximizing the recall. Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref> shows that MGLEX bin refinement improved the genome bins in terms of the ARI for both sets of genome bins and increased the recall for Metawatt but not MaxBin. This is likely due to the fact that MaxBin has fewer but relatively complete bins to which the other contigs cannot correctly be recruited. Further improvement would involve dissection and merging of bins within and among methods, for which MGLEX likelihoods can be considered.</ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>We describe an aggregate likelihood model for the reconstruction of genome bins from metagenome data sets and show its value for several applications. The model can learn from and classify nucleotide sequences from metagenomes. It provides likelihoods and posterior bin probabilities for existing genome bins, as well as p-values, which can be used to enrich a metagenome dataset with a target genome. The model can also be used to quantify bin similarity. It builds on four different submodels that make use of different information sources in metagenomics, namely contig coverage, nucleotide composition and previous taxonomic assignments. By its modular design, the model can easily be extended to include additional information sources. This modularity also helps in interpretation and computations. The former, because different features can be analyzed separately and the latter, because submodels can be trained independently and in parallel.</ns0:p><ns0:p>In comparison to previously described parametric binning methods, our model incorporates two new types of features. The first is relative differential coverage, for which, to our knowledge, this is the first attempt to use binomials to account for systematic bias in the read mapping for different genome regions. Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>As such, the binomial submodel represents the parametric equivalent of covariance distance clustering.</ns0:p><ns0:p>The second new type is taxonomic annotation, which substantially improved the classification results in our simulation. Taxonomic annotations, as used in the model and in our simulation, were not correct up to the species level and need not be, as seen in the classification results. We only require the same annotation method be applied to all sequences. In comparison to previous methods, our aggregate model has weight parameters to combine the different feature types and allows tuning the bin posterior distribution by selection of an optimal smoothing parameter β.</ns0:p><ns0:p>We showed that probabilistic models represent a good choice to handle metagenomes with short contigs or few sample replicates, because they make soft, not hard decisions, and because they can be applied in numerous ways. When the individual submodels are trained, genome bin properties are compressed into fewer model parameters, such as mean values, which are mostly robust to outliers and therefore tolerate a certain fraction of bin pollution. This property allows to reassign contigs to bins, which we demonstrated in the 'Genome bin refinement' section. Measuring the performance of the individual submodels and their corresponding features on short simulated contigs (Table <ns0:ref type='table'>2</ns0:ref>), we find that they discriminate genomes or species pan-genomes by varying degrees. Genome abundance represents, in our simulation with four samples, the weakest single feature type, which will likely become more powerful with increasing sample numbers. Notably, genomes of individual strains are more difficult to distinguish than species level pangenomes using any of the features. In practice, if not using idealized assemblies as in our current evaluation, strain resolution poses a problem to metagenome assembly, which is currently not resolved in a satisfactory manner <ns0:ref type='bibr' target='#b29'>(Sczyrba et al., 2017)</ns0:ref>.</ns0:p><ns0:p>The current MGLEX model is somewhat crude because it makes many simplifying assumptions in the submodel definitions. For instance, the multi-layer model for taxonomic annotation assumes that the probabilities in different layers are independent, the series of binomials for relative abundance should be </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure1. Genome reconstruction workflow. To recover genomes from environmental sequencing data, the illustrated processes can be iterated. Different programs can be run for each process and iteration. MGLEX can be applied in all steps: (a) to classify contigs or to cluster by embedding the probabilistic model into an iterative procedure; (b) to enrich a metagenome for a target genome to reduce its size and to filter out irrelevant sequence data; (c) to select contigs of existing bins based on likelihoods and p-values and to repeat the binning process with a reduced data-set; (d) to refine existing bins, for instance to merge bins as suggested by bin analysis.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>P k (contig i | genome) that contig i belongs to a particular genome. Each of the components k reflects a Submission v0.5.4s 3/20 PeerJ Comput. Sci. reviewing PDF | (CS-2016:12:15063:1:1:NEW 18 Mar 2017) Manuscript to be reviewed Computer Science particular feature such as • a weight w i (contig length)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>A genome is represented by a similar set of vectors θ = {θ l | 1 ≤ l ≤ L} with identical dimensions, but here, entries represent relative frequencies on the particular level l, for instance a distribution over all family taxa. The corresponding Submission v0.5.4s 7/20 PeerJ Comput. Sci. reviewing PDF | (CS-2016:12:15063:1:1:NEW 18 Mar 2017)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head /><ns0:label /><ns0:figDesc>in Equation 11 is only relevant for soft classification but not in the context of ML classification or p-values. It can best be viewed as a sharpening or smoothing parameter of the bin posterior distribution (the probability of a genome or bin given the contig). β is estimated by minimization of the training or test error, as in our simulation. Sci. reviewing PDF | (CS-2016:12:15063:1:1:NEW 18 Mar 2017)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Procedure for determination of α k for each submodel. The figure shows a schematic for a single genome and two submodels. The genome's contig log-likelihood distribution (A and B) is scaled to a standard deviation of one (C and D) before adding the term in the aggregate model in Equation 11.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head /><ns0:label /><ns0:figDesc>sharpens the posterior distribution: β = 0 produces a uniform posterior and with very high β, the posterior approaches the sharp ML solution. We determined β by optimizing the MSE on both training and test data, shown in Figure 4. As expected, the classification training error was smaller than the test error because the submodel parameters were optimized with respect to the training data. Because the minima are close to each other, the full aggregate model seems robust to overfitting of β on training data. The comparison of soft vs. hard assignment shows that the former has a smaller average test classification MSE of ∼ 0.28 (the illustrated minimum in Figure 4) compared to the latter (ML) assignment MSE of ∼ 0.33 in Table</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 4 .Figure 5 .</ns0:head><ns0:label>45</ns0:label><ns0:figDesc>Figure 4. Model training (err) and test error (Err) as a function of β for the complete aggregate model including all submodels and feature types. The solid curve shows the average and the colored shading the standard deviation of the three partitions in cross-validation. The corresponding optimal values for β are marked by black dots and vertical lines. The minimum average training error is 0.238 (β = 2.85) and test error is 0.279 at β = 1.65.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure6. Average linkage clustering of a random subset of 50 out of 400 genomes using probabilistic distances −log(S ) (Equation14) to demonstrate the ability of the model to measure bin resolution. This example compares the left (blue) tree, which was constructed only with nucleotide composition and taxonomic annotations, with the right (red) tree, which uses all available features. The tip labels were shortened to fit into the figure. The similarity axis is scaled as log(1-log(S)) to focus on values near one. Bins which are more than 50% similar branch in the outermost ring whereas highly dissimilar bins branch close to the center. We created the trees by applying the R function hclust(method='average') to MGLEX output.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>replaced by a multinomial to account for the parameter dependencies, or the absolute abundance Poisson model should incorporate overdispersion to model the data more appropriately. Exploiting this room for improvement can further increase the performance while the overall framework and usage of MGLEX stay unchanged. When we devised our model, we had an embedding into more complex routines in mind. In the future, the model can be used in inference procedures such as EM or MCMC to infer or improve an existing genome binning. Thus, MGLEX provides a software package for use in other programs. However, it also represents a powerful stand-alone tool for the adept user in its current form. Currently, MGLEX does not yet have support for multiple processors and only provides the basic functionality presented here. However, training and classification can easily be implemented in parallel because they are expressed as matrix multiplications. The model requires sufficient training data to robustly estimate the submodel weights α using the standard deviation of the empirical log-likelihood distributions and requires linked sequences to estimate β using error minimization. In situations with a limited number of contigs per genome bin, we therefore advise generating linked training sequences of a certain length, as in our simulation, for instance by splitting assembled contigs. The optimal length for splitting may depend on the overall fragmentation of the metagenome. Our open-source Python package MGLEX provides a flexible framework for metagenome analysis and binning which we intend to develop further together with the metagenomics research community. It can be used as a library to write new binning applications or to implement custom workflows, for example to supplement existing binning strategies. It can build upon a present metagenome binning by taking assignments to bins as input and deriving likelihoods and p-values that allow for critical inspection of the contig assignments. Based on the likelihood, MGLEX can calculate bin similarities to provide insight into the structure of data and community. Finally, genome enrichment of metagenomes can improve the recovery of particular genomes in large datasets.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>Taxonomy for Table 1 which is simplified to four levels and eight nodes. A full taxonomy may consist of thousands of nodes.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>a</ns0:cell><ns0:cell /><ns0:cell>Domain (level 1)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>b</ns0:cell><ns0:cell>c</ns0:cell><ns0:cell>Class (level 2)</ns0:cell></ns0:row><ns0:row><ns0:cell>d</ns0:cell><ns0:cell>e</ns0:cell><ns0:cell /><ns0:cell>Family (level 3)</ns0:cell></ns0:row><ns0:row><ns0:cell>f</ns0:cell><ns0:cell>g</ns0:cell><ns0:cell>h</ns0:cell><ns0:cell>Species (level 4)</ns0:cell></ns0:row><ns0:row><ns0:cell>Figure 2.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>8) Submission v0.5.4s 6/20 PeerJ Comput. Sci. reviewing PDF | (CS-2016:12:15063:1:1:NEW 18 Mar 2017)Manuscript to be reviewed</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>are external classifiers added for comparison. Best values are in bold and worst in italic.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Submodels</ns0:cell><ns0:cell cols='3'>MPC ML ∆MPC ML MSE ML</ns0:cell></ns0:row><ns0:row><ns0:cell>ReAb + TaAn</ns0:cell><ns0:cell>0.60</ns0:cell><ns0:cell>+0.10</ns0:cell><ns0:cell>0.36</ns0:cell></ns0:row><ns0:row><ns0:cell>ReAb + NuCo + TaAn</ns0:cell><ns0:cell>0.64</ns0:cell><ns0:cell>+0.11</ns0:cell><ns0:cell>0.34</ns0:cell></ns0:row><ns0:row><ns0:cell>AbAb + ReAb + TaAn</ns0:cell><ns0:cell>0.65</ns0:cell><ns0:cell>+0.10</ns0:cell><ns0:cell>0.35</ns0:cell></ns0:row><ns0:row><ns0:cell>AbAb + ReAb + NuCo + TaAn</ns0:cell><ns0:cell>0.68</ns0:cell><ns0:cell>+0.11</ns0:cell><ns0:cell>0.33</ns0:cell></ns0:row><ns0:row><ns0:cell>Submodels</ns0:cell><ns0:cell cols='3'>MPC ML ∆MPC ML MSE ML</ns0:cell></ns0:row><ns0:row><ns0:cell>Centrifuge (in-sample)</ns0:cell><ns0:cell>0.01</ns0:cell><ns0:cell>+0.01</ns0:cell><ns0:cell>0.51</ns0:cell></ns0:row><ns0:row><ns0:cell>NBC (15-mers)</ns0:cell><ns0:cell>0.02</ns0:cell><ns0:cell>+0.00</ns0:cell><ns0:cell>0.66</ns0:cell></ns0:row><ns0:row><ns0:cell>AbAb</ns0:cell><ns0:cell>0.03</ns0:cell><ns0:cell>+0.00</ns0:cell><ns0:cell>0.58</ns0:cell></ns0:row><ns0:row><ns0:cell>ReAb</ns0:cell><ns0:cell>0.08</ns0:cell><ns0:cell>+0.02</ns0:cell><ns0:cell>0.61</ns0:cell></ns0:row><ns0:row><ns0:cell>Centrifuge (reference)</ns0:cell><ns0:cell>0.13</ns0:cell><ns0:cell>+0.03</ns0:cell><ns0:cell>0.45</ns0:cell></ns0:row><ns0:row><ns0:cell>AbAb + ReAb</ns0:cell><ns0:cell>0.21</ns0:cell><ns0:cell>+0.04</ns0:cell><ns0:cell>0.59</ns0:cell></ns0:row><ns0:row><ns0:cell>NuCo</ns0:cell><ns0:cell>0.30</ns0:cell><ns0:cell>+0.06</ns0:cell><ns0:cell>0.52</ns0:cell></ns0:row><ns0:row><ns0:cell>NBC (5-mers)</ns0:cell><ns0:cell>0.34</ns0:cell><ns0:cell>+0.06</ns0:cell><ns0:cell>0.48</ns0:cell></ns0:row><ns0:row><ns0:cell>ReAb + NuCo</ns0:cell><ns0:cell>0.41</ns0:cell><ns0:cell>+0.07</ns0:cell><ns0:cell>0.48</ns0:cell></ns0:row><ns0:row><ns0:cell>AbAb + NuCo</ns0:cell><ns0:cell>0.43</ns0:cell><ns0:cell>+0.08</ns0:cell><ns0:cell>0.50</ns0:cell></ns0:row><ns0:row><ns0:cell>TaAn</ns0:cell><ns0:cell>0.46</ns0:cell><ns0:cell>+0.09</ns0:cell><ns0:cell>0.41</ns0:cell></ns0:row><ns0:row><ns0:cell>AbAb + ReAb + NuCo</ns0:cell><ns0:cell>0.52</ns0:cell><ns0:cell>+0.09</ns0:cell><ns0:cell>0.44</ns0:cell></ns0:row><ns0:row><ns0:cell>NuCo + TaAn</ns0:cell><ns0:cell>0.52</ns0:cell><ns0:cell>+0.09</ns0:cell><ns0:cell>0.40</ns0:cell></ns0:row><ns0:row><ns0:cell>AbAb + TaAn</ns0:cell><ns0:cell>0.54</ns0:cell><ns0:cell>+0.09</ns0:cell><ns0:cell>0.39</ns0:cell></ns0:row><ns0:row><ns0:cell>AbAb + NuCo + TaAn</ns0:cell><ns0:cell>0.60</ns0:cell><ns0:cell>+0.10</ns0:cell><ns0:cell>0.37</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Submission v0.5.4s</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>10/20 PeerJ Comput. Sci. reviewing PDF | (CS-2016:12:15063:1:1:NEW 18 Mar 2017) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Genome bin refinement for CAMI medium complexity dataset with 232 genomes and two samples. The recall is the fraction of overall assigned contigs (bp). The Adjusted Rand index (ARI) is a measure of binning precision. The unmodified genome bins are the submissions to the CAMI challenge using the corresponding unsupervised binning methods Metawatt and MaxBin. MGLEX swapped contigs: contigs in original genome bins reassigned to the bin with highest MGLEX likelihood. MGLEX all contigs: all contigs (with originally uncontained) assigned to the bin with highest MGLEX likelihood. The lowest scores are written in italic and highest in bold.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Binner</ns0:cell><ns0:cell>Variant</ns0:cell><ns0:cell cols='2'>Bin count Recall (bp) ARI</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Metawatt unmodified</ns0:cell><ns0:cell>285</ns0:cell><ns0:cell>0.94 0.75</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Metawatt MGLEX swapped contigs</ns0:cell><ns0:cell>285</ns0:cell><ns0:cell>0.94 0.82</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Metawatt MGLEX all contigs</ns0:cell><ns0:cell>285</ns0:cell><ns0:cell>1.00 0.77</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxBin</ns0:cell><ns0:cell>unmodified</ns0:cell><ns0:cell>125</ns0:cell><ns0:cell>0.82 0.90</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxBin</ns0:cell><ns0:cell>MGLEX swapped contigs</ns0:cell><ns0:cell>125</ns0:cell><ns0:cell>0.82 0.92</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxBin</ns0:cell><ns0:cell>MGLEX all contigs</ns0:cell><ns0:cell>125</ns0:cell><ns0:cell>1.00 0.76</ns0:cell></ns0:row><ns0:row><ns0:cell>Implementation</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell cols='4'>We provide a Python package called MGLEX, which includes the described model. Simple text input</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>facilitates the integration of external programs for feature extraction like k-mer counting or read mapping,</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>which are not included. MGLEX can process millions of sequences with vectorized arithmetics using</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>NumPy (Walt, Colbert & Varoquaux, 2011) and includes a command line interface to the main functional-</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>ity, such as model training, classification, p-value and error calculations. It is open source (GPLv3) and</ns0:cell></ns0:row><ns0:row><ns0:cell cols='3'>freely available via the Python Package Index 1 and on GitHub 2 .</ns0:cell><ns0:cell /></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "Helmholtz Centre for Infection Research
Inhoffenstraße 7
38124 Braunschweig, Germany
www.helmholtz-hzi.de
2017-03-13
Dear Dr. Titus Brown,
We would like to thank you, as well as Dinghua Li, Daan Speth and Qingpeng Zhang, for the general interest and
many helpful comments. We have addressed them in our revision of the manuscript “A probabilistic
model to recover individual genomes from metagenomes” (#CS-2016:12:15063:0:0:REVIEW).
Particularly, we have extended the available software documentation, we have added two external
classifiers to our performance ranking for better orientation, and we have applied MGLEX to refine
genome bins of a more realistic, assembled metagenome dataset. The changes we have included in
response to the comments are denoted in italics below.
Johannes Dröge
On behalf of all authors: Johannes Dröge, Alexander Schönhuth and Alice C. McHardy.
Reviewer: Dinghua Li
Basic reporting
Overall, the manuscript reads well. The introduction gave a good literature review of the problem
of recovering genomes from metagenomic data, which led to the probabilistic model proposed.
The structure of the manuscript is clear.
Yet I suggest the authors highlight the advantage of the probabilistic model over other methods in
both introduction and discussion section. In fact, some of them were shown amid the main text
(Line 316-318, 334-338), but a summary of them would be useful to highlight the contribution of
this work.
Any advice to improve the structure of our article is very welcome. We have added more details on
the advantages of this probabilistic method over other approaches in the introduction and discussion (p. 2-3, l.
64-72). A particular advantage is that probabilistic models are well studied and plug into many standard
higher-level procedures. For instance, the famous k-means clustering is a simple example of an EM
algorithm applied to a Gaussian probabilistic model. While other formulations can yield comparable
results, the probabilistic framework provides a natural way to express influences such as uncertainty in
the data and the binning results.
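
To make the connection concrete, here is a minimal, purely illustrative sketch (our own notation, not MGLEX code) of how per-bin log-likelihoods translate into either a hard maximum-likelihood assignment or a β-sharpened soft posterior over bins:

    import numpy as np

    def bin_posterior(log_likelihoods, beta=1.0):
        """Turn a (contigs x bins) matrix of log-likelihoods into a posterior over
        bins per contig; beta > 1 sharpens and beta < 1 smooths the distribution."""
        scaled = beta * log_likelihoods
        scaled = scaled - scaled.max(axis=1, keepdims=True)  # numerical stability
        post = np.exp(scaled)
        return post / post.sum(axis=1, keepdims=True)

    log_l = np.log(np.array([[0.6, 0.3, 0.1],
                             [0.2, 0.5, 0.3]]))   # toy values: 2 contigs, 3 bins
    soft_assignment = bin_posterior(log_l, beta=1.65)  # soft assignment
    hard_assignment = log_l.argmax(axis=1)             # maximum-likelihood labels

With β = 0 every bin receives equal probability and with very large β the posterior approaches the hard argmax, which matches the description of β in the manuscript.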
Experimental design
1. Methods section needs polishing
1.a In Line 100, N stands for the number of contigs of a shotgun metagenomic experiment, while
in Equation 3,6,8 and 10, N seems to stand for the number of contigs in the genome/bin used to
train the parameter θ. A different notation should be used.
True, we have adjusted the formulas (p. 3, l. 109).
1.b The relation between P(contig_i|genome) (line 112) and the likelihood defined by Equation 1
is not clear, especially given that there is not a variable referring to a 'genome' in Equation 1. I
suggest putting a subscript “g” to the parameter term θ. This could help readers keep in mind
that a θ is trained from a genome (or a bin) g, and L(θ_g|F_i) could unambiguously refer to the
likelihood of that contig i originated from genome g.
We have edited Equation 1 and the description correspondingly (p. 4, l. 130-131). Adding a subscript to
theta, as suggested, is a good addition to clarify what theta stands for, in addition to the textual
description.
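
For reference, the revised joint likelihood with the genome subscript then reads (in LaTeX form, following Equation 1 of the manuscript):

    \mathcal{L}(\Theta_g \mid F_i) \;=\; \left[\, \prod_{k=1}^{M} \mathcal{L}(\Theta_{g,k} \mid F_{i,k})^{\alpha_k} \right]^{\beta}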
1.c Line 200 and 229 said Naïve Bayes models were chosen. However, the models are not Bayesian
(also noted by the authors in line 201). They are simply linear models (after taking logarithm to
Equation 7 and 9), and should not be described as “Naïve Bayes models”.
We have removed this term to avoid further confusion (p. 6, l. 209-210).
2. Questions for the experimental design
2.a Nielsen et.al (nbt.2939, 2014) found that using the co-abundance across different samples,
“species can be segregated accurately using as few as 18 samples”, but not fewer. However, it was
noted from line 270 that only four communities of different abundance profiles were simulated in
the paper. It is questionable whether four samples are sufficient. BTW, the authors should cite
Nielsen et.al’s paper.
We already cited Nielsen in the introduction (p. 1, l. 33). We have now added text about the simulated
data in the methods section (p. 10, l. 214-218) where we also cite Nielsen, and discuss the general
strain separation issue (p. 17, l. 458-466). We agree that 4 samples with 1 kb contigs are very likely
insufficient to recover individual genomes, or species pan-genomes, in a real metagenome. We chose a
hard, controlled benchmark dataset to demonstrate the characteristics of the model when it comes to
limited metagenomic data like short contigs or a low number of samples, when binning methods
usually perform poorly. However, our evaluation is a relative performance demonstration to show how
the model works. We have added a note on this in the corresponding section “data simulation”. The
canopy method used by Nielsen et al. exclusively uses individual gene abundances for binning. The
idea of a well-informed model is to reduce the amount of required data for binning by using more
features in a better way. It can be seen in Table 2 that extensive taxonomic annotation and nucleotide
composition contribute more information to discriminate the genomes than the abundances and thus the
number of required replicates may well be reduced when this information is used for binning. From our
personal binning experience, we know that many factors influence the quality of resulting genome bins,
such as sequencing technology, assembly, community composition and OTU abundance variation.
Hence, absolute numbers can, in our opinion, only be seen as a rule of thumb for a specific setting.
2.b The taxonomy annotation was generated from BLAST alignment, according to the
supplement. Yet the reference database was not shown. A proper reference database is very
crucial in this experiment. To be realistic, the database used for should discard a significant
proportion of genomes used to simulate the contigs, otherwise, the BLAST alignments alone
would assign most of the contigs to the correct genomes.
We share this view and we have taken appropriate care by using a standardized reference set defined
for the CAMI challenge (www.cami-challenge.org), which, for instance, is also available for download
as a compiled reference dataset for the software taxator-tk (http://research.bifo.helmholtz-hzi.de/software/, refpack “microbial-full_20150430”). This particular information was missing, thanks
for noting, and we have added it to the supplement. When we generated the taxonomic annotation, we
discarded all the exact genomes plus all of the genomes of the same species with respect to the
simulated community. This procedure is described in the section “Taxonomic annotation” in the
supplement and we also now clarified this in the section “Data simulation” (p. 10, l. 297-298).
Validity of the findings
1. Though it shows that combining the four models resulted in the best MSE and MPC scores,
readers are not able to judge how good the aggregate model is. A comparison with other methods
(e.g. use Kraken to assign) is welcomed.
We have added performance scores for two classifiers each in two different settings and have reported
them in the text (p. 10, l. 315-318) and in Table 2 (p. 10-11). We understand the reader's desire to
compare our model to other software. Since MGLEX is a classification model, it is only feasible to
compare it to other classifiers. Although this manuscript was intended to be more of a theoretical paper
with a strong focus on the details of the model before moving on to specific applications, we have
accommodated results for two other methods. These are NBC (Rosen et al. 2011) and Centrifuge (Kim
et al. 2016). NBC is a Naïve Bayes Classifier based on k-mers and similar to our nucleotide
composition submodel. As with our model, we trained it on the in-sample training data using 5-mers
and 15-mers. Kraken could not be trained on the CAMI reference data due to a long-standing open issue
(#36, https://github.com/DerrickWood/kraken/, which we had reported in February 2016). Instead, we
applied Centrifuge, which is similar to Kraken, using the same reference sequences and taxonomy as
was used to generate the taxonomic annotation for MGLEX. In a second setting, we also trained
Centrifuge on the in-sample training data for comparison.
2. In the “Genome enrichment” section, there should be another figure showing the % of the
target contigs being filtered v.s. p-value cut-off.
The requested figure exists as Supplementary Figure 1 and reports the sensitivity (or recall) as a
function of the p-value cutoff. A sensitivity of 90% means that the remaining 10% of the “good”
contigs are filtered out.
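
For illustration, the curve reported in Supplementary Figure 1 can be thought of along these lines (a schematic sketch with invented variable names, not code used for the paper):

    import numpy as np

    def sensitivity_curve(target_pvalues, cutoffs):
        """Fraction of target-genome contigs retained (sensitivity/recall) when
        contigs with a p-value below the cutoff are filtered out."""
        p = np.asarray(target_pvalues)
        return [float((p >= c).mean()) for c in cutoffs]

    # A sensitivity of 0.9 at a given cutoff means that 10% of the target
    # genome's contigs would be removed by the filter at that cutoff.
    cutoffs = np.linspace(0.0, 0.5, 11)
    curve = sensitivity_curve(np.random.uniform(size=1000), cutoffs)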
3. The statement in line 406 “taxonomic annotations, as used in the model, need not be entirely
correct, as long as the same annotation method is applied to all sequences” lacks experimental
supports. Already noted in my comment 2.b of 'Experimental design', the authors need to select
a proper database to support the statement.
Experimental support is provided by our simulations and we have rephrased the statement to clarify
this (p. 17, l. 460-461). The annotations, which were used, have been generated as described in the
supplement section “Feature generation: Taxonomic annotation”. Removing all species-level genomes
means that most of the annotations derived from the best BLAST hits are actually for incorrect species
or higher-level taxa. However, the corresponding submodel clearly outperformed the submodels for the
other features because the annotations are only used to match the contigs against the model in terms of
taxon frequencies.
Comments for the author
The process of creating the simulated dataset was detailed in the supplement, but it is still useful to
make the datasets accessible online.
The supporting data and scripts are provided with Digital Object Identifier (DOI) under
https://doi.org/10.5281/zenodo.201076. They contain the simulated dataset, the derived feature files
and the scripts to generate the results shown in the manuscript. We also report this link in the
supplement section “Supporting Data”.
The implementation MGLEX is open-sourced on Github and very easy to install via python-pip,
which is most appreciated by the community. But it lacks basic document or guidance for users to
start with
We appreciate that the reviewers have installed and checked the software! We have added basic
documentation on how to run MGLEX to the GitHub sources and at Read the Docs but more
documentation will be added later. Program parameters and modules are also briefly explained in the
command line help. We recommend having a look at the supporting data and scripts to learn by
example.
Reviewer: Daan Speth
Comments for the author
In their manuscript entitled 'a probabilistic model to recover individual genomes from
metagenomes' Droege et al describe a model usable to assign contigs to genome bins, as well as
evaluating the likelihood that bins are correct. The work is of interest, as genome recovery from
metagenomes is only recently becoming a generally used strategy for analysis of metagenomic
data.
To provide context for the review: I am a biologist by training, with some experience in analysis
of metagenomic data in both a gene and genome centered manner. Most of my comments will
thus concern these aspects of the work. I have some remarks on the manuscript, and some on the
python implementation of the model.
On the manuscript:
The manuscript describes the model well and is readable for someone who is not intimately
familiar with modeling. I was wondering if the implementation of relative abundance as it is
described in the manuscript would be sensitive to datasets of uneven sequencing depth. It seems
that without standardization between datasets, datasets with more reads would be valued
higher, because the variations between genomes in more deeply sequenced datasets are bigger?
It is true that samples with higher sequencing depth count more. This is justified by the fact that each
count represents a discrete event (a read) and that more certainty is accumulated with every observed
event. This is the theory, but other normalizations may be tested, for instance in terms of the
classification accuracy. Our framework allows implementing alternative submodels or improving on
individual submodels to adapt better to the observed data without implying substantial changes in the
programs which use MGLEX as a library.
I was a little surprised to see that the authors choose to simulate a realistic metagenome, but
subsequently choose to avoid a realistic treatment of this dataset. While I see this is valuable in
order to assess the performance of the model in a controlled manner, I would like to ask the
authors to also show a use case for their model in which some of the challenges of a real dataset
are present. Especially a condition where the training data is not a priori known to be correct (as
in a binning of a metagenome assembly) is of interest and I urge the authors to apply their model
to either a real dataset, or a de novo assembled simulated dataset in which the available prior
knowledge is not used to train the model.
Please note that the proposed model is not an automatic binning or clustering procedure but provides an
important element for integration in such programs. Our manuscript thus has a limited scope and we
tried to clarify this in line 74-76 (“Importantly, we are not providing an automatic solution to binning
but present a flexible framework to target problems associated with binning. This functionality can be
used in custom workflows or programs for the steps illustrated in Figure 1.”).
In response to the comment, we have applied the model to show one of its current use cases on more
realistic data and added the results to the text (p. 14-16, l. 406-424). We have downloaded the medium
complexity dataset from www.cami-challenge.org. This dataset is quite complex (232 genomes, two
sample replicates). We also retrieved the results from the two best-performing automatic binning
programs, MaxBin and Metawatt, in the CAMI challenge (https://dx.doi.org/10.1101/099127). In
section “Genome bin refinement” we show that the quality of the results, measured in terms of the
adjusted Rand index, is consistently improved by applying MGLEX. This is achieved by model training
on the bins and subsequent classification, which swaps the contigs between genomes and recruits
previously unassigned contigs to the bins. Note that the genome bins delivered by automatic binning
programs are contaminated and that MGLEX copes with this and corrects for incorrect assignments.
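
In pseudocode terms, the refinement step can be summarized as below (a sketch in our own naming; train and log_likelihood stand in for the respective model operations and are not the MGLEX API):

    import numpy as np

    def refine_bins(features, bin_of_contig, train, log_likelihood):
        """Train one model per existing genome bin, then reassign every contig
        (including previously unbinned ones) to its maximum-likelihood bin.

        train(features, member_indices) -> fitted model for one bin
        log_likelihood(model, features) -> one log-likelihood per contig
        """
        bins = sorted(b for b in set(bin_of_contig) if b is not None)
        models = [train(features,
                        [i for i, b in enumerate(bin_of_contig) if b == bin_id])
                  for bin_id in bins]
        ll = np.column_stack([log_likelihood(m, features) for m in models])
        return [bins[j] for j in ll.argmax(axis=1)]  # contigs swapped and recruited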
specific comments:
line 28: I think the term binning might have been coined by Banfield and colleagues in their 2004
paper. It would be appropriate to cite that here.
Tyson, Gene W., et al. 'Community structure and metabolism through reconstruction of
microbial genomes from the environment.' Nature 428.6978 (2004): 37-43.
Thanks, we have added this in the introduction (p. 1, l. 28).
line 80: The authors state that taxonomic affiliation has rarely been used as input in binning.
Albertsen et al. (2013) do use taxonomic affiliation in their (mostly manual) binning procedure
and should be mentioned here. The Banfield lab's ggKbase can also use taxonomic info, but I
think is not completely open to use and I don't know if there is a paper to cite.
This is true; what we meant was using the taxonomic annotation in a systematic way in the binning
process. We have re-phrased the sentence to make that clear (p. 3, l. 89). Genomes or genome bins are
regularly assessed using taxonomic annotation.
The explanation of using both absolute and relative abundance independently in line 185-189 is
not entirely clear to me.
We have tried to improve the explanation (p. 6, l. 196-200). The reason is twofold: (a) it is
statistically sound, as explained there, and (b) it demonstrably gives better performance (Table 2).
line 228-229 the implementation of taxonomic assignments for contigs is clear, but it is unclear to
me what 'level-specific relative frequencies' means for the entries for genomes.
We have rephrased the text (p. 7, l. 239). A level is a horizontal slice of the taxonomy, for instance the
family rank. The scores counted on that level are simply normalized to give relative frequencies
(summing to one), as for the simple frequency model.
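
As a small, hypothetical example of that normalization (taxon labels follow the toy taxonomy of Figure 2; the dictionary layout is ours):

    # Taxonomic scores accumulated for one genome, grouped by level.
    scores = {
        "class":  {"b": 30.0, "c": 10.0},
        "family": {"d": 25.0, "e": 15.0},
    }

    # Normalize within each level so that the frequencies on every level sum to one.
    level_frequencies = {
        level: {taxon: s / sum(taxa.values()) for taxon, s in taxa.items()}
        for level, taxa in scores.items()
    }
    print(level_frequencies)
    # {'class': {'b': 0.75, 'c': 0.25}, 'family': {'d': 0.625, 'e': 0.375}}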
line 253: what is meant by 'amount of contig data'? Total number of bases in contigs or total number of
contigs, or something else?
This is the number of bases in the contigs; we have added this to the text (p. 8, l. 264).
line 305: missing r in strains
Fixed, thanks.
line 305: One of the problems with metagenome analysis is the partial coassembly of (the core
genome of) strains, in which case genome abundance is not necessarily capable of distinguishing
between strains.
This is quite true, and it is the reason why the model performance drops when we try to resolve strains.
However, to some degree, abundances can be used to separate strains (at least for variable genes),
whereas species-level annotation (naturally) and nucleotide 5-mer composition cannot distinguish them. This
is what we measured indirectly using ∆MPC_ML. We have rephrased the sentence to make this
clearer (p. 10, l. 323) and added a little bit of text on this issue to the discussion part (p. 17, l. 457-463).
On the python implementation:
Except implicitly on the https://pypi.python.org/pypi/MGLEX website, it is not mentioned that
this is a Python 3 package, which thwarted my initial attempt at installing it on my MacBook
We have now added the requirement explicitly in the online README and documentation. (Python 2
should be dropped in favor of Python 3 because it was announced end-of-life).
installing the module on our lab server with an anaconda3 environment was straightforward,
with the only exception of a permission issue when adding lines to easy-install.pth.
This might seem oddly specific for a comment, but I encourage the authors to provide a very brief
overview of dependencies and installation instructions.
Thanks for testing. We have added installation instructions and dependencies to the software
README. We have created an issue on GitHub but please provide more details on the problem. We
suspect this is a problem with the anaconda environment.
Once installed, the module seems to run fine and the access to the command help is
straightforward. However, a general workflow description is lacking. The various commands are
listed, but it isn't immediately obvious in which order the commands should be run.
Going through the commands, it seems that 'buildmatrix' is run to generate the responsibility file
that most other commands use. It is however unclear to me what the required input 'identifiers'
and 'seeds' for this command are, and what file format they should be.
Then 'train' can be used with this responsibility file, to train the model required for 'classify'. As
in 'buildmatrix' the input format for the various data required for the model features is unclear.
I suggest that the authors provide a less minimal description of the package use, such that it is
accessible to a wider audience in the bioinformatics community
We appreciate that the reviewers have installed and checked the software! We have added basic
documentation on how to run MGLEX to the GitHub sources and at Read the Docs but more
documentation will be added later. Program parameters and modules are also briefly explained in the
command line help. We recommend having a look at the supporting data and scripts to learn by
example.
Reviewer: Qingpeng Zhang
Basic reporting
1. In general, the manuscript was well written, with appropriate structure and comprehensive
description of the methods and results.
2. In line 355, the sentence of “one would need could quantify bin similarity by direct comparison
of features such as the k-mer vectors or abundances.” is a little bit confusing.
Thanks, we have rephrased the sentence and given an example (p. 13, l. 376-377).
Experimental design
1. In this manuscript, the authors only use simulated data sets to evaluate the method. It will be
useful to show how this method can be applied to some real datasets and how this method
performs in a more realistic setting. It may be enough to only show the performance on one
application, especially the classification/binning. How does this method further improve the
quality of binning results generated by other existing binning tools?
We applied the model to show one of its current use cases on more realistic data and added the results
to the text (p. 14-16, l. 406-424). We have downloaded the medium complexity dataset from
www.cami-challenge.org. This dataset is quite complex (232 genomes, two sample
replicates). We also retrieved the results from the two best-performing automatic binning programs,
MaxBin and Metawatt, in the CAMI challenge (https://dx.doi.org/10.1101/099127). In section
“Genome bin refinement” we show that the quality of the results, measured in terms of the adjusted
rand index, is consistently improved by applying MGLEX. This is achieved by model training on the
bins and subsequent classification, which swaps the contigs between genomes and recruits previously
unassigned contigs to the bins. Note that the genome bins delivered by automatic binning programs are
contaminated and that MGLEX copes with this and corrects for incorrect assignments.
2. With the models built using simulated training set, the best performance of MPC is 0.68 using
all the features. How does this performance compare to other existing binning tools? It will be
interesting to see how existing binning tools work on the “testing data”.
We have added performance scores for two classifiers each in two different settings and have reported
them in the text (p. 10, l. 315-318) and in Table 2 (p. 10-11). We understand the reader's desire to
compare our model to other software. Since MGLEX is a classification model, it is only feasible to
compare it to other classifiers. Although this manuscript was intended to be more of a theoretical paper
with a strong focus on the details of the model before moving on to specific applications, we have
accommodated results for two other methods. These are NBC (Rosen et al. 2011) and Centrifuge (Kim
et al. 2016). NBC is a Naïve Bayes Classifier based on k-mers and similar to our nucleotide
composition submodel. As with our model, we trained it on the in-sample training data using 5-mers
and 15-mers. Kraken could not be trained on the CAMI reference data due to a long-standing open issue
(#36, https://github.com/DerrickWood/kraken/, we reported in February 2016). Instead, we applied
Centrifuge, which is similar to Kraken, using the same reference sequences and taxonomy as was used
to generate the taxonomic annotation for MGLEX. In a second setting, we also trained Centrifuge on
the in-sample training data instead of the reference data for comparison.
3. More descriptions about the implementation will be appreciated. For example, does the
implementation include functions to generate the k-mer frequency or coverage information? This
can give the audience valuable information about how easily this tool can be integrated into their
pipeline.
We now mention explicitly in the text (p. 16, l. 428-430) that MGLEX itself does not include any feature
extraction functionality and is not supposed to. For our simulation, we report the specific programs
and parameters which we used to derive the input features in the supplement, section “Feature
generation”. However, there are many ways to determine coverage, for instance by using the
assembly output or by mapping reads to contigs using the mapping program which seems most
appropriate for the kind of data. Mapping Illumina reads to long PacBio sequences requires different
algorithms than mapping them to assembled contigs or scaffolds. The same goes for the choice of k-mers or alternative compositional features and the taxonomic annotation. The choice is up to the users.
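
As an example of such external feature extraction, counting strand-independent k-mers could look roughly like this (an illustrative sketch, not the counting program we used):

    COMPLEMENT = str.maketrans("ACGT", "TGCA")

    def canonical_kmer_counts(sequence, k=5):
        """Count k-mers while merging each k-mer with its reverse complement,
        since the sequenced strand is unknown."""
        counts = {}
        sequence = sequence.upper()
        for i in range(len(sequence) - k + 1):
            kmer = sequence[i:i + k]
            if any(base not in "ACGT" for base in kmer):
                continue  # skip ambiguous positions such as N
            canonical = min(kmer, kmer.translate(COMPLEMENT)[::-1])
            counts[canonical] = counts.get(canonical, 0) + 1
        return counts

    print(canonical_kmer_counts("ACGTACGTACGT", k=5))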
Validity of the findings
1. The method described in this manuscript does need a “training dataset” to build the models.
Does this training dataset of sequences with known bins have to be generated by other
metagenome binning tools? Is it possible to build the models with simulated data as shown in this
manuscript? If this method requires some kind of existing sequence bins to work, this method is
different from many existing binning tools, which can do the binning from scratch.
Please note that the proposed model is not an automatic binning or clustering procedure but provides an
important element for integration in such programs. Our manuscript thus has a limited scope and we
tried to clarify this in line 74-76 (“Importantly, we are not providing an automatic solution to binning
but present a flexible framework to target problems associated with binning. This functionality can be
used in custom workflows or programs for the steps illustrated in Figure 1.”).
Many (automatic) binners start by determining in-sample training data, for instance looking for specific
ribosomal genes or certain gene families in the contigs. All classifiers need to be given or to derive
some sort of prior knowledge. For MGLEX, the kind of training data is determined by the application:
existing bins for bin refinement; contigs or scaffolds for “de-novo” classification or genome
enrichment. It might be possible to make use of reference genomes for training, but in most cases, no
close reference genomes are available.
And what if the performance of existing binning tools on the data set is not good enough? How
will the quality of the bins used as training affect the performance of this MGLEX method? It
will be appreciated if the authors can make this more clear to the audience and discuss more
about how to integrate this method with other existing binning methods to achieve better results.
These are relevant questions and we have added a corresponding part to our discussion (p. 17, l. 463-467). We did not specifically investigate the robustness of the underlying classification model to
such influences. However, we now applied the model to correct for heterogeneous bins in section
“Genome bin refinement” (p. 14-16, l. 408-426).
2. Line 290, “The smaller fraction was used for training because identifying the training data
often represents a limiting factor in metagenome analysis”. What does the “limiting factor” mean
here? It will be nice to explain the smaller fraction used for training in more details.
We have made this clear in the corresponding section and replaced the term “limiting factor” (p. 10, l.
306-307). The smaller fraction in the context of three-fold cross-validation means that we used one
third of each genome's sequence data (100 kb, the smaller fraction) to train the classification model and
two thirds (200 kb, the larger fraction) to evaluate how it performed. The limiting factor was meant to
be the amount of training sequence data available for each genome, for instance the contigs in a
metagenome which are identified using ribosomal genes.
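
To spell out the split (a toy illustration with 1 kb training sequences, so that one fold corresponds to roughly 100 kb per genome; not code from the paper):

    def three_fold_split(sequences):
        """Partition one genome's sequences into three folds; in each round one
        fold (the smaller fraction) trains the model and the other two evaluate it."""
        folds = [sequences[i::3] for i in range(3)]
        for i in range(3):
            train = folds[i]
            test = [s for j in range(3) if j != i for s in folds[j]]
            yield train, test

    contigs = [f"contig_{n}" for n in range(300)]  # 300 x 1 kb = 300 kb per genome
    for train, test in three_fold_split(contigs):
        assert len(train) == 100 and len(test) == 200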
3. In line 416, the authors advise that the contigs should be split to a certain length. It seems
that this is not discussed in other sections of this manuscript. It will be nice to give more
explanations about this, like how to determine the proper length.
We have added more information on why we propose to split longer contigs in certain situations in the
section “Inference of weight parameters” (p. 9, l. 267-270). In short, we determine the weights which
define the relative contributions of each feature type or submodel by the concentrations of the
corresponding likelihood distributions. For these, we determine the sample variance and each input
sequence represents a data point in the sample. Splitting contigs into smaller sequences both assures
that the sample size is large enough to give a good estimate of the variance and that each data point
corresponds to a sequence chunk of similar length or weight. Other binning programs (e.g.
CONCOCT) have adopted a similar kind of subdivision scheme to improve the robustness and
sequence weighting, selecting some practical size (10 kb is mentioned in the CONCOCT manual without
further explanation). The selected size is dependent on the available amount of training data for the
modeled genomes and is a compromise between robustness on one side and computational effort and
precision (short sequences are harder to discriminate) on the other. Having said this, it is not actually
required to split contigs and we see similar performance for datasets (for instance the CAMI medium
complexity dataset) with and without splitting the contigs. This is probably due to the robust estimation of
the weight parameters over thousands of contigs and hundreds of genomes.
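
For illustration, a minimal sketch of such a subdivision (chunk size and the rule for the trailing piece are our own choices, not fixed by MGLEX):

    def split_into_chunks(sequence, chunk_size=10000):
        """Split a contig into consecutive pieces of similar length so that every
        training data point carries a comparable weight."""
        chunks = [sequence[i:i + chunk_size]
                  for i in range(0, len(sequence), chunk_size)]
        if len(chunks) > 1 and len(chunks[-1]) < chunk_size // 2:
            chunks[-2] += chunks[-1]  # merge a short tail into the previous chunk
            chunks.pop()
        return chunks

    print([len(c) for c in split_into_chunks("A" * 34500)])  # [10000, 10000, 14500]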
" | Here is a paper. Please give your review comments after reading it. |
739 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Shotgun metagenomics of microbial communities reveals information about strains of relevance for applications in medicine, biotechnology and ecology. Recovering their genomes is a crucial, but very challenging step, due to the complexity of the underlying biological system and technical factors. Microbial communities are heterogeneous, with oftentimes hundreds of present genomes deriving from different species or strains, all at varying abundances and with different degrees of similarity to each other and reference data. We present a versatile probabilistic model for genome recovery and analysis, which aggregates three types of information that are commonly used for genome recovery from metagenomes. As potential applications we showcase metagenome contig classification, genome sample enrichment and genome bin comparisons. The open source implementation MGLEX is available via the Python Package Index and on GitHub and can be embedded into metagenome analysis workflows and programs.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Shotgun sequencing of DNA extracted from a microbial community recovers genomic data from different community members while bypassing the need to obtain pure isolate cultures. It thus enables novel insights into ecosystems, especially for those genomes which are inaccessible by cultivation techniques and isolate sequencing. However, current metagenome assemblies are oftentimes highly fragmented, including unassembled reads, and require further processing to separate data according to the underlying genomes. Assembled sequences, called contigs, that originate from the same genome are placed together in this process, which is known as metagenome binning <ns0:ref type='bibr' target='#b31'>(Tyson et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b5'>Dröge & McHardy, 2012)</ns0:ref> and for which many programs have been developed. Some are trained on reference sequences, using contig k-mer frequencies or sequence similarities as sources of information <ns0:ref type='bibr' target='#b21'>(McHardy et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b6'>Dröge, Gregor & McHardy, 2014;</ns0:ref><ns0:ref type='bibr' target='#b35'>Wood & Salzberg, 2014;</ns0:ref><ns0:ref type='bibr' target='#b8'>Gregor et al., 2016)</ns0:ref>, which can be adapted to specific ecosystems. Others cluster the contigs into genome bins, using contig k-mer frequencies and read coverage <ns0:ref type='bibr' target='#b4'>(Chatterji et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b15'>Kislyuk et al., 2009;</ns0:ref><ns0:ref type='bibr' target='#b36'>Wu et al., 2014;</ns0:ref><ns0:ref type='bibr'>Nielsen et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b11'>Imelfort et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b1'>Alneberg et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b12'>Kang et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b18'>Lu et al., 2016)</ns0:ref>.</ns0:p><ns0:p>Recently, oftentimes multiple biological or technical samples of the same environment are sequenced to produce distinct genome copy numbers across samples, sometimes using different sequencing protocols and technologies, such as Illumina and PacBio sequencing <ns0:ref type='bibr' target='#b9'>(Hagen et al., 2016)</ns0:ref>. Genome copies are reflected by corresponding read coverage variation in the assemblies which allows to resolve samples with many genomes. The combination of experimental techniques helps to overcome platform-specific shortcomings such as short reads or high error rates in the data analysis. However, reconstructing To recover genomes from environmental sequencing data, the illustrated processes can be iterated. Different programs can be run for each process and iteration. MGLEX can be applied in all steps: (a) to classify contigs or to cluster by embedding the probabilistic model into an iterative procedure; (b) to enrich a metagenome for a target genome to reduce its size and to filter out irrelevant sequence data; (c) to select contigs of existing bins based on likelihoods and p-values and to repeat the binning process with a reduced dataset; (d) to refine existing bins, for instance to merge bins as suggested by bin analysis.</ns0:p><ns0:p>high-quality bins of individual strains remains difficult without very high numbers of replicates. 
Often, genome reconstruction may improve by manual intervention and iterative analysis (Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>) or additional sequencing experiments.</ns0:p><ns0:p>Genome bins can be constructed by consideration of genome-wide sequence properties. Currently, oftentimes the following types of information are considered:</ns0:p><ns0:p>• Read contig coverage: sequencing read coverage of assembled contigs, which reflects the genome copy number (organismal abundance) in the community. Abundances can vary across biological or technical replicates, and co-vary for contigs from the same genome, supplying more information to resolve individual genomes <ns0:ref type='bibr' target='#b2'>(Baran & Halperin, 2012;</ns0:ref><ns0:ref type='bibr' target='#b0'>Albertsen et al., 2013)</ns0:ref>.</ns0:p><ns0:p>• Nucleotide sequence composition: the frequencies of short nucleotide subsequences of length k called k-mers. The genomes of different species have a characteristic k-mer spectrum <ns0:ref type='bibr' target='#b13'>(Karlin, Mrazek & Campbell, 1997;</ns0:ref><ns0:ref type='bibr' target='#b21'>McHardy et al., 2007)</ns0:ref>.</ns0:p><ns0:p>• Sequence similarity to reference sequences: a proxy for the phylogenetic relationship to species which have already been sequenced. The similarity is usually inferred by alignment to a reference collection and can be expressed using taxonomy <ns0:ref type='bibr' target='#b21'>(McHardy et al., 2007)</ns0:ref>.</ns0:p><ns0:p>Probabilities represent a convenient and efficient way to represent and combine information that is uncertain by nature. Here, we</ns0:p><ns0:p>• propose a probabilistic aggregate model for binning based on three commonly used information sources, which can easily be extended to include new features.</ns0:p><ns0:p>• outline the features and submodels for each information type. As the feature types listed above derive from distinct processes, we define for each of them independently a suitable probabilistic submodel.</ns0:p><ns0:p>• showcase several applications related to the binning problem Submission v0.5.5s</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>A model with data-specific structure poses an advantage for genome recovery in metagenomes because it uses data more efficiently for fragmented assemblies with short contigs or a low number of samples for differential coverage binning. Being probabilistic, it generates probabilities instead of hard labels so that a contig can be assigned to several, related genome bins and the uncertainty can easily be assessed.</ns0:p><ns0:p>The models can be applied in different ways, not just classification, which we show in our application examples. Most importantly, there is a rich repertoire of higher-level procedures based on probabilistic models, including Expectation Maximization (EM) and Markov Chain Monte Carlo (MCMC) methods for clustering without or with few prior knowledge of the modeled genomes.</ns0:p><ns0:p>We focus on defining explicit probabilistic models for each feature type and their combination into an aggregate model. In contrast, binning methods often concatenate and transform features <ns0:ref type='bibr' target='#b4'>(Chatterji et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b11'>Imelfort et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b1'>Alneberg et al., 2014)</ns0:ref> before clustering. Specific models for the individual data types can be better tailored to the data generation process and will therefore generally enable a better use of information and a more robust fit of the aggregate model while requiring fewer data. We propose a flexible model with regard to both the included features and the feature extraction methods. There already exist parametric likelihood models in the context of clustering, for a limited set of features. For instance, <ns0:ref type='bibr' target='#b15'>Kislyuk et al. (2009)</ns0:ref> use a model for nucleotide composition and Wu et al.</ns0:p><ns0:p>(2014) integrated distance-based probabilities for 4-mers and absolute contig coverage using a Poisson model. We extend and generalize this work so that the model can be used in different contexts such as classification, clustering, genome enrichment and binning analysis. Importantly, we are not providing an automatic solution to binning but present a flexible framework to target problems associated with binning.</ns0:p><ns0:p>This functionality can be used in custom workflows or programs for the steps illustrated in Figure <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>. As input, the model incorporates genome abundance, nucleotide composition and additionally sequence similarity (via taxonomic annotation). The latter is common as taxonomic binning output <ns0:ref type='bibr' target='#b6'>(Dröge, Gregor & McHardy, 2014;</ns0:ref><ns0:ref type='bibr' target='#b35'>Wood & Salzberg, 2014;</ns0:ref><ns0:ref type='bibr' target='#b8'>Gregor et al., 2016)</ns0:ref> and for quality assessment but has rarely been systematically used as features in binning <ns0:ref type='bibr' target='#b4'>(Chatterji et al., 2008;</ns0:ref><ns0:ref type='bibr' target='#b18'>Lu et al., 2016)</ns0:ref>. We show that taxonomic annotation is valuable information that can improve binning considerably.</ns0:p></ns0:div>
<ns0:div><ns0:head>METHODS</ns0:head></ns0:div>
<ns0:div><ns0:head>Classification models</ns0:head><ns0:p>Classification is a common concept in machine learning. Usually, such algorithms use training data for different classes to construct a model which then contains the condensed information about the important properties that distinguish the data of the classes. In probabilistic modeling, we describe these properties as parameters of likelihood functions, often written as θ. After θ has been determined by training, the model can be applied to assign novel data to the modeled classes. In our application, classes are genomes, or bins, and the data are nucleotide sequences like contigs. Thus, contigs can be assigned to genomes bins but we need to provide training sequences for the genomes. Such data can be selected by different means, depending on the experimental and algorithmic context. One can screen metagenomes for genes which are unique to clades, or which can be annotated by phylogenetic approaches, and use the corresponding sequence data for training <ns0:ref type='bibr' target='#b8'>(Gregor et al., 2016)</ns0:ref>. Independent assemblies or reference genomes can also serve as training data for genome bins <ns0:ref type='bibr' target='#b3'>(Brady & Salzberg, 2009;</ns0:ref><ns0:ref type='bibr' target='#b25'>Patil et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b8'>Gregor et al., 2016)</ns0:ref>.</ns0:p><ns0:p>Another direct application is to learn from existing genome bins, which were derived by any means, and then to (re)assign contigs to these bins. This is useful for short contigs which are often excluded from binning and analysis due to their high variability. Finally, probabilistic models can be embedded into iterative clustering algorithms with random initialization.</ns0:p></ns0:div>
<ns0:div><ns0:head>Aggregate model</ns0:head><ns0:p>Let 1 ≤ i ≤ D be an index referring to D contigs resulting from a shotgun metagenomic experiment. In the following we will present a generative probabilistic aggregate model that consists of components, indexed by 1 ≤ k ≤ M, which are generative probabilistic models in their own right, yielding probabilities for each contig. Each contig i is described by the following feature vectors:</ns0:p><ns0:p>• sample abundance feature vectors a i and r i , one entry per sample</ns0:p><ns0:p>• a compositional feature vector c i , one entry per compositional feature (e.g. a k-mer)</ns0:p><ns0:p>• a taxonomic feature vector t i , one entry per taxon</ns0:p><ns0:p>We define the individual feature vectors in the corresponding sections. As mentioned before, each of the M features gives rise to a probability P k (contig i | genome) that contig i belongs to a specific genome by means of its component model. Those probabilities are then collected into an aggregate model that transforms those feature specific probabilities P k (i | genome) into an overall probability P(i | genome) that contig i is associated with the genome. In the following, we describe how we construct this model with respect to the individual submodels P k (i | genome), the feature representation of the contigs and how we determine the optimal set of parameters from training sequences.</ns0:p><ns0:p>For the i th contig, we define a joint likelihood for genome bin g (Equation <ns0:ref type='formula'>1</ns0:ref>, the probabilities written as a function of the genome parameters), which is a weighted product over M independent component likelihood functions, or submodels, for the different feature types. For the k th submodel, Θ k is the corresponding parameter vector, F i,k the feature vector of the i th contig and α k defines the contribution of the respective submodel or feature type. β is a free scaling parameter to adjust the smoothness of the aggregate likelihood distribution over the genome bins (bin posterior).</ns0:p><ns0:formula xml:id='formula_0'>L(\Theta_g \mid F_i) = \left[ \prod_{k=1}^{M} L(\Theta_{g,k} \mid F_{i,k})^{\alpha_k} \right]^{\beta} \quad (1)</ns0:formula><ns0:p>We assume statistical independence of the feature subtypes and multiply likelihood values from the corresponding submodels. This is a simplified but reasonable assumption: e.g., the species abundance in a community can be altered by external factors without impacting the nucleotide composition of the genome or its taxonomic position. Also, there is no direct relation between a genome's k-mer distribution and taxonomic annotation via reference sequences.</ns0:p><ns0:p>All model parameters, Θ g , α and β, are learned from training sequences. We will explain later how the weight parameters α and β are chosen and begin with a description of the four component likelihood functions, one for each feature type.</ns0:p><ns0:p>In the following, we denote the j th position in a vector x i with x i, j . To simplify notation, we also define the sum or fraction of two vectors of the same dimension as the positional sum or fraction and write the length of vector x as len(x).</ns0:p></ns0:div>
<ns0:div><ns0:head>Absolute abundance</ns0:head><ns0:p>We derive the average number of reads covering each contig position from assembler output or by mapping the reads back onto contigs. This mean coverage is a proxy for the genome abundance in the sample because it is roughly proportional to the genome copy number. A careful library preparation causes the copy numbers of genomes to vary differently over samples, so that each genome has a distinct relative read distribution. Depending on the amount of reads in each sample being associated with every genome, we obtain for every contig a coverage vector a i where len(a i ) is the number of samples. Therefore, if more sample replicates are provided, contigs from different genomes are generally better separable since every additional replicate adds an entry to the feature vectors.</ns0:p><ns0:p>Random sequencing followed by perfect read assembly theoretically produces positional read counts which are Poisson distributed, as described in <ns0:ref type='bibr' target='#b16'>Lander & Waterman (1988)</ns0:ref>. In Equation <ns0:ref type='formula'>2</ns0:ref>, we derived a similar likelihood using mean coverage values (see Supplementary Methods for details). The likelihood function is a normalized product over the independent Poisson functions P θ j (a i, j ) for each sample. The expectation parameter θ j represents the genome copy number in the j th sample.</ns0:p><ns0:formula xml:id='formula_1'>L(\theta \mid a_i) = \sqrt[\mathrm{len}(a_i)]{\prod_{j=1}^{\mathrm{len}(a_i)} P_{\theta_j}(a_{i,j})} = \sqrt[\mathrm{len}(a_i)]{\prod_{j=1}^{\mathrm{len}(a_i)} \frac{\theta_j^{a_{i,j}}}{a_{i,j}!}\, e^{-\theta_j}} \quad (2)</ns0:formula><ns0:p>The Poisson explicitly accounts for low and zero counts, unlike a Gaussian model. Low counts are often observed for undersequenced and rare taxa. Note that a i, j is independent of θ. We derived the model likelihood function from the joint Poisson over all contig positions by approximating the first data-term with mean coverage values (Supplementary Methods).</ns0:p><ns0:p>The maximum likelihood estimate (MLE) for θ on training data is the weighted average of mean coverage values for each sample in the training data (Supplementary Methods).</ns0:p><ns0:formula xml:id='formula_2'>\theta = \frac{\sum_{i=1}^{N} w_i\, a_i}{\sum_{i=1}^{N} w_i}<ns0:label>(3)</ns0:label></ns0:formula></ns0:div>
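(Editorial note: as an illustration of this submodel, a minimal NumPy/SciPy sketch of Equations 2 and 3; function and variable names are ours and the code is not part of the manuscript or of MGLEX.)

    import numpy as np
    from scipy.special import gammaln

    def poisson_abundance_loglik(mean_coverage, theta):
        """Normalized Poisson log-likelihood of a contig's per-sample mean coverages
        given the genome's expected coverages theta (cf. Equation 2)."""
        a = np.asarray(mean_coverage, dtype=float)
        theta = np.asarray(theta, dtype=float)
        per_sample = a * np.log(theta) - theta - gammaln(a + 1.0)
        return per_sample.mean()  # geometric-mean normalization over samples

    def poisson_mle(mean_coverages, weights):
        """Weighted average of training contig coverages per sample (cf. Equation 3)."""
        w = np.asarray(weights, dtype=float)[:, None]
        return (w * np.asarray(mean_coverages, dtype=float)).sum(axis=0) / w.sum()

    theta_hat = poisson_mle([[4.0, 1.2], [4.4, 0.9]], weights=[1.0, 1.0])
    print(poisson_abundance_loglik([4.2, 1.0], theta_hat))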
<ns0:div><ns0:head>Relative abundance</ns0:head><ns0:p>In particular for shorter contigs, the absolute read coverage is often overestimated. Basically, the Lander-Waterman assumptions <ns0:ref type='bibr' target='#b16'>(Lander & Waterman, 1988)</ns0:ref> are violated if reads do not map to their original locations due to sequencing errors or if they 'stack' on certain genome regions because they are ambiguous (i.e. for repeats or conserved genes), rendering the Poisson model less appropriate. The Poisson, when constrained on the total sum of coverages in all samples, leads to a binomial distribution as shown by <ns0:ref type='bibr' target='#b26'>(Przyborowski & Wilenski, 1940)</ns0:ref>. Therefore, we model differential abundance over different samples using a binomial in which the parameters represent a relative distribution of genome reads over the samples. For instance, if a particular genome had the same copy number in a total of two samples, the genome's parameter vector θ would simply be [0.5, 0.5]. As for absolute abundance, the model becomes more powerful with a higher number of samples. Using relative frequencies as model parameters instead of absolute coverages, however, has the advantage that any constant coverage factor cancels in the division term. For example, if a genome has two similar gene copies which are collapsed during assembly, twice as many reads will map onto the assembled gene in every sample but the relative read frequencies over samples will stay unaffected. This makes the binomial less sensitive to read mapping artifacts but requires two or more samples because one degree of freedom (DF) is lost by the division.</ns0:p><ns0:p>The contig features r i are the mean coverages in each sample, which is identical to a i in the absolute abundance model, and the model's parameter vector θ holds the relative read frequencies in the samples, as explained before. In Equation <ns0:ref type='formula'>4</ns0:ref> we ask: how likely is the observed mean contig coverage r i, j in sample j given the genome's relative read frequency θ j of the sample and the contig's total coverage R i for all samples? The corresponding likelihood is calculated as a normalized product over the binomials B R i ,θ j (r i, j ) for every sample.</ns0:p><ns0:formula xml:id='formula_3'>L(\theta \mid r_i) = \sqrt[\mathrm{len}(r_i)]{\prod_{j=1}^{\mathrm{len}(r_i)} B_{R_i,\theta_j}(r_{i,j})} = \sqrt[\mathrm{len}(r_i)]{\prod_{j=1}^{\mathrm{len}(r_i)} \binom{R_i}{r_{i,j}}\, \theta_j^{r_{i,j}} (1-\theta_j)^{R_i-r_{i,j}}} \quad (4)</ns0:formula><ns0:p>R i is the sum of the abundance vector r i . Because both R i and r i can contain real numbers, we need to generalize the binomial coefficient to positive real numbers via the gamma function Γ.</ns0:p><ns0:formula xml:id='formula_4'>\binom{n}{k} = \frac{\Gamma(n+1)}{\Gamma(k+1)\,\Gamma(n-k+1)}<ns0:label>(5)</ns0:label></ns0:formula></ns0:div>
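(Editorial note: an analogous sketch for the relative-abundance submodel, Equations 4 and 5; names are ours, and the gamma function generalizes the binomial coefficient to real-valued coverages.)

    import numpy as np
    from scipy.special import gammaln

    def binomial_relative_loglik(mean_coverage, theta):
        """Normalized binomial log-likelihood of how a contig's total coverage is
        distributed over the samples, given relative read frequencies theta."""
        r = np.asarray(mean_coverage, dtype=float)
        theta = np.asarray(theta, dtype=float)
        R = r.sum()
        log_coeff = gammaln(R + 1.0) - gammaln(r + 1.0) - gammaln(R - r + 1.0)
        per_sample = log_coeff + r * np.log(theta) + (R - r) * np.log1p(-theta)
        return per_sample.mean()

    # toy example: a genome with equal read frequencies over two samples
    print(binomial_relative_loglik([12.0, 9.0], theta=[0.5, 0.5]))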
<ns0:div><ns0:p>Because the binomial coefficient is a constant factor and independent of θ, it can be omitted in ML classification (when comparing between different genomes) or be retained upon parameter updates. As for the Poisson, the model accounts for low and zero counts (by the binomial coefficient). We derived the likelihood function from the joint distribution over all contig positions by approximating the binomial data-term with mean coverage values (see Supplementary Methods).</ns0:p><ns0:p>The MLE θ for the model parameters on training sequence data corresponds to the amount of read data (base pairs) in each sample divided by the total number of base pairs in all samples. We express this as a weighted sum of contig mean coverage values (see Supplementary Methods).</ns0:p><ns0:formula xml:id='formula_5'>\hat{\theta} = \frac{\sum_{i=1}^{N} w_i\, r_i}{\sum_{i=1}^{N} w_i R_i}<ns0:label>(6)</ns0:label></ns0:formula><ns0:p>It is obvious that absolute and relative abundance models are not independent when the identical input vectors (here a i = r i ) are used. However, we can instead apply the Poisson model to the total coverage R i (summed over all samples) because this sum also follows a Poisson distribution. Intuitively, modelling the total abundance in this way compares to mixing the samples before sequencing, so that the resolution of individual samples is lost. The binomial, in contrast, only captures the relative distribution of reads over the samples (one DF is lost in the ratio transform). This way, we can combine both absolute and relative abundance submodels in the aggregate model.</ns0:p></ns0:div>
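<ns0:div><ns0:p>The following sketch illustrates the relative-abundance submodel of Equations 4 to 6, with the real-valued binomial coefficient generalized via the gamma function as in Equation 5. It is an illustration under assumed inputs, not the MGLEX code.</ns0:p><ns0:p>
# Sketch of the relative-abundance (binomial) submodel; illustrative only.
import numpy as np
from scipy.special import gammaln

def log_binom_coeff(n, k):
    """log of the generalized binomial coefficient (Equation 5)."""
    return gammaln(n + 1.0) - gammaln(k + 1.0) - gammaln(n - k + 1.0)

def binomial_log_likelihood(r_i, theta):
    """Normalized log-likelihood of Equation 4 for one contig.
    r_i   : mean coverage per sample, shape (S,)
    theta : relative read frequencies per sample, sums to 1, shape (S,)
    """
    R_i = r_i.sum()
    theta = np.clip(theta, 1e-10, 1.0 - 1e-10)
    ll = (log_binom_coeff(R_i, r_i)
          + r_i * np.log(theta)
          + (R_i - r_i) * np.log(1.0 - theta))
    return ll.mean()

def binomial_mle(R, w):
    """Equation 6: weighted relative read frequencies over samples.
    R : contig-by-sample coverage matrix, shape (N, S); w : contig weights, shape (N,)
    """
    num = (w[:, None] * R).sum(axis=0)
    return num / (w * R.sum(axis=1)).sum()

# Toy usage: a genome sequenced roughly twice as deeply in sample 1 as in sample 2.
R_train = np.array([[10.0, 5.0], [14.0, 7.0]])
theta_hat = binomial_mle(R_train, np.array([1.0, 1.0]))   # approx. [0.67, 0.33]
print(binomial_log_likelihood(np.array([12.0, 6.0]), theta_hat))
</ns0:p></ns0:div>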
<ns0:div><ns0:head>Nucleotide composition</ns0:head><ns0:p>Microbial genomes have a distinct 'genomic fingerprint' <ns0:ref type='bibr' target='#b13'>(Karlin, Mrazek & Campbell, 1997)</ns0:ref> which is typically determined by means of k-mers. Each contig has a relative frequency vector c i for all possible k-mers of size k. The nature of shotgun sequencing demands that each k-mer is counted equally to its reverse complement because the orientation of the sequenced strand is typically unknown. With increasing k, the feature space grows exponentially and becomes sparse. Thus, it is common to select k from 4 to 6 <ns0:ref type='bibr' target='#b30'>(Teeling et al., 2004;</ns0:ref><ns0:ref type='bibr' target='#b21'>McHardy et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b15'>Kislyuk et al., 2009)</ns0:ref>. Here, we simply use 5-mers (len(c i ) = 4^5/2 = 512) but other choices can be made.</ns0:p><ns0:p>For its simplicity and effectiveness, we chose a likelihood model assuming statistical independence of features so that the likelihood function in Equation 7 becomes a simple product over observation probabilities (or a linear model when transforming into a log-likelihood). Though k-mers are not independent due to their overlaps and reverse complementarity <ns0:ref type='bibr' target='#b15'>(Kislyuk et al., 2009)</ns0:ref>, the model has been successfully applied to k-mers <ns0:ref type='bibr' target='#b33'>(Wang et al., 2007)</ns0:ref>, and we can replace k-mers in our model with better-suited compositional features, i.e. using locality-sensitive hashing <ns0:ref type='bibr' target='#b20'>(Luo et al., 2016)</ns0:ref>. A genome's background distribution θ is a vector which holds the probabilities of observing each k-mer, and the vector c i does the same for the i th contig. The composition likelihood for a contig is a weighted and normalized product over the background frequencies.</ns0:p><ns0:formula xml:id='formula_6'>L(\theta \mid c_i) = \prod_{j=1}^{\mathrm{len}(c_i)} \theta_j^{c_{i,j}} \quad (7)</ns0:formula><ns0:p>The genome parameter vector θ that maximizes the likelihood on training sequence data can be estimated by a weighted average of feature counts (Supplementary Methods).</ns0:p><ns0:formula xml:id='formula_7'>\hat{\theta} = \frac{\sum_{i=1}^{N} w_i\, c_i}{\sum_{i=1}^{N} w_i}<ns0:label>(8)</ns0:label></ns0:formula></ns0:div>
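<ns0:div><ns0:p>A minimal sketch of the composition submodel follows: canonical 5-mer counting (each k-mer pooled with its reverse complement) and the log form of Equations 7 and 8. This is an illustration, not the MGLEX implementation; variable names and the lexicographic choice of canonical k-mer are assumptions.</ns0:p><ns0:p>
# Sketch of the nucleotide-composition submodel using canonical 5-mers.
from collections import Counter
from itertools import product
import numpy as np

K = 5
COMP = str.maketrans("ACGT", "TGCA")

def canonical(kmer):
    rc = kmer.translate(COMP)[::-1]
    return min(kmer, rc)              # count each k-mer together with its reverse complement

CANONICAL_KMERS = sorted({canonical("".join(p)) for p in product("ACGT", repeat=K)})
INDEX = {km: i for i, km in enumerate(CANONICAL_KMERS)}   # 512 canonical 5-mers

def kmer_frequencies(seq):
    counts = Counter(canonical(seq[i:i + K]) for i in range(len(seq) - K + 1))
    v = np.zeros(len(CANONICAL_KMERS))
    for km, n in counts.items():
        if km in INDEX:               # skip windows containing ambiguous bases such as N
            v[INDEX[km]] = n
    return v / max(v.sum(), 1.0)      # relative frequency vector c_i

def composition_log_likelihood(c_i, theta):
    theta = np.clip(theta, 1e-10, None)
    return float(np.dot(c_i, np.log(theta)))   # log of Equation 7

def composition_mle(C, w):
    """Equation 8: genome background from training contigs with weights w."""
    return (w[:, None] * C).sum(axis=0) / w.sum()
</ns0:p></ns0:div>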
<ns0:div><ns0:head>Similarity to reference</ns0:head><ns0:p>We can compare contigs to reference sequences, for instance by local alignment. Two contigs that align to closely related taxa are more likely to derive from the same genome than sequences which align to distant clades. We convert this indirect relationship to explicit taxonomic features which we can compare without direct consideration of reference sequences. A taxon is a hierarchy of nested classes which can be written as a tree path, for example, the species E. coli could be written as [Bacteria, Gammaproteobacteria,</ns0:p><ns0:formula xml:id='formula_8'>Enterobacteriaceae, E. coli].</ns0:formula><ns0:p>We assume that distinct regions of a contig, such as genes, can be annotated with different taxa. Each taxon has a corresponding weight which in our examples is a positive alignment score. The weighted taxa define a spectrum over the taxonomy for every contig and genome. It is not necessary that the alignment reference be complete or include the respective species genome but all spectra must be equally biased.</ns0:p><ns0:p>Since each contig is represented by a hierarchy of L numeric weights, we incorporated these features into our multi-layer model. First, each contig's taxon weights are transformed to a set of sparse feature vectors</ns0:p><ns0:formula xml:id='formula_9'>t i = {t i,l | 1 ≤ l ≤ L},</ns0:formula><ns0:p>one for each taxonomic level, by inheriting and accumulating scores for higher-level taxa (see Table <ns0:ref type='table'>1</ns0:ref> and Figure <ns0:ref type='figure'>2</ns0:ref>).</ns0:p><ns0:p>Table <ns0:ref type='table'>1</ns0:ref>. Calculating the contig features t i for a simplified taxonomy. There are five original integer alignment scores for nodes (c), (e), (f), (g) and (h) which are summed up at higher levels to calculate the feature vectors t i,l . The corresponding tree structure is shown in Figure <ns0:ref type='figure'>2</ns0:ref>.</ns0:p><ns0:p>Node Taxon Level l Index j Score t i,l, j</ns0:p><ns0:formula xml:id='formula_10'>a Bacteria 1 1 0 7 b Gammaproteobacteria 2 1 0 6 c Betaproteobacteria 2 2 1 1 d Enterobacteriaceae 3 1 0 5 e Yersiniaceae 3 2 1 1 f E. vulneris 4 1 1 1 g E. coli 4 2 3 3 h Yersinia sp. 4 3 1 1</ns0:formula><ns0:p>Each vector t i,l contains the scores for all T l possible taxa at level l. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science likelihood model corresponds to a set of simple frequency models, one for each layer. The full likelihood is a product of the level likelihoods.</ns0:p><ns0:formula xml:id='formula_11'>L(θ | t i ) = L l=1 T l j=1 θ t i,l, j l, j<ns0:label>(9)</ns0:label></ns0:formula><ns0:p>For simplicity, we assume that layer likelihoods are independent which is not quite true but effective.</ns0:p><ns0:p>The MLE for each θ l is then derived from training sequences similar to the simple frequency model (Supplementary Methods).</ns0:p><ns0:formula xml:id='formula_12'>θl = N i=1 t i,l T l j=1 N i=1 t i,l<ns0:label>(10)</ns0:label></ns0:formula></ns0:div>
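<ns0:div><ns0:p>The per-level score vectors t i of Table 1 can be obtained by pushing each annotated weight up its taxon's tree path and accumulating. The sketch below is a simplified illustration of that transformation, not the MGLEX code; the toy annotations and scores are chosen so that the level totals match the example in Table 1 and Figure 2.</ns0:p><ns0:p>
# Sketch: accumulate alignment scores along taxonomic paths into per-level
# feature vectors t_i (as in Table 1). Annotations and scores are made up.
from collections import defaultdict

def taxon_features(annotations, n_levels=4):
    """annotations: list of (tree_path, score), where tree_path is a tuple of
    taxa from the domain level down to the annotated rank."""
    levels = [defaultdict(float) for _ in range(n_levels)]
    for path, score in annotations:
        for l, taxon in enumerate(path):      # a score also counts for all ancestors
            levels[l][taxon] += score
    return levels

annotations = [
    (("Bacteria", "Gammaproteobacteria", "Enterobacteriaceae", "E. coli"), 3),
    (("Bacteria", "Gammaproteobacteria", "Enterobacteriaceae", "E. vulneris"), 1),
    (("Bacteria", "Gammaproteobacteria", "Enterobacteriaceae", "Yersinia sp."), 1),
    (("Bacteria", "Gammaproteobacteria", "Yersiniaceae"), 1),
    (("Bacteria", "Betaproteobacteria"), 1),
]
for l, vec in enumerate(taxon_features(annotations), start=1):
    print("level", l, dict(vec))
# level totals: Bacteria 7; Gammaproteobacteria 6, Betaproteobacteria 1;
# Enterobacteriaceae 5, Yersiniaceae 1; E. coli 3, E. vulneris 1, Yersinia sp. 1
</ns0:p></ns0:div>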
<ns0:div><ns0:head>Inference of weight parameters</ns0:head><ns0:p>The aggregate likelihood for a contig in Equation 1 is a weighted product of submodel likelihoods. The weights in vector α balance the contributions, which need not be equal. When we write the likelihood in logarithmic form (Equation <ns0:ref type='formula' target='#formula_13'>11</ns0:ref>), we see that each weight α k sets the variance or width of the contigs' submodel log-likelihood distribution. We want to estimate α k in a way which is not affected by the original submodel variance because the corresponding normalization exponent is somewhat arbitrary.</ns0:p><ns0:p>For example, we normalized the nucleotide composition likelihood as a single feature and the abundance likelihoods as a single sample to limit the range of the likelihood values, because we simply cannot say how much each feature type counts.</ns0:p><ns0:formula xml:id='formula_13'>l(\Theta \mid F_i) = \beta \sum_{k=1}^{M} \alpha_k\, l(\Theta_k \mid F_{i,k})<ns0:label>(11)</ns0:label></ns0:formula><ns0:p>For any modeled genome, each of the M submodels produces a distinct log-likelihood distribution of contig data. Based on the origin of the contigs, which is known for model training, the distribution can be split into two parts, the actual genome (positive class) and all other genomes (negative class), as illustrated in Figure <ns0:ref type='figure' target='#fig_6'>3</ns0:ref>. The positive distribution is roughly unimodal and close to zero whereas the negative distribution, which represents many genomes at once, is diverse and yields strongly negative values. Intuitively, we want to select α such that the positive class is well separated from the negative class in the aggregate log-likelihood function in Equation <ns0:ref type='formula' target='#formula_13'>11</ns0:ref>.</ns0:p><ns0:p>Because α cannot be determined by likelihood maximization, the contributions are balanced in a robust way by setting α to the inverse standard deviation of the genome (positive class) log-likelihood distributions. More precisely, we calculate the average standard deviation over all genomes weighted by the amount of contig data (bp) for each genome and calculate α k as the inverse of this value. This scales down submodels with a high average variance. When we normalize the standard deviation of genome log-likelihood distributions in all submodels before summation, we assume that a high variance means uncertainty. This form of weight estimation requires that for at least some of the genomes, a sufficient number of sequences must be available to estimate the standard deviation. In some instances, it might be necessary to split long contigs into smaller sequences to generate a sufficient number of data points for estimation.</ns0:p><ns0:p>Parameter β in Equation <ns0:ref type='formula' target='#formula_13'>11</ns0:ref> is only relevant for soft classification but not in the context of ML classification or p-values. It can best be viewed as a sharpening or smoothing parameter of the bin posterior distribution (the probability of a genome or bin given the contig). β is estimated by minimization of the training or test error, as in our simulation.</ns0:p></ns0:div>
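<ns0:div><ns0:p>The weighting scheme described above can be sketched as follows; the data structures are assumed for illustration and this is not the MGLEX implementation.</ns0:p><ns0:p>
# Sketch of estimating the submodel weights alpha and forming Equation 11.
import numpy as np

def estimate_alpha(pos_loglik, data_bp):
    """alpha_k = 1 / (data-weighted average std of the positive-class
    log-likelihoods of submodel k over all genomes).
    pos_loglik : dict genome -> array of submodel-k log-likelihoods of its own contigs
    data_bp    : dict genome -> amount of contig data (bp) for that genome
    """
    genomes = list(pos_loglik)
    stds = np.array([np.std(pos_loglik[g]) for g in genomes])
    w = np.array([data_bp[g] for g in genomes], dtype=float)
    return 1.0 / ((w * stds).sum() / w.sum())

def aggregate_log_likelihood(submodel_logliks, alpha, beta=1.0):
    """Equation 11: beta-scaled, alpha-weighted sum of the M submodel
    log-likelihoods for one contig under one genome model."""
    return beta * float(np.dot(alpha, submodel_logliks))
</ns0:p></ns0:div>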
<ns0:div><ns0:p>[Schematic of Figure 3: the original submodel log-likelihoods l(Θ 1 | F i,1 ) and l(Θ 2 | F i,2 ) are rescaled by the weights α 1 and α 2 before being summed into the aggregate log-likelihood of Equation <ns0:ref type='formula' target='#formula_13'>11</ns0:ref>.]</ns0:p></ns0:div>
<ns0:div><ns0:head>Data simulation</ns0:head><ns0:p>We simulated reads of a complex microbial community from 400 publicly available genomes (Supplementary Methods and Supplementary Table <ns0:ref type='table'>1</ns0:ref>). These comprised 295 species represented by a single genome and 44 species with two or three strain genomes each, to mimic strain heterogeneity. Our aim was to create a difficult benchmark dataset under controlled settings, minimizing potential biases introduced by specific software. We sampled abundances from a lognormal distribution because it has been described as a realistic model <ns0:ref type='bibr' target='#b28'>(Schloss & Handelsman, 2006)</ns0:ref>. We then simulated a primary community, which was subsequently subjected to environmental changes resulting in exponential growth of 25% of the community members at growth rates chosen uniformly at random between one and ten, whereas the other genome abundances remained unchanged. We applied this procedure three times to the primary community, which resulted in one primary and three secondary artificial community abundance profiles. With these, we generated 150 bp long Illumina HiSeq reads using the ART simulator <ns0:ref type='bibr' target='#b10'>(Huang et al., 2012)</ns0:ref> and chose a yield of 15 Gb per sample.</ns0:p><ns0:p>The exact amount of read data for all four samples after simulation was 59.47 Gb. To avoid any bias caused by specific metagenome assembly software and to assure a constant contig length, we divided the original genome sequences into non-overlapping artificial contigs of 1 kb length and selected a random 500 kb of each genome to which we mapped the simulated reads using Bowtie2 <ns0:ref type='bibr' target='#b17'>(Langmead & Salzberg, 2012)</ns0:ref>. By excluding part of each genome reference, we imitated incomplete genome assemblies when mapping reads, which affects the coverage values. Finally, we subsampled 300 kb contigs per genome with non-zero read coverage in at least one of the samples to form the demonstration dataset (120 Mb), which has 400 genomes (including related strains), four samples and contigs of size 1 kb. Due to the short contigs and few samples, this is a challenging dataset for complete genome recovery <ns0:ref type='bibr'>(Nielsen et al., 2014)</ns0:ref> but suitable to demonstrate the functioning of our model with limited data. For each contig we derived 5-mer frequencies, taxonomic annotation (removing species-level genomes from the reference sequence data) and average read coverage per sample, as described in the Supplementary Methods.</ns0:p></ns0:div>
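<ns0:div><ns0:p>One simple reading of the abundance-profile simulation described above is sketched below; the lognormal shape parameters and random seed are assumptions chosen purely for illustration.</ns0:p><ns0:p>
# Sketch of simulating one primary and three perturbed community profiles.
import numpy as np

rng = np.random.default_rng(0)
n_genomes = 400

# Primary community: lognormal relative abundances.
primary = rng.lognormal(mean=1.0, sigma=2.0, size=n_genomes)
primary /= primary.sum()

def perturb(abundances, fraction=0.25):
    """Growth of a random 25% of members at rates drawn uniformly from [1, 10]."""
    out = abundances.copy()
    n_grow = int(fraction * len(out))
    grow = rng.choice(len(out), size=n_grow, replace=False)
    out[grow] *= rng.uniform(1.0, 10.0, size=n_grow)
    return out / out.sum()

# One primary and three secondary abundance profiles (four samples in total).
profiles = [primary] + [perturb(primary) for _ in range(3)]
</ns0:p></ns0:div>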
<ns0:div><ns0:head>RESULTS</ns0:head></ns0:div>
<ns0:div><ns0:head>Maximum likelihood classification</ns0:head><ns0:p>We evaluated the performance of the model when classifying contigs to the genome with the highest likelihood, a procedure called Maximum Likelihood (ML) classification. We applied a form of three-fold cross-validation, dividing the simulated data set into three equally-sized parts with 100 kb from every genome. We used only 100 kb (training data) of every genome to infer the model parameters and the other 200 kb (test data) to measure the classification error. 100 kb was used for training because it is often difficult to identify sufficient training data in metagenome analysis. For each combination of submodels, we calculated the mean squared error (MSE) and mean pairwise coclustering (MPC) probability for the predicted (ML) probability matrices (Suppl. Methods), averaged over the three test data partitions. We included the MPC as it can easily be interpreted: for instance, a value of 0.5 indicates that on average 50% of all contig pairs of a genome end up in the same bin after classification. Table <ns0:ref type='table'>2</ns0:ref> shows that the model integrates information from each data source such that the inclusion of additional submodels resulted in a better MPC and also MSE, with a single exception when combining absolute and relative abundance models which resulted in a marginal increase of the MSE. We also found that taxonomic annotation represents the most powerful information type in our simulation. For comparison, we added scores for NBC <ns0:ref type='bibr' target='#b27'>(Rosen, Reichenberger & Rosenfeld, 2011)</ns0:ref>, a classifier based on nucleotide composition with in-sample training using 5-mers and 15-mers, and Centrifuge <ns0:ref type='bibr' target='#b14'>(Kim et al., 2016)</ns0:ref>, a similarity-based classifier both with in-sample and reference data. These programs were given the same information as the corresponding submodels and they rank close to these. In a further step, we investigated how the presence of very similar genomes impacted the performance of the model. We first collapsed strains from the same species by merging the corresponding columns in the classification likelihood matrix, retaining the entry with the highest likelihood, and then computed the resulting coclustering performance increase ∆MPC ML . Considering assignment on species instead of strain level showed a larger ∆MPC ML for nucleotide composition and taxonomic annotation than for absolute and relative abundance. This is expected, because both do not distinguish among strains, whereas genome abundance does in some, but not all cases.</ns0:p><ns0:p>Table <ns0:ref type='table'>2</ns0:ref>. Cross-validation performance of ML classification for all possible combinations of submodels. We calculated the mean pairwise coclustering (MPC), the strain-to-species MPC improvement (∆MPC ML ) and the mean squared error (MSE). AbAb = absolute total abundance; ReAb = relative abundance; NuCo = nucleotide composition; TaAn = taxonomic annotation. NBC (v1.1) and <ns0:ref type='bibr'>Centrifuge (v.1.0.3b)</ns0:ref> </ns0:p></ns0:div>
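<ns0:div><ns0:p>As a small illustration of the evaluation, the sketch below performs ML classification from an aggregate log-likelihood matrix and computes a mean pairwise coclustering value. The exact metric definition is given in the paper's Supplementary Methods; the version here is one straightforward reading of the description in the text and should be treated as an assumption.</ns0:p><ns0:p>
# Sketch of ML classification and a mean pairwise coclustering (MPC) metric.
import numpy as np

def ml_classify(loglik):
    """loglik: contigs-by-genomes aggregate log-likelihood matrix."""
    return np.argmax(loglik, axis=1)

def mean_pairwise_coclustering(true_genome, predicted_bin):
    """Average over genomes of the fraction of same-genome contig pairs
    that end up in the same predicted bin."""
    true_genome = np.asarray(true_genome)
    predicted_bin = np.asarray(predicted_bin)
    per_genome = []
    for g in np.unique(true_genome):
        bins = predicted_bin[true_genome == g]
        n = len(bins)
        if n > 1:
            same = sum(np.sum(bins == b) * (np.sum(bins == b) - 1) for b in np.unique(bins))
            per_genome.append(same / (n * (n - 1)))
    return float(np.mean(per_genome))
</ns0:p></ns0:div>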
<ns0:div><ns0:head>Soft assignment</ns0:head><ns0:p>The contig length of 1 kb in our simulation is considerably shorter, and therefore harder to classify, than sequences which can be produced by current assembly methods or by some cutting-edge sequencing platforms <ns0:ref type='bibr' target='#b7'>(Goodwin, McPherson & McCombie, 2016)</ns0:ref>. In practice, longer contigs can be classified with higher accuracy than short ones, as more information is provided as a basis for assignment. For instance, a more robust coverage mean, a k-mer spectrum derived from more counts or more local alignments to reference genomes can be inferred from longer sequences. However, as short contigs remain frequent in current metagenome assemblies, 1 kb is sometimes considered a minimum useful contig length <ns0:ref type='bibr' target='#b1'>(Alneberg et al., 2014)</ns0:ref>. To account for the natural uncertainty when assigning short contigs, one can calculate the posterior probabilities over the genomes (see Suppl. Methods), which results in partial assignments of each contig to the genomes. This can reflect situations in which a particular contig is associated with multiple genomes, for instance in case of misassemblies or the presence of homologous regions across genomes.</ns0:p><ns0:p>The free model parameter β in Equation <ns0:ref type='formula'>1</ns0:ref>, which is identical in all genome models, smoothens or sharpens the posterior distribution: β = 0 produces a uniform posterior and with very high β, the posterior approaches the sharp ML solution. We determined β by optimizing the MSE on both training and test data, shown in Figure 4. As expected, the classification training error was smaller than the test error because the submodel parameters were optimized with respect to the training data. Because the minima are close to each other, the full aggregate model seems robust to overfitting of β on training data. The comparison of soft vs. hard assignment shows that the former has a smaller average test classification MSE of ∼ 0.28 (the illustrated minimum in Figure 4) compared to the latter (ML) assignment MSE of ∼ 0.33 in Table <ns0:ref type='table'>2</ns0:ref>. Thus, soft assignment seems more suitable to classify 1 kb contigs, which tend to produce similar likelihoods under more than one genome model.</ns0:p></ns0:div>
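<ns0:div><ns0:p>One plausible form of the soft assignment is sketched below: a softmax over the β-scaled aggregate log-likelihoods of the genomes. The exact posterior definition is in the paper's Supplementary Methods, so this is an assumption for illustration; the value β = 1.65 is the test-optimal value reported in the Figure 4 caption.</ns0:p><ns0:p>
# Sketch of soft assignment: posterior bin probabilities from beta-scaled
# aggregate log-likelihoods (shapes and example values are assumed).
import numpy as np

def posterior(loglik, beta=1.65):
    """loglik: contigs-by-genomes aggregate log-likelihood matrix.
    beta = 0 gives a uniform posterior; large beta approaches the hard ML assignment."""
    z = beta * loglik
    z -= z.max(axis=1, keepdims=True)        # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

loglik = np.array([[-1.0, -1.2, -9.0],        # an ambiguous 1 kb contig
                   [-0.5, -7.0, -8.0]])       # a clearly assignable contig
print(posterior(loglik).round(3))
</ns0:p></ns0:div>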
<ns0:div><ns0:head>Genome enrichment</ns0:head><ns0:p>Enrichment is commonly known as an experimental technique to increase the concentration of a target substance relative to others in a sample. Thus, an enriched metagenome still contains a mixture of different genomes, but the target genome will be present at much higher frequency than before. This allows a more focused analysis of the contigs or an application of methods which seem prohibitive for the full data due to runtime or memory considerations. In the following, we demonstrate how to filter metagenome contigs by p-value to enrich in-silico for specific genomes. Often, classifiers model an exhaustive list of alternative genomes but in practice it is difficult to recognize all species or strains in a metagenome with appropriate training data. When we only look at individual likelihoods, for instance the maximum among the genomes, this can be misleading if the contig comes from a missing genome. For better judgment, a p-value tells us how frequent or extreme the actual likelihood is for each genome. Many if not all binning methods lack explicit significance calculations. We can take advantage of the fact that the classification model compresses all features into a genome likelihood and generate a null (log-)likelihood distribution on training data for each genome. Therefore, we can associate empirical p-values with each newly classified contig and can, for sufficiently small p-values, reject the null hypothesis that the contig belongs to the respective genome. Since this is a form of binary classification, there is a risk of rejecting a good contig, which we quantify as sensitivity.</ns0:p><ns0:p>We enriched a metagenome by first training a genome model and then calculating the p-values of remaining contigs using this model. Contigs whose p-values fell below the chosen critical value, and whose membership in the target genome was therefore rejected, were discarded. The higher this cutoff is, the smaller the enriched sample becomes, but also the target genome will be less complete; the sensitivity is approximately a linear function of the p-value cutoff.</ns0:p><ns0:p>We calculated the reduced sample size as a function of the p-value cutoff for our simulation (Figure <ns0:ref type='figure'>5</ns0:ref>). Selecting a p-value threshold of 2.5% shrinks the test data on average down to 5% of the original size. Instead of an empirical p-value, we could also use a parametrized distribution or select a critical log-likelihood value by manual inspection of the log-likelihood distribution (see Figure <ns0:ref type='figure' target='#fig_6'>3</ns0:ref> for an example of such a distribution). This example shows that generally a large part of a metagenome dataset can be discarded while retaining most of the target genome sequence data.</ns0:p></ns0:div>
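<ns0:div><ns0:p>The enrichment procedure can be sketched as follows; this is an illustration of the described idea with assumed data structures, not the MGLEX implementation, and it uses a left-tail empirical p-value over the genome's training log-likelihoods.</ns0:p><ns0:p>
# Sketch of in-silico genome enrichment via empirical p-values.
import numpy as np

def empirical_p_values(null_loglik, contig_loglik):
    """Left-tail empirical p-value: the fraction of the genome's training
    (null) log-likelihoods that are at most as large as the contig's value."""
    null_loglik = np.sort(np.asarray(null_loglik))
    contig_loglik = np.asarray(contig_loglik)
    ranks = np.searchsorted(null_loglik, contig_loglik, side="right")
    return ranks / float(len(null_loglik))

def enrich(contigs, p_values, cutoff=0.025):
    """Keep contigs whose membership in the target genome cannot be rejected."""
    keep = p_values >= cutoff
    return [c for c, k in zip(contigs, keep) if k]
</ns0:p></ns0:div>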
<ns0:div><ns0:head>Bin analysis</ns0:head><ns0:p>The model can be used to analyze bins of metagenome contigs, regardless of the method that was used to infer these bins. Specifically, one can measure the similarity of two bins in terms of the contig likelihood instead of, for instance, an average euclidean distance based on the contig or genome k-mer and abundance vectors. We compare bins to investigate the relation between the given data, represented by the features in the model, and their grouping into genome bins. For instance, one could ask whether the creation of two genome bins is sufficiently backed up by the contig data or whether they should be merged into a single bin. For readability, we write the likelihood of a contig in bin A to:</ns0:p><ns0:formula xml:id='formula_16'>L(θ A | contig i) = L i (θ A ) = L(θ A ) = L A</ns0:formula><ns0:p>To compare two specific bins, we select the corresponding pair of columns in the classification likelihood matrix and calculate two mixture likelihoods for each contig (rows), L, using the MLE of the parameters for both bins and L swap under the hypothesis that we swap the model parameters of both bins.</ns0:p><ns0:p>The partial assignment weights πA and πB , called responsibilities, are estimated by normalization of the two bin likelihoods. Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_17'>L = πA L A + πB L B = L A L A +L B L A + L B L A +L B L B = L 2 A + L 2 B L A + L B<ns0:label>(12</ns0:label></ns0:formula><ns0:p>Computer Science</ns0:p><ns0:formula xml:id='formula_18'>L swap = πA L B + πB L A = L A L A +L B L B + L B L A +L B L A = 2L A L B L A + L B<ns0:label>(13)</ns0:label></ns0:formula><ns0:p>For example, if πA and πB assign one third of a contig to the first, less likely bin and two thirds to the second, more likely bin using the optimal parameters, then L swap would simply exchange the contributions in the mixture likelihood so that one third are assigned to the more likely and two thirds to the less likely bin. The ratio L swap / L ranges from zero to one and can be seen as a percentage similarity. We form a joint relative likelihood for all N contigs, weighting each contig by its optimal mixture likelihood L and normalizing over these likelihood values.</ns0:p><ns0:formula xml:id='formula_19'>S(A, B) = Z N i=1        2 L i (θ A ) L i (θ B ) L 2 i (θ A ) + L 2 i (θ B )        L 2 i (θ A )+L 2 i (θ B ) L i (θ A )+L i (θ B ) (14)</ns0:formula><ns0:p>normalized by the total joint mixture likelihood</ns0:p><ns0:formula xml:id='formula_20'>Z = N i=1 L 2 i (θ A ) + L 2 i (θ B ) L i (θ A ) + L i (θ B )<ns0:label>(15)</ns0:label></ns0:formula><ns0:p>The quantity in Equation <ns0:ref type='formula'>14</ns0:ref>ranges from zero to one, reaching one when the two bin models produce identical likelihood values. We can therefore interpret the ratio as a percentage similarity between any two bins. A connection to the Kullback-Leibler divergence can be constructed (Supplementary Methods).</ns0:p><ns0:p>To demonstrate the application, we trained the model on our simulated genomes, assuming they were bins, and created trees (Figure <ns0:ref type='figure' target='#fig_11'>6</ns0:ref>) for a randomly drawn subset of 50 of the 400 genomes using the probabilistic bin distances −log(S ) (Equation <ns0:ref type='formula'>14</ns0:ref>). We computed the distances twice, first with only nucleotide composition and taxonomic annotation submodels and second with the full feature set to compare the bin resolution. 
The submodel parameters were inferred using the full dataset and β using three-fold cross-validation. We then applied average linkage clustering to build balanced and rooted trees with equal distance from leaf to root for visual inspection. The first tree loosely reflects phylogenetic structure corresponding to the input features. However, many similarities over 50% (outermost ring) show that model and data lack the support for separating these bins. In contrast, the fully informed tree, which additionally includes information about contig coverages, separates the genome bins, such that only closely related strains remain ambiguous. This analysis shows again that the use of additional features improves the resolution of individual genomes and, specifically, that abundance separates similar genomes.</ns0:p><ns0:p>Most importantly, we show that our model provides a measure of support for a genome binning. We know the taxa of the genome bins in this example but for real metagenomes, such an analysis can reveal binning problems and help to refine the bins as in Figure <ns0:ref type='figure' target='#fig_0'>1d</ns0:ref>.</ns0:p></ns0:div>
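<ns0:div><ns0:p>The bin similarity S(A, B) of Equations 12 to 15 can be sketched as below, under the reading that S is a likelihood-weighted geometric mean of the per-contig ratio L_swap / L-hat; this interpretation of the normalization, and the per-contig rescaling used for numerical stability, are assumptions and the code is not the MGLEX implementation.</ns0:p><ns0:p>
# Sketch of the probabilistic bin similarity S(A, B) from per-contig log-likelihoods.
import numpy as np

def bin_similarity(loglik_A, loglik_B):
    """loglik_A, loglik_B: per-contig log-likelihoods under bins A and B."""
    m = np.maximum(loglik_A, loglik_B)
    LA = np.exp(loglik_A - m)                    # per-contig rescaling keeps the
    LB = np.exp(loglik_B - m)                    # ratio exact but makes weights relative
    L_hat = (LA**2 + LB**2) / (LA + LB)          # Equation 12 (per contig)
    L_swap = np.clip(2.0 * LA * LB / (LA + LB), 1e-300, None)   # Equation 13
    weights = L_hat                               # contig weights
    Z = weights.sum()                             # Equation 15
    log_S = (weights * (np.log(L_swap) - np.log(L_hat))).sum() / Z   # Equation 14
    return float(np.exp(log_S))                  # 1.0 when both bins explain the data equally

# -log(S) can then be used as a distance for average linkage clustering of bins,
# as done for the trees in Figure 6.
</ns0:p></ns0:div>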
<ns0:div><ns0:head>Genome bin refinement</ns0:head><ns0:p>We applied the model to show one of its current use cases on more realistic data. We downloaded the medium complexity dataset from www.cami-challenge.org. This dataset is quite complex (232 genomes, two sample replicates). We also retrieved the results of the two highest-performing automatic binning programs, MaxBin and Metawatt, in the CAMI challenge evaluation <ns0:ref type='bibr' target='#b29'>(Sczyrba et al., 2017)</ns0:ref>. We took the simplest possible approach: we trained MGLEX on the genome bins derived by these methods and classified the contigs to the bins with the highest likelihood, thus ignoring all details of contig splitting, β or p-value calculation and the possibility of changing the number of genome bins. When contigs were assigned to multiple bins with equal probability, we attributed them to the first bin in the list because the CAMI evaluation framework did not allow sharing contigs between bins. In our evaluation, we only used information provided to the contestants by the time of the challenge. We report the results for two settings for each method using the recall, the fraction of overall assigned contigs (bp), and the Adjusted Rand index (ARI) as defined in <ns0:ref type='bibr' target='#b29'>Sczyrba et al. (2017)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>Both measures are dependent, so that usually a trade-off between them is chosen. In the first experiment, only contigs which were originally assigned to a bin were reassigned, allowing them to be swapped between bins. In the second experiment, all available contigs were assigned, thus maximizing the recall.</ns0:p><ns0:p>Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref> shows that MGLEX bin refinement improved the genome bins in terms of the ARI for both sets of genome bins when fixing the recall, and improved both measures for Metawatt, but not MaxBin, when assigning all contigs including the originally unassigned. This is likely due to the fact that MaxBin has fewer but relatively complete bins to which the other contigs cannot correctly be recruited. Further improvement would involve dissecting and merging bins within and among methods, for which MGLEX likelihoods can be considered.</ns0:p></ns0:div>
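<ns0:div><ns0:p>For completeness, a refinement run can be evaluated with the two measures used above; the sketch below uses scikit-learn's adjusted_rand_score and assumes hypothetical label arrays, with -1 marking unassigned contigs.</ns0:p><ns0:p>
# Sketch of evaluating a bin refinement with recall (bp) and the Adjusted Rand index.
import numpy as np
from sklearn.metrics import adjusted_rand_score

def evaluate(true_genomes, bin_labels, lengths):
    """bin_labels uses -1 for unassigned contigs; lengths are contig sizes in bp."""
    assigned = bin_labels != -1
    recall_bp = lengths[assigned].sum() / lengths.sum()
    ari = adjusted_rand_score(true_genomes[assigned], bin_labels[assigned])
    return recall_bp, ari

# A 'swapped contigs' refinement corresponds to replacing each assigned contig's
# label by the bin with the highest MGLEX likelihood before calling evaluate().
</ns0:p></ns0:div>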
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>We describe an aggregate likelihood model for the reconstruction of genome bins from metagenome data sets and show its value for several applications. The model can learn from and classify nucleotide sequences from metagenomes. It provides likelihoods and posterior bin probabilities for existing genome bins, as well as p-values, which can be used to enrich a metagenome dataset with a target genome.</ns0:p><ns0:p>The model can also be used to quantify bin similarity. It builds on four different submodels that make use of different information sources in metagenomics, namely absolute and relative contig coverage, nucleotide composition and previous taxonomic assignments. By its modular design, the model can easily be extended to include additional information sources. This modularity also helps in interpretation and computations. The former, because different features can be analyzed separately and the latter, because submodels can be trained independently and in parallel.</ns0:p><ns0:p>In comparison to previously described parametric binning methods, our model incorporates two new types of features. The first is relative differential coverage, for which, to our knowledge, this is the first Manuscript to be reviewed</ns0:p><ns0:p>Computer Science attempt to use binomials to account for systematic bias in the read mapping for different genome regions.</ns0:p><ns0:p>As such, the binomial submodel represents the parametric equivalent of covariance distance clustering.</ns0:p><ns0:p>The second new type is taxonomic annotation, which substantially improved the classification results in our simulation. Taxonomic annotations, as used in the model and in our simulation, were not correct up to the species level and need not be, as seen in the classification results. We only require the same annotation method be applied to all sequences. In comparison to previous methods, our aggregate model has weight parameters to combine the different feature types and allows tuning the bin posterior distribution by selection of an optimal smoothing parameter β.</ns0:p><ns0:p>We showed that probabilistic models represent a good choice to handle metagenomes with short contigs or few sample replicates, because they make soft, not hard decisions, and because they can be applied in numerous ways. When the individual submodels are trained, genome bin properties are compressed into fewer model parameters, such as mean values, which are mostly robust to outliers and therefore tolerate a certain fraction of bin pollution. This property allows to reassign contigs to bins, which we demonstrated in the 'Genome bin refinement' section. Measuring the performance of the individual submodels and their corresponding features on short simulated contigs (Table <ns0:ref type='table'>2</ns0:ref>), we find that they discriminate genomes or species pan-genomes by varying degrees. Genome abundance represents, in our simulation with four samples, the weakest single feature type, which will likely become more powerful with increasing sample numbers. Notably, genomes of individual strains are more difficult to distinguish than species level pangenomes using any of the features. 
In practice, if not using idealized assemblies as in our current evaluation, strain resolution poses a problem to metagenome assembly, which is currently not resolved in a satisfactory manner <ns0:ref type='bibr' target='#b29'>(Sczyrba et al., 2017)</ns0:ref>.</ns0:p><ns0:p>The current MGLEX model is somewhat crude because it makes many simplifying assumptions in the submodel definitions. For instance, the multi-layer model for taxonomic annotation assumes that the probabilities in different layers are independent, the series of binomials for relative abundance should be </ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure1. Genome reconstruction workflow. To recover genomes from environmental sequencing data, the illustrated processes can be iterated. Different programs can be run for each process and iteration. MGLEX can be applied in all steps: (a) to classify contigs or to cluster by embedding the probabilistic model into an iterative procedure; (b) to enrich a metagenome for a target genome to reduce its size and to filter out irrelevant sequence data; (c) to select contigs of existing bins based on likelihoods and p-values and to repeat the binning process with a reduced dataset; (d) to refine existing bins, for instance to merge bins as suggested by bin analysis.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>P k (contig i | genome): the probability that contig i belongs to a particular genome; each of the components k reflects a particular feature, such as a weight w i (contig length).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head /><ns0:label /><ns0:figDesc>A genome is represented by a similar set of vectors θ = {θ l | 1 ≤ l ≤ L} with identical dimensions, but here, entries represent relative frequencies on the particular level l, for instance a distribution over all family taxa.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Procedure for determination of α k for each submodel. The figure shows a schematic for a single genome and two submodels. The genome's contig log-likelihood distribution (A and B) is scaled to a standard deviation of one (C and D) before adding the term in the aggregate model in Equation 11.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 4 .Figure 5 .</ns0:head><ns0:label>45</ns0:label><ns0:figDesc>Figure 4. Model training (err) and test error (Err) as a function of β for the complete aggregate model including all submodels and feature types. The solid curve shows the average and the colored shading the standard deviation of the three partitions in cross-validation. The corresponding optimal values for β are marked by black dots and vertical lines. The minimum average training error is 0.238 (β = 2.85) and test error is 0.279 at β = 1.65.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure6. Average linkage clustering of a random subset of 50 out of 400 genomes using probabilistic distances −log(S ) (Equation14) to demonstrate the ability of the model to measure bin resolution. This example compares the left (blue) tree, which was constructed only with nucleotide composition and taxonomic annotations, with the right (red) tree, which uses all available features. The tip labels were shortened to fit into the figure. The similarity axis is scaled as log(1-log(S)) to focus on values near one. Bins which are more than 50% similar branch in the outermost ring whereas highly dissimilar bins branch close to the center. We created the trees by applying the R function hclust(method='average') to MGLEX output.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head /><ns0:label /><ns0:figDesc>replaced by a multinomial to account for the parameter dependencies, or the absolute abundance Poisson model should incorporate overdispersion to model the data more appropriately. Exploiting this room for improvement can lead to further gains in performance while the overall framework and usage of MGLEX stays unchanged. When we devised our model, we had an embedding into more complex routines in mind. In the future, the model can be used in inference procedures such as EM or MCMC to infer or improve an existing genome binning. Thus, MGLEX provides a software package for use in other programs. However, it also represents a powerful stand-alone tool for the adept user in its current form. Currently, MGLEX does not yet have support for multiple processors and only provides the basic functionality presented here. However, training and classification can easily be implemented in parallel because they are expressed as matrix multiplications. The model requires sufficient training data to robustly estimate the submodel weights α using the standard deviation of the empirical log-likelihood distributions and requires linked sequences to estimate β using error minimization. In situations with a limited number of contigs per genome bin, we therefore advise generating linked training sequences of a certain length, as in our simulation, for instance by splitting assembled contigs. The optimal length for splitting may depend on the overall fragmentation of the metagenome. Our open-source Python package MGLEX provides a flexible framework for metagenome analysis and binning which we intend to develop further together with the metagenomics research community. It can be used as a library to write new binning applications or to implement custom workflows, for example to supplement existing binning strategies. It can build upon a present metagenome binning by taking assignments to bins as input and deriving likelihoods and p-values that allow for critical inspection of the contig assignments. Based on the likelihood, MGLEX can calculate bin similarities to provide insight into the structure of data and community. Finally, genome enrichment of metagenomes can improve the recovery of particular genomes in large datasets.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc>Taxonomy for Table 1 which is simplified to four levels and eight nodes. A full taxonomy may consist of thousands of nodes.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Computer Science</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell /><ns0:cell>a</ns0:cell><ns0:cell /><ns0:cell>Domain (level 1)</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>b</ns0:cell><ns0:cell>c</ns0:cell><ns0:cell>Class (level 2)</ns0:cell></ns0:row><ns0:row><ns0:cell>d</ns0:cell><ns0:cell>e</ns0:cell><ns0:cell /><ns0:cell>Family (level 3)</ns0:cell></ns0:row><ns0:row><ns0:cell>f</ns0:cell><ns0:cell>g</ns0:cell><ns0:cell>h</ns0:cell><ns0:cell>Species (level 4)</ns0:cell></ns0:row><ns0:row><ns0:cell>Figure 2.</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>8) Submission v0.5.5s 6/20 PeerJ Comput. Sci. reviewing PDF | (CS-2016:12:15063:2:0:NEW 23 Apr 2017)Manuscript to be reviewed</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>are external classifiers added for comparison. Best values are in bold and worst in italic.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Submodels</ns0:cell><ns0:cell cols='3'>MPC ML ∆MPC ML MSE ML</ns0:cell></ns0:row><ns0:row><ns0:cell>ReAb + TaAn</ns0:cell><ns0:cell>0.60</ns0:cell><ns0:cell>+0.10</ns0:cell><ns0:cell>0.36</ns0:cell></ns0:row><ns0:row><ns0:cell>ReAb + NuCo + TaAn</ns0:cell><ns0:cell>0.64</ns0:cell><ns0:cell>+0.11</ns0:cell><ns0:cell>0.34</ns0:cell></ns0:row><ns0:row><ns0:cell>AbAb + ReAb + TaAn</ns0:cell><ns0:cell>0.65</ns0:cell><ns0:cell>+0.10</ns0:cell><ns0:cell>0.35</ns0:cell></ns0:row><ns0:row><ns0:cell>AbAb + ReAb + NuCo + TaAn</ns0:cell><ns0:cell>0.68</ns0:cell><ns0:cell>+0.11</ns0:cell><ns0:cell>0.33</ns0:cell></ns0:row><ns0:row><ns0:cell>Submodels</ns0:cell><ns0:cell cols='3'>MPC ML ∆MPC ML MSE ML</ns0:cell></ns0:row><ns0:row><ns0:cell>Centrifuge (in-sample)</ns0:cell><ns0:cell>0.01</ns0:cell><ns0:cell>+0.01</ns0:cell><ns0:cell>0.51</ns0:cell></ns0:row><ns0:row><ns0:cell>NBC (15-mers)</ns0:cell><ns0:cell>0.02</ns0:cell><ns0:cell>+0.00</ns0:cell><ns0:cell>0.66</ns0:cell></ns0:row><ns0:row><ns0:cell>AbAb</ns0:cell><ns0:cell>0.03</ns0:cell><ns0:cell>+0.00</ns0:cell><ns0:cell>0.58</ns0:cell></ns0:row><ns0:row><ns0:cell>ReAb</ns0:cell><ns0:cell>0.08</ns0:cell><ns0:cell>+0.02</ns0:cell><ns0:cell>0.61</ns0:cell></ns0:row><ns0:row><ns0:cell>Centrifuge (reference)</ns0:cell><ns0:cell>0.13</ns0:cell><ns0:cell>+0.03</ns0:cell><ns0:cell>0.45</ns0:cell></ns0:row><ns0:row><ns0:cell>AbAb + ReAb</ns0:cell><ns0:cell>0.21</ns0:cell><ns0:cell>+0.04</ns0:cell><ns0:cell>0.59</ns0:cell></ns0:row><ns0:row><ns0:cell>NuCo</ns0:cell><ns0:cell>0.30</ns0:cell><ns0:cell>+0.06</ns0:cell><ns0:cell>0.52</ns0:cell></ns0:row><ns0:row><ns0:cell>NBC (5-mers)</ns0:cell><ns0:cell>0.34</ns0:cell><ns0:cell>+0.06</ns0:cell><ns0:cell>0.48</ns0:cell></ns0:row><ns0:row><ns0:cell>ReAb + NuCo</ns0:cell><ns0:cell>0.41</ns0:cell><ns0:cell>+0.07</ns0:cell><ns0:cell>0.48</ns0:cell></ns0:row><ns0:row><ns0:cell>AbAb + NuCo</ns0:cell><ns0:cell>0.43</ns0:cell><ns0:cell>+0.08</ns0:cell><ns0:cell>0.50</ns0:cell></ns0:row><ns0:row><ns0:cell>TaAn</ns0:cell><ns0:cell>0.46</ns0:cell><ns0:cell>+0.09</ns0:cell><ns0:cell>0.41</ns0:cell></ns0:row><ns0:row><ns0:cell>AbAb + ReAb + NuCo</ns0:cell><ns0:cell>0.52</ns0:cell><ns0:cell>+0.09</ns0:cell><ns0:cell>0.44</ns0:cell></ns0:row><ns0:row><ns0:cell>NuCo + TaAn</ns0:cell><ns0:cell>0.52</ns0:cell><ns0:cell>+0.09</ns0:cell><ns0:cell>0.40</ns0:cell></ns0:row><ns0:row><ns0:cell>AbAb + TaAn</ns0:cell><ns0:cell>0.54</ns0:cell><ns0:cell>+0.09</ns0:cell><ns0:cell>0.39</ns0:cell></ns0:row><ns0:row><ns0:cell>AbAb + NuCo + TaAn</ns0:cell><ns0:cell>0.60</ns0:cell><ns0:cell>+0.10</ns0:cell><ns0:cell>0.37</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Submission v0.5.5s</ns0:cell><ns0:cell /><ns0:cell /></ns0:row></ns0:table><ns0:note>10/20 PeerJ Comput. Sci. reviewing PDF | (CS-2016:12:15063:2:0:NEW 23 Apr 2017) Manuscript to be reviewed Computer Science</ns0:note></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Genome bin refinement for CAMI medium complexity dataset with 232 genomes and two samples. The recall is the fraction of overall assigned contigs (bp). The Adjusted Rand index (ARI) is a measure of binning precision. The unmodified genome bins are the submissions to the CAMI challenge using the corresponding unsupervised binning methods Metawatt and MaxBin. MGLEX swapped contigs: contigs in original genome bins reassigned to the bin with highest MGLEX likelihood. MGLEX all contigs: all contigs (with originally uncontained) assigned to the bin with highest MGLEX likelihood. The lowest scores are written in italic and highest in bold.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Binner</ns0:cell><ns0:cell>Variant</ns0:cell><ns0:cell cols='2'>Bin count Recall (bp) ARI</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Metawatt unmodified</ns0:cell><ns0:cell>285</ns0:cell><ns0:cell>0.94 0.75</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Metawatt MGLEX swapped contigs</ns0:cell><ns0:cell>285</ns0:cell><ns0:cell>0.94 0.82</ns0:cell></ns0:row><ns0:row><ns0:cell cols='2'>Metawatt MGLEX all contigs</ns0:cell><ns0:cell>285</ns0:cell><ns0:cell>1.00 0.77</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxBin</ns0:cell><ns0:cell>unmodified</ns0:cell><ns0:cell>125</ns0:cell><ns0:cell>0.82 0.90</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxBin</ns0:cell><ns0:cell>MGLEX swapped contigs</ns0:cell><ns0:cell>125</ns0:cell><ns0:cell>0.82 0.92</ns0:cell></ns0:row><ns0:row><ns0:cell>MaxBin</ns0:cell><ns0:cell>MGLEX all contigs</ns0:cell><ns0:cell>125</ns0:cell><ns0:cell>1.00 0.76</ns0:cell></ns0:row></ns0:table><ns0:note>ImplementationWe provide a Python package called MGLEX, which includes the described model. Simple text input facilitates the integration of external programs for feature extraction like k-mer counting or read mapping, which are not included. MGLEX can process millions of sequences with vectorized arithmetics using NumPy<ns0:ref type='bibr' target='#b32'>(Walt, Colbert & Varoquaux, 2011)</ns0:ref> and includes a command line interface to the main functionality, such as model training, classification, p-value and error calculations. It is open source (GPLv3) and freely available via the Python Package Index 1 and on GitHub 2 .</ns0:note></ns0:figure>
</ns0:body>
" | "Helmholtz Centre for Infection Research
Inhoffenstraße 7
38124 Braunschweig, Germany
www.helmholtz-hzi.de
2017-04-23
Dear Dr. Titus Brown,
Thank you and the reviewers for such a thorough review of the manuscript “A probabilistic model to
recover individual genomes from metagenomes” (#CS-2016:12:15063:0:0:REVIEW). We have
addressed all of the remaining comments. The changes we have included in response to the
comments are denoted in italics below.
Johannes Dröge
On behalf of all authors: Johannes Dröge, Alexander Schönhuth and Alice C. McHardy.
Reviewer: Dinghua Li
Comments for the author
The authors have addressed all my comments.
Reviewer: Daan Speth
In my opinion the authors have adequately addressed my comments and those of the other
reviewers, and thus recommend the manuscript for publication. I do urge the authors to indeed
further expand the documentation.
line 374: euclidian rather than euklidian
Fixed.
line 436-438 has been changed to state that four different submodels are used, but then list
coverage, composition and taxonomic asssignment. Maybe explicitly name relative & absolute
coverage
We have taken up this suggestion, thanks.
Reviewer: Qingpeng Zhang
The authors made good efforts to address the issues brought up by the reviewers. The authors
tested the performance of two other methods, NBC and Centrifuge for comparison. The authors
also tested the method on the dataset from caimi-challenge.org, which is helpful. Now I think that
all of my comments on the manuscript have been answered. I just have a question about the
discussion about the new Table 3.
On Line 422-423, the authors mention “Table 3 shows that MGLEX bin refinement improved the
genome bins in terms of the ARI for both sets of genome bins and increased the recall for
Metawatt but not MaxBin. “ However from Table3, the recalls for Metawatt and Maxbin are
both increased to 1.00 for “all contig” , and do not change for “swapped contigs”. Also the ARI
for MaxBin “all contigs” decreased from 0.90 to 0.76. This may require further clarification.
We have extended the discussion of the new results for a better understanding. Recall/sensitivity and
ARI (resembling precision), are dependent. Thus increasing one is likely to lower the other. In order to
state that a method generally improved over another, both measures must increase (or at least not
decrease).
" | Here is a paper. Please give your review comments after reading it. |
740 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>CodonGenie, freely available from http://codon.synbiochem.co.uk , is a simple web application for designing ambiguous codons to support protein mutagenesis applications.</ns0:p><ns0:p>Ambiguous codons are derived from specific heterogeneous nucleotide mixtures, which create sequence degeneracy when synthesised in a DNA library. In directed evolution studies, such codons are carefully selected to encode multiple amino acids. For example, the codon NTN, where the code N denotes a mixture of all four nucleotides, will encode a mixture of phenylalanine, leucine, isoleucine, methionine and valine. Given a user-defined target collection of amino acids matched to an intended host organism, CodonGenie designs and analyses all ambiguous codons that encode the required amino acids. The codons are ranked according to their efficiency in encoding the required amino acids while minimising the inclusion of additional amino acids and stop codons. Organism-specific codon usage is also considered.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>Introduction</ns0:head><ns0:p>Site-directed mutagenesis of DNA is an established technique of generating libraries of DNA variants in a controlled manner, and has applications in a range of fields, primarily that of protein engineering <ns0:ref type='bibr' target='#b8'>(Jäckel, Kast & Hilvert, 2008)</ns0:ref>, but also in more fundamental research including the study of sequence-to-fitness relationships <ns0:ref type='bibr' target='#b7'>(Hietpas et al., 2011)</ns0:ref>. The design of mutant protein libraries typically involves a manual process in which required sites for mutation are selected and ambiguous codons (those containing mixtures of nucleotides) designed to introduce controlled variation in these positions.</ns0:p><ns0:p>In this process, one may wish to design a codon to specify any subset of amino acids in a given position. Since each amino acid may be included in the subset or otherwise, the number of possible subsets is 2 20 -1, i.e. there are 1,048,575 possible subsets of 20 amino acids. (Each of the sets can be represented by a 20-digit binary number, where a one at position n indicates that amino acid n is included in the set, and a zero indicates that it is absent. There are 2 20 such numbers, but one of them represents the empty set and is thus not counted here.) Not all of these 1,048,575 subsets of 20 amino acids are uniquely designable using ambiguous codons, of which there are only 3375. (There are 15 (=2 4 -1) relevant nucleotide codes ('letters'), ranging from the completely unambiguous A, C, G and T representing a single nucleotide, to the completely ambiguous N representing all 4 nucleotides <ns0:ref type='bibr' target='#b2'>(Cornish-Bowden, 1985)</ns0:ref>. There are 15 3 = 3375 triplet codons that can be assembled from this 15-letter alphabet of ambiguous codes, compared to the 4 3 = 64 codons that can be constructed from the standard 4-letter alphabet of unambiguous nucleotides.)</ns0:p><ns0:p>Given the degeneracy of the codon table, there are often multiple ways to encode a chosen set of amino acids. The experimenter must a) decide if it is feasible to encode all desired amino acids <ns0:ref type='bibr' target='#b13'>(Mena & Daugherty, 2005)</ns0:ref>; b) determine whether this creates an acceptable number of sequence combinations (depending on screening capability and throughput) <ns0:ref type='bibr' target='#b3'>(Currin et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b9'>Kille et al., 2013;</ns0:ref><ns0:ref type='bibr' target='#b12'>Lutz, 2010;</ns0:ref><ns0:ref type='bibr' target='#b18'>Pines et al., 2015)</ns0:ref>; and c) consider the codon usage of the organism to be used <ns0:ref type='bibr' target='#b16'>(Nakamura, Gojobori & Ikemura, 2000)</ns0:ref>. It therefore follows that the design of ambiguous codons is non-trivial.</ns0:p><ns0:p>CodonGenie is therefore introduced to provide a quick and easy-to-use means of designing optimal ambiguous codons, considering the above parameters according to the user input, and ranking the ambiguous codons with respect to their suitability for expression in a target host organism. The tool is designed to be both human-and computer-readable, providing both a simple web browser interface and a RESTful webservice API.</ns0:p></ns0:div>
<ns0:div><ns0:head>Materials & Methods</ns0:head></ns0:div>
<ns0:div><ns0:head>Algorithm</ns0:head><ns0:p>The standard codon table is such that 17 of the 20 naturally occurring amino acids are encoded by codons with fixed bases in the first and second positions, with the third 'wobble'-position allowing variation that accounts for the degeneracy of the DNA code. Determining optimal ambiguous codons for combinations of amino acids involves the following process, which is optimized for computational efficiency, compared to a brute-force examination of all possible ambiguous codons:</ns0:p></ns0:div>
<ns0:div><ns0:p>[TA] and T.</ns0:p><ns0:p>The first two and wobble position bases are combined to produce candidate ambiguous codons, which are scored as described below.</ns0:p><ns0:p>Three amino acids (leucine, arginine and serine) cannot be simply encoded by codons with fixed bases in the first and second positions. (For example, both CTN and TT[AG] encode leucine.) For combinations including these more complex residues, the above algorithm is performed for each encoding and the results combined.</ns0:p><ns0:p>Note that CodonGenie returns not only the most 'specific' ambiguous codons, that is, the codons that provide the fewest DNA variants whilst encoding all target amino acids. Providing results that include less specific ambiguous codons, which may also encode additional amino acids, allows the user to perform a trade-off between library size and codon specificity, depending on the experimental objective. A smaller library is generally advantageous for screening purposes, but may contain codons that are unfavoured by the target host organism.</ns0:p></ns0:div>
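<ns0:div><ns0:p>The core operation behind both designing and analysing ambiguous codons is expanding an IUPAC code into its unambiguous codons and the amino acids they encode. The sketch below illustrates this with the standard IUPAC nucleotide codes and a minimal excerpt of the codon table; it is an illustration, not the CodonGenie source code.</ns0:p><ns0:p>
# Sketch: expand an ambiguous (IUPAC) codon and translate its variants.
from itertools import product

IUPAC = {"A": "A", "C": "C", "G": "G", "T": "T",
         "R": "AG", "Y": "CT", "S": "CG", "W": "AT", "K": "GT", "M": "AC",
         "B": "CGT", "D": "AGT", "H": "ACT", "V": "ACG", "N": "ACGT"}

# Minimal excerpt of the standard codon table, enough for the DTK example below.
CODON_TABLE = {"TTT": "F", "TTG": "L", "ATT": "I", "ATG": "M",
               "GTT": "V", "GTG": "V"}

def expand(ambiguous_codon):
    """All unambiguous codons represented by an ambiguous codon."""
    return ["".join(p) for p in product(*(IUPAC[b] for b in ambiguous_codon))]

variants = expand("DTK")                     # D = A/G/T, T, K = G/T -> 6 variants
amino_acids = {CODON_TABLE[c] for c in variants}
print(len(variants), sorted(amino_acids))    # 6 codons encoding F, I, L, M, V
</ns0:p></ns0:div>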
<ns0:div><ns0:head>Scoring</ns0:head><ns0:p>The goal of the scoring scheme is to preferentially rank the most efficient ambiguous codons, that is, the ambiguous codons that encode all of the required amino acids while minimising the encoding of non-required amino acids.</ns0:p><ns0:p>The score for an ambiguous codon is therefore defined as the mean of the value, v i , of each of the codons that it encodes. For codons that encode required amino acids, v i is the ratio of the frequency of the codon f i and the frequency of the most frequent synonymous codon f j for the amino acid that it encodes. For codons that encode non-required amino acids, v i is zero.</ns0:p><ns0:formula xml:id='formula_0'>\mathrm{score} = \frac{1}{|C|} \sum_{i \in C} v_i, \qquad v_i = \frac{f_i}{\max(\{f_j : j \in S_i\})} \text{ if } i \in R, \quad v_i = 0 \text{ if } i \notin R</ns0:formula><ns0:p>where C is the set of unambiguous codons encoded by the ambiguous codon, R ⊆ C is the subset of those codons that encode required amino acids, and S i is the set of codons synonymous with codon i.</ns0:p></ns0:div>
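<ns0:div><ns0:p>A minimal sketch of this scoring follows. It reuses the expand() helper from the previous sketch and assumes a full 64-entry codon table and a codon-usage dictionary for the host organism; it is an illustration of the scheme described above, not the CodonGenie implementation.</ns0:p><ns0:p>
# Sketch of scoring an ambiguous codon against a host's codon usage table.
def score_ambiguous_codon(ambiguous_codon, required_aas, usage, codon_table):
    """usage: dict codon -> relative frequency for the host organism;
    codon_table: dict mapping all 64 codons to amino acids (or '*' for stop)."""
    encoded = expand(ambiguous_codon)                 # the set C
    values = []
    for codon in encoded:
        aa = codon_table[codon]
        if aa in required_aas:                        # codon is in R
            synonymous = [usage[c] for c, a in codon_table.items() if a == aa]
            values.append(usage[codon] / max(synonymous))
        else:                                         # off-target amino acid or stop codon
            values.append(0.0)
    return sum(values) / len(values)                  # mean over C
</ns0:p></ns0:div>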
<ns0:div><ns0:p>This scoring algorithm thus achieves a principled trade-off between codon specificity, library size and codon favourability (according to the codon usage preferences of the target organism).</ns0:p></ns0:div>
<ns0:div><ns0:head>Web service access</ns0:head><ns0:p>CodonGenie also offers a RESTful web service interface, supporting its integration with software pipelines. The Design method can be accessed by specifying required amino acids and required host organism (as an NCBI Taxonomy id <ns0:ref type='bibr' target='#b4'>(Federhen, 2012)</ns0:ref>) as follows:</ns0:p><ns0:p>http://codon.synbiochem.co.uk/codons?aminoAcids=DE&organism=4932</ns0:p><ns0:p>Similarly, the Analyse method can be accessed by specifying a variant codon and the required organism:</ns0:p><ns0:p>http://codon.synbiochem.co.uk/codons?codon=NSS&organism=4932</ns0:p><ns0:p>CodonGenie also provides web service interfaces for accessing supported organisms. The first allows all organisms to be listed, showing NCBI Taxonomy id and name, and the second allows the collection to be searched according to a given term: http://codon.synbiochem.co.uk/organisms/ http://codon.synbiochem.co.uk/organisms/escher In all cases, results are returned in json format.</ns0:p></ns0:div>
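<ns0:div><ns0:p>As a usage illustration, the endpoints listed above can be queried from Python as sketched below. The URLs and parameters are those given in the text, but the structure of the returned JSON is not documented here, so treat the handling of the responses as an assumption.</ns0:p><ns0:p>
# Sketch of calling the CodonGenie RESTful web service with the requests library.
import requests

BASE = "http://codon.synbiochem.co.uk"

# Design ambiguous codons for aspartate and glutamate in the host with
# NCBI Taxonomy id 4932 (Saccharomyces cerevisiae).
design = requests.get(BASE + "/codons",
                      params={"aminoAcids": "DE", "organism": "4932"}).json()

# Analyse an existing ambiguous codon in the same host.
analysis = requests.get(BASE + "/codons",
                        params={"codon": "NSS", "organism": "4932"}).json()

# Search the supported organisms by name fragment.
organisms = requests.get(BASE + "/organisms/escher").json()
print(design, analysis, organisms)
</ns0:p></ns0:div>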
<ns0:div><ns0:head>Distribution</ns0:head><ns0:p>The web application is freely available from http://codon.synbiochem.co.uk. CodonGenie is written in Python (using the Flask framework) and HTML / Javascript (using the Bootstrap and AngularJS libraries) and is packaged as a Docker application for ease of deployment. Source code is available from https://github.com/synbiochem/CodonGenie.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results and Discussion</ns0:head><ns0:p>CodonGenie provides a simple web interface affording two functions: a) the design, and b) the analysis of ambiguous codons. Considering the Design module, the user specifies the combination of amino acids to be encoded and an organism in which the library will be expressed. The codon usage table is automatically extracted from the Codon Usage Database <ns0:ref type='bibr' target='#b16'>(Nakamura, Gojobori & Ikemura, 2000)</ns0:ref>, which as of May 2017 provided support for 35,792 organisms. CodonGenie then calculates suitable ambiguous codons and presents these in an interactive table (see Figure <ns0:ref type='figure'>1</ns0:ref>).</ns0:p><ns0:p>The Analyse module provides the functionality of checking an existing ambiguous codon. Users specify a variant codon and required host organism, and the results returned indicate which amino acids are encoded along with their codon usage frequency.</ns0:p><ns0:p>The benefit of CodonGenie can be exemplified by the design of an ambiguous codon to encode non-polar amino acids phenylalanine, leucine, isoleucine, methionine and valine. A simple and widely used ambiguous codon to encode this subset is NTN, which equates to 16 DNA variants. However, CodonGenie identifies that these same amino acids can be encoded by the DTK codon (where D denotes [AGT] and K denotes [GT]) using 6 variants. Selecting DTK therefore means fewer enzyme variants need to be screened to test all sequence combinations. This benefit is particularly significant when encoding multiple variant codons. For example, when using 3 DTK codons the library size is reduced from 4096 (16 3 ) to 213 (6 3 ) combinations.</ns0:p><ns0:p>An example of the importance of considering codon usage of the target host organism can be seen when considering the design of an ambiguous codon to encode the set of five non-polar amino acids (F, I, L, M and V) considered above. For E. coli, the preferred codon is DTK (ATG|T|GT), with a score of 0.88. DTS (ATG|T|GC) also encodes all five amino acids using 6 variants, but with a score of 0.68. In Streptomyces coelicolor -a commonly used host for antibiotic production <ns0:ref type='bibr' target='#b17'>(Pickens et al., 2011)</ns0:ref>, the ranking is reversed, with DTS being preferred with a score of 0.79, substantially higher than that of 0.29 for DTK. The reason for this can be found in the codon usage frequencies of each of these organisms, as shown in Table <ns0:ref type='table'>1</ns0:ref>: The codons DTK and DTS differ by specifying either GT or GC in the third position, respectively.</ns0:p><ns0:p>Taking the example of encoding phenylalanine, F, the codon TTT encoded by ambiguous codon DTK is preferred over TTC (encoded by DTS) in E. coli by a frequency of 0.64 to 0.36. By contrast, S. coelicolor strongly prefers TTC to TTT to encode F, with frequencies of 0.97 to 0.03, respectively. A similar preference is observable in the codon usage frequencies for encoding isoleucine, I, in S. coelicolor, where ATC has a frequency of 0.95 compared to that of 0.03 for ATT. Thus, S. 
coelicolor has a strong preference for the variant codon containing C in the 'wobble' position, and this is reflected in the scores of 0.79 for DTS and 0.29 for DTK.</ns0:p><ns0:p>Organism-specific codon usage is therefore a key consideration in the design of ambiguous codons for a given host.</ns0:p><ns0:p>CodonGenie adds to a toolkit of existing software tools for ambiguous codon selection, which includes AA-Calculator <ns0:ref type='bibr' target='#b5'>(Firth & Patrick, 2008)</ns0:ref> and DYNAMCC <ns0:ref type='bibr' target='#b6'>(Halweg-Edwards et al., 2016)</ns0:ref>.</ns0:p><ns0:p>In contrast to AA-Calculator, CodonGenie ranks designed ambiguous codon based on their suitability for use in a given host organism. DYNAMCC also scores designed codons but offers complementary functionality to CodonGenie, as it designs sets of ambiguous codons to encode a set of amino acids with no off-target amino acid encoding and minimal redundancy. CodonGenie designs single ambiguous codons to encode a desired set of amino acids, which may also include off-target amino acids, allowing users to make a conscious trade-off between a larger library and the ease of generating such a library with a single ambiguous codon.</ns0:p><ns0:p>The above example of Table <ns0:ref type='table'>1</ns0:ref> illustrates a key difference between CodonGenie and DYNAMCC. Where CodonGenie will provide a list of individual ambiguous codons that will encode all desired amino acids (and potentially additional, off-target amino acids), DYNAMCC returns a single, best-scoring set of ambiguous codons that encode all desired amino acids with minimal redundancy. In the case of F, I, L, M and V, DYNAMCC returns the set of codons WTT (encoding F and I and L) and VTG (encoding M and V). The advantage of the DYNAMCC approach is in increased efficiency of the library: five DNA variants encode the five desired amino acids, while CodonGenie's solution of DTK or DTS encode six DNA variants, thus producing a larger library. The advantage of CodonGenie's solution lies in the ease in which the library can be produced with a single ambiguous codon.</ns0:p><ns0:p>CodonGenie provides a clean, intuitive web-based user interface which requires minimal user input, and which takes advantage of modern web-application development libraries such as AngularJS and Bootstrap. AngularJS (https://angularjs.org), developed and maintained by Google, provides a framework for the rapid development of modular, testable single-page web applications. Bootstrap (http://getbootstrap.com), initially developed at Twitter, provides a library of reusable user interface 'widgets', such as forms, auto-fill boxes, tables, etc. Using freely available yet commercially developed libraries such as these confers a number of advantages: From a development perspective, the libraries are easy to use, are well documented and are thoroughly tested on a range of browsers (including those on mobile phones and tablets) being used perhaps billions of times a day worldwide. More importantly, the user experience is improved through use of well-developed modules that in many cases users have experienced numerous times previously in various other web applications. As a result, CodonGenie can provide a simple, easy-to-use interface that requires no documentation and can run on many platforms with the minimum of development effort.</ns0:p><ns0:p>CodonGenie is designed to follow the concept of 'microservices' <ns0:ref type='bibr' target='#b20'>(Williams et al., 2016)</ns0:ref>. 
Microservice architecture advocates the breaking down of large, monolithic applications into simple, atomic services of limited scope of functionality. By deconstructing large applications or pipelines (such as a DNA design tool) into a collection of independent units (such as a codon design module), the individual microservices can be developed, tested and deployed in isolation, increasing their reliability and reusability. CodonGenie follows this paradigm (the entire application consists of ~700 lines of code) and allows for integration into larger applications by providing a simple computer-readable RESTful web service API, as well as making itself available as a Docker container <ns0:ref type='bibr' target='#b0'>(Belmann et al, 2015;</ns0:ref><ns0:ref type='bibr' target='#b11'>Leprevost et al., 2017)</ns0:ref>, allowing users to easily redeploy their own instantiation on individual computers and services, or cloud-based platforms.</ns0:p><ns0:p>One example of the use of the CodonGenie as a microservice within a larger application is in automating the design of a synthetic DNA sequence to encode a protein sequence generated from a multiple sequence alignment. Consider a multiple sequence alignment of a hypothetic active site of an enzyme: PFDMR PIAMR Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div>
<ns0:div><ns0:head>PLHLR PMNMR PVHMR</ns0:head><ns0:p>The CodonGenie webservice facilitates the writing of a simple script to automate the process of designing a synthetic DNA sequence that captures the variation encoded in this alignment. By iterating through the alignment, the set of amino acids required at each position can be collected ({P} for position 1, {FILMV} for position 2, etc.). These sets can be submitted to the CodonGenie webservice (along with a desired host organism) and a synthetic DNA sequence built up from the highest-scoring ambiguous codon returned. In practice, CodonGenie would produce the following DNA sequence for E. coli: CCG|DTK|VMT|MTG|CGT</ns0:p><ns0:p>In this example, the first codon (CCG) is not strictly an ambiguous codon, as it contains no ambiguous nucleotides, given that a single amino acid, P, is required in the first position. The codon returned is the therefore the most frequent codon for encoding proline in E. coli. The second codon is the optimum codon for encoding F, I, L, M and V, as shown previously.</ns0:p><ns0:p>This example shows the benefit of offering webservice access to the CodonGenie method. While manually designing an optimised DNA sequence for a short alignment such as this is tractable, performing a similar operation on a longer alignment or a number of alignments in a manual fashion would not be feasible. Example code performing this simple operation is available (https://github.com/synbiochem/CodonGenie/blob/master/codon_genie/example/align.py), giving an indication of the ease with which CodonGenie could be incorporated into more comprehensive DNA design pipelines.</ns0:p></ns0:div>
<ns0:div><ns0:head>Conclusion</ns0:head><ns0:p>CodonGenie provides two simple-to-use yet valuable tools that aid the design of variant protein libraries in mutagenesis and directed evolution studies. Through both its web and web service interfaces, CodonGenie is amenable to future integration with new and existing variant library design software tools <ns0:ref type='bibr' target='#b19'>(Swainston et al, 2014)</ns0:ref>. Its modular and open-source format allows for straightforward adaptation to emerging needs in the synthetic biology community, in particular the consideration of augmented genetic codes and expanded genetic alphabets <ns0:ref type='bibr' target='#b10'>(Lajoie et al, 2013;</ns0:ref><ns0:ref type='bibr'>Zhang, 2017)</ns0:ref>.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>Align the first two positions and select the most specific ambiguous bases to encode the alignment. For example, with the combination asparagine and isoleucine (encoded by AA[CT] and AT[ACT] respectively), the alignment of the first two positions is A[AT], i.e. AW. All combinations of aligned wobble positions are calculated, i.e. [CA], [CC], [CT], [TA], [TC], [TT]. These are then collapsed into unique sets, in this example giving [CA], C, [CT], PeerJ Comput. Sci. reviewing PDF | (CS-2017:02:16109:1:1:NEW 17 May 2017)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>all variants of ambiguous codon 𝑐} 𝐴 = { target amino acids } 𝑎 𝑖 : amino acid encoded by codon 𝑖 ∈ 𝐶 𝑓 𝑖 : codon usage frequency of codon 𝑖 ∈ 𝐶 Set of synonymous codons of codon i 𝑆 𝑖 = {𝑗 :𝑎 𝑗 = 𝑎 𝑖 } Set of codon variants of c encoding target amino acids 𝑅 = {𝑖 ∈ 𝐶 :𝑎 𝑖 ∈ 𝐴} PeerJ Comput. Sci. reviewing PDF | (CS-2017:02:16109:1:1:NEW 17 May 2017)</ns0:figDesc></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='13,42.52,484.87,525.00,272.25' type='bitmap' /></ns0:figure>
<ns0:note place='foot'>PeerJ Comput. Sci. reviewing PDF | (CS-2017:02:16109:1:1:NEW 17May 2017) </ns0:note>
</ns0:body>
" | "Manchester Centre for Synthetic Biology of Fine and Speciality Chemicals (SYNBIOCHEM)
Manchester Institute of Biotechnology
University of Manchester
Manchester
M1 7DN
United Kingdom
16 May 2017
Dr Sophie Kusy
Associate Editor
PeerJ, Inc.
PO Box 910224
San Diego
CA 92191
USA
Dear Dr Kusy,
Thanks to both you and the reviewers for providing valuable feedback and suggestions on our manuscript, “CodonGenie: optimised ambiguous codon design tools”.
We address each of the reviewer comments below, with reviewers’ comments in grey and our responses in black. These useful suggestions have resulted in an extensive rewrite of sections of the manuscript, and have also led to the addition of new and improved functionality in the application itself.
We therefore hope that we have addressed the reviewers’ comments and that the improved manuscript and application are now suitable for publication.
Yours sincerely,
Dr Neil Swainston.
Response to reviewers
Firstly, I must apologise for the long period of review - the fault was largely mine. You will find three of the four reviewers have commented positively on your manuscript, but R1 considers that the paper in its current form is not suitable for publication, because no research question has been addressed.
Thanks to both you and the Reviewers for their thoughtful comments.
From a pure methodological perspective, R1's principle objection is well supported and I urge you to consider rephrasing aspects of the manuscript to make clear:
1) Why a new tool for codon set generation is necessary [e.g. to support enhanced and synthetic coding systems]
It is perhaps difficult to specify why an entirely new tool is necessary. In bioinformatics in general, there are multiple “competing” tools for sequence analysis, metabolomics data analysis, primer design, etc., that coexist. In producing CodonGenie, we were not attempting to compete with existing tools such as DYNAMCC, but rather implement a resource that covers aspects that may not be supported in existing work. It is clear, though, that we have not highlighted these differences in the initial submission, and have therefore updated the manuscript to make these additional considerations more explicit.
2) How the design of codon genie meets these needs [e.g. presenting a real world example where codon genie performs in a superior fashion - or makes its capabilities available to other web applications via its rest api].
An example of the use of CodonGenie – specifically, its RESTful API – is now provided, demonstrating how the tool can be used to design synthetic DNA from a multiple amino acid sequence alignment. Example source code is also made available.
3) Additionally, I would recommend you also describe any user interface design decisions that were made during the course of developing the final version of CodonGenie - other web applications for this purpose have a very different visual style, and it would be relevant to report any design principles or usability optimisations that you implemented.
CodonGenie has been built based on two premises: application of the concept of “microservices” and the use of modern web development libraries to produce a clean and simple user interface. The manuscript was lacking an explanation of these motivations and has been updated to cover these issues.
4) R4 points out that directed evolution is just one of the methods used in protein engineering. Please address this in your revision. R3 additionally highlights other methodologies for large-scale mutagenesis methods that are perhaps appropriate to be highlighted in this paper.
We note that Reviewers 3 and 4 raised these similar concerns and address them in comments to Reviewer 3 below, and of course in updates to the manuscript.
5) R3 provides detailed revision suggestions in both the general text and description of the method and applicability of the CodonGenie webapp. R4 also points out that more detail concerning the differences between Codon-Genie and Halweg-Edwards et al. are required. Both R2 and R3 suggest the mathematical details require more rigorous explanation - this would be particularly useful for readers less familiar with protein encoding models used by biological organisms. Additionally, R3 suggests it would be useful to provide an example where organism specific coding systems yield different results with CodonGenie's methods.
Each of the above points has been considered in specific responses to reviewers’ individual concerns, below.
6) The first step for users employing codon Genie is to enter the target organism. Although this text box provides an autocomplete list, I found the widget somewhat unreliable if arbitrary text is entered. I suggest you provide a link to the list of supported taxons and a short description of the OTU naming conventions supported (e.g. can NCBI ids be provided?).
CodonGenie supports all organisms specified to the Codon Usage Database (a total of 35,792 organisms). We agree that it would be useful to make this clearer, have updated the manuscript accordingly, and have introduced an additional web service (http://codon.synbiochem.co.uk/organisms/) to the API allowing users to extract a full list of all supported organisms and their NCBI ids, which is both human and computer readable. Furthermore, the existing web service for returning organism names, which is used to fill the auto-fill organism textbox, has been updated to ensure alphabetical ordering of returned names, which improves usability and should (hopefully) behave a little more intuitively for users.
As a further aside, the webservice for returning results has been completely rewritten to present the results in a more (human and computer) readable manner.
7) Thank you for complying with PeerJ CS's requirements regarding the provision of source code. Please ensure that you create a tag for the version you describe in the paper, and additionally, please also revise your readme.txt to include instructions for Docker in addition to the Google compute engine.
The readme.txt file has been updated as requested.
Please address all of these comments. Again, I apologise for the long delay in returning these reviews to you, and look forward to receiving your revised manuscript.
Reviewer 1 (Anonymous)
Experimental design
This article describes a simple tool for the design of ambiguous codons for encoding multiple amino acids while minimizing undesirable designs. Additionally, using a specific scoring scheme, the tool can score the ambiguous codons for utilizing codons used preferentially in protein coding genes of a target organism.
The PeerJ Computer Science journal only considers research articles, and this manuscript details the implementation of a software tool that uses a brute force method to rank and suggest ambiguous codons to its user. I personally do not see a research objective; either in the form of a hypothesis that is tested, or an algorithm that offers some non-trivial time and/or space complexity. As such, I do not believe there exists a meaningful research question that is pursued.
Validity of the findings
No research question identified.
Comments for the Author
The manuscript describes an interesting and well-implemented tool that can prove useful to the protein design community. It would benefit from pursuing a research question, such as the success of the methodology and scoring scheme described in experiments involving design and evaluation of protein variant libraries.
We thank the reviewer for their comments and agree that there could be more discussion given to both the utility of the work and the specific design considerations that we encountered when implementing this work. Please refer to comments above in response to the Editor, and updates to the manuscript that we hope will alleviate these concerns.
Reviewer 2 (Anonymous)
Basic reporting
The paper by Swainston et al. describes CodonGenie, a freely available online tool for designing ambiguous codons to code for defined sets of amino acids. Such a tool is very valuable when constructing protein variant libraries for directed evolution experiments. The ambiguous codons are then ranked according to their efficiency encoding the desired amino acids.
The paper is well written and the language is clear. However, the authors may consider explaining in a bit more detail how they derive the numbers in the sentence in lines 45 – 48.
The manuscript has been updated with details of how the combinatorials were calculated.
Validity of the findings
The authors' approach is to derive suitable ambiguous codons and determine their 'quality' is sound.
Comments for the Author
The web application is very easy and intuitive to use and I would like to commend the authors on their effort to provide a very slimmed down and clean interface that manages to provide the relevant information in a clear and concise manner.
We thank the reviewer for their positive comments.
Reviewer 3 (Anonymous)
Basic reporting
In this short paper, Swainston and colleagues describe a new online tool named CodonGenie, for helping practitioners of directed evolution to design their degenerate codons. Such tools are useful, although as noted by the authors (and in my revisions, below), they are not the first to provide one.
Overall, the basic reporting (writing, figure presentation etc.) is suitable for publication in PeerJ, with the following revisions:
1. The first two paragraphs should be rewritten. Currently, the primary point of them seems to be to advertise/cite a lot of the authors’ previous papers. The first sentence (lines 26-28) makes it sound as though directed evolution is the only way to do protein engineering – better to clarify that directed evolution is one approach of many (e.g. site-directed mutagenesis, de novo design, ancestral sequence reconstruction). Similarly, how do the authors imagine that site-directed mutagenesis (defined by Wikipedia, no less, as “…used to make specific and intentional changes to the DNA sequence of a gene”) is a method for directed evolution (line 28)? Do they mean site-saturation mutagenesis here? What is the point of the second paragraph, in the context of CodonGenie? My (re-)interpretation of the first two paragraphs is, “There are a lot of directed evolution techniques that make use of degenerate codons, including site-saturation mutagenesis and a number of newer methods.” The newer methods aren’t “in contrast” (line 35) to site saturation. And rather than citing four of their own papers, the authors might also consider other large-scale mutagenesis methods such as PFunkel (Firnberg and Ostermeier, PLoS One, 2012) and EMPIRIC (Hietpas et al., PNAS, 2011).
The reviewer is of course correct to note that protein engineering and directed evolution are not synonymous and that there is a range of applications in which the design of variant codons would be applicable. As such, we have greatly trimmed the Introduction to reduce any bias towards a particular application area, and have removed references to DNA assembly techniques that could be applied to synthetic DNA designed through this tool.
2. The calculations in lines 46-48 should either be explained, or deleted. While I understand where 1,048,575 comes from (line 46), I suspect many experimentalists would not (but may wish to know, so they can perform similar calculations on their own experimental systems). On the other hand, I can’t fathom where 15^3 and 4^3 (line 48) come from.
The manuscript has been updated with details of how the combinatorials were calculated.
Experimental design
1. CodonGenie is a really nice little algorithm, but the authors should better explain the gap that it fills. On line 56, they pertinently cite the DYNAMCC algorithm (Halweg-Edwards et al.). In my experience, the other algorithm closest to CodonGenie is AA-Calculator (Firth and Patrick, Nucleic Acids Res., 2008). CodonGenie has clear points of difference to DYNAMCC (which gives you a set of codons with no redundancy, instead of one optimal codon) and AA-Calculator (which gives all degenerate codon possibilities, but no convenient score to choose between them) – point out these differences to the reader!
These suggestions have been added to the paper, with a specific example given of differences between the output of the same query from CodonGenie and DYNAMCC.
2. The authors emphasize organism-specific codon design as a key advantage of CodonGenie (e.g. lines 53, 85, 103, 122-124). Can they provide examples where organism-specific codon usage might change the ‘answer’ provided by CodonGenie? Further discussion, and a table or figure, would be justified. There are perhaps half a dozen genetically tractable species in routine use as hosts for protein engineering and synthetic biology. Could the authors come up with, for example, a case in which one degenerate codon might be best for E. coli, and another for S. cerevisiae? How commonly (or rarely) does the choice of host affect the identity of the top-ranked codon – once in a blue moon, or more often than we might think?
The manuscript has been updated to include an example of the differences that can occur when designing a variant codon for the same set of amino acids in both E. coli and Streptomyces coelicolor.
Validity of the findings
There are no data or experiments, however the algorithm and the scoring function are clearly described. The web application appears to be bug free and easy to use.
We thank the reviewer for their positive comments.
Reviewer 4 (Markus Herrgard)
Basic reporting
The article is well written and clear.
Two minor comments:
1) The introduction equates protein engineering with directed evolution. Directed evolution is usually used in protein engineering projects, but it is certainly possible to also just do rational protein design without any diversity generation and screening.
We refer the reviewer to the above comments to Reviewer 3 who made similar concerns, and the updated manuscript.
2) It would be good to comment briefly on how the approach described by Halweg-Edwards et al. cited in the article differs from the approach described in this article. This information would aid the reader in deciding which tool to use for a given task.
Again, we note that Reviewer 3 raised essentially the same point and refer to the comments above and the updated manuscript.
Validity of the findings
The results here are a tool that can be used to design variant libraries for protein engineering applications. The method underlying the tool is solid, the web-based tool works and would be expected to provide value for applications. The only data used for the method are publicly available codon usage tables for different organisms.
We thank the reviewer for their positive comments.
" | Here is a paper. Please give your review comments after reading it. |
742 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Despite recent algorithmic improvements, learning the optimal structure of a Bayesian network from data is typically infeasible past a few dozen variables. Fortunately, domain knowledge can frequently be exploited to achieve dramatic computational savings, and in many cases domain knowledge can even make structure learning tractable. Several methods have previously been described for representing this type of structural prior knowledge, including global orderings, super-structures, and constraint rules. While superstructures and constraint rules are flexible in terms of what prior knowledge they can encode, they achieve savings in memory and computational time simply by avoiding considering invalid graphs. We introduce the concept of a 'constraint graph' as an intuitive method for incorporating rich prior knowledge into the structure learning task. We describe how this graph can be used to reduce the memory cost and computational time required to find the optimal graph subject to the encoded constraints, beyond merely eliminating invalid graphs. In particular, we show that a constraint graph can break the structure learning task into independent subproblems even in the presence of cyclic prior knowledge. These subproblems are well suited to being solved in parallel on a single machine or distributed across many machines without excessive communication cost.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>Introduction</ns0:head><ns0:p>Bayesian networks are directed acyclic graphs (DAGs) in which nodes correspond to random variables and directed edges represent dependencies between these variables. Conditional independence between a pair of variables is represented as the lack of an edge between the two corresponding nodes.</ns0:p><ns0:p>The parameters of a Bayesian network are typically simple to interpret, making such networks highly desirable in a wide variety of application domains that require model transparency.</ns0:p><ns0:p>Frequently, one does not know the structure of the Bayesian network beforehand, making it necessary to learn the structure directly from data. The most intuitive approach to the task of Bayesian network structure learning (BNSL) is 'search-and-score,' in which one iterates over all possible DAGs and chooses the one that optimizes a given scoring function. Recent work has described methods that find the optimal Bayesian network structure without explicitly considering all possible DAGs <ns0:ref type='bibr' target='#b10'>(Malone et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b19'>Yuan et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b5'>Fan et al., 2014;</ns0:ref><ns0:ref type='bibr' target='#b9'>Jaakkola et al., 2003)</ns0:ref>, but these methods are still infeasible for more than a few dozen variables. In practice, a wide variety of heuristics are often employed for larger datasets. These algorithms, which include branch-and-bound <ns0:ref type='bibr' target='#b15'>(Suzuki, 1996)</ns0:ref>, Chow-Liu trees <ns0:ref type='bibr' target='#b1'>(Chow & Liu, 2003)</ns0:ref>, optimal reinsertion <ns0:ref type='bibr' target='#b11'>(Moore & Wong, 2003)</ns0:ref>, and hill-climbing <ns0:ref type='bibr' target='#b18'>(Tsamardinos et al., 2006)</ns0:ref>, typically attempt to efficiently identify a structure that captures the majority of important dependencies.</ns0:p></ns0:div>
<ns0:div><ns0:p>In many applications, the search space of possible network structures can be reduced by taking into account domain-specific prior knowledge <ns0:ref type='bibr' target='#b7'>(Gamberoni et al., 2005;</ns0:ref><ns0:ref type='bibr' target='#b21'>Zuo & Kita, 2012;</ns0:ref><ns0:ref type='bibr' target='#b14'>Schneiderman, 2004;</ns0:ref><ns0:ref type='bibr' target='#b20'>Zhou & Sakane, 2003)</ns0:ref>. A simple method is to specify an ordering on the variables and require that parents of a variable must precede it in the ordering <ns0:ref type='bibr' target='#b2'>(Cooper & Herskovits, 1992)</ns0:ref>. This representation leads to tractable structure learning because identifying the parent set for each variable can be carried out independently from the other variables. Unfortunately, prior knowledge is typically more ambiguous than knowing a full topological ordering and may only exist for some of the variables. A more general approach to handling prior knowledge is to employ a 'super-structure,' i.e., an undirected graph that defines the super-set of edges defining valid learned structures, forbidding all others <ns0:ref type='bibr' target='#b13'>(Perrier et al., 2008)</ns0:ref>. This method has been fairly well studied and can also be used as a heuristic if defined through statistical tests instead of prior knowledge. A natural extension of the undirected super-structure is the directed super-structure <ns0:ref type='bibr' target='#b12'>(Ordyniak & Szeider, 2013)</ns0:ref>, but to our knowledge the only work done on directed super-structures proved that an acyclic directed super-structure is solvable in polynomial time. An alternate, but similar, concept is to define which edges must or cannot exist as a set of rules <ns0:ref type='bibr' target='#b0'>(Campos & Ji, 2011)</ns0:ref>. However, these rule-based techniques do not specify how one would exploit the constraints to reduce the computational time past simply skipping over invalid graphs.</ns0:p><ns0:p>We propose the idea of a 'constraint graph' as a method for incorporating prior information into the BNSL task. A constraint graph is a directed graph where each node represents a set of variables in the BNSL problem and edges represent which variables are candidate parents for which other variables.</ns0:p><ns0:p>The primary advantage of constraint graphs versus other methods is that the structure of the constraint graph can be used to achieve savings in both memory cost and computational time beyond simply eliminating invalid structures. This is done by breaking the problem into independent subproblems even in the presence of cyclic prior knowledge. An example of this cyclic prior knowledge is identifying two groups of variables that can draw parents only from each other, similar to a bipartite graph. It can be difficult to identify, for each variable, the best parents that do not result in a cycle in the learned structure. In addition, constraint graphs are visually more intuitive than a set of written rules while also typically being simpler than a super-structure, because constraint graphs are defined over sets of variables instead of the original variables themselves. This intuition, combined with automatic methods for identifying parallelizable subproblems, makes constraint graphs easy for non-experts to define and use without requiring them to know the details of the structure learning task. This technique is similar to work done by <ns0:ref type='bibr' target='#b5'>Fan et al.
(2014)</ns0:ref>, where the authors describe the same computational gains through the identification of 'potentially optimal parent sets.' One difference is that Fan et al. define the constraints on individual variables instead of on sets of variables, as this work does. By defining the constraints on sets of variables instead of individual ones, one can identify further computational gains when presented with cyclic prior knowledge. Given that two types of graphs will be discussed throughout this paper, the Bayesian network we are attempting to learn and the constraint graph, we will use the terminology 'variable' exclusively in reference to the Bayesian network and 'node' exclusively in reference to the constraint graph.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>Constraint Graphs</ns0:head><ns0:p>A constraint graph is a directed graph in which nodes contain disjoint sets of variables from the BNSL task, and edges indicate which sets of variables can serve as parents to which other sets of variables. A self-loop in the constraint graph indicates that no prior knowledge is known about the relationship between variables in that node, whereas a lack of a self-loop indicates that no variables in that particular node can serve as parents for another variable in that node. Thus, the naive BNSL task can be represented as a constraint graph consisting of a single node with a self-loop. A constraint graph can be thought of as a way to group the variables (Fig. <ns0:ref type='figure' target='#fig_0'>1a</ns0:ref>), define relationships between these groups (Fig. <ns0:ref type='figure' target='#fig_0'>1c</ns0:ref>), and then guide the BNSL task to efficiently find the optimal structure given these constraints (Fig. <ns0:ref type='figure' target='#fig_0'>1d</ns0:ref>). In contrast, a directed super-structure defines all possible edges that can exist in accordance with the prior knowledge (Fig. <ns0:ref type='figure' target='#fig_0'>1b</ns0:ref>). Typically, a directed super-structure is far more complicated than the equivalent constraint graph. Cyclic prior knowledge can be represented as a simple cycle in the constraint graph, such that the variables in node A draw their parents solely from node B, and B from A.</ns0:p><ns0:p>Figure 1: The variables are colored according to the group that they belong to, which is defined by the user. These variables can either (b) be organized into a directed super structure or (c) grouped into a constraint graph to encode equivalent prior knowledge. Both graphs define the superset of edges which can exist, but the constraint graph uses far fewer nodes and edges to encode this knowledge. (d) Either technique can then be used to guide the BNSL task to learn the optimal Bayesian network given the constraints.</ns0:p><ns0:p>Any method for reducing computational time through prior knowledge exploits the 'global parameter independence property' of BNSL. Briefly, this property states that the optimal parents for a variable are independent of the optimal parents for another variable given that the variables do not form a cycle in the resulting Bayesian network. This acyclicity requirement is typically computationally challenging to determine because a cycle can involve more variables than the ones being directly considered, such as a graph which is simply a directed loop over all variables. However, given an acyclic constraint graph or an acyclic directed super-structure, it is impossible to form a cycle in the resulting structure; hence, the optimal parent set for each variable can be identified independently from all other variables. A convenient property of constraint graphs, and one of their advantages relative to other methods, is that independent subproblems can be found through global parameter independence even in constraint graphs which contain cycles. We describe in Section 3.2 the exact algorithm for finding optimal parent sets for each case one can encounter in a constraint graph. Briefly, the constraint graph is first broken up into its strongly connected components (SCCs) that identify which variables can have their parent sets found independently from all other variables ('solving a component') without the possibility of forming a cycle in the resulting graph.
Typically these SCCs will be single nodes from the constraint graph, but may be comprised of multiple nodes if cyclic prior knowledge is being represented. In the case of an acyclic constraint graph, all SCCs will be single nodes, and in fact each variable can be optimized without needing to consider other variables, in line with theoretical results from <ns0:ref type='bibr' target='#b12'>Ordyniak & Szeider (2013)</ns0:ref>. In addition to allowing these problems to be solved in parallel, this breakdown suggests a more efficient method of sharding the data in a distributed learning context. Specifically, one can assign an entire SCC of the constraint graph to a machine, including all columns of data corresponding to the variables in that SCC and all variables in nodes which are parents to nodes in the SCC. Given that all subproblems which involve this shard of the data are contained in this SCC of the constraint graph, there will never be duplicate shards and all tasks involving a shard are limited to the same machine. The concept of identifying SCCs as independent subproblems has also been described in <ns0:ref type='bibr' target='#b5'>Fan et al. (2014)</ns0:ref>.</ns0:p><ns0:p>It is possible to convert any directed super-structure into a constraint graph and vice-versa, though it is far simpler to go from a constraint graph to a directed super-structure. To convert from a directed super-structure to a constraint graph, one must first identify all strongly connected components that are more than a single variable. All variables in a strongly connected component can be put into the same node in a constraint graph that contains a self loop. Then, one would tabulate the unique parent and children sets a variable can have. All variables outside of the previously identified strongly connected components with the same parent and children sets can be grouped together into a node in the constraint graph. Edges then connect these sets based on the shared parent sets specified for each node. In the situation where a node in the constraint graph can draw parents from only a subset of the variables in a node created by the identification of the strongly connected components, the node must be broken into two nodes that both have self loops and edges connecting them to each other to allow for only a subset of those variables to serve as a parent for another node. In contrast, to convert from a constraint graph to a directed super-structure one would simply draw, for each node, an edge from all variables in the current node to all variables in the node's children. We suggest that constraint graphs are the more intuitive method both due to their simpler representation and ease of extracting computational benefits from the task.</ns0:p></ns0:div>
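<ns0:div><ns0:p>To make this decomposition concrete, the sketch below (an illustrative reimplementation, not the code used in this paper; the group names and edges are hypothetical) represents a constraint graph as a networkx DiGraph over named groups of variables, extracts its strongly connected components as independent subproblems, and expands it into the equivalent directed super-structure.</ns0:p><ns0:p>
# A minimal sketch (illustrative only) of representing a constraint graph
# and decomposing it into independent subproblems.
import networkx as nx

# Nodes of the constraint graph are disjoint groups of BNSL variables.
groups = {
    'A': ('x1', 'x2'),
    'B': ('x3', 'x4'),
    'C': ('x5',),
}

cg = nx.DiGraph()
cg.add_nodes_from(groups)
cg.add_edge('A', 'B')   # variables in A may serve as parents of variables in B
cg.add_edge('B', 'A')   # cyclic prior knowledge: a two-node cycle
cg.add_edge('B', 'C')   # B may also parent C; no node has a self-loop here

# Each strongly connected component is an independent subproblem
# ('solving a component'), even when the prior knowledge is cyclic.
subproblems = list(nx.strongly_connected_components(cg))
print(subproblems)       # e.g. [{'A', 'B'}, {'C'}]

# Expanding the constraint graph into the equivalent directed super-structure
# over the original variables (one edge per candidate parent-child pair).
super_structure = nx.DiGraph()
for parent_group, child_group in cg.edges():
    for u in groups[parent_group]:
        for v in groups[child_group]:
            super_structure.add_edge(u, v)
print(sorted(super_structure.edges()))
</ns0:p></ns0:div>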
<ns0:div><ns0:head n='3'>Methods</ns0:head></ns0:div>
<ns0:div><ns0:head n='3.1'>Bayesian Network Structure Learning</ns0:head><ns0:p>Although solving a component in a constraint graph can be accomplished by a variety of algorithms including heuristic algorithms, we assume for this paper that one is using some variant of the exact dynamic programming algorithm proposed by <ns0:ref type='bibr' target='#b10'>Malone et al. (2011)</ns0:ref>. We briefly review that algorithm here.</ns0:p><ns0:p>The goal of the algorithm is to identify the optimal Bayesian network defined over the set of variables without having to repeat any calculations and without having to use excessive memory. This is done by defining additional graphs, the parent graphs and the order graph. We will refer to each node in these graphs as 'entries' to distinguish them from the constraint graph and the learned Bayesian network. A parent graph is defined for each variable and can be defined as a lattice, where the entries in some layer i correspond to combinations of all other variables of size i. Each entry is connected to the entries in the previous layers that are subsets of that entry, such that (X1, X2) would be connected to both X1 and X2. For each entry, the score of the variable is calculated using the parents in the entry and compared to the scores held in the parent entries, recording only the best scoring value and parent set amongst them. These entries then hold the dynamically calculated best parent set and associated score, allowing for constant time lookups later on of the best parent set given a set of possible parents. The order graph is structured in the same manner as the parent graphs except over all variables. In contrast with the parent graphs, it is the edges that store useful information, in the form of the score associated with adding a given variable to the set of seen variables stored in the entry and the parent set that yields this score. Each path from the empty root node to the leaf node containing the full set of variables encodes the optimal network given a topological sort of the variables, and the shortest path encodes the optimal network. This data structure reduces the time required to find the optimal Bayesian network from O(n2^(n(n−1))) time in the number of variables to O(n2^n) time in the number of variables, without the need to keep a large cache of values.</ns0:p><ns0:p>Structure learning is flexible with respect to the score function used to identify the optimal graph. There are many score functions that typically aim to penalize the log likelihood of the data by the complexity of the graph to encourage sparser structures. These usually come in the form of Bayesian score functions, such as Bayesian-Dirichlet <ns0:ref type='bibr' target='#b8'>(Heckerman et al., 1995)</ns0:ref>, or those derived from information theory, such as minimum description length (MDL) <ns0:ref type='bibr' target='#b15'>(Suzuki, 1996)</ns0:ref>. Most score functions decompose across variables of a Bayesian network according to the global parameter independence property, such that the score for a dataset given a model is equal to the product of the score of each variable given its parents. While constraint graphs remain agnostic to the specific score function used, we assume that MDL is used as it has several desirable computational benefits. For review, MDL defines the score as the following:</ns0:p><ns0:formula xml:id='formula_1'>MDL(D|M) = P(D|M) − (1/2) log(N) |B| (1)</ns0:formula><ns0:p>where |B| defines the number of parameters in the network.
The term 'minimum description length' arises from needing (1/2) log(N) bits to represent each parameter in the model, making the second term the total number of bits needed to represent the full model. The MDL score function has the convenient property that a variable cannot have more than log(N / log(N)) parents given N samples, greatly reducing computational time.</ns0:p></ns0:div>
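<ns0:div><ns0:p>As a simplified illustration of how this decomposable score can be computed and used, the sketch below (our own, not the implementation evaluated here; it treats P(D|M) as the log-likelihood of discrete data coded as integers 0..k−1) scores a single variable against a candidate parent set and then exhaustively searches for its best-scoring parent set up to a maximum in-degree.</ns0:p><ns0:p>
# Illustrative sketch of a decomposable MDL-style score for one variable
# given a candidate parent set, and of an exhaustive parent-set search.
import numpy as np
from itertools import combinations

def mdl_score(data, child, parents):
    """Log-likelihood of `child` given `parents`, penalised by
    0.5 * log(N) * (number of free parameters). `data` is an integer-coded
    array with values 0..k-1 in every column."""
    N = data.shape[0]
    child_col = data[:, child]
    child_card = len(np.unique(child_col))
    if parents:
        parent_cols = data[:, list(parents)]
        # Group rows by parent configuration.
        configs, config_idx = np.unique(parent_cols, axis=0, return_inverse=True)
        config_idx = config_idx.ravel()
        parent_card = configs.shape[0]
    else:
        config_idx = np.zeros(N, dtype=int)
        parent_card = 1

    log_lik = 0.0
    for c in range(parent_card):
        rows = child_col[config_idx == c]
        counts = np.bincount(rows, minlength=child_card).astype(float)
        probs = counts / counts.sum()
        nonzero = counts > 0
        log_lik += float(np.sum(counts[nonzero] * np.log(probs[nonzero])))

    n_params = parent_card * (child_card - 1)
    return log_lik - 0.5 * np.log(N) * n_params

def best_parent_set(data, child, candidates, max_parents=3):
    """Exhaustively score every parent subset of size up to `max_parents`."""
    best = (mdl_score(data, child, ()), ())
    for k in range(1, max_parents + 1):
        for parents in combinations(candidates, k):
            best = max(best, (mdl_score(data, child, parents), parents))
    return best
</ns0:p></ns0:div>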
<ns0:div><ns0:head n='3.2'>Solving a Component of the Constraint Graph</ns0:head><ns0:p>The strongly connected components of a constraint graph can be identified using Tarjan's algorithm <ns0:ref type='bibr' target='#b17'>(Tarjan, 1971)</ns0:ref>. Each SCC corresponds to a subproblem of the constraint graph and can be solved independently. In many cases the SCC will be a single node of the constraint graph, because prior knowledge is typically not cyclic. In general, the SCCs of a constraint graph can be solved in any order due to the global parameter independence property.</ns0:p><ns0:p>The algorithm for solving an SCC of a constraint graph is a straightforward modification of the dynamic programming algorithm described above. Specifically, parent graphs are created for each variable in the SCC but defined only over the union of possible parents for that variable. Consider the case of a simple, four-node cycle with no self-loops such that W → X → Y → Z → W. A parent graph is defined for each variable in W ∪ X ∪ Y ∪ Z but only over valid parents. For example, the parent graph for X1 would be over only variables in W. Then, an order graph is defined with entries that violate the edge structure of the constraint graph filtered out. The first layer of the order graph would be unchanged with only singletons, but the second layer would prohibit entries with two variables from the same node, because there are no valid orderings in which Xi is a parent of Xj, and would prohibit entries in which a variable of W is joined with a variable of Y. One can identify valid entries by taking the entries of a previous layer and iterating over each variable present, adding all valid parents for that variable which are not already present in the set.</ns0:p><ns0:p>A simple example illustrating the algorithm is a constraint graph made up of a four node cycle where each node contains only a single variable (Fig <ns0:ref type='figure' target='#fig_2'>2a</ns0:ref>). The parent graphs defined for this would consist solely of two entries, the null entry and the entry corresponding to the only valid parent. The first layer of the order graph would be all variables as previously (2b). However, once a variable is chosen to start the topological ordering the order of the remaining variables is fixed because of the constraints, producing a far simpler lattice.</ns0:p><ns0:p>Because constraint graphs can encode a wide variety of different constraints, the complexity of the task depends on the structure of the constraint graph. Broadly, the results from Ordyniak & Szeider (2013) still hold, namely, that acyclic constraint graphs can be solved in quadratic time. As was found in <ns0:ref type='bibr' target='#b5'>Fan et al. (2014)</ns0:ref>, because each SCC can be solved independently, the time complexity for constraint graphs containing a cycle corresponds to the time complexity of the worst case component.</ns0:p><ns0:p>Fortunately, although the complexity of a node engaging in a cycle is still exponential, it is only exponential with respect to the number of variables that node interacts with.
Adding additional, equally sized nodes to the constraint graph only causes the algorithm to grow linearly in time and has no additional memory cost if the components are solved sequentially.</ns0:p><ns0:p>The algorithm described above has five natural cases, which are described below.</ns0:p><ns0:p>One node, no parents, no self loop: The variables in this node contain no parents, so nothing needs to be done to find the optimal parent sets given the constraints. This naturally takes O(1) time to solve.</ns0:p><ns0:p>One node, no parents, self loop: This is equivalent to exact BNSL with no prior knowledge. In this case, the previously proposed dynamic programming algorithm is used to identify the optimal structure of the subnetwork containing only variables in this node. This takes O(n2^n) time where n is the number of variables in the node.</ns0:p><ns0:p>One node, one or more parent nodes, no self loop: In this case it is impossible for a cycle to be formed in the resulting Bayesian network regardless of optimal parent sets, so we can justify solving every variable in this node independently by the global parameter independence property. Doing so results in a significant improvement over applying the algorithm naively, because neither the parent graphs nor the order graph need to be explicitly calculated or stored. The optimal parent set can be calculated without the need for dynamic programming because the optimal topological ordering does not need to be discovered. Because no dynamic programming needs to be done, there is no need to store either the parent or order graphs in memory. This takes O(nm^k) time, where n is the number of variables in the node, m is the number of possible parents, and k is the maximum number of parents that a variable can have, in this case set by the MDL algorithm. If k is set to any constant value, then this step requires quadratic time with respect to the number of possible parents and linear with respect to the number of variables in the node.</ns0:p><ns0:p>One node, one or more parents, self loop: Initially, one may think that solving this SCC could involve taking the union of all variables from all involved nodes, running exact BNSL over the full set, and simply discarding the parent sets learned for the variables not in the currently considered node. However, in the same way that one should not handle prior knowledge by learning the optimal graph over all variables and discarding edges which offend the prior knowledge, one should not do the same in this case. Instead, a modification to the dynamic programming algorithm itself can be made to restrict the parent sets on a variable-by-variable basis. For simplicity, we define the variables in the current node of the constraint graph as X and the union of all variables in the parent nodes in the constraint graph as Y. We begin by setting up an order graph, as usual defined over X. We then add Y to each node in the order graph such that the root node is now comprised of Y instead of the empty set and the leaf node is comprised of X ∪ Y instead of just X. Because the primary purpose of the order graph is to identify the optimal parent sets that do not form cycles, this addition is intuitive because it is impossible to form a cycle by including any of the variables in Y as parents for any of the variables in X.
In other words, if one attempted to find the optimal topological ordering over X ∪ Y, it would always begin with the variables in Y but would be invariant to the ordering of Y. Parent graphs are then created for all variables in X but are defined over the set of all variables in X ∪ Y, because that is the full set of parents that the variables could be drawn from. This restriction allows the optimal parents for each variable in X to be identified without wasting time considering what the parent set for variables in Y should be, or potentially throwing away the optimal graph because of improper edges leading from a variable in Y to a variable in X. This step takes O(n2^(n+m)) time, where n is the number of variables in the node and m is the number of variables in the parent nodes. This is because we only need to define a parent graph for the variables in the node we are currently considering, but these parent graphs must be defined over all variables in the node plus all the variables in the parent nodes.</ns0:p><ns0:p>Multiple nodes: The algorithm as presented initially is used to solve an entire component at the same time.</ns0:p></ns0:div>
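<ns0:div><ns0:p>As a concrete illustration of the constrained order graph construction, the sketch below reimplements the expansion rule described above for the four-node cycle example; it is an illustrative sketch (the variable names and helper function are ours), not the implementation evaluated in this paper.</ns0:p><ns0:p>
# Sketch: enumerating the valid order-graph entries for one SCC of a
# constraint graph, layer by layer, using the expansion rule described above.
from itertools import chain

def valid_order_entries(variables, allowed_parents):
    """Enumerate order-graph entries that respect the constraint graph.
    `allowed_parents[v]` is the set of candidate parents of variable v
    within this strongly connected component."""
    layers = [[frozenset()]]                  # the root entry is the empty set
    seen = {frozenset()}
    while True:
        next_layer = set()
        for entry in layers[-1]:
            if not entry:
                candidates = set(variables)   # any variable may start the ordering
            else:
                # Add every valid parent of a variable already in the entry
                # that is not itself already present.
                candidates = set(chain.from_iterable(
                    allowed_parents[v] for v in entry)) - entry
            for v in candidates:
                grown = entry | {v}
                if grown not in seen:
                    seen.add(grown)
                    next_layer.add(grown)
        if not next_layer:
            break
        layers.append(sorted(next_layer, key=sorted))
    return layers

# The four-node cycle W -> X -> Y -> Z -> W with one variable per node.
allowed = {'w': {'z'}, 'x': {'w'}, 'y': {'x'}, 'z': {'y'}}
for layer in valid_order_entries(['w', 'x', 'y', 'z'], allowed):
    print([sorted(entry) for entry in layer])
</ns0:p></ns0:div>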
<ns0:div><ns0:head n='4'>Results</ns0:head><ns0:p>While it is intuitive how a constraint graph provides computational gains by splitting the structure learning task into subproblems, we have thus far only alluded to the idea that prior knowledge can provide efficiencies past that. In this section we examine the computational gains achieved in the three non-trivial cases of the algorithm presented in Section 3.2.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.1'>Acyclic Constraint Graphs Can Model the Global Stock Market</ns0:head><ns0:p>First, we examine the computational benefits of an acyclic constraint graph modeling the global stock market. In particular, we want to identify for each stock which other stocks are predictive of its performance. We chose to do this by learning a Bayesian network over the opening and closing prices of stocks traded on three markets. The opening and closing prices for the same market are grouped into separate nodes, for a total of six nodes in the constraint graph. There are no self-loops because the opening price of one stock does not influence the opening price of another stock. Naturally, the closing prices of one group of stocks are influenced by the opening price of the stocks from the same market, but they are also influenced by the opening or closing prices of any markets which opened or closed in the meantime. For instance, the TSE closes after the FTSE opens, so the FTSE opening prices have the opportunity to influence the TSE closing prices. However, the TSE closes before the NYSE opens, so the NYSE cannot influence those stock prices. The dataset consists of opening and closing prices from these stocks between December 2nd 2015 and November 29th 2016, binarized to indicate whether the value was an increase compared to the prior price seen.</ns0:p><ns0:p>The resulting Bayesian network has some interesting connections (Fig. <ns0:ref type='figure' target='#fig_3'>3b</ns0:ref>). For example, the opening price of Microsoft influences the closing price of Raytheon, and the closing price of Debenhams plc, a British multinational retailer, influences the closing price of GE. In addition, there were some surprising and unexplained connections, such as Google and Johnson & Johnson influencing the closing price of Cobham plc, a British defense firm. Given that this example is primarily to illustrate the types of constraints a constraint graph can easily model, we suggest caution in thinking too deeply about these connections.</ns0:p><ns0:p>It took only ∼35 seconds on a computer with modest hardware to run BNSL over 250 samples. If we set the maximum number of parents to three, which is the empirically determined maximum number of parents, then it only takes ∼2 seconds to run. In contrast, it would be infeasible to run the exact BNSL algorithm on even half the number of variables considered here.</ns0:p></ns0:div>
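<ns0:div><ns0:p>A sketch of how such a constraint graph could be assembled is shown below. The node names and the intraday ordering of market events are illustrative assumptions; only the qualitative rule described above is intended, namely that a market's closing prices may draw parents from its own opening prices and from any market event occurring earlier that day.</ns0:p><ns0:p>
# Sketch of the stock-market constraint graph (node names and the event
# ordering are assumptions for illustration).
import networkx as nx

# Approximate order of events within one trading day (earliest first).
events = ['TSE_open', 'FTSE_open', 'TSE_close', 'NYSE_open', 'FTSE_close', 'NYSE_close']

cg = nx.DiGraph()
cg.add_nodes_from(events)                      # six nodes, no self-loops
for i, later in enumerate(events):
    if later.endswith('_close'):
        market = later.split('_')[0]
        for earlier in events[:i]:
            # A close node may draw parents from its own market's open and
            # from any other market that opened or closed earlier that day.
            if earlier == market + '_open' or not earlier.startswith(market):
                cg.add_edge(earlier, later)

print(cg.has_edge('FTSE_open', 'TSE_close'))   # True: FTSE opens before TSE closes
print(cg.has_edge('NYSE_open', 'TSE_close'))   # False: NYSE opens after TSE closes
</ns0:p></ns0:div>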
<ns0:div><ns0:head n='4.2'>Constraint Graphs Allow Learning of Bayesian Network Classifiers</ns0:head><ns0:p>Bayesian network classifiers are an extension of Bayesian networks to supervised learning tasks, created by defining a Bayesian network over both the feature variables and the target variables together. Normal inference methods are used to predict the target variables given the observed feature variables. In the case where feature variables are always observed, only the Markov blanket of the target variables must be defined, i.e. their parents and children. The other variables are independent of the target variables and can be discarded, serving as a form of feature selection.</ns0:p><ns0:p>Table <ns0:ref type='table'>2</ns0:ref>: Algorithm comparison on a node with a self loop and other parents. The exact algorithm and the constrained algorithm proposed here were run on an SCC comprised of a main node with a self loop and one parent node. Shown are the results of increasing the number of variables in the main node while keeping the variables in the parent node steady at 5, and the results of increasing the number of variables in the parent node while keeping the number of variables in the main node constant. For both algorithms we show the number of nodes across all parent graphs (PGN), the number of nodes in the order graph (OGN), the number of edges in the order graph (OGE) and the time to compute.</ns0:p><ns0:p>A popular Bayesian network classifier is the naive Bayes classifier, which defines a single class variable as the parent to all feature variables. A natural extension to this method is to learn which features are useful, instead of assuming they all are, thereby combining feature selection with parameter learning in a manner that has some similarities to decision trees. This approach can be modeled by using a constraint graph that has all feature variables X in one node and all target variables y in its parent node, such that y → X.</ns0:p><ns0:p>We empirically evaluated the performance of learning a simple Bayesian network classifier on the UCI Digits Dataset. The digits dataset is a collection of 8x8 images of handwritten digits, where the features are discretized values between 0 and 16 representing the intensity of each pixel and the labels are between 0 and 9 representing the digit stored there. We learn a Bayesian network where the 64 pixels are in one node in the constraint graph and the class label is by itself in another node in the constraint graph that serves as a parent. We then train a Bayesian network classifier, a naive Bayes classifier, and a random forest classifier comprised of 100 trees on a training set of 1500 images and test their performance on a held-out set of 297 images. As expected, the learned Bayesian network classifier falls between naive Bayes and the random forest in terms of both training time and test set performance (Table <ns0:ref type='table'>1</ns0:ref>). Furthermore, more complicated Bayesian network classifiers can be learned with different constraint graphs. One interesting extension is, instead of constraining all features to be children of the target variable, to allow features to be either parents or children of the target variable. This can be specified by a cyclic constraint graph where y → X → y, preventing the model from spending time identifying dependencies between the features.
Finally, in cases where some features may be missing, it may be beneficial to model all dependencies between the features in order to allow inference to flow to the target variables from observed variables that are not directly connected to them. This can be modeled by adding a self loop on the feature variables X, allowing all edges to be learned except those between pairs of target variables. Learning a Bayesian network classifier in this manner will suffer from the same computational challenges as an unconstrained version, given the looseness of the constraints. </ns0:p></ns0:div>
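<ns0:p>A minimal Python sketch of the digits experiment follows. The binarization and the two baseline classifiers use scikit-learn; the constrained Bayesian network classifier itself is only indicated schematically, since learn_bnc is a hypothetical placeholder and not an existing function.</ns0:p>
import numpy as np
import networkx as nx
from sklearn.datasets import load_digits
from sklearn.naive_bayes import BernoulliNB
from sklearn.ensemble import RandomForestClassifier

digits = load_digits()                               # 1797 8x8 images, intensities 0-16
X = (digits.data > digits.data.mean()).astype(int)   # 1 if the pixel is above average intensity
y = digits.target
X_train, X_test, y_train, y_test = X[:1500], X[1500:], y[:1500], y[1500:]

print(BernoulliNB().fit(X_train, y_train).score(X_test, y_test))
print(RandomForestClassifier(n_estimators=100).fit(X_train, y_train).score(X_test, y_test))

# Constraint graph y -> X: the class label (one node) is the only allowed parent of
# the 64 pixel variables (the other node), and no edges are allowed among the pixels.
cg = nx.DiGraph()
cg.add_edge((64,), tuple(range(64)))                 # column 64 holds the label
# model = learn_bnc(np.column_stack([X_train, y_train]), constraint_graph=cg)   # hypothetical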
<ns0:div><ns0:head n='4.3'>Self-Loops And Parents</ns0:head><ns0:p>We then turn to the case where the strongly connected component is a main node with a self loop and a parent node. Because an order graph is defined only over the variables in the main node, its size is invariant to the number of variables in the parent node, allowing for speed improvements when it comes to calculating the shortest path. In addition, parent graphs are only defined for variables in the parent set, and so while they are not smaller than the ones in the exact algorithm, there are fewer. We compare the computational time and complexity of the underlying order and parent graphs between the exact algorithm over the full set of variables and the modified algorithm based on a constraint graph (Table <ns0:ref type='table'>.</ns0:ref> 2). The data consisted of randomly generated binary values, because the running time does not depend on the presence of underlying structure in the data. We note that in all cases there are significant speed improvements and simpler graphs, but that there are particularly encouraging speed improvements when the number of variables in the main node is increased. This suggests that it is always worth the time to identify which variables can be moved from a node with a self loop to a separate node.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4.4'>Cyclic Constraint Graphs</ns0:head><ns0:p>Lastly, we consider constraint graphs that encode cyclic prior knowledge. We visually inspect the results from cyclic constraint graphs to ensure that they do not produce cyclic Bayesian networks even when the potential exists. Two separate constraint graphs are inspected, a two node cycle and a four node cycle (Fig. <ns0:ref type='figure' target='#fig_4'>4a/c</ns0:ref>). The dataset is comprised of random binary values, where the value of one variable in the cycle is copied to the other variables in the cycle to add synthetic structure. However, by jointly solving all nodes, cycles are avoided while dependencies are still captured (Fig. <ns0:ref type='figure' target='#fig_4'>4b/d</ns0:ref>).</ns0:p><ns0:p>We then compare the exact algorithm without constraints to the use of an appropriate constraint graph in a similar manner as before (Table <ns0:ref type='table'>.</ns0:ref> 3). This is done first for four node cycles where we increase the number of variables in each node of the constraint graph and then for cycles of increasing size with three variables per node. The exact algorithm likely produces structures that are invalid according to the constraints and so this comparison is done solely to highlight that efficiencies are gained by considering the constraints. In each case using a constraint graph yields simpler parent and order graphs and the computational time is significantly reduced. The biggest difference is in the number of nodes in the parent graphs, as the constraints place significant limitations on which variables are allowed to be parents for which other variables. Since the construction of the parent graph is the only part of the algorithm which considers the dataset itself, it is unsurprising that significant savings are achieved for larger datasets when much smaller parent graphs are used.</ns0:p></ns0:div>
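<ns0:p>Written in the same tuple-of-columns style, the two cyclic constraint graphs of Fig. 4 are only a few lines of Python; this sketch encodes the allowed parent sets only and assumes that the learner solves each strongly connected component jointly, as described earlier.</ns0:p>
import networkx as nx

# (a) a two node cycle, four variables per node
a, b = tuple(range(0, 4)), tuple(range(4, 8))
two_node_cycle = nx.DiGraph([(a, b), (b, a)])

# (c) a four node cycle, four variables per node
nodes = [tuple(range(4 * i, 4 * i + 4)) for i in range(4)]
four_node_cycle = nx.DiGraph([(nodes[i], nodes[(i + 1) % 4]) for i in range(4)])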
<ns0:div><ns0:head n='5'>Discussion</ns0:head><ns0:p>Constraint graphs are a flexible way of encoding into the BNSL task prior knowledge concerning the relationships among variables. The graph structure can be exploited to identify potentially massive computational gains, and acyclic constraint graphs make problems tractable which would be infeasible to solve without constraints. This is particularly useful in cases where there are both a great number of variables and many constraints present from prior knowledge. We anticipate that the automatic manner in which parallelizable subtasks are identified in a constraint graph will be of particular interest given the recent increase in availability of distributed computing.</ns0:p><ns0:p>Although the networks learned in this paper are discrete, the same principles can be applied to all types of Bayesian networks. Because the constraint graph represents only a restriction in the parent set on a variable-by-variable basis, the same algorithms that are used to learn linear Gaussian or hybrid networks can be seamlessly combined with the idea of a constraint graph. In addition, most of the approximation algorithms which have been developed for BNSL can be modified to take into account constraints because these algorithms simply encode a limitation on the parent set for each variable.</ns0:p><ns0:p>One could extend constraint graphs in several interesting ways. The first is to assign weights to edges so that the weight represents the prior probability that the variables in the parent set are parents of the variables in the child set, perhaps as pseudocounts to take into account when coupled with a Bayesian scoring function. A second way is to incorporate 'hidden nodes' that are variables which model underlying, unobserved phenomena and can be used to reduce the parameterization of the network. Several algorithms have been proposed for learning the structure of a Bayesian network given hidden variables <ns0:ref type='bibr' target='#b4'>(Elidan et al., 2001;</ns0:ref><ns0:ref type='bibr' target='#b3'>Elidan & Friedman, 2005;</ns0:ref><ns0:ref type='bibr' target='#b6'>Friedman, 1997)</ns0:ref>. Modifying these algorithms to obey a constraint graph seems like a promising way to incorporate restrictions on this difficult task. A final way may be to encode ancestral relationships instead of direct parent relationships, indicating that a given variable must occur at some point before some other variable in the topological ordering.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1:</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1: A constraint graph grouping variables. (a) We wish to learn a Bayesian network over 11 variables. The variables are colored according to the group that they belong to, which is defined by the user. These variables can either (b) be organized into a directed super structure or (c) grouped into a constraint graph to encode equivalent prior knowledge. Both graphs define the superset of edges which can exist, but the constraint graph uses far fewer nodes and edges to encode this knowledge. (d) Either technique can then be used to guide the BNSL task to learn the optimal Bayesian network given the constraints.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 :</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure2: An example of a constraint graph and resulting order graph (a) A constraint graph is defined as a cycle over four nodes with each node containing a single variable. (b) The resulting order graph during the BNSL task. It is significantly sparser than the typical BNSL task because after choosing a variable to start the topological ordering the remaining variables must be added in the order defined by the cycle.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure3: A section of the learned Bayesian network of the global stock market. (a) The constraint graph contains six nodes, the opening and closing prices for each of the three markets. These are connected such that the closing prices in a market depend on the opening prices but also the most recent international activity. (b) The most connected subset of stocks from the learned network covering 25 variables.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4:</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4: Cyclic constraint graphs (a) This constraint graph is comprised of a simple two node cycle with each node containing four variables. (b) The learned Bayesian network on random data where some variables were forced to identical values. Each circle here corresponds to a variable in the resulting Bayesian network instead of a node in the constraint graph. There were multiple possible cycles which could have been formed but the constraint graph prevented that from occurring. (c) This constraint graph now encodes a four node cycle each with four variables. (d) The learned Bayesian network on random data with two distinct loops of identical values forced. Again, no loops are formed.</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head /><ns0:label /><ns0:figDesc /><ns0:table><ns0:row><ns0:cell>Model</ns0:cell><ns0:cell>Train Time (s)</ns0:cell><ns0:cell>Test Set Accuracy</ns0:cell></ns0:row><ns0:row><ns0:cell>Naive Bayes</ns0:cell><ns0:cell>0.05</ns0:cell><ns0:cell>0.79</ns0:cell></ns0:row><ns0:row><ns0:cell>BNC</ns0:cell><ns0:cell>0.5</ns0:cell><ns0:cell>0.81</ns0:cell></ns0:row><ns0:row><ns0:cell>Random Forest</ns0:cell><ns0:cell>1.4</ns0:cell><ns0:cell>0.89</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 1 :</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Model comparison between naive Bayes, Bayesian network classifiers (BNC), and random forest. Three algorithms were evaluated on the UCI handwritten digits dataset, fed in the binarized value corresponding to whether the intensity of a pixel was above average. The fitting time and test set accuracy are reported for each algorithm.</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Exact</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Constraint Graph</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Variables</ns0:cell><ns0:cell>PGN</ns0:cell><ns0:cell>OGN</ns0:cell><ns0:cell cols='2'>OGE Time (s)</ns0:cell><ns0:cell cols='2'>PGN OGN</ns0:cell><ns0:cell cols='2'>OGE Time (s)</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>2304</ns0:cell><ns0:cell>512</ns0:cell><ns0:cell>2304</ns0:cell><ns0:cell>0.080</ns0:cell><ns0:cell>1024</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>0.033</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>53248</ns0:cell><ns0:cell>8192</ns0:cell><ns0:cell>53248</ns0:cell><ns0:cell>1.30</ns0:cell><ns0:cell>32768</ns0:cell><ns0:cell>256</ns0:cell><ns0:cell>1024</ns0:cell><ns0:cell>0.545</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>12 1114112 131072 1114112</ns0:cell><ns0:cell cols='4'>27.03 786432 4096 24576</ns0:cell><ns0:cell>9.56</ns0:cell></ns0:row><ns0:row><ns0:cell>Parents</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>2304</ns0:cell><ns0:cell>512</ns0:cell><ns0:cell>2304</ns0:cell><ns0:cell>0.087</ns0:cell><ns0:cell>1280</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>80</ns0:cell><ns0:cell>0.045</ns0:cell></ns0:row><ns0:row><ns0:cell>8</ns0:cell><ns0:cell>53248</ns0:cell><ns0:cell>8192</ns0:cell><ns0:cell>53258</ns0:cell><ns0:cell>1.401</ns0:cell><ns0:cell>20480</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>80</ns0:cell><ns0:cell>0.356</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>12 1114112 131072 1114112</ns0:cell><ns0:cell cols='2'>27.22 327680</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>80</ns0:cell><ns0:cell>4.01</ns0:cell></ns0:row><ns0:row><ns0:cell>Table</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>9</ns0:head><ns0:label /><ns0:figDesc>PeerJ Comput. Sci. reviewing PDF | (CS-2017:03:16734:2:0:NEW 6 Jun 2017)</ns0:figDesc><ns0:table><ns0:row><ns0:cell cols='4'>Computer Science</ns0:cell><ns0:cell /><ns0:cell cols='4'>Manuscript to be reviewed</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell cols='2'>Exact</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell>Exact</ns0:cell><ns0:cell /></ns0:row><ns0:row><ns0:cell>Variables</ns0:cell><ns0:cell>PGN</ns0:cell><ns0:cell>OGN</ns0:cell><ns0:cell cols='3'>OGE Time (s) PGN</ns0:cell><ns0:cell>OGN</ns0:cell><ns0:cell cols='2'>OGE Time (s)</ns0:cell></ns0:row><ns0:row><ns0:cell>1</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>0.005</ns0:cell><ns0:cell>8</ns0:cell><ns0:cell>14</ns0:cell><ns0:cell>16</ns0:cell><ns0:cell>0.005</ns0:cell></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>1024</ns0:cell><ns0:cell>256</ns0:cell><ns0:cell>1024</ns0:cell><ns0:cell>0.036</ns0:cell><ns0:cell>32</ns0:cell><ns0:cell>186</ns0:cell><ns0:cell>544</ns0:cell><ns0:cell>0.014</ns0:cell></ns0:row><ns0:row><ns0:cell>3</ns0:cell><ns0:cell>24576</ns0:cell><ns0:cell>4096</ns0:cell><ns0:cell>24576</ns0:cell><ns0:cell>0.611</ns0:cell><ns0:cell>96</ns0:cell><ns0:cell>3086</ns0:cell><ns0:cell>16032</ns0:cell><ns0:cell>0.320</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>524288</ns0:cell><ns0:cell>65536</ns0:cell><ns0:cell>525288</ns0:cell><ns0:cell>14.0</ns0:cell><ns0:cell>256</ns0:cell><ns0:cell>54482</ns0:cell><ns0:cell>407328</ns0:cell><ns0:cell>7.12</ns0:cell></ns0:row><ns0:row><ns0:cell>Nodes</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>2</ns0:cell><ns0:cell>192</ns0:cell><ns0:cell>64</ns0:cell><ns0:cell>192</ns0:cell><ns0:cell>0.111</ns0:cell><ns0:cell>48</ns0:cell><ns0:cell>56</ns0:cell><ns0:cell>150</ns0:cell><ns0:cell>0.008</ns0:cell></ns0:row><ns0:row><ns0:cell>4</ns0:cell><ns0:cell>24576</ns0:cell><ns0:cell>4096</ns0:cell><ns0:cell>24576</ns0:cell><ns0:cell>0.634</ns0:cell><ns0:cell>96</ns0:cell><ns0:cell>3086</ns0:cell><ns0:cell>16032</ns0:cell><ns0:cell>0.217</ns0:cell></ns0:row><ns0:row><ns0:cell cols='4'>6 2359296 262144 2359296</ns0:cell><ns0:cell>60.9</ns0:cell><ns0:cell cols='3'>144 168068 1307358</ns0:cell><ns0:cell>26.12</ns0:cell></ns0:row><ns0:row><ns0:cell>Samples</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row><ns0:row><ns0:cell>100</ns0:cell><ns0:cell>24576</ns0:cell><ns0:cell>4096</ns0:cell><ns0:cell>24576</ns0:cell><ns0:cell>0.357</ns0:cell><ns0:cell>96</ns0:cell><ns0:cell>3086</ns0:cell><ns0:cell>16032</ns0:cell><ns0:cell>0.311</ns0:cell></ns0:row><ns0:row><ns0:cell>1000</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0.615</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0.211</ns0:cell></ns0:row><ns0:row><ns0:cell>10000</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>2.670</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>0.357</ns0:cell></ns0:row><ns0:row><ns0:cell>100000</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>243.9</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>-</ns0:cell><ns0:cell>10.41</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 :</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Algorithm comparison on a cyclic constraint graph. The exact algorithm and the constrained algorithm proposed here were run for four node cycles with differing numbers of variables, cycles with different numbers of nodes but three variables per node, and differing numbers of samples for a four-node three-variable cycle. All experiments with differing numbers of variables or nodes were run on 1000 randomly generated samples. Shown for both algorithms are the number of nodes across all parent graphs (PGN), the number of nodes in the order graph (OGN), the number of edges in the order graph (OGE) and the time to compute. Since the number of nodes does not change as a function of samples those values are not repeated in the blank cells.</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "We thank the reviewers for their helpful comments, and the editor for supervising the process.
We address the few comments remaining from Reviewer 2 on a point by point basis. Our
comments are colored in blue (such as here), unmodified text from the manuscript is also in
blue, and modified text from the manuscript is in red. The reviewer's original comments are in
black.
Reviewer 2 (Anonymous)
Basic reporting
The authors have properly addressed my concern about self-containedness.
Fig. 3 still contains the minor mistake, but it can easily be fixed without further review.
Experimental design
The data-sets have been described in more detail, so the authors have addressed my
concern.
Validity of the findings
The authors have included a small real-world example (feature selection using naive
Bayes) which reinforces the validity of their findings.
Comments for the Author
The authors did a good job in revising the manuscript and I suggest acceptance.
In the final version, please fix the following really minor issues:
* Figure 3 still has a caption mistake
We have amended the text in the manuscript. The caption now reads:
A section of the learned Bayesian network of the global stock market. (a) The
constraint graph contains six nodes, the opening and closing prices for each of the three
markets. These are connected such that the closing prices in a market depend on the
opening prices but also the most recent international activity. (b) The most connected
subset of stocks from the learned network covering 25 variables.
* The Markov Blanket of a BN (defined on page) also includes the co-parents of a node's
children.
We have amended the text in the body of the manuscript to now read:
Furthermore, more complicated Bayesian network classifiers can be learned with different
constraint graphs. One interesting extension is that instead of constraining all features to
be children of the target variable, to allow features to be either parents or children of the
target variable. This can be specified by a cyclic constraint graph where $y \rightarrow X
\rightarrow y$, preventing the model from spending time identifying dependencies between
the features. Finally, in cases where some features may be missing, it may be beneficial to
model all dependencies between the features in order to allow inference to flow to the
target variables from observed variables that are not directly connected to them.
This can be modeled by adding a self loop on the feature variables $X$, allowing all
edges to be learned except those between pairs of target variables. Learning a Bayesian
network classifier in this manner will suffer from the same computational challenges as an
unconstrained version, given the looseness of the constraints.
* On page 8, line 307, grammar: probably 'that' needs to be deleted.
We have removed the word “that” from the sentence, making that line now read as follows:
Furthermore, more complicated Bayesian network classifiers can be learned with different
constraint graphs.
" | Here is a paper. Please give your review comments after reading it. |
743 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Objects of more than three dimensions can be used to model geographic phenomena that occur in space, time and scale. For instance, a single 4D object can be used to represent the changes in a 3D object's shape across time or all its optimal representations at various levels of detail. In this paper, we look at how such higher-dimensional space-time and space-scale objects can be visualised as projections from \(\mathbb{R}^4\) to \(\mathbb{R}^3\). We present three projections that we believe are particularly intuitive for this purpose: (i) a simple `long axis' projection that puts 3D objects side by side; (ii) the well-known orthographic and perspective projections; and (iii) a projection to a 3-sphere (\(S^3\)) followed by a stereographic projection to \(\mathbb{R}^3\), which results in an inwards-outwards fourth axis. Our focus is in using these projections from \(\mathbb{R}^4\) to \(\mathbb{R}^3\), but they are formulated from \(\mathbb{R}^n\) to \(\mathbb{R}^{n-1}\) so as to be easily extensible and to incorporate other non-spatial characteristics. We present a prototype interactive visualiser that applies these projections from 4D to 3D in real-time using the programmable pipeline and compute shaders of the Metal graphics API.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>BACKGROUND</ns0:head><ns0:p>Projecting the 3D nature of the world down to two dimensions is one of the most common problems at the juncture of geographic information and computer graphics, whether as the map projections in both paper and digital maps <ns0:ref type='bibr' target='#b78'>(Snyder, 1987;</ns0:ref><ns0:ref type='bibr' target='#b35'>Grafarend and You, 2014)</ns0:ref> or as part of an interactive visualisation of a 3D city model on a computer screen <ns0:ref type='bibr' target='#b31'>(Foley and Nielson, 1992;</ns0:ref><ns0:ref type='bibr' target='#b76'>Shreiner et al., 2013)</ns0:ref>. However, geographic information is not inherently limited to objects of three dimensions. Non-spatial characteristics such as time <ns0:ref type='bibr' target='#b41'>(Hägerstrand, 1970;</ns0:ref><ns0:ref type='bibr' target='#b40'>Güting et al., 2000;</ns0:ref><ns0:ref type='bibr' target='#b49'>Hornsby and Egenhofer, 2002;</ns0:ref><ns0:ref type='bibr' target='#b53'>Kraak, 2003)</ns0:ref> and scale <ns0:ref type='bibr' target='#b61'>(Meijers, 2011a)</ns0:ref> are often conceived and modelled as additional dimensions, and objects of three or more dimensions can be used to model objects in 2D or 3D space that also have changing geometries along these non-spatial characteristics <ns0:ref type='bibr' target='#b84'>(van Oosterom and Stoter, 2010;</ns0:ref><ns0:ref type='bibr' target='#b4'>Arroyo Ohori, 2016)</ns0:ref>. For example, a single 4D object can be used to represent the changes in a 3D object's shape across time <ns0:ref type='bibr' target='#b11'>(Arroyo Ohori et al., 2017)</ns0:ref> or all the best representations of a 3D object at various levels of detail <ns0:ref type='bibr' target='#b55'>(Luebke et al., 2003;</ns0:ref><ns0:ref type='bibr' target='#b83'>van Oosterom and Meijers, 2014;</ns0:ref><ns0:ref type='bibr'>Arroyo Ohori et al., 2015a,c)</ns0:ref>.</ns0:p><ns0:p>Objects of more than three dimensions can be however unintuitive <ns0:ref type='bibr' target='#b64'>(Noll, 1967;</ns0:ref><ns0:ref type='bibr' target='#b32'>Frank, 2014)</ns0:ref>, and visualising them is a challenge. While some operations on a higher-dimensional object can be achieved by running automated methods (e.g. certain validation tests or area/volume computations) or by visualising only a chosen 2D or 3D subset (e.g. some of its bounding faces or a cross-section), sometimes there is no substitute for being able to view a complete nD object-much like viewing floor or fac ¸ade plans is often no substitute for interactively viewing the complete 3D model of a building. By viewing a complete model, one can see at once the 3D objects embedded in the model at every point in time or scale as well as the equivalences and topological relationships between their constituting elements. More directly, it also makes it possible to get an intuitive understanding of the complexity of a given 4D model. Figure <ns0:ref type='figure'>1</ns0:ref>. 
A 4D model of a house at two levels of detail and all the equivalences its composing elements is a polychoron bounded by: (a) volumes representing the house at the two levels of detail, (b) a pyramidal volume representing the window at the higher LOD collapsing to a vertex at the lower LOD, (c) a pyramidal volume representing the door at the higher LOD collapsing to a vertex at the lower LOD, and a roof volume bounded by (a) the roof faces of the two LODs, (b) the ridges at the lower LOD collapsing to the tip at the higher LOD and (c) the hips at the higher LOD collapsing to the vertex below them at the lower LOD. (d) A 3D cross-section of the model obtained at the middle point along the LOD axis.</ns0:p></ns0:div>
<ns0:div><ns0:head>For instance, in</ns0:head><ns0:p>This paper thus looks at a key aspect that allows higher-dimensional objects to be visualised interactively, namely how to project higher-dimensional objects down to fewer dimensions. While there is previous research on the visualisation of higher-dimensional objects, we aim to do so in a manner that is reasonably intuitive, implementable and fast. We therefore discuss some relevant practical concerns, such as how to also display edges and vertices and how to use compute shaders to achieve good framerates in practice.</ns0:p><ns0:p>In order to do this, we first briefly review the most well-known transformations (translation, rotation and scale) and the cross-product in nD, which we use as fundamental operations in order to project objects and to move around the viewer in an nD scene. Afterwards, we show how to apply three different projections from R n to R n−1 and argue why we believe they are intuitive enough for real-world use. These can be used to project objects from R 4 to R 3 , and if necessary, they can be used iteratively in order to bring objects of any dimension down to 3D or 2D. We thus present: (i) a simple 'long axis' projection that stretches objects along one custom axis while preserving all other coordinates, resulting in 3D objects that are presented side by side; (ii) the orthographic and perspective projections, which are analogous to those used from 3D to 2D; and (iii) an inwards/outwards projection to an (n − 1)-sphere followed by an stereographic projection to R n−1 , which results in a new inwards-outwards axis.</ns0:p><ns0:p>We present a prototype that applies these projections from 4D to 3D and then applies a standard perspective projection down to 2D. We also show that with the help of low-level graphics APIs, all the required operations can be applied at interactive framerates for the 4D to 3D case. We finish with a discussion of the advantages and disadvantages of this approach.</ns0:p></ns0:div>
<ns0:div><ns0:head>Higher-dimensional modelling of space, time and scale</ns0:head><ns0:p>There are a great number of models of geographic information, but most consider space, time and scale separately. For instance, space can be modelled using primitive instancing <ns0:ref type='bibr' target='#b30'>(Foley et al., 1995</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>2007), constructive solid geometry <ns0:ref type='bibr' target='#b72'>(Requicha and Voelcker, 1977)</ns0:ref> or various boundary representation approaches <ns0:ref type='bibr' target='#b63'>(Muller and Preparata, 1978;</ns0:ref><ns0:ref type='bibr' target='#b38'>Guibas and Stolfi, 1985;</ns0:ref><ns0:ref type='bibr' target='#b54'>Lienhardt, 1994)</ns0:ref>, among others.</ns0:p><ns0:p>Time can be modelled on the basis of snapshots <ns0:ref type='bibr' target='#b3'>(Armstrong, 1988;</ns0:ref><ns0:ref type='bibr' target='#b42'>Hamre et al., 1997)</ns0:ref>, space-time composites <ns0:ref type='bibr' target='#b67'>(Peucker and Chrisman, 1975;</ns0:ref><ns0:ref type='bibr' target='#b22'>Chrisman, 1983)</ns0:ref>, events <ns0:ref type='bibr' target='#b87'>(Worboys, 1992;</ns0:ref><ns0:ref type='bibr' target='#b68'>Peuquet, 1994;</ns0:ref><ns0:ref type='bibr' target='#b69'>Peuquet and Duan, 1995)</ns0:ref>, or a combination of all of these <ns0:ref type='bibr' target='#b1'>(Abiteboul and Hull, 1987;</ns0:ref><ns0:ref type='bibr' target='#b89'>Worboys et al., 1990;</ns0:ref><ns0:ref type='bibr' target='#b88'>Worboys, 1994;</ns0:ref><ns0:ref type='bibr' target='#b85'>Wachowicz and Healy, 1994)</ns0:ref>. Scale is usually modelled based on independent datasets at each scale <ns0:ref type='bibr' target='#b20'>(Buttenfield and DeLotto, 1989;</ns0:ref><ns0:ref type='bibr' target='#b34'>Friis-Christensen and Jensen, 2003;</ns0:ref><ns0:ref type='bibr' target='#b62'>Meijers, 2011b)</ns0:ref>, although approaches to combine them into single datasets <ns0:ref type='bibr' target='#b37'>(Gröger et al., 2012)</ns0:ref> or to create progressive and continuous representations also exist <ns0:ref type='bibr' target='#b12'>(Ballard, 1981;</ns0:ref><ns0:ref type='bibr' target='#b50'>Jones and Abraham, 1986;</ns0:ref><ns0:ref type='bibr' target='#b39'>Günther, 1988;</ns0:ref><ns0:ref type='bibr' target='#b81'>van Oosterom, 1990;</ns0:ref><ns0:ref type='bibr' target='#b29'>Filho et al., 1995;</ns0:ref><ns0:ref type='bibr' target='#b74'>Rigaux and Scholl, 1995;</ns0:ref><ns0:ref type='bibr' target='#b70'>Plümer and Gröger, 1997;</ns0:ref><ns0:ref type='bibr' target='#b82'>van Oosterom, 2005)</ns0:ref>.</ns0:p><ns0:p>As an alternative to the all these methods, it is possible to represent any number of parametrisable characteristics (e.g. two or three spatial dimensions, time and scale) as additional dimensions in a geometric sense, modelling them as orthogonal axes such that real-world 0D-3D entities are modelled as higher-dimensional objects embedded in higher-dimensional space. These objects can be consequently stored using higher-dimensional data structures and representation schemes Čomić and de Floriani (2012);</ns0:p><ns0:p>Arroyo <ns0:ref type='bibr' target='#b8'>Ohori et al. (2015b)</ns0:ref>. Possible approaches include incidence graphs <ns0:ref type='bibr' target='#b75'>Rossignac and O'Connor (1989)</ns0:ref>; <ns0:ref type='bibr' target='#b57'>Masuda (1993)</ns0:ref>; <ns0:ref type='bibr' target='#b79'>Sohanpanah (1989)</ns0:ref>; <ns0:ref type='bibr' target='#b43'>Hansen and Christensen (1993)</ns0:ref>, Nef polyhedra <ns0:ref type='bibr' target='#b18'>Bieri and Nef (1988)</ns0:ref>, and ordered topological models <ns0:ref type='bibr' target='#b19'>Brisson (1993)</ns0:ref>; <ns0:ref type='bibr' target='#b54'>Lienhardt (1994)</ns0:ref>. 
This is consistent with the basic tenets of n-dimensional geometry <ns0:ref type='bibr' target='#b25'>(Descartes, 1637;</ns0:ref><ns0:ref type='bibr' target='#b73'>Riemann, 1868)</ns0:ref> and topology <ns0:ref type='bibr' target='#b71'>(Poincaré, 1895)</ns0:ref>, which means that it is possible to apply a wide variety of computational geometry and topology methods to these objects.</ns0:p><ns0:p>In a practical sense, 4D topological relationships between 4D objects provide insights that 3D topological relationships cannot <ns0:ref type='bibr' target='#b5'>(Arroyo Ohori et al., 2013)</ns0:ref>. Also, <ns0:ref type='bibr' target='#b58'>McKenzie et al. (2001)</ns0:ref> contends that weather and groundwater phenomena cannot be adequately studied in less than four dimensions, and van Oosterom and Stoter (2010) argue that the integration of space, time and scale into a 5D model for GIS can be used to ease data maintenance and improve consistency, as algorithms could detect if the 5D representation of an object is self-consistent and does not conflict with other objects.</ns0:p></ns0:div>
<ns0:div><ns0:head>Basic transformations and the cross-product in nD</ns0:head><ns0:p>The basic transformations (translation, scale and rotation) have a straightforward definition in n dimensions, which can be used to move and zoom around a scene composed of nD objects. In addition, the ndimensional cross-product can be used to obtain a new vector that is orthogonal to a set of other n − 1 vectors in R n . We use these operations as a base for nD visualisation and are thus described briefly below.</ns0:p><ns0:p>The translation of a set of points in R n can be easily expressed as a sum with a vector t = [t 0 , . . . ,t n ],</ns0:p><ns0:p>or alternatively as a multiplication with a matrix using homogeneous coordinates 1 in an (n + 1) × (n + 1) matrix, which is defined as:</ns0:p><ns0:formula xml:id='formula_0'>T =        1 0 • • • 0 t 0 0 1 • • • 0 t 1 . . . . . . . . . . . . . . . 0 0 • • • 1 t n 0 0 • • • 0 1       </ns0:formula><ns0:p>Scaling is similarly simple. Given a vector s = [s 0 , s 1 , . . . , s n ] that defines a scale factor per axis (which in the simplest case can be the same for all axes), it is possible to define a matrix to scale an object as:</ns0:p><ns0:formula xml:id='formula_1'>S =      s 0 0 • • • 0 0 s 1 • • • 0 . . . . . . . . . . . . 0 0 • • • s n     </ns0:formula><ns0:p>1 A coordinate system based on projective geometry and typically used in computer graphics. An additional coordinate indicates a scale factor that is applied to all other coordinates.</ns0:p></ns0:div>
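<ns0:p>A minimal numpy sketch of these two matrices is given below; the translation and scale vectors are arbitrary example values.</ns0:p>
import numpy as np

def translation_matrix(t):
    """(n+1) x (n+1) homogeneous translation matrix for a translation vector t of length n."""
    n = len(t)
    T = np.eye(n + 1)
    T[:n, n] = t                      # the last column holds the translation
    return T

def scaling_matrix(s):
    """n x n scaling matrix with one scale factor per axis."""
    return np.diag(s)

p = np.array([0.0, 0.0, 0.0, 0.0, 1.0])                 # the 4D origin in homogeneous coordinates
print(translation_matrix([1.0, 2.0, 0.5, -3.0]) @ p)    # -> [ 1.   2.   0.5 -3.   1. ]
print(scaling_matrix([2.0, 2.0, 2.0, 1.0]))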
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>Rotation is somewhat more complex. Rotations in 3D are often conceptualised intuitively as rotations around the x, y and z axes. However, this view of the matter is only valid in 3D. In higher dimensions, it is necessary to consider instead rotations parallel to a given plane <ns0:ref type='bibr' target='#b48'>(Hollasch, 1991)</ns0:ref>, such that a point that is continuously rotated (without changing the rotation direction) will form a circle that is parallel to that plane. This view is valid in 2D (where there is only one such plane), in 3D (where a plane is orthogonal to the usually defined axis of rotation) and in any higher dimension. Incidentally, this shows that the degree of rotational freedom in nD is given by the number of possible combinations of two axes (which define a plane) on that dimension <ns0:ref type='bibr' target='#b44'>(Hanson, 1994)</ns0:ref>, i.e. n 2 .</ns0:p><ns0:p>Thus, in a 4D coordinate system defined by the axes x, y, z and w, it is possible to define six 4D rotation matrices, which correspond to the six rotational degrees of freedom in 4D <ns0:ref type='bibr' target='#b44'>(Hanson, 1994)</ns0:ref>. These</ns0:p><ns0:p>respectively rotate points in R 4 parallel to the xy, xz, xw, yz, yw and zw planes:</ns0:p><ns0:formula xml:id='formula_2'>R xy =     cos θ − sin θ 0 0 sin θ cos θ 0 0 0 0 1 0 0 0 0 1     R xz =     cos θ 0 − sin θ 0 0 1 0 0 sin θ 0 cos θ 0 0 0 0 1     R xw =     cos θ 0 0 − sin θ 0 1 0 0 0 0 1 0 sin θ 0 0 cos θ     R yz =     1 0 0 0 0 cos θ − sin θ 0 0 sin θ cos θ 0 0 0 0 1     R yw =     1 0 0 0 0 cos θ 0 − sin θ 0 0 1 0 0 sin θ 0 cos θ     R zw =     1 0 0 0 0 1 0 0 0 0 cos θ − sin θ 0 0 sin θ cos θ    </ns0:formula><ns0:p>The n-dimensional cross-product is easy to understand by first considering the lower-dimensional cases. In 2D, it is possible to obtain a normal vector to a 1D line as defined by two (different) points p 0 and p 1 , or equivalently a normal vector to a vector from p 0 to p 1 . In 3D, it is possible to obtain a normal vector to a 2D plane as defined by three (non-collinear) points p 0 , p 1 and p 2 , or equivalently a normal vector to a pair of vectors from p 0 to p 1 and from p 0 to p 2 . Similarly, in nD it is possible to obtain a normal vector to a (n − 1)D subspace-probably easier to picture as an (n − 1)-simplex-as defined by n linearly independent points p 0 , p 1 , . . . , p n−1 , or equivalently a normal vector to a set of n − 1 vectors from p 0 to every other point (i.e. p 1 , p 2 , . . . , p n−1 ) <ns0:ref type='bibr' target='#b56'>(Massey, 1983;</ns0:ref><ns0:ref type='bibr' target='#b27'>Elduque, 2004)</ns0:ref>. <ns0:ref type='bibr' target='#b44'>Hanson (1994)</ns0:ref> follows the latter explanation using a set of n − 1 vectors all starting from the first point to give an intuitive definition of the n-dimensional cross-product. Assuming that a point p i in R n is defined by a tuple of coordinates denoted as (p i 0 , p i 1 , . . . , p i n−1 ) and a unit vector along the i-th dimension is denoted as xi , the n-dimensional cross-product N of a set of points p 0 , p 1 , . . . , p n−1 can be expressed compactly as the cofactors of the last column in the following determinant:</ns0:p><ns0:formula xml:id='formula_3'>N = (p 1 0 − p 0 0 ) (p 2 0 − p 0 0 ) • • • (p n−1 0 ) x0 (p 1 1 − p 0 1 ) (p 2 1 − p 0 1 ) • • • (p n−1 1 ) x1 . . . . . . . . . . . . . . . 
(p 1 n−1 − p 0 n−1 ) (p 2 n−1 − p 0 n−1 ) • • • (p n−1 n−1 ) xn−1</ns0:formula><ns0:p>The components of the normal vector N are thus given by the minors of the unit vectors x0 , x1 , . . . , xn−1 .</ns0:p><ns0:p>This vector N-like all other vectors-can be normalised into a unit vector by dividing it by its norm N .</ns0:p></ns0:div>
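<ns0:p>The cofactor expansion above translates directly into a few lines of numpy. The sketch below assumes that every column of the determinant is a difference vector from the first point p0 (the last column of the printed formula appears to have lost its subtraction in typesetting), with the signs following the cofactors of the last column.</ns0:p>
import numpy as np

def nd_cross_product(points):
    """points: (n, n) array with p0 ... p(n-1) as rows, all in R^n; returns the normal vector N."""
    points = np.asarray(points, dtype=float)
    n = points.shape[1]
    D = (points[1:] - points[0]).T               # n x (n-1): one difference vector per column
    N = np.empty(n)
    for i in range(n):
        minor = np.delete(D, i, axis=0)          # drop row i, leaving an (n-1) x (n-1) matrix
        N[i] = (-1) ** (i + n - 1) * np.linalg.det(minor)
    return N

# 3D sanity check: the normal to the xy-plane is the z axis.
print(nd_cross_product([[0, 0, 0], [1, 0, 0], [0, 1, 0]]))   # -> [0. 0. 1.]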
<ns0:div><ns0:head>Previous work on the visualisation of higher-dimensional objects</ns0:head><ns0:p>There is a reasonably extensive body of work on the visualisation of 4D and nD objects, although it is still more often used for its creative possibilities (e.g. making nice-looking graphics) than for practical applications. In literature, visual metaphors of 4D space were already described in the 1880s in Flatland: Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>A Romance of Many Dimensions <ns0:ref type='bibr' target='#b0'>(Abbott, 1884)</ns0:ref> and A New Era of Thought <ns0:ref type='bibr' target='#b47'>(Hinton, 1888)</ns0:ref>. Other books that treat the topic intuitively include Beyond the Third Dimension: Geometry, Computer Graphics, and Higher Dimensions <ns0:ref type='bibr' target='#b14'>(Banchoff, 1996)</ns0:ref> and The Visual Guide To Extra Dimensions: Visualizing The Fourth Dimension, Higher-Dimensional Polytopes, And Curved Hypersurfaces <ns0:ref type='bibr' target='#b59'>(McMullen, 2008)</ns0:ref>.</ns0:p><ns0:p>In a more concrete computer graphics context, already in the 1960s, <ns0:ref type='bibr' target='#b64'>Noll (1967)</ns0:ref> described a computer implementations of the 4D to 3D perspective projection and its application in art <ns0:ref type='bibr' target='#b65'>(Noll, 1968)</ns0:ref>. <ns0:ref type='bibr' target='#b16'>Beshers and Feiner (1988)</ns0:ref> describe a system that displays animating (i.e. continuously transformed)</ns0:p><ns0:p>4D objects that are rendered in real-time and use colour intensity to provide a visual cue for the 4D depth.</ns0:p><ns0:p>It is extended to n dimensions by <ns0:ref type='bibr' target='#b28'>Feiner and Beshers (1990)</ns0:ref>.</ns0:p><ns0:p>Banks (1992) describes a system that manipulates surfaces in 4D space. It describes interaction techniques and methods to deal with intersections, transparency and the silhouettes of every surface.</ns0:p><ns0:p>Hanson and Cross (1993) describes a high-speed method to render surfaces in 4D space with shading using a 4D light and occlusion, while <ns0:ref type='bibr' target='#b44'>Hanson (1994)</ns0:ref> describes much of the mathematics that are necessary for nD visualisation. A more practical implementation is described in <ns0:ref type='bibr' target='#b46'>Hanson et al. (1999)</ns0:ref>. <ns0:ref type='bibr' target='#b23'>Chu et al. (2009)</ns0:ref> describe a system to visualise 2-manifolds and 3-manifolds embedded in 4D space and illuminated by 4D light sources. Notably, it uses a custom rendering pipeline that projects tetrahedra in 4D to volumetric images in 3D-analogous to how triangles in 3D that are usually projected to 2D images.</ns0:p><ns0:p>A different possible approach lies in using meaningful 3D cross-sections of a 4D dataset. For instance, <ns0:ref type='bibr' target='#b52'>Kageyama (2016)</ns0:ref> describes how to visualise 4D objects as a set of hyperplane slices. <ns0:ref type='bibr' target='#b17'>Bhaniramka et al. (2000)</ns0:ref> describe how to compute isosurfaces in dimensions higher than three using an algorithm similar to marching cubes. D'Zmura et al. ( <ns0:ref type='formula'>2000</ns0:ref>) describe a system that displays 3D cross-sections of a 4D virtual world one at a time.</ns0:p><ns0:p>Similar to the methods described above, <ns0:ref type='bibr' target='#b48'>Hollasch (1991)</ns0:ref> gives a simple formulation to describe the 4D to 3D projections, which is itself based on the 3D to 2D orthographic and perspective projection methods described by <ns0:ref type='bibr' target='#b31'>Foley and Nielson (1992)</ns0:ref>. This is the method that we extend to define n-dimensional versions of these projections and is thus explained in greater detail below. 
The mathematical notation is however changed slightly so as to have a cleaner extension to higher dimensions.</ns0:p><ns0:p>In order to apply the required transformations, <ns0:ref type='bibr' target='#b48'>Hollasch (1991)</ns0:ref> first defines a point f rom ∈ R 4 where the viewer (or camera) is located, a point to ∈ R 4 that the viewer directly points towards, and a set of two vectors − → up and − − → over. Based on these variables, he defines a set of four unit vectors â, b, ĉ and d that define the axes of a 4D coordinate system centred at the f rom point. These are ensured to be orthogonal by using the 4D cross-product to compute them, such that: to to (i.e. to − f rom), and (ii) that the last vector ĉ does not need to be normalised since the cross-product already returns a unit vector. These new unit vectors can then be used to define a transformation matrix to transform the 4D coordinates into a new set of points E (as in eye coordinates) with a coordinate system with the viewer at its centre and oriented according to the unit vectors. The points are given by:</ns0:p><ns0:formula xml:id='formula_4'>d = to − f rom to − f rom â = up × over × d up × over × d b = over × d × â over × d × â ĉ = d ×</ns0:formula><ns0:formula xml:id='formula_5'>E = P − f rom â b ĉ d</ns0:formula><ns0:p>For an orthographic projection given E = [ e 0 e 1 e 2 e 3 ], the first three columns e 0 , e 1 and e 2 can be used as-is, while the fourth column e 3 defines the orthogonal distance to the viewer (i.e. the depth).</ns0:p></ns0:div>
<ns0:div><ns0:head>5/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2017:02:16530:1:1:NEW 2 Jun 2017)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Where ϑ is the viewing angle between x and the line between the f rom point and every point as shown in Fig. <ns0:ref type='figure'>2</ns0:ref>. A similar computation is done for y and z. In E ′ , the first three columns (i.e. e ′ 0 , e ′ 1 and e ′ 2 ) similarly give the 3D coordinates for a perspective projection of the 4D points while the fourth column is also the depth of the point.</ns0:p><ns0:p>Figure <ns0:ref type='figure'>2</ns0:ref>. The geometry of a 4D perspective projection along the x axis for a point p. By analysing the depth along the depth axis given by e 3 , it is possible to see that the coordinates of the point along the x axis, given by e 0 , are scaled inwards in order to obtain e ′ 0 based on the viewing angle ϑ . Note that xn−1 is an arbitrary viewing hyperplane and another value can be used just as well.</ns0:p></ns0:div>
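<ns0:p>The 4D construction just described can be sketched with numpy as follows; the cross-product sign convention and the example camera are assumptions of this sketch rather than values taken from Hollasch (1991).</ns0:p>
import numpy as np

def cross4(u, v, w):
    """4D cross product of three vectors, via the cofactors of the missing column."""
    D = np.column_stack([u, v, w])                               # 4 x 3
    return np.array([(-1) ** (i + 3) * np.linalg.det(np.delete(D, i, axis=0))
                     for i in range(4)])

def unit(v):
    return v / np.linalg.norm(v)

def eye_coordinates(P, frm, to, up, over):
    """Transform an (m, 4) array of points into eye coordinates (e0, e1, e2, depth)."""
    d = unit(to - frm)
    a = unit(cross4(up, over, d))
    b = unit(cross4(over, d, a))
    c = cross4(d, a, b)                                          # already a unit vector
    return (np.asarray(P, dtype=float) - frm) @ np.column_stack([a, b, c, d])

E = eye_coordinates(np.eye(4), frm=np.array([3.0, 3.0, 3.0, 3.0]), to=np.zeros(4),
                    up=np.array([0.0, 1.0, 0.0, 0.0]), over=np.array([0.0, 0.0, 1.0, 0.0]))
print(E[:, :3])     # orthographic 3D image of the four unit points; E[:, 3] is the 4D depth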
<ns0:div><ns0:head>METHODOLOGY</ns0:head><ns0:p>We present here three different projections from R n to R n−1 which can be applied iteratively to bring objects of any dimension down to 3D for display. We three projections that are reasonably intuitive in 4D to 3D: a 'long axis' projection that puts 3D objects side by side, the orthographic and perspective projections that work in the same way as their 3D to 2D analogues, and a projection to an (n − 1)-sphere followed by a stereographic projection to R n−1 .</ns0:p></ns0:div>
<ns0:div><ns0:head>'Long axis' projection</ns0:head><ns0:p>First we aim to replicate the idea behind the example previously shown in Fig. <ns0:ref type='figure'>1</ns0:ref>-a series of 3D objects that are shown next to each other, seemingly projected separately with the correspondences across scale or time shown as long edges (as in Fig. <ns0:ref type='figure'>1</ns0:ref>) or faces connecting the 3D objects. Edges would join correspondences between vertices across the models, while faces would join correspondences between elements of dimension up to one (e.g. a pair of edges, or an edge and a vertex). Since every 3D object is apparently projected separately using a perspective projection to 2D, it is thus shown in the same intuitive way in which a single 3D object is projected down to 2D. The result of this projection is shown in Fig. <ns0:ref type='figure'>?</ns0:ref>?</ns0:p><ns0:p>for the model previously shown in Fig. <ns0:ref type='figure'>1</ns0:ref> and in Fig. <ns0:ref type='figure'>4</ns0:ref> for a 4D model using 3D space with time.</ns0:p><ns0:p>Although to the best of our knowledge this projection does not have a well-known name, it is widely used in explanations of 4D and nD geometry-especially when drawn by hand or when the intent is to focus on the connectivity between different elements. For instance, it is usually used in the typical explanation for how to construct a tesseract, i.e. a 4-cube or the 4D analogue of a 2D square or 3D cube, which is based on drawing two cubes and connecting the corresponding vertices between the two (Fig. <ns0:ref type='figure' target='#fig_4'>5</ns0:ref>).</ns0:p><ns0:p>Among other examples in the scientific literature, this kind of projection can be seen in Figure <ns0:ref type='figure'>2</ns0:ref> Conceptually, describing this projection from n to n − 1 dimensions, which we hereafter refer to as a 'long axis' projection, is very simple. Considering a set of points P in R n , the projected set of points P ′ in R n−1 is given by taking the coordinates of P for the first n − 1 axes and adding to them the last coordinate of P which is spread over all coordinates according to weights specified in a customisable vector xn . For instance, Fig. <ns0:ref type='figure'>3</ns0:ref> uses xn = [ 2 0 0 ], resulting in 3D models that are 2 units displaced for every unit in which they are apart along the n-th axis. In matrix form, this kind of projection can then be applied Manuscript to be reviewed</ns0:p></ns0:div>
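<ns0:p>A minimal numpy sketch of the 'long axis' projection, using the same weight vector [2, 0, 0] as in Fig. 3, is given below; the two example points are illustrative.</ns0:p>
import numpy as np

def long_axis_projection(P, weights):
    """P: (m, n) points in R^n; weights: length n-1 vector that spreads the n-th coordinate."""
    P = np.asarray(P, dtype=float)
    return P[:, :-1] + np.outer(P[:, -1], weights)

# Two points one unit apart along the fourth (e.g. LOD or time) axis end up
# two units apart along x after projection.
P = np.array([[0.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])
print(long_axis_projection(P, [2.0, 0.0, 0.0]))   # -> [[0. 0. 0.] [2. 0. 0.]]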
<ns0:div><ns0:head>Orthographic and perspective projections</ns0:head><ns0:p>Another reasonably intuitive pair of projections are the orthographic and perspective projections from nD to (n − 1)D. These treat all axes similarly and thus make it more difficult to see the different (n − 1)dimensional models along the n-th axis, but they result in models that are much less deformed. Also, as shown in the 4D example in Fig. <ns0:ref type='figure' target='#fig_5'>6</ns0:ref>, it is easy to rotate models in such a way that the corresponding features are easily seen. Based on the description of 4D-to-3D orthographic and perspective projection described from Hollasch (1991), we here extend the method in order to describe the n-dimensional to (n − 1)-dimensional case, changing some aspects to give a clearer geometric meaning for each vector.</ns0:p><ns0:p>Similarly, we start with a point f rom ∈ R n where the viewer is located, a point to ∈ R n that the viewer directly points towards (which can be easily set to the centre or centroid of the dataset), and a set of n − 2 initial vectors − → v 1 , . . . , − → v n−2 in R n that are not all necessarily orthogonal but nevertheless are linearly independent from each other and from the vector to − f rom. In this setup, the − → v i vectors serve as a base to define the orientation of the system, much like the traditional − → up vector that is used in 3D to 2D projections and the − − → over vector described previously. From the above mentioned variables and using the nD cross-product, it is possible to define a new set of orthogonal unit vectors x0 , . . . , xn−1 that define the axes x 0 , . . . , x n−1 of a coordinate system in R n as: Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_6'>Computer Science xn−1 = to − f rom to − f rom x0 = − → v 1 × • • • × − → v n−2 × xn−1 − → v 1 × • • • × − → v n−2 × xn−1 xi = − → v i+1 × • • • × − → v n−2 × xn−1 × x0 × • • • × xi−1 − → v i+1 × • • • × − → v n−2 × xn−1 × x0 × • • • × xi−1 xn−2 = xn−1 × x0 × • • • × xn−2</ns0:formula><ns0:p>The vector xn−1 is the first that needs to be computed and is oriented along the line from the viewer </ns0:p><ns0:formula xml:id='formula_7'>< i < n − 1, xi is simply a normalised − → v i .</ns0:formula><ns0:p>Like in the previous case, the vectors x0 , . . . , xn−1 can then be used to transform an m × n matrix of m nD points in world coordinates P into an m × n matrix of m nD points in eye coordinates E by applying the following transformation:</ns0:p><ns0:formula xml:id='formula_8'>E = P − f rom x0 • • • xn−1 As before, if E has rows of the form [ e 0 ••• e n−1</ns0:formula><ns0:p>] representing points, e 0 , . . . , e n−2 are directly usable as the coordinates in R n−1 of the projected point in an n-dimensional to (n − 1)-dimensional orthographic projection, while e n−1 represents the depth, i.e. the distance between the point and the projection (n − 1)-dimensional subspace, which can be used for visual cues 2 . The coordinates along e 0 , . . . , e n−2 could be made to fit within a certain bounding box by computing their extent along each axis, then scaling appropriately using the extent that is largest in proportion to the extent of the bounding box's corresponding axis.</ns0:p><ns0:p>For an n-dimensional to (n − 1)-dimensional perspective projection, it is only necessary to compute the distance between a point and the viewer along every axis by taking into account the viewing angle ϑ between xn−1 and the line between the to point and every point. 
Intuitively, this means that if an object is n times farther than another identical object, it is depicted n times smaller, or 1 n of its size. This situation is shown in Fig. <ns0:ref type='figure' target='#fig_7'>7</ns0:ref> and results in new e ′ 0 , . . . , e ′ n−2 coordinates that are shifted inwards. The coordinates are computed as:</ns0:p><ns0:formula xml:id='formula_9'>e ′ i = e i e n−1 tan ϑ /2 , for 0 ≤ i ≤ n − 2</ns0:formula><ns0:p>The (n − 1)-dimensional coordinates generated by this process can then be recursively projected down to progressively lower dimensions using this method. The objects represented by these coordinates can also be discretised into images of any dimension. For instance, <ns0:ref type='bibr' target='#b44'>Hanson (1994)</ns0:ref> describes how to perform many of the operations that would be required, such as dimension-independent clipping tests and ray-tracing methods.</ns0:p></ns0:div>
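<ns0:p>For points that are already in eye coordinates, the perspective step is a single scaling by the depth and the viewing angle, as sketched below with arbitrary example values.</ns0:p>
import numpy as np

def perspective_divide(E, theta):
    """E: (m, n) eye coordinates with the depth in the last column; returns (m, n-1) coordinates."""
    E = np.asarray(E, dtype=float)
    return E[:, :-1] / (E[:, -1:] * np.tan(theta / 2.0))

# An identical object twice as deep is depicted half the size.
E = np.array([[1.0, 1.0, 1.0, 2.0],
              [1.0, 1.0, 1.0, 4.0]])
print(perspective_divide(E, np.pi / 2))           # the second row is half of the first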
<ns0:div><ns0:head>Stereographic projection</ns0:head><ns0:p>A final projection possibility is to apply a stereographic projection from R n to R n−1 , which for us was partly inspired by Jenn 3D 3 (Fig. <ns0:ref type='figure' target='#fig_8'>8</ns0:ref>). This program visualises polyhedra and polychora embedded in R 4 by first projecting them inwards/outwards to the volume of a 3-sphere 4 and then projecting them stereographically to R 3 , resulting in curved edges, faces and volumes.</ns0:p><ns0:p>2 Visual cues can still be useful in higher dimensions. See http://eusebeia.dyndns.org/4d/vis/08-hsr.</ns0:p><ns0:p>3 http://www.math.cmu.edu/ ˜fho/jenn/ 4 Intuitively, an unbounded volume that wraps around itself, much like a 2-sphere can be seen as an unbounded surface that wraps around itself.</ns0:p></ns0:div>
<ns0:div><ns0:head>9/17</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2017:02:16530:1:1:NEW 2 Jun 2017)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science In a dimension-independent form, this type of projection can be easily done by considering the angles ϑ 0 , . . . , ϑ n−2 in an n-dimensional spherical coordinate system. <ns0:ref type='bibr'>Steeb (2011, §12.</ns0:ref>2) formulates such a system as:</ns0:p><ns0:formula xml:id='formula_10'>r = x 2 0 + • • • + x 2 n−1 ϑ i = cos −1   x i r 2 − ∑ i−1 j=0 x 2 j   , for 0 ≤ i < n − 2 ϑ n−2 = tan −1 x n−1 x n−2</ns0:formula><ns0:p>It is worth to note that the radius r of such a coordinate system is a measure of the depth with respect to the projection (n − 1)-sphere S n−1 and can be used similarly to the previous projection examples. The points can then be converted back into points on the surface of an (n − 1)-sphere of radius 1 by making r = 1 and applying the inverse transformation. <ns0:ref type='bibr'>Steeb (2011, §12.</ns0:ref>2) formulates it as:</ns0:p><ns0:formula xml:id='formula_11'>x i = r cos ϑ i i−1 ∏ j=0 sin ϑ j , for 0 ≤ i < n − 2 x n−1 = r n−2 ∏ j=0 sin ϑ j</ns0:formula><ns0:p>The next step, a stereographic projection, is also easy to apply in higher dimensions, mapping an Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p><ns0:p>(n + 1)-dimensional point x = (x 0 , . . . , x n ) on an n-sphere S n to an n-dimensional point x ′ = (x 0 , . . . , x n−1 )</ns0:p><ns0:p>in the n-dimensional Euclidean space R n . <ns0:ref type='bibr' target='#b21'>Chisholm (2000)</ns0:ref> formulates this projection as:</ns0:p><ns0:formula xml:id='formula_12'>x ′ i = x i x n − 1 , for 0 ≤ i < n</ns0:formula><ns0:p>The stereographic projection from nD to (n − 1)D is particularly intuitive because it results in the n-th axis being converted into an inwards-outwards axis. As shown in Fig. <ns0:ref type='figure' target='#fig_9'>9</ns0:ref>, when it is applied to scale, this results in models that decrease or increase in detail as one moves inwards or outwards. The case with time is similar: as one moves inwards/outwards, it is easy to see the state of a model at a time before/after. </ns0:p></ns0:div>
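<ns0:p>Both steps can be sketched in a few lines of numpy: pushing each point onto the unit (n-1)-sphere is equivalent to the round trip through spherical coordinates with r set to 1, and the stereographic formula is then applied per point. The tesseract example is illustrative.</ns0:p>
import numpy as np

def stereographic_projection(P):
    """P: (m, n) points in R^n (away from the origin); returns (m, n-1) projected coordinates."""
    P = np.asarray(P, dtype=float)
    r = np.linalg.norm(P, axis=1, keepdims=True)   # distance to the origin, usable as a depth cue
    S = P / r                                      # points pushed onto the unit (n-1)-sphere
    return S[:, :-1] / (S[:, -1:] - 1.0)           # x'_i = x_i / (x_{n-1} - 1)

# Vertices of a tesseract centred on the origin: the w = -1 cube maps to a small inner
# cube and the w = +1 cube to a larger outer one, giving the inwards-outwards fourth axis.
tesseract = np.array(np.meshgrid(*[[-1.0, 1.0]] * 4)).reshape(4, -1).T
print(stereographic_projection(tesseract))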
<ns0:div><ns0:head>RESULTS</ns0:head><ns0:p>We have implemented a small prototype for an interactive viewer of arbitrary 4D objects that performs the three projections previously described. It was used to generate Figures 3, 6 and 9, which were obtained by moving around the scene, zooming in/out and capturing screenshots using the software.</ns0:p><ns0:p>The prototype was implemented using part of the codebase of azul 5 and is written in a combination of Swift 3 and C++11 using Metal-a low-level and low-overhead graphics API-under macOS 10.12 6 .</ns0:p><ns0:p>By using Metal, we are able to project and display objects with several thousand polygons with minimal visual lag on a standard computer. Its source code is available under the GPLv3 licence at https: //github.com/kenohori/azul4d.</ns0:p><ns0:p>We take advantage of the fact that the Metal Shading Language-as well as most other linear algebra libraries intended for computer graphics-has appropriate data structures for 4D geometries and linear algebra operations with vectors and matrices of size up to four. While these are normally intended for use with homogeneous coordinates in 3D space, they can be used to do various operations in 4D space with minor modifications and by reimplementing some operations.</ns0:p><ns0:p>Unfortunately, this programming trick also means that extending the current prototype to dimensions higher than four requires additional work and rather cumbersome programming. However, implementing these operations in a dimension-independent way is rather not difficult outside in a more flexible programming environment. For instance, Fig. <ns0:ref type='figure' target='#fig_10'>10</ns0:ref> shows how a double stereographic projection can be used to reduce the dimensionality of an object from 5D to 3D. This figure was generated in a separate C++ program which exports its results to an OBJ file. The models were afterwards rendered in Blender 7 .</ns0:p><ns0:p>In our prototype, we only consider the vertices, edges and faces of the 4D objects, as the higherdimensional 3D and 4D primitives-whose 0D, 1D and 2D boundaries are however shown-would</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science readily obscure each other in any sort of 2D or 3D visualisation <ns0:ref type='bibr' target='#b15'>(Banks, 1992)</ns0:ref>. Every face of an object is thus stored as a sequence of vertices with coordinates in R 4 and is appended with an RGBA colour attribute with possible transparency. The alpha value of each face is used see all faces at once, as they would otherwise overlap with each other on the screen.</ns0:p><ns0:p>The 4D models were manually constructed based on defining their vertices with 4D coordinates and their faces as successions of vertices. In addition to the 4D house previously shown, we built a simpler tesseract for testing. As built, the tesseract consists of 16 vertices and 24 vertices, while the 4D house consists of 24 vertices and 43 faces. However, we used the face refining process described below to test our prototype with models with up to a few thousand faces. Once created, the models were still displayed and manipulated smoothly.</ns0:p><ns0:p>To start, we preprocess a 4D model by triangulating and possibly refining each face, which makes it possible to display concave faces and to properly see the curved shapes that are caused by the stereographic projection previously described. 
For this, we first compute the plane passing through the first three points of each face 8 and project each point from R 4 to a new coordinate system in R 2 on the plane. We then triangulate and refine each face separately in R 2 with the help of a few packages of the Computational Geometry Algorithms Library (CGAL) 9 , and then we reproject the results back to the previously computed plane in R 4 .</ns0:p><ns0:p>We then use a Metal Shading Language compute shader-a technique to perform general-purpose computing on graphics processing units (GPGPU)-in order to apply the desired projection from R 4 to R 3 . The three different projections presented previously are each implemented as a compute shader. By doing so, it is easier to run them as separate computations outside the graphics pipeline, to then extract the projected R 3 vertex coordinates of every face and use them to generate separate representations of their bounding edges and vertices 10 . Using their projected coordinates in R 3 , the edges and vertices surrounding each face are thus displayed respectively as possibly refined line segments and as icosahedral approximations of spheres (i.e. icospheres).</ns0:p><ns0:p>Finally, we use a standard perspective projection in a Metal vertex shader to display the projected model with all its faces, edges and vertices. We use a couple of tricks in order to keep the process fast and as parallel as possible: separate threads for each CPU process (the generation of the vertex and edge geometries and the modification of the projection matrices according to user interaction) and GPU process (4D-to-3D projection and 3D-to-2D projection for display), and blending with order-independent transparency without depth checks. For complex models, this results in a small lag where the vertices and edges move slightly after the faces.</ns0:p><ns0:p>In the current prototype, we have implemented a couple of functions to interact with the model: rotations in 4D and translations in 3D. In 4D, the user can rotate the model around the 6 possible rotation planes by clicking and dragging while pressing different modifier keys. In 3D, it is possible to move a model around using 2D scrolling on a touchpad to shift it left/right/up/down and using pinch gestures to shift it backward/forward (according to the current view).</ns0:p></ns0:div>
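The first step of this pipeline—parameterising a face on the plane through its first three points and mapping points back after refinement—can be sketched in a few lines of Python. This is an illustration for this text under the assumption that the three points are not collinear (the actual prototype uses CGAL and Metal, which are not shown here):

```python
import numpy as np

def face_plane_basis(p0, p1, p2):
    """Orthonormal basis (u, v) of the plane through the first three points of a face in R^4."""
    u = p1 - p0
    u /= np.linalg.norm(u)
    w = p2 - p0
    v = w - (w @ u) * u                  # Gram-Schmidt: remove the component along u
    v /= np.linalg.norm(v)
    return u, v

def to_plane(points, origin, u, v):
    """2D coordinates of the face's points in the coordinate system (origin, u, v)."""
    d = points - origin
    return np.stack([d @ u, d @ v], axis=1)

def from_plane(points2d, origin, u, v):
    """Reproject 2D points (e.g. after triangulation and refinement) back into R^4."""
    return origin + np.outer(points2d[:, 0], u) + np.outer(points2d[:, 1], v)

# Round trip for one square face of a tesseract lying in the x0-x1 plane.
face = np.array([[-1, -1, 1, 1], [-1, 1, 1, 1], [1, 1, 1, 1], [1, -1, 1, 1]], dtype=float)
u, v = face_plane_basis(face[0], face[1], face[2])
flat = to_plane(face, face[0], u, v)     # the 2D points that would be triangulated/refined
back = from_plane(flat, face[0], u, v)
assert np.allclose(face, back)
```

As footnote 8 notes, if the first three points happen to be collinear, a different point triple or a fitted plane would be needed instead.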
<ns0:div><ns0:head>DISCUSSION AND CONCLUSIONS</ns0:head><ns0:p>Visualising complete 4D and nD objects projected to 3D and displayed in 2D is often unintuitive, but it enables analysing higher-dimensional objects in a thorough manner that cross-sections do not. The three projections we have shown here are nevertheless reasonably intuitive due to their similarity to common projections from 3D to 2D, the relatively small distortions in the models and the existence of a clear fourth axis. They also have a dimension-independent formulation.</ns0:p><ns0:p>There are however many other types of interesting projections that can be defined in any dimension, such as the equirectangular projection where evenly spaced angles along a rotation plane can be directly converted into evenly spaced coordinates-in this case covering 180° vertically and 360° horizontally. Extending such a projection to nD would result in an n-orthotope, such as a (filled) rectangle in 2D or a cuboid (i.e. a box) in 3D.</ns0:p><ns0:p>By applying the projections shown in this paper to 4D objects depicting 3D objects that change in time or scale, it is possible to see at once all correspondences between different elements of the 3D objects and the topological relationships between them.</ns0:p><ns0:p>Compared to other 4D visualisation techniques, we opt for a rather minimal approach without lighting and shading. In our application, we believe that this is optimal due to better performance and because it makes for simpler-looking and more intuitive output. In this manner, progressively darker shades of a colour are a good visual cue for the number of faces of the same colour that are visually overlapping at any given point. Since we apply the projection from 4D to 3D in the GPU, it is not efficient to extract the surfaces again in order to compute the 3D normals required for lighting in 3D, while lighting in 4D results in unintuitive visual cues.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head /><ns0:label /><ns0:figDesc>In Fig. 1 we show an example of a 4D model representing a house at two different levels of detail and all the equivalences between its composing elements. It forms a valid manifold 4-cell (Arroyo Ohori et al., 2014), allowing it to be represented using data structures such as a 4D generalised or combinatorial map.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head /><ns0:label /><ns0:figDesc>â × b Note two aspects in the equations above: (i) that the input vectors − → up and − − → over are left unchanged (i.e. b = − → up and ĉ = − − → over) if they are already orthogonal to each other and orthogonal to the vector from f rom</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Finally</ns0:head><ns0:label /><ns0:figDesc>, in order to obtain a perspective projection, he scales the points inwards in direct proportion to their depth. Starting from E, he computes E ′ = [ e ′</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .Figure 4 .</ns0:head><ns0:label>34</ns0:label><ns0:figDesc>Figure3. A model of a 4D house similar to the example shown previously in Fig.1, here including also a window and a door that are collapsed to a vertex in the 3D object at the lower level of detail. (a) shows the two 3D objects positioned as in Fig.1, (b) rotates these models 90 • so that the front of the house is on the right, and (c) orients the two 3D objects front to back. Many more interesting views are possible, but these show the correspondences particularly clearly. Unlike the other model, this one was generated with 4D coordinates and projected using our prototype that applies the projection described in this section.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure5. The typical explanation for how to draw the vertices and edges in an i-cube. Starting from a single vertex representing a point (i.e. a 0-cube), an (i + 1)-cube can be created by drawing two i-cubes and connecting the corresponding vertices of the two. Image credit: Wikimedia Commons.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. (a-c) The 4D house model and (d-f) the two buildings model projected down to 3D using an orthographic projection. The different views are obtained by applying different rotations in 4D. The less and more detailed 3D models can be found by looking at where the door and window are collapsed.</ns0:figDesc><ns0:graphic coords='9,279.12,348.74,116.48,145.15' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>(</ns0:head><ns0:label /><ns0:figDesc>f rom) to the point that it is oriented towards (to). Afterwards, the vectors are computed in order from x0 to xn−2 as normalised n-dimensional cross products of n − 1 vectors. These contain a mixture of the input vectors − → v 1 , . . . , − → v n−2 and the computed unit vectors x0 , . . . , xn−1 , starting from n − 2 input vectors and one unit vector for x0 , and removing one input vector and adding the previously computed unit vector for the next xi vector. Note that if − → v 1 , . . . , − → v n−2 and xn−1 are all orthogonal to each other, ∀0</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. The geometry of an nD perspective projection for a point p. By analysing each axis xi (∀0 ≤ i < n − 1) independently together with the final axis xn−1 , it is possible to see that the coordinates of the point along that axis, given by e i , are scaled inwards based on the viewing angle ϑ .</ns0:figDesc><ns0:graphic coords='11,340.38,201.46,173.57,130.17' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. A polyhedron and a polychoron in Jenn 3D: (a) a cube and (b) a 24-cell.</ns0:figDesc><ns0:graphic coords='11,183.36,201.46,173.57,130.17' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 9 .</ns0:head><ns0:label>9</ns0:label><ns0:figDesc>Figure 9. (a) The 4D house model and (b) the two buildings model projected first inwards/outwards to the closest point on the 3-sphere S 3 and then stereographically to R 3 . The round surfaces are obtained by first refining every face in the 4D models.</ns0:figDesc><ns0:graphic coords='12,327.70,191.15,186.25,150.26' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 10 .</ns0:head><ns0:label>10</ns0:label><ns0:figDesc>Figure 10. (a) A stereographic projection of a 4-orthoplex and (b) a double stereographic projection of a 5-orthoplex. The family of orthoplexes contains the analogue shapes of a 2D square or a 3D octahedron.</ns0:figDesc><ns0:graphic coords='13,183.09,64.00,165.43,165.43' type='bitmap' /></ns0:figure>
<ns0:note place='foot' n='5'>https://github.com/tudelft3d/azul 6 https://developer.apple.com/metal/ 7 https://www.blender.org 11/17 PeerJ Comput. Sci. reviewing PDF | (CS-2017:02:16530:1:1:NEW 2 Jun 2017)</ns0:note>
<ns0:note place='foot' n='8'>This is sufficient for our purposes, but other applications would need to find three linearly-independent points or to use a more computationally expensive method that finds the best fitting plane for the face. 9 http://www.cgal.org 10 An alternative would be to embed these in 4D from the beginning, but it would result in distorted shapes depending on their position and orientation due to the extra degrees of rotational freedom in R 4 .12/17PeerJ Comput. Sci. reviewing PDF | (CS-2017:02:16530:1:1:NEW 2 Jun 2017)</ns0:note>
</ns0:body>
" | "COMMENTS REQUIRING ACTION ON OUR PART
REVIEWER 1 (ANONYMOUS)
Experimental design
More detailed description of the prototype program would be desirable. For
example: Is it possible to interactively specify the view position and angle in 4-D?
Is it possible to interactively rotate the object in 4-D? If the answers are yes, how?
Some readers would like to know since those 4-D operations are notoriously
difficult with the standard user-interface. And how do the authors apply those
operations (rotations, walk-throughs, and others) in higher n-D for n>4?
Interactive visualisation is indeed possible. We have added information about
the operations we support in our prototype and how they are accessed.
Comments for the Author
(I do not insist that the following improvement is necessary. This is just a
suggestion for a possible improvement of the manuscript, since the 'dimension-independence' is one of the emphasized points in the paper.)
The methods proposed in this paper are formulated in general n-D space for n>=4.
On the other hand, the presented examples (Figs. 3, 5, and 9) are all for n=4. It
would be nice for readers if authors could include an additional example of higher
dimensional visualization, say a simple regular polytope in 5-D that is visualized by
recursively applying the proposed method (3) two times. I know it would lead to a
highly complex image, but the complexity itself would convey the challenge of the
n-D visualization and the power of the proposed methods to the readers.
We have added a 5D example in the paper.
Much more minor points.
p.3: The matrix T is (n+1)-D, i.e., n-D homogeneous coordinates, while the matrix S is
written in n-D. This would be confusing for readers who are unfamiliar with
computer graphics. A comment would be desirable.
We have added a comment to highlight this.
p.12, line 309: I could not understand the sentence 'This is necessary because...'
and the footnote 8.
We have expanded the sentence to clarify the meaning here.
REVIEWER 2 (ANONYMOUS)
No changes needed
REVIEWER 3 (ANONYMOUS)
Comments for the Author
On the other side, although the paper presents some relevant results, their
applicability at its current state is quite limited. First of all, the paper promises a
system to visualize data with high dimensionality in 3D (Actually, 2D, as this is
displayed on a sheet of paper or on a computer screen), but the higher
dimensionality the paper demonstrates is 4D, with examples that are already easy
to visualize with common tools. Throughout the paper, there is not a single
example with initial dimensionality equal or higher than 5D. For this paper to be
truly relevant, more complex dimensions should be explored and demonstrated.
We have added a 5D example that was generated externally and explained
why our current prototype is limited to 4D.
The second serious drawback I can see about this paper at this state, is that all
interesting examples are limited to LoD visualization, with the only exception of
figure 7, which is a trivial example, and Figure 8, which is taken from somewhere
else and that it is not much relevant here. Please, provide a larger variety of
examples and applications, besides LoD.
We have added examples of all projections using time as well.
The third drawback, in my point of view, is that LoD, the main application
demonstrated in the paper, is mainly used in practice for two purposes: to generate
different versions of the model for simulations (like the LoDs in CityGML), or for
visualization, where adaptive LoD techniques are often used to select a different
model (LoD level) according to some criterion like viewer’s distance. Now, this is
basically a matter of selecting the appropriate model, and displaying it, so showing
all LoD models can be done, either side by side, or with an animation in the case of
a continuous LoD. I really do not see the need of a visualization like that, which can
become really confusing for a complex object. Perhaps I might be wrong about this
last part, so please, provide an example of LoD for a really complex building, also
with several LoD levels. Actually, as the paper should be generic, other LoD
applications, like for a tree or a character model, should also be shown and
compared (observe here I am referring to more types of LoD examples, not just
simple houses, while in the previous paragraph I referred to non-LoD examples). By
the way, how would this technique do to visualize a continuous LoD model, like the
ones developed in the past decade (see missing reference below).
Unfortunately we do not yet have tools to generate very complex 4D models at
present, so we cannot fully fulfil this request. However, we have added a new
example using 3D+time which is more complex than the previously provided
one with 3D+LoD.
Regarding the usefulness of higher-dimensional models in general, we believe
that the strongest use case for them is in terms of data management. See:
Peter van Oosterom and Jantien Stoter. 5D data modelling: Full integration of
2D/3D space, time and scale dimensions. In Sara Irina Fabrikant, Tumasch
Reichenbacher, Marc van Kreveld, and Christoph Schlieder, editors,
Geographic Information Science: 6th International Conference, GIScience
2010, Zurich, Switzerland, September 14-17, 2010. Proceedings, pages 311–324.
Springer Berlin Heidelberg, 2010.
And as long as such models are constructed and used, we believe that there is
also a need to view them directly, not least in order to develop, debug and test
new algorithms.
Other comments: * please, add the following seminal reference for LoD: Level of
Detail for 3D Graphics, David Luebke, Martin Reddy, Jonathan D. Cohen, Amitabh
Varshney, Benjamin Watson and Robert Huebner, ISBN: 978-1-55860-838-2
We have added this reference.
• Figures 8 and 11 seem out of place, please, remove.
They have been removed.
• Figure 10 is trivial. Can you provide a more complex example?
This figure has been removed and a more complex 5D example has been
added.
• The word “cumbersome” in the last sentence before the references looks
awkward. Probably, saying “not efficient” would have a better impact on the
readers.
This has been changed.
• Please, remove the trivial explanation about translations and scaling matrices,
these are well known. Also, rotations in 4D are also well known, through
homogeneous coordinates, so, in any case, provide an n-dimensional
formulation. However, I think this part is also trivial and can be safely removed.
We would strongly prefer to keep this part in order to make this paper as self-contained as possible. While we do agree that these operations are well
known in 3D, their dimension-independent formulations are not
straightforward.
REVIEWER 4 (SENT BY EMAIL AFTERWARDS)
The initial motivation is for analyzing 'geographic phenomena that occur in space,
time, and scale' but there is little indication of the way the topics treated in the
manuscript are related to geographic phenomena. In the book of mine cited in the
text, there is an example of geological information about prairie-forest data in an
area of the USA, showing three-dimensional representations changing in time, a
four-dimensional aggregate that can be projected to three-space in various ways to
emphasize what is happening in a particular subregion, for example along a
riverbed. If that is the sort of application the authors have in mind, it should be
presented with more explanation.
This is not at all what we have in mind here. Our intention is not to show or
emphasise certain parts of a higher-dimensional model but instead to show it
all in a manner that remains reasonably intuitive. In a 3D+time example this
would include all objects existing at all points in time as well as their
corresponding vertices, edges and faces.
With respect to the modes of projection mentioned in the article, the 'long-axis
method' is the least familiar and the house example could be interesting if it were
explained more thoroughly. What is revealed by progressively projecting portions of
a structure to points by cone procedures?
We believe that this is related to the previous point. We do not aim to apply this
kind of operation to only parts of a given structure.
This is one place where there is an example, but the utility of the process is unclear.
The procedure for building up a projection of a hypercube progressively dimension
by dimension is useful certainly but it isn't clear how this relates to the 'house'
example.
Also related to the previous points. Hopefully it will be clearer now with the
added examples.
The standard orthographic projections described in the article do not seem to
include anything that is not in the existing literature cited by the authors.
The cited literature considers only the 4D to 3D case. Our explanation covers
the more general nD to (n-1)D case.
Projecting to a sphere and then applying stereographic projection from a point
could be quite useful when the data points are already close to a sphere. If not,
then central projection from the center of a sphere followed by stereographic
projecting from different points, or a one-parameter family of points would involve
distortions that could be difficult to analyze. Without some examples, other than
familiar regular polytopes, it is difficult to see how this method would yield useful
results.
In 4D to 3D, this type of projection results in similar XYZ axes and an inwards-outwards fourth axis, so we do not believe that the distortions become too
difficult to visualise. Hopefully this will be more persuasive with the added
examples.
One feature of the manuscript that is distinctive and possibly quite useful is the
extensive bibliography, listing items from a variety of different fields that usually do
not appear together. There are some errors however, for example a book by
'Banchoff' and subsequently a reference to a joint article by a misspelled
'Blanchoff'.
Sorry for the mistake. We have corrected this reference.
I am unfamiliar with the 'Metal graphics API' and the description of that system is
not helpful in showing why it is particularly well adapted to the project of modelling
geographical data.
This library is not especially suited for geographical data, but it is very well
suited for high-performance graphics computing in general. We have added
some text to explain this better.
" | Here is a paper. Please give your review comments after reading it. |
744 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. The availability of large databases containing high resolution three-dimensional (3D) models of proteins in conjunction with functional annotation allows the exploitation of advanced supervised machine learning techniques for automatic protein function prediction.</ns0:p><ns0:p>Methods. In this work, novel shape features are extracted representing protein structure in the form of local (per amino acid) distributions of angles and amino acid distances. Each of the multi-channel feature maps is introduced into a deep convolutional neural network (CNN) for function prediction and the outputs are fused through Support Vector Machines (SVM) or a correlation-based k-nearest neighbor classifier. Two different architectures are investigated employing either one CNN per multi-channel feature set, or one CNN per image channel.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results.</ns0:head><ns0:p>Cross validation experiments on enzymes (n = 44,661) from the PDB database achieved 90.1% correct classification demonstrating the effectiveness of the proposed method for automatic function annotation of protein structures.</ns0:p><ns0:p>Discussion. The automatic prediction of protein function can provide quick annotations on extensive datasets opening the path for relevant applications, such as pharmacological target identification.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Research in metagenomics led to a huge increase of protein databases and discovery of new protein families <ns0:ref type='bibr' target='#b11'>(Godzik, 2011)</ns0:ref>. While the number of newly discovered, but possibly redundant, protein sequences rapidly increases, experimentally verified functional annotation of whole genomes remains limited. Protein structure, i.e. the 3D configuration of the chain of amino acids, is a very good predictor of protein function, and in fact a more reliable predictor than protein sequence because it is far more conversed in nature <ns0:ref type='bibr' target='#b14'>(Illergård et al., 2009)</ns0:ref>.</ns0:p><ns0:p>By now, the number of proteins with functional annotation and experimentally predicted structure of their native state (e.g. by NMR spectroscopy or X-ray crystallography) is adequately large to allow learning training models that will be able to perform automatic functional annotation of unannotated proteins. Also, as the number of protein sequences rapidly grows, the overwhelming majority of proteins can only be annotated computationally. In this work enzymatic structures from the Protein Data Bank (PDB) are considered and the enzyme commission (EC) number is used as a fairly complete framework for annotation. The EC number is a numerical classification scheme based on the chemical reactions the enzymes catalyze, proven by experimental evidence <ns0:ref type='bibr'>(web, 1992)</ns0:ref>.</ns0:p><ns0:p>There have been plenty machine learning approaches in the literature for automatic enzyme annotation.</ns0:p><ns0:p>A systematic review on the utility and inference of various computational methods for functional characterization is presented in <ns0:ref type='bibr' target='#b27'>(Sharma and Garg, 2014)</ns0:ref>, while a comparison of machine learning approaches can be found in <ns0:ref type='bibr' target='#b37'>(Yadav and Tiwari, 2015)</ns0:ref>. Most methods use features derived from the amino acid sequence and apply Support Vector Machines (SVM) <ns0:ref type='bibr' target='#b6'>(Cai et al., 2003)</ns0:ref> <ns0:ref type='bibr' target='#b12'>(Han et al., 2004)</ns0:ref> <ns0:ref type='bibr' target='#b10'>(Dobson and Doig, 2005)</ns0:ref> <ns0:ref type='bibr' target='#b8'>(Chen et al., 2006)</ns0:ref> <ns0:ref type='bibr' target='#b38'>(Zhou et al., 2007)</ns0:ref> <ns0:ref type='bibr' target='#b22'>(Lu et al., 2007)</ns0:ref> <ns0:ref type='bibr' target='#b19'>(Lee et al., 2009)</ns0:ref> <ns0:ref type='bibr' target='#b26'>(Qiu et al., 2010)</ns0:ref> <ns0:ref type='bibr' target='#b35'>(Wang et al., 2010)</ns0:ref> <ns0:ref type='bibr' target='#b36'>(Wang et al., 2011)</ns0:ref> <ns0:ref type='bibr' target='#b1'>(Amidi et al., 2016)</ns0:ref>, k-Nearest Neighbor (kNN) classifier <ns0:ref type='bibr' target='#b13'>(Huang et al., 2007)</ns0:ref> <ns0:ref type='bibr' target='#b28'>(Shen and Chou, 2007a)</ns0:ref> <ns0:ref type='bibr' target='#b24'>(Nasibov and Kandemir-Cavas, 2009a)</ns0:ref>, classification trees/forests <ns0:ref type='bibr' target='#b19'>(Lee et al., 2009)</ns0:ref> <ns0:ref type='bibr' target='#b17'>(Kumar and Choudhary, 2012a)</ns0:ref> <ns0:ref type='bibr' target='#b23'>(Nagao et al., 2014)</ns0:ref> <ns0:ref type='bibr' target='#b37'>(Yadav and Tiwari, 2015)</ns0:ref>, and neural networks <ns0:ref type='bibr' target='#b33'>(Volpato et al., 2013)</ns0:ref>. 
In <ns0:ref type='bibr' target='#b3'>(Borgwardt et al., 2005)</ns0:ref> sequential, structural and chemical information was combined into one graph model of proteins which was further classified by SVM. There has been little</ns0:p><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:08:12536:1:2:NEW 31 Jan 2017)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>work in the literature on automatic enzyme annotation based only on structural information. A Bayesian approach <ns0:ref type='bibr' target='#b4'>(Borro et al., 2006)</ns0:ref> for enzyme classification using structure derived properties achieved 45% accuracy. <ns0:ref type='bibr' target='#b1'>Amidi et al. (2016)</ns0:ref> obtained 73.5% classification accuracy on 39,251 proteins from the PDB database when they used only structural information.</ns0:p><ns0:p>In the past few years, deep learning techniques, and particularly convolutional neural networks, have rapidly become the tool of choice for tackling many challenging computer vision tasks, such as image classification <ns0:ref type='bibr' target='#b16'>(Krizhevsky et al., 2012)</ns0:ref>. The main advantage of deep learning techniques is the automatic exploitation of features and tuning of performance in a seamless fashion, that simplifies the conventional image analysis pipelines. CNNs have recently been used for protein secondary structure prediction <ns0:ref type='bibr' target='#b30'>(Spencer et al., 2015)</ns0:ref> <ns0:ref type='bibr' target='#b20'>(Li and Shibuya, 2015)</ns0:ref>. In <ns0:ref type='bibr' target='#b30'>(Spencer et al., 2015)</ns0:ref> prediction was based on the position-specific scoring matrix profile (generated by PSI-BLAST), whereas in <ns0:ref type='bibr' target='#b20'>(Li and Shibuya, 2015)</ns0:ref> 1D convolution was applied on features related to the amino acid sequence. Also a deep CNN architecture was proposed in <ns0:ref type='bibr' target='#b21'>(Lin et al., 2016)</ns0:ref> to predict protein properties. This architecture used a multilayer shift-and-stitch technique to generate fully dense per-position predictions on protein sequences.</ns0:p><ns0:p>To the best of authors's knowledge, deep CNNs have not been used for prediction of protein function so far.</ns0:p><ns0:p>In this work the author exploits experimentally acquired structural information of enzymes and apply deep learning techniques in order to produce models that predict enzymatic function based on structure.</ns0:p><ns0:p>Novel geometrical descriptors are introduced and the efficacy of the approach is illustrated by classifying a dataset of 44,661 enzymes from the PDB database into the l = 6 primary categories: oxidoreductases (EC1), transferases (EC2), hydrolases (EC3), lyases (EC4), isomerases (EC5), ligases (EC6). The novelty of the proposed method lies first in the representation of the 3D structure as a 'bag of atoms (amino acids)' which are characterized by geometric properties, and secondly in the exploitation of the extracted feature maps by deep CNNs. Although assessed for enzymatic function prediction, the method is not based on enzyme-specific properties and therefore can be applied (after re-training) for automatic large-scale annotation of other 3D molecular structures, thus providing a useful tool for data-driven analysis. In the following sections more details on the implemented framework are first provided, including the representation of protein structure, the CNN architecture and the fusion process of the network outputs.</ns0:p><ns0:p>Then the evaluation framework and the obtained results are presented, followed by some discussion and conclusions.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>METHODS</ns0:head><ns0:p>Data-driven CNN models tend to be domain agnostic and attempt to learn additional feature bases that cannot be represented through any handcrafted features. It is hypothesized that by combining 'amino acid specific' descriptors with the recent advances in deep learning we can boost model performance. The main advantage of the proposed method is that it exploits complementarity in both data representation phase and learning phase. Regarding the former, the method uses an enriched geometric descriptor that combines local shape features with features characterizing the interaction of amino acids on this 3D spatial model. Shape representation is encoded by the local (per amino acid type) distribution of torsion angles <ns0:ref type='bibr' target='#b2'>(Bermejo et al., 2012)</ns0:ref>. Amino acid interactions are encoded by the distribution of pairwise amino acid distances. While the torsion angles and distance maps are usually calculated and plotted for the whole protein <ns0:ref type='bibr' target='#b2'>(Bermejo et al., 2012)</ns0:ref>, in the current approach they are extracted for each amino acid type separately, therefore characterizing local interactions. Thus, the protein structure is represented as a set of multi-channel images which can be introduced into any machine learning scheme designed for fusing multiple 2D feature maps. Moreover, it should be noted that the utilized geometric descriptors are invariant to global translation and rotation of the protein, therefore previous protein alignment is not required.</ns0:p><ns0:p>Our method constructs an ensemble of deep CNN models that are complementary to each other.</ns0:p><ns0:p>The deep network outputs are combined and introduced into a correlation-based k-nearest neighbor (kNN) classifier for function prediction. For comparison purposes, SVM were also implemented for final classification. Two system architectures are investigated in which the multiple image channels are considered jointly or independently, as will be described next. Both architectures use the same CNN structure (within the highlighted boxes) which is illustrated in Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head>2/11</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:08:12536:1:2:NEW 31 Jan 2017)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science The network includes layers performing convolution (Conv), batch normalization (Bnorm), rectified linear unit (ReLU) activation, dropout (optionally) and max-pooling (Pool). Details are provided in section 2.2.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Representation of protein structure</ns0:head><ns0:p>The building blocks of proteins are amino acids which are linked together by peptide bonds into a chain. The polypeptide folds into a specific conformation depending on the interactions between its amino acid side chains which have different chemistries. Many conformations of this chain are possible due to the rotation of the chain about each carbon (Cα) atom. For structure representation, two sets of feature maps were used. They express the shape of the protein backbone and the distances between the protein building blocks (amino acids). The use of global rotation and translation invariant features is preferred over features based on the Cartesian coordinates of atoms, in order to avoid prior protein alignment, which is a bottleneck in the case of large datasets with proteins of several classes (unknown reference template space). The feature maps were extracted for every amino acid being present in the dataset including the 20 standard amino acids, as well as asparagine/aspartic (ASX), glutamine/glutamic (GLX), and all amino acids with unidentified/unknown residues (UNK), resulting in m = 23 amino acids in total.</ns0:p><ns0:p>Torsion angles density. The shape of the protein backbone was expressed by the two torsion angles of the polypeptide chain which describe the rotations of the polypeptide backbone around the bonds between N-Cα (angle φ ) and Cα-C (angle ψ). All amino acids in the protein were grouped according to their type and the density of the torsion angles φ and ψ (∈ [−180, 180]) was estimated for each amino acid type based on the 2D sample histogram of the angles (also known as Ramachandran diagram) using equally sized bins (number of bins h A = 19). The histograms were not normalized by the number of instances, therefore their values indicate the frequency of each amino acid within the polypeptide chain. In the obtained feature maps (X A ), with dimensionality [h A × h A × m], the number of amino acids (m) corresponds to the number of channels. Smoothness in the density function was achieved by moving average filtering, i.e. by convolving the density map with a 2D gaussian kernel (σ = 0.5).</ns0:p><ns0:p>Density of amino acid distances. For each amino acid a i , i = 1, .., m, the distances to amino acid a j , j = 1, .., m, in the protein are calculated based on the coordinates of the Cα atoms for the residues and stored as an array d i j . Since the size of the proteins varies significantly, the length of the array d i j is different across proteins, thus not directly comparable. In order to standardize measurements, the sample histogram of d i j is extracted (using equally sized bins) and smoothed by convolution with a 1D gaussian kernel (σ = 0.5). The processing of all pairs of amino acids resulted in feature maps (X D ) of dimensionality [m × m × h D ], where h D = 8 is the number of histogram bins (considered as the number of channels in this case).</ns0:p></ns0:div>
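A compact way to see how these two descriptors are built is the following Python sketch. It is an illustration based on the description above rather than the authors' MATLAB code; the residue-type ordering, the [5, 40] distance range read off Fig. 5 and the use of scipy's Gaussian filters for the σ = 0.5 smoothing are assumptions made for the example.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, gaussian_filter1d

AMINO_ACIDS = ['ALA', 'ARG', 'ASN', 'ASP', 'ASX', 'CYS', 'GLN', 'GLU', 'GLX', 'GLY',
               'HIS', 'ILE', 'LEU', 'LYS', 'MET', 'PHE', 'PRO', 'SER', 'THR', 'TRP',
               'TYR', 'UNK', 'VAL']                 # the m = 23 residue types (order illustrative)
M, H_A, H_D = len(AMINO_ACIDS), 19, 8

def torsion_feature_map(residues):
    """X_A: one unnormalised 19 x 19 phi/psi histogram per residue type, lightly smoothed.

    `residues` is a list of (type, phi, psi, ca_coords) tuples for one protein.
    """
    x_a = np.zeros((H_A, H_A, M))
    edges = np.linspace(-180.0, 180.0, H_A + 1)
    for aa, phi, psi, _ in residues:
        k = AMINO_ACIDS.index(aa)
        hist, _, _ = np.histogram2d([phi], [psi], bins=[edges, edges])
        x_a[:, :, k] += hist
    for k in range(M):                              # smoothing with a Gaussian kernel, sigma = 0.5
        x_a[:, :, k] = gaussian_filter(x_a[:, :, k], sigma=0.5)
    return x_a

def distance_feature_map(residues, d_range=(5.0, 40.0)):
    """X_D: an 8-bin histogram of C-alpha distances for every ordered pair of residue types."""
    x_d = np.zeros((M, M, H_D))
    coords = np.array([ca for *_, ca in residues])
    types = [AMINO_ACIDS.index(aa) for aa, *_ in residues]
    edges = np.linspace(d_range[0], d_range[1], H_D + 1)
    for i, ti in enumerate(types):
        for j, tj in enumerate(types):
            if i == j:
                continue
            d = np.linalg.norm(coords[i] - coords[j])
            b = int(np.clip(np.searchsorted(edges, d) - 1, 0, H_D - 1))
            x_d[ti, tj, b] += 1
    for a in range(M):
        for b in range(M):                          # 1D smoothing along the histogram bins
            x_d[a, b, :] = gaussian_filter1d(x_d[a, b, :], sigma=0.5)
    return x_d
```

Both maps are histograms of frequencies rather than normalised densities, as stated above, so rare residue types keep low values and are naturally down-weighted by the subsequent max-pooling.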
<ns0:div><ns0:p>Training and testing stage of each CNN. The output of each CNN is a vector of probabilities, one for each of the l possible enzymatic classes. The CNN performance can be measured by a loss function which assigns a penalty to classification errors. The CNN parameters are learned to minimize this loss averaged over the annotated (training) samples. The softmaxloss function (i.e. the softmax operator followed by the logistic loss) is applied to predict the probability distribution over categories. Optimization was based on an implementation of stochastic gradient descent. At the testing stage, the network outputs after softmax normalization are used as class probabilities.</ns0:p></ns0:div>
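As a minimal illustration of the softmaxloss objective described above (the softmax operator followed by the logistic loss), here is a plain numpy sketch written for this text; it is not taken from the MatConvNet implementation used in the paper:

```python
import numpy as np

def softmax(z):
    """Turn final-layer class scores into class probabilities."""
    e = np.exp(z - z.max(axis=1, keepdims=True))   # subtract the row maximum for numerical stability
    return e / e.sum(axis=1, keepdims=True)

def softmax_log_loss(scores, labels):
    """Negative log-probability of the annotated class, averaged over the batch."""
    p = softmax(scores)
    return -np.mean(np.log(p[np.arange(len(labels)), labels]))

scores = np.array([[2.0, 0.5, -1.0, 0.1, 0.0, -0.5]])   # one sample, l = 6 enzymatic classes
print(softmax(scores), softmax_log_loss(scores, np.array([0])))
```

During testing, only the softmax step is applied, and the resulting probabilities are what the fusion stage consumes.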
<ns0:div><ns0:head>Computer Science</ns0:head></ns0:div>
<ns0:div><ns0:head n='2.3'>Fusion of CNN outputs using two different architectures</ns0:head><ns0:p>Two fusion strategies were implemented. In the first strategy (Architecture 1) the two feature sets, X A and X D , are each introduced into a CNN, which performs convolution at all channels, and then the l class probabilities produced for each feature set are combined into a feature vector of length l * 2. In the second strategy (Architecture 2) , each one of the (m = 23 or h D = 8) channels of each feature set is introduced independently into a CNN and the obtained class probabilities are concatenated into a vector of l * m features for X A and l * h D features for X D , respectively. These two feature vectors are further combined into a single vector of length l * (m + h D ) (=186). For both architectures, kNN classification was applied for final class prediction using as distance measure between two feature vectors, x 1 and x 2 , the metric 1 − cor(x 1 , x 2 ), where cor is the sample Spearman's rank correlation. The value k = 12 was selected for all experiments. For comparison, fusion was also performed with linear SVM classification <ns0:ref type='bibr' target='#b7'>(Chang and Lin, 2011)</ns0:ref>. The code was developed in MATLAB environment and the implementation of CNNs was based on MatConvNet <ns0:ref type='bibr' target='#b32'>(Vedaldi and Lenc, 2015)</ns0:ref>.</ns0:p></ns0:div>
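The decision-level fusion can be sketched as follows. This is an illustrative Python version of the correlation-based kNN step only: the 1 − Spearman distance and k = 12 come from the text, while the helper name and the toy data are made up for the example (in Architecture 2 the input vectors would be the concatenated CNN probabilities of length l·(m + h_D) = 186).

```python
import numpy as np
from scipy.stats import spearmanr

def correlation_knn_predict(train_x, train_y, test_x, k=12):
    """kNN prediction using 1 - Spearman rank correlation as the distance between fused CNN outputs."""
    predictions = []
    for x in test_x:
        d = []
        for t in train_x:
            rho, _ = spearmanr(x, t)               # Spearman correlation between probability vectors
            d.append(1.0 - rho)
        neighbours = train_y[np.argsort(d)[:k]]     # labels of the k closest training proteins
        predictions.append(np.argmax(np.bincount(neighbours)))
    return np.array(predictions)

# Tiny illustration with random "probability" vectors and 3 classes.
rng = np.random.default_rng(0)
train_x, train_y = rng.random((30, 12)), rng.integers(0, 3, 30)
test_x = rng.random((5, 12))
print(correlation_knn_predict(train_x, train_y, test_x, k=5))
```

The linear SVM used for comparison would simply replace this step, taking the same concatenated probability vectors as input features.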
<ns0:div><ns0:head n='3'>RESULTS</ns0:head><ns0:p>The protein structures (n = 44, 661) were collected from the PDB. Only enzymes that occur in a single class were processed, whereas enzymes that perform multiple reactions and are hence associated with multiple enzymatic functions were excluded. Since protein sequence was not examined during feature extraction, all enzymes were considered without other exclusion criteria, such as small sequence length or homology bias. The dataset was unbalanced in respect to the different classes. The number of samples per class is shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. The dataset was split into 5 folds. Four folds were used for training and one for testing. The training samples were used to learn the parameters of the network (such as the weights of the convolution filters), as well as the parameters of the subsequent classifiers used during fusion (SVM or kNN model). Once the network was trained, the class probabilities were obtained for the testing samples, which were introduced into the trained SVM or kNN classifier for final prediction. The SVM model was linear, thus didn't require any hyper-parameter optimization. Due to lack of hyper-parameters, no extra validation set was necessary. On the side, the author examined also non-linear SVM with gaussian radial basis function kernel, but didn't observe any significant improvement, thus the corresponding results are not reported.</ns0:p><ns0:p>A classification result was deemed a true positive if the match with the highest probability was in first place in a rank-ordered list. The classification accuracy (percentage of correctly classified samples over all samples) was calculated for each fold and then results were averaged across the 5 folds.</ns0:p></ns0:div>
<ns0:div><ns0:head>4/11</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:08:12536:1:2:NEW 31 Jan 2017)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science 3.8 9.1 7.2 78.5 1.1 0.4 3.7 8.4 6.9 80.7 0.1 0.1 EC5 6.1 11.5 10.7 2.3 68.5 1.0 3.5 9.7 8.6 0.9 76.9 0.3 EC6 4.9 18.8 13.5 1.0 1.3 60.6 4.2 14.1 10.3 0.7 0.3 70.5</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.1'>Classification performance</ns0:head><ns0:p>Common options for the network were used, except of the size of the filters which was adjusted to the dimensionality of the input data. Specifically, the convolutional layer used neurons with receptive field of size 5 for the first two layers and 2 for the third layer. The stride (specifying the sliding of the filter) was always 1. The number of filters was 20, 50 and 500 for the three layers, respectively, and the learning rate 0.001. The batch size was selected according to information amount (dimensionality) of input. It was assumed (and verified experimentally) that for more complicated the data, a larger number of samples is required for learning. One thousand samples per batch were used for Architecture 1, which takes as input all channels, and 100 samples per batch for Architecture 2, in which an independent CNN is trained for each channel. The dropout rate was 20%. The number of epochs was adjusted to the rate of convergence for each architecture (300 for Architecture 1 and 150 for Architecture 2).</ns0:p><ns0:p>The average classification accuracy over the 5 folds for each enzymatic class is shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> for both fusion schemes, whereas the analytic distribution of samples in each class is shown in the form of confusion matrices in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>.</ns0:p><ns0:p>In order to further assess the performance of the deep networks, receiver operating characteristic (ROC) curves and area-under-the-curve (AUC) values were calculated for each class for the selected scheme (based on kNN and Architecture 2), as shown in Fig. <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>). The calculations were performed based on the final decision scores in a one-versus-rest classification scheme. The decision scores for the kNN classifier reflected the ratio of the within-class neighbors over total number of neighbors. The ROC curve represents the true positive rate against the false positive rate and was produced by averaging over the five folds of the cross-validation experiments.</ns0:p></ns0:div>
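For readers who want to reproduce the per-feature-set network, the layer sequence and sizes quoted above can be expressed as follows. This is a hedged PyTorch re-expression written for this text (the original implementation is MATLAB/MatConvNet); padding is not specified in the paper, so the sketch assumes none, which fits the [23 × 23] distance maps with h_D = 8 channels (the [19 × 19] torsion-angle maps would need padding or slightly different pooling arithmetic), and dropout defaults to 0 because the paper applies the 20% dropout only to the X_A feature set.

```python
import torch
import torch.nn as nn

class ProteinCNN(nn.Module):
    """Three blocks of convolution, batch normalisation, ReLU, optional dropout and 2x2 max-pooling,
    followed by a fully connected layer giving scores for the l = 6 enzymatic classes."""

    def __init__(self, in_channels=8, n_classes=6, dropout=0.0):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 20, kernel_size=5, stride=1),
            nn.BatchNorm2d(20), nn.ReLU(), nn.Dropout(dropout), nn.MaxPool2d(2),
            nn.Conv2d(20, 50, kernel_size=5, stride=1),
            nn.BatchNorm2d(50), nn.ReLU(), nn.Dropout(dropout), nn.MaxPool2d(2),
            nn.Conv2d(50, 500, kernel_size=2, stride=1),
            nn.BatchNorm2d(500), nn.ReLU(),
        )
        self.classifier = nn.LazyLinear(n_classes)   # fully connected layer to the class scores

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(start_dim=1))

model = ProteinCNN()
out = model(torch.randn(4, 8, 23, 23))               # a batch of 4 distance feature maps
print(out.shape)                                     # torch.Size([4, 6])
```

Training would then pair this module with a cross-entropy loss and stochastic gradient descent at the learning rate of 0.001 quoted above.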
<ns0:div><ns0:p>Effect of sequence redundancy and sample size. Analysis of protein datasets is often performed after removal of redundancy, such that the remaining entries do not overreach a pre-arranged threshold of sequence identity. In this particular work the author chose not to employ data filtering strategies, since the pattern analysis method is based on structure similarity and not sequence similarity. Thus, even if proteins are present with high sequence identity, the distance metrics during classification do not exploit it. Based on the (by now) established opinion that structure is far more conserved than sequence in nature <ns0:ref type='bibr' target='#b14'>(Illergård et al., 2009)</ns0:ref>, the aim was not to jeopardize the dataset by losing reliable structural entries over a sequence-based threshold cutoff. Also, only X-ray crystallography data were used; such data represent a 'snapshot' of a given protein's 3D structure. In order not to miss the multiple poses that the same protein may adopt in different crystallography experiments, sequence/threshold metrics were not applied to remove sequence redundancy in the presented results.</ns0:p><ns0:p>Nevertheless, the performance of the method was also investigated on a non-redundant dataset and the classification accuracy was compared with that of the original (redundant) dataset randomly subsampled to include an equal number of proteins. This experiment allows assessing the effect of redundancy under the same conditions (number of samples). Since inference in deep networks requires the estimation of a very large number of parameters, a large amount of training data is required and therefore very strict filtering strategies could not be applied. A dataset (the pdbaanr) pre-compiled by PISCES <ns0:ref type='bibr' target='#b34'>(Wang and Dunbrack, 2003)</ns0:ref> was used that includes only non-redundant sequences across all PDB files (n = 23242 proteins, i.e. half the size of the original dataset). Representative chains are selected based on the highest resolution structure available and then the best R-values. Non-X-ray structures are considered after X-ray structures. As a note, the author also explored the Leaf algorithm <ns0:ref type='bibr' target='#b5'>(Bull et al., 2013)</ns0:ref>, which is especially designed to maximize the number of retained proteins and has shown improvement over PISCES. However, the computational cost was too high (possibly due to the large number of samples) and the analysis was not completed. The classification performance was assessed on Architecture 2 by using 80% of the samples for training and 20% of the samples for testing. For the non-redundant dataset the accuracy was 79.3% for kNN and 75.5% for linear-SVM, whereas for the sub-sampled dataset it was 85.7% for kNN and 83.2% for linear-SVM. The results show that for the selected classifier (kNN), the accuracy drops by 4.4% when the number of samples is reduced to half, and it drops by an additional 6.4% if the utilized samples are non-redundant. Although the decrease in performance is not inconsiderable, the achieved accuracy indicates that structural similarity is an important criterion for the prediction of enzymatic function.</ns0:p></ns0:div>
<ns0:div><ns0:head>6/11</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2016:08:12536:1:2:NEW 31 Jan 2017)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science For an amino acid a, yellow means that the number of occurrences of the specific value (φ , ψ) in all observations of a (within and across proteins) is at least equal to the number of proteins. On the opposite, blue indicates a small number of occurrences, and is observed for rare amino acids or unfavorable conformations.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Structural representation and complementarity of features</ns0:head><ns0:p>Next, some examples of the extracted feature maps are illustrated, in order to provide some insight on the representation of protein's 3D structure. The average (over all samples) 2D histogram of torsion angles for each amino acid is shown in Fig. <ns0:ref type='figure' target='#fig_3'>3</ns0:ref>. The horizontal and vertical axes at each plot represent torsion angles (in [−180 • , 180 • ]). It can be observed that the non-standard (ASX, GLX, UNK) amino acids are very rare, thus their density maps have nearly zero values. The same color scale was used in all plots to make feature maps comparable, as 'seen' by the deep network. Since the histograms are (on purpose) not normalized for each sample, rare amino acids will have few visible features and due to the 'max-pooling operator'</ns0:p><ns0:p>will not be selected as significant features. The potential of these feature maps to differentiate between classes is illustrated in Fig. <ns0:ref type='figure' target='#fig_5'>4</ns0:ref> for three randomly selected amino acids (ALA, GLY, TYR). Overall the spatial patterns in each class are distinctive and form a multi-dimensional signature for each sample. As a note, before training of the CNN ensemble data standardization is performed by subtracting the mean density map. The same map is used to standardize the test sample during assessment.</ns0:p><ns0:p>Examples of features maps representing amino acid distances (X D ) are illustrated in figures 1 and 5. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p><ns0:p>Computer Science The discrimination ability and complementary of the extracted features in respect to classification performance is shown in Table <ns0:ref type='table' target='#tab_2'>3</ns0:ref>. It can be observed that the relative position of amino acids and their arrangement in space (features X D ) predict enzymatic function better than the backbone conformation (features X A ). Also, the fusion of network decisions based on correlation distance outperforms predictions from either network alone, but the difference is only marginal in respect to the predictions by X D . In all cases the differences in prediction for the performed experiments (during cross validation) was very small (usually standard deviation < 0.5%), indicating that the method is robust to variations in training examples.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>DISCUSSION</ns0:head><ns0:p>A deep CNN ensemble was presented that performs enzymatic function classification through fusion at the feature level and the decision level. The method has been applied for the prediction of the primary EC number and achieved 90.1% accuracy, which is a considerable improvement over the accuracy (73.5%) achieved in previous work <ns0:ref type='bibr' target='#b1'>(Amidi et al., 2016)</ns0:ref> when only structural information was incorporated.</ns0:p><ns0:p>Many methods have been proposed in the literature using different features and different classifiers. Nasibov and Kandemir-Cavas (2009b) obtained 95%-99% accuracy by applying kNN-based classification on 1200 enzymes based on their amino acid composition. <ns0:ref type='bibr' target='#b29'>Shen and Chou (2007b)</ns0:ref> fused results derived from the functional domain and evolution information and obtained 93.7% average accuracy on 9,832 enzymes. On the same dataset <ns0:ref type='bibr' target='#b36'>Wang et al. (2011)</ns0:ref> improved the accuracy (which ranged from 81% to 98% when predicting the first three EC digits) by using sequence encoding and SVM for hierarchy labels. Kumar and Choudhary (2012b) reported overall accuracy of 87.7% in predicting the main class for 4,731 enzymes using random forests. <ns0:ref type='bibr' target='#b33'>Volpato et al. (2013)</ns0:ref> applied neural networks on the full sequence and achieved 96% correct classification on 6,000 non-redundant proteins. Most of these works have been applied to a subset of enzymes and have not been tested for large-scale annotation. Also they incorporate sequence-based features.</ns0:p><ns0:p>Assessment of the relationship between function and structure <ns0:ref type='bibr' target='#b31'>(Todd et al., 2001)</ns0:ref> revealed 95% conservation of the fourth EC digit for proteins with up to 30% sequence identity. Similarly, <ns0:ref type='bibr' target='#b9'>Devos and Valencia (2000)</ns0:ref> concluded that enzymatic function is mostly conserved for the first digit of EC code whereas more detailed functional characteristics are poorly conserved. It is generally believed that as sequences diverge, 3D protein structure becomes a more reliable predictor than sequence, and that structure is far more conserved than sequence in nature <ns0:ref type='bibr' target='#b14'>(Illergård et al., 2009)</ns0:ref>. Thus, the focus of this study was to explore the predictive ability of 3D structure alone and provide a tool that can generalize in cases where sequence information is insufficient. Therefore the presented results are not directly comparable to the ones of previous methods which incorporate sequence information. If desired, the current approach can also be combined with sequence-related features; in such a case it is expected that classification accuracy would further increase.</ns0:p><ns0:p>A possible limitation of the proposed approach is that the extracted features do not capture the topological properties of the 3D structure. Due to the statistical nature of the implemented descriptors, calculated by considering the amino acids as elements in Euclidean space, connectivity information is not strictly retained. The author and colleagues recently started to investigate in parallel the predictive power of the original 3D structure, represented as a volumetric image, without the extraction of any statistical features.
Since the more detailed representation increased the dimensionality considerably, new ways are being explored to optimally incorporate the relationship between the structural units (amino-acids) in order not to impede the learning process. Manuscript to be reviewed</ns0:p><ns0:p>Computer Science</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>CONCLUSIONS</ns0:head><ns0:p>A method was presented that extracts shape features from the 3D protein geometry that are introduced into a deep CNN ensemble for enzymatic function prediction. The investigation of protein function based only on structure reveals relationships hidden at the sequence level and provides the foundation to build a better understanding of the molecular basis of biological complexity. Overall, the presented approach can provide quick protein function predictions on extensive datasets opening the path for relevant applications, such as pharmacological target identification. Future work includes application of the method for prediction of the hierarchical relation of function subcategories and annotation of enzymes up to the last digit of the enzyme classification system.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The deep CNN ensemble for protein classification. In this framework (Architecture 1) each multi-channel feature set is introduced to a CNN and results are combined by kNN or SVM classification. The network includes layers performing convolution (Conv), batch normalization (Bnorm), rectified linear unit (ReLU) activation, dropout (optionally) and max-pooling (Pool). Details are provided in section 2.2.</ns0:figDesc><ns0:graphic coords='4,143.10,63.79,410.83,198.42' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>2. 2</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Classification by deep CNNsFeature extraction stage of each CNN. The CNN architecture employs three computational blocks of consecutive convolutional, batch normalization, rectified linear unit (ReLU) activation, dropout (optionally) and max-pooling layers, and a fully-connected layer. The convolutional layer computes the output of neurons that are connected to local regions in the input in order to extract local features. It applies a 2D convolution between each of the input channels and a set of filters. The 2D activation maps are calculated by summing the results over all channels and then stacking the output of each filter to produce the output 3D volume. Batch normalization normalizes each channel of the feature map by averaging over spatial locations and batch instances. The ReLU layer applies an element-wise activation function, such as the max(0, x) thresholding at zero. The dropout layer is used to randomly drop units from the CNN during training and reduce overfitting. Dropout was used only for the X A feature set. The pooling layer performs a downsampling operation along the spatial dimensions in order to capture the most relevant global features with fixed length. The max operator was applied within a [2 × 2] neighborhood. The last layer is fully-connected and represents the class scores.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. ROC curves for each enzymatic class based on kNN and Architecture 2</ns0:figDesc><ns0:graphic coords='7,141.73,63.78,413.58,226.25' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Torsion angles density maps (Ramachandran plots) averaged over all samples for each of the 20 standard and 3 non-standard (ASX, GLX, UNK) amino acids. The horizontal and vertical axes at each plot correspond to φ and ψ angles and vary from −180 • (top left) to 180 • (right bottom). The color scale (blue to yellow) is in the range [0, 1].For an amino acid a, yellow means that the number of occurrences of the specific value (φ , ψ) in all observations of a (within and across proteins) is at least equal to the number of proteins. On the opposite, blue indicates a small number of occurrences, and is observed for rare amino acids or unfavorable conformations.</ns0:figDesc><ns0:graphic coords='8,172.75,63.78,351.54,278.09' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Fig. 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Fig. 1 illustrates an image slice across the 3rd dimension, i.e. one [m × m] channel, and as introduced in the 2D multichannel CNN, i.e. after mean-centering (over all samples). Fig. 5 illustrates image slices (of size [m × h D ]) across the 1st dimension averaged within each class. Fig. 5 has been produced by selecting the same amino acids as in Fig. 4 for easiness of comparison of the different feature representations. It can be noticed that for all classes most pairwise distances are concentrated in the last bin, correspondingto high distances between amino acids. Also, as expected there are differences in quantity of each amino acid, e.g. by focusing on the last bin, it can be seen that ALA and GLY have higher values than TYR in most classes. Moreover, the feature maps indicate clear differences between samples of different classes.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Ramachandran plots averaged across samples within each class. Rows correspond to amino acids and columns to functional classes. Three amino acids (ALA, GLY, TYR) are randomly selected for illustration of class separability. The horizontal and vertical axes at each plot correspond to the φ and ψ angles and vary from −180° (top left) to 180° (bottom right). The color scale (blue to yellow) is in the range [0, 1], as illustrated in Fig. 3.</ns0:figDesc><ns0:graphic coords='9,172.75,73.14,351.54,209.83' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Histograms of pairwise amino acid distances averaged across samples within each class. The same three amino acids (ALA, GLY, TYR) selected in Fig. 4 are also shown here. The horizontal axis at each plot represents the histogram bins (distance values in the range [5, 40]). The vertical axis at each plot corresponds to the 23 amino acids sorted alphabetically from top to bottom (ALA, ARG, ASN, ASP, ASX, CYS, GLN, MET, GLU, GLX, GLY, HIS, ILE, LEU, LYS, PHE, PRO, SER, THR, TRP, TYR, UNK, VAL). Thus each row shows the histogram of distances for a specific pair of amino acids (the one in the title and the one corresponding to the specific row). The color scale is the same for all plots and is shown at the bottom of the figure.</ns0:figDesc><ns0:graphic coords='9,141.73,379.40,413.57,233.85' type='bitmap' /></ns0:figure>
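These feature maps are built from histograms of Cα–Cα distances between every ordered pair of amino-acid types. A rough numpy sketch follows; the 8 bins over the range [5, 40] follow the description, whereas the coordinate handling is simplified and any subsequent smoothing of the histograms is omitted, as assumptions made only for this example.

```python
# Hypothetical sketch of the distance feature maps (X_D): for each pair of amino-acid
# types (i, j), an 8-bin histogram of the Calpha-Calpha distances between their residues.
import numpy as np

def distance_feature_maps(residue_types, ca_coords, amino_acids, n_bins=8, d_range=(5.0, 40.0)):
    """ca_coords: (n_residues, 3) Calpha coordinates; residue_types: 3-letter codes."""
    residue_types = np.asarray(residue_types)
    # All pairwise Euclidean distances between Calpha atoms.
    diffs = ca_coords[:, None, :] - ca_coords[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=-1))
    m = len(amino_acids)
    feats = np.zeros((m, m, n_bins))
    for i, ai in enumerate(amino_acids):
        for j, aj in enumerate(amino_acids):
            d = dists[np.ix_(residue_types == ai, residue_types == aj)].ravel()
            feats[i, j], _ = np.histogram(d, bins=n_bins, range=d_range)
    return feats

# Toy usage with random coordinates for a 100-residue chain.
rng = np.random.default_rng(2)
res = rng.choice(["ALA", "GLY", "TYR"], size=100)
coords = rng.normal(scale=15.0, size=(100, 3))
fmap = distance_feature_maps(res, coords, ["ALA", "GLY", "TYR"])
```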
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Cross-validation accuracy (in percentage) in predicting main enzymatic function using the deep CNN ensemble</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Architecture 1</ns0:cell><ns0:cell /><ns0:cell cols='2'>Architecture 2</ns0:cell></ns0:row><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell>Samples</ns0:cell><ns0:cell>linear-SVM</ns0:cell><ns0:cell>kNN</ns0:cell><ns0:cell>linear-SVM</ns0:cell><ns0:cell>kNN</ns0:cell></ns0:row><ns0:row><ns0:cell>EC1</ns0:cell><ns0:cell>8,075</ns0:cell><ns0:cell>86.4</ns0:cell><ns0:cell>88.8</ns0:cell><ns0:cell>91.2</ns0:cell><ns0:cell>90.6</ns0:cell></ns0:row><ns0:row><ns0:cell>EC2</ns0:cell><ns0:cell>12,739</ns0:cell><ns0:cell>84.0</ns0:cell><ns0:cell>87.5</ns0:cell><ns0:cell>88.0</ns0:cell><ns0:cell>91.7</ns0:cell></ns0:row><ns0:row><ns0:cell>EC3</ns0:cell><ns0:cell>17,024</ns0:cell><ns0:cell>88.7</ns0:cell><ns0:cell>91.3</ns0:cell><ns0:cell>89.6</ns0:cell><ns0:cell>94.0</ns0:cell></ns0:row><ns0:row><ns0:cell>EC4</ns0:cell><ns0:cell>3,114</ns0:cell><ns0:cell>79.4</ns0:cell><ns0:cell>78.4</ns0:cell><ns0:cell>84.9</ns0:cell><ns0:cell>80.7</ns0:cell></ns0:row><ns0:row><ns0:cell>EC5</ns0:cell><ns0:cell>1,905</ns0:cell><ns0:cell>69.5</ns0:cell><ns0:cell>68.6</ns0:cell><ns0:cell>79.6</ns0:cell><ns0:cell>77.0</ns0:cell></ns0:row><ns0:row><ns0:cell>EC6</ns0:cell><ns0:cell>1,804</ns0:cell><ns0:cell>61.0</ns0:cell><ns0:cell>60.6</ns0:cell><ns0:cell>73.6</ns0:cell><ns0:cell>70.4</ns0:cell></ns0:row><ns0:row><ns0:cell>Total</ns0:cell><ns0:cell>44,661</ns0:cell><ns0:cell>84.4</ns0:cell><ns0:cell>86.7</ns0:cell><ns0:cell>88.0</ns0:cell><ns0:cell>90.1</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Confusion matrices for each fusion scheme and classification technique</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Classifier</ns0:cell><ns0:cell /><ns0:cell cols='6'>prediction by Architecture 1</ns0:cell><ns0:cell cols='5'>prediction by Architecture 2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>6</ns0:cell></ns0:row><ns0:row><ns0:cell>linear-</ns0:cell><ns0:cell cols='7'>EC1 86.5 4.9 4.8 1.8 1.1 1.0</ns0:cell><ns0:cell cols='3'>91.2 2.9 1.9</ns0:cell><ns0:cell cols='2'>2.2 1.1 0.7</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>EC2</ns0:cell><ns0:cell cols='6'>3.4 84.0 7.9 1.9 1.2 1.6</ns0:cell><ns0:cell cols='5'>3.6 88.0 3.5 2.2 1.2 1.5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EC3</ns0:cell><ns0:cell cols='6'>2.4 6.1 88.7 1.0 0.8 1.0</ns0:cell><ns0:cell cols='5'>2.3 4.1 89.6 1.6 1.2 1.2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EC4</ns0:cell><ns0:cell cols='6'>4.4 7.3 5.7 79.4 1.8 1.3</ns0:cell><ns0:cell cols='5'>4.3 4.9 2.7 84.9 1.7 1.4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EC5</ns0:cell><ns0:cell cols='6'>7.0 10.1 9.0 2.9 69.4 1.6</ns0:cell><ns0:cell cols='5'>4.5 5.4 4.7 4.4 79.5 1.7</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EC6</ns0:cell><ns0:cell cols='6'>5.9 15.5 13.0 2.3 2.3 61.0</ns0:cell><ns0:cell cols='5'>5.5 10.3 5.4 3.3 1.9 73.6</ns0:cell></ns0:row><ns0:row><ns0:cell>kNN</ns0:cell><ns0:cell cols='7'>EC1 88.8 5.0 4.5 0.7 0.5 0.5</ns0:cell><ns0:cell cols='5'>90.6 4.4 4.6 0.3 0.1 0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EC2</ns0:cell><ns0:cell cols='6'>2.5 87.5 7.4 1.0 0.6 1.1</ns0:cell><ns0:cell cols='2'>1.7 91.7</ns0:cell><ns0:cell cols='3'>5.8 0.3 0.2 0.4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EC3</ns0:cell><ns0:cell cols='6'>1.8 5.4 91.3 0.5 0.4 0.6</ns0:cell><ns0:cell cols='5'>1.2 4.4 94.0 0.2 0.1 0.2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EC4</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_2'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Cross-validation accuracy (average ± standard deviation over 5 folds) for each feature set separately and after fusion of CNN outputs based on Architecture 2</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Feature sets</ns0:cell><ns0:cell>linear-SVM</ns0:cell><ns0:cell>kNN</ns0:cell></ns0:row><ns0:row><ns0:cell>X A (angles)</ns0:cell><ns0:cell>79.6 ± 0.5</ns0:cell><ns0:cell>82.4 ± 0.4</ns0:cell></ns0:row><ns0:row><ns0:cell>X D (distances)</ns0:cell><ns0:cell>88.1 ± 0.4</ns0:cell><ns0:cell>89.8 ± 0.2</ns0:cell></ns0:row><ns0:row><ns0:cell>Ensemble</ns0:cell><ns0:cell>88.0 ± 0.4</ns0:cell><ns0:cell>90.1 ± 0.2</ns0:cell></ns0:row></ns0:table></ns0:figure>
</ns0:body>
" | "First of all I would like to thank the editor and the reviewers for their valuable comments. I tried to address all the raised issues, as explained in my point to point response below. Additionally, the results were rounded to one decimal point since there is no meaning in higher precision (accuracies fluctuate more than that) and I speak of the author (myself) as “the author” instead of “we”. Such insignificant changes are not marked in the manuscript with track changes for clarity of presentation.
Resubmission requirements
# Funding Statement
Please remove all financial and grant disclosure information from the source file manuscript. This information should only be provided in the Funding Statement here: <https://peerj.com/manuscripts/11084/declarations/#question_18>.
Response: The funding information is included in the Funding Statement. I removed it from the manuscript.
# References
In the reference section, please provide the full author name lists for any references with 'et al.' including this reference: “Webb, E. C.. et al.”.
Response: This reference is a nomenclature, started by the late 1950's, published in 1992 and followed by many supplements. Often it is cited without author names due to many people involved. I did the same in the revised version, but I can correct it if it doesn't follow exactly the format of the journal.
# Keywords
Please remove the Keywords from your manuscript and make sure they are included in the metadata here instead <https://peerj.com/manuscripts/12536/keywords>.
Response: The keywords are included in the metadata. I removed them from the manuscript.
# Tables
In addition to any tables embedded directly in your text manuscript, please also upload the tables in separate Word documents here <https://peerj.com/manuscripts/12536/files> as these will be used in production. [note: Tables should not be a .jpg or .pdf image of a table pasted into the Word document.] The file should be named using the table number: Table1.doc, Table2.doc.
Response: Currently the Tables are produced in latex, but if the paper is accepted I can create them in any desired format, such as a Word document.
# Figures
1) Please use numbers to name your files, example: Fig1.eps, Fig2.png.
Response: I have renamed all figures.
2) # Figures
1) Please combine any figures with multiple parts into single, labeled, figure files. Ex: Figs 5A and 5B should be one figure grouping the parts either next to each other or one on top of the other and only labeled “A” and “B” on the respective figure parts. Each figure with multiple parts should label each part alphabetically (e.g. A, B, C) and all parts of each single figure should be submitted together in one file.
Response: I merged the two parts of Fig5 into a single file. In respect to the labeling, I think that it is not nice to label the colorbars (in Fig. 3 and Fig. 5) as different parts. Colorbars are attached to the plots and not separate findings.
2) Figures 2, 3, 4, and 5 have multiple parts. Each part needs to be labeled alphabetically to use (A, B, C, D, etc) instead of directions (left, right, upper, lower, etc).
Response: These figures do not have multiple parts. Each figure is produced as a single file by my function in Matlab and includes multiple plots. Using additional labels (A, B, C, etc) for each plot does not help the understanding since
1) there is no need to describe the location of each plot (e.g. using left, right, upper, lower, etc) and
2) each plot has already its own title that characterizes the figure. The plots are organized like in a Cartesian grid (with horizontal and vertical axes corresponding to different notions. A serial labeling (from A to B to C etc) would just be confusing.
Also as a note, I changed slightly the caption of Figure 5 to make it more clear. The expression “from top to bottom” did not refer to the organization of the plots but to the information displayed within each plot.
3) In addition to providing figures embedded in the manuscript, please upload your higher resolution (at least 900 by 900 pixels) figures in either EPS, PNG, JPG (photographs only) or PDF (vector PDF's only) as primary files here <https://peerj.com/manuscripts/12536/files> as these will be needed at production. Note: You have chosen to submit the reviewing manuscript as a 'single PDF' and so these higher resolution figures won't be merged with that document, just used at production.
Response: I upsampled the figures to higher resolution and uploaded them in PNG format.
# Manuscript Source File
1) Please provide the clean unmarked source file (e.g. .DOCX, .DOC, .ODT) with no tracked changes shown, all tracked changes accepted and tracked changes turned off.
2) Please do continue to include low res files embedded in the manuscript and upload the manuscript file here: https://peerj.com/manuscripts/12536/files/.
Response: I uploaded the revised manuscript without track changes.
3) If you uploaded a PDF because of formatting problems, please provide the .docx source file as a Supplemental File and we will mark it as the correct file type as necessary if the manuscript is accepted.
Response: I uploaded the latex documents (main.tex, acknowledgements.tex, references.bib, wlpeerj.cls, llncs.cls, lineno.sty) used to compile the manuscript and produce the pdf.
# Raw Data
Thank you for providing your code for review at this Google Drive link: https://goo.gl/0aIU0f. This is not a repository in the way we need for publication purposes as the data files could be deleted or altered after publication. If your paper is accepted for publication, we will need the data accessible either as supplemental files (not to exceed 30mb) or in a public repository such as Figshare or an institutional repository.
Response: I upload the code in GitHub: https://github.com/ezachar/PeerJ
**********************************************************************
Editor's Decision Major Revisions
Whist your manuscript describes a novel method, the major concern of both reviewers is that the two convolutional models' performance were not rigorously evaluated. Reviewer 1 provides detailed explanation and questions regarding the data set, whilst Reviewer 2 also asks whether a hold out set was used (ie a set that is used only for evaluation, after cross-validated training has been performed). On behalf of both reviewers, I therefore request you revise and repeat the experiment as requested (non-redundant PDB dataset, independent & non-homologous test set).
Response: Thank you for the suggestions. I have performed additional experiments as recommended by the reviewers. I respond to the comments analytically below.
In addition to this additional experiment, please address their comments, as well as my own below:
0. In your introduction you begin by describing metagenomics as 'the field which combines the study of nucleotide sequences with their structure,'. This definition is incorrect, and in fact, what you describe is typically the result of 'structural genomics' efforts where protein structures are systematically expressed and their structures determined without further biochemical characterisation.
It is worth noting that 'Structural Metagenomics' has been proposed a term: see Adam Godzik's talk at Metagenomics 2007 http://www.calit2.net/newsroom/article.php?id=1135
Response: The first sentence in the introduction was meant to show the contrast between the highly productive research in metagenomics and the challenges of structural genomics, but the definition was not indeed not well given. Thus I rephrased the introduction:
“Research in metagenomics led to a huge increase of protein databases and discovery of new protein families (Godzik, 2011). While the number of newly discovered, ...”
Also the keyword “protein structure” was replaced by “structural genomics”.
Added References
Godzik A. Metagenomics and the protein universe. Current opinion in structural biology. 2011 Jun 30;21(3):398-403.
1. Statement of question. This paper is of interest to both biologists and computer scientists. It is essential that you provide a clear description of the domain issues in this classification problem that is both accurate and accessible to a computer scientist with only passing familiarity with macromolecular structure and evolution.
Revision: Please properly introduce the problem, and provide an analysis of your predictor's performance in context of the confounding factors in enzyme function prediction: that 1) a structural class may yield several enzyme classes, and 2) (highlighted by reviewer 1) that a structural class is not a predictor of enzymatic function. Indeed, it would be extremely valuable if a method were available that could distinguish functional and non-functional structures, but your experimental design does not include an H0 dataset to assess this.
Response: The main aim of this paper was to compare a new methodological framework based on deep learning with our previous pipeline developed for enzyme classification (Amidi et al., 2016). Thus the models do not predict structural class; they have been trained to predict the EC number. Enzymes that perform multiple reactions and are hence associated with multiple EC categories were not appropriate for this method and thus excluded from the analysis. In our ongoing work we examine also these enzymes and explore multi-label classifiers, but these models are at the moment integrated only to our standard pipeline (Amidi et al., 2016), and still under investigation.
In respect to assessment of non-functional structures, although the deep learning framework is general and could be trained for any class label, training in a cross-validation scheme is computationally expensive, especially since the experiments are performed on a standard personal computer (not a cluster). The exploration of new structures and re-training of all models is prohibitive in respect to the time frame of this review, thus were not included. However, this is a very interesting comment and we started investigating it.
2. In your results, you only take the top-most hit as the prediction. R1 suggests you quote confusion matrices. I strongly suggest you take this one step further and provide ROC curves (rank sensitivity/specificity plots) and area-under-the-curve (AUC) values, since these statistics are normally presented to depict performance. It is also customary to highlight specific cases where your methodology performed (or failed to perform) well.
Response: I provide now both confusion matrices and ROC curves (including AUC values) for each enzymatic class . The introduced results are shown in Table 2 and in the new Figure 2. Details on the calculations are provided in the text.
Other work: Whilst we appreciate there are very few structure based EC classifiers available, it is important to discuss how this method is distinct from other structure based approaches (beyond the use of deep-learning), and to give an indication of relative performance. It would also be relevant to examine how the features your approach employs compare to those applied in other protein structure classification tasks.
Response: The proposed framework was compared on the same dataset with our previous pipeline (Amidi et al., 2016) based on standard machine learning techniques (SVM, kNN). As mentioned in the discussion it achieved 90.1% accuracy, which is a considerable improvement over the accuracy (73.5%) achieved in (Amidi et al., 2016) when only structural information was incorporated.
In the second paragraph of the discussion section I also review methods of others, but these results are not directly comparable because they have been produced using different data and different features, such a sequence-based.
Further work: you state that you are currently exploring other encoding schemes based on volumetric and topologically distinct representations of structure, but that these experiments have not yielded models that are able to generalise. If in the revised manuscript you still wish to discuss further work, then you should also provide some analysis to support these observations.
Response: Thank you for your note. This sentence referred to the general “curse of dimensionality” problem we will have to deal with (i.e. when dimensionality increases, learning becomes more difficult) and was not meant as an observation from new the coding schemes. However, since it is confusing and maybe sounded like an unsupported claim, I rephrased this paragraph as follows:
“The author and colleagues recently started to investigate in parallel the predictive power of the original 3D structure, represented as a volumetric image, without the extraction of any statistical features. Since the more detailed representation increased the dimensionality considerably, new ways are being explored to optimally incorporate the relationship between the structural units (amino-acids) in order not to impede the learning process.”
Finally, please review PeerJ's rules regarding figures and data. In particular, ensure that all figures are properly labelled, and common terminology is used. e.g. in figure 4 'mean centred histogram' is not a typical name for a 2D heat map); and the horizontal and vertical bins should be annotated. R1 in particular notes that figure 4's colourscheme also makes it difficult to interpret.
Response: I had used “mean-centered histogram” because it corresponds to the mathematical formula used to extract the values of the matrix we display, whereas the “2D heat map” just characterizes the graphical representation of a matrix of values. In the revision, I changed the graphs and the captions to make it easier for interpretation. Details are provided in the response to the reviewers' comments.
**********************************************************************
Comments from the reviewers
Reviewer: Andrew Doig
Basic reporting
There are a few small errors with English.
Response: I corrected the text in some places. In case the paper will be accepted I will ask for a native English speaker to proof-read the final version.
Experimental design
• Zacharaki has developed a new method to predict the EC class of a protein structure, known to be an enzyme. The results are superficially excellent (~90% accuracy), but I think this is due to using a flawed dataset, to a large extent. The author uses all the enzyme PDB files. As many of these files are from the same enzyme (e.g. there are 1734 lysozymes), the method will usually work simply by recognising itself. For example, if the unknown protein is a lysozyme and there is a lysozyme in the training set, then all an algorithm needs to do is to spot a copy of itself in the training set (k-mean algorithms will do this). If a dataset like this is used, then there is no need to develop any kind of new algorithm, since PSI-BLAST will already work perfectly well. It may well, however, work poorly on any structure that does not have a homologue in the training set – hence it is overfit.
I therefore think that the work should be redone using a data set that has no pairs of similar proteins. A sequence cut-off of no more than 20% sequence identity could be used. The author says herself: “Assessment of the relationship between function and structure (Todd et al., 2001) revealed 95% conservation of the fourth EC digit for proteins with up to 30% sequence identity.” I think that is why the method works – if proteins are present with high sequence identity, then they have the same EC number.
Response: Thank you for raising the protein redundancy issue in the presented work. A new section (Effect of sequence redundancy and sample size) is included in the Results to elaborate on this issue. I quote it here:
“Analysis of protein datasets is often performed after removal of redundancy, such that the remaining entries do not overreach a pre-arranged threshold of sequence identity. In this particular work the author chose not to employ data filtering strategies, since the pattern analysis method is based on structure similarity and not sequence similarity. Thus, even if proteins are present with high sequence identity, the distance metrics during classification do not exploit it. Based on the (by now) established opinion that structure is far more conversed than sequence in nature (Illergard2009), the aim was not to jeopardize the dataset by losing reliable structural entries over a sequence based threshold cutoff. Also, only X-ray crystallography data were used; such data represent a ‘snapshot’ of a given protein’s 3D structure. In order not to miss the multiple poses that the same protein may adopt in different crystallography experiments, sequence/threshold metrics were not applied to remove sequence-redundancy in the presented results.
Nevertheless, the performance of the method was also investigated on a non-redundant dataset and the classification accuracy was compared in respect to the original (redundant) dataset randomly subsampled to include equal number of proteins. This experiment allows to assess the effect of redundancy under conditions (number of samples). Since inference in deep networks requires the estimation of a very large number of parameters, a large amount of training data is required and therefore very strict filtering strategies could not be applied. A dataset (the pdbaanr) pre-compiled by PISCES (Wang and Dunbrack,2003), was used that includes only non-redundant sequences across all PDB files (n=23242 proteins, i.e. half in size of the original dataset). Representative chains are selected based on the highest resolution structure available and then the best R-values. Non-X-ray structures are considered after X-ray structures. As a note, the author also explored the Leaf algorithm (Bull et al., 2013) that is especially designed to maximize the number of retained proteins and has shown improvement over PISCES. However, the computational cost was too high (possibly due to the large number of samples) and the analysis was not completed. The classification performance was assessed on Architecture 2 by using 80% of the samples for training and 20% of the samples for testing. For the non-redundant dataset the accuracy was 79.3% for kNN and 75.5% for linear-SVM, whereas for the sub-sampled dataset it was 85.7% for kNN and 83.2% for linear-SVM. The results show that for the selected classifier (kNN), the accuracy drops 4.4% when the number of samples are reduced to the half, and it also drops additionally ~6.4% if the utilized samples are non-redundant. Also the decrease in performance is not inconsiderable, the achieved accuracy indicates that structural similarity is an important criterion for the prediction of enzymatic function.”
Added References
Illergard, K., Ardell, D. H., and Elofsson, A. (2009). Structure is three to ten times more conserved than sequence—a study of structural response in protein cores. Proteins: Structure, Function, and Bioinformatics, 77(3):499–508.
Wang, G. and Dunbrack, R. L. (2003). Pisces: a protein sequence culling server. Bioinformatics, 19(12):1589–1591.
Bull, S. C., Muldoon, M. R., and Doig, A. J. (2013). Maximising the size of non-redundant protein datasets using graph theory. PloS one, 8(2):e55484.
It is assumed that the protein is an enzyme in advance (with a single EC number). The work would be much more powerful if it was coupled with predicting whether it is an enzyme, since then it could be applied to any protein structure.
Response: In respect to the assessment of other protein structures, although the deep learning framework is general and could be trained for any class label, training in a cross-validation scheme is computationally expensive, especially since the experiments are performed on a standard personal computer (not a cluster). The exploration of new structures and re-training of all models is prohibitive in respect to the time frame of this review, thus was not included. However, this is a very interesting comment and we started investigating it.
If these changes are made, and the model has a good performance, it could be publishable, as the method is original.
Validity of the findings
The numbers of proteins in each EC class look very odd. … Why are the numbers in Table 1 so different? Ligases should be the rarest, but here they are the most frequent.
Response: I would like to thank the reviewer for noticing this and pointing it out. The number of samples was introduced in wrong order in the Table. This is corrected now. The results (classification accuracy per class) were in correct order, thus not changed in the revised version.
Comments for the author
• It is informative to report a confusion matrix for each model, i.e. numbers of false positives, false negatives etc., rather than just accuracy.
Response: I followed the suggestion and included the confusion matrices for each fusion scheme and classification technique (Table 2). Moreover ROC curves (including AUC values) are introduced for each enzymatic class to further illustrate the performance of the method (new Figures 2). Details on the calculations are provided in the text.
• “The assessment of structure/function relationship however is hampered by the lack of a unified functional classification scheme of the protein universe.” What about SCOP, CATH or DALI?
Response: There exist many classification schemes. Although structural classifications, such as SCOP and CATH, are probably easier to define on the basis of molecular-similarity criteria, their overlap is more limited (Liu2003) than the overlap of functional classifications (Rison2000). In this study the EC classification system was selected to test the proposed machine learning methodology, as one of the oldest functional classification schemes of proteins, which is based on experimentally predicted classes (to be used as “ground truth” for training) and not on computational methods that might include inaccurate assignments.
To avoid confusion, I removed this sentence from the paper.
References
Liu, J. & Rost, B. Domains, motifs and clusters in the protein universe. Curr. Opin. Chem. Biol. 7, 5–11 (2003). An overview of present methods for protein sequence clustering.
Rison, S. C., Hodgman, T. C. & Thornton, J. M. Comparison of functional annotation schemes for genomes. Funct. Integr. Genomics 1, 56–69 (2000).
An in-depth analysis and comparison of present functional classification schemes.
• Figure 1 is hard to understand. More explanation in the legend would help. What is Bnorm, for example?
Response: I included more explanation in the legend:
“Figure 1. The deep CNN ensemble for protein classification. In this framework (Architecture 1) each multi-channel feature set is introduced to a CNN and results are combined by kNN or SVM classification. The network includes layers performing convolution (Conv), batch normalization (Bnorm), rectified linear unit (ReLU) activation, dropout (optionally) and max-pooling (Pool). Details are provided in section 2.2.”
The axes in Figure 2 should be labelled. Figure 2 is simply low resolution Ramachandran plots for each amino acid. I don’t think it is helpful to put every amino acid on the same scale of 0,4. Rare amino acids will then have few visible features, so very rare (e.g. Asx) are solid blue.
Response: These are truly Ramachandran plots (I added this in the caption), averaged over all samples. I avoided the use of labels in all the axes because it would occupy a lot of space between each subplot and reduce the clarity of the images, but the caption of the figure was extended as following:
“ Torsion angles density maps (Ramachandran plots) averaged over all samples for each of the 20 standard and 3 non-standard (ASX, GLX, UNK) amino acids. The horizontal and vertical axes at each plot correspond to φ and ψ angles and vary from −180◦ (top left) to 180◦ (right bottom). The color scale (blue to yellow) is in the range [0, 1]. For an amino acid a, yellow means that the number of occurences of the specific value (φ, ψ) in all observations of a (within and across proteins) is at least equal to the number of proteins. On the opposite, blue indicates a small number of occurrences, and is observed for rare amino acids or unfavorable conformations.”
The following text was also inserted to clarify the choice of same scale:
“The same color scale was used in all plots to make feature maps comparable, as “seen” by the deep network. Since the histograms are (on purpose) not normalized for each sample, rare amino acids will have few visible features and due to the 'max-pooling operator' will not be selected as significant features.”
For clarity of presentation, the next figure (illustrating the Ramachandran plots across samples of the same class) was reproduced using the same color scale ([0 1]) without previously subtracting the average Ramachandran plot (what was called mean-centering).
Clarify in Table 1 that the correlation columns are the k-mean results.
Response: The correlation columns are the k-NN classification results (correlation was the distance metric). The title of the column was changed to avoid confusion.
Do the colours in Figure 4 largely reflect the abundance of each amino acid in each class? For example, does EC2 have a lot of Ala, thus making the plot nearly all yellow? It would probably be clearer to rescale for each plot, as all yellow or all blue is unhelpful.
Response: First of all to facilitate interpretation I now plot the feature maps before subtracting the average, thus the scale has changed. Blue corresponds to no occurrence, whereas yellow corresponds to high number of occurrences. I quote also from the text some additions that might help the interpretation:
It can be noticed that for all classes most pairwise distances are concentrated in the last bin, corresponding to high distances between amino acids. Also, as expected there are differences in quantity of each amino acid, e.g. by focusing on the last bin it can be seen that ALA and GLY have higher values than TYR in most classes. Moreover, the feature maps indicate clear differences between samples of different classes.
Say what the 23 rows in Figure 4 are.
Response: The caption of the figure was expanded as shown next:
“Histograms of paiwise amino acid distances averaged across samples within each class. The same three amino acids (ALA, GLY, TYR) selected in Fig. 4 are also shown here. The horizontal axis at each plot represents the histogram bins (distance values in the range [5, 40]). The vertical axis corresponds to the 23 amino acids sorted alphabetically (ALA, ARG, ASN, ASP, ASX, CYS, GLN, MET, GLU, GLX, GLY, HIS, ILE, LEU, LYS, PHE, PRO, SER, THR, TRP, TYR, UNK, VAL) and illustrated from top to bottom. Thus each row shows the histogram of distances for a specific pair of the amino acids (the one in the title and the one corresponding to the specific row). The color scale is the same for all plots and shown at the bottom of the figure. Blue corresponds to no occurrence, whereas yellow corresponds to high number of occurrences.”
**********************************************************************
Reviewer 2
Basic reporting
No Comments
Experimental design
There are some issues with the experimental design:
1) the experimental data contains redundancy/ That is, some proteins are highly similar at sequence level or structure level. The authors shall remove highly redundant proteins from both the training and validation set. For example, they can use 30% or 40% sequence identity as cutoff to exclude redundancy.
Response: Thank you for raising the protein redundancy issue in the presented work. A new section (Effect of sequence redundancy and sample size) is included in the Results to elaborate on this issue. I quote it here:
“Analysis of protein datasets is often performed after removal of redundancy, such that the remaining entries do not overreach a pre-arranged threshold of sequence identity. In this particular work the author chose not to employ data filtering strategies, since the pattern analysis method is based on structure similarity and not sequence similarity. Thus, even if proteins are present with high sequence identity, the distance metrics during classification do not exploit it. Based on the (by now) established opinion that structure is far more conversed than sequence in nature (Illergard2009), the aim was not to jeopardize the dataset by losing reliable structural entries over a sequence based threshold cutoff. Also, only X-ray crystallography data were used; such data represent a ‘snapshot’ of a given protein’s 3D structure. In order not to miss the multiple poses that the same protein may adopt in different crystallography experiments, sequence/threshold metrics were not applied to remove sequence-redundancy in the presented results.
Nevertheless, the performance of the method was also investigated on a non-redundant dataset and the classification accuracy was compared in respect to the original (redundant) dataset randomly subsampled to include equal number of proteins. This experiment allows to assess the effect of redundancy under conditions (number of samples). Since inference in deep networks requires the estimation of a very large number of parameters, a large amount of training data is required and therefore very strict filtering strategies could not be applied. A dataset (the pdbaanr) pre-compiled by PISCES (Wang and Dunbrack,2003), was used that includes only non-redundant sequences across all PDB files (n=23242 proteins, i.e. half in size of the original dataset). Representative chains are selected based on the highest resolution structure available and then the best R-values. Non-X-ray structures are considered after X-ray structures. As a note, the author also explored the Leaf algorithm (Bull et al., 2013) that is especially designed to maximize the number of retained proteins and has shown improvement over PISCES. However, the computational cost was too high (possibly due to the large number of samples) and the analysis was not completed. The classification performance was assessed on Architecture 2 by using 80% of the samples for training and 20% of the samples for testing. For the non-redundant dataset the accuracy was 79.3% for kNN and 75.5% for linear-SVM, whereas for the sub-sampled dataset it was 85.7% for kNN and 83.2% for linear-SVM. The results show that for the selected classifier (kNN), the accuracy drops 4.4% when the number of samples are reduced to the half, and it also drops additionally ~6.4% if the utilized samples are non-redundant. Also the decrease in performance is not inconsiderable, the achieved accuracy indicates that structural similarity is an important criterion for the prediction of enzymatic function.”
Added References
Illergard, K., Ardell, D. H., and Elofsson, A. (2009). Structure is three to ten times more conserved than sequence—a study of structural response in protein cores. Proteins: Structure, Function, and Bioinformatics, 77(3):499–508.
Wang, G. and Dunbrack, R. L. (2003). Pisces: a protein sequence culling server. Bioinformatics, 19(12):1589–1591.
Bull, S. C., Muldoon, M. R., and Doig, A. J. (2013). Maximising the size of non-redundant protein datasets using graph theory. PloS one, 8(2):e55484.
2) is there any redundancy between the training and validation set?
Response: The non-redundant dataset produced by PISCES was split into 80% of the samples for training and 20% of the samples for testing, thus there was no additional redundancy introduced.
Validity of the findings
There is no detailed comparison with the other methods using the same set of test proteins. It is hard to judge if the proposed method indeed improves the state of the art or not.
Response: The proposed framework was compared on the same dataset with our previous pipeline (Amidi et al., 2016) based on standard machine learning techniques (SVM, kNN). As mentioned in the discussion it achieved 90.1% accuracy, which is a considerable improvement over the accuracy (73.5%) achieved in (Amidi et al., 2016) when only structural information was incorporated.
In the second paragraph of the discussion section I also review methods of others, but these results are not directly comparable because they have been produced using different data and different features, such a sequence-based.
" | Here is a paper. Please give your review comments after reading it. |
745 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Background. The availability of large databases containing high resolution three-dimensional (3D) models of proteins in conjunction with functional annotation allows the exploitation of advanced supervised machine learning techniques for automatic protein function prediction.</ns0:p><ns0:p>Methods. In this work, novel shape features are extracted representing protein structure in the form of local (per amino acid) distribution of angles and amino acid distances, respectively. Each of the multichannel feature maps is introduced into a deep convolutional neural network (CNN) for function prediction and the outputs are fused through Support Vector Machines (SVM) or a correlation-based knearest neighbor classifier. Two different architectures are investigated employing either one CNN per multi-channel feature set, or one CNN per image channel.</ns0:p></ns0:div>
<ns0:div><ns0:head>Results.</ns0:head><ns0:p>Cross validation experiments on single-functional enzymes (n=44,661) from the PDB database achieved 90.1% correct classification, demonstrating an improvement over previous results on the same dataset when sequence similarity was not considered.</ns0:p><ns0:p>Discussion. The automatic prediction of protein function can provide quick annotations on extensive datasets opening the path for relevant applications, such as pharmacological target identification. The proposed method shows promise for structure-based protein function prediction but sufficient data may not yet be available to properly assess the method's performance on non-homologous proteins, thus reduce the confounding factor of evolutionary relationships.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head n='1'>INTRODUCTION</ns0:head><ns0:p>Research in metagenomics led to a huge increase of protein databases and discovery of new protein families <ns0:ref type='bibr' target='#b12'>(Godzik, 2011)</ns0:ref>. While the number of newly discovered, but possibly redundant, protein sequences rapidly increases, experimentally verified functional annotation of whole genomes remains limited. Protein structure, i.e. the 3D configuration of the chain of amino acids, is a very good predictor of protein function, and in fact a more reliable predictor than protein sequence because it is far more conversed in nature <ns0:ref type='bibr' target='#b15'>(Illergård et al., 2009)</ns0:ref>.</ns0:p><ns0:p>By now, the number of proteins with functional annotation and experimentally predicted structure of their native state (e.g. by NMR spectroscopy or X-ray crystallography) is adequately large to allow learning training models that will be able to perform automatic functional annotation of unannotated proteins. Also, as the number of protein sequences rapidly grows, the overwhelming majority of proteins can only be annotated computationally. In this work enzymatic structures from the Protein Data Bank (PDB) are considered and the enzyme commission (EC) number is used as a fairly complete framework for annotation. The EC number is a numerical classification scheme based on the chemical reactions the enzymes catalyze, proven by experimental evidence <ns0:ref type='bibr'>(web, 1992)</ns0:ref>.</ns0:p><ns0:p>There have been plenty machine learning approaches in the literature for automatic enzyme annotation.</ns0:p><ns0:p>A systematic review on the utility and inference of various computational methods for functional characterization is presented in <ns0:ref type='bibr' target='#b28'>(Sharma and Garg, 2014)</ns0:ref>, while a comparison of machine learning approaches can be found in <ns0:ref type='bibr' target='#b38'>(Yadav and Tiwari, 2015)</ns0:ref>. Most methods use features derived from the amino acid sequence and apply Support Vector Machines (SVM) <ns0:ref type='bibr' target='#b7'>(Cai et al., 2003)</ns0:ref> <ns0:ref type='bibr' target='#b13'>(Han et al., 2004)</ns0:ref> <ns0:ref type='bibr' target='#b11'>(Dobson and Doig, 2005)</ns0:ref> <ns0:ref type='bibr' target='#b9'>(Chen et al., 2006)</ns0:ref> <ns0:ref type='bibr' target='#b39'>(Zhou et al., 2007)</ns0:ref> <ns0:ref type='bibr' target='#b23'>(Lu et al., 2007)</ns0:ref> <ns0:ref type='bibr' target='#b19'>(Lee et al., 2009)</ns0:ref> <ns0:ref type='bibr' target='#b27'>(Qiu et al., 2010)</ns0:ref> <ns0:ref type='bibr' target='#b36'>(Wang et al., 2010)</ns0:ref> <ns0:ref type='bibr' target='#b37'>(Wang et al., 2011)</ns0:ref> <ns0:ref type='bibr' target='#b1'>(Amidi et al., 2016)</ns0:ref>, k-Nearest Neighbor (kNN) classifier <ns0:ref type='bibr'>(Huang et al.,</ns0:ref> PeerJ Comput. Sci. reviewing PDF | (CS-2016:08:12536:2:0:NEW 15 May 2017)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science 2007) <ns0:ref type='bibr' target='#b29'>(Shen and Chou, 2007a)</ns0:ref> <ns0:ref type='bibr' target='#b25'>(Nasibov and Kandemir-Cavas, 2009a)</ns0:ref>, classification trees/forests <ns0:ref type='bibr' target='#b19'>(Lee et al., 2009)</ns0:ref> <ns0:ref type='bibr' target='#b17'>(Kumar and Choudhary, 2012a)</ns0:ref> <ns0:ref type='bibr' target='#b24'>(Nagao et al., 2014)</ns0:ref> <ns0:ref type='bibr' target='#b38'>(Yadav and Tiwari, 2015)</ns0:ref>, and neural networks <ns0:ref type='bibr' target='#b34'>(Volpato et al., 2013)</ns0:ref>. 
In <ns0:ref type='bibr' target='#b4'>(Borgwardt et al., 2005)</ns0:ref> sequential, structural and chemical information was combined into one graph model of proteins which was further classified by SVM. There has been little work in the literature on automatic enzyme annotation based only on structural information. A Bayesian approach <ns0:ref type='bibr' target='#b5'>(Borro et al., 2006)</ns0:ref> for enzyme classification using structure derived properties achieved 45% accuracy. <ns0:ref type='bibr' target='#b1'>Amidi et al. (2016)</ns0:ref> obtained 73.5% classification accuracy on 39,251 proteins from the PDB database when they used only structural information.</ns0:p><ns0:p>In the past few years, deep learning techniques, and particularly convolutional neural networks, have rapidly become the tool of choice for tackling many challenging computer vision tasks, such as image classification <ns0:ref type='bibr' target='#b16'>(Krizhevsky et al., 2012)</ns0:ref>. The main advantage of deep learning techniques is the automatic exploitation of features and tuning of performance in a seamless fashion, that simplifies the conventional image analysis pipelines. CNNs have recently been used for protein secondary structure prediction <ns0:ref type='bibr' target='#b31'>(Spencer et al., 2015)</ns0:ref> <ns0:ref type='bibr' target='#b20'>(Li and Shibuya, 2015)</ns0:ref>. In <ns0:ref type='bibr' target='#b31'>(Spencer et al., 2015)</ns0:ref> prediction was based on the position-specific scoring matrix profile (generated by PSI-BLAST), whereas in <ns0:ref type='bibr' target='#b20'>(Li and Shibuya, 2015)</ns0:ref> 1D convolution was applied on features related to the amino acid sequence. Also a deep CNN architecture was proposed in <ns0:ref type='bibr' target='#b21'>(Lin et al., 2016)</ns0:ref> to predict protein properties. This architecture used a multilayer shift-and-stitch technique to generate fully dense per-position predictions on protein sequences.</ns0:p><ns0:p>To the best of authors's knowledge, deep CNNs have not been used for prediction of protein function so far.</ns0:p><ns0:p>In this work the author exploits experimentally acquired structural information of enzymes and apply deep learning techniques in order to produce models that predict enzymatic function based on structure.</ns0:p><ns0:p>Novel geometrical descriptors are introduced and the efficacy of the approach is illustrated by classifying a dataset of 44,661 enzymes from the PDB database into the l = 6 primary categories: oxidoreductases (EC1), transferases (EC2), hydrolases (EC3), lyases (EC4), isomerases (EC5), ligases (EC6). The novelty of the proposed method lies first in the representation of the 3D structure as a 'bag of atoms (amino acids)' which are characterized by geometric properties, and secondly in the exploitation of the extracted feature maps by deep CNNs. Although assessed for enzymatic function prediction, the method is not based on enzyme-specific properties and therefore can be applied (after re-training) for automatic large-scale annotation of other 3D molecular structures, thus providing a useful tool for data-driven analysis. In the following sections more details on the implemented framework are first provided, including the representation of protein structure, the CNN architecture and the fusion process of the network outputs.</ns0:p><ns0:p>Then the evaluation framework and the obtained results are presented, followed by some discussion and conclusions.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2'>METHODS</ns0:head><ns0:p>Data-driven CNN models tend to be domain agnostic and attempt to learn additional feature bases that cannot be represented through any handcrafted features. It is hypothesized that by combining 'amino acid specific' descriptors with the recent advances in deep learning we can boost model performance. The main advantage of the proposed method is that it exploits complementarity in both data representation phase and learning phase. Regarding the former, the method uses an enriched geometric descriptor that combines local shape features with features characterizing the interaction of amino acids on this 3D spatial model. Shape representation is encoded by the local (per amino acid type) distribution of torsion angles <ns0:ref type='bibr' target='#b3'>(Bermejo et al., 2012)</ns0:ref>. Amino acid interactions are encoded by the distribution of pairwise amino acid distances. While the torsion angles and distance maps are usually calculated and plotted for the whole protein <ns0:ref type='bibr' target='#b3'>(Bermejo et al., 2012)</ns0:ref>, in the current approach they are extracted for each amino acid type separately, therefore characterizing local interactions. Thus, the protein structure is represented as a set of multi-channel images which can be introduced into any machine learning scheme designed for fusing multiple 2D feature maps. Moreover, it should be noted that the utilized geometric descriptors are invariant to global translation and rotation of the protein, therefore previous protein alignment is not required.</ns0:p><ns0:p>Our method constructs an ensemble of deep CNN models that are complementary to each other. considered jointly or independently, as will be described next. Both architectures use the same CNN structure (within the highlighted boxes) which is illustrated in Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.1'>Representation of protein structure</ns0:head><ns0:p>The building blocks of proteins are amino acids which are linked together by peptide bonds into a chain.</ns0:p><ns0:p>The polypeptide folds into a specific conformation depending on the interactions between its amino acid side chains which have different chemistries. Many conformations of this chain are possible due to the rotation of the chain about each carbon (Cα) atom. For structure representation, two sets of feature maps were used. They express the shape of the protein backbone and the distances between the protein building blocks (amino acids). The use of global rotation and translation invariant features is preferred over features based on the Cartesian coordinates of atoms, in order to avoid prior protein alignment, which is a bottleneck in the case of large datasets with proteins of several classes (unknown reference template space). The feature maps were extracted for every amino acid being present in the dataset including the 20 standard amino acids, as well as asparagine/aspartic (ASX), glutamine/glutamic (GLX), and all amino acids with unidentified/unknown residues (UNK), resulting in m = 23 amino acids in total.</ns0:p><ns0:p>Torsion angles density. The shape of the protein backbone was expressed by the two torsion angles of the polypeptide chain which describe the rotations of the polypeptide backbone around the bonds between N-Cα (angle φ ) and Cα-C (angle ψ). All amino acids in the protein were grouped according to their type and the density of the torsion angles φ and ψ(∈ [−180, 180]) was estimated for each amino acid type based on the 2D sample histogram of the angles (also known as Ramachandran diagram) using equal sized bins (number of bins h A = 19). The histograms were not normalized by the number of instances, therefore their values indicate the frequency of each amino acid within the polypeptide chain. In the obtained feature maps (X A ), with dimensionality [h A × h A × m], he number of amino acids (m) corresponds to the number of channels. Smoothness in the density function was achieved by moving average filtering, i.e. by convoluting the density map with a 2D gaussian kernel (σ = 0.5).</ns0:p><ns0:p>Density of amino acid distances. For each amino acid a i , i = 1, .., m, the distances to amino acid a j , j = 1, .., m, in the protein are calculated based on the coordinates of the Cα atoms for the residues and stored as an array d i j . Since the size of the proteins varies significantly, the length of the array d i j is different across proteins, thus not directly comparable. In order to standardize measurements, the sample histogram of d i j is extracted (using equally sized bins) and smoothed by convolution with a 1D gaussian kernel (σ = 0.5). The processing of all pairs of amino acids resulted to feature maps (X D ) of</ns0:p></ns0:div>
<ns0:div><ns0:head>3/12</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS- <ns0:ref type='table' target='#tab_3'>2016:08:12536:2:0:NEW 15 May 2017)</ns0:ref> Manuscript to be reviewed</ns0:p><ns0:formula xml:id='formula_0'>Computer Science dimensionality [m × m × h D ],</ns0:formula><ns0:p>where h D = 8 is the number of histogram bins (considered as number of channels in this case).</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.2'>Classification by deep CNNs</ns0:head><ns0:p>Feature extraction stage of each CNN. The CNN architecture employs three computational blocks of consecutive convolutional, batch normalization, rectified linear unit (ReLU) activation, dropout (optionally) and max-pooling layers, and a fully-connected layer. The convolutional layer computes the output of neurons that are connected to local regions in the input in order to extract local features. It applies a 2D convolution between each of the input channels and a set of filters. The 2D activation maps are calculated by summing the results over all channels and then stacking the output of each filter to produce the output 3D volume. Batch normalization normalizes each channel of the feature map by averaging over spatial locations and batch instances. The ReLU layer applies an element-wise activation function, such as the max(0, x) thresholding at zero. The dropout layer is used to randomly drop units from the CNN during training and reduce overfitting. Dropout was used only for the X A feature set. The pooling layer performs a downsampling operation along the spatial dimensions in order to capture the most relevant global features with fixed length. The max operator was applied within a [2 × 2] neighborhood. The last layer is fully-connected and represents the class scores.</ns0:p><ns0:p>Training and testing stage of each CNN. The output of each CNN is a vector of probabilities, one for each of the l possible enzymatic classes. The CNN performance can be measured by a loss function which assigns a penalty to classification errors. The CNN parameters are learned to minimize this loss averaged over the annotated (training) samples. The softmaxloss function (i.e. the softmax operator followed by the logistic loss) is applied to predict the probability distribution over categories. Optimization was based on an implementation of stochastic gradient descent. At the testing stage, the network outputs after softmax normalization are used as class probabilities.</ns0:p></ns0:div>
<ns0:div><ns0:head n='2.3'>Fusion of CNN outputs using two different architectures</ns0:head><ns0:p>Two fusion strategies were implemented. In the first strategy (Architecture 1) the two feature sets, X A and X D , are each introduced into a CNN, which performs convolution at all channels, and then the l class probabilities produced for each feature set are combined into a feature vector of length l * 2. In the second strategy (Architecture 2) , each one of the (m = 23 or h D = 8) channels of each feature set is introduced independently into a CNN and the obtained class probabilities are concatenated into a vector of l * m features for X A and l * h D features for X D , respectively. These two feature vectors are further combined into a single vector of length l * (m + h D ) (=186). For both architectures, kNN classification was applied for final class prediction using as distance measure between two feature vectors, x 1 and x 2 , the metric 1 − cor(x 1 , x 2 ), where cor is the sample Spearman's rank correlation. The value k = 12 was selected for all experiments. For comparison, fusion was also performed with linear SVM classification <ns0:ref type='bibr' target='#b8'>(Chang and Lin, 2011)</ns0:ref>. The code was developed in MATLAB environment and the implementation of CNNs was based on MatConvNet <ns0:ref type='bibr' target='#b33'>(Vedaldi and Lenc, 2015)</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3'>RESULTS</ns0:head><ns0:p>The protein structures (n = 44, 661) were collected from the PDB. Only enzymes that occur in a single class were processed, whereas enzymes that perform multiple reactions and are hence associated with multiple enzymatic functions were excluded. Since protein sequence was not examined during feature extraction, all enzymes were considered without other exclusion criteria, such as small sequence length or homology bias. The dataset was unbalanced in respect to the different classes. The number of samples per class is shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>. The dataset was split into 5 folds. Four folds were used for training and one for testing. The training samples were used to learn the parameters of the network (such as the weights of the convolution filters), as well as the parameters of the subsequent classifiers used during fusion (SVM or kNN model). Once the network was trained, the class probabilities were obtained for the testing samples, which were introduced into the trained SVM or kNN classifier for final prediction. The SVM model was linear, thus didn't require any hyper-parameter optimization. Due to lack of hyper-parameters, no extra validation set was necessary. On the side, the author examined also non-linear SVM with gaussian radial basis function kernel, but didn't observe any significant improvement, thus the corresponding results are not reported.</ns0:p></ns0:div>
<ns0:div><ns0:p>[Remaining rows of Table 2, kNN classifier (Architecture 1 | Architecture 2): EC4: 3.8 9.1 7.2 78.5 1.1 0.4 | 3.7 8.4 6.9 80.7 0.1 0.1; EC5: 6.1 11.5 10.7 2.3 68.5 1.0 | 3.5 9.7 8.6 0.9 76.9 0.3; EC6: 4.9 18.8 13.5 1.0 1.3 60.6 | 4.2 14.1 10.3 0.7 0.3 70.5]</ns0:p><ns0:p>A classification result was deemed a true positive if the match with the highest probability was in first place in a rank-ordered list. The classification accuracy (percentage of correctly classified samples over all samples) was calculated for each fold and then averaged across the 5 folds.</ns0:p></ns0:div>
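A minimal sketch of this evaluation, assuming arrays of class probabilities and integer labels per fold (the variable folds and the function evaluate_fold are placeholders, not the authors' code): the predicted class is the one with the highest probability, fold accuracies are averaged, and a row-normalized confusion matrix like Table 2 can be accumulated in the same pass.

import numpy as np

def evaluate_fold(probs, true_labels, n_classes=6):
    # probs: [N, l] class probabilities; true_labels: [N] integer labels.
    pred = np.argmax(probs, axis=1)                  # highest-probability class
    acc = 100.0 * np.mean(pred == true_labels)
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(true_labels, pred):
        cm[t, p] += 1
    cm = 100.0 * cm / cm.sum(axis=1, keepdims=True)  # row-normalized (%); assumes
    return acc, cm                                   # every class occurs in the fold

# fold_results = [evaluate_fold(p, y) for p, y in folds]   # 'folds' is a placeholder
# mean_acc = np.mean([a for a, _ in fold_results])         # accuracy averaged over the 5 folds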
<ns0:div><ns0:head n='3.1'>Classification performance</ns0:head><ns0:p>Common options for the network were used, except of the size of the filters which was adjusted to the dimensionality of the input data. Specifically, the convolutional layer used neurons with receptive field of size 5 for the first two layers and 2 for the third layer. The stride (specifying the sliding of the filter) was always 1. The number of filters was 20, 50 and 500 for the three layers, respectively, and the learning rate 0.001. The batch size was selected according to information amount (dimensionality) of input. It was assumed (and verified experimentally) that for more complicated the data, a larger number of samples is required for learning. One thousand samples per batch were used for Architecture 1, which takes as input all channels, and 100 samples per batch for Architecture 2, in which an independent CNN is trained for each channel. The dropout rate was 20%. The number of epochs was adjusted to the rate of convergence for each architecture (300 for Architecture 1 and 150 for Architecture 2).</ns0:p><ns0:p>The average classification accuracy over the 5 folds for each enzymatic class is shown in Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref> for both fusion schemes, whereas the analytic distribution of samples in each class is shown in the form of confusion matrices in Table <ns0:ref type='table' target='#tab_1'>2</ns0:ref>.</ns0:p><ns0:p>In order to further assess the performance of the deep networks, receiver operating characteristic (ROC) curves and area-under-the-curve (AUC) values were calculated for each class for the selected scheme (based on kNN and Architecture 2), as shown in Fig. <ns0:ref type='figure' target='#fig_1'>2</ns0:ref>). The calculations were performed based Manuscript to be reviewed Effect of sequence redundancy and sample size. Analysis of protein datasets is often performed after removal of redundancy, such that the remaining entries do not overreach a pre-arranged threshold of sequence identity. In the previously presented results, sequence/threshold metrics were not applied to remove sequence-redundancy. Although structure similarity is affected by sequence similarity, the aim was not to lose structural entries (necessary for efficient learning) over a sequence based threshold cutoff. Also, only X-ray crystallography data were used; such data represent a 'snapshot' of a given protein's 3D structure. In order not to miss the multiple poses that the same protein may adopt in different crystallography experiments, the whole dataset was explored.</ns0:p><ns0:note type='other'>Computer Science</ns0:note><ns0:p>Subsequently, the performance of the method was also investigated on a less redundant dataset and the classification accuracy was compared in respect to the original (redundant) dataset, but randomly subsampled to include equal number of proteins. This experiment allows to assess the effect of redundancy under conditions (number of samples). Since inference in deep networks requires the estimation of a very large number of parameters, a large amount of training data is required and therefore very strict filtering strategies could not be applied. A dataset, the pdbaanr 1 , pre-compiled by PISCES <ns0:ref type='bibr' target='#b35'>(Wang and Dunbrack, 2003)</ns0:ref>, was used that includes only non-redundant sequences across all PDB files (n = 23242 proteins, i.e. half in size of the original dataset). 
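The redundancy / sample-size comparison described above can be organized as in the following sketch; the helper names and the simple random 80/20 partition are assumptions for illustration, while the dataset sizes are taken from the text.

import numpy as np

rng = np.random.RandomState(0)

def subsample(n_total, n_target):
    # random subsample of the original (redundant) dataset so that it matches
    # the size of the less redundant dataset being compared against
    return rng.choice(n_total, size=n_target, replace=False)

def split_80_20(n):
    # 80% training / 20% testing split used for this experiment
    idx = rng.permutation(n)
    n_train = int(0.8 * n)
    return idx[:n_train], idx[n_train:]

# keep_idx = subsample(44661, 23242)            # match the pdbaanr dataset size
# train_idx, test_idx = split_80_20(len(keep_idx))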
This dataset has one representative for each unique sequence in the PDB; representative chains are selected based on the highest resolution structure available and then the best R-values. Non-X-ray structures are considered after X-ray structures. As a note, the author also explored the Leaf algorithm <ns0:ref type='bibr' target='#b6'>(Bull et al., 2013)</ns0:ref> which is especially designed to maximize the number of retained proteins and has shown improvement over PISCES. However, the computational cost was too high (possibly due to the large number of samples) and the analysis was not completed.</ns0:p><ns0:p>The classification performance was assessed on Architecture 2 by using 80% of the samples for training and 20% of the samples for testing. For the pdbaanr dataset, the accuracy was 79.3% for kNN and 75.5% for linear-SVM, whereas for the sub-sampled dataset it was 85.7% for kNN and 83.2% for linear-SVM. The results show that for the selected classifier (kNN), the accuracy drops 4.4% when the number of samples is reduced to the half, and it also drops additionally 6.4% if the utilized sequences are less similar. The decrease in performance shows that the method is affected by the number of samples as well as by their similarity level.</ns0:p></ns0:div>
<ns0:div><ns0:head n='3.2'>Structural representation and complementarity of features</ns0:head><ns0:p>Next, some examples of the extracted feature maps are illustrated, in order to provide some insight on the representation of protein's 3D structure. The average (over all samples) 2D histogram of torsion angles for each amino acid is shown in Fig. <ns0:ref type='figure' target='#fig_2'>3</ns0:ref>. The horizontal and vertical axes at each plot represent torsion angles (in [−180 • , 180 • ]). It can be observed that the non-standard (ASX, GLX, UNK) amino acids are very rare, thus their density maps have nearly zero values. The same color scale was used in all plots to make feature maps comparable, as 'seen' by the deep network. Since the histograms are (on purpose) not normalized for each sample, rare amino acids will have few visible features and due to the 'max-pooling operator'</ns0:p><ns0:p>will not be selected as significant features. The potential of these feature maps to differentiate between classes is illustrated in Fig. <ns0:ref type='figure' target='#fig_4'>4</ns0:ref> for three randomly selected amino acids (ALA, GLY, TYR). Overall the spatial patterns in each class are distinctive and form a multi-dimensional signature for each sample. As a note, before training of the CNN ensemble data standardization is performed by subtracting the mean density map. The same map is used to standardize the test sample during assessment.</ns0:p><ns0:p>Examples of features maps representing amino acid distances (X D ) are illustrated in figures 1 and 5. to high distances between amino acids. Also, as expected there are differences in quantity of each amino acid, e.g. by focusing on the last bin, it can be seen that ALA and GLY have higher values than TYR in most classes. Moreover, the feature maps indicate clear differences between samples of different classes.</ns0:p></ns0:div>
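Feature maps of this kind can be assembled from per-residue annotations as in the following NumPy sketch: one unnormalized 2D torsion-angle histogram per amino-acid type (the X_A maps of Figs. 3-4) and one pairwise C-alpha distance histogram per amino-acid pair with 8 bins over [5, 40] (the X_D maps of Figs. 1 and 5). The number of angle bins B, the input arrays and the function names are illustrative assumptions; the authors' implementation was in MATLAB.

import numpy as np

M, B, HD = 23, 36, 8                            # amino-acid types, angle bins (assumed), distance bins
angle_edges = np.linspace(-180.0, 180.0, B + 1)
dist_edges = np.linspace(5.0, 40.0, HD + 1)

def torsion_feature(aa_idx, phi, psi):
    # X_A-style map: one (B x B) Ramachandran histogram per amino-acid type.
    xa = np.zeros((B, B, M))
    for a in range(M):
        sel = aa_idx == a
        h, _, _ = np.histogram2d(phi[sel], psi[sel], bins=[angle_edges, angle_edges])
        xa[:, :, a] = h                         # deliberately left unnormalized per sample
    return xa

def distance_feature(aa_idx, ca_xyz):
    # X_D-style map: histogram of pairwise C-alpha distances for each amino-acid pair.
    xd = np.zeros((M, M, HD))
    d = np.sqrt(((ca_xyz[:, None, :] - ca_xyz[None, :, :]) ** 2).sum(-1))
    for i in range(M):
        for j in range(M):
            pair_d = d[np.ix_(aa_idx == i, aa_idx == j)].ravel()
            xd[i, j], _ = np.histogram(pair_d, bins=dist_edges)
    return xd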
<ns0:div><ns0:p>The discrimination ability and complementarity of the extracted features with respect to classification performance are shown in Table <ns0:ref type='table' target='#tab_3'>3</ns0:ref>. It can be observed that the relative position of amino acids and their arrangement in space (features X_D) predict enzymatic function better than the backbone conformation (features X_A). Also, the fusion of network decisions based on correlation distance outperforms predictions from either network alone, but the difference is only marginal with respect to the predictions by X_D alone. In all cases the differences in prediction across the performed experiments (during cross-validation) were very small (usually standard deviation < 0.5%), indicating that the method is robust to variations in training examples.</ns0:p></ns0:div>
<ns0:div><ns0:head n='4'>DISCUSSION</ns0:head><ns0:p>A deep CNN ensemble was presented that performs enzymatic function classification through fusion at the feature level and the decision level. The method has been applied for the prediction of the primary EC number and achieved 90.1% accuracy, which is a considerable improvement over the accuracy obtained in our previous work (73.5% in <ns0:ref type='bibr' target='#b1'>(Amidi et al., 2016)</ns0:ref> and 83% in <ns0:ref type='bibr' target='#b2'>(Amidi et al., 2017</ns0:ref>)) when only structural information was incorporated. These results were achieved without imposing any pre-selection criteria, such as criteria based on sequence identity; thus the effect of evolutionary relationships, as a confounding factor in the prediction of function from 3D structure, has not been sufficiently studied. Since deep learning technology requires a large number of samples to produce generalizable models, a filtered dataset with only non-redundant proteins would be too small for reliable training. This is a limitation of the current approach, which mainly aimed to increase predictive power over previous methods using common features for structural representation and common classifiers such as SVM and nearest neighbor, rather than addressing this confounding factor in the prediction of protein function from structure.</ns0:p><ns0:p>[...] to 98% when predicting the first three EC digits) by using sequence encoding and SVM for hierarchy labels. <ns0:ref type='bibr' target='#b18'>Kumar and Choudhary (2012b)</ns0:ref> reported an overall accuracy of 87.7% in predicting the main class for 4,731 enzymes using random forests. <ns0:ref type='bibr' target='#b34'>Volpato et al. (2013)</ns0:ref> applied neural networks to the full sequence and achieved 96% correct classification on 6,000 non-redundant proteins. Most of the previous methods incorporate sequence-based features. Many were assessed on a subset of enzymes acquired after imposing different pre-selection criteria and levels of sequence similarity. More discussion of machine learning techniques for single-label and multi-label enzyme classification can be found in <ns0:ref type='bibr' target='#b2'>(Amidi et al., 2017)</ns0:ref>.</ns0:p><ns0:p>Assessment of the relationship between function and structure <ns0:ref type='bibr' target='#b32'>(Todd et al., 2001)</ns0:ref> revealed 95% conservation of the fourth EC digit for proteins with up to 30% sequence identity. Similarly, <ns0:ref type='bibr' target='#b10'>Devos and Valencia (2000)</ns0:ref> concluded that enzymatic function is mostly conserved for the first digit of the EC code whereas more detailed functional characteristics are poorly conserved. It is generally believed that as sequences diverge, 3D protein structure becomes a more reliable predictor than sequence, and that structure is far more conserved than sequence in nature <ns0:ref type='bibr' target='#b15'>(Illergård et al., 2009)</ns0:ref>. The focus of this study was to explore the predictive ability of 3D structure and provide a tool that can generalize in cases where sequence information is insufficient. Thus the presented results are not directly comparable to those of previous methods due to the use of different features as well as datasets. If desired, the current approach can also easily incorporate sequence-related features. In such a case, however, the use of non-homologous data would be inevitable for rigorous assessment.</ns0:p><ns0:p>The reported accuracy is the average of 5 folds on the testing set.
A separate validation set was not used within each fold, because the design of the network architecture (size of convolution kernel, number of layers, etc) and final classifier (number of neighbors in kNN) were preselected and not optimized within the learning framework. Additional validation and optimization of the model would be necessary to improve performance and provide better insight into the capabilities of this method.</ns0:p><ns0:p>A possible limitation of the proposed approach is that the extracted features do not capture the topological properties of the 3D structure. Due to the statistical nature of the implemented descriptors, calculated by considering the amino acids as elements in Euclidean space, connectivity information is not strictly retained. The author and colleagues recently started to investigate in parallel the predictive power of the original 3D structure, represented as a volumetric image, without the extraction of any statistical features. Since the more detailed representation increased the dimensionality considerably, new ways are being explored to optimally incorporate the relationship between the structural units (amino-acids) in order not to impede the learning process.</ns0:p></ns0:div>
<ns0:div><ns0:head n='5'>CONCLUSIONS</ns0:head><ns0:p>A method was presented that extracts shape features from the 3D protein geometry that are introduced into a deep CNN ensemble for enzymatic function prediction. The investigation of protein function based only on structure reveals relationships hidden at the sequence level and provides the foundation to build a better understanding of the molecular basis of biological complexity. Overall, the presented approach can provide quick protein function predictions on extensive datasets opening the path for relevant applications, such as pharmacological target identification. Future work includes application of the method for prediction of the hierarchical relation of function subcategories and annotation of enzymes up to the last digit of the enzyme classification system.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. The deep CNN ensemble for protein classification. In this framework (Architecture 1) each multi-channel feature set is introduced to a CNN and results are combined by kNN or SVM classification. The network includes layers performing convolution (Conv), batch normalization (Bnorm), rectified linear unit (ReLU) activation, dropout (optionally) and max-pooling (Pool). Details are provided in section 2.2.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. ROC curves for each enzymatic class based on kNN and Architecture 2</ns0:figDesc><ns0:graphic coords='7,141.73,63.78,413.58,226.25' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Torsion angles density maps (Ramachandran plots) averaged over all samples for each of the 20 standard and 3 non-standard (ASX, GLX, UNK) amino acids. The horizontal and vertical axes at each plot correspond to φ and ψ angles and vary from −180 • (top left) to 180 • (right bottom). The color scale (blue to red) is in the range [0, 1].For an amino acid a, red means that the number of occurrences of the specific value (φ , ψ) in all observations of a (within and across proteins) is at least equal to the number of proteins. On the opposite, blue indicates a small number of occurrences, and is observed for rare amino acids or unfavorable conformations.</ns0:figDesc><ns0:graphic coords='8,162.41,63.78,372.21,267.20' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Fig. 1</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Fig. 1 illustrates an image slice across the 3rd dimension, i.e. one [m × m] channel, and as introduced in the 2D multichannel CNN, i.e. after mean-centering (over all samples). Fig. 5 illustrates image slices (of size [m × h D ]) across the 1st dimension averaged within each class. Fig. 5 has been produced by selecting the same amino acids as in Fig. 4 for easiness of comparison of the different feature representations. It can be noticed that for all classes most pairwise distances are concentrated in the last bin, corresponding</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Ramachandran plots averaged across samples within each class. Rows correspond to amino acids and columns to functional classes. Three amino acids (ALA, GLY, TYR) are randomly selected for illustration of class separability. The horizontal and vertical axes at each plot correspond to φ and ψ angles and vary from −180 • (top left) to 180 • (right bottom). The color scale (blue to red) is in the range [0, 1] as illustrated in Fig. 3.</ns0:figDesc><ns0:graphic coords='9,162.41,63.78,372.21,216.21' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Histograms of paiwise amino acid distances averaged across samples within each class. The same three amino acids (ALA, GLY, TYR) selected in Fig. 4 are also shown here. The horizontal axis at each plot represents the histogram bins (distance values in the range [5, 40]). The vertical axis at each plot corresponds to the 23 amino acids sorted alphabetically from top to bottom (ALA, ARG, ASN, ASP, ASX, CYS, GLN, MET, GLU, GLX, GLY, HIS, ILE, LEU, LYS, PHE, PRO, SER, THR, TRP, TYR, UNK, VAL). Thus each row shows the histogram of distances for a specific pair of the amino acids (the one in the title and the one corresponding to the specific row). The color scale is the same for all plots and is shown horizontally at the bottom of the figure.</ns0:figDesc><ns0:graphic coords='10,141.73,190.96,413.59,304.47' type='bitmap' /></ns0:figure>
<ns0:figure><ns0:head /><ns0:label /><ns0:figDesc /><ns0:graphic coords='4,143.14,63.84,410.73,198.37' type='bitmap' /></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Cross-validation accuracy (in percentage) in predicting main enzymatic function using the deep CNN ensemble</ns0:figDesc><ns0:table><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>Architecture 1</ns0:cell><ns0:cell /><ns0:cell cols='2'>Architecture 2</ns0:cell></ns0:row><ns0:row><ns0:cell>Class</ns0:cell><ns0:cell>Samples</ns0:cell><ns0:cell>linear-SVM</ns0:cell><ns0:cell>kNN</ns0:cell><ns0:cell>linear-SVM</ns0:cell><ns0:cell>kNN</ns0:cell></ns0:row><ns0:row><ns0:cell>EC1</ns0:cell><ns0:cell>8,075</ns0:cell><ns0:cell>86.4</ns0:cell><ns0:cell>88.8</ns0:cell><ns0:cell>91.2</ns0:cell><ns0:cell>90.6</ns0:cell></ns0:row><ns0:row><ns0:cell>EC2</ns0:cell><ns0:cell>12,739</ns0:cell><ns0:cell>84.0</ns0:cell><ns0:cell>87.5</ns0:cell><ns0:cell>88.0</ns0:cell><ns0:cell>91.7</ns0:cell></ns0:row><ns0:row><ns0:cell>EC3</ns0:cell><ns0:cell>17,024</ns0:cell><ns0:cell>88.7</ns0:cell><ns0:cell>91.3</ns0:cell><ns0:cell>89.6</ns0:cell><ns0:cell>94.0</ns0:cell></ns0:row><ns0:row><ns0:cell>EC4</ns0:cell><ns0:cell>3,114</ns0:cell><ns0:cell>79.4</ns0:cell><ns0:cell>78.4</ns0:cell><ns0:cell>84.9</ns0:cell><ns0:cell>80.7</ns0:cell></ns0:row><ns0:row><ns0:cell>EC5</ns0:cell><ns0:cell>1,905</ns0:cell><ns0:cell>69.5</ns0:cell><ns0:cell>68.6</ns0:cell><ns0:cell>79.6</ns0:cell><ns0:cell>77.0</ns0:cell></ns0:row><ns0:row><ns0:cell>EC6</ns0:cell><ns0:cell>1,804</ns0:cell><ns0:cell>61.0</ns0:cell><ns0:cell>60.6</ns0:cell><ns0:cell>73.6</ns0:cell><ns0:cell>70.4</ns0:cell></ns0:row><ns0:row><ns0:cell>Total</ns0:cell><ns0:cell>44,661</ns0:cell><ns0:cell>84.4</ns0:cell><ns0:cell>86.7</ns0:cell><ns0:cell>88.0</ns0:cell><ns0:cell>90.1</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head>Table 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Confusion matrices for each fusion scheme and classification technique</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Classifier</ns0:cell><ns0:cell /><ns0:cell cols='6'>prediction by Architecture 1</ns0:cell><ns0:cell cols='5'>prediction by Architecture 2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell /><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>6</ns0:cell><ns0:cell>1</ns0:cell><ns0:cell>2</ns0:cell><ns0:cell>3</ns0:cell><ns0:cell>4</ns0:cell><ns0:cell>5</ns0:cell><ns0:cell>6</ns0:cell></ns0:row><ns0:row><ns0:cell>linear-</ns0:cell><ns0:cell cols='7'>EC1 86.5 4.9 4.8 1.8 1.1 1.0</ns0:cell><ns0:cell cols='3'>91.2 2.9 1.9</ns0:cell><ns0:cell cols='2'>2.2 1.1 0.7</ns0:cell></ns0:row><ns0:row><ns0:cell>SVM</ns0:cell><ns0:cell>EC2</ns0:cell><ns0:cell cols='6'>3.4 84.0 7.9 1.9 1.2 1.6</ns0:cell><ns0:cell cols='5'>3.6 88.0 3.5 2.2 1.2 1.5</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EC3</ns0:cell><ns0:cell cols='6'>2.4 6.1 88.7 1.0 0.8 1.0</ns0:cell><ns0:cell cols='5'>2.3 4.1 89.6 1.6 1.2 1.2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EC4</ns0:cell><ns0:cell cols='6'>4.4 7.3 5.7 79.4 1.8 1.3</ns0:cell><ns0:cell cols='5'>4.3 4.9 2.7 84.9 1.7 1.4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EC5</ns0:cell><ns0:cell cols='6'>7.0 10.1 9.0 2.9 69.4 1.6</ns0:cell><ns0:cell cols='5'>4.5 5.4 4.7 4.4 79.5 1.7</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EC6</ns0:cell><ns0:cell cols='6'>5.9 15.5 13.0 2.3 2.3 61.0</ns0:cell><ns0:cell cols='5'>5.5 10.3 5.4 3.3 1.9 73.6</ns0:cell></ns0:row><ns0:row><ns0:cell>kNN</ns0:cell><ns0:cell cols='7'>EC1 88.8 5.0 4.5 0.7 0.5 0.5</ns0:cell><ns0:cell cols='5'>90.6 4.4 4.6 0.3 0.1 0.0</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EC2</ns0:cell><ns0:cell cols='6'>2.5 87.5 7.4 1.0 0.6 1.1</ns0:cell><ns0:cell cols='2'>1.7 91.7</ns0:cell><ns0:cell cols='3'>5.8 0.3 0.2 0.4</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EC3</ns0:cell><ns0:cell cols='6'>1.8 5.4 91.3 0.5 0.4 0.6</ns0:cell><ns0:cell cols='5'>1.2 4.4 94.0 0.2 0.1 0.2</ns0:cell></ns0:row><ns0:row><ns0:cell /><ns0:cell>EC4</ns0:cell><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /><ns0:cell /></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_3'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Cross-validation accuracy (average ± standard deviation over 5 folds) for each feature set separately and after fusion of CNN outputs based on Architecture 2</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Feature sets</ns0:cell><ns0:cell>linear-SVM</ns0:cell><ns0:cell>kNN</ns0:cell></ns0:row><ns0:row><ns0:cell>X A (angles)</ns0:cell><ns0:cell>79.6 ± 0.5</ns0:cell><ns0:cell>82.4 ± 0.4</ns0:cell></ns0:row><ns0:row><ns0:cell>X D (distances)</ns0:cell><ns0:cell>88.1 ± 0.4</ns0:cell><ns0:cell>89.8 ± 0.2</ns0:cell></ns0:row><ns0:row><ns0:cell>Ensemble</ns0:cell><ns0:cell>88.0 ± 0.4</ns0:cell><ns0:cell>90.1 ± 0.2</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:note place='foot' n='1'>http://dunbrack.fccc.edu/Guoli/pisces download.php 6/12 PeerJ Comput. Sci. reviewing PDF | (CS-2016:08:12536:2:0:NEW 15 May 2017) Manuscript to be reviewed Computer Science</ns0:note>
<ns0:note place='foot' n='12'>/12 PeerJ Comput. Sci. reviewing PDF | (CS-2016:08:12536:2:0:NEW 15 May 2017) Manuscript to be reviewed Computer Science</ns0:note>
</ns0:body>
" | "Dear Dr Procter
Thank you very much for taking the time to respond and to provide a detailed explanation in reply to my
response letter on the reviewers' comments. I have revised the manuscript to take your considerations,
which were very enlightening and understandable, into account. Below I provide a response to each
comment and a justification of the remaining limitations of the revised version, hoping that these will
not be decisive for the final decision since, as I mentioned in my previous response letter, the paper
focuses more on the computational method than on the biological aspect. I would also like to mention
that our conference paper (Amidi et al., 2016) was recently published, after some extensions, in the
PeerJ journal and can be used for performance comparison.
1) Comment: Data redundancy
I agree that evolutionary relationships are a confounding factor for the prediction of function from 3D
structure. This is a limitation of the current approach, which is based on deep learning technology.
The incorporated changes are the following:
a) In the abstract I now discuss the results with a focus on the comparison of the method with our
previous approach on the same dataset, rather than emphasizing the overall performance of the method
for protein function prediction.
Results. Cross validation experiments on single-functional enzymes (n=44,661) from the PDB
database achieved 90.1% correct classification, demonstrating an improvement over previous
results on the same dataset when sequence similarity was not considered.
Discussion. The automatic prediction of protein function can provide quick annotations on
extensive datasets opening the path for relevant applications, such as pharmacological target
identification. The proposed method shows promise for structure-based protein function prediction
but sufficient data may not yet be available to properly assess the method's performance on non-homologous proteins and thus reduce the confounding factor of evolutionary relationships.
b) In the results section, I clarify the reasons for not removing data redundancy:
Analysis of protein datasets is often performed after removal of redundancy, such that the
remaining entries do not exceed a pre-arranged threshold of sequence identity. In the previously
presented results, sequence-identity thresholds were not applied to remove sequence redundancy.
Although structure similarity is affected by sequence similarity, the aim was not to lose structural
entries (necessary for efficient learning) because of a sequence-based threshold cutoff. Also, only X-ray
crystallography data were used; such data represent a ‘snapshot’ of a given protein’s 3D structure.
In order not to miss the multiple poses that the same protein may adopt in different crystallography
experiments, the whole dataset was explored.
c) To elaborate on this further, I added the following text to the first paragraph of the Discussion
section:
These results were achieved without imposing any pre-selection criteria, such as criteria based on
sequence identity; thus the effect of evolutionary relationships, as a confounding factor in the
prediction of function from 3D structure, has not been sufficiently studied. Since deep learning
technology requires a large number of samples to produce generalizable models, a filtered dataset with
only non-redundant proteins would be too small for reliable training. This is a limitation of the current
approach, which mainly aimed to increase predictive power over previous methods using common
features for structural representation and common classifiers such as SVM and nearest neighbor,
rather than addressing this confounding factor in the prediction of protein function from structure.
d) The text in the new section (“Effect of sequence redundancy and sample size”) has been modified
to acknowledge the connection between sequence similarity and structure similarity and to rephrase
expressions such as “non-redundant” as “sequences that are less similar”, as shown next.
Subsequently, the performance of the method was also investigated on a less redundant dataset, and
the classification accuracy was compared with that on the original (redundant) dataset randomly
subsampled to include an equal number of proteins. This experiment makes it possible to assess the
effect of redundancy while controlling for the number of samples. Since inference in deep networks
requires the estimation of a very large number of parameters, a large amount of training data is
required and therefore very strict filtering strategies could not be applied.
e) With respect to the pdbaanr dataset, and the comment that I had not clearly communicated exactly
how it is constructed, I should clarify that this dataset was not produced by me. It was used in
studies by other authors; I simply downloaded it without modifying it, so I could not provide more
details than what its authors had communicated. I copied the information from their website, and I
now also include an additional sentence from their email reply when I contacted them for more
details before using this dataset. In the previous revision I included a reference to their paper
(Wang and Dunbrack, 2003); I now also include a link to the website hosting the data.
A dataset, the pdbaanr [http://dunbrack.fccc.edu/Guoli/pisces_download.php], pre-compiled by
PISCES (Wang and Dunbrack, 2003), was used that includes only non-redundant sequences across
all PDB files (n=23242 proteins, i.e. half the size of the original dataset). This dataset has one
representative for each unique sequence in the PDB; representative chains are selected based on
the highest resolution structure available and then the best R-values. Non-X-ray structures are
considered after X-ray structures.
The classification performance was assessed on Architecture 2 by using 80% of the samples for
training and 20% of the samples for testing. For the pdbaanr dataset, the accuracy was 79.3% for
kNN and 75.5% for linear-SVM, whereas for the sub-sampled dataset it was 85.7% for kNN and
83.2% for linear-SVM. The results show that, for the selected classifier (kNN), the accuracy drops by
4.4 percentage points when the number of samples is reduced to half, and by a further ~6.4 percentage
points when the sequences used are less similar. The decrease in performance shows that the method is
affected by the number of samples as well as by their similarity level.
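For clarity, the quoted drops follow directly from the reported kNN accuracies (the 90.1% figure is the full-dataset result from Table 1, obtained with 5-fold cross-validation, while the other two figures use the 80/20 split); a one-line check:

# kNN accuracies quoted above: full dataset, subsampled dataset, pdbaanr dataset
full, subsampled, pdbaanr = 90.1, 85.7, 79.3
print(round(full - subsampled, 1))     # 4.4 -> drop attributed to halving the sample size
print(round(subsampled - pdbaanr, 1))  # 6.4 -> additional drop attributed to lower sequence similarity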
2) Comment: Comparison with methods of others
Results on single-labeled enzymes: when only structural information was used, the classification
accuracy of our first method (Amidi et al., 2016) was 73.5% on almost the same dataset (39,251
proteins from the PDB), and it increased to a maximum accuracy of 83% with our improved method
(Amidi et al., PeerJ, 2017). The accuracy obtained with the proposed framework is considerably
higher (90.1%), but lower than that of previous work that also uses sequence information.
Discussion of the methods of others is included in the second and third paragraphs of the Discussion
section. The following changes were made in the revision:
Most of the previous methods incorporate sequence-based features. Many were assessed on a subset
of enzymes acquired after imposition of different pre-selection criteria and levels of sequence
similarity. More discussion on machine learning techniques for single-label and multi-label enzyme
classification can be found in (Amidi et al., 2017).
….
Thus the presented results are not directly comparable to those of previous methods due to the
use of different features as well as datasets. If desired, the current approach can also easily incorporate
sequence-related features. In such a case, however, the use of non-homologous data would be
inevitable for rigorous assessment.
3) Comment: Evaluation of the models with the hold-out approach, i.e. a set that is used only
for evaluation
The evaluation was based on the testing set (the held-out part), which was not used in any step of
the method. The reported accuracy is the average of the 5 repetitions, each on the 20% hold-out,
thus it is not an over-estimate of the prediction accuracy. I would also like to clarify that I did not
report any results after merging the 5 folds or the 5 training models, since this would bias the results
and the reported accuracy would be higher than the expected one. In such a process an independent
validation set (one for each of the 5 repetitions) would be required to set up parameters of the
network (size of convolution kernel, number of layers, etc) or parameters of the final classifier. I
preselected the architecture of the network (number of layers and kernel size) and of the classifier
(number of neighbors), and therefore did not fit the model parameters to the test data. I agree that an
independent validation set would be useful, not because the evaluation is biased or underestimates the
prediction error, but because it would allow the model parameters to be optimized and possibly improve
the results.
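As an illustrative sketch of the protocol described above (not the actual MATLAB code), the evaluation can be written as follows; train_fn and predict_fn are placeholder callables standing in for training the CNNs plus the fusion classifier and for predicting with them.

import numpy as np

def five_fold_accuracy(X, y, train_fn, predict_fn, n_folds=5, seed=0):
    # Each fold is held out once; its samples are never used to fit the
    # network or the subsequent fusion classifier of that repetition.
    rng = np.random.RandomState(seed)
    folds = np.array_split(rng.permutation(len(y)), n_folds)
    accs = []
    for i in range(n_folds):
        test_idx = folds[i]
        train_idx = np.hstack([folds[j] for j in range(n_folds) if j != i])
        model = train_fn(X[train_idx], y[train_idx])     # CNNs + kNN/SVM fusion
        pred = predict_fn(model, X[test_idx])
        accs.append(np.mean(pred == y[test_idx]))
    return np.mean(accs), np.std(accs)                   # mean (and spread) over the 5 hold-out folds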
4) EC number and structural class
This paper focuses only on single-labeled enzymes. In our recent work (Amidi et al., PeerJ, 2017),
however, we present a machine learning methodology for multi-class and multi-label classification of
enzymes. The dataset with the multi-labeled proteins included only 783 enzymes. Such a small number
of samples is insufficient for deep learning frameworks, so I did not apply the current method to the
multi-labeled enzymes. At present, I recommend the method in (Amidi et al., PeerJ, 2017) for the
classification of multi-labeled enzymes.
5) Color scale in Figures 3 and 5
I changed the color scale in Figures 3 to 5; I think that the previously subtle color differences (around
zero) are now more pronounced. The remaining solid blue areas indicate zero entries, such as
unfavorable conformations (Fig. 3 and Fig. 4).
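For reference, a shared scale of this kind is obtained by giving every panel identical color limits. The figures themselves were produced with my own (MATLAB-based) plotting code, so the following matplotlib lines are only an illustrative sketch with placeholder data; the [0, 1] limits and the blue-to-red colormap mirror the figure captions.

import numpy as np
import matplotlib.pyplot as plt

maps = [np.random.rand(8, 23) for _ in range(3)]        # placeholder feature maps
fig, axes = plt.subplots(1, 3)
for ax, m in zip(axes, maps):
    im = ax.imshow(m, vmin=0.0, vmax=1.0, cmap='jet')   # identical limits -> comparable colors
fig.colorbar(im, ax=axes.tolist())                      # one shared colorbar for all panels
plt.show()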
" | Here is a paper. Please give your review comments after reading it. |
746 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>The unit of experimental measurement in a variety of scientific applications is the onedimensional (1D) continuum: a dependent variable whose value is measured repeatedly, often at regular intervals, in time or space. A variety of software packages exist for computing continuum-level descriptive statistics and also for conducting continuum-level hypothesis testing, but very few offer power computing capabilities, where 'power' is the probability that an experiment will detect a true continuum signal given experimental noise. Moreover, no software package yet exists for arbitrary continuum-level signal / noise modeling. This paper describes a package called power1d which implements (a) two analytical 1D power solutions based on random field theory (RFT) and (b) a high-level framework for computational power analysis using arbitrary continuum-level signal / noise modeling. First power1d's two RFT-based analytical solutions are numerically validated using its random continuum generators. Second arbitrary signal / noise modeling is demonstrated to show how power1d can be used for flexible modeling well beyond the assumptions of RFT-based analytical solutions. Its computational demands are nonexcessive, requiring on the order of only 30 s to execute on standard desktop computers, but with approximate solutions available much more rapidly. Its broad signal / noise modeling capabilities along with relatively rapid computations imply that power1d may be a useful tool for guiding experimentation involving multiple measurements of similar 1D continua, and in particular to ensure that an adequate number of measurements is made to detect assumed continuum signals.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>Analyzing multiple measurements of one-dimensional (1D) continua is common to a variety of scientific applications ranging from annual temperature fluctuations in climatology (Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>) to position trajectories in robotics. These measurements can be denoted y(q) where y is the dependent variable, q specifies continuum position, usually in space or time, and where the continua are sampled at Q discrete points.</ns0:p><ns0:p>For the climate data depicted in Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> y is temperature, q is day and Q=365.</ns0:p><ns0:p>Measurements of y(q) are often: (i) registered and (ii) smooth. The data are 'registered' in the sense that point q is homologous across multiple continuum measurements. Registration implies that it is generally valid to compute mean and variance continua as estimators of central tendency and dispersion <ns0:ref type='bibr' target='#b4'>(Friston et al., 1994)</ns0:ref>. That is, at each point q the mean and variance values are computed, and these form mean and variance continua (Fig. <ns0:ref type='figure' target='#fig_0'>1b</ns0:ref>) which may be considered unbiased estimators of the true population mean and variance continua.</ns0:p><ns0:p>The data are 'smooth' in the sense that continuum measurements usually exhibit low frequency signal. This is often a physical consequence of the spatial or temporal process which y(q) represents. For example, the Earth's rotation is slow enough that day-to-day temperature changes are typically much smaller than season-to-season changes (Fig. <ns0:ref type='figure' target='#fig_0'>1a</ns0:ref>). Regardless of the physical principles underlying the smoothness, basic information theory in fact requires smooth continua because sufficiently high measurement frequency is needed to avoid signal aliasing. This smoothness has important statistical implications because smoothness means that neighboring points (q and q + 1) are correlated, or equivalently that adjacent points do not PeerJ Comput. Sci. reviewing PDF | (CS-2017:04:17420:1:1:NEW 9 Jun 2017)</ns0:p><ns0:p>Manuscript to be reviewed Computer Science vary in a completely independent way. Thus, even when Q separate values are measured to characterize a single continuum, there may be far fewer than Q independent stochastic units underlying that continuum process.</ns0:p><ns0:p>The Canadian temperature dataset in Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref> exhibits both features. The data are naturally registered because each measurement station has one measurement per day over Q=365 days. The data are smooth because, despite relatively high-frequency day-to-day temperature changes, there are also comparatively low-frequency changes over the full year and those low-frequency changes are presumably the signals of interest.</ns0:p><ns0:p>Having computed mean and variance continua it is natural to ask probabilistic questions regarding them, and two basic kinds of probability questions belong to the categories: (i) classical hypothesis testing and (ii) power analysis. Continuum-level hypothesis testing has been well-documented in the literature <ns0:ref type='bibr' target='#b4'>(Friston et al., 1994;</ns0:ref><ns0:ref type='bibr' target='#b14'>Nichols and Holmes, 2002;</ns0:ref><ns0:ref type='bibr' target='#b15'>Pataky, 2016)</ns0:ref> but power has received comparatively less attention. 
While this paper focuses on power analysis it is instructive to first consider continuum-level hypothesis testing because those results are what power analysis attempts to control. </ns0:p></ns0:div>
<ns0:div><ns0:head>Continuum-level hypothesis testing</ns0:head><ns0:p>Classical hypothesis testing can be conducted at the continuum level using a variety of theoretical and computational procedures. In the context of the temperature data (Fig. <ns0:ref type='figure' target='#fig_0'>1b</ns0:ref>) a natural hypothesis testing question is: is there is a statistically significant difference between the Atlantic and Continental mean temperature continua? Answering that question requires a theoretical or computational model of stochastic continuum behavior so that probabilities pertaining to particular continuum differences can be calculated.</ns0:p><ns0:p>One approach is Functional Data Analysis (FDA) <ns0:ref type='bibr' target='#b16'>(Ramsay and Silverman, 2005)</ns0:ref> which combines 'basis functions', or mathematically-defined continua, to model the data. Since the basis functions are analytical, one can compute a variety of probabilities associated with their long-term stochastic behavior.</ns0:p><ns0:p>A second approach is Random Field Theory (RFT) <ns0:ref type='bibr' target='#b0'>(Adler and Hasofer, 1976;</ns0:ref><ns0:ref type='bibr' target='#b7'>Hasofer, 1978)</ns0:ref> which extends Gaussian behavior to the 1D continuum level via a smoothness parameter <ns0:ref type='bibr' target='#b12'>(Kiebel et al., 1999)</ns0:ref> from which a variety of continuum level probabilities can be calculated <ns0:ref type='bibr' target='#b4'>(Friston et al., 1994)</ns0:ref>. A third approach is the non-parametric permutation method of <ns0:ref type='bibr' target='#b14'>Nichols and Holmes (2002)</ns0:ref> Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Computer Science</ns0:head><ns0:p>the true continuum difference is null) would traverse in α percent of an infinite number of experiments, where α is the Type I error rate and is usually 0.05.</ns0:p><ns0:p>Of the three thresholds depicted in Fig. <ns0:ref type='figure' target='#fig_1'>2</ns0:ref> only one (the RFT threshold) is a true continuum-level threshold. The other two depict nappropriate thresholds as references to highlight the meaning of the RFT threshold. In particular, the uncorrected threshold (α=0.05) is 'uncorrected' because it presumes Q = 1; since Q = 365 for these data it is clearly inappropriate. On the other extreme is a Bonferroni threshold which assumes that there are Q completely independent processes. It is a 'corrected' threshold because it acknowledges that Q > 1, but it is inappropriate because it fails to account for continuum smoothness, and thus overestimates the true number of stochastic processes underlying these data. The third method (RFT) is also a 'corrected' threshold, and it is closest to the true threshold required to control α because it considers both Q and smoothness <ns0:ref type='bibr' target='#b4'>Friston et al. (1994)</ns0:ref>. Specifically, it assesses inter-node correlation using the 1D derivative <ns0:ref type='bibr' target='#b12'>(Kiebel et al., 1999)</ns0:ref> to lower the estimated number of independent processes, which in turn lowers the critical threshold relative to the Bonferroni threshold. This RFT approach is described extensively elsewhere <ns0:ref type='bibr' target='#b5'>Friston et al. (2007)</ns0:ref> and has also been validated extensively for 1D continua <ns0:ref type='bibr' target='#b15'>(Pataky, 2016)</ns0:ref>.</ns0:p><ns0:p>For this particular dataset the test statistic continuum crosses all three thresholds, implying that the null hypothesis of equivalent mean continua is rejected regardless of correction procedure. If the continuum differences are not as pronounced as they are here, especially near the start and end of the calendar year, the correction procedure would become more relevant to interpretation objectivity. </ns0:p></ns0:div>
<ns0:div><ns0:head>Continuum-level power analysis</ns0:head><ns0:p>Before conducting an experiment for which one intends to conduct classical hypothesis testing it is often useful to conduct power analysis, where 'power' represents the probability of detecting a true effect. The main purposes of power analysis are (a) to ensure that an adequate number of measurements is made to elucidate a signal of empirical interest and (b) to ensure that not too many measurements are made, in which case one risks detecting signals that are not of empirical interest. The balance point between (a)</ns0:p><ns0:p>and (b) is conventionally set at a power of 0.8, and that convention is followed below.</ns0:p><ns0:p>The literature describes two main analytical approaches to continuum-level power analysis: (i) inflated variance <ns0:ref type='bibr' target='#b6'>(Friston et al., 1996)</ns0:ref> and (ii) noncentral RFT <ns0:ref type='bibr' target='#b8'>(Hayasaka et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b13'>Mumford and Nichols, 2008;</ns0:ref><ns0:ref type='bibr' target='#b11'>Joyce and Hayasaka, 2012)</ns0:ref>. The inflated variance method models signal as smooth Gaussian noise (Fig. <ns0:ref type='figure' target='#fig_2'>3a</ns0:ref>) which is superimposed upon Gaussian noise with different amplitude and smoothness. The non-central RFT approach models signal as a constant mean shift from the null continuum (Fig. <ns0:ref type='figure' target='#fig_2'>3b</ns0:ref>). Since both techniques are analytical power calculations can be made effectively instantaneously. However, both techniques are limited by simple signal models and relatively simple noise models. In reality the signal</ns0:p></ns0:div>
<ns0:div><ns0:head>3/20</ns0:head><ns0:p>PeerJ Comput. Sci. reviewing PDF | (CS-2017:04:17420:1:1:NEW 9 Jun 2017)</ns0:p><ns0:p>Manuscript to be reviewed</ns0:p><ns0:p>Computer Science can be geometrically arbitrary and the noise can be arbitrarily complex (Fig. <ns0:ref type='figure' target='#fig_2'>3c</ns0:ref>). Currently no analytical methods exist for arbitrary signal geometries and arbitrary noise.</ns0:p><ns0:p>The purpose of this study was to develop a computational approach to continuum-level power analysis that permits arbitrary signal and noise modeling. This paper introduces the resulting open-source Python software package called power1d, describes its core computational components, and cross-validates its ultimate power results with results from the two existing analytical methods (inflated variance and non-central RFT). Source code, HTML documentation and scripts replicating all results in this manuscript are available at http://www.spm1d.org/power1d. <ns0:ref type='bibr' target='#b13'>Mumford and Nichols (2008)</ns0:ref>. (c) This paper's proposed computational method. RFT=random field theory.</ns0:p></ns0:div>
<ns0:div><ns0:head>SOFTWARE IMPLEMENTATION</ns0:head><ns0:p>power1d was developed in Python 3.6 (van Rossum, 2014) using Anaconda 4.4 (Continuum Analytics, 2017) and is also compatible with Python 2.7. Its dependencies include Python's standard numerical, scientific and plotting packages:</ns0:p><ns0:p>• NumPy 1.11 (van der Walt et al., 2011)</ns0:p><ns0:p>• SciPy 0.19 <ns0:ref type='bibr' target='#b10'>(Jones et al., 2001)</ns0:ref> • matplotlib 2.0 <ns0:ref type='bibr' target='#b9'>(Hunter, 2007)</ns0:ref>.</ns0:p><ns0:p>Other versions of these dependencies are likely compatible but have not been tested thoroughly. The package is organized into the following modules:</ns0:p><ns0:p>• power1d.geom -1D geometric primitives for data modeling</ns0:p><ns0:p>• power1d.models -high-level interfaces to experiment modeling and numerical simulation </ns0:p></ns0:div>
<ns0:div><ns0:head>Geometry (power1d.geom)</ns0:head><ns0:p>Basic geometries can be constructed and visualized as follows:</ns0:p><ns0:formula xml:id='formula_0'>import power1d Q = 101 y = power1d.geom.GaussianPulse( Q , q=60 , fwhm=20, amp=3.2 ) y.plot()</ns0:formula><ns0:p>Here Q is the continuum size, q is the continuum position at which the Gaussian pulse is centered, fwhm is the full-width-at-half-maximum of the Gaussian kernel, and amp is its maximum value (Fig. <ns0:ref type='figure' target='#fig_3'>4</ns0:ref>).</ns0:p><ns0:p>All of power1d's geometric primitives have a similar interface and are depicted in Fig. <ns0:ref type='figure'>5</ns0:ref>. More complex geometries can be constructed using standard Python operators as follows (see Fig. <ns0:ref type='figure'>6</ns0:ref>). Here J is sample size and is a necessary input for all power1d.noise classes. This code chunk results in the noise depicted in Fig. <ns0:ref type='figure'>7</ns0:ref>. The SmoothGaussian noise (Fig. <ns0:ref type='figure'>7b</ns0:ref>) represents residuals observed in real datasets like those depicted implicitly in Fig. <ns0:ref type='figure' target='#fig_0'>1a</ns0:ref>. For this SmoothGaussian noise model the fwhm parameter represents the full-width-at-half-maximum of a Gaussian kernel that is convolved with uncorrelated Gaussian continua. RFT describes probabilities associated with smooth Gaussian continua (Fig. <ns0:ref type='figure'>7b</ns0:ref>) and in particular the survival functions for test statistic continua <ns0:ref type='bibr' target='#b6'>(Friston et al., 1996;</ns0:ref><ns0:ref type='bibr' target='#b15'>Pataky, 2016)</ns0:ref>.</ns0:p><ns0:p>All power1d noise models are depicted in Fig. <ns0:ref type='figure'>8</ns0:ref>. Compound noise types are supported including additive, mixture, scaled and signal-dependent. As an example, the additive noise model depicted in Fig. <ns0:ref type='figure'>8</ns0:ref> (bottom-left panel) can be constructed as follows:</ns0:p><ns0:formula xml:id='formula_1'>n0 = power1d.noise.Gaussian( J , Q , mu=0 , sigma=0.1 ) n1 = power1d.noise.SmoothGaussian( J , Q , mu=0 , sigma=1.5 , fwhm=40 ) n = power1d.noise.Additive ( noise0 , noise1 )</ns0:formula><ns0:p>All noise models use the random method to generate new random continua, and all store the current continuum noise in the value attribute, and all number generation can be controlled using NumPy's random.seed method as follows: Note that the emodel0 and emodel1 objects represent null and a Gaussian pulse signal, respectively, and thus represent the null and alternative hypotheses, respectively. The Monte Carlo simulation proceeds over 10,000 iterations (triggered by the simulate command) and completes for this example in approximately 2.2 s. The final results.plot command produces the results depicted in Fig. <ns0:ref type='figure' target='#fig_6'>12</ns0:ref>.</ns0:p><ns0:p>In this example the omnibus power is 0.92 (Fig. <ns0:ref type='figure' target='#fig_6'>12</ns0:ref>, top left panel), implying that the probability of rejecting the null at at least one continuum location is 0.92. This omnibus power should be used when the hypothesis pertains to the entire continuum because it embodies whole-continuum-level control of both false negatives and false positives.</ns0:p><ns0:p>While the omnibus power is greater than 0.9, the point-of-interest (POI) and center-of-interest (COI) powers are both well below 0.8 (Fig. <ns0:ref type='figure' target='#fig_6'>12</ns0:ref>, bottom right panel); see the Fig. 
<ns0:ref type='figure' target='#fig_6'>12</ns0:ref> caption for a description of POI and COI powers. The POI power should be used if one's hypothesis pertains to a single continuum location. The COI power should be used if the scope of the hypothesis is larger than a single point but smaller than the whole continuum.</ns0:p><ns0:p>Overall these results imply that, while the null hypothesis will be rejected with high power, it will not always be rejected in the continuum region which contains the modeled signal (i.e., roughly between continuum positions 40 and 80). This simple model thus highlights the following continuum-level power concepts:</ns0:p><ns0:p>• The center-of-interest (COI) continuum depicts the same but expands the search area to a certain radius surrounding the POI, in this case with an arbitrary radius of three. Thus the omnibus power is equivalent to the maximum COI power when the COI radius is Q (i.e., the full continuum size). The integral of the POI power continuum for the null model is α. Powers of 0, 0.8 and 1 are displayed as dotted lines for visual reference.</ns0:p><ns0:p>• The investigator must specify the scope of the hypothesis in an a priori manner (i.e. single point, general region or whole-continuum) and use the appropriate power value (i.e. POI, COI or omnibus, respectively).</ns0:p><ns0:p>The model depicted in Fig. <ns0:ref type='figure' target='#fig_6'>12</ns0:ref> is simple, and similar results could be obtained analytically by constraining the continuum extent of noncentral RFT inferences <ns0:ref type='bibr' target='#b8'>(Hayasaka et al., 2007)</ns0:ref>. The advantages of numerical simulation are thus primarily for situations involving arbitrary complexities including but not limited to: multiple, possibly interacting signals, signal-dependent noise, covariate-dependent noise, unequal sample sizes, non-sphericity, etc. All of these complexities introduce analytical difficulties, but all are easily handled within power1d's numerical framework.</ns0:p></ns0:div>
<ns0:div><ns0:head>Regions of interest (power1d.roi)</ns0:head><ns0:p>The final functionality supported in power1d is hypothesis constraining via region of interest (ROI) continua. In practical applications, even when complete continua are recorded, one's hypothesis does not necessarily relate to the whole continuum. For example, the Canadian temperature example (Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>) depict daily values collected for the whole year, but one's hypothesis might pertain only to the summer months (approximately days 150 -250). In this case it is probably most practical to model the entire year, but constrain the hypothesis to a certain portion of it as follows: The code above models a maximum temperature increase of six degrees on Day 200 as a Gaussian pulse with an FWHM of 100 days, and constrains the hypothesis to Days 150-250 via the set roi method.</ns0:p><ns0:formula xml:id='formula_2'>data =</ns0:formula><ns0:p>The results in Fig. <ns0:ref type='figure' target='#fig_8'>13</ns0:ref> depict the ROI as blue background window and suggest that the omnibus power is close to 0.8. Setting the COI radius to the ROI radius of 50 via the set coi radius method emphasizes that the COI power continuum's maximum is the same as the omnibus power. Also note that, had an ROI not been set, the ROI is implicitly the entire continuum, in which case the omnibus power would have been considerably lower at 0.586. This emphasizes the fact that the critical threshold must be raised as the continuum gets larger in order to control for omnibus false positives across the continuum. These analyses, involving a more complex additive noise model and 10,000 iterations, required approximately 15 s on a standard desktop PC. where u is the critical threshold and where the power is p=0.829. To replicate this in power1d one must create a model which replicates the assumptions underlying the analytical calculation above. In the code below a continuum size of Q=2 is used because that is the minimum size that power1d supports. Here power is given by the p reject1 attribute of the simulation results (i.e., the probability of rejecting the null hypothesis in alternative experiment given the null and alternative models) and in this case the power is estimated as p=0.835. Increasing the number of simulation iterations improves convergence to the analytical solution.</ns0:p><ns0:formula xml:id='formula_3'>Q = 2 baseline = power1d.geom.Null( Q ) signal0 = power1d.geom.Null( Q ) signal1 = power1d.geom.Constant( Q , amp=effect ) noise = power1d.noise.Gaussian( J , Q , mu=0<ns0:label>, sigma=1 ) model0</ns0:label></ns0:formula><ns0:p>Repeating across a range of sample and effect sizes yields the results depicted in Fig. <ns0:ref type='figure' target='#fig_10'>14</ns0:ref>. This power1d interface for computing 0D power is admittedly verbose. Nevertheless, as a positive point power1d's interface emphasizes the assumptions that underly power computations, and in particular the nature of the signal and noise models.</ns0:p></ns0:div>
<ns0:div><ns0:head>1D power: inflated variance method</ns0:head><ns0:p>The inflated variance method <ns0:ref type='bibr' target='#b6'>(Friston et al., 1996)</ns0:ref> models signal as a Gaussian continuum with a particular smoothness and particular variance. power1d does not support random signal modeling, but the inflated variance model can nevertheless be modeled using alternative noise models as demonstrated below. First Here W0 and W1 are the continuum smoothness values under the null and alternative hypotheses, respectively, and sigma is the effect size as the standard deviation of the 'signal' (i.e., noise) under the alternative.</ns0:p><ns0:p>Next the critical RFT threshold can be computed using power1d's inverse survival function following <ns0:ref type='bibr'>Friston et al. (1996) (Eqn.5, p.226)</ns0:ref> as follows:</ns0:p><ns0:formula xml:id='formula_4'>u = power1d.prob.t_isf( alpha , df , Q , W0 )</ns0:formula><ns0:p>Next the smoothness and threshold parameters are transformed according to <ns0:ref type='bibr' target='#b6'>Friston et al. (1996)</ns0:ref> (Eqns.8-9, p.227):</ns0:p><ns0:formula xml:id='formula_5'>s2 = sigma f = float( W1 ) / W0 Wstar = W0 * ( ( 1 + s2 ) / (1 + s2 / ( 1 + f ** 2 ) ) ) ** 0.5 ustar = u * ( 1 + s2 ) ** -0.5</ns0:formula><ns0:p>Here s2 is the variance and f is the ratio of signal-to-noise smoothness. The probability of rejecting the null hypothesis when the alternative is true is given as the probability that random fields with smoothness W * will exceed the threshold u * (Wstar and ustar, respectively), and where that probability can be computed using the standard RFT survival function:</ns0:p><ns0:formula xml:id='formula_6'>p = power1d.prob.t_sf( ustar , df , Q , Wstar )</ns0:formula><ns0:p>Here the analytical power is p=0.485. Validating this analytical power calculation in power1d can be achieved using a null signal and two different noise models as follows: The numerically estimate power is p=0.492, which is reasonably close to the analytical probability of 0.485 after just 1000 iterations. Repeating for background noise smoothness values of 10, 20 and 50, sample sizes of 5, 10 and 25 and effect sizes ranging from σ =0.5 to 2.0 yields the results depicted in Fig. <ns0:ref type='figure' target='#fig_11'>15</ns0:ref>. Close agreement between the theoretical and simulated power results is apparent. As noted by <ns0:ref type='bibr' target='#b8'>Hayasaka et al. (2007)</ns0:ref> powers are quite low for the inflated variance approach because the signal is not strong; the 'signal' is effectively just a different type of noise. The noncentral RFT approach described in the next section addresses this limitation.</ns0:p></ns0:div>
<ns0:div><ns0:head>1D power: noncentral RFT method</ns0:head><ns0:p>The noncentral RFT method models signal as a constant continuum shift <ns0:ref type='bibr' target='#b8'>(Hayasaka et al., 2007)</ns0:ref>. Like the inflated variance method above, it can be computed analytically in power1d by first defining all power-relevant parameters, where delta is the noncentrality parameter. Power can then be computed via noncentral RFT <ns0:ref type='bibr' target='#b8'>(Hayasaka et al., 2007;</ns0:ref><ns0:ref type='bibr' target='#b13'>Mumford and Nichols, 2008;</ns0:ref><ns0:ref type='bibr' target='#b11'>Joyce and Hayasaka, 2012)</ns0:ref>; for the present example the analytical power is p=0.731. Validating this numerically in power1d (a sketch is given at the end of this section) yields an estimated power of p=0.747 after just 1000 iterations, which is again similar to the analytical probability. Repeating for smoothness values of 10, 20 and 50, sample sizes of 5, 10 and 25 and effect sizes ranging from 0.1 to 0.7 yields the results depicted in Fig. <ns0:ref type='figure' target='#fig_13'>16</ns0:ref>. Agreement between the theoretical and numerically simulated powers is reasonable except for large effect sizes and intermediate sample sizes (Fig. <ns0:ref type='figure' target='#fig_13'>16c, J=25</ns0:ref>). Since theoretical and simulated results appear to diverge predominantly for high powers, these results suggest that the noncentral RFT approach is valid in scenarios where powers of approximately 0.8 are sought for relatively small sample sizes.</ns0:p><ns0:p>While the noncentral RFT approach has addressed the low-power limitation of the inflated variance method (Fig. <ns0:ref type='figure' target='#fig_11'>15</ns0:ref>), its 'signal' is geometrically simple in the form of a mean shift. Clearly other, more complex signal geometries may be desirable. For example, in the context of the Canadian temperature data (Fig. <ns0:ref type='figure' target='#fig_0'>1</ns0:ref>) one may have a forward dynamic model which predicts regional temperatures through region-specific parameters such as land formations, foliage, wind patterns, proximity to large bodies of water and atmospheric carbon dioxide. Forward models like these can be used to generate specific continuum predictions based on, for example, increases in atmospheric carbon dioxide. Those continuum predictions are almost certainly not simple signals like the ones represented by the inflated variance and noncentral RFT methods. Therefore, when planning an experiment to test continuum-level predictions, and specifically when determining how many continuum measurements are needed to achieve a threshold power, the numerical simulation capabilities of power1d may be valuable.</ns0:p></ns0:div>
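<ns0:div><ns0:p>A minimal sketch of the numerical validation for the noncentral RFT case is given below. The sample size J, continuum size Q, smoothness W and effect size are illustrative assumptions; the constant continuum shift and smooth Gaussian noise follow the description in the text.</ns0:p><ns0:formula>import power1d

J , Q  = 10 , 101            # sample size and continuum size (illustrative assumptions)
W      = 20.0                # continuum smoothness (FWHM) (illustrative assumption)
effect = 0.5                 # constant continuum shift in SD units (illustrative assumption)
delta  = effect * J ** 0.5   # noncentrality parameter used by the analytical calculation (not shown here)

baseline = power1d.geom.Null( Q )
signal0  = power1d.geom.Null( Q )
signal1  = power1d.geom.Constant( Q , amp=effect )   # signal: constant continuum shift
noise    = power1d.noise.SmoothGaussian( J , Q , mu=0 , sigma=1 , fwhm=W )

model0 = power1d.models.DataSample( baseline , signal0 , noise , J=J )
model1 = power1d.models.DataSample( baseline , signal1 , noise , J=J )

teststat = power1d.stats.t_1sample
emodel0  = power1d.models.Experiment( model0 , teststat )
emodel1  = power1d.models.Experiment( model1 , teststat )

sim     = power1d.ExperimentSimulator( emodel0 , emodel1 )
results = sim.simulate( 1000 )
print( results.p_reject1 )   # numerically estimated power</ns0:formula></ns0:div>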
<ns0:div><ns0:head>COMPARISON WITH OTHER SOFTWARE PACKAGES</ns0:head><ns0:p>Power calculations for 0D (scalar) data are available in most commercial and open-source statistical software packages. Many offer limited functionality, supporting only the noncentral t distribution, and many have vague user interfaces in terms of experimental design. Some also offer an interface to noncentral F computations, but nearly all have limited capabilities in terms of design.</ns0:p><ns0:p>The most comprehensive and user-friendly software package for computing power is G-power <ns0:ref type='bibr' target='#b3'>(Faul et al., 2007)</ns0:ref>. In addition to the standard offerings of noncentral t computations, G-power also offers noncentral distributions for F, χ 2 and a variety of other test statistics. It has an intuitive graphical user interface that is dedicated to power-specific questions. However, in the context of this paper G-power is identical to common software packages in that its power calculations are limited to 0D (scalar) data.</ns0:p><ns0:p>Two software packages dedicated to continuum-level power assessments, and those most closely related to power1d, are:</ns0:p><ns0:p>1. PowerMap <ns0:ref type='bibr' target='#b11'>(Joyce and Hayasaka, 2012)</ns0:ref> 2. fmripower <ns0:ref type='bibr' target='#b13'>(Mumford and Nichols, 2008)</ns0:ref> Both PowerMap and fmripower are designed specifically for continuum-level power analysis, and both extend the standard noncentral t and F distributions to the continuum domain via RFT. They have been used widely in the field of Neuroimaging for planning brain imaging experiments, and they both offer graphical interfaces with a convenient means of incorporating pilot data into guided power analyses.</ns0:p><ns0:p>However, both are limited in terms of the modeled signals they offer. RFT's noncentral t and F distributions model 'signal' as a whole-continuum mean displacement, which is geometrically simple relative to the types of geometries that are possible at the continuum level (see the Software Implementation: Geometry section above). PowerMap and fmripower somewhat overcome the signal simplicity problem through continuum region constraints, where signal is modeled in some regions and not in others in a binary sense. This approach is computationally efficient but is still geometrically relatively simple. A second limitation of both packages is that they do not support numerical simulation of random continua. This is understandable because it is computationally infeasible to routinely simulate millions or even thousands of the large-volume 3D and 4D random continua that are the target of those packages' power assessments.</ns0:p><ns0:p>Consequently, neither PowerMap nor fmripower supports arbitrary continuum signal modeling.</ns0:p><ns0:p>As outlined in the examples above, power1d replicates the core functionality of PowerMap and fmripower for 1D continua. It also offers functionality that does not yet exist in any other package:</ns0:p><ns0:p>arbitrary continuum-level signal and noise modeling and associated computational power analysis through numerical simulation of random continua. 
This functionality greatly increases the flexibility with which one can model one's data, and allows investigators to think about the signal and noise in real-world units, without directly thinking about effect sizes and effect continua.</ns0:p></ns0:div>
<ns0:div><ns0:head>SUMMARY</ns0:head><ns0:p>This paper has described a Python package called power1d for estimating power in experiments involving 1D continuum data. Its two main features include (a) analytical continuum-level power calculations based on random field theory (RFT) and (b) computational power analysis via continuum-level signal and noise modeling. Numerical simulation is useful for 1D power analysis because 1D continuum signals can adopt arbitrary and non-parameterizable geometries. This study's cross-validation results show that power1d's numerical estimates closely follow theoretical solutions, and also that its computational demands are not excessive, with even relatively complex model simulations completing in under 20 s. Since power1d accommodates arbitrary signals, arbitrary noise models and arbitrarily complex experimental designs it may be a viable choice for routine yet flexible power assessments prior to 1D continuum experimentation.</ns0:p></ns0:div><ns0:figure xml:id='fig_0'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. Canadian temperature data (Ramsay and Silverman, 2005). (a) All measurements. (b) Means (thick lines) and standard deviations (error clouds). Dataset download on 28 March 2017 from: http://www.psych.mcgill.ca/misc/fda/downloads/FDAfuns/Matlab/fdaMatlab.zip (./examples/weather)</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_1'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Two-sample hypothesis test comparing the Atlantic and Continental regions from Fig.1. The test statistic continuum is depicted along with uncorrected, random field theory (RFT)-corrected and Bonferroni-corrected thresholds.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Continuum-level power analysis methods. (a) Friston et al. (1996). (b) Hayasaka et al. (2007);<ns0:ref type='bibr' target='#b13'>Mumford and Nichols (2008)</ns0:ref>. (c) This paper's proposed computational method. RFT=random field theory.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>•Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. Example GaussianPulse geometry.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 5 .Figure 6 .</ns0:head><ns0:label>56</ns0:label><ns0:figDesc>Figure 5. All geometric primitives. The Continuum1D primitive accepts an arbitrary 1D array as input, and all other primitives are parameterized.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 7 .Figure 8 .Figure 9 .Figure 10 .Figure 11 .</ns0:head><ns0:label>7891011</ns0:label><ns0:figDesc>Figure 7. (a) Uncorrelated Gaussian noise. (b) Smooth (correlated) Gaussian noise.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 12 .</ns0:head><ns0:label>12</ns0:label><ns0:figDesc>Figure12. Example power1d results (α=0.05). The top left panel depicts the two experiment models and the omnibus power (p=0.920). The bottom two panels depict power continua (left: null model, right: alternative model). The point-of-interest (POI) continuum indicates the probability of null hypothesis rejection at each continuum point. The center-of-interest (COI) continuum depicts the same but expands the search area to a certain radius surrounding the POI, in this case with an arbitrary radius of three. Thus the omnibus power is equivalent to the maximum COI power when the COI radius is Q (i.e., the full continuum size). The integral of the POI power continuum for the null model is α. Powers of 0, 0.8 and 1 are displayed as dotted lines for visual reference.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head /><ns0:label /><ns0:figDesc>data = power1d.data.weather()
y = data[ 'Continental' ]
baseline = power1d.geom.Continuum1D( y.mean( axis=0 ) )
signal0 = power1d.geom.Null( Q )
signal1 = power1d.geom.GaussianPulse( Q , q=200 , amp=6 , fwhm=100 )
n0 = power1d.noise.Gaussian( J , Q , mu=0 , sigma=0.3 )
n1 = power1d.noise.SmoothGaussian( J , Q , mu=0 , sigma=5 , fwhm=70 )
noise = power1d.noise.Additive( n0 , n1 )
model0 = power1d.models.DataSample( baseline , signal0 , noise , J=J )
model1 = power1d.models.DataSample( baseline , signal1 , noise , J=J )
teststat = power1d.stats.t_1sample
emodel0 = power1d.models.Experiment( model0 , teststat )
emodel1 = power1d.models.Experiment( model1 , teststat )
sim = power1d.ExperimentSimulator( emodel0 , emodel1 )
results = sim.simulate( 10000 )
roi = np.array( [ False ]</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 13 .</ns0:head><ns0:label>13</ns0:label><ns0:figDesc>Figure 13. Example region of interest (ROI)-constrained power results (α=0.05). Note that a COI radius of 365 would raise the null COI power continuum to α.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head /><ns0:label /><ns0:figDesc>model0 = power1d.DataSample( baseline , signal0 , noise , J=J )
model1 = power1d.DataSample( baseline , signal1 , noise , J=J )

Last, simulate the modeled experiments and numerically estimate power:

teststat = power1d.stats.t_1sample
emodel0 = power1d.models.Experiment( model0 , teststat )
emodel1 = power1d.models.Experiment( model1 , teststat )
sim = power1d.ExperimentSimulator( emodel0 , emodel1 )
results = sim.simulate( 1000 )
roi = np.array( [ True , False ] )
results.set_roi( roi )
p = results.p_reject1</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_10'><ns0:head>Figure 14 .</ns0:head><ns0:label>14</ns0:label><ns0:figDesc>Figure 14. Validation of power1d's 0D power calculations. Solid lines depict theoretical solutions from the noncentral t distribution and dots depict power1d's numerically simulated results (1000 iterations each).</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_11'><ns0:head>Figure 15 .</ns0:head><ns0:label>15</ns0:label><ns0:figDesc>Figure 15. Validation results for the inflated variance approach to 1D power. Solid lines depict theoretical solutions from the noncentral random field theory and dots depict power1d's numerically simulated results (10,000 iterations each). J represents sample size and FWHM represents the smoothness of the background noise process.</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_12'><ns0:head /><ns0:label /><ns0:figDesc>delta = effect * J ** 0.5</ns0:figDesc></ns0:figure>
<ns0:figure xml:id='fig_13'><ns0:head>Figure 16 .</ns0:head><ns0:label>16</ns0:label><ns0:figDesc>Figure16. Validation results for the noncentral random field theory approach to 1D power. Solid lines depict theoretical solutions from the noncentral random field theory and dots depict power1d's numerically simulated results (10,000 iterations each). FWHM and J represent continuum smoothness and sample size, respectively.</ns0:figDesc></ns0:figure>
</ns0:body>
" | "Responses
Thank you very much to the Reviewers for your comments and
suggestions, and thank you as well to the Editorial team for your time and
consideration. Please find that I have reproduced and responded to all
Reviewer comments below. I have revised the manuscript where
indicated, and have highlighted all manuscript edits in yellow for visual
convenience.
Reviewer 1
Basic reporting
The manuscript submitted introduces a 1D power estimation toolbox. It is
clearly written and will be relevant for future studies within many fields.
Response: (No response, thank you!)
Figures are clear and support the understanding of the paper, while not all
y axis are labeled (4, 5, 6, 7, 8, 9,10, 12, 13) and different line styles would
help the understanding of graphs when printed black-white.
Response: Please find that all axes are now labeled and that different line
styles have been used in addition to colour to distinguish amongst lines
where relevant.
As I am not as experienced with the mathematics that underpin the ideas
behind code, I would have appreciated a bit more detail regarding what
findings means - e.g. what values are we looking for?
Response: Thank you for this comment. This is a good point but one that
is difficult to address directly because what one should look for depends
on how one models both the signal and the noise. For example, imagine
that one has modelled a signal in a specific continuum region (like in Fig.
12). One might be interested only in that continuum region, in which case
one would consider only the POI or COI power in that particular region.
On the other hand, if one is interested also in controlling false positives
across the entire continuum then one should consider the omnibus
power.
In attempts to clarify please find that I have made the following revisions
(between Figs.11 and 12):
“In this example the omnibus power is 0.92 (Fig.12, top left panel),
implying that the probability of rejecting the null at at least one continuum
location is 0.92. This omnibus power should be used when the
hypothesis pertains to the entire continuum because it embodies whole-continuum-level control of both false negatives and false positives.
While the omnibus power is greater than 0.9, the point-of-interest (POI)
and center-of-interest (COI) powers are both well below 0.8 (Fig.12,
bottom right panel); see the Fig.12 caption for a description of POI and
COI powers. The POI power should be used if one's hypothesis pertains
to a single continuum location. The COI power should be used if the
scope of the hypothesis is larger than a single point but smaller than the
whole continuum.
Overall these results imply that, while the null hypothesis will be rejected
with high power, it will not always be rejected in the continuum region
which contains the modeled signal (i.e., roughly between continuum
positions 40 and 80). This simple model thus highlights the following
continuum-level power concepts:
-- Continuum-level signals can be modeled with arbitrary geometry
-- Continuum-level omnibus power does not necessarily pertain to the
modeled signal
-- The investigator must specify the scope of the hypothesis in an a priori
manner (i.e. single point, general region or whole-continuum) and use the
appropriate power value (i.e. POI, COI or omnibus, respectively).
Experimental design
The research question is well defined and the conclusion answers/
addresses the purpose of the paper.
Methods applied in the paper are well described and proven using
simulations. Code is made available allowing the paper to be replicated.
Response: (No response, thank you!)
Validity of the findings
The processes introduced within the paper are sound and provide the
user a ability to easily perform a 1d power estimation of a waveform.
Conclusion are well stated and differences to existing platforms are
explained and justify benefit of the introduced methods and hence the
need for publication.
Response: (No response, thank you!)
Comments for the Author
Congratulations Todd to a well written and interesting paper.
Below you'll find minor suggestion that will eliminate a few typos and
might make the paper a bit easier to read for users from a sport science
background.
Response: Thank you for pointing these out! Please find that nearly all of
the suggested edits have been made as summarized below.
Could you provide a code with would provide an answer to the number of
samples needed for an X% increase of a peak value?
Response: Yes, and thank you for this idea. Please find two new scripts
here:
./power1d/examples/ex_percent_increase.py
./power1d/examples/ex_percent_increase_sample_size.py
The first script demonstrates the basics: power calculation for a given
percent change (using 0D data). The second script demonstrates
sample-size calculation for a given percent change.
Note that the only difference from standard power calculations is that the
effect size is computed slightly differently. Normally one specifies the absolute
increase (“x”) and the standard deviation (“s”), and these give the effect
size as:
effect_size = x / s
If specifying a percent change you only need to compute the absolute
change based on some datum level (“x0”) and a percent change value
(“p”). Then the effect size becomes simply:
effect_size = ( p * x0 ) / s
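
For concreteness, a minimal sketch of how a percent-change effect size could feed a
0D power1d computation is given below; the datum x0, percent change p, standard
deviation s and sample size J are illustrative assumptions, not the values used in
the supplied scripts.

import numpy as np
import power1d

x0 , p , s , J = 100.0 , 0.05 , 10.0 , 12   # datum, percent change, SD, sample size (assumed)
effect   = ( p * x0 ) / s                   # effect size for the assumed percent increase

Q        = 2                                # minimum continuum size supported by power1d
baseline = power1d.geom.Null( Q )
signal0  = power1d.geom.Null( Q )
signal1  = power1d.geom.Constant( Q , amp=effect )
noise    = power1d.noise.Gaussian( J , Q , mu=0 , sigma=1 )
model0   = power1d.models.DataSample( baseline , signal0 , noise , J=J )
model1   = power1d.models.DataSample( baseline , signal1 , noise , J=J )
teststat = power1d.stats.t_1sample
sim      = power1d.ExperimentSimulator( power1d.models.Experiment( model0 , teststat ) ,
                                        power1d.models.Experiment( model1 , teststat ) )
results  = sim.simulate( 1000 )
results.set_roi( np.array( [ True , False ] ) )
print( results.p_reject1 )                  # estimated power; loop over J to find a target sample size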
Line 49: consider 'q represents the day of the ...'
Response: Thank you for pointing out the poor wording. I think the
phrasing below might be even clearer, but please advise if you feel it is
still unclear.
Revision: The data are naturally registered because each measurement
station has one measurement per day over Q=365 days.
Line 66: 'analytical COMMA one can ....'
Revision: (as suggested)
Line 69-71: I am not sure if I got this right but there is a comma missing:
'behavior COMMA .....'
Revision: behavior directly COMMA ....
Line 84: remove 'also'
Revision: (as suggested)
Line 84 and following: you use absolute terms use incorrect rather then
clearly incorrect, while incorrect should be replace with inappropriate.
This should be adapted in the following lines
Revision: (as suggested)
Line 107: do you mean 'model a signal as ....' ? The same in line 109.
Response: I personally find “model signal as...” to be clearer; the phrase
“a signal” is somewhat unclear for 1D data because 1D data can contain
multiple independent signals which potentially interact with each other. I
believe that the term “signal” allows for the possibility of multiple
interacting simpler signals. Please advise if you disagree.
Line 208 and following please review the style of the term DataSample
and make it consistent.
Response: I think the original use of “DataSample” and “data sample” is
OK because these terms refer to different things. “DataSample” refers to
a computational entity, and specifically to power1d’s high-level definition
of all data samples. “Data sample” refers to a specific numerical
realization of a DataSample. Using object-oriented programming
terminology, these terms refer to classes and objects, respectively. In
attempts to clarify please find that I have added the passage
below.
Revision: In this section the terms ``DataSample'' and ``data sample''
refer to the object class: power1d.models.DataSample and a numerical
instantiation of that class, respectively.
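
For example (all values below are illustrative):

import power1d
Q , J    = 101 , 8                                   # continuum size and sample size (assumed)
baseline = power1d.geom.Null( Q )
signal   = power1d.geom.Null( Q )
noise    = power1d.noise.Gaussian( J , Q , mu=0 , sigma=1 )
ds       = power1d.models.DataSample( baseline , signal , noise , J=J )
# "DataSample" is the class; ds is one numerical instantiation, i.e. a "data sample".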
Line 301: You might consider to mention / explain Monte Carlo simulation
and the Omnibus test before mentioning it - as your techniques will be
used in a sport science world.
Response: I agree that it would be necessary to define these terms if the
target audience were mainly the world of sports science. However, I
expect that most readers of PeerJ-CS (including: computer scientists,
data scientists, statisticians, etc.) will be familiar with these terms. Please
note that I am currently preparing a separate paper with colleagues from
sports science which explores 1D power concepts in the context of
various applications from the sports science literature, and which also
conducts experiments to validate the proposed power1d approach for
common sports science experiments. In that paper many technical
issues, including both Monte Carlo simulation and the omnibus criterion,
are clarified. I hope that this response will be sufficient, but if not I would
be happy to add a clarification in this manuscript as well. Please advise.
Figure 12: p = 0.992 is not correct. Please explain null and alternative in
the text before using it in the figure. Why did you choose COI = 3? What
are the dotted line in the figure?
Response: Apologies for the incorrect p value, I have edited it
accordingly.
The terms “null” and “alternative” are defined before this figure, on Lines
278-280 of the original manuscript, as follows: “...power analysis can be
numerically conducted by comparing two experiment models, one
representing the null hypothesis (which contains null signal) and one
representing the alternative hypothesis (which contains the signal one
wishes to detect)”.
COI = 3 is the default software setting and is arbitrary. The figure caption
describes what happens as the COI size changes, but in attempts to
further clarify please find that I have added the following statement to the
caption: “in this case with an arbitrary radius of three”. In other words, I
fully agree that investigators should justify their COI choice for specific
applications, but this paper is more general. This paper simply points out
that one’s COI choice can affect the results.
The dotted lines represent the lower power limit (p=0), the conventional
target power (p=0.8) and the upper power limit (p=1). These are just
visual guidelines. Please find that the revised Fig.12 labels these
horizontal lines as “power datum”.
Line 527: '(see §)' should refer to something
Response: Apologies for this mistake. Revised as follows: “(see the
Software Implementation: Geometry section above)”
Line 530: consider to omit 'understandable'
Response: Agreed, revised as suggested.
Reviewer 2
Response: Thank you very much for your comments and for the positive
feedback. I apologize for the poor wording cited under “MINOR POINT”
below; I have revised this passage as suggested.
Basic reporting
In the submitted manuscript, 'Power1D: A Python toolbox for numerical
power 1 estimates in experiments involving one-dimensional continua',
the author presents a computer package called 'power1d', which
implements analytical 1D power solutions and develops a computational
approach to continuum-level power analysis, which allows arbitrary signal
and noise modeling.
Use of English is correct, clear and unambiguous. The introduction gives
a good account of the statistical and computational problems that arise in
continuum measurements. The need for the computational methods
implemented in the package is well motivated with real data clearly
explained. Figures are well designed, relevant and informative. Literature
is relevant and well referenced.
Raw data as well as source code are supplied and well documented into
the site of the package.
MINOR POINT:
End of line 45, continuing in line 46, substitute 'to not vary completely
independently' by 'do not vary in a completely independent way'.
Experimental design
The author proposes, implement, test and demonstrates a computational
approach to continuum-level power analysis that permits arbitrary signal
and noise modeling, which is clearly within the scope of 'PeerJ Computer
Science'.
Implementation, dependencies and capabilities of the package are well
described in the manuscript and more details are given in the site of the
package (http://www.spm1d.org/power1d). The research question is well
defined and its relevance is clear, while the solutions implemented in the
package are well supported. The research presented fills a gap in the
methods for continuum-level power analysis, as the author shows both, in
the introduction as well as in the discussion of the results.
The author comprehensively demonstrates the use of the package for
distinct scenarios, clearly showing the frameworks where the methods are
of application. The use of real data makes the presentation easier to
understand for statisticians and other interested in this topic. The use of a
random seed makes the results replicable; in my opinion the research
was carried out in a rigorous way with very good technical standards. The
methods are described in detail, and the availability of the source code
makes it possible to evaluate all details of the methods and their
implementation.
Validity of the findings
Validations of the methods implemented in 'Power1D' are clearly and
extensively shown for 0D power, the 1D power with inflated variance and
non-central RFT. Comparisons with other software packages are
complete and fair; to my knowledge the author surveyed the main
alternative packages, stating their limitations, some of which are filled by
the methods implemented in 'Power1D'. In particular, the implementation
of arbitrary continuum-level signal and noise modeling, which allows the
power analysis though numerical simulation of random continua, is a
remarkable characteristic of the described package.
The methods presented by the author and implemented in the package
are statistically sound and well presented and exemplified by the use of
both, simulations as well as real data. Conclusions are well stated, linked
to the original research question and limited to supporting results.
" | Here is a paper. Please give your review comments after reading it. |
747 | "<ns0:abstract xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:p>Gathering up-to-date information on food prices is critical in developing regions, as it allows policymakers and development practitioners to rely on accurate data on food security. This study explores the feasibility of utilizing social media as a new data source for predicting the food security landscape in developing countries. Through a case study of Indonesia, we developed a nowcast model that monitors mentions of food prices on Twitter and forecasts daily price fluctuations of four major food commodities: beef, chicken, onion, and chilli. A longitudinal test over 15 months of data demonstrates that the proposed model not only predicts food prices accurately but is also resilient to data scarcity. The high accuracy of the nowcast model is attributed to the observed trend that the volume of tweets mentioning food prices tends to increase on days when food prices change sharply.</ns0:p><ns0:p>We discuss factors that affect the veracity of price quotations such as social network-wide sensitivity and user influence.</ns0:p></ns0:div>
</ns0:abstract>
<ns0:body xmlns:ns0='http://www.tei-c.org/ns/1.0'>
<ns0:div><ns0:head>INTRODUCTION</ns0:head><ns0:p>The ability to rapidly monitor food price fluctuations is critical to government institutions, production collection through mobile phones and non-professional price collectors <ns0:ref type='bibr' target='#b17'>(Hamadeh et al., 2013)</ns0:ref>. Price data was collected for thirty tightly specified food commodity items on a monthly basis for approximately six months in eight pilot countries.</ns0:p><ns0:p>Recently an alternative source of information has become widely available as a new economic signal <ns0:ref type='bibr' target='#b24'>(Pappalardo et al., 2016)</ns0:ref>. User-generated data from various online social network services <ns0:ref type='bibr'>(OSNs)</ns0:ref> have been a source of indicative signals for predicting various societal phenomena including human behavior in crisis situations <ns0:ref type='bibr' target='#b37'>(Vieweg et al., 2015)</ns0:ref>, economic market changes <ns0:ref type='bibr' target='#b5'>(Bollen et al., 2011;</ns0:ref><ns0:ref type='bibr' target='#b4'>Asur et al., 2010)</ns0:ref>, and flu trends <ns0:ref type='bibr' target='#b20'>(Lampos et al., 2015;</ns0:ref><ns0:ref type='bibr' target='#b14'>Ginsberg et al., 2009)</ns0:ref>. Utilizing large-scale OSN signals has several benefits. First, social network signals are less costly than crowdsourcing because there is no need to reward individuals who generate data <ns0:ref type='bibr' target='#b35'>(Simula, 2013)</ns0:ref>. Second, the continuous nature of OSN data allows for near real-time monitoring or what is called nowcasting <ns0:ref type='bibr' target='#b12'>(Giannone et al., 2008)</ns0:ref>.</ns0:p><ns0:p>Designing a nowcast model for commodity prices, however, is a complex problem. This is because the task needs to produce accurate estimates of the official commodity prices, provide early warning signals of unexpected spikes in the real world, and adapt to a variety of commodities for wider applicability <ns0:ref type='bibr' target='#b19'>(Lampos and Cristianini, 2012)</ns0:ref>. These goals are harder to achieve in developing countries, where economic status is volatile and social media is less widely used. Nonetheless, rapidly expanding Web infrastructure, supported by humanitarian projects that provide free Internet in rural areas such as Internet.org <ns0:ref type='bibr' target='#b10'>(Facebook, 2016)</ns0:ref>, is being observed in many developing countries <ns0:ref type='bibr' target='#b1'>(Ali, 2011)</ns0:ref> and social media data can hence serve as an additional, non-invasive measurement method for those regions.</ns0:p><ns0:p>This paper presents a case study of adopting micro-blogging platform signals on Twitter as an additional data source for building a food price nowcast model in Indonesia. This research was initiated by the government of Indonesia as part of its effort to combine and adopt different sources of information to produce highly credible market statistics. Four critical food commodities (beef, chicken, onion, and chilli) were chosen as the first set of items to be tracked based on national food security priorities and data availability. 
Twitter was chosen as a data source because of its popularity within the country; Indonesia has one of the highest adoption rates in the world for Twitter, both in terms of number of users and amount of generated content.</ns0:p><ns0:p>The main goal of this work is to create a nowcast model that reproduces time series of daily prices for the four chosen commodities during a 15-month investigation period between June 2012 and September 2013 based solely on price information from tweets. This main goal is achieved through three specific aims.</ns0:p><ns0:p>First, the model should be able to provide price time series that highly correlate with real-world price trends. We conduct an evaluation by using the Pearson correlation coefficient to determine the correlation between the official and predicted price time series. Second, the model should be able to estimate the absolute price value with minimal error at the daily scale. We conduct this evaluation by using the mean absolute percentage error (MAPE) to measure the magnitude of error between the official and predicted price time series. Third, the model should be capable of nowcasting food prices, which is defined as capturing information on a real-time basis within a short time gap, typically in the single-day range. To check the feasibility of using the model as a daily price predictor, we conduct an additional evaluation using the cross-correlation coefficient (CCF), which estimates how the official and predicted time series are related at different time lags. We show that the predicted time series have the highest correlation at a lag within the timeframe of a single day; therefore the price time series produced by the model can be used for nowcasting.</ns0:p><ns0:p>A two-step algorithm is proposed in this research. In the first step, a keyword filter is used to extract tweets mentioning price quotations of the four food commodities from the entire corpus of tweets that were generated from Indonesia between June 2012 and September 2013, a timeframe of 15 months. A numerical model parameter is also used to filter the tweets to ensure that the tweet price does not exceed a maximum allowable daily percentage price change (computed based on historical rates). The keyword and numerical filters extracted 41,761 relevant tweets from the data. In the second step, a statistical model, using OSN data, is built to accurately estimate food prices for each commodity in order to assist with the official statistics publicized by the Indonesian government. The nowcast model produces estimates of commodity prices that have a high correlation with official food price statistics over the timeframe covered and shows better prediction performance than existing algorithms. This paper also describes the effect of several important social network-wide variables, by testing the robustness of the model under data scarcity conditions and by modeling user-level credibility to suggest an enhanced sampling strategy. This research finds that Indonesians do tweet about food prices, and that those prices closely approx- </ns0:p></ns0:div>
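<ns0:div><ns0:p>The three evaluation measures described above (Pearson correlation, MAPE and time-lagged cross-correlation) can be computed directly from the two daily price series. The short sketch below uses hypothetical values and standard formulas rather than the study's own evaluation code.</ns0:p><ns0:formula>import numpy as np
from scipy.stats import pearsonr

# hypothetical official and predicted daily price series of equal length
official  = np.array( [ 90.0 , 91.0 , 92.5 , 92.0 , 93.0 , 94.5 , 95.0 ] )
predicted = np.array( [ 90.5 , 90.8 , 92.0 , 92.6 , 93.4 , 94.0 , 95.2 ] )

r , _ = pearsonr( official , predicted )                              # trend agreement
mape  = np.mean( np.abs( predicted - official ) / official ) * 100    # mean absolute percentage error

def ccf( x , y , lag ):
    # cross-correlation of x with y shifted by `lag` days
    if lag > 0:
        x , y = x[ lag: ] , y[ :-lag ]
    elif lag < 0:
        x , y = x[ :lag ] , y[ -lag: ]
    return pearsonr( x , y )[ 0 ]

best_lag = max( range( -3 , 4 ) , key=lambda k: ccf( official , predicted , k ) )</ns0:formula></ns0:div>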
<ns0:div><ns0:head>METHODS</ns0:head></ns0:div>
<ns0:div><ns0:head>Data collection</ns0:head><ns0:p>Indonesia is a good testbed for this study for two reasons. First, reliable ground-truth data is available on a daily basis. The Ministry of Trade in Indonesia collects and publishes daily price information, which is also published as monthly records by the Bureau of Statistics. Second, social media, like Twitter, are widely used in the country, so that there are enough online signals on commodity prices. In fact, Indonesia is one of the top-five tweeting countries <ns0:ref type='bibr' target='#b34'>(Siim, 2013)</ns0:ref>.</ns0:p><ns0:p>Four basic food commodities, beef, chicken, onion, and chilli, were chosen for monitoring based on the availability of data in terms of tweet mentions and the country-level priorities for food security monitoring, in consultation with the Ministry of National Development Planning (Bappenas) and the WFP in Indonesia. Beef and chicken are in fact the two most commonly consumed meats in Indonesia, as people rarely consume pork. Likewise onion and chilli are the most popular spices across the nation. As a result, prices of these four commodity items have been frequently utilized to monitor inflation, where chilli in particular has been considered sensitive to inflation <ns0:ref type='bibr' target='#b2'>(Amindoni, 2016;</ns0:ref><ns0:ref type='bibr' target='#b31'>Sawitri, 2017)</ns0:ref>. Daily food price data can be obtained for these four target commodities via the webpage of the Ministry of Trade of Indonesia 2 .</ns0:p><ns0:p>Tweets were collected through firehose access to Twitter, which returns a complete set of data. We screen for price mentions between June 2012 and September 2013, for 15 months. A taxonomy of keywords and phrases in Bahasa (i.e., the official language in Indonesia) is developed and used. The full taxonomy is mostly composed of commodity names, prices, and units (Table <ns0:ref type='table' target='#tab_0'>1</ns0:ref>). Price information can be expressed in different ways, containing variations in how the commodity name and price are mentioned. Price quotations are often mentioned in tweets with the prefix Rp or the suffix rupiah, where the price value may be written either as a number or as text. The commodity unit is also important; for instance, expressions such as per kilogram or per liter are commonly used to define a food price. Instead of using hundreds of regular expressions for normalizing various types of units into an identical unit, we suggest a nowcast model which can handle commodity-unit differences via a numerical approach. For the target commodities in this study, most price information from Twitter contains standardized units that are identical to the units of the government's official data; therefore it is possible to handle unit differences via the numerical approach alone. Our model decides whether a commodity unit referenced in a tweet is appropriate or not by comparing its price value against a credible price range.</ns0:p></ns0:div>
<ns0:div><ns0:head>Table 1. Taxonomy of keywords and phrases used to identify price-quoting tweets</ns0:head><ns0:p>Commodity Names:
  Beef: ( 'sapi' )
  Chicken: ( 'daging' ) AND ( 'ayam' )
  Onion: ( 'bawang' )
  Chilli: ( 'cabe' | 'cabai' )
Prices:
  Values: ( Digits ) AND ( 'rb' | 'ribu' | 'ratus' | ',-' | ',00' | None )
  Units: ( 'rp' | 'rupiah' )
Commodity Units: ( 'per' | 'se' ) AND ( Letters )</ns0:p></ns0:div>
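<ns0:div><ns0:p>A rough illustration of how such a keyword taxonomy could be turned into a filter is sketched below; the regular expressions and the example tweet are assumptions for illustration and do not reproduce the study's actual extraction rules.</ns0:p><ns0:formula>import re

COMMODITY_PATTERNS = {
    'beef'    : re.compile( r'\bsapi\b' , re.IGNORECASE ),
    'chicken' : re.compile( r'\bdaging\b.*\bayam\b|\bayam\b.*\bdaging\b' , re.IGNORECASE ),
    'onion'   : re.compile( r'\bbawang\b' , re.IGNORECASE ),
    'chilli'  : re.compile( r'\bcab(e|ai)\b' , re.IGNORECASE ),
}
PRICE_PATTERN = re.compile( r'(?:rp\s*)?(\d[\d.,]*)\s*(rb|ribu|ratus|rupiah)?' , re.IGNORECASE )

def match_tweet( text ):
    # return (commodity, raw price string) if the tweet quotes a price, else None
    for commodity , pattern in COMMODITY_PATTERNS.items():
        if pattern.search( text ):
            m = PRICE_PATTERN.search( text )
            if m and m.group( 1 ):
                return commodity , m.group( 0 ).strip()
    return None

print( match_tweet( 'harga daging sapi Rp 90 ribu per kg' ) )   # hypothetical tweet</ns0:formula></ns0:div>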
<ns0:div><ns0:p>As a result, a total of 78,518 tweets from 28,800 accounts are collected over the 15-month period.</ns0:p><ns0:p>Below is an example tweet mentioning a beef price, with its English translation:</ns0:p><ns0:p>Harga Daging Masih Rp 95 Ribu/Kg, Ini Cara Pemerintah Menekannya ('Meat price still Rp 95 thousand/kg; this is how the government is pushing it down')</ns0:p></ns0:div>
<ns0:div><ns0:head>Data cleaning</ns0:head><ns0:p>Tweet data contain noisy information and need to be cleaned prior to analysis. We employed the following measures in data cleaning. The first involves removing ambiguity in meaning. An obvious case of ambiguity arises when a single tweet quotes the price of two or more commodity items. Such a case occurs 2,607 times, or in 5% of the price quotation data. Another case of ambiguity arises when the mentioned price is in relative terms, not in absolute terms (e.g., 'price increased by X amount'). For instance, the word 'naik' in Indonesian means 'increase (up to)' or 'by'. Our data show that price quotations containing the word 'naik' resulted in extremely small price ranges compared to the rest of the data. Hence, we removed tweet data containing this word, which accounted for 8% of the data.</ns0:p><ns0:p>Another important data cleaning task focuses on removing redundant messages or spam bots. Certain bot accounts can be identified based on their large quantity of duplicated tweets. We assume that accounts that posted more than 100 tweets with over 80% duplicated messages are bots. Table <ns0:ref type='table'>2</ns0:ref> shows the list of the top-ten bot accounts that mention prices most frequently. Most accounts with large tweet volumes posted the price information of their products for advertising purposes. This finding indicates that the majority of accounts with a large volume of food price-related tweets are sellers. Note that the most prominent single account accounts for 18,018 tweets (23% of all price quote tweets and 87% of all milk-related tweets). We can judge this account to be a bot that promotes goat milk products, since its tweets are nearly identical to the following:</ns0:p><ns0:p>'sedia susu kambing etawa brand_name_hidden harga Rp 22 rb hub' (roughly: 'etawa goat milk available, brand_name_hidden, price Rp 22 thousand, contact us') Table <ns0:ref type='table'>2</ns0:ref>. Top-ten accounts with the largest tweet volume are all involved in advertising via bots</ns0:p><ns0:p>We eliminate bot accounts belonging to sellers that simply keep echoing redundant content in vast volumes. In the following section, we suggest a model that utilizes the volume of a tweeted price to determine its credibility, and it is not reasonable to assign bot-tweeted information more credibility than human-tweeted information simply because of its share of the volume. Previous studies have defined spam bots as accounts designed to exert unfair influence on opinion by echoing earlier information <ns0:ref type='bibr' target='#b8'>(Chu et al., 2012;</ns0:ref><ns0:ref type='bibr' target='#b22'>Lim et al., 2010)</ns0:ref>. The bots we identify in this study act as spam rather than playing a valuable social role, because they introduce a significant statistical bias into the information distribution; therefore we employ a basic bot detection method to eliminate a high volume of redundant tweets.</ns0:p><ns0:p>As a result, we remove a total of 36,757 (46.8%) tweets from the data if (1) a tweet is an exact duplication of another (22.9%), (2) a tweet contains the specific word 'naik', describing the difference between two price values, like 'increased by' in English (6.5%), or (3) a tweet mentions more than one price (17.4%).</ns0:p></ns0:div>
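<ns0:div><ns0:p>The duplicate-based bot rule described above (more than 100 tweets, over 80% duplicates) can be expressed compactly; the sketch below uses a hypothetical pandas table with assumed column names rather than the study's actual data format.</ns0:p><ns0:formula>import pandas as pd

# hypothetical input: one row per tweet, with 'account' and 'text' columns
tweets = pd.DataFrame( {
    'account' : [ 'seller_a' ] * 120 + [ 'user_b' , 'user_c' ],
    'text'    : [ 'sedia susu kambing etawa harga Rp 22 rb hub' ] * 120
              + [ 'harga cabai naik lagi' , 'bawang Rp 30 ribu per kg' ],
} )

def is_bot( group , min_tweets=100 , dup_ratio=0.8 ):
    # flag accounts with more than min_tweets posts of which more than dup_ratio are duplicates
    if len( group ) <= min_tweets:
        return False
    return group[ 'text' ].duplicated( keep=False ).mean() > dup_ratio

bots  = [ acc for acc , grp in tweets.groupby( 'account' ) if is_bot( grp ) ]
clean = tweets[ ~tweets[ 'account' ].isin( bots ) ]</ns0:formula></ns0:div>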
<ns0:div><ns0:p>For the investigation period, the average number of tweets per account is 2.73. The contribution of tweets is heavily skewed among users, so that the top-ten most prolific accounts posted 19,470 (24.8% of all) tweets. These top-ten accounts are all food vendors, e.g., local grocery shops advertising daily items (Table <ns0:ref type='table'>2</ns0:ref>). In fact, people's motivations and willingness to post information on OSNs are influenced by external factors like news <ns0:ref type='bibr' target='#b13'>(Gil de Zúñiga et al., 2012)</ns0:ref> or the interdependence of other industries (e.g., agriculture depends on machinery and transportation <ns0:ref type='bibr' target='#b28'>(Richard, 2011)</ns0:ref>). We find that people post more tweets during price-rising periods than during price-decreasing periods. This tendency is more apparent with food commodities that have volatile price fluctuations and a smaller total volume of tweets; onion receives on average 2.8 times more tweets when prices are rising than when prices are decreasing.</ns0:p></ns0:div>
<ns0:div><ns0:head>Price distribution</ns0:head><ns0:p>Once tweets mentioning prices are identified (N=41,761), we may look into the price distributions. Figure <ns0:ref type='figure' target='#fig_2'>1A</ns0:ref> depicts example price quotations for onion on social media from a given day (translated into English) and the official price release of onion from the same date. These price quotations varied from one tweet to another and required data sanitization before they could be used for price prediction. Noise arises when commodity units are different (e.g., grams vs kilograms), when mentions are of second-hand or related products (e.g., the price of beef dishes instead of beef itself), or due to fake information, etc. Figure <ns0:ref type='figure' target='#fig_2'>1B</ns0:ref> shows the wide ranges of price quotations seen in raw social signals and official prices for onion over a 15-month period. The wide price difference is due to a combination of the aforementioned noise and economic volatility. The multi-modal shape of the distribution is also noteworthy, where multiple different prices were frequently quoted for a single food commodity such as onion.</ns0:p></ns0:div>
<ns0:div><ns0:head>RESULTS</ns0:head></ns0:div>
<ns0:div><ns0:head>The nowcast model</ns0:head><ns0:p>The challenge in determining a representative daily price trajectory from thousands to millions of price quotations on social streams is handling noise. This is because the raw price quotations span a wide price range and show a multi-modal distribution, as shown in the example case of onion in Fig. <ns0:ref type='figure' target='#fig_2'>1B</ns0:ref>. Utilizing the raw tweet data without any screening of extremely high or low price values results in poor price prediction for two primary reasons. First, the predicted price from raw tweets could have disproportionately large spikes. For example, the beef price surged 17.5 times compared to the official price for certain days</ns0:p></ns0:div>
<ns0:div><ns0:p>in July 2012 based on our tweet data, which should be considered as outliers. Second, such outliers lead to an overall poor quality of price prediction measured by the mean absolute percentage of error.</ns0:p><ns0:p>Simply eliminating outliers would yield a large reduction in prediction error. Therefore, devising a filter to eliminate unnecessary noise and find meaningful signals from the dataset is critical for price prediction. We propose a new nowcast model that is suitable for accommodating food price dynamics. The proposed nowcast model is depicted in Fig. <ns0:ref type='figure' target='#fig_3'>2</ns0:ref>, which takes in raw price quotations from social media streams as input and outputs a single price value per day for each commodity. Noise in the dataset is determined by examining the discrepancy between today's price quotations and yesterday's official price. In the model we assume market prices are non-stationary time series; this is consistent with the assumption made in relevant studies <ns0:ref type='bibr' target='#b21'>(Leuthold, 1972;</ns0:ref><ns0:ref type='bibr' target='#b38'>Working, 1934)</ns0:ref>. We further consider the Markov process for price dynamics as assumed in <ns0:ref type='bibr' target='#b39'>(Zhang, 2004;</ns0:ref><ns0:ref type='bibr' target='#b11'>Ghasemi et al., 2007)</ns0:ref>. Hence, let today's price $P_t$ be determined both by yesterday's price $P_{t-1}$ and by today's price quotations from Twitter, $P_t^{tweet}$. The weighting factors in Eq. 1, α and β, represent the relative importance of these two quantities for today's price. The model responds to the current market quotes faster when β is larger than α, in which case a larger degree of price fluctuation is expected.</ns0:p><ns0:formula xml:id='formula_0'>$P_t = \dfrac{\alpha P_{t-1} + \beta P_t^{tweet}}{\alpha + \beta}$<ns0:label>(1)</ns0:label></ns0:formula><ns0:p>Furthermore, we assume that daily food prices do not change radically. The maximum change in commodity price that we observe from historical data is marginal for most days. For instance, the largest deviation seen for the beef price was a change of 2.5% from one day to the next, on Aug 16th 2012. This observation leads us to assume that prices of a commodity on a given day and the consecutive day would be within certain bounds. This is modeled as a variable δ defining the maximum allowable price change rate. Any social signals that exceed this change limit from one day to another will be eliminated from analysis at the outset. Hence, if a quoted tweet exceeds this threshold compared to the previous day, the model rejects it as valid input. Eq. 2 describes this constraint, where $T_t^i$ is the $i$-th individual tweet price taken from day $t$.</ns0:p><ns0:formula xml:id='formula_1'>$\text{if } \dfrac{|T_t^i - P_t|}{P_t} > \delta \text{ then eliminate } T_t^i$<ns0:label>(2)</ns0:label></ns0:formula><ns0:p>Another assumption is made for calibrating the effect of tweet volume. The logarithmic value of tweet volume was used as the weighting parameter β in order to give disproportionately higher impact to days with a large social signal. In case there was no social signal (i.e., zero tweets), the nowcast model assumes there is no change in price. On the other hand, in cases when food commodity prices decrease, people may tweet the price less frequently. 
To accommodate such a data scarcity problem, the proposed nowcast model refreshes when there is no tweet for n consecutive days. The model takes the average price estimates from the recent k (k ≫ n) days. We demonstrate this example in the Supplemental Information (Article S1). The main idea is to restart the model with a starting price equal to the recent average price (from k days before today), since the model price cannot be guaranteed after any zero-tweet period.</ns0:p><ns0:p>Eq. 3 shows the final model with four parameters: α (the ratio between the weights of yesterday's price and today's tweet price), δ (the allowed maximum daily price change rate), n (the number of zero-tweet dates for restarting computation), and k (the period over which the average commodity price is calculated). $Q_t^j$ refers to an individual price quotation from tweets, while $[Q_t]$ is the number of daily tweets. We set the starting price $P_0$ as the commodity price on the first observation date.</ns0:p><ns0:formula xml:id='formula_3'>$P_t = \dfrac{\alpha P_{t-1} + \log([Q_t]+1)\,P_t^{tweet}}{\alpha + \log([Q_t]+1)}$<ns0:label>(3)</ns0:label></ns0:formula><ns0:formula xml:id='formula_4'>$P_t^{tweet} = \dfrac{\sum_{j=1}^{[Q_t]} w_t^j\,Q_t^j}{\sum_j w_t^j}, \qquad w_t^j = \begin{cases}1 - \dfrac{|Q_t^j - P_{t-1}|}{P_{t-1}\,\delta}, & \text{if } \dfrac{|Q_t^j - P_{t-1}|}{P_{t-1}} \le \delta\\ 0, & \text{otherwise}\end{cases}, \qquad P_{t-1} = \dfrac{1}{k}\sum_{i=t-k}^{t-1} P_i \ \text{(restart value after a zero-tweet period)}$</ns0:formula>
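<ns0:div><ns0:p>A compact sketch of the daily update in Eq. 3 is given below. The parameter values α = log(21), n = 7 and k = 60 are those reported later in the paper, while δ and the toy quotations are illustrative assumptions; the restart behaviour after a zero-tweet period is a simplified reading of the description above.</ns0:p><ns0:formula>import numpy as np

def nowcast( daily_quotes , P0 , alpha=np.log( 21 ) , delta=0.05 , n=7 , k=60 ):
    # daily_quotes: list of lists, the tweet-quoted prices for each day
    prices , zero_days = [ P0 ] , 0
    for quotes in daily_quotes:
        P_prev = prices[ -1 ]
        w = np.array( [ 1.0 - abs( q - P_prev ) / ( P_prev * delta )
                        if abs( q - P_prev ) / P_prev <= delta else 0.0
                        for q in quotes ] )
        if len( quotes ) == 0 or w.sum() == 0:
            zero_days += 1
            if zero_days >= n:                                   # restart from the recent k-day average
                prices.append( float( np.mean( prices[ -k: ] ) ) )
                zero_days = 0
            else:
                prices.append( P_prev )                          # no signal: assume no price change
            continue
        zero_days = 0
        P_tweet = float( ( w * np.array( quotes ) ).sum() / w.sum() )   # weighted tweet price
        beta    = np.log( len( quotes ) + 1 )                           # volume-based weight
        prices.append( ( alpha * P_prev + beta * P_tweet ) / ( alpha + beta ) )
    return prices[ 1: ]

print( nowcast( [ [ 90.0 , 91.0 ] , [] , [ 93.0 ] ] , P0=90.0 ) )       # toy example</ns0:formula></ns0:div>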
<ns0:div><ns0:head>Existing price prediction models</ns0:head><ns0:p>Previous studies have proposed several different models of price prediction that can be used in the context of social media price quotations. The first model we review is the inter-quartile range (IQR) filter model, which eliminates any extremely low or high price quotations and accepts prices between the upper and lower quartiles on a given day. The IQR filter is useful when a distribution has a central tendency and when the majority of data lies nearby to form a trustworthy range. While this is a simple model, the IQR is known to perform poorly when the data have a distribution with multiple peaks, as in the case of the price quotations we observe on Twitter.</ns0:p><ns0:p>Second, density estimation models such as kernel density estimation (KDE) are effective for single-dimensional multi-modal data, which are typical in price data as seen in Figure <ns0:ref type='figure' target='#fig_2'>1B</ns0:ref>. The KDE algorithm is a non-parametric method that estimates the probability density function of a random variable. Local minima in the density function from KDE can be used as split points to divide the data into clusters, thereby allowing one to identify the largest cluster of daily price quotes. The largest cluster on any given day indicates price values that are the most commonly quoted and hence can be considered the most credible prices. We set the bandwidth of the kernel function by minimizing the mean absolute percentage of error (MAPE) with 80% of the randomly-chosen tweets over the first three months.</ns0:p><ns0:p>A third model considered is the auto-regressive integrated moving average (ARIMA), which is a widely used approach for forecasting trends in time series data. The ARIMA model is a generalization of the Auto-Regressive (AR) model, which predicts output values from its own previous values. The parameters of the ARIMA model were determined by the corrected Akaike information criterion (AICc) values <ns0:ref type='bibr' target='#b18'>(Hyndman and Khandakar, 2007)</ns0:ref> based on the first three months' worth of the official price data.</ns0:p><ns0:p>A fourth model is the linear model proposed for the Google flu trend, which adopts a linear regression function in logit space, where I(t) is the predicted influenza rate at time t, Q(t) is the influenza-related query fraction at time t, α is the multiplicative coefficient, ε is the zero-centered noise, and β is the intercept term:</ns0:p><ns0:formula xml:id='formula_5'>$\mathrm{logit}(I_t) = \beta + \alpha \cdot \mathrm{logit}(Q_t) + \varepsilon$</ns0:formula><ns0:p>However, this model cannot be directly applied to Twitter data for several reasons. One reason is that the linear correlation between tweet frequency and price change is not strong (Pearson correlation r = 0.17, p < 0.01) and in fact we find support for non-linearity. Another reason is that commodity price quotations on Twitter are sparsely distributed in time (e.g., zero-tweet days) compared to a rich data source such as Google search queries. For these reasons, we do not directly compare our results with the Google flu trend-like model.</ns0:p></ns0:div>
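<ns0:div><ns0:p>The KDE baseline described above can be sketched as follows; the bandwidth and the example quotations are illustrative assumptions (the paper tunes the bandwidth by minimizing MAPE on training data), and scipy's gaussian_kde is used here purely for illustration.</ns0:p><ns0:formula>import numpy as np
from scipy.stats import gaussian_kde

def kde_daily_price( quotes , bandwidth=0.1 ):
    # split a day's quotations at local minima of the estimated density
    # and return the mean of the largest (most commonly quoted) cluster
    quotes = np.asarray( quotes , dtype=float )
    if quotes.size < 3 or np.ptp( quotes ) == 0:
        return float( quotes.mean() )
    grid = np.linspace( quotes.min() , quotes.max() , 512 )
    dens = gaussian_kde( quotes , bw_method=bandwidth )( grid )
    minima = grid[ 1:-1 ][ ( dens[ 1:-1 ] < dens[ :-2 ] ) & ( dens[ 1:-1 ] < dens[ 2: ] ) ]
    edges  = np.concatenate( ( [ -np.inf ] , minima , [ np.inf ] ) )
    clusters = [ quotes[ ( quotes > lo ) & ( quotes <= hi ) ]
                 for lo , hi in zip( edges[ :-1 ] , edges[ 1: ] ) ]
    return float( max( clusters , key=len ).mean() )

print( kde_daily_price( [ 22.0 , 23.0 , 22.5 , 40.0 , 41.0 , 22.8 ] ) )   # hypothetical onion quotes</ns0:formula></ns0:div>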
<ns0:div><ns0:head>Prediction performance</ns0:head><ns0:p>Prediction performance of the nowcast model is measured and compared to existing models in two ways:</ns0:p><ns0:p>(1) trend forecasting via the Pearson correlation coefficient r and (2) error rates via the mean absolute percentage of error (MAPE) between the official and estimated prices. Some parameters in the model are independent of the intrinsic properties of food commodities. For instance, the relative responsiveness of the model to yesterday's price (α) and the thresholds to restart the model after a period of infrequent tweets (n and k) are assumed in the model and hence are set as follows: α = log(21), n = 7 days, and k = 60 days. Other parameters, in contrast, were tuned to best describe the data. For instance, the maximum daily price change rate (δ ) is trained separately for each food commodity and the starting price at day 0 of prediction (P 0 ) is set separately for each commodity as the commodity price on the first observation date (June 1st, 2012).</ns0:p><ns0:p>In determining δ , a parameter that determines which tweets are accepted or ignored in the model, we examine the price change dynamics from historical records. Beef price changed gradually with a maximum price change of no more than 2.5% from one day to the next, whereas onion showed a rapid change in price with a maximum change rate of 15.1% from one day to another. This means that the daily allowable change rate should be set higher for onion compared to beef. We set δ by training with a randomly-chosen 80% of the first three-months of tweets, which are identical to the training set for other comparison models, so that the nowcast model correlation r exceeds 0.80 and RMSE is within 10% of each commodity price. The allowable range of δ are shown in Fig. <ns0:ref type='figure' target='#fig_5'>4</ns0:ref>. Performance variation in terms of r according to change of δ across all target commodities is shown in the Supplemental Information (Fig. <ns0:ref type='figure' target='#fig_2'>S1</ns0:ref>).</ns0:p><ns0:p>Next we examine the prediction performance via the percentage of error of the daily prediction, measured by taking the difference between the official and estimated price divided by the official price.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_6'>5</ns0:ref> shows the distribution of the percentage error for all four commodities over 15 months; the Manuscript to be reviewed</ns0:p><ns0:p>Computer Science Manuscript to be reviewed</ns0:p></ns0:div>
<ns0:div><ns0:head>Time-lagged correlation</ns0:head><ns0:p>Beyond investigating the raw correlation in data, we test whether adding any time lag would better explain the relationship between the official and predicted prices. We utilize the cross-correlation coefficient (CCF) to estimate over what time lag the two price time series data are related. The CCF value at lag τ between two time series data measures the correlation of the first series with respect to the second series shifted by τ days <ns0:ref type='bibr' target='#b29'>(Ruiz et al., 2012)</ns0:ref>. For each target commodity, Table <ns0:ref type='table' target='#tab_8'>4</ns0:ref> displays that there are maximum positive correlations at lag of 0 or +1 day, meaning that the model has the highest accuracy within a single day lag. According to the literature, nowcasting is defined as the capability to capture information on a real-time basis within a short time gap typically in the single day range <ns0:ref type='bibr' target='#b12'>(Giannone et al., 2008)</ns0:ref>. We hence can conclude that the suggested model is capable of nowcasting daily food prices in Indonesia. </ns0:p></ns0:div>
<ns0:div><ns0:head>DISCUSSION</ns0:head><ns0:p>This study shares insights into building an affordable and efficient platform to complement offline surveys on food price monitoring. The market data gathered through social media help to predict economic signals and assist food security decisions. Price quotations in social media are a new type of information that needs extensive cleaning before usage. A naive statistical filtering method is not effective, because the price distribution is not normal and contains various noise elements, as shown in Figure <ns0:ref type='figure' target='#fig_2'>1B</ns0:ref>.</ns0:p><ns0:p>The proposed nowcast model attains acceptable performance with a simple filtering method that does not rely on sophisticated natural language processing techniques. In applying the suggested model to other languages, a taxonomy of keywords related to commodity names and prices would need to be identified. Our model has minimal language dependency, and no grammatical considerations are required. Its filter operates via keyword extraction and numerical analysis based on the characteristics of the Twitter data. The model can also handle data sparsity; this quality is important given that people do not always mention prices on social media.</ns0:p><ns0:p>The nowcast model, which is tested successfully on four main food commodities in Indonesia, can be adapted to predict trends in other essential commodities and across countries. Our evaluation proves the accuracy of the nowcast model by comparing prices extracted from public tweets with official market prices. The tool, hence, could operate as an early warning system for monitoring unexpected price spikes at low cost, complementing traditional methods. Therefore, this work has implications in terms of demonstrating a simple and replicable technical methodology (a keyword taxonomy refined by numerical filters) that allows for straightforward operational implementation and scaling.</ns0:p></ns0:div>
<ns0:div><ns0:head>Social network-wide sensitivity to price fluctuations</ns0:head><ns0:p>The premise of this paper lies in the assumption that social network users such as those on Twitter not only voluntarily share information about food prices but also that these signals are sensitive enough to capture day-to-day price fluctuations. If there are not enough tweets mentioning food prices, algorithms like nowcast will face a data scarcity problem. In fact, such a data shortage can be seen in the historical data. Tweets that mention food prices occupy no more than 0.07% of the entire tweet dataset in Indonesia, and users on average post no more than a few tweets a year on this topic (2.7 tweets over 15 months).</ns0:p></ns0:div>
<ns0:div><ns0:p>Here we check the robustness of the algorithm under extreme challenges involving noise and lack of data, using the least mentioned commodity, chilli. Out of the entire 484-day observation period, chilli was not mentioned even once on 312 days and fewer than three times on 87 days. To test the robustness of the nowcast algorithm under data scarcity, a random set of chilli-related tweets accounting for 10% to 80% of the total is removed and the price is predicted with only the remaining data. For each simulation, data elimination is repeated 50 times and the averaged performance results are reported for comparison.</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_7'>6</ns0:ref> shows the prediction quality r (Pearson's correlation) as a function of the data deletion ratio. We find the trend forecasting to remain relatively stable up to a moderate level of data deletion; the r value is degraded by no more than 20% until 40% of the data is eliminated. The r value starts to decrease more rapidly after this point, although it still reaches a correlation above 50% until 65% of the data is eliminated. This high resilience to noise for the case of chilli demonstrates that the nowcast model can handle well the level of data scarcity seen in real data. Other food commodities, which are mentioned more frequently, show an even higher level of resilience to noise.</ns0:p><ns0:p>Another issue to be considered is the nowcast model's sensitivity to price fluctuations. We find that the model achieves better predictive power under large price variations; there exists a negative correlation between the daily price increase rate and the model error (r=-0.52). This might be explained by several causal factors. For instance, the volume of price quotations is affected by how the actual food price changes; people tend to post more tweets during periods of price inflation than price deflation (Fig. <ns0:ref type='figure' target='#fig_4'>3</ns0:ref>). This tendency is more apparent for food commodities that often experience volatile price fluctuations. For instance, onion receives on average 11.3 times more tweets during price inflation than during price deflation. Tweet volume is directly related to the richness of the data source for the nowcast model, and hence its performance depends on price trends. The partial correlation between the price change rate and the model error after controlling for tweet volume is considerably lower (r=-0.27).</ns0:p></ns0:div>
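<ns0:div><ns0:p>The data-scarcity stress test can be reproduced along the following lines: for each deletion ratio, randomly drop that fraction of the commodity's price quotations, rerun the nowcast, and average the resulting correlation over 50 trials. This is a sketch of the procedure described above; the run_nowcast function stands in for the full model and is an assumption, not part of the authors' released code.</ns0:p>
```python
import random
import numpy as np
from scipy.stats import pearsonr

def deletion_robustness(tweets, official_prices, run_nowcast,
                        ratios=(0.1, 0.2, 0.4, 0.65, 0.8), trials=50, seed=0):
    """Average Pearson r after randomly removing a fraction of price quotations.

    `run_nowcast(tweets)` is assumed to return a daily predicted price series
    aligned with `official_prices`.
    """
    rng = random.Random(seed)
    results = {}
    for ratio in ratios:
        keep = int(round(len(tweets) * (1 - ratio)))
        rs = []
        for _ in range(trials):
            sample = rng.sample(tweets, keep)   # random data elimination
            predicted = run_nowcast(sample)
            rs.append(pearsonr(official_prices, predicted)[0])
        results[ratio] = float(np.mean(rs))     # averaged over 50 trials
    return results
```
</ns0:div>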
<ns0:div><ns0:head>Credible users</ns0:head><ns0:p>While the nowcast model treats individuals on Twitter equally and utilizes all tweets that are within the allowed price ranges, one may look further into whether a smaller set of highly credible users exists and, if so, what their common traits might be. <ns0:ref type='bibr' target='#b3'>An and Weber (2015)</ns0:ref> have shown that different user-level sampling strategies can affect the performance of nowcasting on common offline indexes. Based on their work, we test whether accounts that quote prices more frequently in fact mention more accurate prices. We define the credibility of an account and examine its relationship with tweet volume.</ns0:p></ns0:div>
<ns0:div><ns0:p>The credibility of an account is measured as the fraction of its price-quoting tweets that are picked by the model as falling within the allowable price range (i.e., the mentioned price is within the δ range from the predicted price of the previous day).</ns0:p><ns0:p>Figure <ns0:ref type='figure' target='#fig_8'>7A</ns0:ref> displays user credibility, grouped by the number of price quotations during the observation period. Overall, Twitter users in Indonesia had an average credibility of 0.252, indicating that one out of four tweets could be used for price prediction in the nowcast model. Those who quote food prices more than once have 1.2-1.5 times higher credibility scores than the average. Nonetheless, there is no significant correlation between tweet volume and credibility at the user level (Spearman correlation coefficient of 0.048), indicating that accounts that mention prices frequently are not necessarily credible. In particular, the top ten most prolific accounts are food vendors, who send out provocative advertisements that may not represent the real commodity price.</ns0:p><ns0:p>Another measure we consider is the user degree, or follower count. Social media comprise users of various influence levels, which can be measured by metrics like the user degree. Would influential users generate more credible tweets when it comes to food prices? Users who mention food prices have far more followers than the average. The mean degree in the studied Twitter network is 1422 with a median of 220, which indicates a one-fold difference compared with what has been reported in other Twitter studies <ns0:ref type='bibr' target='#b7'>(Cha et al., 2010)</ns0:ref>. The correlation between user degree and credibility is also significant (Spearman r=0.320), indicating that accounts with more followers mentioned more accurate food prices (Fig. <ns0:ref type='figure' target='#fig_8'>7B</ns0:ref>). Furthermore, those who tweeted food prices more frequently tend to have more followers (Spearman r=0.183), as shown in Fig. <ns0:ref type='figure' target='#fig_8'>7C</ns0:ref>. These observations lead us to conclude that while there is no direct correlation between the level of credibility and tweet volume, having more followers has a positive association with quoting credible food prices. While the current nowcast model does not consider any user traits, it may be interesting to explore the idea of finding more informative and influential user groups for economic indicators.</ns0:p></ns0:div>
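<ns0:div><ns0:p>A possible way to operationalize the credibility score and its relation to follower count is sketched below: credibility is taken as the fraction of a user's price-quoting tweets that the model accepts (i.e., within the δ range of the previous day's prediction), and Spearman's rank correlation relates it to the follower count. The exact bookkeeping of accepted tweets is an assumption of this sketch.</ns0:p>
```python
from collections import defaultdict
from scipy.stats import spearmanr

def user_credibility(price_tweets, accepted_ids):
    """credibility(user) = accepted price tweets / all price tweets of that user.

    `price_tweets` is a list of (user_id, tweet_id) pairs for all price quotations;
    `accepted_ids` is the set of tweet ids the nowcast model actually used.
    """
    total, accepted = defaultdict(int), defaultdict(int)
    for user_id, tweet_id in price_tweets:
        total[user_id] += 1
        if tweet_id in accepted_ids:
            accepted[user_id] += 1
    return {u: accepted[u] / total[u] for u in total}

def credibility_vs_followers(credibility, followers):
    """Spearman rank correlation between credibility and follower count."""
    users = [u for u in credibility if u in followers]
    rho, _ = spearmanr([credibility[u] for u in users],
                       [followers[u] for u in users])
    return rho
```
</ns0:div>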
<ns0:div><ns0:head>Summary</ns0:head><ns0:p>The proposed nowcast model shows remarkable potential in tracking daily food commodity prices with high accuracy in the case of Indonesia, where official statistics on food are, at times, available only with a delay of several days. Given the volatile nature of the economy in developing countries and their resource-hungry monitoring systems, online big data help address the limitations of traditional official statistics by allowing fine-grained prediction of economic trends <ns0:ref type='bibr' target='#b29'>(Ruiz et al., 2012)</ns0:ref>. Government actions that lead to temporal fluctuations of food prices are common in developing countries. For instance, the Indonesian government occasionally imports meats and other farm products to stabilize food prices. Governments sometimes also donate seeds to farmers or sell them at lower prices in the hope of increasing supply in the next harvesting season <ns0:ref type='bibr' target='#b30'>(Sambijantoro, 2015;</ns0:ref><ns0:ref type='bibr' target='#b9'>CustomsToday, 2016)</ns0:ref>. With faster monitoring of financial fluctuations, governments in developing economies can make better policy decisions to protect vulnerable populations. The nowcast model can predict daily food prices over a longitudinal period of 15 months, as demonstrated in Fig. <ns0:ref type='figure' target='#fig_9'>8</ns0:ref>.</ns0:p></ns0:div>
<ns0:div><ns0:p>Traditional statistics and surveys nonetheless remain a practical and accurate source of information for establishing the ground truth. The presence of online big data complements the official data by providing transient views. From this perspective, the nowcast model acts as a supporting tool for official statistics rather than as a stand-alone system. In particular, nowcasting will be more valuable for short-term forecasts. A near real-time food price index that is nowcasted using social media signals may be an efficient tool with immediate utility for policy makers and economic risk managers. The results of this study are being used as a basis for the development of OSN-assisted nowcast systems in several other developing countries under the United Nations World Food Programme (WFP). Details of this research, including the online demo, are available at http://www.unglobalpulse.org/nowcasting-food-prices.</ns0:p></ns0:div>
<ns0:figure xml:id='fig_2'><ns0:head>Figure 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Figure 1. (A) The official price of onion on a given day and example price quotations from Twitter on the same day. Official price statistics are calculated from various vendor prices obtained from an off-line survey. Twitter signals have variations due to the geographic diversity of information sources, varying units, etc. (B) Official and tweet price distribution for onion over the monitored 15 months, which shows a multi-modal distribution. Distribution of raw price quotations from Twitter is denoted by the dashed line, while the solid red line is the official price published by the government for the same period. Vertical lines denote the mean price values.</ns0:figDesc><ns0:graphic coords='6,143.80,193.12,409.41,167.66' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_3'><ns0:head>Figure 2 .</ns0:head><ns0:label>2</ns0:label><ns0:figDesc>Figure 2. Framework of the nowcast model. The model takes in price quotations from social media streams and predicts today's commodity price via jointly considering yesterday's price with today's price quotations.</ns0:figDesc><ns0:graphic coords='7,152.07,132.46,392.88,152.51' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_4'><ns0:head>Figure 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Figure 3. Daily tweet volume according to the price change rate of the day for each commodity. (A) Beef, (B) Chicken, (C) Onion, and (D) Chilli. The panels show a tendency for more people to talk about food prices when prices go up or down; however, the number of daily tweets by itself cannot serve as a predictor.</ns0:figDesc><ns0:graphic coords='8,172.75,63.78,351.53,256.73' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_5'><ns0:head>Figure 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Figure 4. The allowable model parameter δ ranges for the four target food commodities based on training data. All allowable δ ranges include four times the historical maximum daily price change rate, which is displayed as a vertical line for each commodity.</ns0:figDesc><ns0:graphic coords='10,183.09,63.78,330.83,109.87' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_6'><ns0:head>Figure 5 .</ns0:head><ns0:label>5</ns0:label><ns0:figDesc>Figure 5. Daily prediction error comparison between the models. (A) Beef, (B) Chicken, (C) Onion, and (D) Chilli. Time series based prediction models (ARIMA and Nowcast) show better performance in terms of error range.</ns0:figDesc><ns0:graphic coords='10,141.73,234.17,413.57,138.21' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_7'><ns0:head>Figure 6 .</ns0:head><ns0:label>6</ns0:label><ns0:figDesc>Figure 6. Decay of the relative correlation performance (r) as a function of the scale of data removal for the most scarcely mentioned commodity, chilli. Each data point indicates the average performance of 50 runs after randomly removing a given fraction of price quotations. The blue line indicates the relative increment/decrement (%) of the averaged correlation against the no-removal case, and the shaded area represents the range of outcomes across all 50 trials.</ns0:figDesc><ns0:graphic coords='12,224.45,230.97,248.14,190.92' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_8'><ns0:head>Figure 7 .</ns0:head><ns0:label>7</ns0:label><ns0:figDesc>Figure 7. (A) Users' credibility versus their number of tweets. The dashed red line depicts the mean credibility of all users (0.251). (B) Users' credibility versus their number of followers. (C) Users' follower counts versus their number of tweets.</ns0:figDesc><ns0:graphic coords='13,143.80,196.20,409.44,164.82' type='bitmap' /></ns0:figure>
<ns0:figure xml:id='fig_9'><ns0:head>Figure 8 .</ns0:head><ns0:label>8</ns0:label><ns0:figDesc>Figure 8. Full comparison of (A) the three alternative models-interquartile range (IQR) filtering, ARIMA model, and kernel density estimation (KDE) clustering-and the official price and (B) the proposed nowcast model and the official price across four food commodities. The blue points indicate the price quotations from Twitter and the shaded area represents the credible price range determined by a model parameter δ .</ns0:figDesc></ns0:figure>
<ns0:figure type='table' xml:id='tab_0'><ns0:head>Table 1 .</ns0:head><ns0:label>1</ns0:label><ns0:figDesc>Full keyword taxonomy for tweet collection</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Keyword combination for tweet collection:</ns0:cell></ns0:row><ns0:row><ns0:cell>( Commodity Names ) AND ( Price Values ) AND ( Price Units | Commodity Units )</ns0:cell></ns0:row><ns0:row><ns0:cell>2 https://ews.kemendag.go.id/</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_1'><ns0:head /><ns0:label /><ns0:figDesc>• • • (Beef prices are still 95,000 Rupia per kilogram, this situation is pressing government• • • )</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_6'><ns0:head>Table 3</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Table 3 shows the result for the absolute error. Again, IQR and KDE do not yield the same level of performance as time series-based models. ARIMA yields the smallest MAPE for certain commodities like chilli and chicken, yet the correlation coefficient (r) remains the highest for Nowcast. This may be due to the non-stationary property of the price trend data in developing regions, which is handled better by the proposed nowcast model. For monitoring economic markets, the ability to represent trend dynamics is as important as reducing the absolute error. Hence this comparison demonstrates that the nowcast model outperforms existing models.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Commodity</ns0:cell><ns0:cell>Total tweets</ns0:cell><ns0:cell>Nowcast r</ns0:cell><ns0:cell>Nowcast MAPE (%)</ns0:cell><ns0:cell>ARIMA r</ns0:cell><ns0:cell>ARIMA MAPE (%)</ns0:cell><ns0:cell>IQR r</ns0:cell><ns0:cell>IQR MAPE (%)</ns0:cell><ns0:cell>KDE r</ns0:cell><ns0:cell>KDE MAPE (%)</ns0:cell></ns0:row><ns0:row><ns0:cell>Beef</ns0:cell><ns0:cell>14473</ns0:cell><ns0:cell>0.85</ns0:cell><ns0:cell>4.91</ns0:cell><ns0:cell>0.60</ns0:cell><ns0:cell>5.02</ns0:cell><ns0:cell>0.29</ns0:cell><ns0:cell>18.05</ns0:cell><ns0:cell>0.25</ns0:cell><ns0:cell>11.14</ns0:cell></ns0:row><ns0:row><ns0:cell>Chicken</ns0:cell><ns0:cell>5223</ns0:cell><ns0:cell>0.84</ns0:cell><ns0:cell>9.26</ns0:cell><ns0:cell>0.42</ns0:cell><ns0:cell>8.74</ns0:cell><ns0:cell>0.46</ns0:cell><ns0:cell>46.45</ns0:cell><ns0:cell>0.34</ns0:cell><ns0:cell>45.87</ns0:cell></ns0:row><ns0:row><ns0:cell>Onion</ns0:cell><ns0:cell>1954</ns0:cell><ns0:cell>0.85</ns0:cell><ns0:cell>33.06</ns0:cell><ns0:cell>0.35</ns0:cell><ns0:cell>42.88</ns0:cell><ns0:cell>0.60</ns0:cell><ns0:cell>40.83</ns0:cell><ns0:cell>0.63</ns0:cell><ns0:cell>43.36</ns0:cell></ns0:row><ns0:row><ns0:cell>Chilli</ns0:cell><ns0:cell>1772</ns0:cell><ns0:cell>0.76</ns0:cell><ns0:cell>12.99</ns0:cell><ns0:cell>0.51</ns0:cell><ns0:cell>11.26</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>70.21</ns0:cell><ns0:cell>-0.25</ns0:cell><ns0:cell>81.35</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_7'><ns0:head>Table 3 .</ns0:head><ns0:label>3</ns0:label><ns0:figDesc>Prediction performance comparison between the models</ns0:figDesc><ns0:table /></ns0:figure>
<ns0:figure type='table' xml:id='tab_8'><ns0:head>Table 4</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Table 4 also indicates the highest positive correlations at lags 0 to +1 for all commodities, meaning that a daily price value nowcasted from social media has predictive power on the price value of the next day.</ns0:figDesc><ns0:table><ns0:row><ns0:cell>Commodity</ns0:cell><ns0:cell>Lag (days) -3</ns0:cell><ns0:cell>-2</ns0:cell><ns0:cell>-1</ns0:cell><ns0:cell>0</ns0:cell><ns0:cell>+1</ns0:cell><ns0:cell>+2</ns0:cell><ns0:cell>+3</ns0:cell></ns0:row><ns0:row><ns0:cell>Beef</ns0:cell><ns0:cell>0.28</ns0:cell><ns0:cell>0.19</ns0:cell><ns0:cell>0.62</ns0:cell><ns0:cell>0.85</ns0:cell><ns0:cell>0.79</ns0:cell><ns0:cell>0.50</ns0:cell><ns0:cell>0.41</ns0:cell></ns0:row><ns0:row><ns0:cell>Chicken</ns0:cell><ns0:cell>0.29</ns0:cell><ns0:cell>0.24</ns0:cell><ns0:cell>0.77</ns0:cell><ns0:cell>0.84</ns0:cell><ns0:cell>0.63</ns0:cell><ns0:cell>0.42</ns0:cell><ns0:cell>0.33</ns0:cell></ns0:row><ns0:row><ns0:cell>Onion</ns0:cell><ns0:cell>-0.13</ns0:cell><ns0:cell>0.32</ns0:cell><ns0:cell>0.68</ns0:cell><ns0:cell>0.85</ns0:cell><ns0:cell>0.83</ns0:cell><ns0:cell>0.67</ns0:cell><ns0:cell>0.13</ns0:cell></ns0:row><ns0:row><ns0:cell>Chilli</ns0:cell><ns0:cell>0.41</ns0:cell><ns0:cell>0.09</ns0:cell><ns0:cell>0.49</ns0:cell><ns0:cell>0.76</ns0:cell><ns0:cell>0.81</ns0:cell><ns0:cell>0.31</ns0:cell><ns0:cell>-0.20</ns0:cell></ns0:row></ns0:table></ns0:figure>
<ns0:figure type='table' xml:id='tab_9'><ns0:head>Table 4 .</ns0:head><ns0:label>4</ns0:label><ns0:figDesc>Cross correlation between official and nowcasted prices across target commodities</ns0:figDesc><ns0:table /></ns0:figure>
</ns0:body>
" | "Dear Editors of PeerJ,
The authors sincerely thank the editor and reviewers for giving us a chance to revise the manuscript entitled ‘Nowcasting commodity prices using social media’ and for providing us with helpful and thoughtful comments.
Below, we provide specific responses to each of the reviewers’ comments and indicate which changes we have made in the paper to address them.
Reviews
Response
Editor:
This is a very interesting paper -- predicting food commodity price is a very relevant social-economic problem, and this paper is the first (that I know of) to tackle this problem by overcoming/implementing a number of technical solutions.
I strongly encourage the authors to take into account comments from both reviewers, including but are not limited to: describing data sources for ground truth, evaluation protocol, articulate example applications (e.g. gov make purchasing decisions based on expected price fluctuation), additional references, robustness of NLP techniques, and writing improvements.
Thank you for the overall feedback. We have revised the paper by addressing the reviewers’ comments, clarifying issues raised, re-organizing and improving the writing, enriching data description and references, and conducting additional evaluation analyses.
Editor:
Granger causality is a somewhat strange choice of measuring prediction quality since it measures the quality of x(1:t-1) vs y(t) but not x(t) against y(t). The error metric MAPE makes more sense. Perhaps the authors can also consider correlation metrics (e.g. Pearson).
Based on the Editor’s and the two reviewers’ suggestions, we now clearly describe the main goal of this work as ‘suggesting a Nowcast model that reproduces daily price time-series based on tweets’, and specify concrete sub-goals that we try to achieve. In addition, we clearly describe what metric is used for evaluating each sub-goal. These changes are highlighted in the revised manuscript.
(Line 73-87)
Editor:
Supplemental material shows sufficient care that the authors have taken to ensure the quality of data and present complete results. In my opinion they are an integral part of the paper and will be useful for someone who want to reproduce the system. Assuming there's no PeerJ page limit, I suggest that they are included in the main text, or after references, rather than as separate files. If this is not possible, then a short list of what the appendixes are should be included in the main text likely towards the end.
We have added a discussion on bot detection and performance plots for all target commodities in the main manuscript. Thank you for this suggestion.
(Line 152-170, Figure 8)
Reviewer 1: Basic Reporting
Although sufficient background/context is provided, I think it would greatly enhance the paper if the authors provided more information about significance of the domain of the study. Food shortages in developing countries constitutes a serious social and governmental issue, and this research provides a valuable contribution to solving such problems. For example, in the Summary section, the authors state that 'governments ... can make better policy decisions to protect vulnerable populations' (Lines 378-379). A brief sentence or two about how this research can enable 'better policy decisions' would add value to the paper, particularly for interdisciplinary readers in policy and administration.
We have added references on research and policy efforts related to food price fluctuations in developing countries. We also clarify how the proposed research method can be adopted by the Indonesian government to make better policies.
(Line 4218-422)
Reviewer 1: Experimental design
By and large, the experimental design is sound and to be commended. However, there are two minor aspects that would need to be addressed (or at least clarified) prior to accepting the paper.
Firstly, the aims and research questions and/or hypotheses of the study are not clearly defined.
We appreciate the reviewer’s suggestion. We now clearly mention the goal of this study in the introduction and specify concrete sub-goals that we try to achieve. In addition, we clearly describe what metric is used for evaluating each sub-goal. These changes are highlighted in the revised manuscript.
(Line 73-87)
Secondly, the authors should be commended for their novel approach to filtering and analysing socially-generated data that are, by nature, noisy and error-prone. However, I query the removal of bot accounts (see Lines 123-126) from the data. The rationale for removing bots is that these accounts 'have a disproportionate impact on the price monitor'. However, the literature tells us that 'socialbots' play an important role in, for example, shaping markets and financial events (Steiner, 2012) and politics (Graham and Ackland, 2016; Ratkiewicz et al., 2011). In this way, the bots may have a potentially important 'social' role to play in commodity prices in Indonesia, even if their presence complicates the modelling problem. If the authors could clarify the removal of bots, that would be helpful.
In this work, we have eliminated bot accounts from certain sellers that simply keep echoing redundant content at a vast volume. The literature has defined spam as a bot designed to exert unfair influence on opinion by echoing earlier information. The bots we define in this study act as spam rather than playing a valuable social role, because they introduce an unfair and significant statistical bias into the information distribution; therefore, we employ a basic bot detection method to eliminate them.
(Line 152-170)
Reviewer 1: Validity of the findings
Overall the findings are interesting and valid. I have two relatively minor comments for further improvements to the paper.
Firstly, the application of Granger causality (Lines 268-289) augments the analysis nicely and adds depth to the results. However, I have a point of clarification.
Secondly, as per Section 1 of this review, it would be helpful to clearly define the research questions at the beginning of the paper, in order to link these to the findings and conclusion at the end of the paper.
In the revised manuscript, we strengthen our findings by additionally providing results that serve as an alternative to the Granger causality analysis.
(Line 317-329)
We have also followed up on the second suggestion to restructure the introduction section.
Reviewer 2: Experimental design
In terms of the evaluation of the proposed model, the article could be strengthened by clarifying (1) the nature of the gold standard data; and (2) the usage of the data.
We have added the explicit source of the government data used in the research and specified a link to download the data.
(Line 114-122)
We also added a clearer description of the training and test datasets.
(Line 261-262, 293-295)
Reviewer 2: Comments for the author
Overall this article was a very insightful read. The use of social media analytics to commodity prices is an interesting application. The paper suffers at times from grammatical errors (see below for minor edits), however the overall presentation of the information and argument are clear and easy to follow.
Thank you for the detailed comments that help us improve the manuscript. We have followed up on all comments and list the prominent changes below.
Reviewer 2: Related work
At a high level, it might be interesting to comment on other well-known uses of Twitter for prediction.
We have added the reference that the reviewer suggested.
Reviewer 2: 4. With respect to Natural Language Processing
The decision to use minimal natural language processing techniques is understandable. However the article presents the model as being robust to variations in the way one can express prices, leaving it to the model to make the appropriate inference.
We have added a discussion of how data processing might vary across languages. In the case of Indonesian tweets, most people mention prices in the same unit. However, this may not be true in other countries as the reviewer points out. We discuss how a taxonomy-based approach may be used to re-map prices for a given target unit.
(Line 132-135)
Reviewer 2: With respect to data/results
On Line 118, the authors write “a tweet is an exact duplication of another (22.9%)”. Exact duplicates are often retweets or people sharing news articles. To what extent does the data come from news articles, and what effect does that have on the approach? If these are indeed news, could one simply have analysed the news articles rather than Twitter?
This is a great suggestion. We have added plans for future work to the discussion section, examining how one might utilize publicly gathered news articles as an alternative data source for monitoring prices.
Reviewer 2: With respect to generality of methods
Line 302, “The nowcast model … predict trends in other essential commodities and across countries”
While the proposed method is attractive, it’s applicability to other countries remains to be seen and so this claim may be too general.
We agree with the concern raised by the reviewer. We now clearly mention the countries to which our model might be more applicable.
(Line 337-339)
We thank the Editor and the reviewers once more for the kind and detailed suggestions. We have addressed all the comments to the best of our ability and additionally had our manuscript proofread. The suggested references have also been added and cited in appropriate places throughout the manuscript.
Thank you for your warm consideration.
Sincerely yours,
Meeyoung Cha, Ph.D.
Graduate School of Culture Technology
KAIST, Daejeon, South Korea
042-350-2922 (Office) 042-350-2910 (Fax) Email: [email protected]
" | Here is a paper. Please give your review comments after reading it. |