Dataset columns:
entry_id: string (length 33)
published: string (length 14)
title: string (length 17 to 188)
authors: sequence
primary_category: string (length 5 to 18)
categories: sequence
text: string (length 2 to 629k)
http://arxiv.org/abs/2307.10190v1
20230708141246
Summary of the 3rd BINA Workshop
[ "Eugene Semenko", "Manfred Cuntz" ]
astro-ph.IM
[ "astro-ph.IM", "astro-ph.SR" ]
1]Eugene Semenko 2]Manfred Cuntz [1]National Astronomical Research Institute of Thailand (Public Organization) 260 Moo 4, T. Donkaew, A. Maerim, Chiangmai, 50180 Thailand [2]Department of Physics, University of Texas at Arlington, Arlington, TX 76019, USA Summary of the BINA Workshop [ ============================ BINA-3 has been the third workshop of this series involving scientists from India and Belgium aimed at fostering future joint research in the view of cutting-edge observatories and advances in theory. BINA-3 was held at the Graphic Era Hill University, 22-24 March 2023 at Bhimtal (near Nainital), Uttarakhand, India. A major event was the inauguration of the International Liquid-Mirror Telescope (ILMT), the first liquid mirror telescope devoted exclusively to astronomy. BINA-3 provided impressive highlights encompassing topics of both general astrophysics and solar physics. Research results and future projects have been featured through invited and contributed talks, and poster presentations. § INDO-BELGIAN COLLABORATION IN SPACE AND TIME Without comprehensive international collaborations, it is difficult to imagine sustainable scientific progress in the modern age. In astronomy and astrophysics, such collaborations enabled the operation of observational facilities in the best places on the ground and in space. In big international cooperations like the European Southern Observatory, we can see how the technology exchange and mobility of human resources promote research on all levels, from universities to international institutions. Especially promising collaborations pertain to India, the world's most populous country according to the United Nations <cit.>, with exceptionally rapid economic growth. The Belgo-Indian Network for Astronomy and Astrophysics, or BINA, was initialized in 2014 to foster the existing contacts between Indian and Belgian researchers, mostly from the Aryabhatta Research Institute of Observational Sciences (ARIES) and the Royal Observatory of Brussels (ROB), and to expand this collaboration on the nation-wide scale in both countries. The third BINA workshop, which we have the pleasure of summarizing, marks the end of this project. Two previous workshops were held in 2016 in Nainital (India) and 2018 in Brussels (Belgium). We believe that our summary would not be complete without a brief comparison of the third workshop with the two preceding ones. This will help us to better understand BINA's importance and outcome. The first workshop (BINA-1) took place in Nainital on 15–18 November 2016. According to available statistics <cit.>, 107 astronomers from eight countries participated in the meeting, giving 36 oral talks and presenting 42 posters. Eighty-eight people from twelve partner institutes represented the Indian astronomical community, whereas six Belgian institutions sent ten representatives. The meetings' agenda focused primarily on the instrumentation of the newly commissioned 3.6-m Devastal Optical Telescope (DOT) and on the future of the 4-m International Liquid-Mirror Telescope (ILMT). The scientific talks covered a wide range of subjects, from solar system studies to individual stars, stellar clusters, exoplanets and extragalactic astronomy. The second BINA workshop (BINA-2) was held two years later, in 2018, in Brussels; it was aimed to further expand the existing collaborations. 
Despite the significantly smaller number of participants (i.e., 69 registered researchers from seven countries), the conference's scientific programme was rich in oral talks, totalling 44. Furthermore, there were eight poster presentations <cit.>. The scientific programme of the second workshop largely mirrored the agenda of the first meeting, accentuating the scientific application of the Belgo-Indian telescopes. A highly notable aspect of the second workshop's scientific programme was the presence of review talks. In terms of participation and the number of oral talks, BINA-3, the final workshop, resembled the previous events, although, fortunately, a significant increase in participation and contributions occurred. Nearly one hundred fifty scientists from eleven countries participated in BINA-3, with the lion's share from India and Belgium. A total of 37 talks (10 invited, 27 contributed) were given in the main programme, and 21 contributed talks were given in the solar physics sessions. There were 81 poster presentations; many of those were led by graduate and undergraduate students. There is significant progress hiding behind these numbers. Since 2016, the Belgo-Indian network has grown to involve new institutes from both partner countries. The members published numerous scientific papers with results obtained on the Belgo-Indian telescopes. Many of these were based on PhD theses pursued within BINA. The content of these proceedings also reveals that, during 2016–2023, many young researchers changed their affiliation, moving to new places and thus expanding the network of research contacts. Progress in instrumentation and scientific collaboration within BINA and with external institutes worldwide gave new impetus to solar physics and general astrophysics studies. In general, we can count the significantly increased number of telescopes and instruments as the major indicator of progress achieved within the BINA project. The list of available instruments strongly shaped BINA-3. In the following sections, we briefly summarize its scientific programme. § OBSERVATIONAL TECHNIQUES AND INSTRUMENTATION Telescopes and their instruments were in the spotlight of all BINA workshops. The ILMT was the central theme of the current meeting. From a number of oral talks and poster presentations, one could get a comprehensive view of the operation principles of such telescopes. It was particularly interesting to learn about the data reduction, calibration, and access to the processed images obtained with the ILMT. Numerous results of the first observations with the ILMT, shown mostly in the poster presentations, have demonstrated a wide range of possible scientific applications of zenith telescopes with liquid mirrors. Given the short time that has passed since the beginning of operations, and the results already obtained, we can confirm that the ILMT has proven its scientific concept and significantly strengthened the observational facilities for current and future Indo-Belgian projects. The Indo-Belgian 3.6-m Devasthal Optical Telescope (DOT), in operation since 2016, remains so far Asia's largest fully steerable optical telescope. Just in time for BINA-3, the park of Indian telescopes was strengthened by the commissioning of the 2.5-m telescope built by Advanced Mechanical and Optical Systems (AMOS) in Belgium for the Physical Research Laboratory (PRL) in Ahmedabad and installed at Mt Abu, Rajasthan, India.
The development of new instruments and the upgrade of existing facilities were the central theme of the instrumentation section of the current conference. Notably, by 2028, the TIFR-ARIES Multi-Object Optical to Near-infrared Spectrograph (TA-MOONS) will bring new capabilities useful for studies of stars in star formation regions, open clusters, and extended sources with DOT. Also for this telescope, adding a polarimetric mode to the ARIES-Devasthal Faint Object Spectrograph & Camera (ADFOSC), the existing device for observations of faint objects, will enable both linear and circular polarimetry. This new mode is of critical importance to the study of processes in star-forming regions, interacting stellar systems, supernovae, active galactic nuclei, and beyond. A spectropolarimetric mode might also be worth considering for the creators of the PRL Advanced Radial Velocity Abu Sky Search-2 (PARAS-2), a high-resolution spectrograph at the 2.5-m PRL telescope at Mt Abu. This highly stable device has been developed for precise measurements of radial velocities while providing very high spectral resolution. Due to the geographical location of Mt Abu, PARAS-2 can play a critical role in the continuous monitoring of radial velocities for a wide variety of relatively bright objects; however, with a spectropolarimetric mode implemented (like HARPSpol at the High Accuracy Radial velocity Planet Searcher, HARPS), PARAS-2 could occupy its own niche in observations of hot magnetic stars, either within the Indo-Belgian collaboration or in third-party projects like MOBSTER <cit.>. (MOBSTER is an acronym for Magnetic OB[A] Stars with TESS: probing their Evolutionary and Rotational properties; it is a collaboration of more than 60 scientists from all over the world.) With the completion of a High-Resolution Spectrograph for the 3.6-m Devasthal Optical Telescope (DOT-HRS), the astronomical community of ARIES will possess the ability to independently carry out studies in the fields of asteroseismology and stellar abundances. Again, as in the case of PARAS-2, spectropolarimetry with DOT-HRS is expected to extend the list of potential applications of this device and could further expand the ongoing Nainital-Cape survey of pulsating early-type stars <cit.>. The rising number of telescopes in India poses questions about the most adequate time-allocation policies and the optimal distribution of observational proposals between existing astronomical facilities. We found the analysis of the time allocation for the 3.6-m DOT over the last six observational cycles, as presented at the workshop, particularly useful and timely for all facilities of ARIES, especially considering that the ILMT has started its operation and that next-generation instruments for the 3.6-m DOT are about to arrive. From our perspective, in addition to the proposed improvements, we would also recommend the organisation of regular (e.g., yearly) conferences of the telescope users under the auspices of the Time Allocation Committee (TAC), where existing and potential applicants would be able to present their proposals or give feedback on the approved or running programmes. Such mini-conferences could be held online, speeding up communication between the TAC and the astronomical community. Naturally, this experience could be applied to other instruments in India and beyond as well. The theme of small telescopes was raised in several talks.
The Belgian-built High-Efficiency and high-Resolution Mercator Echelle Spectrograph (HERMES), operated at the 1.25-m Mercator telescope in La Palma (Spain), proved its effectiveness in studies of the chemical composition of single and multiple stars. This spectrograph is used for existing bilateral projects. Complementary opportunities for high-resolution spectroscopy with 1-m-class telescopes and the prospects for affordable implementation of adaptive optics on small and moderate-size telescopes were also considered at BINA-3. The interest in these problems highlights the importance of small, properly equipped telescopes for big programmes complementary to missions like the Transiting Exoplanet Survey Satellite (TESS). § MAIN PROGRAMME SESSION BINA provides access to a wide variety of observational facilities located worldwide <cit.>. The observational component mostly determined the agenda of BINA-3. Comets, planets, asteroids, and orbital debris were in the third BINA workshop's spotlight, though other topics such as stars, including stellar multiplicity, and compact objects were also discussed. The selection of objects is largely determined by the areas where optical spectroscopy and photometry are most effective with small and medium-sized telescopes. The exception is the study of planetary atmospheres using the method of stellar occultations. Such techniques require larger apertures and benefit greatly from implementation on 3–6-m-class telescopes. The 3.6-m DOT is among the few instruments worldwide that have regularly been used to observe such events <cit.>. The various instruments available within the Indo-Belgian collaboration promote the comprehensive study of processes occurring in star formation regions and during the ongoing evolution of stars. The efficiency of multi-wavelength observations was demonstrated by the study of the star-forming H ii region Sh 2-305. This is not the only case in which Indian telescopes exploring the Universe in the optical, radio, and X-ray domains were successfully combined. We also cannot pass over the numerous results on massive binary stars and on stars with discs and circumstellar envelopes presented at the BINA-3 workshop. Stellar multiplicity runs like a golden thread through many of the talks given in Bhimtal during the workshop. As companions significantly influence stellar lives at all stages of evolution, proper accounting and evaluation of the companions' properties are crucial. In this regard, work with the catalogues of binary stars, and their extensive study within ongoing or future Indo-Belgian projects, must receive high priority. In such programmes, high-resolution optical spectroscopy of binary and multiple stars must take a special place. Another theme running through the scientific content of BINA-3 is stellar magnetism. As pointed out in the workshop, magnetic fields are ubiquitous on and beyond the main sequence, with their strengths varying substantially. Magnetic fields are responsible for different kinds of stellar activity and can impact stellar evolution. Besides the theoretical aspects pertaining to the physics of these processes, we would like to draw attention to the lack of observational facilities in the Asian region suitable for direct observations of stellar magnetic fields and processes. The worldwide selection of medium-sized and large telescopes equipped with sensitive spectropolarimetric devices is very limited, and Indian telescopes could fill this gap.
Through the study of chemical composition, one can explore the evolution of individual stars, groups of stars, and the Galaxy at large. The latter is the central task of galactic archaeology. Pursuing this task depends on the availability of spectra and proper modelling. Despite the various observational results presented at BINA-3, we find a lack of interaction between the BINA members and groups working, e.g., in the U.S., Sweden or Germany, on the theoretical aspects of abundance analysis. We believe tighter cooperation with institutes outside of BINA would take research on stellar abundances to a qualitatively new level. In contrast to the previous workshops, asteroseismology, a powerful tool for probing stellar interiors and validating stellar parameters, appears underrepresented in BINA-3. (On a lighter note, a superb cultural show successfully compensated for the lack of "music of the stars" in the conference programme.) This fact looks surprising to us, as the Belgian groups in Brussels and Leuven are famous for their proficiency in this field. Apart from galactic archaeology, which deals with the evolution of chemical composition, probing the Galactic structure is another important direction of work within BINA. Even now, after decades of extensive exploration of the Galaxy using different methods, our knowledge of its structure is incomplete. Optical polarimetry helps to reveal the detailed fine structure of dust clouds in star formation regions or in the areas of young open clusters. Indian astronomers are experienced in this kind of work, and their results, both published <cit.> and presented during BINA-3, deserve special attention. We look forward to further expanding this direction of galactic studies on a new technical level. § SOLAR PHYSICS SESSION The core of the solar physics programme was the study of small-scale structures, waves and flares, as well as coronal mass ejections (CMEs). Science opportunities are often directly associated with instruments such as the Extreme Ultraviolet Imager (EUI) onboard the Solar Orbiter. The EUI provides a crucial link between the solar surface, on the one hand, and the corona and solar wind, on the other, which ultimately shape the structure and dynamics of the interplanetary medium. Several contributions focused on wave propagation, including its relevance to small-scale structures of the solar chromosphere, transition region and corona, such as flares, spicules and loop systems. This kind of research considered both observations and theoretical work, such as ab-initio simulations of standing waves and slow magneto-acoustic waves. Studies of the outer solar atmosphere also utilized the Interface Region Imaging Spectrograph (IRIS) as well as the Atmospheric Imaging Assembly (AIA) onboard the Solar Dynamics Observatory (SDO). In alignment with previous studies in the literature, the potential of spectral lines, including line asymmetries, for the identification of solar atmospheric heating processes has been pointed out and carefully examined. Clearly, this approach is relevant to both solar physics and studies of solar-type stars of different ages and activity levels; it allows solar studies to be embedded in a broader context. Regarding CMEs, a major driver of space weather and geomagnetic storms, attention has been paid to the EUropean Heliospheric FORecasting Information Asset (EUHFORIA), which is relevant for MHD modelling and the study of the evolution of CMEs in the heliosphere.
In this regard, a pivotal aspect is the study of the thermodynamic and magnetic properties of CMEs as well as CME forward-modeling, aimed at predicting CME breakouts as well as CME topologies and magnitudes. Relevant spectral line features include Fe XIV and Fe XI data, obtained with existing instruments or available in the archive. Another notable item has been the presentation of long-term variations of solar differential rotation and the solar cycle; the latter still poses a large set of unanswered scientific questions. § RETROSPECTIVE AND RECOMMENDATIONS A key element of BINA-3 is the future availability of the ILMT. The science goals of the ILMT include cosmological research, such as the statistical determination of key cosmological parameters through surveying quasars and supernovae, as well as photometric variability studies of stars, transiting extra-solar planets and various types of transient events. Another aspect is the search for faint extended objects like low-surface-brightness and star-forming galaxies. The pronounced use of the ILMT, typically in conjunction with other available facilities, requires the ongoing pursuit of international collaborations; this activity is pivotal for future success. Another key aspect is the significance of theoretical studies. Regarding solar physics research, previous work encompasses the study of MHD waves and small-scale transients, with a focus on the solar chromosphere, transition region and corona. Some of this work made extensive use of the EUI onboard the Solar Orbiter. The study of outer solar atmosphere fine structure utilized IRIS as well as the AIA onboard the SDO. Time-dependent coronal phenomena, especially CMEs, are of great significance for the Earth, for example through the onset of geomagnetic storms and their impact on the safety of equipment, including that associated with satellite communication[See <https://www.swpc.noaa.gov> for further information.]. Further advances in this field are expected to benefit from additional observational studies as well as advances in theory, and particularly from the interface of the two. Regarding theoretical work, ongoing and future efforts should continue to focus on 3-D magneto-hydrodynamic studies in conjunction with the adequate inclusion of radiative transfer and statistical phenomena, as well as aspects of chaos theory. There are other items with the potential for future successful developments. Asteroseismology has been underrepresented in BINA-3. This is a powerful tool in the context of stellar evolution studies and the validation and improvement of stellar parameters; the latter is also relevant in the context of extrasolar planet investigations. Further important aspects concern the study of stellar magnetism and activity. Besides elementary stellar studies, these topics are also of critical importance regarding circumstellar habitability and astrobiology at large <cit.>. Moreover, studies of AGNs and GRBs are cardinal topics beyond solar and stellar physics; they have gained considerable momentum within the scientific community. Processes in these extragalactic objects are characterized by high energies and rich spectra. Among the variety of works presented during BINA-3, studies of active galactic nuclei (AGN) and different transients like gamma-ray bursts (GRB) continue to deserve special attention. The members of BINA have an exhaustive set of instruments available for multi-wavelength observations of these extragalactic sources, yet there is still room for improvement.
Considerable advances are attainable both in instrumentation and in techniques of analysis. In the study of intra-night variability of blazars presented in the workshop's programme <cit.>, we noted the lack of international contributors, although these types of objects are in the spotlight of groups working, e.g., at the 6-m telescope of the Special Astrophysical Observatory, located in the North Caucasus region of Russia <cit.>. Given the absence of polarimetric devices for observation with the 3.6-m DOT at the moment, such cooperation could open new opportunities. Connections established on the personal level between the member institutions of BINA and observatories operating big telescopes would facilitate future studies in extragalactic astronomy where the aperture matters. Similarly, we would recommend establishing collaborations with the institutes operating robotic telescopes for the observation of transients. However, a more radical future step might be an expansion of Indian observational facilities towards other continents, especially South America. A small network of medium-sized fully-robotic telescopes could provide easy access to observations and be used for educational purposes. It would reduce the dependence on astronomical monitoring occurring in South Asia — in consideration of possible drawbacks due to the regional climates. Last but not least, in the field of data analysis, the leitmotif now is the use of machine learning (ML) and artificial intelligence (AI). This theme was raised several times during the workshop, but we believe that it could find broader applications in projects related to the classification of light curves and spectra. At the same time, we would recommend researchers using ML and AI in their work not to ignore advances in theory, as without proper constraints and background information, these methods might lead to impractical results, especially if based on small samples. §.§.§ Acknowledgments The authors are grateful to the scientific and local organizing committees of BINA-3 for inviting them to summarize the workshop and for further assistance in preparing these proceedings. §.§.§ ORCID identifiers of the authors 0000-0002-1912-1342Eugene Semenko 0000-0002-8883-2930Manfred Cuntz §.§.§ Author contributions Both authors equally contributed to this publication. §.§.§ Conflicts of interest The authors declare no conflict of interest. apalike
http://arxiv.org/abs/2307.03949v1
20230708103948
Ergodic observables in non-ergodic systems: the example of the harmonic chain
[ "Marco Baldovin", "Raffaele Marino", "Angelo Vulpiani" ]
cond-mat.stat-mech
[ "cond-mat.stat-mech" ]
Institute for Complex Systems - CNR, P.le Aldo Moro 2, 00185, Rome, Italy Université Paris-Saclay, CNRS, LPTMS,530 Rue André Rivière, 91405, Orsay, France Dipartimento di Fisica e Astronomia, Universitá degli Studi di Firenze, Via Giovanni Sansone 1, 50019, Sesto Fiorentino, Italy Dipartimento di Fisica, Sapienza Universitá di Roma, P.le Aldo Moro 5, 00185, Rome, Italy In the framework of statistical mechanics the properties of macroscopic systems are deduced starting from the laws of their microscopic dynamics. One of the key assumptions in this procedure is the ergodic property, namely the equivalence between time averages and ensemble averages. This property can be proved only for a limited number of systems; however, as proved by Khinchin <cit.>, weak forms of it hold even in systems that are not ergodic at the microscopic scale, provided that extensive observables are considered. Here we show in a pedagogical way the validity of the ergodic hypothesis, at a practical level, in the paradigmatic case of a chain of harmonic oscillators. By using analytical results and numerical computations, we provide evidence that this non-chaotic integrable system shows ergodic behavior in the limit of many degrees of freedom. In particular, the Maxwell-Boltzmann distribution turns out to fairly describe the statistics of the single particle velocity. A study of the typical time-scales for relaxation is also provided. Ergodic observables in non-ergodic systems: the example of the harmonic chain Angelo Vulpiani August 12, 2023 ============================================================================== § INTRODUCTION Since the seminal works by Maxwell, Boltzmann and Gibbs, statistical mechanics has been conceived as a link between the microscopic world of atoms and molecules and the macroscopic one where everyday phenomena are observed <cit.>. The same physical system can be described, in the former, by an enormous number of degrees of freedom N (of the same order of the Avogadro number) or, in the latter, in terms of just a few thermodynamics quantities. Statistical mechanics is able to describe in a precise way the behavior of these macroscopic observables, by exploiting the knowledge of the laws for the microscopic dynamics and classical results from probability theory. Paradigmatic examples of this success are, for instance, the possibility to describe the probability distribution of the single-particle velocity in an ideal gas <cit.>, as well as the detailed behavior of phase transitions <cit.> and critical phenomena <cit.>. In some cases (Bose-Einstein condensation <cit.>, absolute negative temperature systems <cit.>) the results of statistical mechanics were able to predict states of the matter that were never been observed before. In spite of the above achievements, a complete consensus about the actual reasons for such a success has not been yet reached within the statistical mechanics community. The main source of disagreement is the so-called “ergodic hypothesis”, stating that time averages (the ones actually measured in physics experiments) can be computed as ensemble averages (the ones appearing in statistical mechanics calculations). Specifically, a system is called ergodic when the value of the time average of any observable is the same as the one obtained by taking the average over the energy surface, using the microcanonical distribution <cit.>. 
It is worth mentioning that, from a mathematical point of view, ergodicity holds only for a small amount of physical systems: the KAM theorem <cit.> establishes that, strictly speaking, non-trivial dynamics cannot be ergodic. Nonetheless, the ergodic hypothesis happens to work extremely well also for non-ergodic systems. It provides results in perfect agreement with the numerical and experimental observations, as seen in a wealth of physical situations <cit.>. Different explanations for this behavior have been provided. Without going into the details of the controversy, three main points of view can be identified: (i) the “classical” school based on the seminal works by Boltzmann and the important contribution of Khinchin, where the main role is played by the presence of many degrees of freedom in the considered systems  <cit.>; (ii) those, like the Prigogine school, who recognize in the chaotic nature of the microscopic evolution the dominant ingredient <cit.>; (iii) the maximum entropy point of view, which does not consider statistical mechanics as a physical theory but as an inference methodology based on incomplete information <cit.>. The main aim of the present contribution is to clarify, at a pedagogical level, how ergodicity manifests itself for some relevant degrees of freedom, in non-ergodic systems. We say that ergodicity occurs “at a practical level”. To this end, a classical chain of N coupled harmonic oscillators turns out to be an excellent case study: being an integrable system, it cannot be suspected of being chaotic; still, “practical” ergodicity is recovered for relevant observables, in the limit of N≫1. We believe that this kind of analysis supports the traditional point of view of Boltzmann, which identifies the large number of degrees of freedom as the reason for the occurrence of ergodic behavior for physically relevant observables. Of course, these conclusions are not new. In the works of Khinchin (and then Mazur and van der Lynden) <cit.> it is rigorously shown that the ergodic hypothesis holds for observables that are computed as an average over a finite fraction of the degrees of freedom, in the limit of N ≫ 1. Specifically, if we limit our interest to this particular (but non-trivial) class of observables, the ergodic hypothesis holds for almost all initial conditions (but for a set whose probability goes to zero for N →∞), within arbitrary accuracy. In addition, several numerical results for weakly non-linear systems  <cit.>, as well as integrable systems <cit.>, present strong indications of the poor role of chaotic behaviour, implying the dominant relevance of the many degrees of freedom. Still, we think it may be useful, at least from a pedagogical point of view, to analyze an explicit example where analytical calculations can be made (to some extent), without losing physical intuition about the model. The rest of this paper is organized as follows. In Section <ref> we briefly recall basic facts about the chosen model, to fix the notation and introduce some formulae that will be useful in the following. Section <ref> contains the main result of the paper. We present an explicit calculation of the empirical distribution of the single-particle momentum, given a system starting from out-of-equilibrium initial conditions. We show that in this case the Maxwell-Boltzmann distribution is an excellent approximation in the N→∞ limit. 
Section <ref> is devoted to an analysis of the typical times at which the described ergodic behavior is expected to be observed; a comparison with a noisy version of the model (which is ergodic by definition) is also provided. In Section <ref> we draw our final considerations. § MODEL We are interested in the dynamics of a one-dimensional chain of N classical harmonic oscillators of mass m. The state of the system is described by the canonical coordinates {q_j(t), p_j(t)} with j=1,..,N; here p_j(t) identifies the momentum of the j-th oscillator at time t, while q_j(t) represents its position. The j-th and the (j+1)-th particles of the chain interact through a linear force of intensity κ|q_j+1-q_j|, where κ is the elastic constant. We will assume that the first and the last oscillator of the chain are coupled to virtual particles at rest, with infinite inertia (the walls), i.e. q_0≡ q_N+1≡ 0. The Hamiltonian of the model reads therefore ℋ(𝐪,𝐩)=∑_j=0^N p_j^2/2 m + ∑_j=0^Nm ω_0^2 /2(q_j+1 - q_j)^2, where ω_0=√(κ/m). Such a system is integrable and, therefore, trivially non-ergodic. This can be easily seen by considering the normal modes of the chain, i.e. the set of canonical coordinates Q_k=√(2/N+1)∑_j=1^N q_j sinj k π/N+1 P_k=√(2/N+1)∑_j=1^N p_j sinj k π/N+1 , with k=1, ..., N. Indeed, by rewriting the Hamiltonian in terms of these new canonical coordinates one gets ℋ(𝐐,𝐏)=1/2∑_k=1^N P_k^2/m + ω_k^2 Q_k^2 , where the frequencies of the normal modes are given by ω_k=2 ω_0 sinπ k/2N +2 . In other words, the system can be mapped into a collection of independent harmonic oscillators with characteristic frequencies {ω_k}. This system is clearly non-ergodic, as it admits N integrals of motion, namely the energies E_k=1/2P_k^2/m + ω_k^2 Q_k^2 associated to the normal modes. In spite of its apparent simplicity, the above system allows the investigation of some nontrivial aspects of the ergodic hypothesis, and helps clarifying the physical meaning of this assumption. § ERGODIC BEHAVIOR OF THE MOMENTA In this section we analyze the statistics of the single-particle momenta of the chain. We aim to show that they approximately follow a Maxwell-Boltzmann distribution 𝒫_MB(p)=√(β/2π m)e^-β p^2/2m in the limit of large N, where β is the inverse temperature of the system. With the chosen initial conditions, β=N/E_tot. Firstly, extending some classical results by Kac <cit.>, we focus on the empirical distribution of the momentum of one particle, computed from a unique long trajectory, namely 𝒫_e^(j)p=1 T∫_0^T dt δp -p_j(t) . Then we consider the marginal probability distribution 𝒫_ep,t computed from the momenta {p_j} of all the particles at a specific time t, i.e. 𝒫_ep,t=1 N∑_j=1^N δp -p_j(t) . In both cases we assume that the system is prepared in an atypical initial condition. More precisely, we consider the case in which Q_j(0)=0, for all j, and the total energy E_tot, at time t=0, is equally distributed among the momenta of the first N^⋆ normal modes, with 1 ≪ N^⋆≪ N: P_j(0)= √(2m E_tot/N^⋆) for 1 ≤ j ≤ N^⋆ 0 for N^⋆< j ≤ N . In this case, the dynamics of the first N^⋆ normal modes is given by Q(t) =√(2 E_tot/ω_k^2N^⋆)sinω_k t P(t) =√(2 m E_tot/N^⋆)cosω_k t . §.§ Empirical distribution of single-particle momentum Our aim is to compute the empirical distribution of the momentum of a given particle p_j, i.e., the distribution of its values measured in time. This analytical calculation was carried out rigorously by Mazur and Montroll in Ref. <cit.>. 
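The behaviour described above can also be checked numerically with a few lines of code. The following sketch is not from the paper: the parameter values (m = ω_0 = 1, N = 1024, N^⋆ = 64, E_tot = N) are illustrative assumptions. It evolves the decoupled normal modes analytically, reconstructs the momentum of one tagged particle, and compares its long-time histogram with the Maxwell-Boltzmann density with β = N/E_tot.

```python
import numpy as np

# Minimal numerical sketch of the harmonic chain discussed above (not the
# authors' code). Assumed illustrative values: m = omega_0 = 1, N = 1024,
# N_star = 64, E_tot = N, so that beta = N / E_tot = 1.
m, omega0 = 1.0, 1.0
N, N_star = 1024, 64
E_tot = float(N)
beta = N / E_tot

k = np.arange(1, N + 1)
omega = 2.0 * omega0 * np.sin(np.pi * k / (2 * N + 2))   # normal-mode frequencies

# Atypical initial condition: Q_k(0) = 0, energy equally shared by the
# momenta of the first N_star modes.
P0 = np.zeros(N)
P0[:N_star] = np.sqrt(2.0 * m * E_tot / N_star)

# Reconstruct the momentum of one tagged particle j from the decoupled modes,
# which evolve analytically as P_k(t) = P_k(0) cos(omega_k t).
j = N // 3
S = np.sqrt(2.0 / (N + 1)) * np.sin(j * k * np.pi / (N + 1))

times = np.linspace(0.0, 2.0e4, 40000)
p_j = np.array([np.dot(S * P0, np.cos(omega * t)) for t in times])

# Empirical histogram of p_j(t) versus the Maxwell-Boltzmann density.
hist, edges = np.histogram(p_j, bins=60, density=True)
centers = 0.5 * (edges[1:] + edges[:-1])
mb = np.sqrt(beta / (2 * np.pi * m)) * np.exp(-beta * centers**2 / (2.0 * m))
print("max |empirical - Maxwell-Boltzmann| =", np.abs(hist - mb).max())
```

With these assumptions the variance of p_j comes out close to m E_tot/(N+1), as expected from the analytical argument developed next.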
Here, we provide an alternative argument that has the advantage of being more concise and intuitive, in contrast to the mathematical rigour of <cit.>. Our approach exploits the computation of the moments of the distribution; by showing that they are the same, in the limit of infinite measurement time, as those of a Gaussian, it is possible to conclude that the considered momentum follows the equilibrium Maxwell-Boltzmann distribution. The assumption N≫1 will enter explicitly the calculation. The momentum of the j-th particle can be written as a linear combination of the momenta of the normal modes by inverting Eq. (<ref>): p_j(t) =√(2/N+1)∑_k=1^N sinj k π/N+1 P_k(t) =2√(m E_tot/(N+1)N^⋆)∑_k=1^N^⋆sinkjπ/N+1cosω_k t where the ω_k's are defined by Eq. (<ref>), and the dynamics (<ref>) has been taken into account. The n-th empirical moment of the distribution is defined as the average p_j^n of the n-th powerof p_j over a measurement time T: p_j^n =1/T∫_0^Tdt p_j^n(t) =1/T∫_0^Tdt (C_N^⋆)^n ∏_l=1^n∑_k_l=1^N^⋆sink_l jπ/N+1cosω_k_l t =(C_N^⋆)^n ∑_k_1=1^N^⋆…∑_k_n=1^N^⋆sink_1jπ/N+1 …sink_njπ/N+1 1/T∫_0^Tdt cosω_k_1 t…cosω_k_n t with C_N^⋆=2√(m E_tot/(N+1)N^⋆) . We want to study the integral appearing in the last term of the above equation. To this end it is useful to recall that 1/2 π∫_0^2πd θcos^n(θ)= (n-1)!!/n!! for n even 0 for n odd . As a consequence, one has 1/T∫_0^Td t cos^n(ω t)≃(n-1)!!/n!! for n even 0 for n odd . Indeed, we are just averaging over ≃ω T/2 π periods of the integrated function, obtaining the same result we get for a single period, with a correction of the order O(ω T)^-1. This correction comes from the fact that T is not, in general, an exact multiple of 2 π/ω. If ω_1, ω_2, ..., ω_q are incommensurable (i.e., their ratios cannot be expressed as rational numbers), provided that T is much larger than (ω_j-ω_k)^-1 for each choice of 1 ≤ k < j ≤ q, a well known result <cit.> assures that 1/T∫_0^Td t cos^n_1(ω_1 t)·...·cos^n_q(ω_q t) ≃ 1/T∫_0^Td t cos^n_1(ω_1 t)·...·1/T∫_0^Td t cos^n_q(ω_1 t) ≃ (n_1-1)!!/n_1!!· ...·(n_q-1)!!/n_q!! if all n's are even , where the last step is a consequence of Eq. (<ref>). Instead, if at least one of the n's is odd, the above quantity vanishes, again with corrections due to the finite time T. Since the smallest sfrequency is ω_1, one has that the error is at most of the order Oq(ω_1 T)^-1≃ O(qN /ω_0 T). Let us consider again the integral in the last term of Eq. (<ref>). The ω_k's are, in general, incommensurable. Therefore, the integral vanishes when n is odd, since in that case at least one of the {n_l}, l=1,...,q, will be odd. When n is even, the considered quantity is different from zero as soon as the k's are pairwise equal, so that n_1=...=n_q=2. In the following we will neglect the contribution of terms containing groups of four or more equal k's: if n≪ N^⋆, the number of these terms is indeed ∼ O(N^⋆) times less numerous than the pairings, and it can be neglected if N^⋆≫1 (which is one of our assumptions on the initial condition). Calling Ω_n the set of possible pairings for the vector 𝐤=(k_1,...,k_l), we have then p_j^n≃C_N^⋆/√(2)^n ∑_𝐤∈Ω_n∏_l=1^n sink_ljπ/N+1 , with an error of O(1/N^⋆) due to neglecting groups of 4, 6 and so on, and an error O(nN/ω_0 T) due to the finite averaging time T, as discussed before. Factor 2^-n/2 comes from the explicit evaluation of Eq. (<ref>) . At fixed j, we need now to estimate the sums appearing in the above equation, recalling that the k's are pairwise equal. 
If j> N/N^⋆, the arguments of the periodic functions can be thought of as independently extracted from a uniform distribution 𝒫(k)=1/N^⋆. One has: sin^2 kj π/N+1≃∑_k=1^N^⋆1/N^⋆sin^2 kj π/N+1≃1/2 π∫_-π^πd θ sin^2(θ)=1/2 , and ∏_l=1^n sink_ljπ/N+1≃ 2^-n/2 , if 𝐤∈Ω_n. As a consequence p_j^n ≃(C_N^⋆/2)^n (N^⋆)^n/2 𝒩(Ω_n)≃(m E_tot/(N+1))^n/2𝒩(Ω_n) , where 𝒩(Ω_n) is the number of ways in which we can choose the pairings. These are the moments of a Gaussian distribution with zero average and variance m E_tot/(N+1). Summarising, it is possible to show that, if n ≪ N^⋆≪ N, the first n moments of the distribution are those of a Maxwell-Boltzmann distribution. In the limit of N≫1 with N^⋆/N fixed, the Gaussian distribution is thus recovered up to an arbitrary number of moments. Let us note that the assumption Q_j(0)=0, while making the calculations clearer, is not really essential. Indeed, if Q_j(0)≠ 0 we can repeat the above computation while replacing ω_k t by ω_k t + ϕ_k, where the phases ϕ_k take into account the initial conditions. Fig. <ref> shows the standardized histogram of the relative frequencies of single-particle velocities of the considered system, in the N ≫ 1 limit, with the initial conditions discussed before. As expected, the shape of the distribution tends to a Gaussian in the large-time limit. §.§ Distribution of momenta at a given time A similar strategy can be used to show that, at any given time t large enough, the histogram of the momenta is well approximated by a Gaussian distribution. Again, the large number of degrees of freedom plays an important role. We want to compute the empirical moments p^n(t)=1/N∑_j=1^N p_j^n(t) , defined according to the distribution 𝒫_e(p,t) introduced by Eq. (<ref>). Using again Eq. (<ref>) we get p^n(t)= 1/N∑_j=1^N(C_N^⋆)^n[∑_k=1^N^⋆sinkjπ/N+1cosω_k t]^n = 1/N(C_N^⋆)^n∑_k_1=1^N^⋆…∑_k_n=1^N^⋆∏_l=1^ncosω_k_lt∑_j=1^Nsink_1 j π/N+1…sink_n j π/N+1 . Reasoning as before, we see that the sum over j vanishes in the large N limit unless the k's are pairwise equal. Again, we neglect the terms including groups of 4 or more equal k's, assuming that n≪ N^⋆, so that their relative contribution is O(1/N^⋆). That sum selects paired values of k for the product inside the square brackets, and we end up with p^n(t)≃1/N(C_N^⋆)^n∑_𝐤∈Ω_n∏_l=1^ncosω_k_lt . If t is "large enough" (we will come back to this point in the following section), different values of ω_k_l lead to completely uncorrelated values of cos(ω_k_l t). Hence, as before, we can consider the arguments of the cosines as extracted from a uniform distribution, obtaining p^n(t)≃(C_N^⋆/2)^n (N^⋆)^n/2 𝒩(Ω_n)≃(m E_tot/(N+1))^n/2𝒩(Ω_n) . These are again the moments of the equilibrium Maxwell-Boltzmann distribution. We had to assume n ≪ N^⋆, meaning that a Gaussian distribution is recovered only in the limit of a large number of degrees of freedom. The empirical distribution can be compared with the Maxwell-Boltzmann one by looking at the Kullback-Leibler divergence K(𝒫_e(p,t), 𝒫_MB(p)), which provides a sort of distance between the empirical 𝒫_e(p,t) and the Maxwell-Boltzmann distribution: K[𝒫_e(p,t), 𝒫_MB(p)]= - ∫𝒫_e(p,t) ln[𝒫_MB(p)/𝒫_e(p,t)] dp. Figure <ref> shows how the Kullback-Leibler divergences approach their equilibrium limit, for different values of N. As expected, the transition happens on a time scale that depends linearly on N. A comment is in order: even if this behaviour may look similar to the H-theorem for dilute gases, such a resemblance is only superficial. 
Indeed, while in the case of dilute gases the approach to the Maxwell-Boltzmann distribution is due to the collisions among different particles that actually exchange energy and momentum, in the considered case the "thermalization" is due to a dephasing mechanism. § ANALYSIS OF THE TIME SCALES In the previous section, when considering the distribution of the momenta at a given time, we had to assume that t was "large enough" in order for our approximations to hold. In particular we required cos(ω_k_1t) and cos(ω_k_2t) to be uncorrelated as soon as k_1 ≠ k_2. Such a dephasing hypothesis amounts to asking that |ω_k_1t-ω_k_2t|> 2π c , where c is the number of phases by which the two oscillators have to differ before they can be considered uncorrelated. The constant c may be much larger than 1, but it is not expected to depend strongly on the size N of the system. In other words, we require t> 2π c/|ω_k_1-ω_k_2| for each choice of k_1 and k_2. To estimate this typical relaxation time, we need to pick the minimum value of |ω_k_1-ω_k_2| among the possible pairs (k_1,k_2). This term is minimized when k_1=k̃ and k_2=k̃-1 (or vice versa), with k̃ chosen such that ω_k̃-ω_k̃-1 is minimum. In the large-N limit this quantity is approximated by ω_k̃-ω_k̃-1=ω_0 sin(k̃π/(2N+2))-ω_0 sin((k̃-1)π/(2N+2)) ≃ω_0 cos(k̃π/(2N+2)) π/(2N+2) , which is minimum when k̃ is maximum, i.e. for k̃=N^⋆. Dephasing is thus expected to occur at t> 4cN/[ω_0 cos(N^⋆π/(2N))] , i.e. t>4cN/ω_0 in the N^⋆/N ≪ 1 limit. It is instructive to compare this characteristic time with the typical relaxation time of the "damped" version of the considered system. To do so, we assume that our chain of oscillators is now in contact with a viscous medium which acts at the same time as a thermal bath and as a source of viscous friction. By considering the (stochastic) effect of the medium, one gets the Klein-Kramers stochastic process <cit.> ∂ q_j/∂ t=p_j/m , ∂ p_j/∂ t=ω_0^2(q_j+1 - 2 q_j + q_j-1) -γ p_j + √(2 γ T)ξ_j , where γ is the damping coefficient and T is the temperature of the thermal bath (we are taking the Boltzmann constant k_B equal to 1). Here the {ξ_j} are time-dependent, delta-correlated Gaussian noises such that ⟨ξ_j(t)ξ_k(t')⟩=δ_jkδ(t-t'). Such a system is certainly ergodic and the stationary probability distribution is the familiar equilibrium one, 𝒫_s(𝐪,𝐩) ∝ e^-ℋ(𝐪,𝐩)/T. Also in this case we can consider the evolution of the normal modes. By taking into account Eqs. (<ref>) and (<ref>) one gets Q̇_k =P_k/m , Ṗ_k =- ω_k^2 Q_k - (γ/m) P_k + √(2 γ T)ζ_k , where the {ζ_k} are again delta-correlated Gaussian noises. It is important to notice that also in this case the motion of the modes is independent (i.e. the friction does not couple normal modes with different k); nonetheless, the system is ergodic, because the presence of the noise allows it to explore, in principle, any point of the phase space. The Fokker-Planck equation for the evolution of the probability density function 𝒫(Q_k,P_k,t) of the k-th normal mode can be derived using standard methods <cit.>: ∂_t𝒫=-∂_Q_k(P_k𝒫)+∂_P_k[(ω_k^2 Q_k+(γ/m)P_k)𝒫]+γ T∂_P_k^2 𝒫 . The above equation also allows one to compute the time dependence of the correlation functions of the system in the stationary state. In particular one gets d/dt⟨Q_k(t) Q_k(0)⟩=(1/m)⟨P_k(t)Q_k(0)⟩ and d/dt⟨P_k(t) Q_k(0)⟩=-ω_k^2 m ⟨Q_k(t) Q_k(0)⟩ -(γ/m)⟨P_k(t) Q_k(0)⟩ , which, once combined together, lead to d^2/d t^2⟨Q_k(t) Q_k(0)⟩+(γ/m) d/dt⟨Q_k(t) Q_k(0)⟩+ ω_k^2⟨Q_k(t) Q_k(0)⟩=0 . 
For ω_k <γ/m the solution of this equation admits two characteristic frequencies ω̃_±, namely ω̃_±=(γ/2m)[1 ±√(1-m^2 ω_k^2/γ^2)]. In the limit ω_k ≪γ/m one has therefore ω̃_- ≃(m/4γ) ω_k^2 ≃m ω_0^2 π^2 k^2/(γ N^2) . Therefore, as a matter of fact, even in the damped case the system needs a time that scales as N^2 in order to reach complete relaxation of the modes. As we discussed before, the dephasing mechanism that guarantees "practical" ergodicity in the deterministic version is instead expected to occur on time scales of order O(N). § CONCLUSIONS The main aim of this paper was to expose, at a pedagogical level, some aspects of the foundations of statistical mechanics, namely the role of ergodicity for the validity of the statistical approach to the study of complex systems. We analyzed a chain of classical harmonic oscillators (i.e. a paradigmatic example of an integrable system, which cannot be suspected of showing chaotic behaviour). By extending some well-known results by Kac <cit.>, we showed that the Maxwell-Boltzmann distribution approximates with arbitrary precision (in the limit of a large number of degrees of freedom) the empirical distribution of the momenta of the system, after a dephasing time which scales with the size of the chain. This is true also for quite pathological initial conditions, where only a small fraction of the normal modes is excited at time t=0. The scaling of the typical dephasing time with the number of oscillators N may appear as a limitation of our argument, since this time will diverge in the thermodynamic limit; on the other hand one should consider, as explicitly shown before, that the damped version of this model (which is ergodic by definition) needs times of the order O(N^2) to reach thermalization for each normal mode. This comparison clearly shows that the effective thermalization observed in large systems has little to do with the mathematical concept of ergodicity, and is instead related to the large number of components concurring to define the global observables that are usually taken into account (in our case, the large number of normal modes that define the momentum of a single particle). When these components cease to be in phase, the predictions of statistical mechanics start to be effective; this can be observed even in integrable systems, without the need for the mathematical notion of ergodicity to hold. In other words, we believe that the present work gives further evidence of the idea (which had been substantiated mathematically by Khinchin, Mazur and van der Linden) that the most relevant ingredients of statistical mechanics are the large number of degrees of freedom and the global nature of the observables that are typically taken into account. § ACKNOWLEDGEMENTS RM is supported by #NEXTGENERATIONEU (NGEU) and funded by the Ministry of University and Research (MUR), National Recovery and Resilience Plan (NRRP), project MNESYS (PE0000006) "A Multiscale integrated approach to the study of the nervous system in health and disease" (DN. 1553 11.10.2022).
http://arxiv.org/abs/2307.04162v1
20230709125749
A threshold model of plastic waste fragmentation: New insights into the distribution of microplastics in the ocean and its evolution over time
[ "Matthieu George", "Frédéric Nallet", "Pascale Fabre" ]
cond-mat.soft
[ "cond-mat.soft", "cond-mat.mtrl-sci" ]
Laboratoire Charles-Coulomb, UMR 5221 CNRS – université de Montpellier, Campus Triolet, Place Eugène-Bataillon – CC069, F-34095 Montpellier Cedex 5 – FRANCE Centre de recherche Paul-Pascal, UMR 5031 CNRS – université de Bordeaux, 115 avenue du Docteur-Schweitzer, F-33600 Pessac – FRANCE [Email for correspondence: ][email protected] Laboratoire Charles-Coulomb, UMR 5221 CNRS – université de Montpellier, Campus Triolet, Place Eugène-Bataillon – CC069, F-34095 Montpellier Cedex 5 – FRANCE Plastic pollution in the aquatic environment has been assessed for many years by ocean waste collection expeditions around the globe or by river sampling. While the total amount of plastic produced worldwide is well documented, the amount of plastic found in the ocean, the distribution of particles on its surface and its evolution over time are still the subject of much debate. In this article, we propose a general fragmentation model, postulating the existence of a critical size below which particle fragmentation becomes extremely unlikely. In the frame of this model, an abundance peak appears for sizes around 1mm, in agreement with real environmental data. Using, in addition, a realistic exponential waste feed to the ocean, we discuss the relative impact of fragmentation and feed rates, and the temporal evolution of microplastics (MP) distribution. New conclusions on the temporal trend of MP pollution are drawn. A threshold model of plastic waste fragmentation: new insights into the distribution of microplastics in the ocean and its evolution over time Pascale Fabre August 12, 2023 ============================================================================================================================================== § INTRODUCTION Plastic waste has been dumped into the environment for nearly 70 years, and more and more data are being collected in order to quantify the extent of this pollution. Under the action of degradation agents (UV, water, stress), plastic breaks down into smaller pieces that gradually invade all marine compartments. If the plastic pollution awareness initially stemmed from the ubiquitous presence of macro-waste, it has now become clear that the most problematic pollution is “invisible” i.e. due to smaller size debris, and the literature exploring microplastics (MPs, size between 1 μm and 5 mm) and nanoplastics (NPs, size below 1 μm) quantities and effects is rapidly increasing. The toxicity of plastic particles being dependent on their size and their concentration, it is crucial to know these two parameters in the natural environment to better predict their impacts. While the total amount of plastic produced worldwide is well-documented <cit.>, the total amount of plastic found in the ocean and its time evolution are still under debate: while many repeated surveys and monitoring efforts have failed to demonstrate any convincing temporal trend <cit.>, increasing amounts of plastic are found in some regions, especially in remote areas, and a global increase from ca. 2005 has been suggested <cit.>. Still, some features can be drawn from the available data from the field <cit.> about the size distribution of plastic particles. When browsing the sizes from the largest to the smallest, a first abundance peak is observed around 1 mm <cit.>. Between 1 mm and approximately 150 μm, very few particles are found <cit.>. The abundance increases again from 150 μm down to 10 μm, with an amount of particles which is several orders of magnitude larger than what is found around 1 mm <cit.>. 
To the best of our knowledge, the physical reason <cit.> for the existence of two very different size classes for microplastics (small MP <150 μm, large MP between 1 and 5 mm) is that there are two fragmentation pathways: i) bulk fragmentation with iterative splitting of one piece into two daughters for large MPs, and ii) delamination and disintegration of a thin surface layer (around 100 μm depth) into many particles for small MPs. This description does however not explain the deficit of microplastics of sizes between 150 μm and 1 mm. Early authors attempted to describe the large MP distribution by invoking a simple iterative fragmentation of plastic pieces into smaller objects, conserving the total plastic mass <cit.>, in accordance to pathway i). These models lead to a time-invariant power-law dependence of the MP abundance with size (refer to Supplementary Information <ref> for an elementary version of such models), which is in fair agreement with experimental observations for large MP. However, they fail to describe the occurrence of an abundance peak and the subsequent decrease of the number of MP when going to smaller sizes. Other mechanisms such as sinking, ingestion, etc. have been invoked to qualitatively explain the absence of particles smaller than 1 mm. Very recently, two papers have addressed this issue using arguments related to the fragmentation process itself. Considering the mechanical properties of a one-dimensional material (flexible and brittle fibres) submitted to controlled stresses in laboratory mimicking ocean turbulent flow, Brouzet et al <cit.> have shown both theoretically and experimentally in the one-dimensional case that smaller pieces are less likely to break. Aoki and Furue <cit.> reached theoretically the same conclusion in a two-dimensional case using a statistical mechanics model. Note that both approaches are based on the classical theory of rupture, insofar as plastics fragmenting at sea have generally been made brittle by a long exposure to UVs. In this paper, we also explore pathway i), keeping out of focus pathway ii), since delamination process produces directly very small plastic pieces. Regardless of the fracture mechanics details i.e. the specific characteristics of the plastic waste (shape, elastic moduli, aging behavior) and the exerted stresses, we postulate the existence of a critical size below which bulk fragmentation becomes extremely unlikely. Since many of the microplastics recovered from the surface of the ocean are film-like objects (two dimensions exceeding by a large margin the third one) like those coming from packaging, we construct the particle size distribution over time based on the very idea of a universal failure threshold for breaking two-dimensional objects. A very simple hand-waving argument from everyday's life that illustrates this breaking threshold, is that the smaller a parallelepipedic piece of sugar is, the harder it is to break it, hence the nickname sugar lump model used in this paper. Unlike many previous models, which make the implicit assumption of a stationary distribution, we explicitly describe the temporal evolution of the large MP quantity (see Sections <ref> and <ref>). Moreover, by injecting a realistic waste feed into the model, we discuss the synergistic effect of feeding and fragmentation rates on the large MP distribution, in particular in terms of evolution with time, and compare to the observed data in Section <ref>. 
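For orientation, the counting argument behind the time-invariant power laws mentioned above can be restated compactly; this is only a sketch of the standard (threshold-free) model under mass conservation, with the dimensional dependence of the exponent spelled out as it is discussed again in the results section.

```latex
% After g generations of threshold-free binary splitting at conserved mass,
% one initial object has produced 2^g fragments of generation g:
\begin{align*}
  n(L_g) &\propto 2^{g} \propto L_g^{-2} \quad \text{(films: } L_g \sim \sqrt{A_0}\,2^{-g/2}\text{ at fixed thickness)},\\
  n(L_g) &\propto 2^{g} \propto L_g^{-1} \quad \text{(fibres: } L_g \sim L_0\,2^{-g}\text{)},\\
  n(L_g) &\propto 2^{g} \propto L_g^{-3} \quad \text{(lumps: } L_g \sim V_0^{1/3}\,2^{-g/3}\text{)}.
\end{align*}
```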
§ FRAGMENTATION MODEL WITH THRESHOLD The sugar lump iterative model implements the two following essential features: a size-biased probability of fragmentation on the one hand, and a controlled waste feed rate on the other hand. Initially, a constant feeding rate is used in the model. In a second step, the more realistic assumption of an exponentially growing feeding rate is introduced and discussed in comparison with field data (See Section <ref>). At each iteration, we assume that the ocean is fed with a given amount of large parallelepipedic fragments of length L_init, width ℓ_init and thickness h, where h is much smaller than the other two dimensions and length L_init is, by convention, larger than width ℓ_init. At each time step, every fragment potentially breaks into two parallelepipedic pieces of unchanged thickness h. The total volume (or mass) is kept invariant during the process. In addition, we assume that, if the fragment ever breaks during a given step, it always breaks perpendicular to its largest dimension L: A fragment of dimensions (L, ℓ, h) thus produces two fragments of respective dimensions (ρ L,ℓ,h) and ([1-ρ]L,ℓ,h), ρ being in our model a random number between 0 and 0.5. Note that, depending on the initial values of L,ℓ and ρ, one or both of the new dimensions ρ L and [1-ρ]L may become smaller than the previous intermediate size ℓ: the fragmentation of a film-like object, at contrast to the case of a fibre-like object, is not conservative in terms of its largest dimension <cit.>. Furthermore, in order to ensure that the fragment thickness h remains (nearly) constant all along the fragmentation process, ρ values leading to ρ L or (1-ρ)L significantly less than h are rejected in the simulation. This obviously introduces a short length scale cutoff, in the order of h, and a limiting, nearly cubic, shape for the smaller fragments (an “atomic limit”, according to the ancient Greek meaning). A second length scale, L_c, also enters the present model, originating in the mechanical sugar lump approach, described heuristically by means of a breaking efficiency E(L) sigmoidal in L. For the sake of convenience, this efficiency is built here from the classical Gauss error function. It is therefore close to 1 above a threshold value L_c (chosen large enough compared to h) and close to 0 below L_c. A representative example is shown in Fig. <ref>, with L_c/h=100. Note that throughout this paper, all lengths involved in the numerical model will be scaled by the thickness h. Qualitatively speaking, this feature of the model means that when the larger dimension L is below the threshold value L_c, fragments will “almost never” break, even if they haven't reached yet the limiting (approximately) cubic shape of fragments of size ≈ h. For the sake of simplicity, the threshold value is assumed not to depend on plastic type or on residence time in the ocean, considering that weathering occurs from the moment the waste is thrown in the environment and quickly renders all common plastics brittle. A unique L_c is thus used for all fragments. Technical details about the model are given in supplementary information <ref>. § RESULTS AND COMPARISON WITH FIELD DATA In this whole section, we discuss the results obtained with the sugar lump model and systematically compare with what we call the standard model  <cit.>, that is to say the case where fragments always break into two (identical) pieces at each generation, whatever their size. 
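As a concrete illustration of the two models just defined, here is a minimal simulation sketch. It is not the paper's code (the actual implementation details are in the supplementary information); the threshold width, initial dimensions, feed rate and number of iterations are illustrative assumptions, and all lengths are expressed in units of the film thickness h, as in the text.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(0)

# Illustrative, assumed parameters (lengths in units of the thickness h).
L_C = 100.0                      # breaking threshold L_c / h
WIDTH = 20.0                     # width of the sigmoidal efficiency (assumed)
L_INIT, W_INIT = 500.0, 300.0    # initial fragment dimensions / h (assumed)
FEED = 20                        # fragments fed to the ocean per iteration

def efficiency(L):
    """Sigmoidal breaking efficiency built from the Gauss error function."""
    return 0.5 * (1.0 + erf((L - L_C) / WIDTH))

def step(fragments):
    """One iteration: each (L, l) fragment may split perpendicular to L."""
    out = []
    for L, l in fragments:
        if L > 2.0 and rng.random() < efficiency(L):
            # Random split ratio; drawing rho above 1/L is a simple way to
            # reject cuts that would leave a piece smaller than h.
            rho = rng.uniform(1.0 / L, 0.5)
            for L_new in (rho * L, (1.0 - rho) * L):
                out.append((max(L_new, l), min(L_new, l)))   # keep L >= l
        else:
            out.append((L, l))
    return out

fragments = []
for t in range(30):                         # one step is roughly one year
    fragments += [(L_INIT, W_INIT)] * FEED  # constant feed, as in the text
    fragments = step(fragments)

sizes = np.array([L for L, _ in fragments])
hist, edges = np.histogram(sizes, bins=np.logspace(0.0, 3.0, 30))
print("most populated size class starts at L/h =", round(edges[np.argmax(hist)]))
```

Replacing efficiency by a function that always returns 1 recovers the threshold-free standard model used as the reference throughout this section.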
Whenever possible and meaningful, we also compare our results with available field data. Therefore, one needs to assign a numerical correspondence between the physical time scale and the duration of a step in the iterative models. The fragmentation rate of plastic pieces can be assessed using accelerated aging experiments <cit.>. The half-life time, corresponding to the time when the average particle size is divided by 2, has been found around 1000 hours, which roughly corresponds to one year of solar exposition <cit.>. Hence, the iterative step t used in all following sections can be considered to be in the order of one year. For typical plastic film dimensions, it is reasonable to assume that the thickness h is between 10 and 50 μm, and the initial largest lateral dimension L_init is in the range of 1 to 5 cm. These characteristic lengths, together with the other length scales involved in this paper are positioned relative to each other in Fig. <ref>. §.§ Evolution of the size distribution and of the total abundance of fragments with time The size distribution of plastic fragments over time is represented in Fig. <ref> for the sugar lump and confronted to the standard model size distribution. The origin of time corresponds to the date when the very first plastic waste was dumped into the ocean. According to the standard model (see Eq. (<ref>), Section <ref>), the amount of particles as a function of their size follows a power law of exponent -2 which leads to a divergence of the number of particles at very small size (dotted line in Fig  <ref>). For large MP, the prediction of the sugar lump model is broadly similar, i.e. following the same power law. By contrast, the existence of a mechanism inhibiting the break of smaller objects, as introduced in the sugar lump model, does lead to the progressive built of an abundance peak for intermediate size fragments due to the accumulation of fragments with size around L_c (see Section <ref> for details). Moreover, the particle abundance at the peak increases with time while the peak position shifts towards smaller size classes. This shift is fast for the first generations, and then slows down when time passes: Fig. <ref>. The inset in Fig. <ref> shows how the existence of a breaking threshold significantly slows down the production of very small particles compared to the standard model. As can be observed from the inset in Fig. <ref>, the peak position L_peak^th, around L_c, decreases in a small range typically between 1.5L_c and 0.5L_c for time periods up to a few tens of years. Let us discuss now the comparison to the experimental data. A sample of various field data from different authors  <cit.> is displayed in Fig. <ref>. In order to obtain a collapse of the data points for large MPs, a vertical scaling factor has been applied, since abundance values from different sources can not be directly compared in absolute units. The two main features of these curves are: A maximum abundance at a value of a few millimeters (indicated by a grey zone) and the collapse of the data points onto a single 1/L^2 master curve (indicated by a dashed line). The threshold value L_c is presumably defined by the energy balance between the bending energy required for breaking a film and the available turbulent energy of the ocean. The bending energy depends on the film geometry and on the mechanical properties of the weathered polymer. 
As shown by Brouzet et al <cit.>, for a fiber (1D), the threshold L_c is proportional to the fiber diameter d and varies as L_c= kE^1/4/(ρηϵ)^1/8d where E is the Young modulus of the brittle polymer fiber, ρ and η are the mass density and viscosity of water, ϵ is the mean turbulent dissipation rate and k is a prefactor in the order of 1. In two dimensions, the expression for the threshold L_c is more complex, since it depends both on the width ℓ and thickness h of the film. However, based on 2D mechanics, one can show that the order of magnitude and h-dependency for L_c remain the same as in 1D, while the prefactor slightly varies with ℓ. Reasonable assumptions on film geometry, mechanical properties of weathered brittle plastic and highly turbulent ocean events, such as made by Brouzet et al. <cit.> allow us to evaluate that L_c/h ≈ 100. For films of typical thicknesses lying between 10 and 50 μm, this gives a position of the peak between 1 and 5 mm in good agreement with the field data represented in Fig. <ref>. It is also interesting to discuss the power law exponent value exhibited by both standard and sugar-lump models at large MP sizes. In time-invariant models, the theoretical exponent actually varies with the dimensionality of the considered objects (fibres, films, lumps) ranging from -1 (fibres) to -3 (lumps). As expected, when the objects dimensionality is fixed, the value -2 observed in Fig. <ref> for the sugar-lump model is due to the hypothesis of film-like pieces breaking along their larger dimension only, keeping their thickness constant. In the same way, regarding the laboratory experiments performed on glass fibres <cit.>, the large MP distribution is compatible in the long-time limit with the expected -1 power law [provided that, of course, the depletion of very large objects that originates from the absence of feeding is disregarded.]. Coming back to the field data as displayed in Fig. <ref>, one can note that for large MP all data points collapse onto a single 1/L^2 master curve. This suggests that either most collected waste comprises film-like objects breaking along their larger dimension only, or, perhaps more likely, that one collects a mixture of all three types of objects leading to an “average” exponent, obviously lying somewhere between -1 and -3, that turns out to be close to -2. The total abundance N_tot of fragments (all sizes included) as a function of time is represented in Fig. <ref> for both the sugar lump and standard models. In the latter case, the abundance is simply described by an exponential law: N_tot = [2^t+1-1] N_0 when the ocean is fed by a constant number N_0 of (nearly identical) large fragments per iteration (Eq. <ref>, Section <ref>). The sugar lump model predicts a time evolution which deviates from the standard model prediction: The increase of total abundance slows down with time, due to the hindering of smaller fragments production, and the effect is all the more pronounced for larger threshold parameters L_c, as could have been expected. In the realistic case where L_c/h ≈ 100, the increasing rate of fragments production becomes very small for the largest feeding times, as can be observed in Fig. <ref> which shows that the number of MP would be multiplied every ten years by only a factor 2, compared to a factor of 1000 in the standard model. These theoretical results might explain why no clear temporal trend is observed in the field data <cit.>. 
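As a rough numerical cross-check of the L_c/h ≈ 100 estimate, the dimensionless prefactor in the scaling above can be evaluated with plausible input values. The numbers below (modulus of a weathered brittle polymer, water properties, a dissipation rate representative of energetic ocean turbulence, k = 1) are assumptions chosen for illustration, so only the order of magnitude is meaningful.

# Order-of-magnitude check of L_c ~ k * E**(1/4) / (rho*eta*eps)**(1/8) * d.
E = 1.0e9        # Young modulus of the weathered, brittle polymer [Pa] (assumed)
rho = 1.0e3      # water density [kg/m^3]
eta = 1.0e-3     # water dynamic viscosity [Pa s]
eps = 1.0e-2     # turbulent dissipation rate for energetic events [m^2/s^3] (assumed)
k = 1.0          # prefactor of order 1

ratio = k * E ** 0.25 / (rho * eta * eps) ** 0.125
print(f"L_c / d ~ {ratio:.0f}")

With these inputs the ratio comes out at a few hundred; combined with a prefactor k somewhat below one and a lower modulus for heavily weathered films, this is consistent with the L_c/h ≈ 100 value used in the simulations.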
§.§ Role of the mesh size on the size distribution and on its temporal evolution If one wants to go further in confronting models to field data, one needs to take into account that the experimental collection of particles in the environment always involves an observation window, and in particular a lower size limit L_mesh, e.g. due to the mesh size of the net used during ocean campaigns. The very existence of a lower limit leads to the appearance of transitory and steady-state regimes for the temporal evolution of the number of collected particles, as will be shown below. In the standard model case, when the feeding and breaking process starts, larger size classes are first filled, while smaller size classes are still empty (Fig. <ref>, Section <ref>). As long as the smaller fragments produced by the breaking process are larger than the lower size limit L_mesh of the collection tool, the number of collected fragments increases with time, de facto producing a transitory regime in the observed total abundance. The size of the smaller fragments reaches L_mesh after a given number of fragmentation steps corresponding to the duration of the transitory regime: t_c≈2ln(L_init/L_mesh)/ln2 where L_init is the initial largest dimension of the plastic fragments released into the ocean. From this time onward, both the size distribution and total number of collected fragments in the observation window no longer change. Even though the production of fragments smaller than L_mesh continues to occur, as well as the continuous feeding of large-scale objects, one therefore observes a steady-state regime. This is illustrated in Fig. <ref> for two different values of the mesh size L_mesh (filled symbols ∙ and ▪). For the sugar lump model case, one needs to also consider the size threshold length scale L_c, below which fragmentation is inhibited. When L_c is much smaller than L_mesh, the threshold length L_c is not in the observation window, hence the analysis is the same as in the standard case. At contrast, when L_c is close to L_mesh or larger, the transitory regime is expected to exhibit two successive time dependencies. This behavior is displayed in Fig. <ref> (open symbols ∘ and □) for the same mesh size values as in the standard model for comparison. At short times, since the smaller fragment size has not reached yet the breaking threshold L_c, the number of collected fragments follows the same law as in the standard case. When the smaller fragments get close to the size L_c, however, the inhibition of their breaking creates an accumulation of fragments around L_c, hence the abundance peak. As a consequence, the increase in the total number of fragments slows down. Since the abundance peak position shifts towards smaller values with time (Fig. <ref>, inset) albeit slowly, a final stationary state should be observed when the abundance peak position becomes significantly smaller than L_mesh. As shown in Fig. <ref>, this occurs within the explored time window for large L_mesh (∘), but the stationary state is not observed for small L_mesh (□), presumably because our simulation has not explored times large enough. When the steady-state regime is reached, the number of fragments above L_mesh, i.e. likely to be collected, remains constant with a value larger than that of the standard model, due to the overshoot induced by the accumulation on the right-hand side of the peak. 
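For orientation, the duration of the transitory regime in the standard-model case can be evaluated directly from the expression for t_c given above, using the typical sizes quoted in this paper (centimetric initial debris, a 330 μm mesh) and one fragmentation step per year; the result anticipates the ten-year estimate discussed in the next paragraph.

import math

L_init = 1.0e-2    # initial debris size: ~1 cm
L_mesh = 330.0e-6  # collection mesh size: 330 micrometres

t_c = 2.0 * math.log(L_init / L_mesh) / math.log(2.0)
print(f"t_c ~ {t_c:.0f} steps, i.e. ~{t_c:.0f} years at one fragmentation per year")

# consistency check: after 10 yearly splits the typical size is divided by 2**5 ~ 30
print(f"size reduction after 10 steps: about a factor of {2 ** (10 / 2):.0f}")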
Let us recall that the characteristic fragmentation time, defined as the typical duration for a piece to break into two, has been evaluated at one year. In the case of the standard model, this means that the size of each fragment is reduced by a factor of about 30 in 10 years. Therefore, starting with debris sizes of the order of a centimeter, small MPs with a typical size equal to the mesh size (330 μm in Fig. <ref>) are obtained within only 10 years. Thus, 10 years correspond to the duration of the transitory regime t_c established in Eq. (<ref>), and the oceans should be well into the steady-state regime, since pollution started in the 1950s. It is however no longer controversial nowadays that the standard (steady-state) model fails to describe the size distribution of the field data. On the contrary, the sugar lump model predicts the existence of an abundance peak, in agreement with what is observed during collection campaigns. This peak is due to the accumulation of fragments whose size is on the order of the breaking threshold L_c. As discussed in paragraph <ref>, the failure threshold L_c can be soundly estimated to lie between 1 and 5 mm. Comparison with field data then corresponds to the case where L_c is about ten times larger than the mesh size L_mesh. As just shown in Fig. <ref>, this implies a drastic increase in the duration of the transitory regime, which can be estimated to be above 100 years. These considerations lead us to the important conclusion that we are still in the transitory regime today. Moreover, the sugar lump model also implies that the total abundance is correctly estimated through field data collection, i.e. that it is not biased by the mesh size. Because the peak position slowly shifts towards smaller sizes, the mesh size will eventually play a role, but at some much later point in time. Finally, let us recall that this paper does not take delamination processes into account, so the previous statement only holds for millimetric debris, that is to say debris produced through fragmentation; micrometric debris might exhibit a completely different behavior and is probably much more numerous. §.§ Constant versus exponential feeding In the results discussed in Section <ref>, it was assumed that the rate of waste feeding in the ocean is constant with time. However, it is common knowledge that the production of plastics has increased significantly since the 1950s. Geyer et al <cit.> have shown that the discarded waste follows the same trend. Data from the above-quoted article have been extracted and fitted in Fig. <ref> and Fig. <ref> with exponential laws N = N_0(1+τ)^t, where τ represents the annual growth rate of plastic production and of discarded waste, respectively. For plastic production, the annual growth rate is found to be about 16% until 1974, the year of the oil crisis, and close to 6% after 1974, with, perhaps, an even further decrease of the rate in recent years. Not unexpectedly, the same trends are found when considering the discarded waste, with growth rates of, respectively, 17% and 5%. In order to discuss the effects of an increasing waste feed into the ocean, we inject for simplicity a single exponential with an intermediate rate of 7% into the two models. When comparing this feeding law with the standard fragmentation law [2^t+1-1] N_0, one easily concludes that the total number of plastic items in the ocean is mainly determined by the fragmentation rate, regardless of the feeding rate. 
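This dominance of fragmentation over feeding is immediate to check numerically: with an annual growth rate of 7% the feed grows by barely one order of magnitude over four decades, whereas the number of fragments generated by yearly splitting grows by twelve. A two-line Python check (illustrative only) is shown below.

tau, t = 0.07, 40
print(f"feeding growth after {t} years: x{(1 + tau) ** t:.0f}")       # ~15
print(f"standard fragmentation growth: x{2 ** (t + 1) - 1:.2e}")      # ~2e12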
In order to verify what happens in the case of the sugar lump model, where the fragmentation process is hindered, the size distributions for both feeding hypotheses are numerically compared in Figs. <ref> and <ref>, respectively after 14 and 40 years. It can be observed that at short times, the size distribution is very little altered by the change in feeding. At longer times, a significant increase of the amount of the largest particles can be observed, while the amount of small particles increases much less. Besides, the size position of the abundance peak is hardly shifted. The total amount of fragments is represented in Fig. <ref> for the standard and sugar lump models for the two feeding cases considered. For exponential feeding, the sugar lump model still predicts a significant decrease in the rate of fragment generation over time, whereas one could have thought that exponential feeding would cancel out this slowdown. The conclusions drawn above (Section <ref>) therefore remain valid in the more realistic case of an exponential feeding. Finally, one should keep in mind that, although the feeding rate is a reasonable indicator of plastic pollution, since it describes the evolution over time of the total mass of plastics present in the ocean, it is not enough to properly describe plastic pollution. For a given mass, the number (hence the size) of particles produced is the major factor in assessing potential impacts. Indeed, the smaller the size, the larger the particle number concentration and the larger their specific area, hence their adsorption ability and the ensuing eco-toxicity. It is shown here that the mass of waste roughly doubles every 10 years, whereas the number of particles doubles every year, making fragmentation the main factor driving plastic pollution and impacts. Many studies are devoted to establishing a mass balance and understanding the fluxes of plastic waste <cit.>, but even in the case of a drastic and immediate reduction of waste production, plastic pollution and its impacts will affect ocean life for many years to come, due to fragmentation. § CONCLUSION The generalist model presented here is based on a few sound physical assumptions and sheds new light on global temporal trends in the distribution of microplastics at the surface of the oceans. The model shows that the existence of a physical size threshold below which fragmentation is strongly inhibited leads to the accumulation of fragments at a given size, in line with what is observed in the field data. In other words, if one does not collect particles in the range 100 μm–1 mm, it is because only a few of them are actually generated by fragmentation at this scale. One would not necessarily need to invoke any other mechanism or bias, such as ingestion by living organisms <cit.> or the mesh size of collection nets <cit.>, to explain the field data for floating debris. As a consequence, the observed distribution does reflect, in our opinion, the real distribution of MPs at the surface of the ocean, down to 100 μm. Besides, the sugar lump model implies a slowdown in the rate of MP production by fragmentation, due to the fact that fragmentation is inhibited when particles approach the threshold size. This may explain the absence of a clear increase in MP numbers in different geographical areas reported in field observations <cit.>, <cit.>. 
Two other general facts have been pointed out in this paper: * for large MP, the predicted size distribution follows a power law, whose exponent depends on the dimensionality of the object (-1 for a fibre, -2 for a film and -3 for a lump). It is therefore worth sorting collected objects according to their geometry, as is done for instance when fibres are separated from 2D objects <cit.>. It is however interesting to note that, when the objects are not sorted in this way, an “average value” of -2 is found for the exponent. * the model takes into account an exponentially-increasing waste feeding rate. We have fitted the plastic production since the 1950s and found that there is not one but two exponential laws, the second one, slower than the first one, being visible after the oil crisis in 1974. Comparing this feeding to the exponential fragmentation rate, we show that the number of fragments is mainly determined by the fragmentation process, regardless of the feeding details. To go further and estimate absolute values of MP concentrations over the whole range of sizes, it would be necessary, on the one hand, to take delamination into account in order to obtain the small-particle distribution. On the other hand, one should also be aware of the spatial heterogeneity of particle concentrations, and therefore an interesting development could be to combine fragmentation with flow models developed for instance in Refs. <cit.>. § SUPPORTING INFORMATION §.§ Standard model In this model, as pictorially represented in Fig. <ref>, the ocean is fed at each iteration n with a fixed number a_0 of large 2D-like objects, mimicking plastic films. Neglecting size and shape dispersity for convenience, all 0^th-generation objects are assumed to be large square platelets of lateral size L_init and thickness h, with L_init≫ h. Between consecutive iteration steps, fragmentation produces p^th-generation objects, by splitting in two equal parts (p-1)^th-generation objects, thus generating square platelets when p is even, but rectangular platelets with aspect ratio 2:1 for odd p. If size is measured by the diagonal, a p^th-generation object has size √(2)L_init/2^p/2 (even p) or √(5)L_init/2^(p+1)/2 (odd p). With size classes described by the number of p^th-generation objects at iteration step n, C(n,p), the filling law of size classes is: [ C(n,0) = a_0 ; C(n,p) = 0 if p>n; C(n,p) = 2C(n-1,p-1) if 1≤ p≤ n ] The set of equations (<ref>) is readily solved: C(n,p)=2^pa_0 for 0≤ p≤ n, and C(n,p)=0 for p>n. Since size L scales with generation index p as 2^-p/2, the steady-state scaling for the filling of size classes is C∝ L^-2. The cumulative abundance S_n≡∑_pC(n,p) at iteration step n is also easily obtained: S_n=[2^n+1-1]a_0, and is displayed as a dashed line in Figs. <ref> and <ref>. As noticed in Ref. <cit.>, where experimental data and model predictions are matched together, the standard model fails for small objects, and this occurs when a (nearly) cubic shape is reached. Since the typical (lateral) size of p^th-generation objects is ≈ L_init/2^p/2, the limit is reached for p_max≈2log(L_init/h)/log2 that is to say in about 20 generations with the rough estimate L_init/h=10^3. The set of equations describing the size-class filling law has to be altered to take this limit into account. 
Assuming for simplicity that p_max-generation objects cannot be fragmented anymore (“atomic” fragments), this set of equations becomes: [ C(n,0) = a_0 ; C(n,p) = 0 if p>n or p>p_max; C(n,p) = 2C(n-1,p-1) if 1≤ p<p_max and p ≤ n; C(n,p_max) = C(n-1,p_max)+2C(n-1,p_max-1) if n>p_max ] As shown by the explicit solution, Eq. (<ref>) below, the last line in this set of equations leads to an accumulation of “atomic” fragments (see also Fig. <ref> for a pictorial representation of this feature) [ C(n,p) = 2^pa_0 if 0≤ p≤ n<p_max; C(n,p_max) = (n+1-p_max)2^p_maxa_0 if n≥ p_max; C(n,p) = 0 for other cases ] associated to a significant (exponential to linear) slowing down of the cumulative abundance: S_n=[2^p_max(2+n-p_max)-1]a_0 for iteration steps n≥ p_max. §.§ Standard model with inflation As a first extension of the standard model, inflation in the feeding of the ocean with large 2D-like objects is now considered. Taking simultaneously into account the “atomic” nature of small fragments beyond p_max generations, the size-class filling set of equations (<ref>) has to be replaced by: [ C(n,0) = a_0(1+τ)^n ; C(n,p) = 0 if p>n or p>p_max; C(n,p) = 2C(n-1,p-1) if 1≤ p<p_max and p ≤ n; C(n,p_max) = C(n-1,p_max)+2C(n-1,p_max-1) if n>p_max ] Size classes are now described by C(n,p)=2^p(1+τ)^n-pa_0 for 0≤ p≤ n as long as the generation index p remains smaller than p_max and C(n,p_max)=2^p_max[(1+τ)^n-p_max+1-1]a_0/τ for n≥ p_max. Whereas the filling of the size class associated to “atomic” fragments was linear in n without inflation, it becomes here exponential. Consequently, the cumulative abundance, definitely slowed down, remains exponential in n for n>p_max: S_n={(1+τ)^n[(2/1+τ)^p_max-1/1-τ]+2^p_max(1+τ)^n-p_max+1-1/τ}a_0 As long as the “atomic limit” is not reached, the cumulative abundance exhibits a simpler form, namely: S_n=[2^n+1-(1+τ)^n+1]a_0/1-τ that does not significantly differ from Eq. (<ref>). The time-invariant features of the size distribution are nevertheless modified in two respects (see Fig. <ref>): * Inflation spoils the strict time-invariant feature previously observed for the size distribution N(L); * A (nearly) time-invariant behaviour remains as far as scaling is concerned, since N∝1/L^ν, but ν does depend, albeit rather weakly, on the time index n, while being significantly smaller than 2. Fitting data to a power law, an exponent ν close to 1.8 is obtained for inflation τ=7%. §.§ Sugar lump model Taking inspiration from the standard model, Section <ref>, at each iteration the ocean is fed with large parallelepipedic fragments of length L, width ℓ and thickness h, where h is much smaller than the other two dimensions and length L is, by convention, larger than width ℓ. Some size dispersity is introduced when populating the largest size class, by randomly distributing L in the interval [0.9L_init, L_init], and ℓ in [0.7L_init, 0.9L_init], but h is kept fixed. The number of objects feeding the system can be controlled at each iteration step, and two simple limits have been investigated: Constant, or exponentially-growing feeding rates, mimicking two variants of the Standard model, Sections <ref> and <ref>, respectively. Size-classes evenly sampling (in logarithmic scale) the full range of L/h, [1, L_init/h] are populated by sorting into the proper size class the fragments present in the system. 
Except for the 0^th (initialisation) step, these fragments are either 0^th-generation fragments just introduced into the system, obviously belonging to the largest size class, or g-generation fragments (g≥1) that have been "weathered" during the time step from step n to step n+1 and then split, with an L-dependent efficiency, into two smaller fragments. As tentatively illustrated in Fig. <ref>, a special feature of the model is that the generation (g) and size-class (p) indices have to be distinguished: in contrast to the standard model, although for a given fragment a "weathering" event (n→ n+1) is always associated with an "ageing" event (g increased by one), it is not always associated with populating one or two lower size classes (and simultaneously decreasing by 1 the abundance of the considered size class), because the splitting process is not 100% efficient. Since keeping track of abundances in terms of time (n), age (g) and size (p) is computationally demanding for exponentially growing populations, our simulations have been limited to, at most, n=g=40. The number of distinct size classes has also been limited to 28, as this corresponds to the number of size classes reported in Ref. <cit.>.
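For readers who wish to reproduce the qualitative behaviour of the sugar lump model, the self-contained Python sketch below evolves a fragment population with constant feeding, the erf-type breaking efficiency and logarithmic size classes. The parameter values (feed per step, efficiency width, number of steps and classes) are illustrative assumptions and not the exact settings behind the figures; with a threshold L_c ≫ h, the binned abundances develop the expected peak around L_c.

import math, random

L_INIT, L_C, WIDTH = 1000.0, 100.0, 20.0   # lengths in units of the thickness h (assumed values)
A0, N_STEPS, N_CLASSES = 50, 20, 28        # feed per step, iterations, size classes (assumed)

def efficiency(L):                          # erf-based sigmoid: ~1 above L_c, ~0 below
    return 0.5 * (1.0 + math.erf((L - L_C) / WIDTH))

def feed():                                 # 0-th generation films with mild size dispersity
    return [(random.uniform(0.9, 1.0) * L_INIT,
             random.uniform(0.7, 0.9) * L_INIT) for _ in range(A0)]

def step(population):
    new_pop = feed()
    for L, l in population:
        L, l = max(L, l), min(L, l)
        if L < 2.0 or random.random() > efficiency(L):
            new_pop.append((L, l))          # "atomic" or simply unbroken: kept as is
            continue
        for _ in range(100):                # break perpendicular to the largest dimension
            rho = random.uniform(0.0, 0.5)
            a, b = rho * L, (1.0 - rho) * L
            if min(a, b) >= 1.0:            # reject pieces thinner than h
                new_pop += [(max(a, l), min(a, l)), (max(b, l), min(b, l))]
                break
        else:
            new_pop.append((L, l))          # no admissible split found
    return new_pop

population = []
for _ in range(N_STEPS):
    population = step(population)

hist = [0] * N_CLASSES                      # logarithmic size classes over [1, L_INIT]
for L, _ in population:
    i = min(int(N_CLASSES * math.log(L) / math.log(L_INIT)), N_CLASSES - 1)
    hist[i] += 1
print("total fragments:", len(population))
print("abundance per class (small to large):", hist)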
http://arxiv.org/abs/2307.07662v1
20230714235449
MPDIoU: A Loss for Efficient and Accurate Bounding Box Regression
[ "Ma Siliang", "Xu Yong" ]
cs.CV
[ "cs.CV", "cs.AI" ]
Siliang Ma and Yong Xu (Institute of Computer Science and Engineering, South China University of Technology, Guangzhou 510000, China) Bounding box regression (BBR), an important step in object localization, has been widely used in object detection and instance segmentation. However, most of the existing loss functions for bounding box regression cannot be optimized when the predicted box has the same aspect ratio as the groundtruth box but different width and height values. In order to tackle the issues mentioned above, we fully explore the geometric features of horizontal rectangles and propose a novel bounding box similarity comparison metric, MPDIoU, based on minimum point distance, which contains all of the relevant factors considered in the existing loss functions, namely overlapping or non-overlapping area, central point distance, and deviation of width and height, while simplifying the calculation process. On this basis, we propose a bounding box regression loss function based on MPDIoU, called ℒ_MPDIoU. Experimental results show that applying the MPDIoU loss function to state-of-the-art instance segmentation (e.g., YOLACT) and object detection (e.g., YOLOv7) models trained on PASCAL VOC, MS COCO, and IIIT5k outperforms existing loss functions. Keywords: object detection, instance segmentation, bounding box regression, loss function § INTRODUCTION Object detection and instance segmentation are two important problems in computer vision, which have attracted considerable research interest during the past few years. Most of the state-of-the-art object detectors (e.g., the YOLO series <cit.>, Mask R-CNN <cit.>, Dynamic R-CNN <cit.> and DETR <cit.>) rely on a bounding box regression (BBR) module to determine the position of objects. Based on this paradigm, a well-designed loss function is of great importance for the success of BBR. So far, most of the existing loss functions for BBR fall into two categories: ℓ_n-norm based loss functions and Intersection over Union (IoU)-based loss functions. However, most of the existing loss functions for bounding box regression can take the same value under different prediction results, which decreases the convergence speed and accuracy of bounding box regression. Therefore, considering the advantages and drawbacks of the existing loss functions for bounding box regression, and inspired by the geometric features of horizontal rectangles, we design a novel loss function ℒ_MPDIoU based on the minimum point distance for bounding box regression, and use MPDIoU as a new measure to compare the similarity between the predicted bounding box and the groundtruth bounding box in the bounding box regression process. We also provide an easily implemented solution for calculating MPDIoU between two axis-aligned rectangles, allowing it to be used as an evaluation metric and allowing MPDIoU to be incorporated into state-of-the-art object detection and instance segmentation algorithms; we test on mainstream object detection, scene text spotting and instance segmentation datasets such as PASCAL VOC <cit.>, MS COCO <cit.>, IIIT5k <cit.> and MTHv2 <cit.> to verify the performance of our proposed MPDIoU. The contributions of this paper can be summarized as follows: 1. 
We considered the advantages and disadvantages of the existing IoU-based losses and ℓ_n-norm losses, and then proposed an IoU loss based on minimum point distance, called ℒ_MPDIoU, to tackle the issues of existing losses and obtain a faster convergence speed and more accurate regression results. 2. Extensive experiments have been conducted on object detection, character-level scene text spotting and instance segmentation tasks. Outstanding experimental results validate the superiority of the proposed MPDIoU loss. Detailed ablation studies exhibit the effects of different settings of loss functions and parameter values. § RELATED WORK §.§ Object Detection and Instance Segmentation During the past few years, a large number of object detection and instance segmentation methods based on deep learning have been proposed. In summary, bounding box regression has been adopted as a basic component in many representative object detection and instance segmentation frameworks <cit.>. In deep models for object detection, the R-CNN series <cit.>, <cit.>, <cit.> adopts two or three bounding box regression modules to obtain higher localization accuracy, while the YOLO series <cit.> and SSD series <cit.> adopt one to achieve faster inference. RepPoints <cit.> predicts several points to define a rectangular box. FCOS <cit.> locates an object by predicting the Euclidean distances from the sampling points to the top, bottom, left and right sides of the groundtruth bounding box. As for instance segmentation, PolarMask <cit.> predicts the length of n rays from the sampling point to the edge of the object in n directions to segment an instance. Other detectors, such as RRPN <cit.> and R2CNN <cit.>, add rotation angle regression to detect arbitrarily oriented objects for remote sensing and scene text detection. Mask R-CNN <cit.> adds an extra instance mask branch on Faster R-CNN <cit.>, while the recent state-of-the-art YOLACT <cit.> does the same thing on RetinaNet <cit.>. To sum up, bounding box regression is one key component of state-of-the-art deep models for object detection and instance segmentation. §.§ Scene Text Spotting In order to solve the problem of arbitrary-shape scene text detection and recognition, ABCNet <cit.> and its improved version ABCNet v2 <cit.> use BezierAlign to transform arbitrary-shape text into regular text. These methods achieve great progress by using a rectification module to unify detection and recognition into end-to-end trainable systems. <cit.> propose RoI Masking to extract the features for arbitrarily-shaped text recognition. Similarly, <cit.> uses a faster detector for scene text detection. AE TextSpotter <cit.> uses the results of recognition to guide detection through a language model. Inspired by <cit.>, <cit.> proposed a scene text spotting method based on a transformer, which provides instance-level text segmentation results. §.§ Loss Function for Bounding Box Regression Early on, the ℓ_n-norm loss function was widely used for bounding box regression; it is simple but sensitive to various scales. In YOLO v1 <cit.>, square roots for w and h are adopted to mitigate this effect, while YOLO v3 <cit.> uses 2-wh. In order to better measure the divergence between the groundtruth and the predicted bounding boxes, IoU loss has been used since UnitBox <cit.>. To ensure training stability, Bounded-IoU loss <cit.> introduces an upper bound on IoU. 
For training deep models in object detection and instance segmentation, IoU-based metrics are suggested to be more consistent than the ℓ_n-norm <cit.>. The original IoU represents the ratio of the intersection area and the union area of the predicted bounding box and the groundtruth bounding box (as Figure <ref>(a) shows), which can be formulated as IoU=|ℬ_gt⋂ℬ_prd|/|ℬ_gt⋃ℬ_prd|, where ℬ_gt denotes the groundtruth bounding box and ℬ_prd denotes the predicted bounding box. As we can see, the original IoU only considers the overlap of the two bounding boxes and cannot distinguish the cases where the two boxes do not overlap. As equation <ref> shows, if |ℬ_gt⋂ℬ_prd|=0, then IoU(ℬ_gt,ℬ_prd)=0. In this case, IoU cannot reflect whether the two boxes are in the vicinity of each other or very far from each other. GIoU <cit.> was then proposed to tackle this issue. GIoU can be formulated as GIoU=IoU-|𝒞 -ℬ_gt∪ℬ_prd|/|𝒞|, where 𝒞 is the smallest box covering ℬ_gt and ℬ_prd (as shown in the black dotted box in Figure <ref>(a)), and |𝒞| is the area of box 𝒞. Due to the introduction of the penalty term in the GIoU loss, the predicted box will move toward the target box in nonoverlapping cases. GIoU loss has been applied to train state-of-the-art object detectors, such as YOLO v3 and Faster R-CNN, and achieves better performance than MSE loss and IoU loss. However, GIoU loses effectiveness when the predicted bounding box is completely covered by the groundtruth bounding box. In order to deal with this problem, DIoU <cit.> was proposed, which takes into consideration the distance between the central points of the predicted bounding box and the groundtruth bounding box. DIoU can be formulated as DIoU=IoU-ρ^2(ℬ_gt,ℬ_prd)/𝒞^2, where ρ^2(ℬ_gt,ℬ_prd) denotes the squared Euclidean distance between the central points of the predicted bounding box and the groundtruth bounding box (as the red dotted line shown in Figure <ref>(b)), and 𝒞^2 denotes the squared diagonal length of the smallest enclosing rectangle (as the black dotted line shown in Figure <ref>(b)). As we can see, ℒ_DIoU directly minimizes the distance between the central points of the predicted bounding box and the groundtruth bounding box. However, when the central point of the predicted bounding box coincides with the central point of the groundtruth bounding box, it degrades to the original IoU. To address this issue, CIoU was proposed, which considers both the central point distance and the aspect ratio. CIoU can be written as follows: CIoU=IoU-ρ^2(ℬ_gt,ℬ_prd)/𝒞^2-α V, V=(4/π^2)(arctan(w^gt/h^gt)-arctan(w^prd/h^prd))^2, α=V/(1-IoU+V). However, the aspect ratio in CIoU is defined as a relative value rather than an absolute value. To address this issue, EIoU <cit.> was proposed based on DIoU, which is defined as follows: EIoU=DIoU-ρ^2(w_prd,w_gt)/(w^c)^2-ρ^2(h_prd,h_gt)/(h^c)^2. However, as Figure <ref> shows, the loss functions mentioned above for bounding box regression lose effectiveness when the predicted bounding box and the groundtruth bounding box have the same aspect ratio but different width and height values, which limits the convergence speed and accuracy. Therefore, we design a novel loss function called ℒ_MPDIoU for bounding box regression that retains the advantages of ℒ_GIoU <cit.>, ℒ_DIoU <cit.>, ℒ_CIoU <cit.> and ℒ_EIoU <cit.>, while achieving higher efficiency and accuracy. Indeed, the geometric properties of bounding box regression are not fully exploited in existing loss functions. 
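To make the comparison of these variants concrete, the sketch below computes IoU, GIoU and DIoU for two axis-aligned boxes given as (x1, y1, x2, y2), following the formulas quoted above; the CIoU and EIoU penalties would be added analogously. This is an illustrative sketch, not the reference implementation of the cited papers, and the example boxes are arbitrary.

def iou_family(gt, prd, eps=1e-9):
    gx1, gy1, gx2, gy2 = gt
    px1, py1, px2, py2 = prd
    inter_w = max(0.0, min(gx2, px2) - max(gx1, px1))
    inter_h = max(0.0, min(gy2, py2) - max(gy1, py1))
    inter = inter_w * inter_h
    union = (gx2 - gx1) * (gy2 - gy1) + (px2 - px1) * (py2 - py1) - inter
    iou = inter / (union + eps)
    # smallest enclosing box C
    cx1, cy1, cx2, cy2 = min(gx1, px1), min(gy1, py1), max(gx2, px2), max(gy2, py2)
    c_area = (cx2 - cx1) * (cy2 - cy1)
    giou = iou - (c_area - union) / (c_area + eps)
    # squared centre-point distance, normalised by the squared diagonal of C
    rho2 = ((gx1 + gx2 - px1 - px2) ** 2 + (gy1 + gy2 - py1 - py2) ** 2) / 4.0
    diag2 = (cx2 - cx1) ** 2 + (cy2 - cy1) ** 2
    diou = iou - rho2 / (diag2 + eps)
    return iou, giou, diou

print(iou_family((10, 10, 50, 50), (30, 30, 80, 90)))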
Therefore, we propose the MPDIoU loss, which minimizes the distances between the top-left and bottom-right points of the predicted bounding box and the groundtruth bounding box, for better training of deep models for object detection, character-level scene text spotting and instance segmentation. § INTERSECTION OVER UNION WITH MINIMUM POINTS DISTANCE After analyzing the advantages and disadvantages of the IoU-based loss functions mentioned above, we consider how to improve the accuracy and efficiency of bounding box regression. Generally speaking, we use the coordinates of the top-left and bottom-right points to define a unique rectangle. Inspired by the geometric properties of bounding boxes, we design a novel IoU-based metric named MPDIoU that directly minimizes the distances between the top-left and bottom-right points of the predicted bounding box and the groundtruth bounding box. The calculation of MPDIoU is summarized in Algorithm <ref>. In summary, our proposed MPDIoU simplifies the similarity comparison between two bounding boxes and can handle both overlapping and nonoverlapping bounding box regression. Therefore, MPDIoU can be a proper substitute for IoU in all performance measures used in 2D/3D computer vision tasks. In this paper, we only focus on 2D object detection and instance segmentation, where we can easily apply MPDIoU as both metric and loss. The extension to non-axis-aligned 3D cases is left as future work. §.§ MPDIoU as Loss for Bounding Box Regression In the training phase, each bounding box ℬ_prd =[x^prd,y^prd,w^prd,h^prd]^T predicted by the model is forced to approach its groundtruth box ℬ_gt = [x^gt,y^gt,w^gt,h^gt]^T by minimizing the loss function below: ℒ=min_Θ∑_ℬ_gt∈𝔹_gtℒ(ℬ_gt,ℬ_prd|Θ), where 𝔹_gt is the set of groundtruth boxes, and Θ is the parameter set of the deep regression model. A typical form of ℒ is the ℓ_n-norm, for example, mean-square error (MSE) loss and Smooth-ℓ_1 loss <cit.>, which have been widely adopted in object detection <cit.>; pedestrian detection <cit.>; scene text spotting <cit.>; 3D object detection <cit.>; pose estimation <cit.>; and instance segmentation <cit.>. However, recent studies suggest that ℓ_n-norm-based loss functions are not consistent with the evaluation metric, that is, intersection over union (IoU), and instead propose IoU-based loss functions <cit.>. Based on the definition of MPDIoU in the previous section, we define the loss function based on MPDIoU as follows: ℒ_MPDIoU=1-MPDIoU As a result, all of the factors of existing loss functions for bounding box regression can be determined by the coordinates of four points. The conversion formulas are shown as follows: |C|=(max(x_2^gt,x_2^prd)-min(x_1^gt,x_1^prd))*(max(y_2^gt,y_2^prd)-min(y_1^gt,y_1^prd)), x_c^gt=(x_1^gt+x_2^gt)/2, y_c^gt=(y_1^gt+y_2^gt)/2, x_c^prd=(x_1^prd+x_2^prd)/2, y_c^prd=(y_1^prd+y_2^prd)/2, w_gt=x_2^gt-x_1^gt, h_gt=y_2^gt-y_1^gt, w_prd=x_2^prd-x_1^prd, h_prd=y_2^prd-y_1^prd, where |C| represents the area of the minimum enclosing rectangle covering ℬ_gt and ℬ_prd, (x_c^gt,y_c^gt) and (x_c^prd, y_c^prd) represent the coordinates of the central points of the groundtruth bounding box and the predicted bounding box, respectively, w_gt and h_gt represent the width and height of the groundtruth bounding box, and w_prd and h_prd represent the width and height of the predicted bounding box. 
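As an illustration of the definition above, the following sketch computes MPDIoU and its loss for two axis-aligned boxes given as (x1, y1, x2, y2): the corner-distance penalties are normalized by the squared diagonal of the input image, d^2 = w^2 + h^2, as in the formulation quoted here. The intersection/union part mirrors the earlier sketch; this is a hedged illustration rather than the authors' released code, and the example values are arbitrary.

def mpdiou(gt, prd, img_w, img_h, eps=1e-9):
    gx1, gy1, gx2, gy2 = gt
    px1, py1, px2, py2 = prd
    inter_w = max(0.0, min(gx2, px2) - max(gx1, px1))
    inter_h = max(0.0, min(gy2, py2) - max(gy1, py1))
    inter = inter_w * inter_h
    union = (gx2 - gx1) * (gy2 - gy1) + (px2 - px1) * (py2 - py1) - inter
    iou = inter / (union + eps)
    d2 = img_w ** 2 + img_h ** 2                      # squared image diagonal
    d1_sq = (px1 - gx1) ** 2 + (py1 - gy1) ** 2       # top-left corner distance, squared
    d2_sq = (px2 - gx2) ** 2 + (py2 - gy2) ** 2       # bottom-right corner distance, squared
    return iou - d1_sq / d2 - d2_sq / d2

def mpdiou_loss(gt, prd, img_w, img_h):
    return 1.0 - mpdiou(gt, prd, img_w, img_h)        # bounded between 0 and 3

gt, prd = (10, 10, 50, 50), (30, 30, 80, 90)
print(mpdiou(gt, prd, 640, 480), mpdiou_loss(gt, prd, 640, 480))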
From Eq (<ref>)-(<ref>), we can find that all of the factors considered in the existing loss functions can be determined by the coordinates of the top-left points and the bottom-right points, such as nonoverlapping area, central points distance, deviation of width and height, which means our proposed ℒ_MPDIoU not only considerate, but also simplifies the calculation process. According to Theorem <ref>, if the aspect ratio of the predicted bounding boxes and groundtruth bounding box are the same, the predicted bounding box inner the groundtruth bounding box has lower ℒ_MPDIoU value than the prediction box outer the groundtruth bounding box. This characteristic ensures the accuracy of bounding box regression, which tends to provide the predicted bounding boxes with less redudancy. We define one groundtruth bounding box as ℬ_gt and two predicted bounding boxes as ℬ_prd1 and ℬ_prd2. The width and height of the input image are w and h, respectively. Assume the top-left and bottom-right coordinates of ℬ_gt, ℬ_prd1 and ℬ_prd2 are (x_1^gt,y_1^gt,x_2^gt,y_2^gt), (x_1^prd1,y_1^prd1,x_2^prd1,y_2^prd1) and (x_1^prd2,y_1^prd2,x_2^prd2,y_2^prd2), then the width and height of ℬ_gt, ℬ_prd1 and ℬ_prd2 can be formulated as (w_gt=y_2^gt-y_1^gt, h_gt=x_2^gt-x_1^gt), (w_prd1=y_2^prd1-y_1^prd1, h_prd1=x_2^prd1-x_1^prd1) and (w_prd2=y_2^prd2-y_1^prd2, h_prd2=x_2^prd2-x_1^prd2). If w_prd1=k*w_gt and h_prd1=k*h_gt, w_prd2=1/k*w_gt and h_prd2=1/k*h_gt, where k>1 and k∈ N* The central points of the ℬ_gt, ℬ_prd1 and ℬ_prd2 are all overlap. Then GIoU(ℬ_gt, ℬ_prd1)=GIoU(ℬ_gt, ℬ_prd2), DIoU(ℬ_gt, ℬ_prd1)=DIoU(ℬ_gt, ℬ_prd2), CIoU(ℬ_gt, ℬ_prd1)=CIoU(ℬ_gt, ℬ_prd2), EIoU(ℬ_gt, ℬ_prd1)=EIoU(ℬ_gt, ℬ_prd2), but MPDIoU(ℬ_gt, ℬ_prd1)> MPDIoU(ℬ_gt, ℬ_prd2). ∵ IoU(ℬ_gt, ℬ_prd1) = w_gt*h_gt/w_prd1*h_prd1=w_gt*h_gt/k*w_gt*k*h_gt=1/k^2, IoU(ℬ_gt, ℬ_prd2) = w_prd2*h_prd2/w_gt*h_gt=1/k*w_gt*1/k*h_gt/w_gt*h_gt=1/k^2 ∴ IoU(ℬ_gt, ℬ_prd1)=IoU(ℬ_gt, ℬ_prd2) ∵ The central points of the ℬ_gt, ℬ_prd1 and ℬ_prd2 are all overlap. ∴ GIoU(ℬ_gt, ℬ_prd1)=IoU(ℬ_gt, ℬ_prd1)=1/k^2, GIoU(ℬ_gt, ℬ_prd2)=IoU(ℬ_gt, ℬ_prd2)=1/k^2, DIoU(ℬ_gt, ℬ_prd1)=IoU(ℬ_gt, ℬ_prd1)=1/k^2, DIoU(ℬ_gt, ℬ_prd2)=IoU(ℬ_gt, ℬ_prd2)=1/k^2. ∴ GIoU(ℬ_gt, ℬ_prd1)=GIoU(ℬ_gt, ℬ_prd2), DIoU(ℬ_gt, ℬ_prd1)=DIoU(ℬ_gt, ℬ_prd2). ∵ CIoU(ℬ_gt, ℬ_prd1)=IoU(ℬ_gt, ℬ_prd1)-(4/π ^2(arctanw_gt/h_gt-arctanw^prd1/h^prd1)^2)^2/1-IoU(ℬ_gt, ℬ_prd1)+4/π ^2(arctanw_gt/h_gt-arctanw^prd1/h^prd1)^2=1/k^2-(4/π ^2(arctanw_gt/h_gt-arctank*w_gt/k*h_gt)^2)^2/1-IoU(ℬ_gt, ℬ_prd1)+4/π ^2(arctanw_gt/h_gt-arctank*w_gt/k*h_gt)^2=1/k^2. CIoU(ℬ_gt, ℬ_prd2)=IoU(ℬ_gt, ℬ_prd2)-(4/π ^2(arctanw_gt/h_gt-arctanw^prd2/h^prd2)^2)^2/1-1/k^2+4/π ^2(arctanw_gt/h_gt-arctanw^prd2/h^prd2)^2=1/k^2-(4/π ^2(arctanw_gt/h_gt-arctan1/k*w_gt/1/k*h_gt)^2)^2/1-1/k^2+4/π ^2(arctanw_gt/h_gt-arctan1/k*w_gt/1/k*h_gt)^2=1/k^2. ∴ CIoU(ℬ_gt, ℬ_prd1)=CIoU(ℬ_gt, ℬ_prd2). ∵ EIoU(ℬ_gt, ℬ_prd1)=DIoU(ℬ_gt, ℬ_prd1)-(w_prd1-w_gt)^2/w_prd1^2-(h_prd1-h_gt)^2/h_prd1^2=1/k^2-(k*w_gt-w_gt)^2/k^2*w_gt^2-(k*h_gt-h_gt)^2/k^2*h_gt^2=4*k-2*k^2-1/k^2 EIoU(ℬ_gt, ℬ_prd2)=DIoU(ℬ_gt, ℬ_prd2)-(w_gt-w_prd2)^2/w_gt^2-(h_gt-h_prd2)^2/h_gt^2=1/k^2-(w_gt-1/kw_gt)^2/w_gt^2-(h_gt-1/kh_gt)^2/h_gt^2=4*k-2*k^2-1/k^2. ∴ EIoU(ℬ_gt, ℬ_prd1)=EIoU(ℬ_gt, ℬ_prd2). 
∵ MPDIoU(ℬ_gt, ℬ_prd1)=IoU(ℬ_gt, ℬ_prd1)-(x_1^prd1-x_1^gt)^2+(y_1^prd1-y_1^gt)^2+(x_2^prd1-x_2^gt)^2+(y_2^prd1-y_2^gt)^2/w^2+h^2=1/k^2-2*((1/2*k*w_gt-1/2*w_gt)^2+(1/2*k*h_gt-1/2*h_gt)^2)/w^2+h^2, MPDIoU(ℬ_gt, ℬ_prd2)=IoU(ℬ_gt, ℬ_prd2)-(x_1^prd2-x_1^gt)^2+(y_1^prd2-y_1^gt)^2+(x_2^prd2-x_2^gt)^2+(y_2^prd2-y_2^gt)^2/w^2+h^2=1/k^2-2*((1/2*w_gt-1/2k*w_gt)^2+(1/2*h_gt-1/2k*h_gt)^2)/w^2+h^2, ∴ MPDIoU(ℬ_gt, ℬ_prd1)-MPDIoU(ℬ_gt, ℬ_prd2)=1/4*(k-1)^2*(w_gt^2+h_gt^2)-1/4*(1-1/k)^2*(w_gt^2+h_gt^2)=1/4*(w_gt^2+h_gt^2)*((k-1)^2-(1-1/k)^2) ∵ (k-1)^2>(1-1/k)^2 ∴ MPDIoU(ℬ_gt, ℬ_prd1)> MPDIoU(ℬ_gt, ℬ_prd2). Considering the groundtruth bounding box, ℬ_gt is a rectangle with area bigger than zero, i.e. A^gt > 0. Alg. <ref> (1) and the Conditions in Alg. <ref> (6) respectively ensure the predicted area A^prd and intersection area ℐ are non-negative values, i.e. A^prd≥ 0 and ℐ≥ 0, ∀ℬ_prd∈ℝ^4. Therefore union area 𝒰>0 for any predicted bounding box ℬ_prd=(x_1^prd,y_1^prd,x_2^prd,y_2^prd)∈ℝ^4. This ensures that the denominator in IoU cannot be zero for any predicted value of outputs. In addition, for any values of ℬ_prd=(x_1^prd,y_1^prd,x_2^prd,y_2^prd)∈ℝ^4, the union area is always bigger than the intersection area, i.e. 𝒰≥ℐ. As a result, ℒ_MPDIoU is always bounded, i.e. 0≤ℒ_MPDIoU< 3, ∀ℬ_prd∈ℝ^4. ℒ_MPDIoU behaviour when IoU = 0: For MPDIoU loss, we have ℒ_MPDIoU =1-MPDIoU=1+d_1^2/d^2+d_2^2/d^2-IoU. In the case of ℬ_gt and ℬ_prd do not overlap, which means IoU=0, MPDIoU loss can be simplified to ℒ_MPDIoU =1-MPDIoU=1+d_1^2/d^2+d_2^2/d^2. In this case, by minimizing ℒ_MPDIoU, we actually minimize d_1^2/d^2+d_2^2/d^2. This term is a normalized measure between 0 and 1, i.e. 0≤d_1^2/d^2+d_2^2/d^2< 2. § EXPERIMENTAL RESULTS We evaluate our new bounding box regression loss ℒ_MPDIoU by incorporating it into the most popular 2D object detector and instance segmentation models such as YOLO v7 <cit.> and YOLACT <cit.>. To this end, we replace their default regression losses with ℒ_MPDIoU , i.e. we replace ℓ_1-smooth in YOLACT <cit.> and ℒ_CIoU in YOLO v7 <cit.>. We also compare the baseline losses against ℒ_GIoU. §.§ Experimental Settings The experimental environment can be summarized as follows: the memory is 32GB, the operating system is windows 11, the CPU is Intel i9-12900k, and the graphics card is NVIDIA Geforce RTX 3090 with 24GB memory. In order to conduct a fair comparison, all of the experiments are implemented with PyTorch <cit.>. §.§ Datasets We train all object detection and instance segmentation baselines and report all the results on two standard benchmarks, i.e. the PASCAL VOC <cit.> and the Microsoft Common Objects in Context (MS COCO 2017) <cit.> challenges. The details of their training protocol and their evaluation will be explained in their own sections. PASCAL VOC 2007&2012: The Pascal Visual Object Classes (VOC) <cit.> benchmark is one of the most widely used datasets for classification, object detection and semantic segmentation, which contains about 9963 images. The training dataset and the test dataset are 50% for each, where objects from 20 pre-defined categories are annotated with horizontal bounding boxes. Due to the small scale of images for instance segmentation, which leads to weak performance, we only provide the instance segmentation results training with MS COCO 2017. 
MS COCO: MS COCO <cit.> is a widely used benchmark for image captioning, object detection and instance segmentation, which contains more than 200,000 images across train, validation and test sets with over 500,000 annotated object instances from 80 categories. IIIT5k: IIIT5k <cit.> is one of the popular scene text spotting benchmark with character-level annotations, which contains 5,000 cropped word images collected from the Internet. The character category includes English letters and digits. There are 2,000 images for training and 3,000 images for testing. MTHv2: MTHv2 <cit.> is one of the popular OCR benchmark with character-level annotations. The character category includes simplified and traditional characters. It contains more than 3000 images of Chinese historical documents and more than 1 million Chinese characters. §.§ Evaluation Protocol In this paper, we used the same performance measure as the MS COCO 2018 Challenge <cit.> to report all of our results, including mean Average Precision (mAP) over different class labels for a specific value of IoU threshold in order to determine true positives and false positives. The main performance measure of object detection used in our experiments is shown by precision and [email protected]:0.95. We report the mAP value for IoU thresholds equal to 0.75, shown as AP75 in the tables. As for instance segmentation, the main performance measure used in our experiments are shown by AP and AR, which is averaging mAP and mAR across different value of IoU thresholds, i.e. IoU = { .5, .55,..., .95}. All of the object detection and instance segmentation baselines have also been evaluated using the test set of the MS COCO 2017 and PASCAL VOC 2007&2012. The results will be shown in following section. §.§ Experimental Results of Object Detection Training protocol. We used the original Darknet implementation of YOLO v7 released by <cit.>. As for baseline results (training using GIoU loss), we selected DarkNet-608 as backbone in all experiments and followed exactly their training protocol using the reported default parameters and the number of iteration on each benchmark. To train YOLO v7 using GIoU, DIoU, CIoU, EIoU and MPDIoU losses, we simply replace the bounding box regression IoU loss with ℒ_GIoU, ℒ_DIoU, ℒ_CIoU, ℒ_EIoU and ℒ_MPDIoU losses explained in <ref>. Following the original code's training protocol, we trained YOLOv7 <cit.> using each loss on both training and validation set of the dataset up to 150 epochs. We set the patience of early stop mechanism as 5 to reduce the training time and save the model with the best performance. Their performance using the best checkpoints for each loss has been evaluated on the test set of PASCAL VOC 2007&2012. The results have been reported in Table <ref>. §.§ Experimental Results of Character-level Scene Text Spotting Training protocol. We used the similar training protocol with the experiments of object detection. Following the original code's training protocol, we trained YOLOv7 <cit.> using each loss on both training and validation set of the dataset up to 30 epochs. Their performance using the best checkpoints for each loss has been evaluated using the test set of IIIT5K <cit.> and MTHv2 <cit.>. The results have been reported in Table <ref> and Table <ref>. 
Loss                    AP     AP75
ℒ_GIoU                  42.9   45
ℒ_DIoU                  42.2   42.3
  Relative improv. (%)  -1.6   -6
ℒ_CIoU                  44.1   46.6
  Relative improv. (%)  2.7    3.5
ℒ_EIoU                  41     42.6
  Relative improv. (%)  -4.4   -5.3
ℒ_MPDIoU                44.5   46.6
  Relative improv. (%)  3.7    3.5
Table: Comparison between the performance of YOLO v7 <cit.> trained using its own loss (ℒ_CIoU) as well as ℒ_GIoU, ℒ_DIoU, ℒ_EIoU and ℒ_MPDIoU losses. The results are reported on the test set of IIIT5K.
Loss                    AP     AP75
ℒ_GIoU                  52.1   55.3
ℒ_DIoU                  53.2   55.8
  Relative improv. (%)  2.1    0.9
ℒ_CIoU                  52.3   53.6
  Relative improv. (%)  0.3    -3.0
ℒ_EIoU                  53.2   54.7
  Relative improv. (%)  2.1    -1.0
ℒ_MPDIoU                54.5   58
  Relative improv. (%)  4.6    4.8
Table: Comparison between the performance of YOLO v7 <cit.> trained using its own loss (ℒ_CIoU) as well as ℒ_GIoU, ℒ_DIoU, ℒ_EIoU and ℒ_MPDIoU losses. The results are reported on the test set of MTHv2.
As we can see, the results in Tab. <ref> and <ref> show that training YOLO v7 using ℒ_MPDIoU as the regression loss can considerably improve its performance compared to the existing regression losses, including ℒ_GIoU, ℒ_DIoU, ℒ_CIoU and ℒ_EIoU. Our proposed ℒ_MPDIoU shows outstanding performance on character-level scene text spotting. §.§ Experimental Results of Instance Segmentation Training protocol. We used the latest PyTorch implementation of YOLACT <cit.>, released by the University of California. For baseline results (trained using ℒ_GIoU), we selected ResNet-50 as the backbone network architecture for YOLACT in all experiments and followed their training protocol using the reported default parameters and the number of iterations on each benchmark. To train YOLACT using GIoU, DIoU, CIoU, EIoU and MPDIoU losses, we replaced their ℓ_1-smooth loss in the final bounding box refinement stage with the ℒ_GIoU, ℒ_DIoU, ℒ_CIoU, ℒ_EIoU and ℒ_MPDIoU losses explained in <ref>. Similar to the YOLO v7 experiment, we replaced the original losses for bounding box regression with our proposed ℒ_MPDIoU. As Figure <ref>(c) shows, incorporating ℒ_GIoU, ℒ_DIoU, ℒ_CIoU and ℒ_EIoU as the regression loss can slightly improve the performance of YOLACT on MS COCO 2017. However, the improvement is more obvious when YOLACT is trained using ℒ_MPDIoU, as shown by the mask AP values visualized against different IoU thresholds, i.e. 0.5≤ IoU≤ 0.95. Similar to the above experiments, detection accuracy improves by using ℒ_MPDIoU as the regression loss over the existing loss functions. As Table <ref> shows, our proposed ℒ_MPDIoU performs better than existing loss functions on most of the metrics. However, the amount of improvement between different losses is smaller than in the previous experiments. This may be due to several factors. First, the detection anchor boxes in YOLACT <cit.> are denser than those in YOLO v7 <cit.>, resulting in less frequent scenarios where ℒ_MPDIoU has an advantage over ℒ_IoU, such as nonoverlapping bounding boxes. Second, the existing loss functions for bounding box regression have been improved during the past few years, which means the room for accuracy improvement is very limited, but there is still large room for efficiency improvement. We also compared the trends of the bbox loss and AP value during the training of YOLACT with different regression loss functions. As Figure <ref>(a),(b) shows, training with ℒ_MPDIoU performs better than most of the existing loss functions, e.g. ℒ_GIoU and ℒ_DIoU, achieving higher accuracy and faster convergence. 
Although the bbox loss and AP value show great fluctuation, our proposed ℒ_MPDIoU performs better at the end of training. In order to better reveal the performance of different loss functions for bounding box regression in instance segmentation, we provide some of the visualization results in Figures <ref> and <ref>. As we can see, the instance segmentation results based on ℒ_MPDIoU show less redundancy and higher accuracy than those based on ℒ_GIoU, ℒ_DIoU, ℒ_CIoU and ℒ_EIoU. § CONCLUSION In this paper, we introduced a new metric named MPDIoU based on minimum point distance for comparing any two arbitrary bounding boxes. We proved that this new metric has all of the appealing properties of existing IoU-based metrics while simplifying its calculation. It will be a better choice in all performance measures in 2D/3D vision tasks relying on the IoU metric. We also proposed a loss function called ℒ_MPDIoU for bounding box regression. By applying it to state-of-the-art object detection and instance segmentation algorithms, we improved their performance on popular object detection, scene text spotting and instance segmentation benchmarks such as PASCAL VOC, MS COCO, MTHv2 and IIIT5K, as evaluated with both the commonly used performance measures and our proposed MPDIoU. Since the optimal loss for a metric is the metric itself, our MPDIoU loss can be used as the optimal bounding box regression loss in all applications which require 2D bounding box regression. As for future work, we would like to conduct further experiments on downstream tasks based on object detection and instance segmentation, including scene text spotting, person re-identification and so on. With these experiments, we can further verify the generalization ability of our proposed loss functions.
http://arxiv.org/abs/2307.04651v1
20230710154937
Joint Salient Object Detection and Camouflaged Object Detection via Uncertainty-aware Learning
[ "Aixuan Li", "Jing Zhang", "Yunqiu Lv", "Tong Zhang", "Yiran Zhong", "Mingyi He", "Yuchao Dai" ]
cs.CV
[ "cs.CV" ]
Salient objects attract human attention and usually stand out clearly from their surroundings. In contrast, camouflaged objects share similar colors or textures with the environment. In this case, salient objects are typically non-camouflaged, and camouflaged objects are usually not salient. Due to this inherent contradictory attribute, we introduce an uncertainty-aware learning pipeline to extensively explore the contradictory information of salient object detection (SOD) and camouflaged object detection (COD) via data-level and task-wise contradiction modeling. We first exploit the dataset correlation of these two tasks and claim that the easy samples in the COD dataset can serve as hard samples for SOD to improve the robustness of the SOD model. Based on the assumption that these two models should lead to activation maps highlighting different regions of the same input image, we further introduce a contrastive module with a joint-task contrastive learning framework to explicitly model the contradictory attributes of these two tasks. Different from conventional intra-task contrastive learning for unsupervised representation learning, our contrastive module is designed to model the task-wise correlation, leading to cross-task representation learning. To better understand the two tasks from the perspective of uncertainty, we extensively investigate the uncertainty estimation techniques for modeling the main uncertainties of the two tasks, namely task uncertainty (for SOD) and data uncertainty (for COD), and aiming to effectively estimate the challenging regions for each task to achieve difficulty-aware learning. Experimental results on benchmark datasets demonstrate that our solution leads to both state-of-the-art performance and informative uncertainty estimation. Salient Object Detection, Camouflaged Object Detection, Task Uncertainty, Data Uncertainty, Difficulty-aware Learning Joint Salient Object Detection and Camouflaged Object Detection via Uncertainty-aware Learning Aixuan Li,  Jing Zhang*,  Yunqiu Lv,  Tong Zhang,  Yiran Zhong,  Mingyi He,  Yuchao Dai*  A. Li, Y. Lv, M. He and Y. Dai are with School of Electronics and Information, Northwestern Polytechnical University, Xi'an, China and Shaanxi Key Laboratory of Information Acquisition and Processing. J. Zhang is with School of Computing, the Australian National University, Canberra, Australia. T. Zhang is with IVRL, EPFL, Switzerland. Y. Zhong is with Shanghai AI Laboratory, Shanghai, China. A preliminary version of this work appeared at <cit.>. Our code and data are available at: <https://npucvr.github.io/UJSCOD/>. A. Li and J. Zhang contributed equally. Corresponding authors: Y. Dai ([email protected]) and J. Zhang ([email protected]). This research was supported in part by National Natural Science Foundation of China (62271410) and by the Fundamental Research Funds for the Central Universities. 
August 12, 2023

§ INTRODUCTION Visual salient object detection (SOD) aims to localize the salient object(s) of an image that attract human attention. Early work on saliency detection mainly relied on handcrafted features based on human visual priors <cit.> to detect high-contrast regions. Deep SOD models <cit.> use deep saliency features instead of handcrafted features to achieve effective global and local context modeling, leading to better performance. In general, existing SOD models <cit.> focus on two directions: 1) constructing effective saliency decoders <cit.> that facilitate high/low-level feature aggregation; and 2) designing appropriate loss functions <cit.> to achieve structure-preserving saliency detection. Unlike salient objects that immediately attract human attention, camouflaged objects evolve to blend into their surroundings, effectively avoiding detection by predators. The concept of camouflage has a long history <cit.>, and finds application in various domains including biology <cit.>, the military <cit.> and other fields <cit.>. From a biological evolution perspective, prey species have developed adaptive mechanisms to camouflage themselves within their environment <cit.>, often by mimicking the structure or texture of their surroundings. These camouflaged objects can only be distinguished by subtle differences. Consequently, camouflaged object detection (COD) models <cit.> are designed to identify and localize these "subtle" differences, enabling the comprehensive detection of camouflaged objects. To address the contradictory nature of SOD and COD, we propose a joint-task learning framework that explores the relationship between these two tasks. Our investigation reveals an inverse relationship between saliency and camouflage: a higher level of saliency typically indicates a lower level of camouflage, and vice versa. This oppositeness is clearly demonstrated in Fig. <ref>, where the object gradually transitions from camouflaged to salient as the contrast level increases. Hence, we explore the correlation of SOD and COD from both data-wise and task-wise perspectives. For data-wise correlation modeling, we re-interpret data augmentation by defining easy samples from COD as hard samples for SOD. By doing so, we achieve contradiction modeling from the dataset perspective. Fig. <ref> illustrates that typical camouflaged objects are never salient, but samples in the middle can be defined as hard samples for SOD. Thus, the proposed data interaction serves as a context-aware data augmentation method for SOD.
In addition, for COD, we find that performance is sensitive to the size of the camouflaged objects. To explain this, we crop the foreground camouflaged objects with different percentages of background, and show their corresponding prediction maps and uncertainty maps in Fig. <ref>. We observe that the cropping-based prediction uncertainty, i.e., the variance of multiple predictions, is relatively consistent with the region-level detectability of the camouflaged objects, validating that the performance of the model can be influenced by the complexity of the background. The foreground-cropping strategy can therefore serve as an effective data augmentation technique and a promising uncertainty generation strategy for COD, and it also simulates real-world scenarios in which camouflaged objects in the wild may appear in different environments. We have also investigated the foreground-cropping strategy for SOD and observed relatively stable predictions; thus, foreground cropping is only applied to the COD training dataset. Aside from data augmentation, we integrate contrastive learning into our framework to address task-wise contradiction modeling. Conventional contrastive learning typically constructs its positive/negative pairs based on semantic invariance. However, since both SOD and COD are class-agnostic tasks that rely on contrast-based object identification, we adopt a different approach for selecting positive/negative pairs based on region contrast. Specifically, given the same input image and its corresponding detected regions for the two tasks, we define region features with similar contrast as positive pairs, while features with different contrast serve as negative pairs. This contrastive module is designed to cater to class-agnostic tasks and effectively captures the contrast differences between the foreground objects of the two tasks. Additionally, we observe two types of uncertainty for SOD and COD, respectively, as depicted in Fig. <ref>. For SOD, uncertainty arises from the subjective nature of saliency <cit.> and from the majority-voting mechanism in the labeling procedure, which we define as task uncertainty. For COD, on the other hand, uncertainty arises from the difficulty of accurately annotating camouflaged objects due to their resemblance to the background, which we refer to as data uncertainty. To address these uncertainties, as shown in the fifth column of Fig. <ref>, we extensively investigate uncertainty estimation techniques to achieve two main benefits: (1) a self-explanatory model that is aware of its prediction, with an additional uncertainty map to explain the model's confidence; and (2) difficulty-aware learning, where the estimated uncertainty map serves as an indicator of pixel-wise difficulty, facilitating practical hard negative mining. A preliminary version of our work appeared at <cit.>. Compared with the previous version, we have made the following extensions: 1) We have fully analyzed the relationship between SOD and COD from both the dataset and task-connection perspectives to further establish their relationship. 2) To further investigate the cross-task correlations from the contrast perspective, we have introduced contrastive learning into our dual-task learning framework. 3) As an adversarial-training-based framework, we have investigated more training strategies for the discriminator, leading to more stable training. 4) We have conducted additional experiments to fully explain the task connections, the uncertainty estimation techniques, the experimental setting, and the hyper-parameters.
Our main contributions are summarized as: * We propose that salient object detection and camouflaged object detection are tasks with opposing attributes for the first time and introduce the first joint learning framework which utilizes category-agnostic contrastive module to model the contradictory attributes of two tasks. * Based on the transitional nature between saliency and camouflage, we introduce data interaction as data augmentation by defining simple COD samples as hard SOD samples to achieve context-aware data augmentation for SOD. * We analyze the main sources of uncertainty in SOD and COD annotations. In order to achieve reliable model predictions, we propose an uncertainty-aware learning module as an indicator of model prediction confidence. * Considering the inherent differences between COD and SOD tasks, we propose random sampling-based foreground-cropping as the COD data augmentation technique to simulate the real-world scenarios of camouflaged objects, which significantly improves the performance. § RELATED WORK Salient Object Detection. Existing deep saliency detection models <cit.> are mainly designed to achieve structure-preserving saliency predictions. <cit.> introduced an auxiliary edge detection branch to produce a saliency map with precise structure information. Wei  <cit.> presented structure-aware loss function to penalize prediction along object edges. Wu  <cit.> designed a cascade partial decoder to achieve accurate saliency detection with finer detailed information. Feng  <cit.> proposed a boundary-aware mechanism to improve the accuracy of network prediction on the boundary. There also exist salient object detection models that benefit from data of other sources. <cit.> integrated fixation prediction and salient object detection in a unified framework to explore the connections of the two related tasks. Zeng  <cit.> presented to jointly learn a weakly supervised semantic segmentation and fully supervised salient object detection model to benefit from both tasks. Zhang  <cit.> used two refinement structures, combining expanded field of perception and dilated convolution, to increase structural detail without consuming significant computational resources, which are used for salient object detection task on high-resolution images. Liu  <cit.> designed the stereoscopically attentive multi-scale module to ensure the effectiveness of the lightweight salient object detection model, which uses a soft attention mechanism in any channel at any position, ensuring the presence of multiple scales and reducing the number of parameters. Camouflaged Object Detection. The concept of camouflage is usually associated with context <cit.>, and the camouflaged object detection models are designed to discover the camouflaged object(s) hidden in the environment. Cuthill  <cit.> concluded that an effective camouflage includes two mechanisms: background pattern matching, where the color is similar to the environment, and disruptive coloration, which usually involves bright colors along edge, and makes the boundary between camouflaged objects and the background unnoticeable. Bhajantri  <cit.> utilized co-occurrence matrix to detect defective. Pike  <cit.> combined several salient visual features to quantify camouflage, which could simulate the visual mechanism of a predator. 
Le  <cit.> fused a classification network with a segmentation network and used the classification network to determine the likelihood that the image contains camouflaged objects to produce more accurate camouflaged object detection. In the field of deep learning, Fan  <cit.> proposed the first publicly available camouflage deep network with the largest camouflaged object training set. Mei  <cit.> incorporated the predation mechanism of organisms into the camouflaged object detection model and proposed a distraction mining strategy. Zhai  <cit.> introduced a joint learning model for COD and edge detection based on graph networks, where the two modules simultaneously mine complementary information. Lv  <cit.> presented a triple-task learning framework to simultaneously rank, localize and segment the camouflaged objects. Multi-task Learning. The basic assumption of multi-task learning is that there exists shared information among different tasks. In this way, multi-task learning is widely used to extract complementary information about positively related tasks. Kalogeiton  <cit.> jointly detected objects and actions in a video scene. Zhen  <cit.> designed a joint semantic segmentation and boundary detection framework by iteratively fusing feature maps generated for each task with a pyramid context module. In order to solve the problem of insufficient supervision in semantic alignment and object landmark detection, Jeon  <cit.> designed a joint loss function to impose constraints between tasks, and only reliable matched pairs were used to improve the model robustness with weak supervision. Joung  <cit.> solved the problem of object viewpoint changes in 3D object detection and viewpoint estimation with a cylindrical convolutional network, which obtains view-specific features with structural information at each viewpoint for both two tasks. Luo  <cit.> presented a multi-task framework for referring expression comprehension and segmentation. Uncertainty-aware Learning. Difficulty-aware (or uncertainty-aware, confidence-aware) learning aims to explore the contribution of hard samples, leading to hard-negative mining <cit.>, which has been widely used in medical image segmentation <cit.>, semantic segmentation <cit.>, and other fields <cit.>. To achieve difficulty-aware learning, one needs to estimate model confidence. To achieve this, Gal  <cit.> used Monte Carlo dropout (MC-Dropout) as a Bayesian approximation, where model uncertainty can be obtained with dropout neural networks. Deep Ensemble <cit.> is another popular type of uncertainty modeling technique, which usually involves generating an ensemble of predictions to obtain variance of predictions as the uncertainty estimation. With extra latent variable involved, the latent variable models <cit.> can also be used to achieve predictive distribution estimation, leading to uncertainty modeling. Following the uncertainty-aware learning pipeline, Lin  <cit.> introduced focal loss to balance the contribution of simple and hard samples for loss updating. Li  <cit.> presented a deep layer cascade model for semantic segmentation to pay more attention to the difficult parts. Nie  <cit.> adopted adversarial learning to generate confidence levels for predicting segmentation maps, and then used the generated confidence levels to achieve difficulty-aware learning. Xie  <cit.> applied difficulty-aware learning to an active learning task, where the difficult samples are claimed to be more informative. Contrastive learning. 
The initial goal of contrastive learning <cit.> is to achieve effective feature representation via self-supervised learning. The main strategy to achieve this is through constructing positive/negative pairs via data augmentation techniques <cit.>, where the basic principle is that similar concepts should have similar representation, thus stay close to each other in the embedding space. On the contrary, dissimilar concepts should stay apart in the embedding space. Different from augmentation based self-supervised contrastive learning, supervised contrastive learning builds the positive/negative pairs based on the given labels <cit.>. Especially for image segmentation, the widely used loss function is cross-entropy loss. However, it's well known that cross-entropy loss is not robust to labeling noise <cit.> and the produced category margins are not separable enough for better generalizing. Further, it penalizes pixel-wise predictions independently without modeling the cross-pixel relationships. Supervised contrastive learning <cit.> can fix the above issues with robust feature embedding exploration, following the similar training pipeline as self-supervised contrastive learning. § OUR METHOD We propose an uncertainty-aware joint learning framework via contrastive learning (see Fig. <ref>) to learn SOD and COD in a unified framework. Firstly, we explain that these two tasks are both contradictory and closely related (Sec. <ref>), and a joint learning pipeline can benefit each other with effective context modeling. Then, we present a Contrastive Module to explicitly model the contradicting attributes of these two tasks (Sec. <ref>), with a data-interaction technique to achieve context-level data augmentation. Further, considering uncertainty for both tasks, we introduce a difficulty-aware learning network (Sec. <ref>) to produce predictions with corresponding uncertainty maps, representing the model's awareness of the predictions. §.§ Tasks Analysis §.§.§ Tasks Relationship Exploration Model Perspective: At the task level, both SOD and COD are class-agnostic binary segmentation tasks, where a UNet <cit.> structure is usually designed to achieve mapping from input (image) space to output (segmentation) space. Differently, the foreground of SOD usually stands out highly from the context, while camouflaged instances are evolved to conceal in the environment. With the above understanding about both SOD and COD, we observe complementary information between the two tasks. Given the same image, we claim that due to the contradicting attributes of saliency and camouflage, the extracted features for each task should be different from each other, and the localized region of each task should be different as well. Dataset Perspective: At the dataset level, we observe some samples within the COD dataset can also be included in the SOD dataset (see Fig. <ref>), where the camouflaged region is consistent with the salient region. However, due to the similar appearance of foreground and background, these samples are easy for COD but challenging for SOD, making them effective for serving as hard samples for SOD to achieve hard negative mining. On the other side, most of the salient foreground in the SOD dataset has high contrast, and the camouflaged regions of the same image usually differ from the salient regions. In this way, samples in the SOD dataset usually cannot serve as simple samples for COD. 
Considering the dataset relationships of both tasks, we claim that easy samples in the COD dataset can effectively serve as hard samples for SOD to achieve context-level data augmentation. §.§.§ Inherent Uncertainty Subjective Nature of SOD: To reflect the human visual system, the initial saliency annotation of each image is obtained with multiple annotators <cit.>, and then majority voting is performed to generate the final ground truth saliency map that represents the majority salient regions,  the DUTS dataset <cit.>, ECSSD <cit.>, DUT <cit.> dataset are annotated by five annotators and HKU-IS <cit.> is annotated by three annotators. Further, to maintain consistency of the annotated data, some SOD datasets adopt the pre-selection strategy, where the images contain no common salient regions across all the annotators will be removed before the labeling process,  HKU-IS <cit.> dataset first evaluates the consistency of the annotation of the three annotators, and removes the images with greater disagreement. In the end, 4,447 images are obtained from an initial dataset with 7,320 images. We argue that both the majority voting process for final label generation and the pre-selection process for candidate dataset preparation introduce bias to both the dataset and the models trained on it. We explain this as the subjective nature of saliency. Labeling Uncertainty of COD: Camouflaged objects are evolved to have similar texture and color information to their surroundings <cit.>. Due to the similar appearance of camouflaged objects and their habitats, it's more difficult to accurately annotate the camouflaged instance than generic object segmentation, especially along instance boundaries. This poses severe and inevitable labeling noise while generating the camouflaged object detection dataset, which we define as labeling uncertainty of camouflage. §.§ Joint-task Contrastive Learning As a joint learning framework, we have two sets of training dataset for each individual task, namely a SOD dataset D_s={x_i^s,y_i^s}_i=1^N_s for SOD and a COD dataset D_c={x_i^c,y_i^c}_i=1^N_c for COD, where {x_i^s,y_i^s} is the SOD image/ground truth pair and {x_i^c,y_i^c} is the COD image/ground truth pair, and i indexes images, N_s and N_c are the size of training dataset for each task. Motivated by both the task contradiction and data sharing attributes of the two tasks, we introduce a contrastive learning based joint-task learning pipeline for joint salient object detection and camouflaged object detection. Firstly, we model the task contradiction (Section <ref>) with a contrastive module. Secondly, we select easy samples by weighted MAE from the COD training dataset (Section <ref>), serving as hard samples for SOD. §.§.§ Task Correlation Modeling via Contrastive Learning To model the task-wise correlation, we design a Contrastive Module in Fig. <ref> and introduce another set of images from the PASCAL VOC 2007 dataset <cit.> as connection modeling dataset D_p={x_i^p}_i=1^N_p, from which we extract both the camouflaged features and the salient features. With the three datasets (SOD dataset D_s, COD dataset D_c and connection modeling dataset D_p), our contradicting modeling framework uses the Feature Encoder module to extract both the camouflage feature and the saliency feature. The Prediction Decoder is then used to produce the prediction of each task. We further present a Contrastive Module to model the connection of the two tasks with the connection modeling dataset. 
Feature Encoder: The Feature Encoder takes the RGB image (x^s or x^c) as input to produce task-specific predictions and also serves as the feature extractor for the Contrastive Module. We design both the saliency encoder E_α_s and camouflage encoder E_α_c with the same backbone network,  the ResNet50 <cit.>, where α_s and α_c are the corresponding network parameter sets. The ResNet50 backbone network has four groups[We define feature maps of the same spatial size as same group.] of convolutional layers of channel size 256, 512, 1024 and 2048 respectively. We then define the output features of both encoders as F_α_s={f^s_k}_k=1^4 and F_α_c={f^c_k}_k=1^4, where k indexes the feature group. Prediction Decoder: As shown in Fig. <ref>, we design a shared decoder structure for our joint learning framework. To reduce the computational burden, also to achieve feature with larger receptive field, we first attach a multi-scale dilated convolution <cit.> of output channel size C=32 to each backbone feature to generate the new backbone features F'_α_s={f^cs_k}_k=1^4 and F'_α_c={f^cc_k}_k=1^4 for each specific task from F_α_s and F_α_c. Then, we adopt the residual attention based feature fusion strategy from <cit.> to achieve high/low level feature aggregation. Specifically, the lower-level features are fed to a residual connection module <cit.> with two 3× 3 convolutional layers, which is then added to the higher level feature. The sum of the high/low level feature is then fed to another residual connection block of the same structure as above to generate the fused feature. We perform the above feature fusion operation until we reach the lowest level feature,  f^cc_1 or f^cs_1. To generate the prediction for each task, we design a classifier module, which is composed of three cascaded convolutional layers, where the kernel size of the first two convolutional layers is 3× 3, and that of the last convolutional layer is 1× 1. After generating initial predictions, we used the holistic attention module <cit.> for feature optimization to obtain further improved predictions, as the final predictions. To simplify the explanation, we only use prediction after the holistic attention module as the decoder output. We then define prediction of each task as: G_β(F_α_s) for SOD and G_β(F_α_c) for COD, where β represents the parameter set of the shared prediction decoder. Contrastive Module: The Contrastive Module 𝐶𝑡𝑟𝑠_θ aims to enhance the identity of each task with the feature of other tasks as guidance. Specifically, it takes image x^p from the connection modeling dataset D_p={x_i^p}_i=1^N_p as input to model the feature correlation of SOD and COD, where θ is parameter set of the contrastive module. For image x^p from the connection modeling dataset, its saliency and camouflage features are F^p_α_s={f^p_sk}_k=1^4 and F^p_α_c={f^p_ck}_k=1^4, respectively. With the shared decoder G_β, the prediction map are G_β(F^p_α_s) indicating the saliency map and G_β(F^p_α_c) as the camouflage map. The contrastive module decides positive/negative pairs based on contrast information, where regions of similar contrast are defined as positive pairs and the different contrast regions are defined as negative pairs. The intuition behind this is that COD and SOD are both contrast based class-agnostic binary segmentation tasks, making conventional category-aware contrastive learning infeasible to work in this scenario. 
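Before moving on, the shared prediction decoder described earlier in this passage can be sketched as follows. This is a simplified PyTorch illustration only: it assumes the four backbone features have already been reduced to C=32 channels by the multi-scale dilated convolutions, it omits the holistic attention module, and the class and variable names are illustrative rather than taken from the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """Two 3x3 convolutions with a residual connection, as used for feature fusion."""
    def __init__(self, channels=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return F.relu(x + self.body(x))

class SharedDecoder(nn.Module):
    """Simplified shared decoder: fuse four backbone features (each reduced to 32 channels)
    from high level down to low level, then predict a one-channel map."""
    def __init__(self, channels=32):
        super().__init__()
        self.pre_fuse = nn.ModuleList([ResidualBlock(channels) for _ in range(3)])
        self.post_fuse = nn.ModuleList([ResidualBlock(channels) for _ in range(3)])
        self.classifier = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, 1, 1),
        )

    def forward(self, feats):
        # feats: [f1, f2, f3, f4], ordered low to high level, each with 32 channels
        x = feats[-1]
        for k in range(len(feats) - 2, -1, -1):
            low = self.pre_fuse[k](feats[k])                      # refine the lower-level feature
            x = F.interpolate(x, size=low.shape[-2:], mode='bilinear', align_corners=False)
            x = self.post_fuse[k](x + low)                        # fuse and refine again
        return self.classifier(x)
```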
Considering the goal of building the positive/negative pairs for contrastive learning is to learn representative features via exploring the inherent data correlation,  the category information, we argue the inherent correlation in our scenario is the contrast information. For SOD, the foreground shows higher contrast compared with the background, indicating the different contrast level. For COD, the contrast levels of foreground and background are similar. Thus given the same input image x^p, we decide positive/negative pairs based on the contrast information of the activated regions. In Fig. <ref>, we show the activation region (the processed predictions) of the same image from both the saliency encoder (first row) and camouflage encoder (second row). Specifically, given same image x^p, we compute its camouflage map and saliency map, and highlight the detected foreground region in red. Fig. <ref> shows that the two encoders focus on different regions of the image, where the saliency encoder pays more attention to the region that stands out from the context. The camouflage encoder focuses more on the hidden object with similar color or structure as the background, which is consistent with our assumption that these two tasks are contradicting with each other in general. Feature definition: Following the conventional practice of contrastive learning, our contrastive module Ctrs_θ maps image features,  F^p_α_s and F^p_α_c for the connection modeling data x^p, to the lower dimensional feature space via four spectral normed convolutional layers (SNconv) <cit.>, which is proven effective in preserving the geometric distance in the compressed space. We then compute saliency and camouflage features of the same image: F^p_sf =S(G_β(F^p_α_s),Ctrs_θ(F^p_α_s)), F^p_sb =S((1-G_β(F^p_α_s)),Ctrs_θ(F^p_α_s)), F^p_𝑐𝑓 =S(G_β(F^p_α_c),Ctrs_θ(F^p_α_c)), F^p_cb =S((1-G_β(F^p_α_c)),Ctrs_θ(F^p_α_c)), where S(·,·) computes the region feature via matrix multiplication <cit.>, where the feature maps,  Ctrs_θ(F^p_α_s), are scaled to be the same spatial size as the activation map,  G_β(F^p_α_s). F^p_sf∈ℝ^1× C and F^p_sb∈ℝ^1× C in Eq. (<ref>) represent the SOD foreground and background features, and F^p_𝑐𝑓 and F^p_cb are the COD foreground and background features, respectively. Positive/negative pair construction: According to our previous discussion, we define three sets of positive pairs based on contrast similarity: (1) The SOD background feature and COD background feature of the same image should be highly similar, indicating similar contrast information; (2) Due to the nature of the camouflaged object, the foreground and the background features of COD are similar as well as camouflaged object shares similar contrast with the background; (3) Similarly, the COD foreground feature and SOD background feature are also similar in contrast. On the other hand, the negative pair consists of SOD foreground feature and background feature. Contrastive loss: Given the positive/negative pairs, we follow <cit.> and define the contrastive loss as: ℒ_ctrs=-log∑_pos/∑_pos+exp(c(F^p_sf,F^p_sb)), where c(· ) measures the cosine similarity of the normalized vectors. ∑_pos represents the similarity of positive pairs, which is defined as: ∑_pos = exp(c(F^p_cf,F^p_cb))+exp(c(F^p_sb,F^p_cb))+exp(c(F^p_sb,F^p_cf)). §.§.§ Data Interaction In Section <ref>, we discuss the contradicting modeling strategy to model the two tasks from the model correlation perspective. 
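The contrast-based contrastive loss defined above can be sketched in a few lines of PyTorch. The region-feature operator S(·,·) is implemented here as masked average pooling, which is one plausible reading of the matrix-multiplication formulation; the positive/negative pair construction and the loss follow the equations above, and the prediction maps are assumed to be sigmoid outputs in [0, 1]. Names are illustrative.

```python
import torch
import torch.nn.functional as F

def region_feature(mask, feat, eps=1e-7):
    """Masked average pooling: (B,1,H,W) activation map x (B,C,h,w) feature -> (B,C).
    Stands in for the region-feature operator S(.,.) described in the text."""
    mask = F.interpolate(mask, size=feat.shape[-2:], mode='bilinear', align_corners=False)
    return (mask * feat).flatten(2).sum(-1) / (mask.flatten(2).sum(-1) + eps)

def contrastive_loss(sal_pred, cam_pred, sal_feat, cam_feat):
    """Positive pairs: (COD fg, COD bg), (SOD bg, COD bg), (SOD bg, COD fg);
    negative pair: (SOD fg, SOD bg)."""
    f_sf = region_feature(sal_pred, sal_feat)        # SOD foreground feature
    f_sb = region_feature(1 - sal_pred, sal_feat)    # SOD background feature
    f_cf = region_feature(cam_pred, cam_feat)        # COD foreground feature
    f_cb = region_feature(1 - cam_pred, cam_feat)    # COD background feature

    c = lambda a, b: F.cosine_similarity(a, b, dim=1)
    pos = torch.exp(c(f_cf, f_cb)) + torch.exp(c(f_sb, f_cb)) + torch.exp(c(f_sb, f_cf))
    neg = torch.exp(c(f_sf, f_sb))
    return (-torch.log(pos / (pos + neg))).mean()
```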
In this section, we further explore the task relationships from dataset perspective, and introduce data interaction as data augmentation. Sample selection principle: As shown in Fig. <ref>, saliency and camouflage are two properties that can transfer from each other. We find that there exist samples in the COD dataset that are both salient and camouflaged. We argue that those samples can be treated as hard samples for SOD to achieve robust learning. The main requirement is that the activation of those samples for SOD and COD should be similar. In other words, the predictions of the selected images for both tasks need to be similar. To select those samples from the COD dataset, we resort to weighted Mean Absolute Error (𝑤𝑀𝐴𝐸), and select samples in the COD dataset <cit.> which achieve the smallest 𝑤𝑀𝐴𝐸 by testing it using a trained SOD model. The weighted mean absolute error 𝑤𝑀𝐴𝐸 is defined as : 𝑤𝑀𝐴𝐸 = ∑_u=1^W∑_v=1^H |y^u, v - p^u,v |/∑_u=1^W∑_v=1^H y^u, v, where u,v is the pixel index, p represents the model prediction, y is the corresponding ground-truth, and W and H indicate size of y. Compared with mean absolute error, 𝑤𝑀𝐴𝐸 avoids the biased selection caused by different sizes of the foreground object(s). Data interaction: For the COD training dataset D_c ={x_i^c, y_i^c}_i=1^N_c and the trained SOD model M_θ_s, we obtain saliency prediction of the images in D_c as P^c_s=M_θ_s({x^c})={p^c_i}_i=1^N_c, where p_i^c is the saliency prediction of the COD training dataset. We assume that easy samples for COD can be treated as hard samples for SOD as shown in Fig. <ref>. Then we select M=403 samples D_c^M with the smallest 𝑤𝑀𝐴𝐸 in D_c via Eq. (<ref>), and add in our SOD training dataset <cit.> as a data augmentation technique. We show the selected samples in Fig. <ref>, which clearly illustrates the partially positive connection of the two tasks at the dataset level. §.§.§ Foreground Cropping as Data Augmentation: Considering the real-life scenarios, camouflaged objects can appear in different sizes, we introduce foreground cropping to achieve context-aware data augmentation. Note that we only perform foreground cropping for COD as the prediction of SOD is relatively stable with different sizes of the foreground object(s). Specifically, we first define the largest bounding box region that covers all the camouflaged objects as the compact cropping (CCrop). Then, we obtain the median cropping (MCrop) and loose cropping (LCrop) by randomly extending 0-80 and 0-150 pixels respectively randomly outward along the compact bounding box. We perform cropping on the raw images and resize the cropped image back to the pre-defined training image size for training. §.§ Uncertainty-aware Learning In Section <ref>, we discussed that both SOD and COD have inherent uncertainty, where the subjective nature of SOD poses serious model uncertainty <cit.> for SOD and difficulty of labeling introduces data uncertainty <cit.> for COD. As shown in Fig. <ref>, for the SOD dataset, the uncertainty comes from the ambiguity of saliency. For the COD dataset, the uncertainty mainly comes from the difficulty of labeling (the accuracy of y_i). To model the uncertainty of both tasks for reliable model generation, we introduce an uncertainty-aware adversarial training strategy to model the task-specific uncertainty in our joint learning framework. 
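To make the data-interaction and foreground-cropping augmentations described above concrete, the following NumPy sketch implements the weighted MAE, the selection of the M easiest COD samples, and a random foreground crop with a configurable padding range (0–80 pixels for MCrop, 0–150 for LCrop, 0 for CCrop). The resize back to the training resolution and the handling of empty masks are omitted, and the function names are illustrative.

```python
import numpy as np

def weighted_mae(pred, gt, eps=1e-7):
    """Weighted MAE: absolute error normalized by foreground size, so objects of
    different sizes are compared fairly (see the equation above)."""
    return np.abs(gt - pred).sum() / (gt.sum() + eps)

def select_easy_cod_samples(cod_preds, cod_gts, m=403):
    """Data interaction: rank COD images by the wMAE of a trained SOD model's
    predictions and keep the m easiest ones as hard SOD training samples."""
    scores = [weighted_mae(p, y) for p, y in zip(cod_preds, cod_gts)]
    return np.argsort(scores)[:m]

def foreground_crop(image, gt, rng, max_pad=80):
    """Random foreground cropping for COD; assumes gt contains at least one foreground pixel.
    rng is a numpy Generator, e.g. np.random.default_rng()."""
    ys, xs = np.where(gt > 0)
    h, w = gt.shape
    x1 = max(int(xs.min()) - rng.integers(0, max_pad + 1), 0)
    y1 = max(int(ys.min()) - rng.integers(0, max_pad + 1), 0)
    x2 = min(int(xs.max()) + rng.integers(0, max_pad + 1), w - 1)
    y2 = min(int(ys.max()) + rng.integers(0, max_pad + 1), h - 1)
    return image[y1:y2 + 1, x1:x2 + 1], gt[y1:y2 + 1, x1:x2 + 1]
```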
Adversarial learning framework: Following the conventional practice of generative adversarial network (GAN) <cit.>, we design a fully convolutional discriminator network to evaluate confidence of the predictions. The fully convolutional discriminator network D_γ consists of five SNconv layers <cit.> of kernel size 3× 3. As a conditional generation task, the fully convolutional discriminator takes the prediction/ground truth and the conditional variable,  the RGB image, as input, and produces a one-channel confidence map, where γ is the network parameter set. Note that we have batch normalization and leaky relu layers after the first four convolutional layers. D_γ aims to distinguish areas of uncertainty, which produce all-zero output with ground truth y as input, and produce |p-y| output with prediction map p as input. In our case, the fully convolutional discriminator aims to discover the hard (or uncertain) regions of the input image. We use the same structure of discriminators with parameter sets γ_s and γ_c for SOD and COD respectively, to identify the two types of challenging regions,  the subjective area for SOD, and the ambiguous regions for COD. Uncertainty-aware learning: For the prediction decoder module, we first have the task-specific loss function to learn each task. Specifically, we adopt the structure-aware loss function <cit.> for both SOD and COD, and define the loss function as: ℒ_str(p,y)=ω*ℒ_ce(p,y)+ℒ_iou^ω(p,y), where ω is the edge-aware weight, which is defined as ω=1+5* | (avg_pool(y)-y) |, y is task-specific ground truth, ℒ_ce is the binary cross-entropy loss, ℒ_iou^ω is the weighted boundary-IOU loss <cit.>. In this way, the task specific loss functions ℒ_str^s and ℒ_str^c for SOD and COD are defined as: ℒ_str^s=ℒ_str(G_β(F_α_s),y^s), ℒ_str^c=ℒ_str(G_β(F_α_c),y^c), To achieve adversarial learning, following <cit.>, we further introduce adversarial loss function to both SOD and COD predictors, which is defined as a consistency loss between discriminators prediction of prediction map and discriminators prediction of ground-truth, aiming to fool the discriminators that the prediction of SOD or COD is the actual ground truth. The adversarial loss functions (ℒ_adv^s and ℒ_adv^c) for SOD and COD, respectively, are defined as: ℒ_adv^s = ℒ_ce(D_γ_s(x^s,G_β(F_α_s)), D_γ_s(x^s,y^s)), ℒ_adv^c =ℒ_ce(D_γ_c(x^c,G_β(F_α_c)), D_γ_c(x^c,y^c)), Both the task specific loss in Eq. (<ref>), Eq. (<ref>) and the adversarial loss in Eq. (<ref>), Eq. (<ref>) are used to update the task-specific network (the generator). To update the discriminator, following the conventional GAN, we want it to distinguish areas of uncertainty clearly. Due to the inherent uncertainty that cannot be directly described, the uncertainty in inputting the ground truth cannot be accurately represented. However, because the correctly annotated regions are dominant in the complete dataset, we believe that the network can perceive the areas that are difficult to learn. The adversarial learning mechanism makes it difficult for the discriminator to distinguish between predicted and ground truth maps, and it can differentiate between noisy ground truth images and areas where RGB images cannot be aligned. Therefore, the output of the discriminator when inputting ground truth is defined as an all-zero map. Additionally, it produces a residual output for the prediction map. The outputs corresponding to different inputs of the discriminator are shown in Fig. <ref>. 
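A minimal PyTorch version of the structure-aware loss defined above is given below. The edge-aware weight follows ω = 1 + 5·|avg_pool(y) − y|; the 31×31 pooling window is a common choice in structure-preserving SOD losses and is an assumption here, since the excerpt does not specify the kernel size, and the weighted boundary-IoU term is sketched as an edge-weighted IoU. Inputs are assumed to be 4-D tensors (B, 1, H, W), with raw logits for the prediction.

```python
import torch
import torch.nn.functional as F

def structure_loss(logits, gt):
    """Structure-aware loss: edge-aware weighted BCE + edge-weighted IoU."""
    # Edge-aware weight: large near object boundaries
    weit = 1 + 5 * torch.abs(F.avg_pool2d(gt, kernel_size=31, stride=1, padding=15) - gt)

    # Weighted binary cross-entropy
    wbce = F.binary_cross_entropy_with_logits(logits, gt, reduction='none')
    wbce = (weit * wbce).sum(dim=(2, 3)) / weit.sum(dim=(2, 3))

    # Edge-weighted IoU
    pred = torch.sigmoid(logits)
    inter = ((pred * gt) * weit).sum(dim=(2, 3))
    union = ((pred + gt) * weit).sum(dim=(2, 3))
    wiou = 1 - (inter + 1) / (union - inter + 1)

    return (wbce + wiou).mean()
```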
Then, the discriminators (D_γ_s and D_γ_c) are updated via: ℒ_dis^s=ℒ_ce(D_γ_s(x^s,G_β(F_α_s)), |G_β(F_α_s)-y^s|), + ℒ_ce(D_γ_s(x^s,y^s),0), ℒ_dis^c=ℒ_ce(D_γ_c(x^c,G_β(F_α_c)), |G_β(F_α_c)-y^c|), + ℒ_ce(D_γ_c(x^c,y^c),0), Note that the two discriminators are updated separately. §.§ Objective Function As a joint confidence-aware adversarial learning framework, we further introduce the objective functions in detail for better understanding of our learning pipeline. Firstly, given a batch of images from the SOD training dataset x^s, we define the confidence-aware loss with contrastive modeling for the generator as: ℒ^s = ℒ_str^s +λ_adv*ℒ_adv^s+λ_ctrs*ℒ_ctrs, where ℒ_str^s is the task specific loss, defined in Eq. (<ref>), ℒ_avd^s is the adversarial loss in Eq. (<ref>), and ℒ_ctrs is the contrative loss in Eq. (<ref>). The parameters λ_adv=1,λ_ctrs=0.1 are used to balance the contribution of adversarial loss/contrastive loss for robust training. Similarly, for image batch x^c from the COD training dataset, the confidence-aware loss with contrastive modeling for the generator is defined as: ℒ^c = ℒ_str^c + λ_adv*ℒ_adv^c+λ_ctrs*ℒ_ctrs. The discriminators are optimized separately, where D_γ_s and D_γ_c are updated via Eq. (<ref>) and Eq. (<ref>). Note that, we only introduce contrastive learning to our joint-task learning framework after every 5 steps, which is proven more effective in practice. We show the training pipeline of our framework in Algorithm <ref> for better understanding of the implementation details. § EXPERIMENTAL RESULTS §.§ Setting: Dataset: For salient object detection, we train our model using the augmented DUTS training dataset <cit.> via data interaction (see Sec. <ref>), and testing on six other testing dataset, including the DUTS testing datasets, ECSSD <cit.>, DUT <cit.>, HKU-IS <cit.>, PASCAL-S dataset <cit.> and SOD dataset <cit.>. For camouflaged object detection, we train our model using the benchmark COD training dataset, which is a combination of COD10K training set <cit.> and CAMO training dataset <cit.>, and test on four camouflaged object detection testing sets, including the CAMO testing dataset <cit.>, CHAMELEON <cit.>, COD10K testing dataset <cit.> and NC4K dataset <cit.>. Evaluation Metrics: We use four evaluation metrics to evaluate the performance of the salient object detection models and the camouflaged object detection models, including Mean Absolute Error (ℳ), Mean F-measure (F_β), Mean E-measure <cit.> (E_ξ) and S-measure <cit.> (S_α). Mean Absolute Error (ℳ): measures the pixel-level pairwise errors between the prediction s and the ground-truth map y, which is defined as: ℳ = ∑_u=1^W∑_v=1^H |y^u, v - s^u,v |/W × H, where W and H indicate size of the ground-truth map. Mean F-measure (F_β): measures the precision and robustness of the model, which is defined as: F_β = TP/TP + 1/2(FP + FN), where TP denotes the number of true positives, FP shows the false positives and FN indicates the false negatives. Mean E-measure (E_ξ): measures the pixel-level matching and image-level statistics of the prediction <cit.>, which is defined as: E_ξ = 1/W × H∑_u=1^W∑_v=1^H ϕ_p(u, v), where ϕ_p(u, v) is the alignment matrix <cit.>, measuring the alignment of model prediction and the ground truth. S-measure (S_α): measures the regional and global structural similarities between the prediction and the ground-truth <cit.> as: S_α = α· S_o + (1 - α) · S_r. 
where S_o measures the global structural similarity, in terms of the consistencies in the foreground and background predictions and contrast between the foreground and background predictions, S_r measures the regional structure similarity, and α = 0.5 balances the two similarity measures following <cit.>. Training details: We train our model in Pytorch with ResNet50 <cit.> as backbone, as shown in Fig. <ref>. Both the encoders for saliency and camouflage branches are initialized with ResNet50 <cit.> trained on ImageNet, and other newly added layers are initialized by default. We resize all the images and ground truth to 352×352, and perform multi-scale training. The maximum step is 30000. The initial learning rate are 2e-5, 2e-5 and 1.2e-5 with Adam optimizer for the generator, discriminators and contrastive module respectively. The whole training takes 26 hours with batch size 22 on an NVIDIA GeForce RTX 3090 GPU. §.§ Performance Comparison Quantitative Analysis: We compare the performance of our SOD branch with SOTA SOD models as shown in Table <ref>. One observation from Table <ref> is that the structure-preserving strategy is widely used in the state-of-the-art saliency detection models, SCRN <cit.>, F^3Net <cit.>, ITSD <cit.>, and it can indeed improve model performance. Our method shows significant improvement in performance on four evaluation metrics compared to other SOD methods, except for the SOD dataset <cit.>. Due to the small size of the SOD dataset <cit.>(300 images), we believe that fluctuations in predictions are reasonable. We also compare the performance of our COD branch with SOTA COD models in Table <ref>. Except for COD10k<cit.>, where our method is slightly inferior to ZoomNet <cit.>, our method shows significant superiority over all other COD methods on all datasets. The reason for this may be that ZoomNet <cit.> was tested at resolution 384 × 384, while our method was tested at resolution 352 × 352, and resolution can affect the performance of COD. The consistent best performance of our camouflage model further illustrates the effectiveness of the joint learning framework. Qualitative Analysis: Further, we show predictions of ours and SOTA models of SOD method in Fig. <ref>, and COD method in Fig. <ref>, where the Uncertainty is obtained based on the prediction from the discriminator. Fig. <ref> shows that we produce both accurate prediction and reasonable uncertainty estimation, where the brighter areas of the uncertainty map indicate the less confident regions. It can be observed that our approach can better distinguish the boundaries between salient objects and the background. Fig. <ref> illustrates that our proposed joint learning approach and random-sampling based foreground cropping can better localize camouflaged targets. Further, the produced uncertainty map clearly represents model awareness of the prediction, leading to interpretable prediction for the downstream tasks. Run-time Analysis: For COD task, the inference time of our model is 53.9 ms per image. And for SOD task, the inference time of our model is 40.4 ms per image on an NVIDIA GeForce RTX 3090 GPU, which is comparable to the state-of-the-art model in terms of speed. §.§ Ablation Study We extensively analyze the proposed joint learning framework to explain the effectiveness of our strategies, and show the performance of our SOD and COD models in Table <ref> and Table <ref> respectively. Note that, unless otherwise stated, we do not perform multi-scale training for the related models. 
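As a quick reference for the quantitative comparisons discussed in this section, the sketch below computes two of the evaluation metrics defined earlier, MAE and the F-measure. The single-threshold binarization is a simplification — the mean F-measure is typically averaged over thresholds or computed with an adaptive threshold — so the helpers should be read as illustrative only.

```python
import numpy as np

def mae(pred, gt):
    """Mean Absolute Error between a prediction map and the ground truth (both in [0, 1])."""
    return np.abs(pred.astype(np.float64) - gt.astype(np.float64)).mean()

def f_measure(pred, gt, threshold=0.5, eps=1e-7):
    """F-measure at a single threshold: F = TP / (TP + 0.5 * (FP + FN))."""
    p = (pred >= threshold).astype(np.uint8)
    g = (gt >= 0.5).astype(np.uint8)
    tp = np.logical_and(p == 1, g == 1).sum()
    fp = np.logical_and(p == 1, g == 0).sum()
    fn = np.logical_and(p == 0, g == 1).sum()
    return tp / (tp + 0.5 * (fp + fn) + eps)
```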
Train each individual task: We use the same Feature encoder, Prediction decoder in Fig. <ref> to train the SOD model with original DUTS dataset and the COD model trained without random-sampling based foreground cropping following the same training related setting as in the Training details section, and show their performance as SSOD and SCOD, respectively. And we used the augmented DUTS dataset and foreground cropping COD training dataset to train the SOD model and the COD model separately, the results are shown as ASOD and ACOD. The comparable performance of SSOD and SCOD with their corresponding SOTA models proves the effectiveness of our prediction decoder. Further, the two data augmentation based models show clear performance improvement compared with training directly with the raw dataset, especially for the COD task, where foreground cropping is applied. We generated the augmented SOD dataset via data interaction (see Sec. <ref> and Fig. <ref>). Experimental results show a reasonable performance improvement, indicating that our proposed data augmentation techniques are effective in enriching the diversity of the training data. Joint training of SOD and COD: We train the Feature encoder and Prediction decoder within a joint learning pipeline to achieve simultaneous SOD and COD. The performance is reported as JSOD1 and JCOD1, respectively. For the COD task, there was a slight improvement in performance compared to the uni-task setting, indicating that under the joint learning framework, SOD can provide effective prediction optimization for COD. For SOD task, there was a slight decrease in performance, which we believe is due to the lack of consideration of the contradicting attribute between the two tasks. The subsequent experiments in the paper fully demonstrate this point. Joint training of SOD and COD with contrastive learning: We add the task connection constraint to the joint learning framework,  the contrastive module in particular, and show performance as JSOD2 and JCOD2 respectively. As discussed in Sec. <ref>, our contrastive module is designed to enhance the context information, and the final results show performance improvement for SOD. However, we observe deteriorated performance for COD when the contrastive module is applied. We have analyzed the predictions and find that the context enhancement strategy via contrastive learning can be a double-edged sword, which is effective for SOD but leads to performance deterioration for COD. Different from the conventional way of constructing positive/negative pairs based on augmentation or category information, SOD and COD are both class-agnostic tasks, and our positive/negative pairs are designed based on contrast information. Experimental results explain its effectiveness for high-contrast based foreground detection,  salient object detection, while minimal context difference between foreground and background of COD poses new challenges for applying contrastive learning effectively to achieve distinguishable foreground/background feature representation. Joint adversarial training of SOD and COD: Based on the joint learning framework (JSOD1 and JCOD1), we further introduce the adversarial learning pipeline, and show performance as JSOD3 and JCOD3. We observe relatively comparable performance of JSOD3 (JCOD3) to JSOD1 (JCOD1), explaining that the adversarial training pipeline will not sacrifice model deterministic performance. 
Note that with adversarial training, our model can output prediction uncertainty with single forward, serving as an auxiliary output to explain confidence of model output (see Uncertainty in Fig. <ref> and Fig. <ref>). The proposed joint framework: We report our final model performance with both the contrastive module and the adversarial learning solution as Ours. As a dual-task learning framework, Ours shows improved performance compared with models with each individual strategy,  contrastive learning and adversarial training. As discussed in Sec. <ref>, the former is introduced to model the task-wise correlation, and the latter is presented to model the inherent uncertainty within the two tasks. Although these two strategies show limitations for some specific datasets, we argue that as a class-agnostic task, both our contrast based positive/negative pair construction for contrastive learning and residual learning based discriminator learning within the adversarial training pipeline are effective in general, and more investigation will be conducted to further explore their contributions for the joint learning of the contradictory tasks. §.§ Framework Analysis As discussed in Sec. <ref>, SOD and COD are correlated from both task's point of view and the data's perspective. In this Section, we further analyze their relationships and the inherent uncertainty modeling techniques for SOD and COD. §.§.§ Data interaction analysis SOD and COD are both context based tasks (see Fig. <ref>), and can be transformed into each other, where the former represents the attribute of object(s) with high-contrast and the latter is related to concealment. Considering the opposite object attribute of saliency and camouflage, we introduce a simple data selection strategy as data augmentation for saliency detection. Based on the nature of the two task, we explicitly connected the SOD and COD datasets. Experimental results show that incorporating an additional 3.8% of data, specifically 403 out of 10,553 images, led to performance improvement for SOD, comparing ASOD and SSOD in Tabel <ref>. §.§.§ Task interaction analysis In our preliminary version <cit.>, we used the entire PASCAL VOC 2007 as a bridge dataset to model the contradictory properties of SOD and COD via similarity modeling. Here, we apply contrative learning based on contrast information instead, which is proven effective for SOD, comparing JSOD2 and JSOD1 in Tabel <ref>. As contrastive learning is sensitive to the positive/negative pools, and PASCAL VOC 2007 dataset contains samples that pose challenges for either SOD or COD to decide the foreground, we thus selected a portion of the images from the bridge dataset as the updated PASCAL dataset. Specifically, we tested the PASCAL VOC 2007 dataset using the trained SOD and COD models to obtain the weighted MAE of the SOD and COD prediction maps. Then, we selected 200 images from the PASCAL VOC 2007 dataset with the smallest weighted MAE as the new bridge dataset for training the contradicting modeling module. The contradicting module is trained every 5 steps of the other modules to avoid involving feature conflicting for COD. Although our contrastive learning solution is proven effective for SOD, the final performance still shows deteriorated performance of COD, comparing JCOD2 and JCOD1 in Tabel <ref>. The main reason is that the contrastive learning module tries to push the feature spaces of foreground and background to be close as Eq. 
(<ref>), while the main task of COD is to distinguish the foreground from the background. The contradicting objectives pose challenges for the COD task to converge. §.§.§ Discriminator analysis Considering that the uncertainty regions of both tasks are associated with the image, we concatenate the prediction/ground truth with the image, and feed it to the discriminator. We define the portions of a network's incorrect predictions as areas that are difficult to learn following <cit.>. In the early stages of training, the network fits the correctly annotated regions, and in later training, the predicted maps gradually approach the ground truth maps with the uncertainty/noise annotations <cit.>. When introducing image information, the areas that are difficult to predict or annotated incorrectly (inherent uncertainty) can be gradually discovered under the guidance of RGB image. §.§ Hyper-parameters analysis In our joint learning framework, several hyper-parameters affect our final performance, including the maximum iterations, the base learning rates, weights for the contrastive learning loss function and the adversarial loss function. We found that although the training dataset size of SOD is three times of the COD dataset, the COD images are more complex than the SOD images. Therefore, we kept the same numbers of iterations for SOD and COD tasks. Due to the overlapping regions of saliency and camouflage, for the contrastive learning module, we trained it every 5 steps to avoid involving too much conflicting to COD. With the same goal, we set the weight of the contrastive loss to 0.1. For the Confidence estimation module, we observed that excessively large adversarial training loss may lead to over-fitting on noise. Our main goal of using the adversarial learning is to provide reasonable uncertainty estimation. In this case, we define the ground truth output of the discriminator as the residual between the main network prediction and the corresponding ground truth, and set the weight of Eq. (<ref>) and Eq. (<ref>) as 1.0, to achieve trade-off between model performance and effective uncertainty estimation. § CONCLUSION In this paper, we proposed the first joint salient object detection and camouflaged object detection framework to explore the contradicting nature of these two tasks. Firstly, we conducted an in-depth analysis on the intrinsic relationship of the two tasks. Based on it, we designed a contrastive module to model the task-wise correlation, and a data interaction strategy to achieve context-aware data augmentation for SOD. Secondly, considering that camouflage is a local attribute, we proposed random sampling-based foreground-cropping as the COD data augmentation technique. Finally, uncertainty-aware learning is explored to produce uncertainty estimation with single forward. Experimental results across different datasets prove the effectiveness of our proposed joint learning framework. We observed that although contrast-based task-wise contrastive learning is proven effective for SOD, it damages the performance of COD due to the contradicting attribute of these two tasks. More investigation will be conducted to further explore informative feature representation learning via contrastive learning for class-agnostic tasks. ieeetr
http://arxiv.org/abs/2307.04482v1
20230710110437
"Nonlinear and nonreciprocal transport effects in untwinned thin films of ferromagnetic Weyl metal S(...TRUNCATED)
["Uddipta Kar","Elisha Cho-Hao Lu","Akhilesh Kr. Singh","P. V. Sreenivasa Reddy","Youngjoon Han","Xi(...TRUNCATED)
cond-mat.mes-hall
[ "cond-mat.mes-hall", "cond-mat.mtrl-sci" ]
"\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nInstitute of Physics, Academia Sinica, Na(...TRUNCATED)
http://arxiv.org/abs/2307.04035v1
20230708191401
A novel framework for Shot number minimization in Quantum Variational Algorithms
[ "Seyed Sajad Kahani", "Amin Nobakhti" ]
quant-ph
[ "quant-ph" ]
"\n\nHigh Fidelity 3D Hand Shape Reconstruction\n via Scalable Graph Frequency Decomposition\n \n(...TRUNCATED)
http://arxiv.org/abs/2307.05283v1
20230711142432
On the Identity and Group Problems for Complex Heisenberg Matrices
[ "Paul C. Bell", "Reino Niskanen", "Igor Potapov", "Pavel Semukhin" ]
cs.DM
[ "cs.DM", "math.CO" ]
"\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\nproof*[1]\n \n \n \n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\(...TRUNCATED)
http://arxiv.org/abs/2307.04483v1
20230710111105
Towards Hypersemitoric Systems
[ "Tobias Våge Henriksen", "Sonja Hohloch", "Nikolay N. Martynchuk" ]
math.SG
[ "math.SG", "37J35 53D20 70H06" ]
"\n\n\nInvertible Low-Dimensional Modelling of X-ray Absorption Spectra for Potential Applications i(...TRUNCATED)
http://arxiv.org/abs/2307.10213v1
20230714133328
Mitigating Bias in Conversations: A Hate Speech Classifier and Debiaser with Prompts
[ "Shaina Raza", "Chen Ding", "Deval Pandya" ]
cs.CL
[ "cs.CL", "cs.AI" ]
"\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n Vector Institute of Artificial Intelligence\n Toronto\n ON\n Can(...TRUNCATED)

Dataset Card for "arxiv_july_week2_2023"

More Information needed
